Elon Musk-backed machine deemed too dangerous to release

A group of scientists backed by Elon Musk has designed a predictive text machine that is so eerily good its creators are worried about releasing it to the world.

Designed by OpenAI, a non-profit artificial intelligence research organization backed by the eccentric billionaire, the machine can take a piece of writing and spit out many more paragraphs in the same vein.

Called GPT-2, it was trained on a dataset of 8 million web pages and is so good at mimicking the style and tone of a piece of writing that it has been described as a text version of deepfakes, the emerging AI-based video technology that can realistically replicate celebrities or world leaders in video footage.

You're probably familiar with this sort of thing. Google's Gmail service has predictive text technology and even offers up a selection of pre-written responses to the emails you receive. But this is on a whole other level.

OpenAI produced a paper demonstrating the prowess of its predictive text software, including examples of its work.

In one example, the machine was fed these two sentences:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

And here’s how it continued the story:

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Not bad, right?

It goes on like that for six more paragraphs, complete with fake quotes and a perfectly convincing narrative.

The software has difficulty with “highly technical or esoteric types of content” but otherwise is able to produce “reasonable samples” just over 50 percent of the time, researchers said.

A couple of journalists from The Guardian were given the chance to take the technology for a spin and were suitably concerned by its power.

“AI can write just like me. Brace for the robot apocalypse,” reads the headline by journalist Hannah Jane Parkinson.

The OpenAI computer was fed an article of hers and “wrote an extension of it that was a perfect act of journalistic ventriloquism”, she said.

“This AI has the potential to absolutely devastate. It could exacerbate the already massive problem of fake news and extend the sort of abuse and bigotry that bots have already become capable of doling out on social media,” she warned.

For decades, machines have struggled with the subtleties and nuance of human language and with producing realistic imitations of it. Research like this shows real progress is being made on that front.

But can we handle it?

Elon Musk has consistently said artificial intelligence is one of the major potential threats facing the future of humanity. The OpenAI organization has the goal of developing the technology in a safe and responsible way.

The organization usually releases its research in full but has withheld much of its latest project out of fear it could be abused or misused, a real risk given the increased weaponization of “fake news” on social media.

OpenAI policy director Jack Clark said the decision not to make GPT-2 publicly available was not about hyping the research.

“The main thing for us here is not enabling malicious or abusive uses of the technology. We’ve published a research paper and small model. Very tough balancing act for us,” he wrote on Twitter.

The organization said it was concerned such software could be used to generate misleading news articles, impersonate others online, automate abusive or fake content on social media and automate email scams.

Ultimately, it will likely be a question of whether the good outweighs the bad.

“These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations,” the paper’s abstract says.

And on that note, it’s probably time I looked for a new job.

This story originally appeared on news.com.au.