Despite Elon Musk's continued warnings, evil machines won’t take over the world, two experts said this week.
Artificial intelligence (AI) could be destined to turn against humanity, Musk has argued. The tech executive, who runs high-profile companies such as Tesla and SpaceX, is also a co-founder of OpenAI, a non-profit AI research company dedicated to “discovering and enacting the path to safe artificial general intelligence.”
However, other executives in Silicon Valley have taken issue with Musk's comments, including the leader of Google's artificial intelligence efforts.
“I’m definitely not worried about the AI apocalypse,” said Google's John Giannandrea, speaking at TechCrunch Disrupt SF. “I just object to the hype and soundbites that some people are making,” he added.
Though Giannandrea didn’t mention Musk by name, publications including TechCrunch and Bloomberg took him to be referring to the Tesla chief. His comments echo those of other tech executives, such as Facebook's Mark Zuckerberg, who also disagrees with Musk.
Reached by Fox News for this story, Musk's OpenAI had no comment.
Musk has been direct about the topic previously. In August, he tweeted, “If you're not concerned about AI safety, you should be. Vastly more risk than North Korea.”
Earlier this month, Musk again sounded the alarm, saying that war “may be initiated not by the country leaders, but one of the AI's, if it decides that a prepemptive [sic] strike is most probable path to victory.”
Others have also come out against Musk's comments, including several in academia.
In a Wired article entitled, “Elon Musk is wrong. The AI singularity won't kill us all,” Toby Walsh, a professor of artificial intelligence at the University of New South Wales, wrote: “It seems you can’t open a newspaper without Elon Musk predicting that artificial intelligence (AI) needs regulating – before it starts World War III.”
Walsh disputes the idea of the so-called technological singularity, which he describes as “a tipping point, when machine intelligence snowballs away…[and machines] use their superior intelligence to take over the planet.”
“AlphaGo isn’t going to wake up tomorrow and decide humans are useless at Go…And it is certainly not going to wake up and decide to take over the planet. It’s not in its code,” he wrote.
AlphaGo is an AI computer program that plays the board game Go and was developed by Google’s DeepMind. In 2016, AlphaGo beat Lee Sedol, a Go world champion.
Risks still exist
However, Walsh is worried about so-called stupid AI. “We do have to worry, as Musk has recently warned in an open letter to the UN, about lethal autonomous weapons,” he told Fox News in an email. “Even with stupid AI, these will be weapons of mass destruction that will fall into the hands of terrorists and be a terrible, terrifying and unnecessary escalation to how we fight war.”
And despite Giannandrea's comments, Google isn’t completely pushing back against the danger of AI. “We’re doing research early on to understand and address potential risks in case they become issues later, precisely because we think AI has such positive potential overall. We want to ensure it’s useful and helpful for everyone,” a Google spokesperson told Fox News in an email.
OpenAI cautions on its website that when human-level AI comes within reach, "it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest."