If you look up at the sky at night, you might spot one of the more than 1,300 active satellites orbiting the Earth. (There are also another 3,000 up there that no longer work.)
If you believe they are all controlled by human operators, think again.
Artificial intelligence is becoming more prevalent in our daily lives, from everyday tasks such as analyzing stock trading data to helping power and maneuver satellites thousands of miles from Earth.
Vehicles can now sense a lane marking and steer you back into position automatically. Amazon’s Alexa voice assistant can not only order a USB charger for your phone but tell you the score of last night’s Golden State Warriors game.
As various neural networks continue to get smarter, human control over countless tasks is likely to lessen, say experts.
Robotics and machine learning experts argue that our current state of “weak” artificial intelligence (something like Siri on your iPhone) is only the beginning. Someday, a “general intelligence” (a computer that could write the next Harry Potter book) might spring to life, one that has a “master algorithm” at its disposal.
One expert says it’s possible in the next 30-50 years, suggesting something that looks and functions like Skynet from the "Terminator" movies.
This raises the question: What if an early version of Skynet has already been created, and what if it was built by Mark Zuckerberg?
The idea sounds far-fetched. But Zuckerberg is working on a personal project to create an AI for his home that dims the lights and closes the garage door. In a Facebook post, he explained what he intends to do. One user jokingly commented: “Just make sure you don’t accidentally create Skynet.”
That’s a completely absurd notion. Or is it? Could an AI exist today that’s smarter than us?
“It’s already happening and it’s by design,” says Dom Price, a futurist and the Head of R&D at software giant Atlassian. “But it will be a long time before AI and robots can replicate the complexity of trust and one-to-one human interaction, especially when it comes to teamwork.”
Price says 51% of the U.S. population trust humans over machines, while only 27% fully trust an AI. It will be many years before humanity gives up control. “We are creating a bit of a Skynet right now, but we have the means to control and ensure that it won't rise up against us,” he says.
A few experts feel a full-fledged Skynet is closer than many people think.
Former CIA officer Henderson Cooper tells Fox News we’ve been creating advanced cyber technology for “decades” and that while it has aided society in many ways, there is also a dark side.
“There can and certainly will come a time when our tools are smarter than our users,” Cooper says. “When machines become our key tools, and the machines surpass our ability to think and react, we begin to approach a point where we could in fact lose control. When those machines begin to act autonomously from the humans, then we are in an area of the unknown.”
Elliot Schrock, who runs an AI startup called Thryv, tells Fox News that our current “weak” machine learning is focused on making lives better. A Roomba can vacuum autonomously, a chatbot can book your travel for you in Facebook Messenger using only a text interface.
“If you asked IBM Watson to drive your car or Google’s DeepMind to write a report for you, they would fail miserably,” he says. At the same time, Schrock mentions how there is a lot of work going into combining our own biology with a robot.
Elon Musk recently announced a startup called Neuralink that will connect our brains to a computer. That sounds a bit like Skynet, right?
Some experts, like futurist Jared Ficklin, have suggested robots could abuse their power. For example, they might only recommend unhealthy food. Decision theorist Eliezer Yudkowsky has suggested that robots could slowly poison us to death.
We might not think of a simple AI in Mark Zuckerberg’s home as dangerous, but there could easily be something like an early version of Skynet that locks the doors and turns up the heat to annoy us, or worse. The bot might choose self-preservation over assistance.
And yet, none of this should cause any alarm, say most experts.
“I think we can build AI so it works for us and helps us," says Zuckerberg in a reply to a comment about his AI. "Some people fear-monger about how AI is a huge danger, but that seems far-fetched to me and much less likely than disasters due to widespread disease [and] violence."
Maybe he’s right about that. We can only hope.