Much has been made in recent months about the potential benefits of artificial intelligence for humanity – stopping terrorist content before it's seen, letting people experience events they otherwise never could or performing menial tasks, such as tagging photos.
For all that promise, artificial intelligence also has a darker side. Luminaries such as Elon Musk and the late Stephen Hawking have warned of a "robot apocalypse" and of AI one day sparking World War III in the not-too-distant future.
But what if the real thing to fear about artificial intelligence isn't robots or a sentient AI system similar to Skynet in the "Terminator" movies – but humans themselves?
In what is sometimes known as adversarial AI, hackers can exploit artificial intelligence in everyday situations – getting money out of an ATM, sneaking criminals across borders or taking over your smart speaker and wreaking havoc with your bank account.
"As with any new technology, hackers are going to look for a way to exploit it for their gain," Koos Lodewijkx, VP and CTO of Security Operations and Response, IBM Security said in an email to Fox News.
Lodewijkx added: "Our research sees cybercriminals being able to fool voice systems to make purchases on their behalf or even attack autonomous vehicles through stickers on stop signs to confuse them. A whole new field of research at universities and tech companies is looking at ways to defend AI systems from these sorts of attacks before they become a reality."
Through research with universities and other tech companies, IBM has spotted some of the more common ways hackers can corrupt AI systems and turn them against humanity. For example, a sticker covering part of a stop sign could throw off an autonomous vehicle and cause it not to stop, whereas a human driver would still recognize the sign.
Other examples include streaming a song on a smart speaker that has been subtly tweaked to hide audio commands the speaker will obey – commands that could clean out a bank account. Another possibility involves fooling facial recognition algorithms with special glasses that cause the system to misidentify the hacker wearing them, an attack researchers at Carnegie Mellon University actually demonstrated to show the downside of AI systems.
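The attacks above all rest on the same idea: nudging an input just enough, in just the right direction, to flip a model's decision. A minimal sketch of that idea – using a hypothetical toy linear classifier, not IBM's or Carnegie Mellon's actual research code – might look like this:

```python
import numpy as np

# Toy "image classifier": score = w . x + b; a positive score means "stop sign".
# Everything here (the weights, the labels, the threshold) is illustrative.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0

def classify(x):
    return "stop sign" if x @ w + b > 0 else "not a stop sign"

# An input the model confidently labels a stop sign (it points along w).
x = w / np.linalg.norm(w)

# Fast-gradient-sign-style attack: step each feature against the gradient
# of the score. For a linear model, that gradient with respect to x is w.
# The budget is chosen so the score is guaranteed to cross zero.
epsilon = 1.1 * np.linalg.norm(w) / np.abs(w).sum()
x_adv = x - epsilon * np.sign(w)

print(classify(x))      # the clean input is recognized
print(classify(x_adv))  # the perturbed input is not
```

The perturbation is a bounded per-feature nudge – the digital analogue of a well-placed sticker: small to a human, decisive to the model.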
One notable example has already happened, even if the problem was caught rather quickly. In 2016, Microsoft built a Twitter bot called Tay using artificial intelligence, designed to mimic and converse with users in real time. Users took advantage of its machine learning capabilities to corrupt it, however, and it caused significant controversy by posting racist and offensive tweets. Tay was shut down just 16 hours after launch.
As hackers look for new ways to exploit AI systems, researchers are looking at ways to defend them before they are even attacked. IBM recently released an open-source software library to help secure AI systems. While that framework may help in the future, there is a more immediate question: given the prevalence of artificial intelligence in everyday life, what can the average person do to protect themselves now?
Lodewijkx suggests taking the same basic precautions we already take with our devices: keeping software up to date and watching for odd behavior.
"If your computer started acting sluggish and emailing random friends, you would probably suspect a computer virus and seek help," Lodewijkx said. "Most end-users won't know and don't care what's under the hood until something goes wrong, so preventative maintenance and security awareness are still the best advice."
Follow Chris Ciaccia on Twitter @Chris_Ciaccia