Artificial intelligence has come a long way in recent years, and machine-learning algorithms are proving skillful at tasks like playing poker and lip-reading, and, unfortunately, at absorbing human biases.
In a new study, researchers adapted a word-pairing test used to gauge bias in humans to do the same for GloVe, a widely used word-embedding system. The upshot? Every single human bias they tested showed up, they report in the journal Science. "It was astonishing to see all the results that were embedded in these models," one researcher tells Live Science.
They found examples of ageism, sexism, racism, and more—everything from associating men more closely with math and science and women with arts to seeing European-American names as more pleasant than African-American ones.
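The test the researchers adapted works by comparing how close a target word's vector sits to two sets of attribute words. Below is a minimal, hypothetical sketch of that idea: the tiny hand-made vectors and the `association` helper are illustrative stand-ins, not real GloVe embeddings or the study's actual code.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, attrs_a, attrs_b):
    """Mean similarity of word w to attribute set A minus set B.
    A positive score means w leans toward A; negative, toward B."""
    return (np.mean([cosine(w, a) for a in attrs_a])
            - np.mean([cosine(w, b) for b in attrs_b]))

# Toy 3-d vectors standing in for real, learned embeddings (illustrative only).
vec = {
    "math":     np.array([0.9, 0.1, 0.0]),
    "arts":     np.array([0.1, 0.9, 0.0]),
    "male":     np.array([0.8, 0.2, 0.1]),
    "female":   np.array([0.2, 0.8, 0.1]),
}

# In this toy space, "math" associates more with "male" than "female",
# mirroring the kind of pattern the study measured in real embeddings.
print(association(vec["math"], [vec["male"]], [vec["female"]]) > 0)
```

In the real study, the same comparison is run over many target and attribute words drawn from the original human bias tests, and the size of the gap indicates how strong the learned association is.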
"We have learned something about how we are passing on prejudices that we didn't even know we were doing," says another researcher. Just as Twitter users taught Microsoft's chatbot Tay to unleash neo-Nazi rants on social media last year, so, too, does this oft-used algorithm learn from our own behaviors, regardless of whether they are good or bad.
(Elon Musk calls AI our "biggest existential threat.")
This article originally appeared on Newser: Machine Learning Has a Weakness: Humans