Social media can be an unforgiving place.
When country star Carrie Underwood, for example, posted on Instagram about a recent injury that required 40-50 stitches to her face, some followers accused her of exaggerating and using the incident as a publicity stunt.
With trolls everywhere, what if there were a way for users to block harmful attacks?
At a recent Congressional hearing, Mark Zuckerberg said AI could possibly help eradicate hate speech and online abuse, but that the technology was still 5-10 years away and in development. The problem, he said, is too “nuanced” -- especially when you consider variations in language.
Experts argue that AI could help identify abuse and hate speech, but the issue is one of focus, resources, and motivation. The technology is not there yet, but it could progress faster.
Bob Pearson, the co-author of the book Countering Hate and the chief innovation officer of W2O Group, tells Fox News that AI is able to identify hate speech today. It’s a matter of training the AI algorithms to understand bias in language and what humans consider to be harmful.
“All human beings follow patterns online,” Pearson said. “You can see what language, content, channel and people matter to them. You can see which words trigger information seeking, which language is most associated with hate topics or sites, which people are the most important influencers and you can see a range of behavioral characteristics.”
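The kind of language-pattern detection Pearson describes can be sketched as a toy text classifier that learns which words are associated with abusive versus benign messages. This is an illustrative example only -- the training phrases, labels, and approach below are invented for demonstration and bear no relation to any platform's actual moderation system.

```python
import math
from collections import Counter, defaultdict

# Toy labeled examples -- invented for illustration, not real moderation data.
TRAINING_DATA = [
    ("you people are worthless and should leave", "abusive"),
    ("get out of here nobody wants your kind", "abusive"),
    ("everyone like you is disgusting trash", "abusive"),
    ("loved the new album congratulations", "benign"),
    ("hope your recovery goes well get better soon", "benign"),
    ("great show last night thanks for sharing", "benign"),
]

def train(examples):
    """Count word frequencies per label for a naive Bayes model."""
    word_counts = defaultdict(Counter)   # label -> word -> count
    label_counts = Counter()             # label -> number of examples
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the most likely label, using log probabilities."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Prior: fraction of training examples carrying this label.
        score = math.log(label_counts[label] / total)
        n_words = sum(word_counts[label].values())
        for word in text.lower().split():
            # Add-one smoothing so unseen words don't zero out a label.
            count = word_counts[label].get(word, 0)
            score += math.log((count + 1) / (n_words + len(vocab) + 1))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAINING_DATA)
print(classify("your kind is worthless get out", word_counts, label_counts))
print(classify("thanks for the great show", word_counts, label_counts))
```

Real systems are far more sophisticated -- as the experts here note, the hard part is not counting words but capturing context, sarcasm, and group-specific meaning, which is exactly where simple models like this one fail.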
The problem, he says, is that Facebook and other companies have not made the battle against hate speech a major priority; they are distracted by other issues.
“A media platform can identify bias, hate and extremist speech just as easily as it can identify your needs for advertisers. It is just a matter of focus,” he said.
Consumer analyst Rob Enderle says Facebook is too focused on developing AI technology in house, but could partner with a company like IBM to solve the crisis.
“This is a function of investment as much as it is development,” Enderle said. “For instance, you could take IBM Watson today and likely have it able to address this problem within 12 months if you were willing to spend what was needed to train the system. But it doesn’t look like Facebook is willing to use something off the shelf, adding years to the process.”
Dr. Danny Paskin, an associate professor in the Department of Journalism and Mass Communication at California State University, Long Beach, told Fox News that spotting hate speech, troll-like behavior, and online abuse in an automated way is extremely difficult. A term of endearment to one group could be a mark of deep shame and abuse to another.
Yet, he says there’s a serious drawback for companies like Facebook, Twitter, and Instagram taking on this issue -- namely, that it doesn’t exactly help them grow their user base. “Facebook and Twitter are still about having more and more users, and making users active on the site,” he said. “If you start blocking too many people, and scaring people away, that's counterintuitive to what they do. So they walk that fine line between doing too little and doing too much.”
Curiously, he says the turning point might come when enough users cry foul.
Maybe the #deletefacebook campaign could work after all.