How Twitter could use artificial intelligence to cut online harassment

“Why don’t you jump off a bridge and kill yourself? I bet you’d make a big splash.”

That was an actual tweet someone sent me a few years ago.

I still remember it, because it seemed vicious -- and how did this person even know I’m the size of a football player? Over the years, I’ve had to deal with my fair share of trolls and online abusers, people who take great pleasure in making others feel miserable online. The problem hasn’t improved much in recent years.

In fact, it seems to be getting worse.

One issue, said former CIA and LAPD officer Henderson Cooper, is that it is often hard to determine whether there is criminal intent. And free speech protections make Twitter an open forum.


Twitter has chosen to deal with the issue cautiously. The company (which declined to comment on the record) uses algorithms that can spot online abusers, but for the most part this technique only flags users and lets you block them from your feed.

Earlier this month, Ed Ho -- the VP of Engineering at Twitter -- announced that Twitter would start “limiting the functionality” of online abusers when it sees repeated offenses. One technique is to restrict an account so that only its direct followers see its posts.

What’s missing is real-time analysis -- someone could easily send you a death threat today out of the blue, and Twitter would only block that account if the social network noticed a pattern.

Experts say artificial intelligence could be the answer, however. Using machine learning -- and analyzing massive numbers of tweets -- AI could reduce or even eliminate Twitter abuse.

“AI can help stop harassment before it happens,” said Darren Campo, adjunct professor at the NYU Stern School of Business, explaining how an AI could weed through millions of posts in ways a human operator never could. Short of hiring thousands of moderators (the entire company has only around 3,800 employees), artificial intelligence could “add to the line of defense” against online abuse.


Consumer analyst Rob Enderle said an AI would have to be highly intelligent. Trolls often figure out ways around any safeguards; for them, evading detection is learned behavior. He says the underlying motive -- intent to cause harm in a public forum -- hasn’t been addressed.

“An AI can analyze the behavior, suggest a viable response, identify and track repeated offenders by their behavior, and flag those that need aggressive punitive action,” noted Enderle, who works for the Enderle Group. “It could be a major part of a complete solution to eventually eliminate much of this behavior and, at scale, there is really nothing else we have short of massive staffing that Twitter couldn’t afford to begin to address this.”

The “intent to harm” analysis is hard, but experts say it’s possible -- there are millions of tweets available online to analyze and dissect. Machine learning could at least stem the tide.

Richard Baraniuk, an IEEE Fellow and Professor of Electrical and Computer Engineering at Rice University, said an AI could use techniques similar to those that spot spam and malware today.

“The algorithm would simply count the frequencies of occurrence of words in a tweet and then compare them to the frequencies in tweets already judged to be harassing and the frequencies in tweets already judged to be non-harassing,” Baraniuk said. “Once we create a collection of tweets that are judged to be harassing and non-harassing, then we can apply the same approach to new tweets in order to rapidly remove abusers from the system.”
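The word-frequency comparison Baraniuk describes is essentially a bag-of-words classifier, such as multinomial naive Bayes. The sketch below is illustrative only -- the tiny training sets, labels, and function names are invented for this example, and a real system would be trained on millions of labeled tweets:

```python
from collections import Counter
import math

# Toy labeled data -- invented for illustration, not a real dataset.
harassing_tweets = [
    "you are worthless go away",
    "nobody likes you just quit",
]
benign_tweets = [
    "great article thanks for sharing",
    "congrats on the launch well done",
]

def word_counts(tweets):
    """Count word frequencies across a set of tweets."""
    counts = Counter()
    for tweet in tweets:
        counts.update(tweet.lower().split())
    return counts

def log_score(tweet, counts, total_words, vocab_size, prior):
    """Log-probability of the tweet under one class, with add-one smoothing
    so unseen words don't zero out the score."""
    log_p = math.log(prior)
    for word in tweet.lower().split():
        log_p += math.log((counts[word] + 1) / (total_words + vocab_size))
    return log_p

def classify(tweet, harassing, benign):
    """Label a new tweet by comparing its word frequencies against
    each class's frequencies, as in spam filtering."""
    h_counts, b_counts = word_counts(harassing), word_counts(benign)
    vocab_size = len(set(h_counts) | set(b_counts))
    n = len(harassing) + len(benign)
    h = log_score(tweet, h_counts, sum(h_counts.values()), vocab_size,
                  len(harassing) / n)
    b = log_score(tweet, b_counts, sum(b_counts.values()), vocab_size,
                  len(benign) / n)
    return "harassing" if h > b else "non-harassing"
```

With these toy sets, `classify("you are worthless just quit", harassing_tweets, benign_tweets)` returns `"harassing"`, because those words occur far more often in the harassing class. The same comparison, run over a large labeled corpus, is the “frequencies already judged” approach Baraniuk outlines.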

For now, Twitter seems to be on the defensive. The company posts often about tackling the issue, yet tweets still go largely unmonitored and abuse remains rampant.

“It is far more difficult to execute [effective real-time artificial intelligence] and firms often feel they can get off the hook for a fix if they just look like they are attempting a fix -- that is, by looking busy,” Enderle said. “Twitter is currently looking busy but if they don’t correct the cause the adverse impact on their service from this behavior will continue.”