Machine learning system can detect jokes to help police focus on actual terrorist threats

Humans often have trouble discerning the true emotional sentiment expressed by another human in a text message, instant message, email, or social media post -- so maybe a machine learning system can help. A computer science student in Israel is working on exactly that, and the consequences may reach beyond the realm of social interaction into filtering out noise for counterterrorism efforts and suicide prevention.

Eden Saig, a computer science student at the Technion -- Israel Institute of Technology, has developed a machine learning system to detect and identify emotions in electronic communications, as detailed in his paper "Sentiment Classification of Texts in Social Networks," which recently won the Amdocs Best Project Contest. The key to the system: analyzing humorous Facebook groups.

He applied machine-learning algorithms to more than 5,000 posts on tongue-in-cheek Hebrew-language Facebook pages for "superior and condescending people" and "ordinary and sensible people," since they had content that "could provide a good database for collecting homogeneous data that could, in turn, help 'teach' a computerized learning system to recognize patronizing sounding semantics or slang words and phrases in text," Saig said.

Saig improved the accuracy of sentiment identification by combining keyword searches, grammatical structure analysis, and the number of "likes" each post receives.
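To make the idea concrete, here is a minimal sketch of how those three signals might be combined. This is purely illustrative -- the keyword lists, the grammatical heuristic, and the "likes" weighting below are assumptions for demonstration, not Saig's actual method or features:

```python
# Illustrative sketch: combine keyword matches, a crude grammatical cue,
# and a post's "like" count into a sentiment label. All keyword lists and
# weights are hypothetical, chosen only to show the shape of the approach.

CONDESCENDING_KEYWORDS = {"obviously", "clearly", "everyone knows", "as usual"}
CARING_KEYWORDS = {"hope", "thank", "glad", "take care"}

def classify_post(text: str, likes: int) -> str:
    """Return 'condescending', 'caring', or 'neutral' for a post."""
    lower = text.lower()
    # Signal 1: keyword matches for each sentiment class.
    cond = sum(kw in lower for kw in CONDESCENDING_KEYWORDS)
    care = sum(kw in lower for kw in CARING_KEYWORDS)
    # Signal 2: a crude grammatical cue -- rhetorical questions that open
    # with "why"/"how" often read as patronizing.
    if "?" in text and lower.startswith(("why", "how")):
        cond += 1
    # Signal 3: scale both scores by audience reaction ("likes"), capped
    # so a viral post cannot dominate the text-based signals entirely.
    weight = 1.0 + min(likes, 100) / 100.0
    cond_score, care_score = cond * weight, care * weight
    if cond_score > care_score:
        return "condescending"
    if care_score > cond_score:
        return "caring"
    return "neutral"

print(classify_post("Obviously everyone knows that already.", likes=42))
# prints: condescending
```

A real system would learn these features and their weights from labeled training data (such as the Facebook posts described above) rather than hand-coding them.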

"Now, the system can recognize patterns that are either condescending or caring sentiments and can even send a text message to the user if the system thinks the post may be arrogant," according to Saig.

He sees this kind of machine learning system as a tool to help police screen out social media posts that merely joke about planning terrorist attacks, so that resources aren't wasted on false alarms.

Saig also sees applications in depression, suicide, and cyberbullying prevention, where a machine learning system could help distinguish jokes from genuine threats or cries for help.

"I hope that ultimately I can develop a mechanism that would demonstrate to the writer how his or her words could be interpreted by readers thereby helping people to better express themselves and avoid being misunderstood," Saig said.