Women on Twitter are sent abusive or problematic content every 30 seconds, according to a new investigation from Amnesty International.
The human rights group partnered with Element AI, a global artificial intelligence software firm, to survey millions of tweets received by 778 female journalists and politicians from the U.S. and the U.K. in 2017. With the aid of machine learning, the study found that abuse is widespread and that black women are targeted most of all.
A total of 7.1 percent of tweets sent to women in the study were considered "problematic" or "abusive." While Twitter has its own definition of abusive content, the "problematic" label was Amnesty International's, defined as content that is "hurtful or hostile, especially if repeated to an individual on multiple or cumulative occasions."
"We found that, although abuse is targeted at women across the political spectrum, women of color were much more likely to be impacted, and black women are disproportionately targeted," Milena Marin, senior advisor for tactical research at Amnesty International, explained in a blog post.
The study included politicians from across the political spectrum and journalists from a range of publications, including The New York Times, The Guardian, The Sun, Pink News and Breitbart. It had more than 6,500 volunteers sorting through the tweets — which Amnesty calls the world's largest crowdsourced dataset about online abuse against women.
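Amnesty has not published the details of the Element AI model, but the general approach the article describes, training a classifier on human volunteers' labels and then applying it to tweets at scale, can be sketched with a toy Naive Bayes text classifier. The example tweets, labels, and function names below are invented purely for illustration:

```python
# Illustrative sketch only: the actual Amnesty/Element AI pipeline is not
# public in this detail. This shows the general idea of extrapolating from
# human-labeled examples with a simple Naive Bayes text classifier.
from collections import Counter
import math

# Hypothetical labeled tweets, standing in for the volunteers' judgments.
labeled = [
    ("you are brilliant and kind", "ok"),
    ("great reporting today", "ok"),
    ("thanks for the thoughtful thread", "ok"),
    ("you are worthless trash", "abusive"),
    ("shut up you stupid hag", "abusive"),
    ("nobody wants trash like you here", "abusive"),
]

def train(examples):
    """Count per-label word frequencies and label frequencies."""
    word_counts = {"ok": Counter(), "abusive": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest log prior + log likelihood,
    using add-one smoothing so unseen words don't zero out a score."""
    total = sum(label_counts.values())
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        denom = sum(word_counts[label].values()) + len(vocab)
        score = math.log(label_counts[label] / total)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(labeled)
print(classify("you are trash", word_counts, label_counts))           # abusive
print(classify("brilliant thread today", word_counts, label_counts))  # ok
```

A real system would use far richer features and models than this, but the workflow is the same: human labels in, a model that scores millions of unlabeled tweets out.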
Among the "Troll Patrol" volunteers' other findings:
– Black women were 84 percent more likely to be sent abusive tweets than white women.
– Women of color, a group that includes Asian, Latinx, black and mixed-race women, were 34 percent more likely to be mentioned in abusive or problematic tweets than white women.
– Online abuse is bipartisan: Liberal and conservative women, along with women from liberal and conservative publications, all faced similar levels of abuse.
"Troll Patrol isn't about policing Twitter or forcing it to remove content. We are asking it to be more transparent, and we hope that the findings from Troll Patrol will compel it to make that change. Crucially, Twitter must start being transparent about how exactly they are using machine learning to detect abuse and publish technical information about the algorithms they rely on," Marin said in a statement.
Marin added: "We have the data to back up what women have long been telling us — that Twitter is a place where racism, misogyny and homophobia are allowed to flourish basically unchecked."
Vijaya Gadde, Twitter's legal, policy and trust & safety global lead, told Fox News in a statement:
"I would note that the concept of “problematic” content for the purposes of classifying content is one that warrants further discussion. It is unclear how you have defined or categorized such content, or if you are suggesting it should be removed from Twitter. We work hard to build globally enforceable rules and have begun consulting the public as part of the process — a new approach within the industry."
A source familiar with the company's thinking on the matter told Fox News that Twitter has introduced dozens of changes over the last two years to improve the safety of the platform. According to its latest biannual transparency report, Twitter received reports on over 2.8 million "unique accounts" for abuse, nearly 2.7 million accounts for "hateful" speech, and 1.35 million accounts for violent threats. Of those, the company took action — which can include account suspension — on about 250,000 for abuse, 285,000 for hateful conduct and just over 42,000 for violent threats.
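Those figures imply that only a small fraction of reported accounts were actioned. Taking the approximate numbers quoted above at face value, the rates work out to roughly 9 percent for abuse, 11 percent for hateful conduct and 3 percent for violent threats:

```python
# Back-of-envelope action rates from the transparency-report figures
# quoted above (reported accounts vs. accounts actioned; all approximate).
reports = {
    "abuse": 2_800_000,
    "hateful conduct": 2_700_000,
    "violent threats": 1_350_000,
}
actioned = {
    "abuse": 250_000,
    "hateful conduct": 285_000,
    "violent threats": 42_000,
}

for category in reports:
    rate = actioned[category] / reports[category]
    print(f"{category}: {rate:.1%} of reported accounts actioned")
```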
In addition, Twitter has said in the past that it is investing in better technology to identify abusive content or behavior ahead of time and limit its spread.