Criticism over hate speech, extremism, fake news, and other content that violates community standards has the largest social media networks strengthening policies, adding staff, and reworking algorithms. In the Social (Net)Work series, we explore social media moderation, looking at what works and what doesn't, while examining possibilities for improvement.

Social media moderation is often about striking a balance between creating a safe online environment and inhibiting free speech. In many cases, the platforms themselves step up to protect users, as with Twitter's recent rule overhaul, or to retain advertisers, as with YouTube's changes after major advertisers boycotted the video platform. In other cases, such as Germany's new hate speech law and a potential similar European Union law, moderation is government mandated.

Earlier in January, Facebook, Twitter, and YouTube testified before a Senate committee on the steps they are taking to keep terrorist propaganda off their platforms. While such hearings are uncommon, the same companies also testified before Congress on Russian involvement in the 2016 U.S. election.

So should the government regulate social media platforms, or is there another option? In a recent white paper, the New York University Stern Center for Business and Human Rights suggests an alternative based on its research: moderation by the social media companies themselves, with limited government involvement. The report, Harmful Content: The Role of Internet Platform Companies in Fighting Terrorist Incitement and Politically Motivated Disinformation, looks specifically at political propaganda and extremism. While the group says social media platforms shouldn't be held liable for such content, its research suggests the platforms can, and should, do more to regulate it.

The group argues that, because social media platforms have already made progress in preventing or removing such content, self-moderation is not only possible but preferable to government interference. Platforms have historically leaned toward no moderation at all; unlike a newspaper, which chooses what it publishes and can be held liable for it, the platforms bore no legal responsibility for user content. Recent laws directed at social media are changing that: in Germany, social networks can face fines of up to $60 million if hate speech isn't removed within 24 hours.

The report doesn't push to make social networks liable for the information users share, but suggests a new category between traditional news editors, who are responsible for what they publish, and pure distributors that don't regulate content at all. "This long-standing position rests on an incorrect premise that either the platforms serve as fully responsible (and potentially liable) news editors, or they make no judgements at all about pernicious content," the white paper reads. "We argue for a third way — a new paradigm for how internet platforms govern themselves."

The spread of politically motivated misinformation is hardly new, the group points out, as evidenced by the "coffin handbills" distributed during Andrew Jackson's 1828 campaign, which accused the future president of murder and cannibalism. At one time, misinformation could be countered with, as Supreme Court Justice Louis Brandeis once said, "more free speech." The speed at which information travels on social media, however, changes that. During the 2016 election, the top 20 fake news stories on Facebook drew more engagement than the top 20 stories from major media outlets, according to BuzzFeed News.

"The problem with turning to the government to regulate more aggressively is that it could easily, and probably would, result in an overreaction by the companies to avoid whatever punishment was put in place," Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, told Digital Trends. "That would interfere with the free expression that is one of the benefits of social media… If the platforms do this work themselves, they can do it more precisely and do it without government overreach."

The group isn't suggesting the government stay out of social media entirely. Legislation that would apply the same rules to social media ads that already govern political ads on TV and radio, Barrett says, is one example of a law that wouldn't overreach. But, the paper argues, if social media companies step up their efforts against politically motivated misinformation and terrorist propaganda, broader government involvement wouldn't be necessary.

To reach those goals, the white paper suggests that social networks enhance their own governance, continue refining their algorithms, introduce more "friction" (such as warnings and notifications for suspicious content), expand human oversight, adjust advertising practices, and continue sharing knowledge with other networks. Finally, the group suggests identifying exactly what the government's role in the process should be.

Barrett recognizes that those suggestions won't be free for the companies, but calls the steps short-term investments with long-term payoffs. Some of those changes are already in motion; Facebook CEO Mark Zuckerberg has said the company's profits will be affected by the safety changes the platform plans to implement, including an increase in human review staff to 20,000 this year.

The expansion of Facebook's review staff joins a handful of other changes social media companies have launched since the report. Twitter has booted hate groups, YouTube is adding human review staff and expanding its algorithmic screening to more content categories, and Zuckerberg has made curbing abuse on Facebook his goal for 2018.

"The kind of free speech that we are most interested in promoting — and that the First Amendment is directed at — is speech related to political matters, public affairs, and personal expression," Barrett said. "None of those kinds of speech would be affected by an effort to screen out disguised, phony advertising that purports to come from organizations that don't really exist and actually attempts to undermine discussions on elections. There will be some limitations on fraudulent speech and violent speech, but those are not the types of speech protected by the First Amendment. We can afford to lose those types of speech to create an environment where free speech is promoted."