Is Twitter doing enough to crack down on porn links?

Last year Twitter took steps to clamp down on tweets featuring non-consensual explicit sexual images and videos, but experts say that the platform still needs to tackle the issue of porn links.

While Twitter is, for the most part, a vast collection of pithy 280-character sentiments, you may have clicked a link or two that led to an explicit image. That’s surprising because Twitter is aimed mostly at everyday users -- including kids, who can sign up at age 13.

One report about trending topics blamed bots for spreading porn links on Twitter. One example: when reports that New England Patriots player Aaron Hernandez had committed suicide started trending, it became obvious that some of the trending links led to porn sites. Another highly publicized incident involved the circulation of revenge porn. It’s also common to see a “user” that is obviously a bot with a sexually oriented profile image, a ploy meant to entice you into clicking its links.

Experts say, however, that the problem of porn links is growing.

Dr. Nicole Prause, a neuroscientist who studies sexual motivation, says the main issue is consent. Adults and kids who use Twitter may not realize that there are no safeguards in place, and that it takes just one link in a trending topic to land on a porn site. “People should be informed to know what kind of content they are accessing and be able (within reason) to avoid it if they do not wish to see it, otherwise they are not consenting to that exposure,” she says.

With better safeguards, suspicious accounts could be blocked automatically, or a warning could appear telling the user that a link leads to a site containing pornography.

"Twitter could look at a series of signals to determine if either the user sharing the false link or the link itself should be flagged before allowing users to click through,” says Eric Dahan, co-founder and CEO of Open Influence, a marketing platform that uses similar technology to spot fake accounts. “Though AI is not necessary to do this, AI can be used to learn what a malicious user looks like and assign a confidence level to users that fit that profile.”
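The signal-scoring approach Dahan describes can be sketched in a few lines: combine several account-level signals into a single confidence score and flag anything above a threshold. The signal names, weights, and threshold below are purely illustrative assumptions, not Twitter’s or Open Influence’s actual system.

```python
# Hypothetical sketch of signal-based flagging: each signal that
# "fires" contributes its weight to a suspicion score. All names
# and weights are illustrative, not a real platform's rules.

def suspicion_score(account):
    signals = {
        "new_account": account.get("account_age_days", 0) < 7,
        "default_avatar": account.get("default_avatar", False),
        "mostly_links": account.get("links_per_tweet", 0.0) > 0.8,
        "bad_domain": account.get("domain_blacklisted", False),
    }
    weights = {
        "new_account": 0.2,
        "default_avatar": 0.1,
        "mostly_links": 0.3,
        "bad_domain": 0.4,
    }
    # Sum the weights of every signal that fired.
    return sum(weights[name] for name, fired in signals.items() if fired)

def should_flag(account, threshold=0.5):
    """Flag the account for review if its score clears the threshold."""
    return suspicion_score(account) >= threshold
```

As Dahan notes, nothing here requires machine learning; a learned model would simply replace the hand-picked weights with ones fit to labeled examples of malicious accounts.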

Yvan Monneron, who runs a private browsing platform called SnowHaze, says protecting younger users is much more difficult. For one thing, it’s hard to know whether a middle-schooler lied about his or her age. He says Twitter tends to monitor accounts and strings of tweets, not individual tweets. “Blacklists (effective but not efficient) or a good combination of artificial intelligence may achieve this. Concerning the important protection of young users, a big company like Twitter should make investment in youth protection priority,” he says.
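The blacklist approach Monneron mentions amounts to checking each outbound link’s domain against a known-bad set before showing it. A minimal sketch, with placeholder domains (the set and policy are assumptions for illustration):

```python
# Minimal domain-blacklist check: extract the link's domain and
# refuse it if it appears on a known-bad list. Domains are fakes.
from urllib.parse import urlparse

BLACKLISTED_DOMAINS = {"bad-example.test", "spam-example.test"}

def link_allowed(url):
    domain = urlparse(url).netloc.lower()
    # Strip a leading "www." so www.bad-example.test still matches.
    if domain.startswith("www."):
        domain = domain[4:]
    return domain not in BLACKLISTED_DOMAINS
```

This also illustrates his “effective but not efficient” caveat: a blacklist reliably blocks what it lists, but bots can register fresh domains faster than humans can add them, so the list needs constant upkeep.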

AI routines could monitor for fake accounts, block suspect links for younger users, and protect everyone else. So why hasn’t Twitter taken the steps to make that happen? Monneron says, in part, that it’s genuinely hard: the bots that spread the links react to countermeasures and adjust their tactics, so any defense has to be a concerted, ongoing security effort.

“Further evaluation of such techniques might show great potential in combatting bots,” he said.

Twitter representatives said the platform does allow some adult content. Users sharing such links can mark the content as sensitive, and there is a setting that blocks adult content. But that does not account for links that are hidden or not marked as sensitive, or for links that sneak past the filters.