I-Team Election Project

I-Team: Meet the Troll Hunters

A team of tri-state researchers is using artificial intelligence to help hunt down internet trolls aiming to put a thumb on the scale of our elections


What to Know

  • Researchers are using machine learning to identify foreign troll operations
  • By analyzing canceled accounts, the AI technology learned to recognize how foreign trolls change their strategies
  • Researchers say they're able to pick up on covert operations by tracking similarities in posting schedules, domains and hashtags

A team of researchers at New Jersey Institute of Technology, Princeton University, and New York University has developed artificial intelligence they say can predict when inflammatory social media posts are actually coming from foreign internet trolls.

The researchers developed a way of using machine learning to identify patterns that link accounts of foreign disinformation agents – even when those accounts are brand new.

Jacob Shapiro, a Princeton professor of politics and international affairs, said the AI ferrets out foreign trolls by picking up on similarities in the links they post, the time of day they publish, the domains they use, and the hashtags they reference.  He said developing the technology cost just a few hundred thousand dollars.

“There were always multiple weird things that the Russian, Chinese, and Venezuelan trolls were doing that you could use to separate them from normal people,” Shapiro said.
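
The researchers' code isn't published in this story, but a minimal sketch, assuming hypothetical post data and the scikit-learn library, shows how per-account signals like posting hour, shared domains and hashtags could feed a supervised classifier. The field names, the toy examples, and the choice of a random forest are illustrative assumptions, not the study's actual design.

    # Minimal sketch (not the researchers' actual pipeline): turn the kinds
    # of signals described above -- posting hour, shared domains, hashtags --
    # into per-account features and train a classifier on accounts already
    # labeled as troll or ordinary.
    from urllib.parse import urlparse

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.pipeline import make_pipeline

    def account_features(posts):
        """Aggregate one account's posts into a feature dictionary.

        `posts` is a list of dicts with hypothetical keys:
        'hour_utc', 'urls', 'hashtags'.
        """
        feats = {}
        for p in posts:
            hour_key = f"hour_{p['hour_utc']:02d}"
            feats[hour_key] = feats.get(hour_key, 0) + 1
            for url in p["urls"]:
                domain_key = f"domain_{urlparse(url).netloc}"
                feats[domain_key] = feats.get(domain_key, 0) + 1
            for tag in p["hashtags"]:
                tag_key = f"tag_{tag.lower()}"
                feats[tag_key] = feats.get(tag_key, 0) + 1
        return feats

    # Toy labeled data: 1 = known troll account, 0 = ordinary account.
    accounts = [
        [{"hour_utc": 8, "urls": ["http://example-news.example/story"], "hashtags": ["Election"]}],
        [{"hour_utc": 21, "urls": ["http://localpaper.example/sports"], "hashtags": ["GameDay"]}],
    ]
    labels = [1, 0]

    model = make_pipeline(DictVectorizer(), RandomForestClassifier(n_estimators=100))
    model.fit([account_features(a) for a in accounts], labels)

    # A brand-new account can then be scored from its first posts alone.
    new_account = [{"hour_utc": 9, "urls": ["http://example-news.example/other"], "hashtags": ["Election"]}]
    print(model.predict_proba([account_features(new_account)]))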

After revelations surfaced about Russian trolls meddling in the 2016 presidential election, Twitter, Facebook, Reddit and other social media platforms took steps to limit the impact of foreign disinformation campaigns. 

In some cases, they took posts down and closed accounts.  In others, they notified users who were exposed to foreign propaganda. More recently, Twitter and Facebook publicly warned that Russian agents are again trying to use their platforms to influence next month’s election.

But the researchers say the federal government could amplify the efforts of social media companies by developing its own artificial intelligence and publishing country-specific forecasts that warn Americans when foreign influence campaigns are active.

“If we can do a pretty good job of this in our research lab with the small amount of resources we have access to, the government can do a much better job of this, creating greater awareness of what other countries are trying to do to influence our politics and that would improve our democracy,” Shapiro said.

To build the AI technology, the NJIT, Princeton, and NYU researchers used data provided by Twitter and Reddit showing which foreign accounts and posts those platforms canceled, often because the accounts were connected to the Russian Internet Research Agency (IRA), a company accused by federal prosecutors of generating hundreds of thousands of pieces of troll content.

By analyzing those canceled accounts, the AI technology learned to recognize how foreign trolls change their strategies.
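
The story doesn't detail how that labeled data was assembled, but a minimal sketch, assuming made-up accounts, column names and dates, shows the general idea: posts from canceled troll accounts get one label, ordinary posts another, and grouping by time period is what lets a model track strategies as they shift.

    # Minimal sketch, not the study's actual data handling: combine posts
    # from a platform takedown release (labeled troll) with ordinary posts
    # (labeled non-troll), then group by month so a model can be retrained
    # period by period and shifts in troll strategy become visible.
    # Account names, columns and dates are illustrative assumptions.
    import pandas as pd

    troll_posts = pd.DataFrame({
        "account": ["ira_account_1", "ira_account_2"],
        "created_at": ["2016-09-14 06:10:00", "2016-10-02 07:45:00"],
        "text": ["#Election post with a link", "another troll post"],
    })
    ordinary_posts = pd.DataFrame({
        "account": ["local_user_1"],
        "created_at": ["2016-10-02 21:30:00"],
        "text": ["#GameDay post"],
    })

    troll_posts["label"] = 1      # came from a canceled troll account
    ordinary_posts["label"] = 0   # ordinary account

    training_data = pd.concat([troll_posts, ordinary_posts], ignore_index=True)
    training_data["month"] = pd.to_datetime(training_data["created_at"]).dt.to_period("M")

    # Count labeled posts per month -- the slices a model would retrain on.
    for month, chunk in training_data.groupby("month"):
        print(month, chunk["label"].value_counts().to_dict())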

Cody Buntain, an NJIT assistant professor of informatics, said that between 2015 and 2017, the key characteristics of Russian troll accounts changed often.

“At the beginning, it was things like what time of day these accounts posted,” Buntain said.  “It turned out many of them posted during the general working day for Moscow – not in the United States.”
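
That posting-hour signal is simple to compute. Below is a minimal sketch: convert a post's UTC timestamp to Moscow time and check whether it falls in a typical working day there. The example timestamp and the 9-to-6 weekday window are illustrative assumptions, not the study's exact definition.

    # Does a post's timestamp fall within a typical Moscow working day?
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    def posted_during_moscow_workday(ts_utc: datetime) -> bool:
        moscow_time = ts_utc.astimezone(ZoneInfo("Europe/Moscow"))
        return moscow_time.weekday() < 5 and 9 <= moscow_time.hour < 18

    example = datetime(2016, 10, 3, 7, 30, tzinfo=timezone.utc)  # 10:30 a.m. Monday in Moscow
    print(posted_during_moscow_workday(example))  # True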

Later, the AI picked up on a change. The trolls were posting their content more randomly. But they were still using other common indicators.

“Then it became more about the hashtags that they used,” Buntain said.  “And then it became the political websites, the nature of the political websites that they shared.”
