Opinion: Trolling Is Now Mainstream Political Discourse

Our new study on Islamophobia, xenophobia, and racism during the 2018 midterms confirms we're on a path to digital dystopia.
During the 2018 midterm election season, more than half of some 100,000 tweets about Muslim congressional candidates Ilhan Omar and Rashida Tlaib involved outright hate speech. Photograph: Stephen Maturen/Getty Images

It was a few weeks before the 2016 election, and I was putting together a report on the future of online political discourse. We had canvassed thousands of the world's leading experts in technology and culture, and had begun the long task of interpreting the more than 700 responses to the final question in our survey:

In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust?

Despite having studied this space for years, I read agape. It wasn't that the predictions for the coming decade were pessimistic; I had expected that. It was their excruciatingly candid, matter-of-fact dystopianism that stuck with me. One comment in particular would prove downright prescient.

Kate Crawford, a leading scholar and author who regularly comments on the impact of technology, said “Distrust and trolling is happening at the highest levels of political debate, and the lowest .... The Overton window [the range of ideas acceptable in public discourse] has been widened considerably by the 2016 US presidential campaign, and not in a good way ... presidential candidates speak of banning Muslims from entering the country [and] retweet neo-Nazis. Trolling is a mainstream form of political discourse.”

Mainstream trolling? Sure, I thought at the time.

Overall, there seemed to be a consensus that civility online was bound to get worse before it got better. Yet there was a sense of hope that advances in machine learning and natural language processing might eventually shield us from most of it, like a leveled-up Gandalf riding down the mountain on a white horse.

"To troll is human," one of the bolded takeaways in the Future of the Internet report said. Life went on.

Last year, I joined a research effort led by Lawrence Pintak that examined the experiences of more than 80 American Muslims who ran for office in the 2018 midterms. We were eager to understand the prevalence of hate speech, xenophobia, and toxic behavior over the course of their campaigns. The first report from our study was just published.

As the experts predicted back in 2016, we did head down the dystopian path. Trolling became a mainstream form, if not the mainstream form, of political discourse. Fueled by networked communication technologies, everyone now has a voice, for better and for worse. Platforms like Facebook, Twitter, and YouTube have democratized participation.

Before getting into the nitty-gritty, let's take another step back, to an earlier moment of prescience. In early 2008, author and technology critic Douglas Rushkoff delivered a keynote address at the Personal Democracy Forum. In his speech, he railed against what he felt was a fundamental misconception of networked democratic participation:

The technologies we're using—the biases of these media—cede central authority to decentralized groups. Instead of moving power to the center, they tend to move power to the edges. This means the way to participate is not simply to subscribe to an abstract myth, but to do real things. That's the opportunity of the networked era: to drop out of myths and actually do.

Rushkoff's observation encapsulates, with remarkable precision, the trajectory of political discourse in America in the 11 years since. As he saw it, our notion of democratic participation was formed in the Renaissance and founded on a naive idea of individual participation, one that worked against the individual by ceding power to central authorities.

Social networks function in the opposite direction: They take power away from central authorities and institutions and push it to individuals at the edges. Rushkoff saw this power, at least as a tool for democratic gain, as harnessed only once people took action, and that is what we've witnessed since 2008. This idea underpins both the digital activism that helped elect Barack Obama to the presidency and the mechanism that turned the tables to elect our current president. The issues may have changed, but the means to campaign wins are largely one and the same: action.

Machine learning, it turns out, is not galloping toward us on a white horse (or a Tesla) to whisk us away from our decaying public sphere. We're in Ludicrous Mode. At best, moderation tech dampens the toxicity that's visible at the network's surface, while leaving the edges of the network, where the worst harassment and polarization happen, to fend for themselves. And, of course, it demands huge capital investments in technology.
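For the technically minded, here is why surface-level moderation structurally misses the edges. The toy sketch below, in Python, triages posts for review by visibility; everything in it, from the toxicity_score stand-in to the thresholds, is a hypothetical illustration, not any platform's actual pipeline.

```python
# Toy illustration: visibility-based moderation triage reviews what is
# popular on the "surface" and never reaches low-visibility reply chains
# at the "edges." toxicity_score() is a crude stand-in for a real model;
# nothing here reflects any platform's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    views: int   # how visible the post is on the network's surface
    depth: int   # 0 = top-level; deeper = further out on the edges

def toxicity_score(text: str) -> float:
    """Stand-in for a learned classifier: a crude keyword heuristic."""
    slurs = {"vermin", "traitor", "invader"}
    words = set(text.lower().split())
    return len(words & slurs) / max(len(words), 1)

REVIEW_THRESHOLD = 10_000  # only posts this visible get scored at all

def moderate(feed: list) -> list:
    """Drop toxic posts, but only among those visible enough to review."""
    kept = []
    for post in feed:
        if post.views >= REVIEW_THRESHOLD and toxicity_score(post.text) > 0.2:
            continue  # caught: toxic AND on the surface
        kept.append(post)  # low-visibility toxicity sails through
    return kept

feed = [
    Post("these traitors are vermin", views=50_000, depth=0),  # reviewed, removed
    Post("these traitors are vermin", views=40, depth=7),      # never reviewed
]
print([(p.views, p.depth) for p in moderate(feed)])  # -> [(40, 7)]
```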

The number of real people participating, including those who inspire and galvanize others to take political action, such as voting, is on the decline. Instead, social platforms are increasingly populated by machines: bots, conversational AI, and the like. Their agenda includes silencing real people who voice opposition to, or support for, certain views. These machines also serve as a kind of threat intelligence, connecting our conversations, discovered by monitoring the feelings we express and the posts we share, to political issues.

In our latest study, we found more than half of some 100,000 tweets about two female Muslim congressional candidates in the 2018 midterms (both of whom would eventually win historic victories) involved outright hate speech. What's more, the bulk of the harassment and provocation came from a small cohort of troll-like accounts. These amplifiers didn’t simply retweet news stories and spam links. Content wasn't necessarily their primary weapon; connectivity was.
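For readers curious about the arithmetic behind "a small cohort," a concentration measure like the one sketched below, in Python with pandas, is one way such a finding might be computed. The file name, column names, and labels here are hypothetical stand-ins, not the study's actual data or pipeline.

```python
# Minimal sketch: how concentrated is the hostile activity among accounts?
# Assumes a hypothetical CSV with one row per tweet and columns "author"
# and "is_hate" (a 0/1 label from an upstream classifier); this is not
# the study's actual data or code.
import pandas as pd

tweets = pd.read_csv("labeled_tweets.csv")
hate = tweets[tweets["is_hate"] == 1]

# Hateful tweets per account, largest contributors first.
per_account = hate["author"].value_counts()

# Share of all hateful tweets produced by the top 1 percent of accounts.
top_n = max(1, int(len(per_account) * 0.01))
share = per_account.head(top_n).sum() / len(hate)

print(f"{len(per_account):,} accounts produced {len(hate):,} hateful tweets")
print(f"The top {top_n:,} accounts produced {share:.0%} of them")
```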

We found a remarkable pattern: These accounts persistently tagged Representatives Ilhan Omar of Minnesota and Rashida Tlaib of Michigan, both Democrats, into threads and replies. This in turn helped funnel hate speech, amplify rumors, and pull others into heated discussion threads. While some of the instigator accounts were stereotypical bots, others represented an upgraded model of troll: They showed traces of automation, quickly swarming on a specific post, for example, but were clearly operated and supervised by real humans; they were cyborgs. Rather than acting as mass amplifiers, these accounts functioned more like polarization vacuums. To me, this signals a wholesale shift in political distortion tactics.
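As a rough illustration of how a tagging-and-swarming pattern like this could be surfaced, here is a minimal pandas sketch. The handles are the representatives' real Twitter accounts, but the data file, its schema, and the thresholds are assumptions made for the example, not the study's methodology.

```python
# Illustrative sketch: flag accounts that persistently tag the two candidates
# into reply threads and whose reply timing hints at automation ("cyborgs").
# The CSV file, its schema, and the thresholds are hypothetical assumptions.
import pandas as pd

TARGETS = {"@IlhanMN", "@RashidaTlaib"}  # the candidates' Twitter handles

tweets = pd.read_csv("replies.csv", parse_dates=["created_at"])

# Keep replies that mention either candidate; in this assumed schema,
# "mentions" is a space-separated string of handles.
tagged = tweets[tweets["mentions"].apply(
    lambda m: bool(TARGETS & set(str(m).split())))]

flagged = []
for author, group in tagged.groupby("author"):
    if len(group) < 20:  # ignore accounts that only do this occasionally
        continue
    # Seconds between consecutive tagged replies from the same account.
    gaps = group["created_at"].sort_values().diff().dt.total_seconds()
    median_gap = gaps.median()
    # Persistent tagging plus bursty, seconds-apart replies suggests a
    # human-supervised automated account rather than an organic user.
    if pd.notna(median_gap) and median_gap < 60:
        flagged.append((author, len(group), median_gap))

for author, n, gap in sorted(flagged, key=lambda f: -f[1]):
    print(f"{author}: {n} tagged replies, median gap {gap:.0f}s")
```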

This is a new twist on electoral politics and democratic participation, in 2020 and in the coming decade. Over time, and especially across disparate Twitter communities, groups, and hashtags, these tactics will continue to surface anger and emotional vitriol. They will tie political candidates' identities to controversial issues, raising the two in tandem, and then deliver that pairing to real voters in the form of a narrative. This manufacturing of outrage legitimizes otherwise unsustainable rumors and ideas.

Seen through Rushkoff's lens, these hostile actors are exploiting fundamental design flaws in Twitter's social connectivity to galvanize feelings around heated issues (gender, ethnicity, and religion) and convert them into political action: voting.

We've moved power away from the center, which isn't a bad thing. But as it stands, the affordances of online anonymity, the lack of oversight, and the incentive for bad actors to stay two steps ahead of moderation tech at every turn ensure that the more we participate, the deeper we dig ourselves into an inequitable system of governance.

We have entered an era in which silence is not golden and our participation is beholden to technology platforms. It's a rigged game we cannot win, which means American voters have but one way out: taking action in 2020.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints.

