If you’ve spent any significant amount of time online, you’ve probably come across them.
They lurk on comment boards. They pop up in your mentions on Twitter. They’re anywhere and everywhere people congregate online, adding their own special contributions to the discourse.
They’re commonly, and nebulously, referred to as “trolls.” And, even though they make up a tiny portion of the billions of internet users worldwide, they make their presence felt.
“It’s a problem everywhere, and it seems to be getting worse,” says Laura Macomber, journalism and press freedom project manager for the free-expression organization PEN America. “It’s certainly becoming more and more amplified.”
It hasn’t always been this way. The concept of “trolling” began on the newsgroups of the early internet as a way for members to share inside jokes and make each other laugh. In the years that followed, trolling branched into online harassment, cyberbullying, doxxing (the posting of personal details like a home address or Social Security number), threats of violence, and other behavior that can have severe effects on its targets.
The internet has given practically everyone a voice. Those who use it to provoke and harass wield a disproportionate influence on our experience online because they are persistent, outrageous, and just itching for a reaction.
I just do not understand internet trolls? So much spare time to be sitting writing abuse 😂🤔 firm believer in treat others they way you’d want to be treated yourself 💭
— Chelsea Robertson (@chelsrobertson_) May 28, 2018
“Somebody gets to act online with impunity,” says Andrea Weckerle, founder of CiviliNation, an organization that combats online harassment. “They will sit there and prod somebody until they react. The minute they get a reaction out of you, they’re like, ‘Hey this is great. We’re going to stick to this topic or person because of the lulz.’” The lulz, of course, is the internet’s corruption of LOL: trolling done just for giggles.
Who are the people that engage in this behavior? Why do they do it, and is there anything we can do to reclaim the internet from them?
From Under the Bridge
Lindsay Blackwell takes issue with the way most people use the terms “troll” and “trolling.” A PhD candidate at the University of Michigan School of Information, Blackwell has made it her life’s work to study the targets—and perpetrators—of online harassment.
She feels that lumping everyone together under the “troll” umbrella is a bit reductive.
“This is a long-standing soapbox of mine,” Blackwell says.
The term “trolling” comes from the early days of Usenet in the 1980s. Back then, the mostly male, tech-savvy denizens of the collection of newsgroups that made up Usenet would “troll” for comments on posts the way an angler trolls for fish: drag some tantalizing bait through the water and wait to see if something bites.
On the internet, this usually meant posting messages that were patently ridiculous in the context of the board and hoping that someone engaged with them seriously. This spirit is still alive and well today, with trolls such as Ken M—who posts knowingly ludicrous comments under news stories to get reactions—and the tradition of Photoshop trolls, in which trolls fulfill another user’s request for help Photoshopping a picture…but not in the way the requester intended.
“Those are things that are funny,” Blackwell says. “You might have your time wasted, but you’re not going to leave that interaction feeling horrible. Where we run into trouble is when people start adopting that word to apply to malicious, abusive behavior online.”
A 2017 Pew Research Center study found that 41 percent of American respondents had been harassed online, and 66 percent had witnessed this sort of behavior targeted at others. Eighteen percent of respondents said they had been the targets of such severe forms of harassment as sexual harassment and physical threats.
There is a broad spectrum of severity when it comes to such behaviors, from, say, playfully mocking someone’s music preferences to broadcasting someone’s home address and phone number and rallying others to send them threatening mail. There is also a broad spectrum of people who engage in behavior that would traditionally be associated with trolling, from someone who fires off a snarky tweet every now and then to someone who consistently and habitually finds new targets on which to prey in increasingly harmful ways.
A 2014 study found that not only did strong positive correlations exist between frequent online commenting and enjoyment of trolling, but that trolling correlated positively with the “Dark Tetrad” of personality traits: narcissism, Machiavellianism, psychopathy, and sadism.
“There is no single motivation. Like a lot of human behavior, it is complex,” says Joseph Reagle, associate professor of communication studies at Northeastern University and author of Reading the Comments: Likers, Haters, and Manipulators at the Bottom of the Web. “I sometimes make the distinction between people who act poorly online because they are having a bad day and those who may have more antisocial personality traits. But even then, it is probably a sliding scale.”
Blackwell cautions against seeing the more benign form of trolling as a gateway to harassment. She also cautions against the notion that only a certain type of person is capable of this behavior.
“For all of us, there are certain things that motivate us to participate in aggressive behavior online,” Blackwell says. “It’s not just trolls, or harassers, or bullies. All those words really create this mentality that makes people feel comfortable saying, ‘I’m one of the good guys. They’re the dudes in black hoodies living in their mom’s basement and posting on 4chan all day.’”
“It’s not that a subset of the population are trolls. It’s that all of us are capable of this behavior.”
Actions and Consequences
Some experts have endeavored to classify the types of trolls—a “troll taxonomy,” if you will.
Susan Benesch, a faculty associate at the Berkman Klein Center for Internet &amp; Society at Harvard and the executive director of the Dangerous Speech Project, distinguishes between intermittent and chronic “bad actors” online, ones who post toxic rhetoric every now and then versus ones who do it on a regular basis. She also recognizes a class of “bandwagon-jumpers,” who are spurred to action by certain topics or events.
At any rate, their victims often wind up at the center of what Reagle calls a “trollplex,” a sustained attack by people of various backgrounds and behaviors coalescing around a target. That’s never a good place to be.
“Online is the same as offline at this point. You really can’t separate it out,” says Weckerle. “Once information is posted online, it’s indexed, can be found via search engine, and can have long-term, damaging consequences. It can harm people who really haven’t done anything wrong.”
In her book, Weckerle uses the example of the parents of Mitchell Henderson, a teenager who took his own life in 2006. They became the target of harassment, including nuisance calls in the voice of their deceased son. This all stemmed from a MySpace memorial page that segments of the internet found amusing.
The trolling debate is often couched in terms of free expression. Sure, we may not agree with what they’re saying or how they say it, but isn’t their right to say it one of the foundational tenets of our democracy?
Some free-speech organizations worry that a largely sanction-free webspace is curtailing the speech of those who are often the targets of online harassment. Last fall, PEN America conducted a survey of more than 230 writers and journalists who have experienced online harassment. Around two-thirds of those surveyed reported a “severe reaction” to this harassment, including fearing for their safety or the safety of their loved ones, refraining from publishing their work, and/or permanently deleting their social media accounts.
On a larger scale, people from traditionally underrepresented classes—women, racial minorities, the LGBTQ community—tend to be the ones who bear the disproportionate brunt of online harassment.
“If somebody is being silenced because of somebody else’s expression, then whose speech do we want to prioritize here?” Macomber says. “The internet is a really important form of communication for more marginalized communities. So if they’re also withdrawing from online spaces, then we’re not hearing from them at all. And that’s a huge issue.”
Don’t Believe “Don’t Feed the Trolls”
Another remnant from the Usenet days has made its way into the current discourse. Back then, the phrase “Don’t feed the trolls” meant that ignoring the bait from mischievous posters was the best policy. That way, everyone could get back to having meaningful discussions as quickly as possible without the inevitable sidetracks trolling invited.
Today, with all the damaging behaviors associated with trolling, that advice doesn’t really work anymore.
“When we say it to people who are being doxxed and are being sent threats of [violence] … telling them that is really useless and does way more harm than good,” Blackwell says. “It minimizes the problem and the impacts of online abuse.”
So what recourse do you have if you become the target of trolling? Short answer: It depends on what kind of harassment you’re enduring.
Legal remedies can be costly and ineffective, as it’s a fairly new arena of law and the behaviors involved have to rise to a pretty severe standard to be prosecuted under “cyberstalking” laws. Reporting the behavior to social media platforms is another option, but they’re often loath to react and sanction offenders. You can confront your troll, but that sometimes leads to an escalation of behavior. You can delete your accounts, but that’s not always an option for people who make their living online.
“There’s no one thing, one magic bullet that can help you,” Blackwell says. “But there are a lot of resource guides that have very specific strategies.”
PEN America released its Online Harassment Field Manual earlier this year. It includes information about cyber safety, tips for maintaining your well-being during troll attacks, legal rights, and testimonials from writers who have experienced it. Weckerle sees CiviliNation, which was incorporated in 2010 and provides resources for dealing with specific kinds of abuse, as a “clearinghouse” of information for victims of online harassment.
The past few years have also given rise to a bystander intervention movement. HeartMob, which launched in 2016, seeks to provide a support community for people who are feeling the effects of online harassment. It is a small, selective community of people approved by its moderators, with safeguards designed to keep out those applying for acceptance just to create havoc.
"How are you going to care for and nourish yourself today?" #SelfCareSunday
— HeartMob (@theheartmob) May 13, 2018
“If you have hundreds or thousands of people telling you these horrible things, that’s all you’re seeing; to be able to come to a space like HeartMob and have people say, ‘This is wrong. I’m so sorry for what you’re experiencing. This isn’t your fault,’ that’s very affirming,” Blackwell says. “The problem is, if I’m a bystander witnessing online harassment and I jump in to sanction that perpetrator, that opens me up to a similar world of abuse. We need to look at strategies that enable bystanders to intervene without putting themselves at risk.”
An important thing to consider is that anyone, under the right circumstances, can become a troll—or, worse, engage in harassing behavior online.
In one experiment, participants were shown a harassing tweet; some were told that its target had committed theft, while others saw the tweet with no context. Predictably, participants who were given the theft detail felt the harassing tweet was more deserved and more justified than participants who saw it without context. Blackwell says it goes back to the Western sense of “retributive justice”: an eye for an eye.
“That helps explain this subset of online harassment where there’s some sort of social sanctioning motivation,” Blackwell says.
You can see it in the sustained harassment endured by Justine Sacco after she posted—then deleted—her infamous 2013 tweet about AIDS in Africa. The sentiment expressed in the tweet was offensive, but did it justify throngs of people picking up the trolling tactics that have caused harm to so many? Did the ends justify the means?
It’s an age-old conundrum with a 21st-century twist.
“Humans have always sucked. Since the beginning of time, we’ve been nasty to each other, murdering each other, breaking each other’s hearts,” Blackwell says. “The problem is, we’re now seeing it at scale. These types of interactions are now visible.”