What is harassment if not emotional malware?
In the intervening years, the term “troll” has come to subsume all kinds of antagonistic online behaviors, regardless of whether the participants would describe themselves as trolls. I am wary of this new framing […] and whenever possible avoid using the term as a behavioral catch-all. Instead, I prefer to describe online antagonism in terms of the impact it has on its targets. So, if someone is engaging in violently misogynistic behavior, I call them a violent misogynist, as “troll” implies a level of playfulness that tends to minimize their antagonistic behaviors, or at least establish a firewall between the embodied person and their digitally mediated actions. (“I’m not really a racist, I just play one on the Internet” doesn’t account for the fact that, regardless of what might be in someone’s heart, his or her actions have a real and demonstrable impact on those forced to read yet another racist statement online.)
–Whitney Phillips on naming.
[T]here is a real sense of threat felt by readers and writers which wasn’t there before. Stalking, harassment, and abuse are more prevalent than they used to be, and while most of us haven’t faced threats that rise to the level found in other communities, ours are bad enough. It completely undercuts the idea of [the Romance community] as a refuge from the rest of the world, or a place where we can pursue and share our common interests apart from the fraught issues we deal with in the rest of our lives. And the big problem is that these don’t feel like isolated events. The [Kathleen] Hale episode would be easier to dismiss if we didn’t have other, less violent examples of romland people going after each other prior to and subsequent to it. There is little sense that we can speak freely in Romland space. Everything is public and every potential breach is screenshotted by someone. It’s worse than always speaking in public, really; it’s more like speaking in public with an endless, never-erased video running while you do it. No wonder blog posting and commenting are down and Romance Twitter is more anodyne. Who wants the grief that follows even a minor fuckup?
–Sunita on surveillance fandom.
Sunita is talking about one specific community/fandom, but this is applicable pretty much everywhere.
I’ve witnessed multiple fandom communities, for example, where individuals brag about monitoring the blogs and Tumblrs and Twitters of people they don’t like, with the express purpose of submitting anything potentially “wanky” to services like the Wayback Machine and FreezePage. I’m never quite sure how I feel about behaviour like this. There is certainly a technique wherein trolls post and then almost immediately delete harassing material in order to “cover their tracks” and avoid suspension under ToS clauses; this is a notable tactic of GamerGate on Twitter, for example. In those situations, yeah. Screenshotting is useful. But this is often explicitly not what’s going on in fannish spaces. Targets of obsessive screenshotting in fandom are often not harassing individuals so much as being kind of… generally obnoxious according to the standards of some other individual or community. But their words are monitored and recorded and reported on all the same. Moreover, this behaviour seems to be targeted more often than not at women of colour, particularly women of colour who speak out about social justice.
This is a really toxic space for a community to get into, I think. We all say inadvisable or ill-thought-out things sometimes. We phrase things wrong and regret them later, or just change our opinions based on new inputs. And yet, how can people have the room to grow and move on–emotionally and intellectually–when they’re under constant threat and lockdown over people repeating and re-repeating things they’ve said in the past?
If you think you have any kind of pat simple answer to this (“They should just apologise!” “They should just stop saying wanky things!”) then I would bet you have not actually been on the receiving end of this behaviour, and thus should probably shut the fuck up.
This is car-crash fandom, and it’s addictive. And it’s hardly something that’s limited to fandom or online communities, either; the entire tabloid industry is based on it, for example. The only thing the internet brings to the table is the ability for anyone, anywhere, at any time to become the victim of the online paparazzi.
So, did you know that researchers at Cornell University, in partnership with Google and Disqus, believe they’ve developed an algorithm that can auto-ban internet trolls?
In order to talk about this, I first want everyone to take a moment to contemplate pretty much any cyberpunk or near-future sci-fi that came out of the mid-to-late 90s. Pick one, any one. I can almost guarantee you that, whenever communications technology is mentioned, it will at least once be part of some kind of lament about a deluge of spam that’s killing the internet/CyberScope/VirtualTube/whatever. Go back five to ten years earlier than that, and everyone is worried about self-replicating viruses, as in Pat Cadigan’s 1992 novel, Synners. In the late nineties and early noughties, it was intrusive popup advertising. Hell, even Twilight–not exactly a techno-thriller–has a scene where Bella is deluged by popup windows the moment she opens her browser.
Hands up. When was the last time any of you actually saw a pop-up ad? Not those PitA interstitial lightbox things, but an actual, proper, for really reals popup window?
Viruses were the Hot New Thing in cyberpunk in the early 90s because in 1988 approximately ten percent of the then-Internet was taken down by something called the Morris worm. This was the world’s first “wild” self-replicating virus. Previously, viruses had to be introduced into a system, such as by a user putting in the wrong floppy disk[1] and opening the wrong file, and they tended to stay on the system they were on. But you could catch, and spread, Morris just by being connected to the Internet. SFF authors of the time took this terrifying idea–the machines are becoming alive! they’re replicating themselves without our intervention!–and ran with it.
Hands up time again: when was the last time you got a worm on your computer? Not a virus or a Trojan–something you specifically had to open and interact with–but something that infected you just by virtue of your computer being on? I remember mine: it was over a decade ago, in 2003, when I caught Blaster off the university LAN.
And spam. Again, hands up: when was the last time your inbox was so deluged by spam as to be unusable? Not your actual spam folder, but your inbox? Again, this was a thing of massive angst in the early 2000s, and it has to do with the rise of automation, the drop in the cost of computing power and bandwidth, and the growth of the Internet. Basically, it got to the point where you could press a button and send a whole network of computers off trying to send an email to hundreds of thousands of randomly generated addresses, in the hope that one would reach the eyeballs of someone real. It’s the coral spawning method of cyber fraud, with all the burden of cleanup placed on the shoulders of end users. For a while there, everyone was predicting the death of email via spam. Like, that is literally a thing people were predicting a decade ago. Except, go read that Salon article again and see if you can pick the thing it doesn’t mention.
Found it yet?
The article doesn’t mention Gmail. It doesn’t mention Gmail because Gmail launched as an invitation-only service in 2004, two years after the Salon article was written. Gmail wasn’t open sign-up until 2007. Gmail is critical in the history of spam because Gmail was the first big public email provider to really seriously kick the shit out of the problem. Back In The Day, people would switch to the service from their Hotmail/ISP/self-hosted email accounts just to leverage Gmail’s spam filter, which was miles ahead of everyone else’s, and the reason it was ahead was because it was, in effect, one giant, user-driven learning engine. The same sorts of content algorithms that figure out what you really meant to search for when you type “hpw gmsil killrd soam” are the ones that keep your email inbox (mostly) usable.
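If you’re wondering what a “user-driven learning engine” actually looks like under the hood, the classic textbook version is a naive Bayes classifier: every “mark as spam” click updates word counts, and future mail gets scored against them. This is my own toy illustration of the idea, not Gmail’s actual pipeline (which is vastly more sophisticated):

```python
from collections import Counter
import math

class NaiveBayesSpamFilter:
    """Toy spam filter: learns word frequencies from user labels."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}

    def train(self, text, label):
        # Every "mark as spam" (or "not spam") click is a training signal.
        for word in text.lower().split():
            self.counts[label][word] += 1

    def spam_score(self, text):
        # Log-probability ratio with add-one smoothing.
        spam_total = sum(self.counts["spam"].values())
        ham_total = sum(self.counts["ham"].values())
        score = 0.0
        for word in text.lower().split():
            p_spam = (self.counts["spam"][word] + 1) / (spam_total + 1)
            p_ham = (self.counts["ham"][word] + 1) / (ham_total + 1)
            score += math.log(p_spam / p_ham)
        return score  # > 0 leans spam, < 0 leans ham

spam_filter = NaiveBayesSpamFilter()
spam_filter.train("cheap pills buy now", "spam")
spam_filter.train("meeting notes attached", "ham")
print(spam_filter.spam_score("buy cheap pills") > 0)  # True
```

The point being: no single user has to sort much spam. Millions of users each clicking one button does the work for everyone.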
Ditto for the death of pop-up ads. They annoyed the shit out of people, and are a pain in the ass to deal with… for a human. But the code that spawns them is easy for a computer to detect with a few lines of regex. For a while there, every software dev and her cat were writing desktop proxies to strip popups. Later, these moved to in-browser extensions, until finally popup-blocking options were integrated natively into web browsers. Nowadays, most browsers will ask you whether you really meant to spawn a new window whenever one tries to open in a pop-up style context. As such, pop-ups have all but disappeared, which is good news for Bella Swan the next time she needs to research supernatural entities[2] online.
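And “a few lines of regex” isn’t an exaggeration. The early proxy-based blockers really did work by pattern-matching page markup; the exact pattern below is my own illustration, not what any real blocker shipped, but it’s the right order of magnitude:

```python
import re

# Crude popup detector: flags script text that calls window.open().
# Modern browsers hook the API itself rather than scanning text, but
# the proxy-era blockers worked on pattern matches much like this one.
POPUP_PATTERN = re.compile(r"window\.open\s*\(", re.IGNORECASE)

def looks_like_popup(script_text):
    return bool(POPUP_PATTERN.search(script_text))

print(looks_like_popup("window.open('http://ads.example', '_blank')"))  # True
print(looks_like_popup("document.getElementById('main')"))              # False
```

Tedious for a human, trivial for a machine. That asymmetry is the whole story of this post.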
So it is with self-replicating worms. Antivirus technologies certainly have their flaws, but the one thing they are exceptionally good at is protecting today’s users from yesterday’s problems. Again, they do this with algorithms and pattern matching based on past learning of what is “good” and what is “bad” code.
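At its very simplest, that pattern matching is just comparing file fingerprints against a known-bad list. Real engines layer code-pattern signatures and behavioural heuristics on top, but a stripped-down sketch (with an obviously made-up “signature”) looks like:

```python
import hashlib

# Toy signature scanner: flags files whose hash matches a known-bad list.
# The "signature" here is invented for illustration; real AV databases
# hold millions of them, distilled from yesterday's outbreaks.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious payload bytes").hexdigest(),
}

def is_known_malware(file_bytes):
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

print(is_known_malware(b"malicious payload bytes"))  # True
print(is_known_malware(b"harmless document"))        # False
```

Which is exactly why it’s so good at yesterday’s problems and so mediocre at tomorrow’s: the signature has to exist before it can be matched.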
Worms, spam, and pop-ups. The thing these have in common is they’re all past scourges of the internet, all of which have been, if not defeated, then at least tamed by the same method.
That method? Better algorithms.[3]
So here’s a prediction for you: the next Scourge of the Internet is going to be remembered as trolling. It’s another one of those problems that’s been around forever, but seems to be getting worse (or at least more attention), to the point where “don’t read the comments” has become its own meme, and turning comments off has become trendy for individual bloggers and large sites alike. Trolls destroy communities and they destroy lives. If I were writing a cyberpunk-esque book right now, predicting a dystopian near-future internet, it would be one where online spaces resembled nothing so much as a post-apocalyptic zombie wasteland, with pockets of civilisation huddled in the dark, defending against the sealioning GamerGate onslaught outside.[4]
And yet… in some ways, I think this is already becoming yesterday’s problem. Tolerance for trolling is plummeting even as–or because of the fact that–the aggression of trolls is skyrocketing. The narrative is slowly shifting away from “it’s only the internet” and into “the internet is life”. Online gaming and social media companies, long guilty of turning a blind eye to their services being used for threatening purposes, are starting (slowly) to crack down as they realise fostering toxic atmospheres actually, surprise surprise, loses them business.
When the social climate changes, so too does the technology. If worms didn’t destroy computer systems, we’d be swimming in them.[5] If pop-ups and spam didn’t aggravate people, they’d be the only things we saw. So too will it be with trolling. Already, both companies and individuals are developing innovative technical methods of curbing trollish and harassing behaviour. The backlash, from people who apparently believe they have some kind of inalienable right to be assholes, has been severe.
The terrible irony here is that the more the trolls come out of the woodwork–the more real-world harm they cause–the more likely they are to lose out, long term. It’s really hard to shrug trolling off as “just the internet” when people’s lives are in danger. Laws and law enforcement are slow to catch up, but catch up they will.
But I also think there’s a growing awareness that online trolling and harassment is a problem that needs to be addressed at the root, long before it gets to the stage where federal authorities are investigating incidents of terrorism masquerading as “pranks”. If you run an online space–from the smallest blog to the largest social media site–and that space is toxic, then it’s your fault. Weeding out individual trolls is emotionally taxing and incredibly time consuming… but so was sorting spam back in 2001. Nowadays, all it takes is typing an Akismet key or installing a reCAPTCHA plugin or clicking “mark as Spam”.
The systems have caught up. So too will they for trolls.
So this is my claim chowder of the day. I predict that, in the next few years, you will be able to sign up for services that block trollish and harassing messages based on heuristic learning algorithms. Services like Block Together are the first step. These tools will slowly migrate from being stand-alone, manual systems into being adopted by major platforms: Google, Facebook, Twitter, and so on. The data inputs from the huge sites will fuel learning across the board, in the same way spam and antivirus heuristics work now.
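To be concrete about the “first step”: the core mechanic of a Block Together-style service is just subscribable, shared blocklists, so one person’s curation protects everyone subscribed. The sketch below is my own guess at the shape of such a thing; the names are hypothetical and this is not Block Together’s actual API:

```python
# Sketch of a shared-blocklist service in the spirit of Block Together:
# users subscribe to curated lists, and new blocks propagate to every
# subscriber automatically. All names here are hypothetical.

class BlocklistService:
    def __init__(self):
        self.lists = {}          # list name -> set of blocked accounts
        self.subscriptions = {}  # user -> set of subscribed list names

    def add_to_list(self, list_name, account):
        self.lists.setdefault(list_name, set()).add(account)

    def subscribe(self, user, list_name):
        self.subscriptions.setdefault(user, set()).add(list_name)

    def is_blocked_for(self, user, account):
        # An account is blocked if any list the user subscribes to contains it.
        return any(
            account in self.lists.get(name, set())
            for name in self.subscriptions.get(user, set())
        )

svc = BlocklistService()
svc.add_to_list("known-harassers", "@troll123")
svc.subscribe("@alice", "known-harassers")
print(svc.is_blocked_for("@alice", "@troll123"))  # True
```

Swap the manually curated list for a learning classifier trained on platform-scale report data, and you have the prediction above.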
By 2025, I predict that finding a “get raped and die cunt” message on any major social media or blogging platform will be as archaic and quaint as finding a spam comment is in 2015. Online spaces devoid of this sort of discourse will be the norm in the same way offline spaces where people don’t routinely spit on each other over minor disagreements are the norm. Not only that, but legal instruments will catch up to the point where launching a sustained harassment campaign will be prosecuted as routinely as any in-person stalking charge.[6]
In short, the future won’t be perfect, but it’ll be… better. First comes social change, then technical controls, then the law. It’s happened before. It’s happened before on the Internet, even; shocking, I know.
We’ve got a long way to go, true. But I think the future is bright. And I, for one, welcome it.
1. That was what we used to call USB thumbdrives Back In The Dark Ages, kids. ↩
2. Or parenting. Or the parenting of supernatural entities. ↩
3. Also, just so you know, before spellcheck gets to it, that word tends to come out of my keyboard looking something like “algorhythmns”. Needless to say, that’s made writing this post “fun”. Fuck you, whole word reading method. ↩
4. Actually… hey. That could totally work. First dibs! Editors, email my agent! ↩
5. And, actually, we are swimming in the ones that don’t. They get called by euphemisms like “APTs”, and they’re used in cyber-espionage. They are incredibly difficult to detect, mostly because their primary objective is not to be detected, which means being “light touch” on the systems they infest. ↩
6. Which is to say, not perfectly, or even particularly well. But better than the current status quo of “why don’t you just turn off the computer?”. ↩
Twitter is responding to this problem [trolling] because the targets of Twitter harassment and abuse are talking about their experiences publicly.
The “don’t feed the trolls” approach that is so often advocated by those who try to minimize and/or excuse the harassment does not in fact work; indeed, “not feeding” trolls encourages them, by making clear they will face no repercussions for their abusive behavior.
“Don’t feed the trolls” FEEDS THE TROLLS.
–David Futrelle feeds the trolls.
Back when I was a wee tyke on the Interblargs, “don’t feed the trolls” meant don’t continue to argue with trolls. But this was in the context of things like mailing lists and message boards, i.e. spaces that were already public, where “trolling” conversations were a kind of spam that could block up the entire community’s discussion.
In other words, Don’t Feed The Trolls came from a time when everyone already knew exactly who all the trolls were, and how they operated. It was designed to shut them up. Also, the conversations were less “I hope you get raped and die, cunt” and more like the angry Star Wars fan ranting on
Somewhere along the line, however, it morphed into “don’t talk about the trolls”. Well, fuck that shit.
Talk about the trolls. If there’s one thing 2014 has taught us, be it GamerGate or Requires Hate, it’s that you need to talk about the goddamn trolls.
All of these are things I’ve seen happen within the last week. There are common threads here. Lots of them. I’m not sure I’m smart enough to tie them all together.
Note that, in the below summaries, I’ve intentionally removed most racial and gendered markers for the individuals involved. However I will say right now that race and gender are not unrelated to any of the stuff described below. Not even
There’s also a general content warning, particularly at the links, for sexual assault, stalking, harassment, violent imagery, slurs, and just about everything else. These include journalistic retellings, survivor stories, and, in at least one instance, a first-hand account of committing what has been construed as stalking by many, including yours truly.
Scenario #1: Video games
A handful of individuals begin to create content pushing back against normative tropes in the video game industry. This content, ranging from critical essays to alternate games, results in an organised “counter-push”, ostensibly cloaked under the mantle of fostering “ethics and transparency” in the area of video game journalism, specifically with regards to product reviews. In actual fact, very few actual game reviewers are targeted, and instead producers of progressive and critical content become the focus of a long-term campaign of harassment, doxxing, and death and rape threats, up-to-and-including the threat of a mass school shooting at a public talk given by one of said targets.
Reactions to the occurrences have been noted to split down political lines, with cultural conservatives supporting the “ethics in games journalism” position, while progressives decry it as a harassment campaign. Industry reactions have been more muted and/or slow in coming, mostly focusing on general statements regarding the unacceptability of harassment. Some critics have noted that the sort of harassment now getting attention has routinely occurred within the industry for years, largely unacknowledged, and wonder whether it will take an actual death for the industry to truly take it seriously.
Scenario #2: The SFF author
A talented up-and-coming SFF author is well-regarded in the community for her friendly, approachable personality and innovative stories that push back against conservative genre tropes. One of the author’s editors “outs” her on social media as being the same individual as a book reviewer known not just for her cutting anti-racist commentary but controversial violent statements directed at other authors. Others note the link between this reviewer and a pseudonymous LiveJournal user known in fandom communities for over a decade, and implicated in a long history of violent threats and harassment against other community members. Alleged former victims begin to come forward with stories. Many professional supporters of the author at the centre of these accusations realise they and/or their friends have been the targets of threats, criticism, or harassment originating from the book reviewer and/or LiveJournal personas.
Reaction to the allegations is mixed. Defenders of the author point out harassing and obnoxious behaviour occurs regularly in the SFF community, with much more prominent names receiving far less criticism for far worse actions. Critics reject this, pointing out similar past incidents as well as decrying the implied stance of “it’s okay because everyone does it”. Criticism of the editor’s actions is largely universal, though muted in the face of discussion around other allegations against the author.
Scenario #3: The book reviewer
A debut YA author’s novel receives a negative review on a social book reviewing site. This review is critical of content such as the ridiculing/minimisation of PTSD, domestic violence, and rape, as well as certain dialogue perceived as racist. The author becomes obsessed with the reviewer, compulsively following said reviewer’s social media presences. Eventually, the author begins to suspect the reviewer may be using a pseudonym to write reviews, and undertakes a campaign of attempting to identify the reviewer’s real-life identity. The author employs a number of techniques including combing social media accounts of friends and family of the suspected reviewer; buying the suspected reviewer’s personal details from an information broker; and exploiting contacts at publishing houses for information about the suspected reviewer. The episode results in the author phoning an individual suspected of being the reviewer with interrogative questions about the review, and finally turning up on the suspected individual’s doorstep. The author then chronicles the entire episode in a piece published by a major international news outlet.
Reaction to the piece from book reviewers is uniformly negative, condemning the author’s actions as stalking. From authors, however, it is mixed, with some agreeing with the stalking assessment, while others commend the author for attempting to track down the source of what is described as “anonymous libel”.
So… yeah. All that happened (is happening, will continue to happen).
Like I said, I don’t have a coherent narrative about any of it other than a general sense of each of these three incidents being linked by some combustible combination of race, gender, politics, culture wars, the internet, doxxing, stalking, harassment, anger, and privilege.
Things happen, lives crumble, the beat goes on.
Same as it ever was.
Imani Gandy (a.k.a. @AngryBlackLady) on Twitter’s refusal to deal with harassment on its service, and some of the manual pushback she’s been trying.
As I mentioned when I first saw this–ironically, on Twitter–reading this kind of made me want to program a simple app that auto-forwards abusive Tweets to @support. It wouldn’t be difficult to do, and if Twitter won’t implement an appropriate “flag as inappropriate” function…
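For what it’s worth, the app I had in mind would be something like the sketch below. The keyword list is a crude stand-in for any real abuse classifier, and `send_report` is a placeholder for whatever the Twitter API call of the day would be (with all the authentication that entails); none of this is a real client:

```python
# Sketch of the auto-forwarder. ABUSE_KEYWORDS is a deliberately crude
# stand-in for a real classifier, and send_report is a placeholder for
# an actual (authenticated) Twitter API call.
ABUSE_KEYWORDS = {"kill yourself", "get raped", "die"}  # illustrative only

def is_abusive(tweet_text):
    text = tweet_text.lower()
    return any(phrase in text for phrase in ABUSE_KEYWORDS)

def forward_abuse(tweets, send_report):
    for tweet in tweets:
        if is_abusive(tweet["text"]):
            send_report(f".@support abusive tweet from {tweet['user']}: {tweet['text']}")

reports = []
forward_abuse(
    [{"user": "@troll", "text": "go die"}, {"user": "@friend", "text": "nice post"}],
    reports.append,
)
print(len(reports))  # 1
```

Twenty lines of filtering, and @support gets to experience the firehose its users already live with.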
Of course, the general consensus at the time was that the only result of such a service would be Twitter blocking the API calls of the service itself, not actually doing anything to address the problem.
Trust issues. Twitter has them.