Worms, spammers, pop-ups, and trolls: Why assholes never win.

So, did you know that researchers at Cornell University, in partnership with Google and Disqus, believe they've developed an algorithm that can auto-ban internet trolls?

In order to talk about this, I first want everyone to take a moment to contemplate pretty much any cyberpunk or near-future sci-fi that came out of the mid-to-late 90s. Pick one, any one. I can almost guarantee you that, whenever communications technology is mentioned, it will at least once be part of some kind of lament about a deluge of spam that's killing the internet/CyberScope/VirtualTube/whatever. Go back five to ten years earlier than that, and everyone is worried about self-replicating viruses, as in Pat Cadigan's 1991 novel, Synners. In the late nineties and early noughties, it was intrusive pop-up advertising. Hell, even Twilight–not exactly a techno-thriller–has a scene where Bella is deluged by pop-up windows the moment she opens her browser.

Hands up. When was the last time any of you actually saw a pop-up ad? Not those PitA interstitial lightbox things, but an actual, proper, for-really-reals pop-up window?

Viruses were the Hot New Thing in cyberpunk in the early 90s because in 1988 approximately ten percent of the then-Internet was taken down by something called the Morris worm. This was the world's first self-replicating worm to get loose "in the wild". Previously, viruses had to be introduced into a system, such as by a user putting in the wrong floppy disk1 and opening the wrong file, and they tended to stay on the system they were on. But you could catch, and spread, Morris just by being connected to the Internet. SFF authors of the time took this terrifying idea–the machines are becoming alive! they're replicating themselves without our intervention!–and ran with it.

Hands up time again: when was the last time you got a worm on your computer? Not a virus or a Trojan–something you specifically had to open and interact with–but something that infected you just by virtue of your computer being on? I remember mine: it was over a decade ago, in 2003, when I caught Blaster off the university LAN.

And spam. Again, hands up: when was the last time your inbox was so deluged by spam as to be unusable? Not your actual spam folder, but your inbox? Again, this was a thing of massive angst in the early 2000s, and it has to do with the rise of automation, the plummeting cost of computing power and bandwidth, and the explosive growth of the Internet. Basically, it got to the point where you could press a button and send a whole network of computers off trying to deliver an email to hundreds of thousands of randomly generated addresses, in the hope that one would reach the eyeballs of someone real. It's the coral-spawning method of cyber fraud, with all the burden of cleanup placed on the shoulders of end users. For a while there, everyone was predicting the death of email via spam. Like, that is literally a thing people were predicting a decade ago. Except, go read that Salon article again and see if you can pick the thing it doesn't mention.

Found it yet?

The article doesn't mention Gmail. It doesn't mention Gmail because Gmail launched as an invitation-only service in 2004, two years after the Salon article was written, and didn't open to general sign-up until 2007. Gmail is critical in the history of spam because it was the first big public email provider to really, seriously kick the shit out of the problem. Back In The Day, people would switch to the service from their Hotmail/ISP/self-hosted email accounts just to leverage Gmail's spam filter, which was miles ahead of everyone else's, and it was ahead because it was, in effect, one giant, user-driven learning engine. The same sorts of content algorithms that figure out what you really meant to search for when you type hpw gmsil killrd soam are the ones that keep your email inbox (mostly) usable.
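For the curious, the mechanics of that kind of user-driven learning engine look something like the toy sketch below. To be clear, this is a minimal illustration using scikit-learn's naive Bayes classifier, not Gmail's actual pipeline, and the sample messages and labels are invented.

```python
# Toy "mark as spam" learning loop: every user report becomes a training
# example, and the resulting model filters mail for everyone.
# Illustrative only; a real filter uses far richer features and far more data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical messages users have already flagged (1 = spam, 0 = not spam).
messages = [
    "CHEAP P1LLS click here now",
    "Lunch tomorrow?",
    "You have WON a FREE prize, claim immediately",
    "Minutes from yesterday's meeting attached",
]
labels = [1, 0, 1, 0]

vectoriser = CountVectorizer()
classifier = MultinomialNB()
classifier.fit(vectoriser.fit_transform(messages), labels)

# New mail gets scored against everything users have reported so far.
incoming = vectoriser.transform(["Claim your FREE prize now"])
print(classifier.predict(incoming))  # [1] -> straight to the spam folder
```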

Ditto for the death of pop-up ads. They annoyed the shit out of people, and they're a pain in the ass to deal with… for a human. But the code that spawns them is easy for a computer to detect with a few lines of regex. For a while there, every software dev and her cat was writing desktop proxies to strip pop-ups. Later, these moved to in-browser extensions, until finally pop-up blocking was integrated natively into web browsers. Nowadays, most browsers will ask you whether you really meant to spawn a new window whenever one tries to open in a pop-up-style context. As such, pop-ups have all but disappeared, which is good news for Bella Swan the next time she needs to research supernatural entities2 online.
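If you're wondering just how crude that detection could afford to be, something in the spirit of those old desktop proxies might look like the sketch below. It's illustrative only (the pattern assumes the classic inline window.open() trick), and real blockers did, and browsers now do, a great deal more than one regular expression.

```python
import re

# The classic pop-up spawner: a window.open() call, usually buried in an
# inline onload/onclick handler. One pattern catches an awful lot of 1999.
POPUP_RE = re.compile(r"window\.open\s*\(", re.IGNORECASE)

def looks_like_popup(markup: str) -> bool:
    """Return True if the markup contains an obvious pop-up-spawning call."""
    return bool(POPUP_RE.search(markup))

print(looks_like_popup('<body onload="window.open(\'ads.html\')">'))        # True
print(looks_like_popup('<a href="page.html">a perfectly normal link</a>'))  # False
```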

So it is with self-replicating worms. Antivirus technologies certainly have their flaws, but the one thing they are exceptionally good at is protecting today's users from yesterday's problems. Again, they do this with algorithms and pattern matching, based on past learning of what "good" and "bad" code looks like.
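Mechanically, the pattern-matching half of that is as simple as checking files against a list of known-bad signatures. Here's a minimal sketch; the hash below is a placeholder rather than a real malware signature, and real engines layer byte-pattern and behavioural heuristics on top.

```python
import hashlib

# Toy signature database: hashes of files already known to be malicious.
# The principle is "match today's files against yesterday's known-bad list".
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def is_known_malware(path: str) -> bool:
    """Hash a file and check it against the signature database."""
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    return digest in KNOWN_BAD_SHA256
```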

Worms, spam, and pop-ups. The thing these have in common is that they're all past scourges of the internet, all of which have been, if not defeated, then at least tamed by the same method.

That method? Better algorithms.3

So here's a prediction for you: the next Scourge of the Internet is going to be remembered as trolling. It's another one of those problems that's been around forever, but it seems to be getting worse (or at least getting more attention), to the point where "don't read the comments" has become its own meme, and turning comments off has become trendy for individual bloggers and large sites alike. Trolls destroy communities and they destroy lives. If I were writing a cyberpunk-esque book right now, predicting a dystopian near-future internet, it would be one where online spaces resembled nothing so much as a post-apocalyptic zombie wasteland, with pockets of civilisation huddled in the dark, defending against the sealioning GamerGate onslaught outside.4

And yet… in some ways, I think this is already becoming yesterday's problem. Tolerance for trolling is plummeting even as–or because of the fact that–the aggression of trolls is skyrocketing. The narrative is slowly shifting away from "it's only the internet" and towards "the internet is life". Online gaming and social media companies, long guilty of turning a blind eye to their services being used for threatening purposes, are starting (slowly) to crack down as they realise that fostering toxic atmospheres actually, surprise surprise, loses them business.

When the social climate changes, so too does the technology. If worms didn’t destroy computer systems, we’d be swimming in them.5 If pop-ups and spam didn’t aggravate people, they’d be the only things we saw. So too will it be with trolling. Already, both companies and individuals are developing innovative technical methods of curbing trollish and harassing behaviour. The backlash, from people who apparently believe they have some kind of inalienable right to be assholes, has been severe.

The terrible irony here is that the more the trolls come out of the woodwork–the more real-world harm they cause–the more likely they are to lose out, long term. It's really hard to shrug trolling off as "just the internet" when people's lives are in danger. Laws and law enforcement are slow to catch up, but catch up they will.

But I also think there's a growing awareness that online trolling and harassment is a problem that needs to be addressed at the root, long before it gets to the stage where federal authorities are investigating incidents of terrorism masquerading as "pranks". If you run an online space–from the smallest blog to the largest social media site–and that space is toxic, then it's your fault. Weeding out individual trolls is emotionally taxing and incredibly time-consuming… but so was sorting spam back in 2001. Nowadays, all it takes is typing in an Akismet key, installing a reCAPTCHA plugin, or clicking "mark as Spam".

The systems have caught up. So too will they for trolls.

So this is my claim chowder of the day. I predict that, in the next few years, you will be able to sign up for services that block trollish and harassing messages based on heuristic learning algorithms. Services like Block Together are the first step. These tools will slowly migrate from being stand-alone, manual systems to being adopted by the major platforms: Google, Facebook, Twitter, and so on. The data inputs from the huge sites will fuel learning across the board, in the same way spam and antivirus heuristics work now.
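Mechanically, those first-generation tools amount to something like the sketch below: a shared block list plus a couple of blunt heuristics. This is a hypothetical illustration; the Account fields and the thresholds are my own stand-ins, not Block Together's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    age_days: int
    followers: int

# Blunt, hypothetical heuristics in the Block Together mould: brand-new,
# near-empty accounts piling into your mentions are very often burners.
MIN_ACCOUNT_AGE_DAYS = 7
MIN_FOLLOWERS = 15

def should_auto_block(account: Account, shared_blocklist: set) -> bool:
    """Block if someone you trust already blocked them, or if they trip the heuristics."""
    if account.handle in shared_blocklist:
        return True
    return account.age_days < MIN_ACCOUNT_AGE_DAYS and account.followers < MIN_FOLLOWERS

# Example: a day-old account with three followers gets filtered automatically.
print(should_auto_block(Account("egg48151623", age_days=1, followers=3), set()))  # True
```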

By 2025, I predict that finding a “get raped and die cunt” message on any major social media or blogging platform will be as archaic and quaint as finding a spam comment is in 2015. Online spaces devoid of this sort of discourse will be the norm in the same way offline spaces where people don’t routinely spit on each other over minor disagreements are the norm. Not only that, but legal instruments will catch up to the point where launching a sustained harassment campaign will be prosecuted as routinely as any in-person stalking charge.6

In short, the future won’t be perfect, but it’ll be… better. First comes social change, then technical controls, then the law. It’s happened before. It’s happened before on the Internet, even; shocking, I know.

We’ve got a long way to go, true. But I think the future is bright. And I, for one, welcome it.

  1. That was what we used to call USB thumbdrives Back In The Dark Ages, kids. ^
  2. Or parenting. Or the parenting of supernatural entities. ^
  3. Also, just so you know, before spellcheck gets to it, that word tends to come out of my keyboard looking something like “algorhythmns”. Needless to say, that’s made writing this post “fun”. Fuck you, whole word reading method. ^
  4. Actually… hey. That could totally work. First dibs! Editors, email my agent! ^
  5. And, actually, we are swimming in the ones that don’t. They get called by euphemisms like “APTs”, and they’re used in cyber-espionage. They are incredibly difficult to detect, mostly because their primary objective is not to be detected, which means being “light touch” on the systems they infest. ^
  6. Which is to say, not perfectly, or even particularly well. But better than the current status quo of “why don’t you just turn off the computer?”. ^
12th May, 2015 | Tags: culture, cw: harassment, harassment, tech, xp
