Back in the early days of the web, the Guardian ran a brilliant ad which asked “Ever wondered how every day there’s just enough news to fit in the newspaper?” It was advertising the Guardian website, and the fact there was more there than you’d find in the paper.
Now? There are a gazillion websites – but tons of them are simple copies, monetised by adverts from Google or whoever, which leech from the originating sites by copying their content. We’ve now established the limits of how much news is generated each day: it’s more than fits in newspapers, but less than fits on all the websites currently dedicated to “news”.
Charles Arthur on volume.
The whole post, which is kind of old now (I’m still going through my backlog of links… from 2015), is mainly talking about tracking and advertising online.
Incidentally, as an aside, one of the things I almost never see mentioned in any lament on the rise and rise of online surveillance is the contribution from the parallel rise of HTTPS.
Yeah, you heard me.
Here’s the thing. Back in Ye Oldene Dayes of the internet, you didn’t need to follow everyone around the internet to find out where they were coming from to reach your site. You knew, because whenever they hit up one of your pages1 their browser sent along a little header saying where they’d come from, a.k.a. the referrer (spelled “Referer” in HTTP itself, thanks to a typo that stuck).
The thing about HTTPS is that one of the “privacy”2 features it brings is that, by default, browsers don’t send the referrer when you move from a secure (HTTPS) page to a plain HTTP one – and sites can set referrer policies that strip it even between secure pages. If you’ve run any kind of traditional tracking software on your website, e.g. Mint or Piwik or Jetpack, and have done so for a while, you’ll notice that they get less and less useful data every year. Referrers from blogs? Gone. Social media? Forget about it. Even most URL shorteners work by obfuscating the true source, meaning you might know someone came to your site from Twitter (t.co), but to find the actual originating Tweet you’re going to need to do a manual search or scrape an API.
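To make the mechanics concrete, here’s a minimal sketch of the kind of server-side referrer logging those old tools relied on, written as a plain WSGI-style function (the function name is mine, not from any real analytics package). The point is how little there is to it – and how often the header simply never arrives any more.

```python
# A minimal sketch of old-school referrer logging, assuming a WSGI-style
# environ dict. The function name is illustrative, not from a real tool.
from urllib.parse import urlsplit

def referrer_host(environ):
    """Return the hostname a visitor came from, or None if no Referer was sent."""
    # HTTP spells the header "Referer"; WSGI exposes it as HTTP_REFERER.
    ref = environ.get("HTTP_REFERER")
    if not ref:
        # Increasingly the common case: HTTPS-to-HTTP moves and strict
        # Referrer-Policy settings mean the header is simply absent.
        return None
    return urlsplit(ref).hostname

# An old-style link from a blog still shows up in full:
print(referrer_host({"HTTP_REFERER": "https://example-blog.net/post/42"}))
# A shortened link only reveals the shortener, not the originating Tweet:
print(referrer_host({"HTTP_REFERER": "https://t.co/abc123"}))
```

Note that the second case is exactly the t.co problem above: the log tells you “Twitter”, but never which Tweet.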
See, back in Ye Oldene Dayes, individual webmasters used to be able to assemble reasonably good profiles of their website’s users: who was linking to them, who were the repeat visitors, and so on. It went both ways, too; bloggers and website owners got to know each other and built their communities around the referrer log. Nowadays, if you want that information? You’re going to have to buy it from one of the Internet Surveillance Megacorps (and it’ll cost you). In other words, the web has moved from “small town/nosy neighbor surveillance” to the capitalist Big Brother variety. Various social media sites will pretend to give some of this community back to their users – think things like Tumblr reblogs – with the key emphasis being on keeping the community on their platform (and, thus, marketable to their advertisers).
Ranting about things like this is one of the hallmarks that makes me old, I know.