In his epic anti-A.I. work from the mid-1970s, “Computer Power and Human Reason,” [Joseph] Weizenbaum described the scene at computer labs. “Bright young men of disheveled appearance, often with sunken glowing eyes, can be seen sitting at computer consoles, their arms tensed and waiting to fire their fingers, already poised to strike, at the buttons and keys on which their attention seems to be as riveted as a gambler’s on the rolling dice,” he wrote. “They exist, at least when so engaged, only through and for the computers. These are computer bums, compulsive programmers.”

He was concerned about them as young students lacking perspective about life and was worried that these troubled souls could be our new leaders. Neither Mr. Weizenbaum nor Mr. McCarthy mentioned, though it was hard to miss, that this ascendant generation were nearly all white men with a strong preference for people just like themselves. In a word, they were incorrigible, accustomed to total control of what appeared on their screens. “No playwright, no stage director, no emperor, however powerful,” Mr. Weizenbaum wrote, “has ever exercised such absolute authority to arrange a stage or a field of battle and to command such unswervingly dutiful actors or troops.”

Welcome to Silicon Valley, 2017.

Noam Cohen on techtatorships.

As an aside, Weizenbaum is the guy who, in the mid-1960s, wrote ELIZA, one of the first chatbots. ELIZA was primitive—it basically just echoed parts of what people typed back at them—but people’s reactions to it disturbed Weizenbaum so much that he kinda dropped out of the whole AI field and went on to be something of a tech doomsayer. Weizenbaum’s 1976 book, Computer Power and Human Reason: From Judgment to Calculation, is one of the ur-texts that attempt to lay out a pseudo-secular argument that humans have some inherent quality that will always hold us apart—or, rather, should always hold us apart—from AIs. The religious version of this argument—the same one that’s been used for millennia, albeit usually against animals—is that we have souls. Weizenbaum’s is a more nebulous distinction between “decisions,” which can be made programmatically, and “choices,” which can only be made by humans. If you want to be kind, you can phrase this as a variant of a free-will argument (although I’d argue that you can’t ascribe free will to humans but not to AIs without some kind of religious framework underneath). If you’re being unkind, you’d say that Weizenbaum is essentially arguing that human “choice”/judgment is unavailable to AIs because it’s fundamentally irrational.

Either way, the potential emergence of AI is the next big challenge to the notion of human exceptionalism. Arguably even more so than, say, potential contact with sentient aliens. You can handwave aliens away with most of the same arguments about souls and free will and whatever that are currently applied to humans—maybe God made multiple planets and just didn’t tell anyone about it, in the same way He apparently “forgot” to tell everyone about dinosaurs—but it gets a bit trickier to graft religio-metaphysical constructs onto machines, if only because most religious frameworks were laid out before anyone thought “independently intelligent machines” could be A Thing.1 In other words, aliens could’ve been made by God, but AIs came from the hand of Man. What, then, is the moral framework under which we treat our creations? Or, more importantly, what is the framework under which they treat us?

I think it’s not a coincidence that most AI stories in sci-fi are basically framed as slave revolts in chrome dressing. We know we’re going to fuck this one up when its time comes.

Which is, yanno. All a total aside from the quote/article, but… Tl;dr, be nice to Siri, everyone.

  1. Which isn’t to say I think human religions won’t adapt to AIs if and/or when “true” AI emerges. Of course they will.