The AI Safety Summit
On November 1-2, a hundred or so world leaders and tech executives will meet outside of London to discuss Artificial Intelligence, the future, and how we won’t have one if we don’t get ahead of AI.
To instill a sense of history and context, the meeting is being held at the famous Bletchley Park, where SIS gathered enough eggheads to break the German Enigma codes during World War II. The architect who built the house out to its present dimensions called it a “maudlin and monstrous pile”, whatever that tells us about the place. At any rate, it is the birthplace of the world’s first programmable digital electronic computer, named Colossus – which did not fit in the pocket of your skinny jeans. The advantage the Bletchley Park crew had over those attending this week’s AI Safety Summit is that they actually had a clear understanding of the problem that needed solving.
The launch of Large Language Models like ChatGPT has creeped everyone out, but I’d argue that the AI that “thinks” like people is emotionally creepy but not the real problem. The AI that “thinks” like no human ever has before is the catastrophic danger – that’s the AI that’s going to tell some damned fool how to make a super-virus to combat climate change by killing off civilization.
Either way, Brussels is trying to enact sweeping EU regulation on AI by the end of the year so that it can position itself as the setter of global digital standards. The White House, for its part, is offering up an executive order on the issue, because it wants to avoid “the Brussels effect” where the world follows Europe’s lead. Neither Washington nor Brussels wants to be outdone by Beijing, which isn’t going to follow Brussels anyway, as it’s already implemented its own “Global AI Governance Initiative.”
Everyone agreeing that we need to put a leash on this monster actually solves very little. Large Language Models (LLMs), along with other AI tech, are broadly portable and global, so for any of the measures to work, there must be something close to global buy-in. There isn’t. Nor is there any good reason to believe that, in a world of paranoid deglobalization and reshoring, where most players are going to be losers in one way or another, anyone is going to work in concert with anyone else.
The twist is that up until now, Big Tech has resisted government regulation, and now it’s lobbying for it. The firms at the forefront of AI want some framework in place – something akin to “Please stop me before I kill again.” Or so they say. These are not people known for altruistic restraint or self-reflection; this is the same crew who unleashed social media on your children and still won’t admit that a relentless IV drip of social pressure, bullying, and pedophiles is no way to raise a kid. The motivation of AI firms is probably more practical than that. Part of it is competition; almost by definition, competition and innovation in the AI sphere is not going to be entirely human innovation. While Silicon Valley was pretty mellow about inventing apps to destroy everyone else’s jobs, it does look like the smartest guys in the room just managed to invent a wildly popular technology that will replace them.
The will to solve the problem will be there at Bletchley Park this week, but there is no consensus as to what the problem is. Compounding things is that whatever suite of horrors AI is about to unleash will be a very fast-moving target. It has taken 20 years and a generation of emotional cripples for governments to sort out that the lawlessness of the internet might be dangerous, and they still haven’t done anything. What are a bunch of regulators going to do about a technology that can think 1000x faster than any human, and then implement its ideas without help from us?
Less apocalyptically, the GameStop meme-stock fiasco was a mere headache when it was fueled by Gen Z turds trying to stick it to the kids in school who went into finance. Imagine if Big Blue got involved in that. Gary Gensler, Chair of the SEC, said an AI-engineered financial crisis was “nearly unavoidable” without swift intervention.
Without a doubt, AI is capable of creating a “maudlin and monstrous pile” out of whatever we give it. But what form that intervention should take, or how to implement it, is something of an enigma that we haven’t yet cracked.