In 2022, says Ezra Klein in The New York Times, AI experts were asked how likely it was that people would lose control of future advanced AI systems, causing “human extinction or similarly permanent and severe disempowerment of the human species”. The median reply was 10%. It may be “hard to fathom” why so many bright sparks are slogging away to create something they think has a 10% chance of wiping out humanity. But I regularly spend time with these geeks, and “I don’t know that I can convey just how weird that culture is”. It is a community living with an “altered sense of time and consequence”. They are “creating a power they do not understand at a pace they often cannot believe”.
We tend to reach for science fiction when we’re trying to understand this stuff, but I’ve come to believe that more apt metaphors “lurk in fantasy novels and occult texts”. AI folk aren’t so much inventing new tech as “summoning” it. When pushed on why they carry on, given that calamity is so likely, many offer what sounds like “the AI’s perspective” – they feel a responsibility to “usher this new form of intelligence into the world”. The coders casting these spells “have no idea what will stumble through the portal”. And the oddest part is that they speak of this freely. These aren’t “naïfs who believe their call can be heard only by angels”. They believe they might summon demons. “They are calling anyway.”