
One weird thing about reporting on AI, says Ezra Klein in The New York Times, is that “person after person” in the industry tells me they are “desperate to be regulated”, even if it slows them down. “In fact, especially if it slows them down.” Fierce competition is forcing Big Tech firms to “go too fast and cut too many corners”, but no one company is prepared to risk hitting the brakes and losing out. That’s where the government comes in, “or so they hope”. The question is, which government?
The first major proposal for doing so came from the European Commission, which claimed to have devised a “future-proof” way of restricting particularly sensitive uses of AI – marking national exams, say, or calculating credit scores. But this “predictably arrogant” claim is already obsolete, as models like GPT-4 aren’t restricted to any single use. The White House’s suggestions aren’t much better, amounting to vague requirements that automated systems should “provide explanations” of what they’re up to. Perhaps surprisingly, for all the talk of an AI arms race, the one government “perfectly willing to cripple” its tech firms is in Beijing. Chinese AI is banned from producing content that might harm national unity, overturn the socialist system, incite “separatism” or in any way “upset economic order or social order”. I wouldn’t go quite as far as that, but “we need to go a lot further than we have – and fast”.