AI will likely be the most transformational technology in history. The debate is usually over whether that happens in 3 years or in 30. Dario Amodei, CEO of Anthropic (creator of the LLM Claude), writes:
“AI models… are good enough at coding that some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI. Three years ago, AI struggled with elementary school arithmetic problems and was barely capable of writing a single line of code.”

The pace of improvement is so fast that it’s almost hard to believe that ChatGPT came out just over 3 years ago.
Amodei also writes candidly on his blog about the risks of transformational AI, including the possibility that AI as powerful as a country of geniuses is 1-2 years away, and that this brings risks such as misuse for destruction, seizure of power, loss of human autonomy, and massive economic disruption. Maybe it's motivated reasoning, or he's fallen prey to some bias, but CEOs of new technologies usually downplay the risks.
What to do?
I think many of us aren't taking this possibility seriously enough. Sometimes I'm not sure what to do other than try to stay ahead of the curve on AI adoption and support sensible policy on AI safety and governance.
Personally, I'm a big fan of the work Good Ancestors Policy (GAP) is doing to help policymakers tackle the most pressing problems facing Australia and the world, particularly AI. Supporting their work, financially or otherwise, seems like a solid bet for an Aussie like me who wants the future to go well. I've put my money where my mouth is: over the past year, they've been the main organisation I've directed my giving to. I'm not affiliated with GAP beyond thinking they do good work.