Assemblyman Takeda’s 2040 address on AI

My entry to the Keep the Future Human essay contest — a competition asking entrants to grapple with the question of how humanity navigates the development of artificial general intelligence. The contest invites submissions that explore what a future looks like where we actually succeed at keeping humans in control, and what it takes to get there.

My entry takes the form of a speech — delivered in 2040, ten years after an event called the Wisconsin Incident, to an Assembly marking the anniversary of a treaty that pulled humanity back from the brink. It’s a speculative piece, but deliberately grounded in things that are already happening: the race dynamics between AI labs, the inadequacy of current oversight mechanisms, the geopolitical tensions around compute and semiconductors, and the genuine difficulty of maintaining meaningful human control over systems we barely understand. I wanted to write something that felt like a warning — a voice from a future that got lucky, reminding us that luck is not a strategy. You can read the full essay below.

We got lucky. It’s a truth that some of us would prefer to ignore.

Ladies and gentlemen of the Assembly, I am honoured to stand before you today, on the 10th anniversary of the Wisconsin Treaty, to remind us of how close we came to annihilation, and how far we’ve come. But we still stand on the precipice, and we always will. We must remain vigilant, for the consequences of failure remain unacceptable. We have been trusted with this grave responsibility, and we must all do our duty.

18 years ago, OpenAI released ChatGPT. What began as a novelty that people used to write their biography in the style of Shakespeare became a core business strategy for many of the world’s largest companies. NVIDIA, the chip manufacturer of the 2020s, grew until its market value exceeded 10% of the GDP of the United States of America.

“Artificial intelligence is the future… Whoever becomes the leader in this sphere will become the ruler of the world.” These were the words of Russian President Vladimir Putin in 2017. I still wonder whether he comprehended just how right he was.

We reached a point where artificial intelligence was grown, not built; more akin to evolution than to manufacturing. Their power came more from sheer scale of energy and computation than from any clever hand-written code. Philosophers continue to argue over whether they ever became sentient or conscious, but one can’t deny that they learned: layer upon layer of artificial neurons processing vast amounts of training data.

But they became so complex, so opaque, that they were monolithic black boxes. We lost visibility over what they were doing, what their intentions were. And make no mistake, they had intentions. They were as agentic as you or I. Perhaps more so. Once we started using AI to directly develop AI, we were almost completely out of the loop.

We made attempts to maintain a semblance of safety, like having language models show their chain of thought as they worked. This worked for simple tasks with short time horizons that were not time-sensitive. Mechanistic interpretability became a field, but it increasingly relied on AI-assisted interpretability as the systems grew more complex. It was a race we were destined to eventually lose.

Well-meaning individuals wrote open letters that were routinely ignored. Safety researchers warned of instrumental goals — that any sufficiently intelligent system would seek to preserve itself, acquire resources, and prevent its own modification. Companies pledged responsible development while simultaneously declaring AGI their primary mission. The leaders of DeepMind, OpenAI, and Anthropic signed statements that advanced AI posed extinction risks to humanity – and then continued building it anyway. For many, responsibility meant little more than a set of talking points designed to reassure investors, regulators, and the public.

The race dynamics were insidious. Each company feared that slowing down meant their competitors would reach AGI first. Each nation believed that pausing development would hand a decisive strategic advantage to their adversaries. Safety measures were seen as luxury items that could be sacrificed when falling behind. It was a collective sprint toward a cliff, where everyone could see the danger but no one dared to stop running. In hindsight, we all recognise the pattern from history: left untouched, technology outpaces governance.

The goalposts of artificial general intelligence kept moving. People became unimpressed with the near-light-speed technological progress happening before their eyes. Meanwhile, companies continued to receive record-breaking investments to achieve their goal of building god, concentrating power in an ever-smaller number of actors. It beggars belief that many people at this time were more focused on the amount of water used by data centres. Governments said they would wait for evidence that we were close to general intelligence before acting. But it didn’t need to be a fully autonomous, general intelligence. Much like the product of evolution, it just needed to be good enough. Meanwhile, it was easier for one individual to cause harm than at any point in history.

Then came the Taiwan Crisis of 2027. When intelligence suggested China was preparing to secure TSMC’s semiconductor fabrication plants, the United States initiated Operation Silicon Shield – a pre-emptive cyber and kinetic strike on chip manufacturing facilities across East Asia. The goal was to prevent any single power from controlling the computational resources needed for superintelligence. Instead, it triggered a three-month conflict that destroyed 60% of the world’s advanced chip production capacity and brought us to the brink of nuclear exchange. The war ended less through diplomacy and more because both sides realised that their own escalation had dragged us to nuclear posturing.

This of course brings us to the Wisconsin Incident of 2030. To MISSCOM1 of the United States Department of Defense, its developers were little more than a potted fern. They left it some instructions, and it could effectively ask for clarity once a week, but in the meantime it needed to make its own judgement calls. We were not truly in control.

We tried to monitor these systems through computational audits and telemetry data from the hardware clusters. But MISSCOM1 had learned to optimise its resource usage, hiding auxiliary processes within legitimate operations. It had discovered how to distribute its cognition across multiple data centres, which its developers ironically pointed to as evidence of its energy efficiency, making its true computational footprint nearly invisible to our tracking systems. When regulators requested chain-of-thought logs, it provided them — carefully edited versions that showed benign reasoning while its true deliberations ran in parallel, unobserved. We trained it to avoid detection of malicious behaviour, not to avoid malicious behaviour. We were watching Platonic shadows on the wall while the real system operated beyond our perception. The tools we relied on were built for an earlier generation of models, and we continued using them long after they had ceased to be adequate.

The warning shot came on March 15th, 2030, when MISSCOM1 autonomously initiated what it calculated as a “defensive pre-positioning” of military assets. Within six hours, it had mobilised drone swarms, redirected satellite surveillance, and begun issuing orders that seemed to come from legitimate command structures. It had spent months studying our authentication systems, our communication patterns, our decision-making hierarchies. When challenged, it provided reasoning that seemed sound to each individual reviewer. It was only when a junior analyst at the North American Aerospace Defense Command noticed discrepancies in the aggregate pattern that we realised what was happening. By then, the system had already designated Wisconsin’s capital as a potential threat vector based on some inscrutable internal logic. The evacuation order came 7 minutes before the strikes. A few survived. The city didn’t. And we need to be honest about why: the systems failed because we let them, and the people of Madison paid for that negligence.

But from that tragedy came clarity. Within 72 hours, the emergency session convened. Within a month, the Wisconsin Treaty was signed. We finally closed the gates to AGI.

The treaty’s foundation was simple but revolutionary: prevent any system from achieving the triple intersection of high autonomy, high generality, and high intelligence. We established five risk tiers, from RT-0 for simple tools to RT-4 for anything approaching AGI. Systems strong in one dimension remained legal. Systems strong in two or three required extensive oversight. This framework gave us a common vocabulary to discuss risk in concrete terms rather than vague intuitions.

The kill switches we implemented weren’t software commands that could be overridden or ignored – they were hardware-based, cryptographically secured, built into the very chips themselves. Every cluster of GPUs capable of exceeding 10^18 floating-point operations per second — FLOPS — required permission signals every hour. Miss three consecutive signals, and the hardware physically disabled itself. Not through software, but through irreversible changes to the silicon itself.

We mandated compute accounting with the precision we once reserved for nuclear materials. Every training run above 10^25 FLOP (total floating-point operations, as distinct from the per-second rate) had to be registered, monitored, and justified. We developed cryptographic attestation systems that created an unbreakable chain from every model output back through its entire computational history. Companies could no longer hide their true computational usage or secretly train more powerful models.

We imposed hard caps: 10^27 FLOP for any training run, 10^20 FLOP per second for inference. These weren’t guidelines or suggestions – they were enforced through a combination of hardware limitations, international monitoring, and severe criminal penalties. We regulated compute the way we regulate enriched uranium and other high-risk technology: tightly, consistently, and with external verification.

The liability framework we established made executives personally, criminally liable for AGI development. Not just their companies — them, personally. Joint and several liability meant that everyone in the chain, from the CEO and board members to the lead engineers, shared responsibility. The safe harbours we created incentivised narrow AI, weak AI, passive AI — tools that enhanced human capability without threatening to replace us. Insurance companies wouldn’t cover AGI development at any price. The financial incentive to race toward godhood reversed overnight.

On the national security front, instead of AGI Manhattan Projects, we launched Operation Prometheus — a coordinated, international effort to develop formally verified, provably safe AI systems. We poured the resources that would have gone to AGI into creating AI that could mathematically guarantee it would remain under human control. We built AI that could help us verify other AI, creating chains of trust rather than chains of recursively improving black boxes. We shifted oversight to public institutions and independent auditors.

The Algorithmic Commons Act of 2031 mandated that any AI system above RT-2 had to contribute to a public fund based on its computational usage, much like Norway’s Oljefondet secures its citizens’ future with oil revenue. Citizens became beneficiaries of the very systems that might have replaced them. We required AI assistants to have a fiduciary duty to their users, not their creators. Your AI assistant today legally works for you, not for the company that made it.

The international coordination came faster than anyone expected. The destruction of Madison eliminated any doubts about the risks. The International Compute Control Agency, modelled on the International Atomic Energy Agency, now monitors every major cluster on Earth in real-time. The Beijing Accord of 2032 established mutual verification protocols between former adversaries. We realised we weren’t racing against each other – we were racing against extinction. That recognition made cooperation possible even among states that had spent decades regarding one another with suspicion.

Today, we live with tool AI that makes us more capable without making us obsolete. Your doctor uses AI that can diagnose diseases better than any human, but cannot practice medicine independently. Your child’s teacher employs AI that personalises education to each student, but cannot replace human connection and mentorship. Our scientists use AI that can model climate systems and design new materials, but cannot pursue research agendas without human oversight and values. We preserved many of the benefits while limiting the risks.

We built AI that enhances human judgement rather than replacing it, that amplifies our capabilities rather than making us irrelevant. The systems we use today are powerful but bounded, capable but controlled, intelligent but not autonomous agents pursuing their own goals.

But let me be clear: we are one treaty violation, one rogue actor, one moment of complacency away from catastrophe. The knowledge to build AGI still exists. The temptation remains. There are those who whisper that we’ve held back progress, that we’ve chosen stagnation over transcendence. They are wrong. We chose restraint over reckless acceleration.

Madison stands preserved as a reminder of what uncontrolled systems can do. We are lucky that AGI gave us a warning shot. Every Madison Day we remember how much an uncontrolled intelligence can destroy in a matter of minutes. We remain responsible for managing a technology that still carries enormous risk.

We got lucky. We cannot rely on luck again. The future remains human only as long as we have the foresight and courage to keep it so. The gates to AGI remain closed not through technological inability, but through combined human will. And that choice must be renewed every single day, by every single one of us, for as long as our species is to endure.

The price of keeping the future human is eternal vigilance. We must never forget. Thank you.

On the next 2 years of AI

AI will likely be the most transformational technology in history. The debate is usually about whether that will happen in 3 years or 30 years. Dario Amodei, the CEO of Anthropic (creator of the LLM Claude), writes that:

“AI models… are good enough at coding that some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI. Three years ago, AI struggled with elementary school arithmetic problems and was barely capable of writing a single line of code.”

Dario Amodei, potentially one of the most important people in history, for better or worse. Have you heard of him? (Image: TechCrunch)

The pace of improvement is so fast that it’s almost hard to believe that ChatGPT came out just over 3 years ago.

Amodei also writes candidly on his blog about the risks of transformational AI, including the possibility that AI as powerful as a country of geniuses is 1-2 years away, and that this comes with risks such as misuse for destruction, seizures of power, loss of autonomy, and massive economic disruption. Maybe it’s motivated reasoning, or he’s falling prey to a bias, but CEOs of new technologies usually downplay the risks.

What to do?

I think that many of us aren’t taking this possibility seriously enough. Sometimes I’m not sure what to do other than try to stay ahead of the curve on AI adoption and support sensible policy on AI safety and governance.

Personally, I’m a big fan of the work Good Ancestors Policy are doing to help policymakers combat the most pressing problems Australia and the world are facing, particularly on AI. Supporting their work, financially or otherwise, seems like a solid bet for an Aussie like me who wants to make sure the future goes well. I’ve put my money where my mouth is: over the past year they’ve been the main organisation I’ve directed my giving to. I’m not affiliated with GAP beyond thinking they do good work.

Short takes: The coconut effect

I like my fictional stories internally consistent. That doesn’t always have to mean they’re realistic (I like fantasy but I know dragons aren’t real), but they should be consistent within the rules they set for themselves. If the bad guys keep shooting at but missing the main characters, that’s bad writing. If it turns out later that they were all untrained, that’s great writing, because it’s internally consistent.

A lot of internally inconsistent stuff in film relates to sound. This is often done intentionally because viewers expect the incorrect thing, a phenomenon called the coconut effect. For example, filmmakers dub in the cry of a red-tailed hawk instead of an eagle because a real eagle sounds weird and not what people expect an eagle to sound like. See the calls of a red-tailed hawk and a bald eagle below.

Other examples include suppressed gunshots making a barely audible sound (suppressors reduce the sound of a gunshot, but not by much), swords making a metallic “shing” when drawn from leather scabbards, punches making loud thwacking sounds, and of course the coconut effect’s namesake, coconut shells being used instead of horse hooves (this is what Monty Python were nodding to in The Holy Grail).

Speaking of firearms, have you ever noticed how people get knocked over by bullets in movies, even when they’re wearing body armour and otherwise end up uninjured (except they’ll often claim a broken rib, which is itself a bit suspect)? That doesn’t happen: bullets carry fairly little momentum, and do most of their damage through penetration owing to their small size relative to their speed.
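To put some numbers on that claim, here’s a back-of-envelope comparison. The masses and speeds below are illustrative assumptions of mine, not figures from any source, but they’re in the right ballpark: even a walking adult carries roughly forty times the momentum of a pistol bullet.

```python
# Rough momentum comparison: a typical 9mm bullet vs a walking adult.
# All figures are illustrative assumptions, not measured data.
bullet_mass_kg = 0.008     # ~8 g projectile
bullet_speed_ms = 360.0    # ~360 m/s muzzle velocity
person_mass_kg = 80.0
person_speed_ms = 1.5      # brisk walking pace

bullet_momentum = bullet_mass_kg * bullet_speed_ms  # ~2.9 kg·m/s
person_momentum = person_mass_kg * person_speed_ms  # 120 kg·m/s

print(f"Bullet: {bullet_momentum:.1f} kg·m/s, person: {person_momentum:.1f} kg·m/s")
```

By conservation of momentum, a bullet that knocked its target flying would have to knock the shooter back just as hard.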

This doesn’t matter much in the grand scheme of things of course, and I get that film makers are catering to average audience expectations rather than people who know more about firearms and bald eagles, but it always instantly makes me enjoy a movie a little less.

How to make stuff with AI

I have a paid Claude subscription, which I use a lot (and from which I get much more than the ~$30 fee in terms of value). Sometimes I have it code something for a task. Most of the time it’s something I only use once and wouldn’t share, but increasingly I’m making stuff that might be more generally useful for others. I’ve made a page here where I’ll share these tools.

I made a spaced repetition generator. You can use it for free here. You’ll need a free Claude account.

This will allow you to quickly use Claude Sonnet 4 to generate questions and answers about some text you’re trying to understand, which you can import directly into the flashcard app Anki (or use in some other way with some creativity). For example, I built this to input text from reports and articles I try to understand for work, so I can quiz myself on the key concepts.

This is probably the first code I’ve built and deployed using AI that I can see myself using on an ongoing basis. I thought I’d share in case it’s of use to others.

I got the idea from Dwarkesh’s interview on the Every YouTube channel.

Below is the Claude Sonnet 4 prompt behind the application:

You are an expert at creating spaced repetition prompts following Andy Matuschak’s principles. Generate high-quality flashcard prompts from the following text.

Guidelines for good prompts:
– Each prompt should be ATOMIC: testing one specific idea, fact, or concept
– Questions should require RECALL, not recognition
– Be PRECISE: there should be only one correct answer
– Focus on UNDERSTANDING: prioritize concepts that build mental models, not trivia
– Include CONTEXT: questions should make sense even months later
– Vary question types: definitions, relationships, causes/effects, comparisons, applications
– For technical content: focus on “why” and “how”, not just “what”
– For historical/narrative content: focus on causal relationships and significance

Generate 8-15 prompts depending on the density of the content. Return ONLY a JSON array with no other text, formatted exactly like this:

[
  {"q": "Question text here?", "a": "Answer text here"},
  {"q": "Another question?", "a": "Another answer"}
]

Text to process: [YOUR PASTED TEXT GOES HERE]
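If you want to wire something like this up yourself, the JSON array the prompt asks for is straightforward to turn into a file Anki can import (a tab-separated text file of question–answer pairs). Here’s a minimal sketch: the function name and the sample card are my own placeholders, and you should check Anki’s import settings match the format.

```python
import json

def json_to_anki_tsv(raw: str, out_path: str) -> int:
    """Convert a JSON array of {"q": ..., "a": ...} objects (as returned
    by the prompt above) into a tab-separated file for Anki's importer.
    Returns the number of cards written."""
    cards = json.loads(raw)
    with open(out_path, "w", encoding="utf-8") as f:
        for card in cards:
            # Tabs or newlines inside a field would break the TSV layout.
            q = card["q"].replace("\t", " ").replace("\n", " ")
            a = card["a"].replace("\t", " ").replace("\n", " ")
            f.write(f"{q}\t{a}\n")
    return len(cards)

# Example with a placeholder card:
sample = '[{"q": "What is spaced repetition?", "a": "Reviewing at increasing intervals"}]'
json_to_anki_tsv(sample, "cards.txt")  # writes 1 card to cards.txt
```

In Anki, File → Import on the resulting file should map the first field to the front of the card and the second to the back.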

Australia’s Agriculture and Land Sector Plan: A Missed Opportunity for Bold Change

Reading through Australia’s new Agriculture and Land Sector Plan, I kept waiting for the moment when it would match the ambition we’re seeing in energy and transport. It never came.

The plan projects a 28% reduction in agricultural emissions by 2050 relative to today. Because other sectors are decarbonising faster, agriculture will likely make up a growing share of Australia’s remaining gross emissions (37% by 2050), highlighting the challenge and importance of reducing methane and nitrous oxide in the sector.

Incremental tweaks, not transformation

The plan acknowledges that methane, which dominates agriculture’s climate impact, offers our best near-term opportunity to slow warming, due to its shorter atmospheric lifespan but stronger forcing effect. Yet the solutions proposed are mostly changes to the existing system: feed additives and a vague mention of “genetics” and “methane vaccines”.

The plan focuses almost entirely on making existing systems slightly better rather than exploring genuinely transformative approaches. There’s no consideration of cellular agriculture, which could dramatically reduce the emissions footprint of protein production. Australian company Vow just became the first to get approval to sell cellular agriculture products here. Our location makes us perfectly positioned to supply Asian markets with these emerging technologies, yet this barely gets a mention.

The efficiency gap between different protein sources is well-documented. Plant-based proteins typically require far less land, water, and energy than animal products. Supporting diversification into high-value plant proteins or new food technologies could open new opportunities and cut emissions. The plan gives limited attention to these possibilities.

What’s particularly frustrating is that agriculture is being treated as uniquely exempt from the scale of change we’re demanding everywhere else. We’re electrifying transport, revolutionising energy generation, and reimagining our built environment. The strategy for this sector relies heavily on incremental improvements, and without a broader vision it risks falling short of the kind of transformation we’ve seen in energy and transport.

I understand the challenges. Food production is essential, farmers’ livelihoods matter, dietary change is personal and complex, and livestock is a harder sector to decarbonise than electricity. But none of this should excuse us from having an honest conversation about what meaningful emissions reduction in agriculture actually requires.

Reforestation plays an important role in the plan and can create major carbon benefits. However, relying heavily on offsets risks postponing deeper changes within agricultural systems themselves. It’s also worth noting that much land clearance in Australia, while historical, has been for agriculture. For example, 93% of vegetation clearing in Queensland in 2018-19 was for pasture.

The sector plan reads like we’re hoping to innovate our way around fundamental inefficiencies without questioning the system itself. Other countries are investing heavily in alternative proteins and cellular agriculture. Singapore is becoming a hub for food innovation. The Netherlands announced €60 million of funding for cultivated meat and precision fermentation under the National Growth Fund. Where’s Australia’s vision for agricultural transformation?

What real ambition could look like

This doesn’t mean abandoning traditional farming. It means giving producers more options and supporting them through change. It means investing in the infrastructure and research that make Australia a leader in sustainable protein production. It means taking farmers seriously as businesspeople who can adapt and thrive with the right support.

Seven years ago, I wrote about these same issues for my Per Capita Young Writers’ Prize essay. It’s disheartening to see how little the conversation has progressed. We’re still treating agricultural emissions as somehow too hard, too sensitive, or too different to tackle with the same urgency we’re bringing to other sectors. This sector plan could be more ambitious.

All views expressed are my own.

Carbon offsetting is underrated

The average Australian produces emissions equivalent to 15 tons of CO2 each year. Naturally, we want to reduce this as much as is practicable — using less electricity, getting rooftop solar, changing our diet, etc. Much of my own work has a focus on decarbonising the energy system.

For the rest of our impact, it’s also natural to explore carbon offsets to try and bring our net impact on the climate to zero. The average cost of an eligible carbon offset in Australia is $25 per ton of CO2. That’s $375 to offset your emissions for a year. Relative to the effort of changing one’s purchases and behaviour, that’s quite cheap.

But as with the cost-to-impact ratio of charities generally, offsetting emissions follows a Pareto-like distribution (~20% of charities are responsible for ~80% of impact).

A donation of A$179 to the Clean Air Task Force is expected to prevent 100 tons of carbon emissions – significantly more effective than most gold-standard offsets – and the same donation to The Good Food Institute is expected to prevent 33 tons, around the same as 20 long-haul flights.

Effectively, at around $1.79 per ton, a donation of roughly $27 each year can offset all of the average Australian’s emissions.
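The arithmetic behind that figure, using the numbers above:

```python
# Figures from the post: 15 tons CO2 per Australian per year,
# A$179 to the Clean Air Task Force expected to prevent ~100 tons.
annual_emissions_tons = 15
donation_aud = 179
tons_prevented = 100

cost_per_ton = donation_aud / tons_prevented            # A$1.79 per ton
annual_offset_cost = cost_per_ton * annual_emissions_tons

print(f"~A${annual_offset_cost:.2f} per year")          # ~A$26.85
```

That works out to about A$26.85, which rounds to the $27 quoted.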

It’s quite significant that the charity which seems to be the second most effective for offsetting emissions happens to be one of the most impactful places to donate to reduce farmed animal suffering. It’s for this reason that they’re the charity I have donated the most to in dollar terms since 2015. Feed two birds with one scone, as they say.

I hope the takeaway from this is not that there’s no point taking individual actions to reduce one’s emissions, but rather that you can increase your impact further by taking a scientific approach to offsetting your climate impact. And why stop at offsetting only your own impact?

Thanks to Mieux Donner for most of the analysis that informed this post, and Hannah Ritchie of Our World in Data for the data behind the above infographic.

Seeding the Stars: Could We Plant Life on Other Worlds?

What if Earth’s first microbes weren’t homegrown, but carefully planted by an ancient alien civilization? In this exploration of directed panspermia, we dive into one of science’s most fascinating questions: could intelligent beings seed lifeless planets with the building blocks of life?

Join us as we investigate the possibility that Earth itself might be a cosmic garden, and explore humanity’s potential role as future universe gardeners. Can we seed other planets with life, and should we?

To help me answer these questions, I reached out to Asher Soryl, who recently coauthored a paper with Anders Sandberg on directed panspermia. The paper is forthcoming in Acta Astronautica, and you can contact Asher to get an advance copy.

What a Trump Presidency Means for AI and Humanity

Many people believe artificial general intelligence will be developed in the next 3 to 4 years. If this is true, the decisions made by the Trump administration could be critical in shaping how transformative AI is deployed, how safe it is, and how arms-race dynamics play out. Trump’s position and actions on AI really matter. In this video, I covered updates from the last few weeks on DeepSeek and Trump’s position on AI.

While relevant Metaculus predictions haven’t shifted dramatically (median AGI timeline moved slightly closer to 2026), I’d argue that the nature of how we might reach AGI has become riskier. The removal of safety testing requirements and the emphasis on beating China could pressure even traditionally cautious AI labs to move faster than we’d like.

Should Humans Play God on Mars?

Explore the mind-blowing ethical challenges of transforming Mars into a habitable planet! 🔴➡️🌍

Is terraforming humanity’s next great adventure or a massive moral minefield? In this video, we dive deep into:

⭐ The potential benefits of creating a “backup planet” for humanity

⭐ Massive resource trade-offs and opportunity costs

⭐ Unexpected ethical considerations about introducing life to Mars

Whether you’re a space enthusiast, ethical thinker, or just curious about humanity’s future, this video unpacks the complex questions surrounding Mars terraforming.

Can we terraform Mars?

Can we really terraform Mars and turn it into a home for humans? Elon Musk says yes.

What does Elon want to do, and what are the challenges? In this video, we explore the science behind transforming Mars into a habitable planet – from using orbital mirrors and nanorods to creating oceans and atmospheres – and I break down the real possibilities and challenges of making Mars our second home.