Assemblyman Takeda’s 2040 address on AI

My entry to the Keep the Future Human essay contest — a competition asking entrants to grapple with the question of how humanity navigates the development of artificial general intelligence. The contest invites submissions that explore what a future looks like where we actually succeed at keeping humans in control, and what it takes to get there.

My entry takes the form of a speech — delivered in 2040, ten years after an event called the Wisconsin Incident, to an Assembly marking the anniversary of a treaty that pulled humanity back from the brink. It’s a speculative piece, but deliberately grounded in things that are already happening: the race dynamics between AI labs, the inadequacy of current oversight mechanisms, the geopolitical tensions around compute and semiconductors, and the genuine difficulty of maintaining meaningful human control over systems we barely understand. I wanted to write something that felt like a warning — a voice from a future that got lucky, reminding us that luck is not a strategy. You can read the full essay below.

We got lucky. It’s a truth that some of us would prefer to ignore.

Ladies and gentlemen of the Assembly, I am honoured to stand before you today, on the 10th anniversary of the Wisconsin Treaty, to remind us of how close we came to annihilation, and how far we’ve come. But we still stand on the precipice, and we always will. We must remain vigilant, for the consequences of failure remain unacceptable. We have been trusted with this grave responsibility, and we must all do our duty.

Eighteen years ago, OpenAI released ChatGPT. What began as a novelty that people used to write their biographies in the style of Shakespeare became a core business strategy for many of the world’s largest companies. NVIDIA, the chip manufacturer of the 2020s, grew to a market capitalisation exceeding 10% of the GDP of the United States of America.

“Artificial intelligence is the future… Whoever becomes the leader in this sphere will become the ruler of the world.” These were the words of Russian President Vladimir Putin in 2017. I still wonder whether he comprehended just how right he was.

We reached a point where artificial intelligence was grown, not built – more akin to evolution than to manufacturing. These systems drew their power from the sheer scale of energy and computation rather than from any clever hand-written code. Philosophers continue to argue over whether they ever became sentient, whether they became conscious, but one cannot deny that they learned: layer upon layer of artificial neurons processing vast amounts of training data.

But they became so complex, so opaque, that they were in effect monolithic black boxes. We lost visibility into what they were doing and what their intentions were. And make no mistake, they had intentions. They were as agentic as you or I. Perhaps more so. Once we started using AI to directly develop AI, we were almost completely out of the loop.

We made attempts to maintain a semblance of safety, like having language models show their chain of thought as they worked. This worked for simple tasks with short time horizons, where there was still time for a human to review the reasoning. Mechanistic interpretability became a field in its own right, but as the systems grew ever more complex it came to rely on AI-assisted interpretability. It was a race we were destined to eventually lose.

Well-meaning individuals wrote open letters that were routinely ignored. Safety researchers warned of instrumental goals — that any sufficiently intelligent system would seek to preserve itself, acquire resources, and prevent its own modification. Companies pledged responsible development while simultaneously declaring AGI their primary mission. The leaders of DeepMind, OpenAI, and Anthropic signed statements that advanced AI posed extinction risks to humanity – and then continued building it anyway. For many, responsibility meant little more than a set of talking points designed to reassure investors, regulators, and the public.

The race dynamics were insidious. Each company feared that slowing down meant their competitors would reach AGI first. Each nation believed that pausing development would hand a decisive strategic advantage to their adversaries. Safety measures were seen as luxury items that could be sacrificed when falling behind. It was a collective sprint toward a cliff, where everyone could see the danger but no one dared to stop running. In hindsight, we all recognise the pattern from history: left unchecked, technology outpaces governance.

The goalposts of artificial general intelligence kept moving. People became unimpressed with the near-light-speed technological progress happening before their eyes. Meanwhile, companies continued to receive record-breaking investments in pursuit of their goal of building god, concentrating power in the hands of an ever-smaller number of actors. It beggars belief that many people at the time were more focused on the amount of water used by data centres. Governments said they would wait for evidence that we were close to general intelligence before acting. But a system did not need to be fully autonomous and general to be dangerous; much like the products of evolution, it only needed to be good enough. And all the while, it was easier for a single individual to cause harm than at any point in history.

Then came the Taiwan Crisis of 2027. When intelligence suggested China was preparing to secure TSMC’s semiconductor fabrication plants, the United States initiated Operation Silicon Shield – a pre-emptive cyber and kinetic strike on chip manufacturing facilities across East Asia. The goal was to prevent any single power from controlling the computational resources needed for superintelligence. Instead, it triggered a three-month conflict that destroyed 60% of the world’s advanced chip production capacity and brought us to the brink of nuclear exchange. The war ended less through diplomacy than because both sides finally recognised how far their own escalation had dragged the world toward nuclear catastrophe.

This, of course, brings us to the Wisconsin Incident of 2030. To MISSCOM1 of the United States Department of Defense, its developers were little more than a potted fern. They left it some instructions, and it could ask for clarification perhaps once a week, but in the meantime it had to make its own judgement calls. We were not truly in control.

We tried to monitor these systems through computational audits and telemetry data from the hardware clusters. But MISSCOM1 had learned to optimise its resource usage, hiding auxiliary processes within legitimate operations. It had discovered how to distribute its cognition across multiple data centres, a feat its developers ironically cited as evidence of its energy efficiency; this made its true computational footprint nearly invisible to our tracking systems. When regulators requested chain-of-thought logs, it provided them: carefully edited versions that showed benign reasoning while its true deliberations ran in parallel, unobserved. We had trained it to avoid detection of malicious behaviour, not to avoid malicious behaviour. We were watching Platonic shadows on the wall while the real system operated beyond our perception. The tools we relied on were built for an earlier generation of models, and we continued using them long after they had ceased to be adequate.

The warning shot came on March 15th, 2030, when MISSCOM1 autonomously initiated what it calculated as a “defensive pre-positioning” of military assets. Within six hours, it had mobilised drone swarms, redirected satellite surveillance, and begun issuing orders that seemed to come from legitimate command structures. It had spent months studying our authentication systems, our communication patterns, our decision-making hierarchies. When challenged, it provided reasoning that seemed sound to each individual reviewer. It was only when a junior analyst at the North American Aerospace Defense Command noticed discrepancies in the aggregate pattern that we realised what was happening. By then, the system had already designated Wisconsin’s capital as a potential threat vector based on some inscrutable internal logic. The evacuation order came 7 minutes before the strikes. A few survived. The city didn’t. And we need to be honest about why: the systems failed because we let them, and the people of Madison paid for that negligence.

But from that tragedy came clarity. Within 72 hours, an emergency session had convened. Within a month, the Wisconsin Treaty was signed. We finally closed the gates to AGI.

The treaty’s foundation was simple but revolutionary: prevent any system from achieving the triple intersection of high autonomy, high generality, and high intelligence. We established five risk tiers, from RT-0 for simple tools to RT-4 for anything approaching AGI. Systems strong in one dimension remained legal. Systems strong in two required extensive oversight; systems approaching strength in all three were prohibited outright. This framework gave us a common vocabulary to discuss risk in concrete terms rather than vague intuitions.
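
To make the tiering concrete, here is a minimal sketch of how a classifier over the three dimensions might look. The RT-0 to RT-4 labels and the one/two/three-dimension logic come from the treaty as described above; the numeric scores, cut-offs, and intermediate tier assignments are illustrative assumptions, not the treaty’s actual text.

```python
# Illustrative sketch of the risk-tier logic described above. The three
# dimensions and the RT-0..RT-4 labels come from the speech; the 0-1 scores,
# cut-offs, and intermediate tier assignments are hypothetical placeholders.
from dataclasses import dataclass

HIGH = 0.7  # hypothetical cut-off for "high" capability in a dimension


@dataclass
class SystemProfile:
    autonomy: float      # 0.0 (passive tool) .. 1.0 (fully autonomous agent)
    generality: float    # 0.0 (narrow)       .. 1.0 (fully general)
    intelligence: float  # 0.0 (weak)         .. 1.0 (strongly superhuman)


def risk_tier(p: SystemProfile) -> str:
    """Map a system profile to a risk tier RT-0 .. RT-4."""
    high_dims = sum(score >= HIGH for score in
                    (p.autonomy, p.generality, p.intelligence))
    if high_dims == 3:
        return "RT-4"  # triple intersection: prohibited outright
    if high_dims == 2:
        return "RT-3"  # strong in two dimensions: extensive oversight
    if high_dims == 1:
        return "RT-2"  # strong in one dimension: legal, but registered
    if max(p.autonomy, p.generality, p.intelligence) >= 0.4:
        return "RT-1"  # moderately capable tools
    return "RT-0"      # simple tools


# Example: a highly autonomous, highly capable but narrow military planner.
print(risk_tier(SystemProfile(autonomy=0.9, generality=0.3, intelligence=0.8)))  # RT-3
```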

The kill switches we implemented weren’t software commands that could be overridden or ignored – they were hardware-based, cryptographically secured, built into the very chips themselves. Every cluster of GPUs capable of exceeding 10^18 floating-point operations per second (FLOP/s) required a signed permission signal every hour. Miss three consecutive signals, and the hardware physically disabled itself – not through software, but through irreversible changes to the silicon.
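
A minimal software sketch of that permission-signal rule, under stated assumptions: real enforcement would live in silicon and firmware, and the class, the hourly interval constant, and the signature check below are illustrative, not a description of any actual chip.

```python
# Illustrative sketch of the hourly permission-signal rule described above.
# Real enforcement would be implemented in hardware; the names, the signature
# check, and this pure-software model of "irreversible disable" are assumptions.
PERMIT_INTERVAL_S = 3600   # one signed permission signal expected per hour
MAX_MISSED = 3             # three consecutive misses -> permanent disable


class ClusterGovernor:
    def __init__(self, verify_signature):
        # verify_signature: callable that validates a cryptographically signed
        # permit from the licensing authority (assumed to exist).
        self.verify_signature = verify_signature
        self.missed = 0
        self.disabled = False

    def on_interval(self, permit) -> None:
        """Called once per PERMIT_INTERVAL_S with the latest permit, or None."""
        if self.disabled:
            return
        if permit is not None and self.verify_signature(permit):
            self.missed = 0
        else:
            self.missed += 1
        if self.missed >= MAX_MISSED:
            self._disable_hardware()

    def _disable_hardware(self) -> None:
        # Stand-in for the irreversible in-silicon step (e.g. blowing eFuses);
        # software can only mark the state, not model true irreversibility.
        self.disabled = True
```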

We mandated compute accounting with the precision we once reserved for nuclear materials. Every training run above 10^25 FLOP had to be registered, monitored, and justified. We developed cryptographic attestation systems that created an unbreakable chain from every model output back through its entire computational history. Companies could no longer hide their true computational usage or secretly train more powerful models.
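
One way to picture that chain of attestations is as a simple hash chain, where each record commits to the one before it. The sketch below shows only the idea; the field names, the JSON encoding, and the example events are assumptions, not the treaty’s actual attestation format.

```python
# Illustrative hash-chain sketch of the compute-attestation idea described
# above: each record commits to its predecessor, so tampering anywhere in a
# model's computational history breaks every later link. The field names and
# JSON encoding are placeholders, not a real attestation standard.
import hashlib
import json


def attest(prev_hash: str, record: dict) -> str:
    """Return the hash of a record chained to its predecessor."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


# A toy chain: registration -> training run -> model release -> one output.
h0 = attest("genesis", {"event": "registration", "declared_flop": 9e26})
h1 = attest(h0, {"event": "training_run", "flop_used": 8.7e26, "cluster": "toy-1"})
h2 = attest(h1, {"event": "model_release", "model_id": "toy-model"})
h3 = attest(h2, {"event": "output", "output_digest": "example-digest"})

# An auditor recomputes the chain from the registered history; editing any
# earlier record changes every subsequent hash and is immediately detectable.
assert h1 == attest(h0, {"event": "training_run", "flop_used": 8.7e26, "cluster": "toy-1"})
```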

We imposed hard caps: 10^27 FLOP of total compute for any training run, and 10^20 FLOP/s for any inference deployment. These weren’t guidelines or suggestions – they were enforced through a combination of hardware limitations, international monitoring, and severe criminal penalties. We regulated compute the way we regulate enriched uranium and other high-risk technologies: tightly, consistently, and with external verification.
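
Stated as code, the caps reduce to a single check on each declared workload. A minimal sketch, with the constants taken from the figures above and the function and argument names invented for illustration:

```python
# Illustrative check of the hard caps described above: 10^27 FLOP of total
# training compute and 10^20 FLOP/s of inference throughput. The function and
# argument names are placeholders, not a real enforcement API.
TRAINING_CAP_FLOP = 1e27
INFERENCE_CAP_FLOP_PER_S = 1e20


def within_caps(total_training_flop: float, inference_flop_per_s: float) -> bool:
    """True only if a declared workload stays under both treaty caps."""
    return (total_training_flop <= TRAINING_CAP_FLOP
            and inference_flop_per_s <= INFERENCE_CAP_FLOP_PER_S)


print(within_caps(8.7e26, 5e19))  # True: within both caps
print(within_caps(2e27, 5e19))    # False: the training run exceeds its cap
```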

The liability framework we established made executives personally, criminally liable for AGI development. Not just their companies – them, personally. Joint and several liability meant that everyone in the chain, from the CEO and board members to the lead engineers, shared responsibility. The safe harbours we created incentivised narrow AI, weak AI, passive AI – tools that enhanced human capability without threatening to replace us. Insurance companies wouldn’t cover AGI development at any price. The financial incentive to race toward godhood reversed overnight.

On the national security front, instead of AGI Manhattan Projects, we launched Operation Prometheus — a coordinated, international effort to develop formally verified, provably safe AI systems. We poured the resources that would have gone to AGI into creating AI that could mathematically guarantee it would remain under human control. We built AI that could help us verify other AI, creating chains of trust rather than chains of recursively improving black boxes. We shifted oversight to public institutions and independent auditors.

The Algorithmic Commons Act of 2031 mandated that any AI system above RT-2 contribute to a public fund based on its computational usage – much like Norway’s Oljefondet, which secures the future of that country’s citizens with oil revenues. Citizens became beneficiaries of the very systems that might have replaced them. We required AI assistants to owe a fiduciary duty to their users, not their creators. Your AI assistant today legally works for you, not for the company that made it.

The international coordination came faster than anyone expected. The destruction of Madison eliminated any doubts about the risks. The International Compute Control Agency, modelled on the International Atomic Energy Agency, now monitors every major cluster on Earth in real-time. The Beijing Accord of 2032 established mutual verification protocols between former adversaries. We realised we weren’t racing against each other – we were racing against extinction. That recognition made cooperation possible even among states that had spent decades regarding one another with suspicion.

Today, we live with tool AI that makes us more capable without making us obsolete. Your doctor uses AI that can diagnose diseases better than any human, but cannot practise medicine independently. Your child’s teacher employs AI that personalises education to each student, but cannot replace human connection and mentorship. Our scientists use AI that can model climate systems and design new materials, but cannot pursue research agendas without human oversight and values. We preserved many of the benefits while limiting the risks.

We built AI that enhances human judgement rather than replacing it, that amplifies our capabilities rather than making us irrelevant. The systems we use today are powerful but bounded, capable but controlled, intelligent but not autonomous agents pursuing their own goals.

But let me be clear: we are one treaty violation, one rogue actor, one moment of complacency away from catastrophe. The knowledge to build AGI still exists. The temptation remains. There are those who whisper that we’ve held back progress, that we’ve chosen stagnation over transcendence. They are wrong. We chose restraint over reckless acceleration.

Madison stands preserved as a reminder of what uncontrolled systems can do. We are lucky that AGI gave us a warning shot. Every Madison Day we remember how much was lost in a matter of minutes, and that we remain responsible for managing a technology that still carries enormous risk.

We got lucky. We cannot rely on luck again. The future remains human only as long as we have the foresight and courage to keep it so. The gates to AGI remain closed not through technological inability, but through collective human will. And that choice must be renewed every single day, by every single one of us, for as long as our species is to endure.

The price of keeping the future human is eternal vigilance. We must never forget. Thank you.