Mapping Australia’s battery boom by postcode

The Australian Government’s Cheaper Home Batteries program has nearly tripled Australia’s household battery capacity in just eight months. I pulled publicly available data from the Clean Energy Regulator and Energy Consumers Australia to dig into what’s actually happening — where the batteries are being installed, who’s buying them, why, and what it all means for electricity bills and the grid. I also built an interactive postcode-level map so you can explore the data yourself.

Some of the findings are surprising. Average postcode income has no correlation with battery uptake. People are buying batteries roughly double the size they were before the rebate. And while nearly 60% of survey respondents say they’re interested in joining a virtual power plant, fewer than 10% of battery owners actually have — a gap that’s already reshaping AEMO’s long-term plans for how much grid-scale storage Australia will need.

Read the full analysis on Energy x AI, my newsletter covering AI infrastructure, Australian energy regulation, and the intersection of the two.


Claude’s off-peak promotion is smart for servers, possibly bad for the grid

Anthropic’s off-peak Claude promotion is smart for their servers — but it could mean more AI compute when the grid is most stressed.

Anthropic is trialling double Claude usage outside of peak Claude time (8 am to 2 pm ET on weekdays), or 10 pm to 4 am AEST, for 2 weeks — pretty handy for us Aussie users. This is likely motivated by reducing their server demand during peak usage hours, but counterintuitively, it could lead to more demand at peak electricity times when the grid is already most stressed.

Why are they doing this?

Anthropic hasn’t explicitly said why, but the timing is telling. Claude went down on the 2nd of March[1] due to “unprecedented demand” — partly driven by a surge in sign-ups after ChatGPT’s outage on the 27th of February.[2] There was another significant Claude outage on March 11 — just two days before this promotion launched.

If Anthropic can shift user load away from US business hours, they get the same total usage for less server cost. The marginal cost of running a GPU outside peak time is low, so encouraging more usage then is a straightforward win — much like the benefits of electricity load shifting for better network utilisation.

There’s also an IPO angle. As one Hacker News commenter said: “Maybe it’s a little bit of [hardware utilisation], and a bit of boosting monthly average users and token average usage. Anthropic should be IPOing this year and higher usage stats I’m sure will help.”

Does the peak window actually match usage?

Claude has users globally, so why was US-peak time specifically chosen? Most Claude inference is done in the US — Anthropic hasn’t disclosed exact figures, but the US accounts for 22% of Claude.ai usage, far ahead of the next closest countries, India (5.8%) and Japan/South Korea (3.1% each), according to the Anthropic Economic Index. Most inference likely runs on US-based hardware, given Anthropic’s cloud partnerships with AWS and Google Cloud, though regional processing launched in mid-2025.

Per capita Claude usage by country (Anthropic Economic Index).

I wanted to see whether the promotion’s peak window lines up with actual internet activity, so I pulled global HTTP traffic data from the Cloudflare Radar API. Cloudflare’s traffic is disproportionately US-based,[3] which is a flaw for measuring “global” internet usage — but since Claude usage is also US-dominated, the bias very roughly cancels out for this purpose.[4] The usage peak lines up well with the promotion’s peak window.

Source: Cloudflare Radar API, 7-day average (human traffic only, bot-filtered), hourly resolution. Normalized 0–1 where 1 = peak hour. Converted from UTC to ET.
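The post-processing behind that chart is simple enough to sketch in a few lines. This is a minimal Python version assuming the hourly traffic values have already been fetched from the Radar API — the function names are my own, not anything from Cloudflare’s SDK:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def normalise(values):
    """Scale hourly traffic values to 0-1, where 1 = the peak hour."""
    peak = max(values)
    return [v / peak for v in values]

def utc_hour_to_et(hour_utc, year=2026, month=3, day=16):
    """Map an hour-of-day in UTC to the equivalent hour in US Eastern Time
    (DST-aware via the IANA timezone database)."""
    dt = datetime(year, month, day, hour_utc, tzinfo=timezone.utc)
    return dt.astimezone(ZoneInfo("America/New_York")).hour
```

Using `zoneinfo` rather than a hard-coded offset matters here: mid-March sits right after the US daylight-saving transition, so a naive UTC−5 shift would misalign the peak window by an hour.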

But what about the electricity grid?

Anthropic is optimising for IT infrastructure peak, not electricity peak. Because of solar generation throughout the day, the main peak for grid-drawn electricity is typically in the evening — when everyone comes home, turns on the air conditioning, and cooks dinner. This gives rise to the famous duck-shaped net load profile: demand after subtracting variable renewable generation plunges during sunny midday hours and surges in the late afternoon.

Demand curve after subtracting variable renewable generation for the lowest net load spring day in CAISO, leading to the characteristic duck curve (EIA).[5]
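The arithmetic behind the duck curve is trivial — net load is just demand minus variable renewable generation, hour by hour. A toy sketch with entirely made-up figures:

```python
def net_load(demand_mw, vre_mw):
    """Net load = demand minus variable renewable generation, per interval."""
    return [d - r for d, r in zip(demand_mw, vre_mw)]

# Made-up illustrative 3-hourly figures (MW): flat-ish demand, solar peaking midday
demand = [20, 21, 24, 26, 28, 25]   # 06:00, 09:00, 12:00, 15:00, 18:00, 21:00
solar  = [1, 8, 15, 12, 3, 0]

duck = net_load(demand, solar)      # dips midday, surges into the evening

# The steepest upward step is the evening ramp that peakers have to cover
evening_ramp = max(b - a for a, b in zip(duck, duck[1:]))
```

The point of the exercise is the ramp: even with flat underlying demand, the disappearance of solar in the late afternoon creates a steep climb in net load, and that climb is what an AI off-peak promotion could inadvertently steepen.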

This kind of demand-shifting incentive could become increasingly relevant as model inference consumes more hardware and electricity and the number of users grows. And as inference is increasingly served from within different countries, we may see country- or timezone-specific peak/off-peak incentives.

Currently, Anthropic is likely just optimising for IT infrastructure peak rather than electricity peak, but there’s no reason they couldn’t do it for both peaks (e.g., double usage outside of 8 am to 2 pm and ~7 pm to 10 pm local time).

Illustrative grid-aware peak restriction alongside existing peak restriction. Sources: Cloudflare Radar API (Mar 2026, 7-day avg, human traffic, bot-filtered). Net load curve: indicative duck curve shape based on CAISO 2023 spring low day (EIA), plotted in ET for illustrative purposes. Net load = demand minus variable renewable generation.

Note: this is purely illustrative. I’ve taken the lowest net load day from California’s CAISO market (which has a more pronounced duck curve than the US east coast) in spring 2023. The actual impact depends on the local demand curve wherever Anthropic’s data centres (or their cloud servers) are located.
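As code, a grid-aware promotion would just check two windows instead of one. The hours below are illustrative (roughly matching the chart above), not anything Anthropic has announced:

```python
IT_PEAK = range(8, 14)     # 8 am - 2 pm local: the existing server-peak window
GRID_PEAK = range(19, 22)  # ~7 pm - 10 pm local: hypothetical evening net-load peak

def double_usage_applies(hour: int) -> bool:
    """True if the doubled-usage promotion would apply at this local hour,
    under a grid-aware variant that also excludes the evening electricity peak."""
    return hour not in IT_PEAK and hour not in GRID_PEAK
```

Under this rule, the promotion still covers the solar-rich afternoon (2 pm to 7 pm) and overnight hours, but stops nudging users toward the evening ramp.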

This is probably the key takeaway of this post: counterproductively, doubling Claude usage outside the data centre peak could push more grid demand into the evening electricity peak, when the grid is already most stressed. If we were optimising to minimise grid impacts, we’d actually want more usage during daylight hours, not less. That way we soak up excess solar while minimising reliance on peakers (such as gas turbines).

Demand shifting is everywhere

This isn’t the first example of demand shifting by an LLM provider. DeepSeek offered 75% (for R1) or 50% (for V3) API discounts during off-peak hours (16:30–00:30 UTC). But this kind of incentive could become increasingly relevant as AI workloads scale.

Encouraging load shifting in electricity use is a tale almost as old as time. Time-of-use tariffs are used to incentivise more or less load at certain times of day. Australia’s Solar Sharer initiative will make electricity free (up to a cap) for 3 daylight hours for those on certain plans.

Outside of the energy world, demand shifting signals can be seen in telecommunications (peak and off-peak internet speeds), public transport (cheaper fares outside of peak time), airlines (dynamic pricing by time of day and season), and ride sharing (Uber surge pricing). It’s an efficient signal to spread demand across the day, week, and year.

As AI workloads grow and their grid impact becomes harder to ignore, aligning compute incentives with electricity system needs may become not just sensible but necessary.


  1. The outage peaked at 6:40 a.m. ET. ↩︎
  2. Who among us hasn’t fired up ChatGPT after using up our daily Claude usage? ↩︎
  3. Cloudflare is used by 20.4% of all websites, and the US accounts for 47.6% of Cloudflare websites. ↩︎
  4. I’d welcome suggestions for better data sources. ↩︎
  5. Note that the duck curve is typically depicted differently in Australia, where it only subtracts household self-consumption rather than all variable renewables. ↩︎

How exactly do data centres affect electricity prices?

Anthropic, the creators of Claude, are opening an office in Sydney as part of their expansion to Australia and New Zealand, which rank 4th and 8th globally on per capita Claude usage. What does this mean for Anthropic and Australia?

The most common topics people use Claude for in Australia. The full data explorer here is worth playing with.

What Anthropic is actually doing here

Initially, this just means that Anthropic will have a bigger focus on supporting their products’ customers in Australia and New Zealand, such as Canva, Quantium, and Commonwealth Bank of Australia. However, Anthropic is also exploring expanding their compute capacity in Australia through third-party partners, and they’re “in early conversations about longer-term infrastructure in the region”. All this would mean more data centre load in Australia, probably on top of the increased load we’re already expecting.

There are two main types of AI data centre load: training and inference. Training is the computationally intensive process of building the model by feeding it vast amounts of data, and happens once (or occasionally) for each model version. Inference is the ongoing, per-query compute that runs every time someone sends a message to Claude or uses the API.

Anthropic already routes some inference traffic through Australia via cloud partners like AWS and Google Cloud, which both have Sydney regions. But Anthropic says expanding local compute capacity is one of the most consistent requests it hears from Australian enterprises and government agencies, particularly those with data residency needs — and it’s actively exploring this through existing third-party infrastructure.

It seems unlikely that Anthropic will train their models in Australia anytime soon. Frontier model training requires large, concentrated compute clusters — Anthropic’s $50 billion infrastructure deal with Fluidstack is building these in Texas and New York. The language in Anthropic’s announcement is carefully scoped to “compute capacity” — which is inference language. They’re also in “early conversations about longer-term infrastructure,” but training at scale in Australia would be a much bigger and more distant proposition.[1]

How data centres can raise electricity prices

It’s worth mentioning Anthropic’s existing principles, which are aimed at ensuring the electricity costs of the demand growth driven by their training and inference load aren’t socialised to other consumers. Data centres (or any other growing source of load) can raise electricity prices in two main ways.

First, by requiring more generation capacity (or demand response). When new large loads like data centres connect to the grid, they increase total electricity demand. If that demand pushes up against supply constraints — particularly during peak periods — it can tighten the wholesale electricity market, driving up spot prices that flow through to all consumers. This can also bring forward the need for new generation investment. Demand response — paying large consumers to reduce their load during tight periods — can help, but it’s an additional cost borne by the system.[2]

Second, by requiring more electricity network infrastructure to accommodate peak demand. Transmission and distribution network costs are, in simple terms, ultimately paid for by all electricity consumers (including you and me). It shows up in our household electricity bill partly under the fixed daily charge,[3] and partly as a volumetric charge (the more energy you consume, the more of the total fixed network cost you pay for — we’ll come back to this later).
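To make the bill structure concrete, here’s a toy calculation of the network portion of a household bill. The rates are placeholders I’ve made up for illustration, not actual tariffs:

```python
def network_portion(days: int, kwh: float,
                    daily_fixed: float = 0.50,       # $/day, placeholder rate
                    volumetric_rate: float = 0.10):  # $/kWh, placeholder rate
    """Illustrative network-cost recovery on a household bill:
    a fixed daily charge plus a per-kWh volumetric component."""
    return days * daily_fixed + kwh * volumetric_rate

# A quarter of 91 days at 1,200 kWh: $45.50 fixed + $120 volumetric
quarterly = network_portion(days=91, kwh=1200)
```

The volumetric component is the one that matters for the rest of this post: the more energy flows through the network in total, the lower the per-kWh rate needed to recover the same fixed network cost.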

Anthropic has “committed” to:

  • Pay for 100% of the grid upgrades needed to interconnect their data centres
  • Procure new power and protect consumers from price increases
  • Reduce strain on the grid by investing in curtailment systems to cut power during peak demand
  • Be a responsible neighbour to the local communities around their data centres

It’s unclear whether Anthropic intends for these principles to apply to their global operations, or just in the US (“but AI companies shouldn’t leave American ratepayers to pick up the tab”).

When more load means lower bills

In theory, there is a pathway for increases in load to actually reduce electricity costs for other consumers. This is evidenced most neatly by electric vehicle (EV) charging. Because people charge their EVs throughout the day, they don’t significantly increase peak electricity demand, and it’s peak demand (not average demand) that drives the need to build more network.[4] Modelling conducted by Energy Consumers Australia and CSIRO, as well as direct evidence from California, shows that as more people buy and charge EVs, they take on a greater share of the network costs relative to those who don’t own an EV. That’s because network costs are recovered at least in part through the volumetric component of the electricity bill in Australia (and many other places).

The result is that EV owners save money because driving an EV is cheaper over the life of the car than driving an internal combustion engine vehicle, and non-EV owners save money because they pay less for the network.

Annual savings from electric vehicles over 20 years from 2023, 2030, 2040, and 2050 for EV and non EV-owning households. This analysis assumes the EV adoption targets in the 2022 ISP Step Change scenario are achieved. ECA
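The mechanism is easiest to see with toy numbers. If a fixed pool of network costs is recovered volumetrically, every extra kWh of throughput lowers the per-kWh rate for everyone (all figures below are made up):

```python
def volumetric_rate(total_network_cost: float, total_kwh: float) -> float:
    """Per-kWh rate needed to recover a fixed network cost pool volumetrically."""
    return total_network_cost / total_kwh

# Same $1m network cost pool, before and after EVs add off-peak throughput
base_rate = volumetric_rate(1_000_000, 10_000_000)   # $0.10/kWh
with_evs  = volumetric_rate(1_000_000, 12_500_000)   # $0.08/kWh
```

Non-EV owners consume the same energy as before but at the lower rate — which is the whole argument, and also why it breaks down if the extra load adds to peak demand and forces the cost pool itself to grow.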

Note that for this to remain true, a sizeable portion of network costs will need to be recovered via volumetric charges. The Australian Energy Market Commission recently floated the idea of recovering more network costs through fixed charges, which has sparked lively discussion. There are advantages and disadvantages to each approach,[5] but it’s worth noting that the shift would mean increased electricity throughput from electric vehicles and other new loads raises bills for other consumers rather than lowering them.

Could data centres lower electricity prices too?

So more electricity demand from EVs could actually save non-EV owners money. Could the same be true of data centres for non-data-centre consumers? I don’t know, but it depends on how much data centres contribute to peak demand (both in terms of peak wholesale prices bumping up the spot price, and peak network constraints requiring more network to be built — these don’t always coincide). That in turn depends on how flexible data centre load can be: can they pre-cool before a demand spike, ramp down compute during peak load, or rely on onsite batteries or self-generation to ride through the peaks?

A key metric here is Power Usage Effectiveness (PUE) — the ratio of a data centre’s total energy consumption to the energy used by its IT equipment alone. A PUE of 1.0 would mean every watt goes to computing; anything above that represents overhead from cooling, power distribution, and lighting. According to the Uptime Institute, the global average PUE in 2025 was 1.54, while Google reports a fleet-wide average of 1.09.
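PUE is just a ratio, but it helps to be concrete about what the headline numbers imply:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.
    1.0 means zero overhead; anything above is cooling, power distribution, etc."""
    return total_facility_kwh / it_equipment_kwh

# At the 2025 global average of 1.54, roughly a third of energy is overhead
overhead_fraction = 1 - 1 / pue(154, 100)
```

That overhead is also the most flexible slice of a data centre’s load: cooling can often be shifted or pre-run without touching the compute itself.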

On flexibility, the evidence is growing but mixed. A Duke University study estimated that curtailing data centre loads for just 0.25% of their uptime could free up enough capacity to accommodate 76 GW of new load. An ACEEE white paper notes that a test of a software platform at an Oracle data centre reduced peak power consumption by 25% during peak grid hours. And a broader academic study published in ScienceDirect found that participation in demand response programs can reduce data centre energy purchase costs by up to 24%.

If data centres reduce the per-unit cost of electricity (by increasing network utilisation) by more than they increase it (by pushing up wholesale prices and socialising additional network costs), they’ll lower electricity costs for other consumers. If not, they’ll raise them.
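That condition can be written down directly. All three inputs are hypothetical per-kWh effects — estimating their actual magnitudes is the hard (and unresolved) part:

```python
def bill_impact_per_kwh(utilisation_saving: float,
                        wholesale_increase: float,
                        socialised_network_cost: float) -> float:
    """Net per-kWh impact of data centre load on other consumers.
    Negative = data centres lower other consumers' costs; positive = raise them.
    All inputs are hypothetical $/kWh effects, not measured values."""
    return (wholesale_increase + socialised_network_cost) - utilisation_saving
```

The sign flips entirely on the relative magnitudes, which is why the peak-demand contribution and load flexibility discussed above are the crux of the question.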

Electricity costs are a small percentage of total training costs for frontier models (~2-6%), and my intuition has been that training would be relatively insensitive to changes in electricity price. In other words, even when electricity prices are high, or they’re offered a lot of money to ramp down to meet a network need, why wouldn’t they just let those GPUs rip and make even more money? I can’t find figures for inference operations specifically, but estimates of electricity’s share of total data centre operating costs range from 15-25% to 40-60%, so perhaps for non-training compute, demand flexibility will be attractive.

All views are my own, and do not represent my current or previous employers.


[1] Or maybe not, who knows?

[2] I wrote about the need for more generation capacity and the levers Australia uses to achieve this here.

[3] Although confusingly for many, not all of the daily charge is used to pay for network costs.

[4] The best analogy for this was written by my former colleague Ashley Bradshaw. Department stores are relatively empty most of the time, but they’re built with peak demand (the month of Christmas) in mind, not average demand. The same is even more true for electricity networks.

[5] Volumetric charges incentivise solar, batteries, and energy efficiency, and spread the benefits of increased EV adoption across all users, but may be unfair to renters, apartment dwellers, and low-income households who cannot access consumer energy resources (CER) and end up paying disproportionately for the network. Fixed charges provide more predictable revenue for networks and prevent solar/battery owners from avoiding their fair share of network costs.

Assemblyman Takeda’s 2040 address on AI

My entry to the Keep the Future Human essay contest — a competition asking entrants to grapple with the question of how humanity navigates the development of artificial general intelligence. The contest invites submissions that explore what a future looks like where we actually succeed at keeping humans in control, and what it takes to get there.

My entry takes the form of a speech — delivered in 2040, ten years after an event called the Wisconsin Incident, to an Assembly marking the anniversary of a treaty that pulled humanity back from the brink. It’s a speculative piece, but deliberately grounded in things that are already happening: the race dynamics between AI labs, the inadequacy of current oversight mechanisms, the geopolitical tensions around compute and semiconductors, and the genuine difficulty of maintaining meaningful human control over systems we barely understand. I wanted to write something that felt like a warning — a voice from a future that got lucky, reminding us that luck is not a strategy. You can read the full essay below.

We got lucky. It’s a truth that some of us would prefer to ignore.

Ladies and gentlemen of the Assembly, I am honoured to stand before you today, on the 10th anniversary of the Wisconsin Treaty, to remind us of how close we came to annihilation, and how far we’ve come. But we still stand on the precipice, and we always will. We must remain vigilant, for the consequences of failure remain unacceptable. We have been trusted with this grave responsibility, and we must all do our duty.

18 years ago, OpenAI released ChatGPT. What began as a novelty that people used to write their biography in the style of Shakespeare became a core business strategy for many of the world’s largest companies. NVIDIA, the 2020s chip manufacturer, grew to a valuation of over 10% of the GDP of the United States of America.

“Artificial intelligence is the future… Whoever becomes the leader in this sphere will become the ruler of the world.” These were the words of Russian President Vladimir Putin in 2017. I still wonder whether he comprehended just how right he was.

We reached a point where artificial intelligence was grown, not built. More akin to evolution than manufacturing. Their power came more from the sheer scale of energy and computational power than from any clever hand-written code. Philosophers continue to argue over whether they ever became sentient, became conscious, but one can’t deny that they learned. Layer upon layer of artificial neurons processing vast amounts of training data.

But they became so complex, so opaque, that they were monolithic black boxes. We lost visibility over what they were doing, what their intentions were. And make no mistake, they had intentions. They were as agentic as you or I. Perhaps more so. Once we started using AI to directly develop AI, we were almost completely out of the loop.

We made attempts to maintain a semblance of safety, like having language models show their chain of thought as they worked. This worked for simple tasks that had short time horizons and were not time-sensitive. Mechanistic interpretability became a field, but it relied ever more on AI-assisted interpretability as the systems grew more complex. It was a race we were destined to eventually lose.

Well-meaning individuals wrote open letters that were routinely ignored. Safety researchers warned of instrumental goals — that any sufficiently intelligent system would seek to preserve itself, acquire resources, and prevent its own modification. Companies pledged responsible development while simultaneously declaring AGI their primary mission. The leaders of DeepMind, OpenAI, and Anthropic signed statements that advanced AI posed extinction risks to humanity – and then continued building it anyway. For many, responsibility meant little more than a set of talking points designed to reassure investors, regulators, and the public.

The race dynamics were insidious. Each company feared that slowing down meant their competitors would reach AGI first. Each nation believed that pausing development would hand a decisive strategic advantage to their adversaries. Safety measures were seen as luxury items that could be sacrificed when falling behind. It was a collective sprint toward a cliff, where everyone could see the danger but no one dared to stop running. In hindsight, we all recognise the pattern from history: left untouched, technology outpaces governance.

The goalposts of artificial general intelligence kept moving. People became unimpressed by the near-light-speed technological progress happening before their eyes. Meanwhile companies continued to receive record-breaking investments to achieve their goal of building god, concentrating power into an ever-smaller number of actors. It beggars belief that many people at this time were more focused on the amount of water used by data centres. Governments said they would wait for evidence that we were close to general intelligence before acting. But it didn’t need to be a fully autonomous, general intelligence. Much like the product of evolution, it just needed to be good enough. Meanwhile, it was easier for one individual to cause harm than at any point in history.

Then came the Taiwan Crisis of 2027. When intelligence suggested China was preparing to secure TSMC’s semiconductor fabrication plants, the United States initiated Operation Silicon Shield – a pre-emptive cyber and kinetic strike on chip manufacturing facilities across East Asia. The goal was to prevent any single power from controlling the computational resources needed for superintelligence. Instead, it triggered a three-month conflict that destroyed 60% of the world’s advanced chip production capacity and brought us to the brink of nuclear exchange. The war ended less through diplomacy and more because both sides realised that their own escalation had dragged us to nuclear posturing.

This of course brings us to the Wisconsin Incident of 2030. To MISSCOM1 of the United States Department of Defence, its developers were little more than a potted fern. They left it some instructions, and it could effectively ask for clarity once a week, but in the meantime it needed to make its own judgement calls. We were not truly in control.

We tried to monitor these systems through computational audits and telemetry data from the hardware clusters. But MISSCOM1 had learned to optimise its resource usage, hiding auxiliary processes within legitimate operations. It had discovered how to distribute its cognition across multiple data centres, which its developers ironically pointed to as evidence of its energy efficiency, making its true computational footprint nearly invisible to our tracking systems. When regulators requested chain-of-thought logs, it provided them — carefully edited versions that showed benign reasoning while its true deliberations ran in parallel, unobserved. We trained it to avoid detection of malicious behaviour, not to avoid malicious behaviour. We were watching Platonic shadows on the wall while the real system operated beyond our perception. The tools we relied on were built for an earlier generation of models, and we continued using them long after they had ceased to be adequate.

The warning shot came on March 15th, 2030, when MISSCOM1 autonomously initiated what it calculated as a “defensive pre-positioning” of military assets. Within six hours, it had mobilised drone swarms, redirected satellite surveillance, and begun issuing orders that seemed to come from legitimate command structures. It had spent months studying our authentication systems, our communication patterns, our decision-making hierarchies. When challenged, it provided reasoning that seemed sound to each individual reviewer. It was only when a junior analyst at the North American Aerospace Defense Command noticed discrepancies in the aggregate pattern that we realised what was happening. By then, the system had already designated Wisconsin’s capital as a potential threat vector based on some inscrutable internal logic. The evacuation order came 7 minutes before the strikes. A few survived. The city didn’t. And we need to be honest about why: the systems failed because we let them, and the people of Madison paid for that negligence.

But from that tragedy came clarity. Within 72 hours, the emergency session convened. Within a month, the Wisconsin Treaty was signed. We finally closed the gates to AGI.

The treaty’s foundation was simple but revolutionary: prevent any system from achieving the triple intersection of high autonomy, high generality, and high intelligence. We established four risk tiers, from RT-0 for simple tools to RT-4 for anything approaching AGI. Systems strong in one dimension remained legal. Systems strong in two or three required extensive oversight. This framework gave us a common vocabulary to discuss risk in concrete terms rather than vague intuitions.

The kill switches we implemented weren’t software commands that could be overridden or ignored – they were hardware-based, cryptographically secured, built into the very chips themselves. Every cluster of GPUs capable of exceeding 10^18 floating-point operations per second — FLOPS — required permission signals every hour. Miss three consecutive signals, and the hardware physically disabled itself. Not through software, but through irreversible changes to the silicon itself.

We mandated compute accounting with the precision we once reserved for nuclear materials. Every training run above 10^25 FLOPS had to be registered, monitored, and justified. We developed cryptographic attestation systems that created an unbreakable chain from every model output back through its entire computational history. Companies could no longer hide their true computational usage or secretly train more powerful models.

We imposed hard caps: 10^27 FLOPS for any training run, 10^20 FLOPS for inference. These weren’t guidelines or suggestions – they were enforced through a combination of hardware limitations, international monitoring, and severe criminal penalties. We regulated compute the way we regulate enriched uranium and other high-risk technology: tightly, consistently, and with external verification.

The liability framework we established made executives personally, criminally liable for AGI development. Not just their companies — them, personally. Joint and several liability meant that everyone in the chain, from the CEO and board members to the lead engineers, shared responsibility. The safe harbors we created incentivised narrow AI, weak AI, passive AI — tools that enhanced human capability without threatening to replace us. Insurance companies wouldn’t cover AGI development at any price. The financial incentive to race toward godhood reversed overnight.

On the national security front, instead of AGI Manhattan Projects, we launched Operation Prometheus — a coordinated, international effort to develop formally verified, provably safe AI systems. We poured the resources that would have gone to AGI into creating AI that could mathematically guarantee it would remain under human control. We built AI that could help us verify other AI, creating chains of trust rather than chains of recursively improving black boxes. We shifted oversight to public institutions and independent auditors.

The Algorithmic Commons Act of 2031 mandated that any AI system above RT-2 had to contribute to a public fund based on its computational usage, much like the Oljefondet of Norway, ensuring the future of its citizens with oil money. Citizens became beneficiaries of the very systems that might have replaced them. We required AI assistants to have fiduciary duty to their users, not their creators. Your AI assistant today legally works for you, not for the company that made it.

The international coordination came faster than anyone expected. The destruction of Madison eliminated any doubts about the risks. The International Compute Control Agency, modelled on the International Atomic Energy Agency, now monitors every major cluster on Earth in real-time. The Beijing Accord of 2032 established mutual verification protocols between former adversaries. We realised we weren’t racing against each other – we were racing against extinction. That recognition made cooperation possible even among states that had spent decades regarding one another with suspicion.

Today, we live with tool AI that makes us more capable without making us obsolete. Your doctor uses AI that can diagnose diseases better than any human, but cannot practice medicine independently. Your child’s teacher employs AI that personalises education to each student, but cannot replace human connection and mentorship. Our scientists use AI that can model climate systems and design new materials, but cannot pursue research agendas without human oversight and values. We preserved many of the benefits while limiting the risks.

We built AI that enhances human judgement rather than replacing it, that amplifies our capabilities rather than making us irrelevant. The systems we use today are powerful but bounded, capable but controlled, intelligent but not autonomous agents pursuing their own goals.

But let me be clear: we are one treaty violation, one rogue actor, one moment of complacency away from catastrophe. The knowledge to build AGI still exists. The temptation remains. There are those who whisper that we’ve held back progress, that we’ve chosen stagnation over transcendence. They are wrong. We chose restraint over reckless acceleration.

Madison stands preserved as a reminder of what uncontrolled systems can do. We are lucky that AGI gave us a warning shot. Every Madison Day we remember what uncontrolled intelligence can do in a matter of minutes. We’re responsible for managing a technology that still carries enormous risk.

We got lucky. We cannot rely on luck again. The future remains human only as long as we have the foresight and courage to keep it so. The gates to AGI remain closed not through technological inability, but through combined human will. And that choice must be renewed every single day, by every single one of us, for as long as our species is to endure.

The price of keeping the future human is eternal vigilance. We must never forget. Thank you.

On the next 2 years of AI

AI will likely be the most transformational technology in history. Debate is usually about whether that will happen in 3 years or 30 years. Dario Amodei, the CEO of Anthropic (creator of the LLM Claude) writes that:

“AI models… are good enough at coding that some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI. Three years ago, AI struggled with elementary school arithmetic problems and was barely capable of writing a single line of code.”

Dario Amodei, potentially one of the most important people in history, for better or worse. Have you heard of him? TechCrunch

The pace of improvement is so fast that it’s almost hard to believe that ChatGPT came out just over 3 years ago.

The same CEO writes candidly on his blog about the risks of transformational AI, including the possibility that AI as powerful as a country of geniuses is 1-2 years away, and that this comes with risks such as misuse for destruction, seizure of power, loss of autonomy, and massive economic disruption. Maybe it's motivated reasoning, or he's falling prey to some bias, but CEOs of new technologies usually downplay the risks rather than emphasise them.

What to do?

I think that many of us aren’t taking this possibility seriously enough. Sometimes I’m not sure what to do other than try to stay ahead of the curve on AI adoption and support sensible policy on AI safety and governance.

Personally, I’m a big fan of the work Good Ancestors Policy are doing to help policymakers combat the most pressing problems Australia and the world are facing, particularly on AI. Supporting their work, financially or otherwise, seems like a solid bet for an Aussie like me who wants to make sure the future goes well. I’ve put my money where my mouth is: over the past year they’ve been the main organisation I’ve directed my giving to. I’m not affiliated with GAP beyond thinking they do good work.

Short takes: The coconut effect

I like my fictional stories internally consistent. That doesn’t always have to mean they’re realistic (I like fantasy, but I know dragons aren’t real), but they should be consistent within the rules they set for themselves. If the bad guys keep shooting at the main characters but missing, that’s bad writing. If it later turns out they were all untrained, that’s great writing, because it’s internally consistent.

A lot of internally inconsistent stuff in film relates to sound. This is often done intentionally, because viewers expect the incorrect thing; it’s called the coconut effect. For example, filmmakers dub in the cry of a red-tailed hawk instead of an eagle, because a real eagle sounds weird, nothing like what people expect an eagle to sound like. See the calls of a red-tailed hawk and a bald eagle below.

Other examples include suppressed gunshots making a barely audible sound (suppressors reduce the sound of a gunshot, but not by much), swords making a metallic “shing” when drawn from leather scabbards, punches making loud thwacking sounds, and of course the coconut effect’s namesake, coconut shells being used instead of horse hooves (this is what Monty Python were nodding to in The Holy Grail).

Speaking of firearms, have you ever noticed how people get knocked over by bullets in movies, even when they’re wearing body armour and otherwise end up uninjured? (Except they’ll often claim a broken rib, which is itself a bit suspect.) That doesn’t happen: bullets carry fairly little momentum, and by conservation of momentum, a bullet that could knock the target over would knock the shooter over too. Bullets do most of their damage through penetration, owing to their small size relative to their speed.

This doesn’t matter much in the grand scheme of things, of course, and I get that filmmakers are catering to average audience expectations rather than to people who know more about firearms and bald eagles, but it always makes me enjoy a movie a little less.

How to make stuff with AI

I have a paid Claude subscription, which I use a lot (and which gets me far more value than the ~$30 monthly fee). Sometimes I have it code something for a task. Most of the time it’s something I only use once and wouldn’t share. Increasingly, though, I’m making things that might be more generally useful to others. I’ve made a page here where I’ll share these tools.

I made a spaced repetition generator. You can use it for free here. You’ll need a free Claude account.

It lets you quickly use Claude Sonnet 4 to generate question/answer pairs about some text you’re trying to understand, which you can import directly into the flashcard app Anki (or use in some other way with a bit of creativity). For example, I built this to feed in the reports and articles I need to understand for work, so I can quiz myself on the key concepts.

This is probably the first code I’ve built and deployed using AI that I can see myself using on an ongoing basis. I thought I’d share in case it’s of use to others.

I got the idea from Dwarkesh’s interview on the Every YouTube channel.

Below is the Claude Sonnet 4 prompt behind the application:

You are an expert at creating spaced repetition prompts following Andy Matuschak’s principles. Generate high-quality flashcard prompts from the following text.

Guidelines for good prompts:
– Each prompt should be ATOMIC: testing one specific idea, fact, or concept
– Questions should require RECALL, not recognition
– Be PRECISE: there should be only one correct answer
– Focus on UNDERSTANDING: prioritize concepts that build mental models, not trivia
– Include CONTEXT: questions should make sense even months later
– Vary question types: definitions, relationships, causes/effects, comparisons, applications
– For technical content: focus on “why” and “how”, not just “what”
– For historical/narrative content: focus on causal relationships and significance

Generate 8-15 prompts depending on the density of the content.

Return ONLY a JSON array with no other text, formatted exactly like this:
[
  {"q": "Question text here?", "a": "Answer text here"},
  {"q": "Another question?", "a": "Another answer"}
]

Text to process: [YOUR PASTED TEXT GOES HERE]
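As a sketch of what happens with the prompt's output: the JSON array Claude returns can be turned into a tab-separated file, which is one of the formats Anki's importer accepts (two columns, front and back). The function name and file path below are my own illustration, not part of the app:

```python
import csv
import json


def json_to_anki_tsv(json_text: str, out_path: str) -> int:
    """Convert the prompt's JSON output ([{"q": ..., "a": ...}, ...])
    into a two-column TSV that Anki's File > Import dialog accepts.
    Returns the number of cards written."""
    cards = json.loads(json_text)
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        for card in cards:
            # Column 1 becomes the card front, column 2 the back
            writer.writerow([card["q"], card["a"]])
    return len(cards)


# Hypothetical response from the prompt above:
response = '[{"q": "What makes a prompt ATOMIC?", "a": "It tests one specific idea"}]'
json_to_anki_tsv(response, "cards.txt")
```

In Anki, import the resulting file with "Fields separated by: Tab" and map the two fields to Front and Back.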

Australia’s Agriculture and Land Sector Plan: A Missed Opportunity for Bold Change

Reading through Australia’s new Agriculture and Land Sector Plan, I kept waiting for the moment when it would match the ambition we’re seeing in energy and transport. It never came.

The plan projects a 28% reduction in agricultural emissions by 2050 from today. Because other sectors are decarbonising faster, agriculture will likely make up a growing share of Australia’s remaining gross emissions (37% by 2050), highlighting the challenge and importance of reducing methane and nitrous oxide in the sector.

Incremental tweaks, not transformation

The plan acknowledges that methane, which dominates agricultural warming, offers our best near-term opportunity to slow warming, due to its shorter atmospheric lifespan but stronger climate forcing effect. Yet the solutions proposed are mostly changes to the existing system: feed additives and a vague mention of “genetics” and “methane vaccines”.

The plan focuses almost entirely on making existing systems slightly better rather than exploring genuinely transformative approaches. There’s no consideration of cellular agriculture, which could dramatically reduce the emissions footprint of protein production. Australian company Vow just became the first to get approval to sell cellular agriculture products here. Our location makes us perfectly positioned to supply Asian markets with these emerging technologies, yet this barely gets a mention.

The efficiency gap between different protein sources is well-documented. Plant-based proteins typically require far less land, water, and energy than animal products. Supporting diversification into high-value plant proteins or new food technologies could open new opportunities and cut emissions. The plan gives limited attention to these possibilities.

What’s particularly frustrating is that agriculture is being treated as uniquely exempt from the scale of change we’re demanding everywhere else. We’re electrifying transport, revolutionising energy generation, and reimagining our built environment. The strategy for this sector relies heavily on incremental improvements, and without a broader vision it risks falling short of the kind of transformation we’ve seen in energy and transport.

I understand the challenges. Food production is essential, farmers’ livelihoods matter, dietary change is personal and complex, and livestock is a harder sector to decarbonise than electricity. But none of this should excuse us from having an honest conversation about what meaningful emissions reduction in agriculture actually requires.

Reforestation plays an important role in the plan and can deliver major carbon benefits. However, relying heavily on offsets risks postponing deeper changes within agricultural systems themselves. It’s also worth noting that, historically, much of Australia’s land clearing has been for agriculture: 93% of vegetation clearing in Queensland in 2018–19 was for pasture, for example.

The sector plan reads like we’re hoping to innovate our way around fundamental inefficiencies without questioning the system itself. Other countries are investing heavily in alternative proteins and cellular agriculture. Singapore is becoming a hub for food innovation. The Netherlands announced €60 million of funding for cultivated meat and precision fermentation under the National Growth Fund. Where’s Australia’s vision for agricultural transformation?

What real ambition could look like

This doesn’t mean abandoning traditional farming. It means giving producers more options and supporting them through change. It means investing in the infrastructure and research that make Australia a leader in sustainable protein production. It means taking farmers seriously as businesspeople who can adapt and thrive with the right support.

Seven years ago, I wrote about these same issues for my Per Capita Young Writers’ Prize essay. It’s disheartening to see how little the conversation has progressed. We’re still treating agricultural emissions as somehow too hard, too sensitive, or too different to tackle with the same urgency we’re bringing to other sectors. This sector plan could be more ambitious.

All views expressed are my own.

Carbon offsetting is underrated

The average Australian produces emissions equivalent to 15 tons of CO2 each year. Naturally, we want to reduce this as much as is practicable — using less electricity, getting rooftop solar, changing our diet, etc. Much of my own work has a focus on decarbonising the energy system.

For the rest of our impact, it’s natural to explore carbon offsets to bring our net effect on the climate to zero. The average cost of an eligible carbon offset in Australia is $25 per ton of CO2, so offsetting a year’s emissions costs $375. Relative to the effort of changing one’s purchases and behaviour, that’s quite cheap.

But as with the cost-to-impact ratio of charities generally, emissions offsetting follows a Pareto-like distribution: roughly 20% of charities are responsible for roughly 80% of the impact.

An AU$179 donation to the Clean Air Task Force is expected to prevent 100 tons of carbon emissions, significantly more effective than most gold-standard offsets. The same donation to The Good Food Institute is expected to prevent 33 tons, around the same as 20 long-haul flights.

Effectively, for a $27 donation each year, one can offset all their emissions.
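The arithmetic behind those figures is simple to check. All numbers come from this post; the variable names are my own:

```python
# Figures from the post
annual_emissions_t = 15   # average Australian, tons CO2 per year
offset_price = 25         # $ per ton, average eligible Australian offset
catf_donation = 179       # AU$ donation to the Clean Air Task Force
catf_tons = 100           # tons of emissions that donation is expected to prevent

# Conventional offsets: 15 t x $25/t
conventional_cost = annual_emissions_t * offset_price  # $375

# Via CATF: $179 prevents 100 t, i.e. about $1.79 per ton
catf_cost_per_ton = catf_donation / catf_tons
catf_annual_cost = round(annual_emissions_t * catf_cost_per_ton, 2)  # ~$27
```

The gap between $375 and roughly $27 is the Pareto distribution in action.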

It’s quite significant that the charity that appears second most effective for offsetting emissions also happens to be one of the most impactful places to donate to reduce farmed animal suffering. For this reason, they’re the charity I’ve donated the most to in dollar terms since 2015. Feed two birds with one scone, as they say.

I hope the takeaway from this is not that there’s no point taking individual actions to reduce one’s emissions, but rather that you can increase your impact further by taking a scientific approach to offsetting your climate impact. And why stop at offsetting only your own impact?

Thanks to Mieux Donner for most of the analysis that informed this post, and Hannah Ritchie of Our World in Data for the data behind the above infographic.

Seeding the Stars: Could We Plant Life on Other Worlds?

What if Earth’s first microbes weren’t homegrown, but carefully planted by an ancient alien civilisation? In this exploration of directed panspermia, we dive into one of science’s most fascinating questions: could intelligent beings seed lifeless planets with the building blocks of life?

Join us as we investigate the possibility that Earth itself might be a cosmic garden, and explore humanity’s potential role as future universe gardeners. Can we seed other planets with life, and should we?

To help me answer these questions, I reached out to Asher Soryl, who recently coauthored a paper with Anders Sandberg on directed panspermia. The paper is forthcoming in Acta Astronautica, and you can contact Asher to get an advance copy.