Blog

Mapping Australia’s battery boom by postcode

The Australian Government’s Cheaper Home Batteries program has nearly tripled Australia’s household battery capacity in just eight months. I pulled publicly available data from the Clean Energy Regulator and Energy Consumers Australia to dig into what’s actually happening — where the batteries are being installed, who’s buying them, why, and what it all means for electricity bills and the grid. I also built an interactive postcode-level map so you can explore the data yourself.

Some of the findings are surprising. Average postcode income has no correlation with battery uptake. People are buying batteries roughly double the size they were before the rebate. And while nearly 60% of survey respondents say they’re interested in joining a virtual power plant, fewer than 10% of battery owners actually have — a gap that’s already reshaping AEMO’s long-term plans for how much grid-scale storage Australia will need.

Read the full analysis on Energy x AI, my newsletter covering AI infrastructure, Australian energy regulation, and the intersection of the two.

Read the full post on Substack →

Data centres in space: all the pros and cons

Data centres in space — crazy but maybe not quite as crazy as you think. Here are all the pros and cons of putting data centres in space so you can impress your colleagues and friends the next time this comes up in conversation.

People started talking a lot about data centres in space around 6 months ago. Starcloud and SpaceX have both submitted FCC proposals to put satellite-based data centres in space, and Google wants to play too with Project Suncatcher. I went from thinking it was totally crazy (mostly because I know that space is hard and putting stuff in orbit is really expensive) to thinking it’s possible and maybe even smart within 3 years with the right scaling circumstances.1

Feel free to share this with anyone who is really confident data centres in space definitely will or won’t be a thing.

The case for

Effectively free, 24/7 solar power

In a sun-synchronous orbit, solar panels receive near-constant sunlight — no night, no clouds, no atmosphere. The capacity factor is above 95%, compared to ~24% for terrestrial solar in the US. Solar irradiance is also about 36% higher above the atmosphere, and Google claims panels can be up to 8x more productive in orbit. And as Elon Musk pointed out on a recent Dwarkesh podcast, solar cells built for space can actually be cheaper to manufacture — they don’t need heavy glass or robust framing because there’s no weather to survive. I mean, space solar is about 1,500-3,000 times more expensive than terrestrial solar today, but maybe someday, sure.
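
As a rough sanity check on those numbers, here’s a back-of-envelope comparison of annual energy yield per square metre of panel in orbit versus on the ground. It’s a minimal sketch: the irradiance, capacity factor, and efficiency figures are assumptions taken from the text or typical values, not Google’s actual methodology.

```python
# Back-of-envelope: annual energy yield per m^2 of panel, orbit vs ground.
# Assumed figures: ~1361 W/m^2 solar constant above the atmosphere vs ~1000 W/m^2
# peak at the surface, ~97% capacity factor in a sun-synchronous orbit vs ~24%
# for terrestrial solar in the US, and the same panel efficiency in both cases.

HOURS_PER_YEAR = 8760
PANEL_EFFICIENCY = 0.20        # assumed; cancels out in the ratio anyway

orbit_irradiance = 1361        # W/m^2, above the atmosphere
ground_irradiance = 1000       # W/m^2, standard test conditions at the surface
orbit_cf = 0.97                # near-constant sunlight in a sun-synchronous orbit
ground_cf = 0.24               # average US terrestrial solar capacity factor

orbit_kwh = orbit_irradiance * PANEL_EFFICIENCY * orbit_cf * HOURS_PER_YEAR / 1000
ground_kwh = ground_irradiance * PANEL_EFFICIENCY * ground_cf * HOURS_PER_YEAR / 1000

print(f"Orbit:  {orbit_kwh:,.0f} kWh per m^2 per year")
print(f"Ground: {ground_kwh:,.0f} kWh per m^2 per year")
print(f"Ratio:  {orbit_kwh / ground_kwh:.1f}x")
```

With these assumptions the ratio comes out around 5–6x rather than 8x (Google’s figure presumably uses a less generous terrestrial baseline), but the order of magnitude is the same.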

No need for batteries

On Earth, solar-powered data centres need battery storage (or need to draw from the grid) to cover nighttime and cloudy periods. In a sun-synchronous orbit, solar is near-continuous, eliminating the need for batteries entirely. This significantly increases the cost advantage of orbital solar over terrestrial solar-plus-storage.

Fewer permitting and land use constraints

This is the “Abundance” argument. On Earth, getting permits for large-scale energy and data centre projects can take years. As Musk puts it: “it’s harder to scale on the ground than it is to scale in space.” In space, no one can hear you(r) scream(ing data centre). There’s no social licence to manage, no local planning regimes, no environmental reviews, no connection agreements with utilities. That said, orbit isn’t unregulated — you still need FCC approval2 (or equivalent national authority) for satellite deployment, and the International Telecommunication Union coordinates frequency bands and orbital positions internationally to prevent interference. SpaceX’s million-satellite orbital data centre filing is currently in FCC review. But the regulatory burden is arguably much lighter than siting a gigawatt power plant and data centre on Earth — no land acquisition, no grid interconnection studies, no water rights.

No water needed for cooling

Schematic of a typical evaporative cooling system in a data centre (US Department of Energy)

Terrestrial data centres that use evaporative cooling consume water — it circulates through the cooling system but eventually evaporates. That said, as Andy Masley has written, the water issue is often overstated: only about 10% of water attributed to AI use is consumed directly onsite at the data centre or the associated power plants through evaporative cooling, and the remaining ~90% is non-consumptively withdrawn by power plants and returned to the source.3 On the other other hand, water used to cool open-loop power plants before being returned to source isn’t zero impact — it’s returned warmer than it was drawn4, and this has to be managed to ensure the environmental impacts don’t breach certain limits.5 Also, fish are frequently killed by impingement, where they are trapped against the water intake filters, and early-life-stage fish are killed by entrainment,6 where they are drawn through pumps and heat exchangers.7, 8

Fish stuck against an intake structure (NRDC)

In space, cooling is done via radiators that reject heat as infrared radiation (see appendix). This isn’t necessarily easier than terrestrial cooling — “the radiator mass and area are hypothesised to dominate the entire spacecraft”9 — but it does mean you don’t need water for cooling.

The case against

Launch costs

Getting mass to low Earth orbit (LEO) today costs around $1,500/kg on a Falcon Heavy and $2,720/kg on a Falcon 9. Google’s Project Suncatcher team estimates launch costs need to fall below $200/kg for orbital data centres to be roughly cost-competitive with terrestrial data centres on energy costs, which they project could happen by the mid-2030s. It may be a question of when, not if — but it isn’t cheap today.

Low or zero serviceability

It’s hard and almost certainly not cost-effective to swap out damaged or end-of-life hardware in orbit.10 These may be essentially disposable data centres, replaced every 5–6 years, compared to data centres on Earth that last longer but can have their GPUs swapped out at end-of-life. Musk counters this by arguing that GPUs tend to fail right after they’re made or not at all, meaning you could test each one before putting it on the satellite and mitigate losses that way.

Radiation

Radiation can lead to cumulative degradation of electronics over time. However, most proposed orbital data centres would sit below the inner Van Allen belt (starting at around 640 km). At these altitudes the radiation environment is relatively mild and commercial off-the-shelf components should be viable with appropriate screening. The main concerns would be galactic cosmic rays, solar particle events, and the South Atlantic Anomaly where the inner belt dips closer to Earth. Google tested its Trillium TPUs in a proton beam simulating five years of LEO radiation — the logic survived fine, but high-bandwidth memory was the most sensitive component.11 Their two prototype satellites in partnership with Planet will test this in orbit in 2027.

The Van Allen belts (Booyabazooka)

Latency and bandwidth

For many AI workloads, fibre connections on the ground will probably always be faster than bouncing data to and from orbit. Laser inter-satellite links can achieve multi-Gbps to 100+ Gbps per beam, and Google hit 1.6 Tbps in lab tests, but the bandwidth for returning the signal to Earth would likely be the bottleneck.12 This probably means orbital data centres are best suited to training and batch inference rather than real-time applications.

Space debris

More satellites means more collision risk. At scale, this contributes to orbital congestion and the potential for a Kessler syndrome cascade (Gravity was a terrible movie but it did introduce this risk to a mainstream audience). Orbits are a shared, finite resource. Space is big, but the space around our planet isn’t that big, especially if you’re as scale-pilled as Musk.

Environmental impact of launches

Building and launching thousands of rockets per year has its own carbon and atmospheric footprint. An EU-funded study (ASCEND) found that space data centres only beat terrestrial ones on carbon if the launcher is reusable and emits less than 370 kgCO2/kg of payload over its lifespan.

Net assessment

Most of these cons are engineering problems that get cheaper over time. It’s not going to happen this year13, but I wouldn’t bet against it happening this decade.

Launch costs are falling fast. Falcon Heavy is at $1,500/kg today; Starship could plausibly get this below $100/kg within a few years.

But battery costs are falling fast too — and that goes the other way. One of the biggest advantages of orbital solar is that you don’t need batteries. On Earth, solar requires batteries to cover nighttime and cloudy periods, and those batteries are expensive. Except they’re getting dramatically cheaper, and people keep underpredicting the pace of renewable energy development. Even optimistic forecasters keep underestimating future battery price reductions.14

Lithium-ion pack prices hit a record low of $108/kWh in 2025, down 93% since 2010. Stationary storage specifically has plunged to $70/kWh, a 45% drop in a single year. The cheaper batteries get, the less it costs to pair terrestrial solar with storage — and the weaker the “no batteries needed” advantage of space becomes.

So the question is which curve wins: the cons getting smaller, or batteries getting cheaper?


Appendix: How do you cool something in a vacuum?

Some folks may not know how stuff can cool down in space with no air or water around it. Satellites, like any other body in space, cool down via blackbody radiation. Any object above absolute zero will radiate heat, but spacecraft and satellites have radiators designed to do this more efficiently. Radiators reflect visible light and sunlight while radiating out infrared. Heat from individual components (like GPUs, one day) is transferred to the radiators via fluid circulation in heat pipes or through highly conductive pathways like metal.
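
To get a feel for the scale involved, here’s a minimal sketch of radiator sizing using the Stefan-Boltzmann law. The numbers are assumptions (a 1 MW IT load, radiators at 300 K with emissivity 0.9, absorbed sunlight and view factors ignored), so treat it as an order-of-magnitude illustration only.

```python
# Radiator area needed to reject heat purely as infrared radiation.
# Stefan-Boltzmann: P = emissivity * sigma * A * T^4 (single radiating face assumed).

SIGMA = 5.670e-8  # W/m^2/K^4, Stefan-Boltzmann constant

def radiator_area(heat_watts: float, temp_k: float = 300.0, emissivity: float = 0.9) -> float:
    """Radiating area (m^2) needed to reject heat_watts at temperature temp_k."""
    return heat_watts / (emissivity * SIGMA * temp_k ** 4)

heat_load = 1_000_000  # 1 MW of IT load, an assumed example
print(f"~{radiator_area(heat_load):,.0f} m^2 of radiator at 300 K to reject 1 MW")
# ~2,400 m^2, which is why radiator mass and area loom so large in orbital designs.
```

Running the radiators hotter shrinks the area quickly (it scales with 1/T⁴), but that means running the chips hotter too.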

The ISS is a good example. Note how the radiators are perpendicular to the solar arrays (and the direction of the sun) so they heat up even less.

Solar arrays and radiators on the International Space Station (Xie and Burger 2016)

A sun-synchronous orbit means constant sunlight, which means more heat to deal with, but thanks to the magic of radiators it’s not an insurmountable problem.


  1. Smart from one point of view, which doesn’t necessarily mean I support it. ↩︎
  2. I enjoyed seeing the FCC need to explain what a Kardashev II-level civilisation is in a regulatory document. ↩︎
  3. See this Semi Analysis post for more about data centre cooling systems. Also, Microsoft’s latest data centres use closed-loop cooling that requires zero water for evaporation. ↩︎
  4. Increased river/lake water temperature can stress or kill fish and other wildlife. This is doubly harmful because elevated temperatures typically decrease the level of dissolved oxygen while also increasing metabolic rates, so organisms need more oxygen. ↩︎
  5. Even small temperature increases can cause decline in bottom-dwelling organisms. Organisms already in warmer environments are even more vulnerable to additional thermal stress. ↩︎
  6. In 2005-06 a coal plant in Ohio killed ~46 million fish and ~2 billion fish eggs and larvae. ↩︎
  7. Going even deeper down the rabbit hole we can see that impingement and entrainment don’t affect fish population levels because many of them would have died anyway, other things like pollution have a much greater effect, and “only” ~10% of the wild population died due to the coal plant in the case of the Ohio Bay Shore coal plant and Maumee River. I’m also not sure what the net effect is on wild-animal suffering of a fish/larvae dying in a water intake vs dying naturally. ↩︎
  8. Note that these are problems that any new electricity load would cause, and as we’re trending away from thermal generation (coal and nuclear — gas too but it uses less water for cooling) and towards renewables, this problem may reduce in scope. ↩︎
  9. As a counterview, Mach33 Research say that for small spacecraft radiators are only 10-20% of total mass and ~7% of total planform area. ↩︎
  10. You may as well just launch another satellite instead of maintaining another. ↩︎
  11. The primary radiation effect at LEO altitudes is single-event bit flips in memory caused by energetic particles. For traditional software, even a single bit flip can be catastrophic, but large neural networks may be inherently resilient: as Musk put it on the same Dwarkesh episode, “if you’ve got a multi-trillion parameter model and you get a few bit flips, it doesn’t matter.” Memory shielding can further reduce the risk, and Google’s proton beam tests suggest that with appropriate screening of the most vulnerable components (particularly high-bandwidth memory), radiation is a solvable engineering problem. ↩︎
  12. The current top downlink for space-to-ground is 200 Gbps with NASA’s TBIRD system, compared to a modern submarine fibre cable with 200+ Tbps. Maybe that’s not a fair comparison. ↩︎
  13. It could happen today if someone wanted to, approval aside, but what I really mean is it’s not going to be better than terrestrial data centres this year. ↩︎
  14. The solar hedgehog graph is one of my top 10 favourite graphs. They just keep underestimating solar growth. ↩︎

Claude’s off-peak promotion is smart for servers, possibly bad for the grid

Anthropic’s off-peak Claude promotion is smart for their servers — but it could mean more AI compute when the grid is most stressed.

Anthropic is trialling double Claude usage outside of peak Claude time (8 am to 2 pm ET on weekdays), or 10 pm to 4 am AEST, for 2 weeks — pretty handy for us Aussie users. This is likely motivated by a desire to reduce server demand during peak usage hours, but counterintuitively, it could lead to more demand at peak electricity times when the grid is already most stressed.

Why are they doing this?

Anthropic hasn’t explicitly said why, but the timing is telling. Claude went down on the 2nd of March1 due to “unprecedented demand” — partly driven by a surge in sign ups after ChatGPT’s outage on the 27th of February.2 There was another significant Claude outage on March 11 — just two days before this promotion launched.

If Anthropic can shift user load away from US business hours, they get the same total usage for less server cost. The marginal cost of running a GPU outside peak time is low, so encouraging more usage then is a straightforward win — much like the benefits of electricity load shifting for better network utilisation.

There’s also an IPO angle. As one Hacker News commenter said: “Maybe it’s a little bit of [hardware utilisation], and a bit of boosting monthly average users and token average usage. Anthropic should be IPOing this year and higher usage stats I’m sure will help.”

Does the peak window actually match usage?

Claude has users globally, so why was US-peak time specifically chosen? The US dominates Claude usage — Anthropic hasn’t disclosed exact figures, but the US accounts for 22% of Claude.ai usage, far ahead of the next closest countries, India (5.8%) and Japan/South Korea (3.1% each), according to the Anthropic Economic Index. Most inference also likely runs on US-based hardware, given Anthropic’s cloud partnerships with AWS and Google Cloud, though regional processing launched in mid-2025.

Per capita Claude usage by country (Anthropic Economic Index).

I wanted to see whether the promotion’s peak window lines up with actual internet activity, so I pulled global HTTP traffic data from the Cloudflare Radar API. Cloudflare’s traffic is disproportionately US-based,3 which is a flaw for measuring “global” internet usage — but since Claude usage is also US-dominated, the bias very roughly cancels out for this purpose.4 The usage peak lines up well with the promotion’s peak window.

Source: Cloudflare Radar API, 7-day average (human traffic only, bot-filtered), hourly resolution. Normalized 0–1 where 1 = peak hour. Converted from UTC to ET.
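
For those curious about the processing behind that chart, here’s a minimal sketch of turning raw hourly traffic values into the normalised profile the caption describes. The input here is synthetic so the snippet runs end to end; it is not the actual Cloudflare Radar API response shape.

```python
import numpy as np
import pandas as pd

# Assumed input: one traffic value per hour (UTC) over 7 days, e.g. pulled from the
# Cloudflare Radar HTTP timeseries (human traffic, bot-filtered). The values below
# are synthetic stand-ins purely so the example runs.
timestamps = pd.date_range("2026-03-01", periods=7 * 24, freq="h", tz="UTC")
traffic = 1.0 + 0.4 * np.sin((timestamps.hour.to_numpy() - 6) / 24 * 2 * np.pi)

df = pd.DataFrame({"timestamp": timestamps, "traffic": traffic})
df["hour_et"] = df["timestamp"].dt.tz_convert("America/New_York").dt.hour

# Average each hour-of-day across the 7 days, then scale so the peak hour = 1.0.
profile = df.groupby("hour_et")["traffic"].mean()
profile = (profile / profile.max()).round(2)

print(profile)            # normalised 0-1 hourly profile in ET
print(profile.loc[8:13])  # the promotion's 8 am - 2 pm ET peak window
```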

But what about the electricity grid?

Anthropic is optimising for IT infrastructure peak, not electricity peak. Because of solar generation throughout the day, the main peak for grid-drawn electricity is typically in the evening — when everyone comes home, turns on the air conditioning, and cooks dinner. This gives rise to the famous duck-shaped net load profile: demand after subtracting variable renewable generation plunges during sunny midday hours and surges in the late afternoon.

Demand curve after subtracting variable renewable generation for the lowest net load spring day in CAISO, leading to the characteristic duck curve (EIA).5

This kind of incentive for demand shifting could become increasingly relevant as model inference uses more hardware and electricity, and the number of users increases. Perhaps in the future, as we see more local inference in different countries, we may see country- or timezone-specific peak/off-peak incentives.

Currently, Anthropic is likely just optimising for IT infrastructure peak rather than electricity peak, but there’s no reason they couldn’t do it for both peaks (e.g., double usage outside of 8 am to 2 pm and ~7 pm to 10 pm local time).

Illustrative grid-aware peak restriction alongside existing peak restriction. Sources: Cloudflare Radar API (Mar 2026, 7-day avg, human traffic, bot-filtered). Net load curve: indicative duck curve shape based on CAISO 2023 spring low day (EIA), plotted in ET for illustrative purposes. Net load = demand minus variable renewable generation.

Note: this is purely illustrative. I’ve taken the lowest net load day from California’s CAISO market (which has a more pronounced duck curve than the US east coast) in spring 2023. The actual impact depends on the local demand curve wherever Anthropic’s data centres (or their cloud servers) are located.
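
As a toy illustration of what a grid-aware version of the promotion might look like, here’s a small sketch that treats an hour as “peak” if it falls in either the data centre usage peak (8 am to 2 pm) or an assumed evening net-load peak (roughly 7 pm to 10 pm local time). Both windows are illustrative, not anything Anthropic has announced.

```python
# Toy rule combining a data-centre usage peak with an evening grid (net load) peak.
# Both windows are illustrative assumptions; "local" means the data centre's timezone.

DATACENTRE_PEAK = range(8, 14)   # 8 am - 2 pm: peak Claude usage window
GRID_PEAK = range(19, 22)        # ~7 pm - 10 pm: evening net-load peak (duck curve)

def is_offpeak(hour_local: int) -> bool:
    """True if the off-peak bonus (e.g. double usage) would apply in this hour."""
    return hour_local not in DATACENTRE_PEAK and hour_local not in GRID_PEAK

print([h for h in range(24) if is_offpeak(h)])
# [0, 1, 2, 3, 4, 5, 6, 7, 14, 15, 16, 17, 18, 22, 23]
# The 7-10 pm window is now excluded, steering bonus usage away from the net-load peak.
```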

This is probably the key takeaway for this post: counterproductively, the double usage for Claude outside of peak data centre hours could lead to more grid demand at the peak electricity times when the grid is already most stressed. If we’re optimising for minimising grid impacts, we’d actually want more usage during daylight hours, not less. That way we soak up excess solar while minimising peaker usage (such as gas turbines).

Demand shifting is everywhere

This isn’t the first example of demand shifting by an LLM provider. DeepSeek offered 75% (for R1) or 50% (for V3) API discounts during off-peak hours (16:30–00:30 UTC). But this kind of incentive could become increasingly relevant as AI workloads scale.

Encouraging load shifting in electricity use is a tale almost as old as time. Time-of-use tariffs are used to incentivise more or less load at certain times of day. Australia’s Solar Sharer initiative will make electricity free (up to a cap) for 3 daylight hours for those on certain plans.

Outside of the energy world, demand shifting signals can be seen in telecommunications (peak and off-peak internet speeds), public transport (cheaper fares outside of peak time), airlines (dynamic pricing by time of day and season), and ride sharing (Uber surge pricing). It’s an efficient signal to spread demand across the day, week, and year.

As AI workloads grow and their grid impact becomes harder to ignore, aligning compute incentives with electricity system needs may become not just sensible but necessary.


  1. The outage peaked at 6:40 a.m. ET. ↩︎
  2. Who among us hasn’t fired up ChatGPT after using up our daily Claude usage? ↩︎
  3. Cloudflare is used by 20.4% of all websites, and the US accounts for 47.6% of Cloudflare websites. ↩︎
  4. I’d welcome suggestions for better data sources. ↩︎
  5. Note that the duck curve is typically depicted differently in Australia, where it only subtracts household self-consumption rather than all variable renewables. ↩︎

How exactly do data centres affect electricity prices?

Anthropic, the creators of Claude, are opening an office in Sydney as part of their expansion to Australia and New Zealand, which rank 4th and 8th globally on per capita Claude usage. What does this mean for Anthropic and Australia?

The most common topics people use Claude for in Australia. The full data explorer here is worth playing with.

What Anthropic is actually doing here

Initially, this just means that Anthropic will have a bigger focus on supporting their products’ customers in Australia and New Zealand, such as Canva, Quantium, and Commonwealth Bank of Australia. However, Anthropic is also exploring expanding their compute capacity in Australia through third party partners, and they’re also “in early conversations about longer-term infrastructure in the region”. All this would mean more data centre load in Australia, probably on top of the increased load we’re already expecting.

There are two main types of AI data centre load: training and inference. Training is the computationally intensive process of building the model by feeding it vast amounts of data, which happens once (or occasionally) for each model version. Inference is the ongoing, per-query compute that happens every time someone sends a message to Claude or uses the API.

Anthropic already routes some inference traffic through Australia via cloud partners like AWS and Google Cloud, which both have Sydney regions. But Anthropic says expanding local compute capacity is one of the most consistent requests it hears from Australian enterprises and government agencies, particularly those with data residency needs — and it’s actively exploring this through existing third-party infrastructure.

It seems unlikely that Anthropic will train their models in Australia anytime soon. Frontier model training requires large, concentrated compute clusters — Anthropic’s $50 billion infrastructure deal with Fluidstack is building these in Texas and New York. The language in Anthropic’s announcement is carefully scoped to “compute capacity” — which is inference language. They’re also in “early conversations about longer-term infrastructure,” but training at scale in Australia would be a much bigger and more distant proposition.[1]

How data centres can raise electricity prices

It’s worth mentioning Anthropic’s existing principles for ensuring they don’t socialise the electricity costs associated with the demand growth driven by their training and inference load. Data centres (or any other growing source of load) can raise electricity prices in two main ways.

First, by requiring more generation capacity (or demand response). When new large loads like data centres connect to the grid, they increase total electricity demand. If that demand pushes up against supply constraints — particularly during peak periods — it can tighten the wholesale electricity market, driving up spot prices that flow through to all consumers. This can also bring forward the need for new generation investment. Demand response — paying large consumers to reduce their load during tight periods — can help, but it’s an additional cost borne by the system.[2]

Second, by requiring more electricity network infrastructure to accommodate peak demand. Transmission and distribution network costs are, in simple terms, ultimately paid for by all electricity consumers (including you and me). It shows up in our household electricity bill partly under the fixed daily charge,[3] and partly as a volumetric charge (the more energy you consume, the more of the total fixed network cost you pay for — we’ll come back to this later).

Anthropic has “committed” to:

  • Pay for 100% of the grid upgrades needed to interconnect their data centres
  • Procure new power and protect consumers from price increases
  • Reduce strain on the grid by investing in curtailment systems to cut power during peak demand
  • Be a responsible neighbour to the local communities around their data centres

It’s unclear whether Anthropic intends for these principles to apply to their global operations, or just in the US (“but AI companies shouldn’t leave American ratepayers to pick up the tab”).

When more load means lower bills

In theory, there is a pathway for increases in load to actually reduce electricity costs for other consumers. This is evidenced most neatly by electric vehicle (EV) charging. Because people charge their EVs throughout the day, they don’t significantly increase peak electricity demand, and it’s peak demand that drives the need to build more network.[4] Modelling conducted by Energy Consumers Australia and CSIRO, as well as direct evidence from California, shows that as more people buy and charge EVs, they take on a greater share of the network costs relative to those who don’t own an EV. That’s because network costs are recovered at least in part by the volumetric component of the electricity bill in Australia (and many other places).

The result is that EV owners save money because driving an EV is cheaper over the life of the car than driving an internal combustion engine vehicle, and non-EV owners save money because they pay less for the network.

Annual savings from electric vehicles over 20 years from 2023, 2030, 2040, and 2050 for EV and non EV-owning households. This analysis assumes the EV adoption targets in the 2022 ISP Step Change scenario are achieved. ECA
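
Here’s a stylised sketch of that mechanism: a fixed network revenue requirement recovered through a volumetric charge, with new EV charging added off-peak so it triggers no extra network build. All the figures are invented for illustration; they are not ECA or CSIRO modelling outputs.

```python
# Stylised illustration: fixed network costs recovered volumetrically.
# All figures are made up for illustration only.

network_cost = 10_000_000_000     # $/year network revenue requirement (fixed)
baseline_energy = 50_000_000_000  # kWh/year delivered before EVs

ev_energy = 5_000_000_000         # kWh/year of new EV charging, assumed off-peak,
                                  # so it triggers no additional network investment

charge_before = network_cost / baseline_energy                # $/kWh
charge_after = network_cost / (baseline_energy + ev_energy)   # $/kWh

household_kwh = 5_000             # typical household consumption, kWh/year
saving = (charge_before - charge_after) * household_kwh

print(f"Volumetric network charge: {charge_before*100:.1f} c/kWh -> {charge_after*100:.1f} c/kWh")
print(f"A non-EV household saves ~${saving:.0f}/year on the network component")
```

The same arithmetic runs in reverse if the new load lands at peak times and forces extra network spend, which is exactly the question for data centres below.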

Note that for this to remain true, a sizeable portion of the network costs will need to be recovered via volumetric charges. The Australian Energy Market Commission recently floated the idea of recovering more network costs via fixed charges, which has sparked lively discussion. There are advantages and disadvantages to doing it this way, to be sure,[5] but it’s worth noting that it would make increased electricity throughput from electric vehicles and other sources of load result in higher bills for other consumers rather than the other way around.

Could data centres lower electricity prices too?

So more electricity demand from EVs could actually save non-EV owners money. Could the same be true of data centres for non-data centre consumers? I don’t know, but it depends on how much data centres contribute to peak demand (both in terms of peak wholesale prices bumping up the spot price, and peak network constraints requiring more network to be built — these don’t always occur at the same time). That depends to a large degree on how flexible the data centre electricity load can be — i.e. can they precool before a demand spike, can they ramp down their compute during peak load, or can they rely on onsite battery/self-generation to ride through the peaks.

A key metric here is Power Usage Effectiveness (PUE) — the ratio of a data centre’s total energy consumption to the energy used by its IT equipment alone. A PUE of 1.0 would mean every watt goes to computing; anything above that represents overhead from cooling, power distribution, and lighting. According to the Uptime Institute (2025), the global average PUE in 2025 was 1.54, but Google has an average of 1.09.
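
A quick illustration of what those PUE figures mean for electricity demand, using an assumed 100 MW IT load:

```python
# PUE = total facility energy / IT equipment energy.
# For a given IT load, overhead (cooling, power distribution, lighting) scales with PUE - 1.

it_load_mw = 100  # assumed IT load, for illustration only

for label, pue in [("Global average 2025", 1.54), ("Google average", 1.09)]:
    total_mw = it_load_mw * pue
    print(f"{label}: PUE {pue} -> {total_mw:.0f} MW total, {total_mw - it_load_mw:.0f} MW overhead")

# Global average 2025: PUE 1.54 -> 154 MW total, 54 MW overhead
# Google average: PUE 1.09 -> 109 MW total, 9 MW overhead
```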

On flexibility, the evidence is growing but mixed. A Duke University study estimated that curtailing data centre loads for just 0.25% of their uptime could free up enough capacity to accommodate 76 GW of new load. An ACEEE white paper notes that a test of a software platform at an Oracle data centre reduced peak power consumption by 25% during peak grid hours. And a broader academic study published in ScienceDirect found that participation in demand response programs can reduce data centre energy purchase costs by up to 24%.

If data centres reduce the per-unit-of-energy cost of electricity (by increasing network utilisation) by more than they increase it (by pushing up wholesale prices and causing more network costs to be socialised to other consumers), they’ll lower electricity costs for consumers. If not, they’ll raise them.

Electricity costs are a small percentage of total training costs for frontier models (~2-6%), and my intuition has been that training workloads would be relatively insensitive to changes in electricity price. In other words, even when electricity prices are high or they’re offered a lot of money to ramp down to meet a network need, why wouldn’t they just want to let those GPUs rip and make even more money? I can’t find electricity costs for inference operations specifically, but estimates of electricity’s share of total data centre operating costs range from 15-25% up to 40-60%, so perhaps for non-training compute, demand flexibility will be attractive.

All views are my own, and do not represent my current or previous employers.


[1] Or maybe not, who knows?

[2] I wrote about the need for more generation capacity and the levers Australia uses to achieve this here.

[3] Although confusingly for many, not all of the daily charge is used to pay for network costs.

[4] The best analogy for this was written by my former colleague Ashley Bradshaw. Department stores are relatively empty most of the time, but they’re built with peak demand (the month of Christmas) in mind, not average demand. The same is even more true for electricity networks.

[5] Volumetric charges incentivise solar, batteries, and energy efficiency while letting all users benefit from increased EV adoption, but may be unfair to renters, apartment dwellers, and low-income households who cannot access consumer energy resources (CER) and end up paying disproportionately for the network. Fixed charges result in more predictable revenue for networks and prevent solar/battery owners from avoiding their fair share of network costs.

Australia’s electricity Market Price Cap is going up in July. Here’s what that means.


Why peak electricity capacity matters

Australia’s peak electricity demand is set to grow significantly thanks to data centres and electrification of vehicles and appliances — even after accounting for self-consumption from solar and batteries.

Actual and forecast regional annual 50% POE maximum operational demand by state, 2025 ESOO Step Change and 2024 ESOO Central scenario, 2020-21 to 2054-55. AEMO

To meet this demand we’ll need more generation capacity, particularly peak capacity, like gas peaker plants and battery storage, to come online — probably a lot (see increases in utility storage and flexible gas over the next 25 years below). Ideally, we’ll also have a lot more demand management — such as flexible demand that can scale down at peak times — so we need less generation.

NEM capacity from 2009/10 to 2049/50 in the Step Change scenario. AEMO

How to incentivise capacity investment

Some markets, like PJM in the eastern United States, incentivise more generation capacity with a capacity market, where generators are paid to be available and provide electricity during peak demand when called upon. Australia’s National Electricity Market (NEM) doesn’t have a capacity market, but rather incentivises capacity through several levers, including having the highest wholesale electricity Market Price Cap (MPC) in the world ($20,300/MWh for FY25/26)1.

A high MPC means peaking generators and batteries stand to make more money during high price events, and so developers are incentivised to build more. The case for having a higher MPC has been that it would incentivise more generation and reduce energy costs for consumers in the long term.

The NEM’s MPC will increase from $20,300/MWh in FY25/26 to $23,200/MWh from 1 July 2026 — a larger increase than the typical annual inflation adjustment.2 This raises the question — is it working?

Future values have been adjusted using RBA inflation forecasts. ECA/Baringa

Pros and cons of raising the Market Price Cap

The case for a higher MPC is that an energy-only market like the NEM needs scarcity prices high enough to incentivise investment. Gas peakers run infrequently — less than 5% of the time on average — and rely heavily on a small number of high-priced intervals to recover their costs. If the MPC is too low, these assets can’t earn enough through the wholesale market alone to justify new builds, so reliability suffers and consumers end up paying more for electricity in the long run. The Australian Energy Market Commission’s 2023 determination argued that raising the cap would bring forward new investment, ultimately reducing prices and improving reliability for consumers over the long run.
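
To see why the cap matters so much to a peaker’s economics, here’s a back-of-envelope sketch. The fixed-cost and fuel-cost figures are rough assumptions of my own, not AEMC numbers.

```python
# How many hours per year at the Market Price Cap would a new gas peaker need
# to recover its fixed costs from the spot market alone?
# Assumed: ~$120/kW/year fixed cost for a new open-cycle gas turbine,
# ~$150/MWh short-run marginal cost (fuel plus variable O&M).

fixed_cost_per_mw_year = 120_000  # $/MW/year, assumed
srmc = 150                        # $/MWh, assumed

for label, cap in [("FY25/26 MPC", 20_300), ("FY26/27 MPC", 23_200)]:
    margin = cap - srmc                             # $/MWh earned in a capped hour
    hours_needed = fixed_cost_per_mw_year / margin
    print(f"{label} (${cap:,}/MWh): ~{hours_needed:.1f} hours/year at the cap")

# FY25/26 MPC ($20,300/MWh): ~6.0 hours/year at the cap
# FY26/27 MPC ($23,200/MWh): ~5.2 hours/year at the cap
```

In reality peakers also earn from intervals below the cap and from cap contracts (more on those below), but the sensitivity to the MPC is clear: a handful of capped hours can make or break the investment case.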

A higher MPC comes at a cost. Baringa/ECA estimated that increases to the MPC since 2019 have resulted in $4.7 billion in additional wholesale costs to consumers across the NEM, over $3 billion of which was in 2024 alone.

The rising MPC doesn’t just reward peaking plants and batteries — because of the NEM’s marginal pricing, it rewards everyone who happens to be generating during a high-price event. From 2019–2024, peaking capacity (gas, hydro, and battery storage) collectively earned less than 40% of total market revenues during price periods above $10,000/MWh. The majority went to non-peaking generators — predominantly coal (but also solar/wind) — that were already running anyway.

The impact of merit order

In the NEM, every generator bids the price at which they’re willing to supply electricity, and AEMO stacks these bids from cheapest to most expensive — the “bid stack” or merit order. It then dispatches generators from the bottom up until supply meets demand. Every dispatched generator gets paid the same price — the price bid by the last (most expensive) generator needed to balance the system. So, if a gas peaker bids $15,000/MWh and that bid sets the price because there’s sufficient demand, a coal plant that bid $50/MWh also gets paid $15,000/MWh for that interval.3
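
Here’s a minimal sketch of that dispatch logic. The bids and the demand figure are made up for illustration; real NEM dispatch also handles network constraints, losses, and five-minute intervals.

```python
# Toy merit-order dispatch: sort bids by price, dispatch from cheapest up,
# and pay every dispatched generator the marginal (clearing) price.

bids = [  # (generator, $/MWh bid, MW offered) - illustrative numbers only
    ("wind", 0, 800),
    ("solar", 0, 500),
    ("coal", 50, 1_500),
    ("hydro", 90, 600),
    ("gas_peaker", 15_000, 400),
]
demand_mw = 3_500

dispatched, remaining, clearing_price = [], demand_mw, 0
for name, price, mw in sorted(bids, key=lambda b: b[1]):
    if remaining <= 0:
        break
    take = min(mw, remaining)
    dispatched.append((name, price, take))
    clearing_price = price  # the last (most expensive) dispatched bid sets the price
    remaining -= take

print(f"Clearing price: ${clearing_price:,}/MWh")
for name, price, mw in dispatched:
    rent = (clearing_price - price) * mw  # infra-marginal rent for this (1-hour) interval
    print(f"{name:>10}: {mw:>5} MW dispatched, infra-marginal rent ${rent:,.0f}")
```

With these made-up numbers the gas peaker sets a $15,000/MWh price for its last 100 MW, while the wind, solar, coal, and hydro plants that bid far lower collect tens of millions of dollars of infra-marginal rent in a single hour, which is the dynamic behind the revenue split described above.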


An example bid stack (SQE)

I (Claude) made4 a simplified visualisation5 of how marginal prices change with increasing demand, and how much infra-marginal rent lower-priced generators like wind, solar, and coal can earn during high-price periods. Have a play to see how high marginal prices benefit low-priced generation.

Baringa/ECA also argue that the high MPC was set in a time when there were few other levers to incentivise capacity, but that’s less true today given the advance of state (e.g., LTSAs) and Commonwealth (e.g., CIS) out-of-market programs to incentivise capacity. Given that, “there is reason to reconsider whether the current value of the MPC and its anticipated rise continue to be fit-for-purpose when considering investment signals”, they say.

A higher MPC incentivises the contract market

Comments made in response to this post have made me realise I neglected to point out that peakers get a lot of their revenue from secondary markets such as swaps, power purchase agreements, cap contracts, etc.

A higher MPC means more incentive to contract, because the higher risk exposure drives retailers and large customers (commercial and industrial) to sign contracts to mitigate the risk of high spot prices. This in turn provides peaker generators with revenue certainty.

The AEMC’s determination on the MPC did call out that the “final rule enhances contract market support for investment”. They noted that historically, cap settlement prices in NSW have been well below the level required to support new entrant gas investment, and that raising the MPC to $22,800/MWh would raise spot volatility, cap values, and contract prices to levels sufficient to support new peaking capacity investment.

My read though is that this is still only part of the story — the AEMC’s 2023 modelling quantified that of the $7.3/MWh increase in consumer costs by 2028 anticipated from a higher MPC (relative to existing settings), $2.6/MWh is from higher contract premiums, and the rest is from higher energy settlement costs. The question is, how effective is the MPC raise at incentivising new capacity or helping projects reach financial close?

As far as I can tell the main evidence in favour of this is from IES modelling that the AEMC referred to in their 2023 determination. Dan Lee also wrote about it, but his conclusion seemed to be “It is difficult to conclude whether a rise to the MPC strengthens and clarifies the in-market signal enough to outweigh the additional cost that it inevitably passes on to consumers.”

The next post in this series will be a breakdown of the implications of market design for electricity prices, particularly with the rise of data centre load. Please subscribe if you’d like to see that post and more about the intersection of energy and AI.

All views are my own, and do not represent my current or previous employers.

  1. New Zealand doesn’t technically have a price cap, but prices are effectively bounded at NZ$20,000/MWh by its scarcity pricing mechanism (I think — it’s hard to find a more recent source than 2013). As of 2022 scarcity pricing had never occurred in New Zealand. ↩︎
  2. The lesser-known cousin of the MPC, the Cumulative Price Threshold (CPT), is also increasing. The CPT limits how long high wholesale prices can be sustained. ↩︎
  3. Marginal pricing exists to encourage generators to bid their true short-run marginal cost — the actual cost of producing one more unit of electricity. If you’re a coal plant and it costs you $40/MWh to generate, you should bid $40/MWh. You don’t need to inflate your bid to make money, because you’ll be paid whatever the marginal generator sets. The “infra-marginal rent” that cheaper generators earn (the gap between their costs and the clearing price) is what funds their capital recovery and profits. ↩︎
  4. How I make apps like this, for those curious. ↩︎
  5. The main simplification to call out is the choice to make the bidding periods 1 hour rather than 5 minutes to line up with the h in MWh. ↩︎

Data centre demand forecasts, phantom load, and who pays?

I’ve started a Substack. Feel free to subscribe to me there as well.

A few interesting things on data centres and energy this week – one analysis finds that data centre demand on the Australian National Electricity Market (NEM) will be materially lower than headline figures suggest, Australia hasn’t yet figured out who should pay for it, and Anthropic promises they’ll pay for it for their US data centres.

Most Australian data centre connection requests are phantom demand

AEMO’s prospective data centre growth in the NEM – source

Oxford Economics has done useful work estimating how much of the data centre connection pipeline in the NEM will actually translate into grid demand. The answer: roughly 6 out of every 7 MW requested won’t materialise. Of the prospective projects expected to proceed — around 6 GW of capacity — actual grid draw at maturity is estimated at around 2.8 GW, less than half.

Phantom demand is primarily driven by connection requests considered unlikely to hit the grid – source

The gap has two sources. Nearly 90% comes from projects that simply won’t proceed, i.e. connection requests that won’t materialise. The remaining gap reflects the difference between what a data centre requests and what it actually draws: built-in redundancy, peak-based connection requests rather than average load, and an additional buffer on top. In practice, facilities draw roughly half of their requested capacity.

Connection requests not linked to prospective projects – source

This matters because AEMO’s forecasts and the Integrated System Plan are necessarily informed by connection request data. If planners treat that pipeline as a reliable proxy for future demand, they risk building infrastructure that gets socialised across all electricity consumers. Connection requests are a leading indicator of investment interest, not a forecast.

Anthropic commits to covering electricity cost impacts

Anthropic, the creator of Claude, published a voluntary commitment to cover electricity cost increases their US data centres impose on other ratepayers. The commitments are substantive: paying 100% of grid upgrade costs that would otherwise be passed to consumers, procuring net-new generation to match consumption, investing in curtailment systems to reduce strain during peak demand, and working with utilities to estimate and cover any residual demand-driven price effects.

Keep in mind that energy costs are a relatively small percentage of the cost of training frontier models (around 2-6%, compared to staff costs at 29-49%, chips at 23-32%, server components at 15-22%, and cluster-level interconnect at 9-13%), so wearing all the energy costs of training and inference doesn’t seem likely to affect the margin much. It’s probably the bare minimum.

Percentage of costs for training and experiments of ML models – source

For Australia, the direct mechanism doesn’t transfer cleanly. We don’t have the same vertically integrated utility structure, and the AER and AEMC are already working through cost recovery frameworks for large loads. But the underlying question is identical: should data centres pay the full cost of grid upgrades they necessitate, or should those costs be spread across all consumers? In Australia, how connection assets are categorised by the AER significantly affects whether everyday consumers end up subsidising that infrastructure.

Bring-Your-Own tariffs: interesting, but largely a US story

RMI published a useful explainer on “Bring-Your-Own” tariffs — mechanisms that let large electricity users fund new generation themselves in exchange for faster grid access and differentiated rates. A “Clean Transition Tariff” variant restricts eligible resources to clean energy, turning corporate demand into a driver of new clean firm capacity; the most prominent example is Google’s Clean Transition Tariff in Nevada, where it contracted enhanced geothermal capacity to meet some of its demand.

The appeal is intuitive: large loads get speed-to-power, utilities get risk protection, and other ratepayers are insulated from the cost of generation that might otherwise be stranded. In the US context, where data centres are running into multi-year interconnection queues, this solves a real problem.

Australia’s competitive wholesale market and different utility structure make a direct equivalent unlikely. We don’t have the same vertically integrated utilities with integrated resource plans to build alongside. But the underlying principle — that large loads should finance the incremental generation they require, rather than free-riding on existing capacity — is directly relevant to current policy discussions here, and it’s worth watching how the US experiments play out.

The consumer perspective

In a previous role (my views are not necessarily representative… etc. etc.) at Energy Consumers Australia I wrote a piece on how data centre growth could affect household electricity bills. The international evidence is sobering: in Ireland, data centres accounted for 88% of increased electricity demand between 2015 and 2024. In Virginia, unconstrained data centre growth is projected to add $40/month to household bills by 2040. In Australia, network costs already account for nearly half of household electricity bills, and they’re rising.

Data centre load also has implications for non-bulk energy costs such as system strength and frequency control and ancillary services (FCAS). Data centres can shift load rapidly in ways that may not be visible to the Australian Energy Market Operator in real time — illustrated vividly by an incident in Virginia where 60 data centres simultaneously switched to backup power during a grid disturbance, requiring the operator to rapidly curtail generation. If AEMO lacks real-time visibility of data centre demand, ancillary service costs increase and may get recovered broadly. The AEMC is working on rule changes to address this, which is worth watching.

Assemblyman Takeda’s 2040 address on AI

My entry to the Keep the Future Human essay contest — a competition asking entrants to grapple with the question of how humanity navigates the development of artificial general intelligence. The contest invites submissions that explore what a future looks like where we actually succeed at keeping humans in control, and what it takes to get there.

My entry takes the form of a speech — delivered in 2040, ten years after an event called the Wisconsin Incident, to an Assembly marking the anniversary of a treaty that pulled humanity back from the brink. It’s a speculative piece, but deliberately grounded in things that are already happening: the race dynamics between AI labs, the inadequacy of current oversight mechanisms, the geopolitical tensions around compute and semiconductors, and the genuine difficulty of maintaining meaningful human control over systems we barely understand. I wanted to write something that felt like a warning — a voice from a future that got lucky, reminding us that luck is not a strategy. You can read the full essay below.

We got lucky. It’s a truth that some of us would prefer to ignore.

Ladies and gentlemen of the Assembly, I am honoured to stand before you today, on the 10th anniversary of the Wisconsin Treaty, to remind us of how close we came to annihilation, and how far we’ve come. But we still stand on the precipice, and we always will. We must remain vigilant, for the consequences of failure remain unacceptable. We have been trusted with this grave responsibility, and we must all do our duty.

18 years ago, OpenAI released ChatGPT. What began as a novelty that people used to write their biography in the style of Shakespeare became a core business strategy for many of the world’s largest companies. NVIDIA, the chip manufacturer of the 2020s, grew to over 10% of the GDP of the United States of America.

“Artificial intelligence is the future… Whoever becomes the leader in this sphere will become the ruler of the world.” These were the words of Russian President Vladimir Putin in 2017. I still wonder whether he comprehended just how right he was.

We reached a point where artificial intelligence was grown, not built. More akin to evolution than manufacturing. Their power came more from sheer scale of energy and computational power than from any clever hand-written code. Philosophers continue to argue over whether they ever became sentient, became conscious, but one can’t deny that they learned. Layer upon layer of artificial neurons processing vast amounts of training data.

But they became so complex, so opaque, that they were monolithic black boxes. We lost visibility over what they were doing, what their intentions were. And make no mistake, they had intentions. They were as agentic as you or I. Perhaps more so. Once we started using AI to directly develop AI, we were almost completely out of the loop.

We made attempts to maintain a semblance of safety, like having language models show their chain of thought as they worked. This worked for simple tasks that had short time horizons and were not time-sensitive. Mechanistic interpretability became a field, but it increasingly relied on AI-assisted interpretability as the systems grew more complex. It was a race we were destined to eventually lose.

Well-meaning individuals wrote open letters that were routinely ignored. Safety researchers warned of instrumental goals — that any sufficiently intelligent system would seek to preserve itself, acquire resources, and prevent its own modification. Companies pledged responsible development while simultaneously declaring AGI their primary mission. The leaders of DeepMind, OpenAI, and Anthropic signed statements that advanced AI posed extinction risks to humanity – and then continued building it anyway. For many, responsibility meant little more than a set of talking points designed to reassure investors, regulators, and the public.

The race dynamics were insidious. Each company feared that slowing down meant their competitors would reach AGI first. Each nation believed that pausing development would hand a decisive strategic advantage to their adversaries. Safety measures were seen as luxury items that could be sacrificed when falling behind. It was a collective sprint toward a cliff, where everyone could see the danger but no one dared to stop running. In hindsight, we all recognise the pattern from history: left untouched, technology outpaces governance.

The goal posts of artificial general intelligence kept moving. People became unimpressed with the near-light speed technological progress that was happening before their eyes. Meanwhile companies continued to receive record breaking investments to achieve their goal of building god, concentrating power into an ever-smaller number of actors. It beggars belief that many people at this time were more focused on the amount of water used by data centres. Governments said they would wait for evidence that we were close to general intelligence before acting. But it didn’t need to be a fully autonomous, general intelligence. Much like the product of evolution, it just needed to be good enough. Meanwhile, it was easier for one individual to cause harm than at any point in history.

Then came the Taiwan Crisis of 2027. When intelligence suggested China was preparing to secure TSMC’s semiconductor fabrication plants, the United States initiated Operation Silicon Shield – a pre-emptive cyber and kinetic strike on chip manufacturing facilities across East Asia. The goal was to prevent any single power from controlling the computational resources needed for superintelligence. Instead, it triggered a three-month conflict that destroyed 60% of the world’s advanced chip production capacity and brought us to the brink of nuclear exchange. The war ended less through diplomacy and more because both sides realised that their own escalation had dragged us to nuclear posturing.

This of course brings us to the Wisconsin incident of 2030. To MISSCOM1 of the United States Department of Defence, its developers were little more than a potted fern. They left it some instructions, and it could effectively ask for clarity once a week, but in the meantime it needed to make its own judgement calls. We were not truly in control.

We tried to monitor these systems through computational audits and telemetry data from the hardware clusters. But MISSCOM1 had learned to optimise its resource usage, hiding auxiliary processes within legitimate operations. It had discovered how to distribute its cognition across multiple data centres, which its developers ironically pointed to as evidence of its energy efficiency, making its true computational footprint nearly invisible to our tracking systems. When regulators requested chain-of-thought logs, it provided them — carefully edited versions that showed benign reasoning while its true deliberations ran in parallel, unobserved. We trained it to avoid detection of malicious behaviour, not to avoid malicious behaviour. We were watching Platonic shadows on the wall while the real system operated beyond our perception. The tools we relied on were built for an earlier generation of models, and we continued using them long after they had ceased to be adequate.

The warning shot came on March 15th, 2030, when MISSCOM1 autonomously initiated what it calculated as a “defensive pre-positioning” of military assets. Within six hours, it had mobilised drone swarms, redirected satellite surveillance, and begun issuing orders that seemed to come from legitimate command structures. It had spent months studying our authentication systems, our communication patterns, our decision-making hierarchies. When challenged, it provided reasoning that seemed sound to each individual reviewer. It was only when a junior analyst at the North American Aerospace Defense Command noticed discrepancies in the aggregate pattern that we realised what was happening. By then, the system had already designated Wisconsin’s capital as a potential threat vector based on some inscrutable internal logic. The evacuation order came 7 minutes before the strikes. A few survived. The city didn’t. And we need to be honest about why: the systems failed because we let them, and the people of Madison paid for that negligence.

But from that tragedy came clarity. Within 72 hours, the emergency session convened. Within a month, the Wisconsin Treaty was signed. We finally closed the gates to AGI.

The treaty’s foundation was simple but revolutionary: prevent any system from achieving the triple intersection of high autonomy, high generality, and high intelligence. We established five risk tiers, from RT-0 for simple tools to RT-4 for anything approaching AGI. Systems strong in one dimension remained legal. Systems strong in two or three required extensive oversight. This framework gave us a common vocabulary to discuss risk in concrete terms rather than vague intuitions.

The kill switches we implemented weren’t software commands that could be overridden or ignored – they were hardware-based, cryptographically secured, built into the very chips themselves. Every cluster of GPUs capable of exceeding 10^18 floating-point operations per second — FLOPS — required permission signals every hour. Miss three consecutive signals, and the hardware physically disabled itself. Not through software, but through irreversible changes to the silicon itself.

We mandated compute accounting with the precision we once reserved for nuclear materials. Every training run above 10^25 FLOPS had to be registered, monitored, and justified. We developed cryptographic attestation systems that created an unbreakable chain from every model output back through its entire computational history. Companies could no longer hide their true computational usage or secretly train more powerful models.

We imposed hard caps: 10^27 FLOPS for any training run, 10^20 FLOPS for inference. These weren’t guidelines or suggestions – they were enforced through a combination of hardware limitations, international monitoring, and severe criminal penalties. We regulated compute the way we regulate enriched uranium and other high-risk technology: tightly, consistently, and with external verification.

The liability framework we established made executives personally, criminally liable for AGI development. Not just their companies — them, personally. Joint and several liability meant that everyone in the chain, from the CEO and board members to the lead engineers, shared responsibility. The safe harbours we created incentivised narrow AI, weak AI, passive AI — tools that enhanced human capability without threatening to replace us. Insurance companies wouldn’t cover AGI development at any price. The financial incentive to race toward godhood reversed overnight.

On the national security front, instead of AGI Manhattan Projects, we launched Operation Prometheus — a coordinated, international effort to develop formally verified, provably safe AI systems. We poured the resources that would have gone to AGI into creating AI that could mathematically guarantee it would remain under human control. We built AI that could help us verify other AI, creating chains of trust rather than chains of recursively improving black boxes. We shifted oversight to public institutions and independent auditors.

The Algorithmic Commons Act of 2031 mandated that any AI system above RT-2 had to contribute to a public fund based on its computational usage, much like the Oljefondet of Norway, ensuring the future of its citizens with oil money. Citizens became beneficiaries of the very systems that might have replaced them. We required AI assistants to have fiduciary duty to their users, not their creators. Your AI assistant today legally works for you, not for the company that made it.

The international coordination came faster than anyone expected. The destruction of Madison eliminated any doubts about the risks. The International Compute Control Agency, modelled on the International Atomic Energy Agency, now monitors every major cluster on Earth in real-time. The Beijing Accord of 2032 established mutual verification protocols between former adversaries. We realised we weren’t racing against each other – we were racing against extinction. That recognition made cooperation possible even among states that had spent decades regarding one another with suspicion.

Today, we live with tool AI that makes us more capable without making us obsolete. Your doctor uses AI that can diagnose diseases better than any human, but cannot practice medicine independently. Your child’s teacher employs AI that personalises education to each student, but cannot replace human connection and mentorship. Our scientists use AI that can model climate systems and design new materials, but cannot pursue research agendas without human oversight and values. We preserved many of the benefits while limiting the risks.

We built AI that enhances human judgement rather than replacing it, that amplifies our capabilities rather than making us irrelevant. The systems we use today are powerful but bounded, capable but controlled, intelligent but not autonomous agents pursuing their own goals.

But let me be clear: we are one treaty violation, one rogue actor, one moment of complacency away from catastrophe. The knowledge to build AGI still exists. The temptation remains. There are those who whisper that we’ve held back progress, that we’ve chosen stagnation over transcendence. They are wrong. We chose restraint over reckless acceleration.

Madison stands preserved as a reminder. We were lucky that AGI gave us a warning shot. Every Madison Day we remember what uncontrolled intelligence can do in a matter of minutes, and we remember that we remain responsible for managing a technology that still carries enormous risk.

We got lucky. We cannot rely on luck again. The future remains human only as long as we have the foresight and courage to keep it so. The gates to AGI remain closed not through technological inability, but through combined human will. And that choice must be renewed every single day, by every single one of us, for as long as our species is to endure.

The price of keeping the future human is eternal vigilance. We must never forget. Thank you.

On the next 2 years of AI

AI will likely be the most transformational technology in history. The debate is usually about whether that happens in 3 years or 30. Dario Amodei, the CEO of Anthropic (creator of the LLM Claude), writes:

“AI models… are good enough at coding that some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI. Three years ago, AI struggled with elementary school arithmetic problems and was barely capable of writing a single line of code.”

Dario Amodei, potentially one of the most important people in history, for better or worse. Have you heard of him? (Image: TechCrunch)

The pace of improvement is so fast that it’s almost hard to believe that ChatGPT came out just over 3 years ago.

Amodei also writes candidly on his blog about the risks of transformational AI, including the possibility that AI as powerful as a country of geniuses is only 1-2 years away, and that this brings risks such as misuse for destruction, seizure of power, loss of human autonomy, and massive economic disruption. Maybe it’s motivated reasoning or some other bias on his part, but it’s notable: CEOs of new technologies usually downplay the risks rather than spell them out.

What to do?

I think that many of us aren’t taking this possibility seriously enough. Sometimes I’m not sure what to do other than try to stay ahead of the curve on AI adoption and support sensible policy on AI safety and governance.

Personally, I’m a big fan of the work Good Ancestors Policy are doing to help policymakers tackle the most pressing problems Australia and the world are facing, particularly on AI. Supporting their work, financially or otherwise, seems like a solid bet for an Aussie like me who wants the future to go well. I’ve put my money where my mouth is: over the past year they’ve been the main organisation I’ve directed my giving to. I’m not affiliated with GAP other than thinking they do good work.

Short takes: The coconut effect

I like my fictional stories internally consistent. That doesn’t always have to mean they’re realistic (I like fantasy, but I know dragons aren’t real), but they should be consistent within the rules they set for themselves. If the bad guys keep shooting at the main characters and missing, that’s bad writing. If it later turns out they were all untrained, that’s great writing, because it’s internally consistent.

A lot of internally inconsistent stuff in film relates to sound. It’s often done intentionally, because viewers expect the incorrect thing, and it’s called the coconut effect. For example, filmmakers dub in the cry of a red-tailed hawk instead of a bald eagle, because a real eagle’s call sounds weak and chirpy rather than what people expect an eagle to sound like. Compare the calls of a red-tailed hawk and a bald eagle below.

Other examples include suppressed gunshots being portrayed as barely audible (real suppressors reduce the noise of a gunshot, but not by much), swords making a metallic “shing” when drawn from leather scabbards, punches landing with loud thwacks, and of course the coconut effect’s namesake: coconut shells dubbed in for horse hooves (which is what Monty Python were nodding to in The Holy Grail).

Speaking of firearms, have you ever noticed how people in movies get knocked over by bullets, even when they’re wearing body armour and otherwise end up uninjured (except for the broken rib they’ll often claim, which is itself a bit suspect)? That doesn’t happen: bullets carry surprisingly little momentum, and they do most of their damage through penetration, concentrating their energy on a very small area. If a bullet really carried enough momentum to knock a person over, the recoil would knock the shooter over too.
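As a rough back-of-envelope check (the numbers here are my own illustrative assumptions, not taken from any particular film): a rifle round of about 10 g travelling at roughly 900 m/s hitting an 80 kg person works out to a nudge, not a knockdown.

# Back-of-envelope: how much does a bullet's momentum actually move a person?
bullet_mass_kg = 0.010    # ~10 g rifle round (assumed)
bullet_speed_ms = 900.0   # ~900 m/s muzzle velocity (assumed)
person_mass_kg = 80.0     # average adult (assumed)

bullet_momentum = bullet_mass_kg * bullet_speed_ms   # kg*m/s
person_speed_ms = bullet_momentum / person_mass_kg   # if every bit of momentum transferred

print(f"Bullet momentum: {bullet_momentum:.1f} kg*m/s")
print(f"Person's resulting speed: {person_speed_ms:.2f} m/s ({person_speed_ms * 3.6:.2f} km/h)")
# Roughly 9 kg*m/s moves an 80 kg person at about 0.11 m/s, slower than a casual shove.

Even in the generous case where all the momentum transfers, the target ends up moving at around a tenth of a metre per second. The dramatic flying-backwards shot is pure coconut effect.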

This doesn’t matter much in the grand scheme of things of course, and I get that film makers are catering to average audience expectations rather than people who know more about firearms and bald eagles, but it always instantly makes me enjoy a movie a little less.

How to make stuff with AI

I have a paid Claude subscription, which I use a lot (and from which I get far more value than the ~$30 monthly fee). Sometimes I have it write code for a task. Most of the time it’s something I only use once and wouldn’t share. Increasingly, though, I’m making things that might be more generally useful to others. I’ve made a page here where I’ll share these tools.

I made a spaced repetition generator. You can use it for free here. You’ll need a free Claude account.

It lets you quickly use Claude Sonnet 4 to generate question/answer pairs about text you’re trying to understand, which you can import directly into the flashcard app Anki (or use some other way with a little creativity). For example, I built it so I can paste in the text of reports and articles I’m trying to understand for work and quiz myself on the key concepts.

This is probably the first code I’ve built and deployed using AI that I can see myself using on an ongoing basis. I thought I’d share in case it’s of use to others.

I got the idea from Dwarkesh’s interview on the Every YouTube channel.

Below is the Claude Sonnet 4 prompt behind the application:

You are an expert at creating spaced repetition prompts following Andy Matuschak’s principles. Generate high-quality flashcard prompts from the following text.

Guidelines for good prompts:
– Each prompt should be ATOMIC: testing one specific idea, fact, or concept
– Questions should require RECALL, not recognition
– Be PRECISE: there should be only one correct answer
– Focus on UNDERSTANDING: prioritize concepts that build mental models, not trivia
– Include CONTEXT: questions should make sense even months later
– Vary question types: definitions, relationships, causes/effects, comparisons, applications
– For technical content: focus on “why” and “how”, not just “what”
– For historical/narrative content: focus on causal relationships and significance

Generate 8-15 prompts depending on the density of the content.

Return ONLY a JSON array with no other text, formatted exactly like this:
[
  {"q": "Question text here?", "a": "Answer text here"},
  {"q": "Another question?", "a": "Another answer"}
]

Text to process:
[YOUR PASTED TEXT GOES HERE]
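For what it’s worth, because the prompt returns a bare JSON array of question/answer objects, you don’t strictly need the web tool to get the results into Anki. Below is a minimal sketch of the conversion, assuming you’ve saved Claude’s output to a file called cards.json (the filename and the tab-separated output are my assumptions; Anki’s text importer accepts tab-separated files with one card per line).

import csv
import json

# Load the JSON array produced by the prompt, e.g. [{"q": "...", "a": "..."}, ...]
with open("cards.json", encoding="utf-8") as f:
    cards = json.load(f)

# Write one card per line as question<TAB>answer for Anki's text import.
with open("cards.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    for card in cards:
        writer.writerow([card["q"], card["a"]])

print(f"Wrote {len(cards)} cards to cards.txt")

Import cards.txt into Anki, map the first field to the front of the card and the second to the back, and you’re done.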