Extended references and credits for this video.
Introduction
Donald Trump was sworn into office just a few weeks ago, and he’s wasted no time shaking things up in the AI space. On top of that, Chinese startup DeepSeek has just launched its latest AI model, which some consider better and more efficient than models from competitors like OpenAI. So what will a Trump presidency mean for AI safety? We’re going to dive into what’s changed so far, what I think will happen, and what this will mean for AI safety and the future of humanity.
For some, catastrophic risks from AI exist only in the realm of science fiction and the minds of AI ethicists, and most of the projections do appear futuristic at first glance. But one thing is certain: there is a chance that advanced AI could pose a risk to humanity, and making sure AI goes right will probably be the most important thing we ever do as a species. Many people believe artificial general intelligence will be developed in the next 3 to 4 years. If this is true, the decisions made by the Trump administration could be critical in shaping how transformative AI is deployed, how safe it is, and whether arms race-style dynamics take hold. Many people also believe that AGI will quickly lead to artificial superintelligence, and that this will be the end of humans. Trump’s position and actions on AI really matter. I agree with Robert Miles: AI has really ruined my year.
All this is why President Trump’s recent rescission of Biden’s Executive Order on AI has been in the news. The revoked Executive Order subjected companies to safety testing and evaluation requirements, sought to protect AI used in critical infrastructure from cybersecurity risks, stipulated the watermarking of AI content, prevented the malicious use of state data for AI development, and more. The Trump administration, however, saw the Executive Order as an impediment to the US quest to remain the ultimate global leader in AI development and innovation. As a result, President Trump revoked it on his first day in office, then followed up with his own Executive Order, released on 23 January 2025, aimed at creating opportunities for AI development.
Additionally, President Trump has declared regulatory support for the Stargate Project. This support for AI does not come as a surprise given the involvement of the “tech bros” in the Trump/Vance campaign and inauguration. If you search the keywords “donald trump AI safety”, the top results portray President Trump’s administration as a setback for AI safety. In this video we will dive deeper to unpack the implications of these policy shifts and adjustments for AI governance.
It was really hard to write the script for this video, because things are constantly changing. This video was accurate as of just before I uploaded it, and apologies if it’s already out of date by the time you see it.
Just quickly, if you find this video helpful, make sure to let YouTube know by subscribing, liking, and leaving a comment.
AI Race Mode Activated
The US and China have been locked in an AI arms race, but the Trump administration’s approach to AI safety has taken the race to a higher level. It is almost poetic that Chinese startup DeepSeek released its latest R1 model on President Trump’s Inauguration Day, the very day he revoked Biden’s Executive Order on AI. These trends have heightened the AI race between the US and China, and in the long run will reduce the priority given to AI safety while increasing support for the development of advanced models. This also means that artificial general intelligence (AGI) and artificial superintelligence (ASI) might arrive sooner than projected, and when they are here they may be available to all sorts of actors. In the case of military conflict there is no limit to the role AI could play, and drones are already becoming a weapon of choice.
Taiwan in particular has the potential to become a flashpoint. While the US would likely back Taiwan in the case of a Chinese invasion, President Trump has declared that he intends to use tariffs to compete with Taiwan for control of the global semiconductor market. This means that in the case of a Chinese invasion, Taiwan, home to the Taiwan Semiconductor Manufacturing Company, the most advanced semiconductor producer in the world, may be left with no choice but to use its AI chips to produce weapons for national security. In this way, AI competition can lead to the development and deployment of AI-powered autonomous weapon systems.
Advanced AI Models in the hands of malicious actors
Even though Mr. Trump considers emphasis on AI regulation to be a far-left idea, he has talked about the risks of deep fakes, specifically in the context of nuclear war.
Trump seems concerned here about AI’s ability to cause existential risk, at least in this context, but this contradicts his “drill, baby, drill” mindset towards AI elsewhere, where he wants to reduce safety regulation and barriers to industry. Trump has been crystal clear about one thing: the United States must maintain its lead in the AI race, even if that means cutting corners on safety regulation. The implications of bypassing safety measures are dire. Experts warn that advanced models could get into the hands of malicious actors. Researchers argue that previous advanced models, as well as the recent DeepSeek R1, are vulnerable to malicious manipulation. For example, criminals could use advanced models to carry out precise cyber attacks. Malicious users could also fine-tune existing models to help create harmful pathogens. And open-source models such as DeepSeek’s can be made to deviate from acceptable norms by injecting malicious data during training or fine-tuning, a technique known as data poisoning.
President Trump has argued that generative AI could also be used to create deep fakes that spread false news. As recently as November 2024, the United States and China held talks on AI safety and possibilities for cooperation. However, with Trump’s propensity for tariffs and his clear agenda of engaging China directly in an AI race, it is not clear whether the two countries will maintain relations on AI safety. As a result, the chances for collaborative early-warning systems are low. We can hope that the US and China would be rational actors and take steps to fact-check deep fakes before taking an action that could lead to mutual destruction, but the number of nuclear near-misses makes me concerned.
The online forecasting platform Metaculus hosts a number of public predictions on questions relating to AI and human extinction, so I checked whether forecasters notably changed their views after Trump’s election victory in 2024 and after his swearing-in. The answer was, surprisingly, not really. The average predicted arrival of AGI has moved slightly nearer over the past few months, from 2027 to 2026, and the predicted likelihood of human extinction by 2100 has remained around 1%.
Executive Order on AI (Inconsistent Policymaking)
The dissonance between Biden’s and Trump’s perspectives on AI safety poses a significant risk to AI companies. The inconsistency can leave safety-conscious companies developing advanced models carefully, only to be overtaken by companies with less regard for safety. But President Trump’s mission is quite clear: the United States has to retain its number-one spot as global leader in frontier AI. In one of his first Executive Orders, President Trump revoked President Biden’s Executive Order on AI, which he called a ‘radical left-wing idea’. Biden’s order was signed in October 2023 and was aimed at addressing threats AI could pose to civil rights, privacy, and national security, while promoting the use of AI for the public good. A lot of the work that it set in motion has already been done, so you could argue that there’s not much left to repeal. Reports and recommendations have already been written, such as the US Treasury’s report on AI and cybersecurity risks, which found that AI is both a source of risk for malfeasance and a source of opportunity.
What did change with the repeal is that tech companies building powerful AI models no longer have to share details with the government about the inner workings of their systems, including safety test results, before those systems are released to the public.
The US AI Safety Institute (AISI) was created by the Biden administration a little over a year ago. In March 2024 it was allocated a budget of $10 million, which is nothing compared to the $66 billion AI industry in the US, and in August 2024 it signed agreements with Anthropic and OpenAI to collaborate on AI safety research.
So is the AI Safety Institute going to disappear under Trump? AI safety research is often seen as nothing more than a blocker to innovation, so the institute may not align with the rest of Trump’s agenda. Having said that, Trump says he is worried about deep fakes and AI leading to catastrophic escalation, so maybe we’ll have to wait and see. It seems likely that the US will at least be less willing to cooperate on AI safety internationally, which could accelerate the race to the bottom. Everyone wants to be first to any new AI capability, including AGI and ASI, and if the US continues to cut corners on safety, other countries will too, making it less likely that the first AGI will be deployed safely. On the other hand, China, having passed more stringent AI safety regulations, could win the trust of the world, even though such regulations are designed to serve the Chinese Communist Party.
Energy Cost of AI
Trump has advocated for ensuring that the US maintains its lead in the AI race, even at the cost of environmental concerns such as the large amount of energy that data centres consume, which will only grow in the future.
Generative AI requires a lot of data and compute, which means a lot of energy. The simple view would be that a new AI data centre will use a lot of electricity, and that will probably mean a lot of emissions unless the grid develops more clean energy. But this view treats the whole country as one power grid you can just draw energy from, and it’s a lot more complicated than that. Location matters, and constraints such as how much electricity can flow through a single transmission line dictate where new generation and demand can be located. You can’t just plug into the grid and use as much energy as you want. One way to solve this is to have dedicated on-site power generation and battery storage.
To meet this energy demand, some tech companies are looking at securing contracts for, or investing in, dedicated nuclear energy. For example, Microsoft signed a 20-year deal to bring the Three Mile Island nuclear power plant back online to power its AI ambitions.
A few months ago, OpenAI pitched the Biden administration on massive data centres that would require 5 GW of power each, enough to power roughly 3 million homes, or about as much as several large nuclear reactors produce. NextEra Energy CEO John Ketchum said they had already had requests from tech companies to find sites that could support 5 GW of demand on the grid. He admitted that finding such a site would be hard, but said he could easily find places that could fit 1 GW of demand.
Leopold Aschenbrenner wrote in his 2024 report Situational Awareness that we might expect the first trillion-dollar server cluster by 2030, which would require a power supply of 100 GW, over 20% of the average power used in the US at any given time. That would require an enormous amount of dedicated, on-site power generation.
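Those figures are easy to sanity-check with some back-of-the-envelope arithmetic. Here’s a rough sketch; the per-home consumption and US generation numbers below are approximate public figures I’m assuming for illustration, not values from the sources above:

```python
# Rough sanity check of the power figures above.
# Assumed inputs (approximate, order-of-magnitude public figures):
AVG_HOME_KWH_PER_YEAR = 10_500   # typical annual US household electricity use
HOURS_PER_YEAR = 24 * 365
US_AVG_GENERATION_GW = 470       # ~4,100 TWh/year of US generation, averaged over the year

avg_home_kw = AVG_HOME_KWH_PER_YEAR / HOURS_PER_YEAR   # ~1.2 kW per home, averaged

datacenter_gw = 5
homes = datacenter_gw * 1_000_000 / avg_home_kw        # 1 GW = 1,000,000 kW
print(f"{datacenter_gw} GW data centre ~ {homes / 1e6:.1f} million homes")

cluster_gw = 100
print(f"{cluster_gw} GW cluster ~ {cluster_gw / US_AVG_GENERATION_GW:.0%} of average US generation")
```

Depending on the per-home figure you assume, 5 GW works out to somewhere between three and four million homes, the same ballpark as the figure quoted above, and 100 GW comes out at just over 20% of average US generation, matching Aschenbrenner’s claim.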
However, DeepSeek claims its R1 model is significantly more efficient than OpenAI’s and other competitors’, which may make energy a less important factor, and distributed data centres, rather than one enormous data centre, are starting to seem more likely. That said, others dispute that R1 is as efficient as it seems, and it may be too soon to say.
DeepSeek has also cast doubt on the ambitious $500 billion investment in the Stargate Project, and on whether the US AI boom is yet another technology bubble. If it is, capital injections into US AI development could fall sharply while Chinese companies continue to access government support to close the gap.
Let’s see what Trump has to say about energy.
Project Stargate
Project Stargate (no, not that Stargate; sorry, Teal’c) is launching with support from the Trump administration, and it seems like a pretty big deal. US companies Oracle and OpenAI and Japanese company SoftBank are investing up to $500 billion to start a new company called Stargate, which will focus on building data centres and generating the electricity to meet the demands of AI.
This is private money, secured by SoftBank, so what’s this got to do with President Trump? Well, Trump says he will use executive orders to make sure the infrastructure and energy projects go ahead smoothly.
Stargate already has 10 data centres under construction in Texas with more on the way across the US of A. As Zvi Mowshowitz pointed out, this suggests that at least some of this isn’t new work, and was already in progress.
Elon Musk is seemingly unhappy with the announcement, tweeting that they don’t have the money. You may recall that Musk was a co-founder of OpenAI before leaving its board of directors in 2018, and the two are now engaged in a legal battle, with Musk suing Sam Altman.
Besides this, Musk’s own AI company xAI wasn’t involved, and he’s probably feeling a little slighted after helping out with the Trump campaign, to the tune of $277 million, making him the largest single donor in the whole election.
Allies
Speaking of Musk, who has been made head of the “Department of Government Efficiency”, charged with auditing the US federal government: what do Trump’s closest allies think about AI? Opinions seem mixed. Musk has been concerned about the existential risks of AI for some time, which didn’t stop him from starting his own AI company. Vice President JD Vance, meanwhile, sees existential-risk concerns as a distraction used to bring about regulations that would “entrench the tech incumbents”, a strategy known as regulatory capture, saying new regulation would “make it actually harder for new entrants to create the innovation that’s going to power the next generation of American growth”. Regulatory capture has definitely happened in other industries, so it’s a reasonable thing to worry about; one example is the Federal Communications Commission selectively granting communications licenses to more powerful radio and TV stations while excluding others.
But the question is: are the examples Vance is talking about actually regulatory capture, or are they a case of a concerned industry being proactive? There are plenty of people who are concerned about AI safety and want more regulation but have no large financial stake in AI. Beyond regulatory capture, JD Vance seems to focus on regulation and application-level issues like chatbots.
Allies and friends of the United States do appear concerned about AI safety. The EU, for example, has addressed safety concerns in the EU AI Act. It is not yet clear how the EU will respond to the deregulation of AI in the United States.
Change of plan
What does all this mean for how we approach AI safety? Suggestions in an Effective Altruism Forum post by LintzA include switching our communications strategy to appeal to a different crowd and building coalitions with different types of actors. We might also want to consider whether it’s time to stop being so cautious with our comms and start ringing the alarm bells. Take all this with a grain of salt, and not as advice per se.
Conclusion
The Trump administration has shown great interest in supporting AI development to maintain US global AI hegemony. President Trump appears to have the political will to foster AI advancement, but the safety aspect of his AI agenda remains unclear for now, which is troubling. This is not to say that the Trump administration has no plan for safety, but it appears to prioritize AI advancement and national security at the expense of safety, a contrast with the Biden administration.
The race to AGI is on, and Trump wants America to win.
This could be a really important time to donate to AI safety organisations. If AGI really is only a few years away, the window for having an impact on AI safety is closing. Please consider supporting organisations like the Center for AI Safety and the Institute for AI Policy and Strategy, both of which I’ve linked to in the description.
We didn’t have time to touch on it in this video, but what happens if AI becomes sentient? I’ve covered this in a previous video, which you can find here. See you soon.
Credits
Elon Musk: CC-BY 2.0, Tesla Owners Club Belgium, https://www.flickr.com/photos/teslaclubbe/12271217906/
Stargate: CC-BY 2.0, qeloghwi, https://www.flickr.com/photos/20600946@N04/8805260473
Teal’c: CC-BY 3.0, VulcanSarek22, https://www.deviantart.com/vulcansarek22/art/Lieutenant-Commander-Teal-c-363559252
DeepSeek: CC-BY 4.0, AP, https://www.freemalaysiatoday.com/category/business/2025/01/31/south-korean-watchdog-to-question-deepseek-over-user-data/
OpenAI logo: CC-BY 4.0, https://www.freemalaysiatoday.com/category/business/2019/07/23/microsoft-to-invest-us1-bil-in-openai/
Taiwan map: CC-BY 4.0, Crocodile2020, https://commons.wikimedia.org/wiki/File:Taiwan%27sReliefMap-3.jpg
Semiconductor: CC-BY 2.0, Yellowcloud, https://commons.wikimedia.org/wiki/File:EPROMs_National_Semiconductor.jpg
Batteries: CC-BY 4.0, https://www.freemalaysiatoday.com/category/business/2021/08/02/blaze-at-tesla-big-battery-site-in-australia-under-control-after-3-days/
Softbank: CC-BY 4.0, Reuters, https://www.freemalaysiatoday.com/category/business/2024/05/13/japans-softbank-narrows-full-year-loss-on-ai-pivot/
JD Vance: CC-BY 2.0, Gage Skidmore, https://www.flickr.com/photos/gageskidmore/53809627400/
Alarm bell: CC-BY 2.0, Ben Schumin, https://www.flickr.com/photos/schuminweb/9778619044
Donald Trump: CC-BY 2.0, Gage Skidmore, https://www.flickr.com/photos/gageskidmore/53953046884
Joe Biden: CC-BY 3.0, The White House, https://www.whitehouse.gov/briefing-room/speeches-remarks/2021/01/20/inaugural-address-by-president-joseph-r-biden-jr/