
Trump’s potential impact on emerging and disruptive technologies

By Sara Goudarzi | November 6, 2024

Tesla CEO Elon Musk with Donald Trump during a campaign rally on October 5. Trump has suggested Musk could lead a government efficiency commission under his administration. Credit: Jim Watson/AFP/Getty Images/TNS/Alamy Live News

In a stunning comeback, Donald Trump won the 2024 presidential election and will serve as the 47th president of the United States. As with all presidents, Trump’s (second) term in office will affect many aspects of policy and the economy. This includes technology, a sector where the president-elect is sure to have input from allies—including, prominently, Silicon Valley magnates Elon Musk and Peter Thiel—all of whom will have their own motivations.

What exactly happens in the next four years with regard to a slew of tech-related issues and policies—among them artificial intelligence, autonomous vehicles (e.g., drones), and cyber-based misinformation and disinformation—remains to be seen. But given Trump’s statements and previous record, experts in disruptive technologies have some idea of what might be coming. We asked a variety of those experts to weigh in and will be publishing their responses as they come in.

–Sara Goudarzi

The responses have been lightly edited for length and clarity.

AI regulation

What kind of AI regulation should we expect from a second Trump administration? Here are three related sources of evidence that bear on this question. First, consider the Trump campaign’s own statements. Trump has said he’d repeal the “radical leftwing” Biden Executive Order on AI on day one. It’s unclear what material effect this would have: Federal agencies will continue to exercise oversight on many aspects of AI deployment in the normal course of business. But the anti-regulatory rhetoric is very clear, and the issue is slowly being made partisan, despite widespread public support for meaningful regulation, even among Republicans. And widespread public support is insufficient to motivate regulatory action: A comfortable majority of Americans have wanted stricter gun regulation for the past decade.

Second, consider likely agency staffing. Trump hasn’t said who he’ll put in charge of national AI policy in his second term, but among his most vocal backers (and funders) are Silicon Valley venture capitalists such as Marc Andreessen, a self-described “techno-optimist” and “AI accelerationist” who wants to transform humanity into a race of “technological supermen.” Such backers are very likely to be consulted when the Trump administration decides who will determine how fast large technology companies are allowed to move, and what things they are allowed to break.

Even if you think AI acceleration is a good idea (and it really might not be), it’s unclear whether a second Trump administration will effectively staff agencies in a way that efficiently promotes that goal. Trump has said he’ll put RFK Jr., a man who incorrectly believes both that vaccines cause autism and that they are (mostly) ineffective, in charge of infectious diseases. It’s hard to imagine how RFK Jr. will be an effective leader at the Centers for Disease Control and Prevention. If Trump decides to put someone who incorrectly believes that giant monopolist technology companies have the American public’s interests at heart in charge of agencies designed to keep Americans safe from advanced AI (such as the recently formed Artificial Intelligence Safety Institute), it’s equally difficult to imagine that they’ll be effective leaders at those agencies.

Finally, consider the dynamics resulting from the Supreme Court’s recent decision to overturn four decades of precedent under which courts were typically expected to defer to regulatory agencies in deciding how to interpret their necessarily wide legislative mandates. The court’s rejection of this doctrine (“Chevron deference”) heralds a sea change in the ability of all federal regulatory agencies to exercise meaningful oversight. In particular, it makes effective rulemaking prohibitively costly in cases where reasonable people could reasonably disagree about how, exactly, the legal mandate of a federal agency should be interpreted.

Unfortunately, we can expect AI regulation to be rife with this kind of reasonable disagreement. For this reason, the Court’s widely panned decision to overturn Chevron might turn out to be the most important factor in deciding the nature and scope of a second Trump administration’s regulation of AI. Trump has effectively tied his own hands: Insofar as his administration has any interest in regulating AI through federal agencies, those agencies will now face systematic barriers to doing so—enabled by Trump’s first-term decision to appoint Supreme Court justices with very specific and often anti-regulatory legal ideologies.

–Nate Sharadin, a research fellow at the Center for AI Safety, where he works on ethical issues involving AI alignment, research, and effective regulatory approaches to advanced AI systems

Trump and the AI-defense ecosystem

Trump’s victory is likely to bring a period of uncertainty regarding responsible military AI and the development of international norms or laws for the technology in defense ecosystems. While details were at times thin, the Biden administration did make progress in laying out a pathway for national security institutions to develop and integrate AI in a responsible manner. Moreover, the administration sought to promote the development of international AI governance regimes. Documents such as the Responsible AI Strategy and Implementation Pathway laid out ethical principles for Department of Defense use cases of the technology, which more recent policy—like the Data, Analytics, and AI Adoption Strategy—builds on.

Outside the Department of Defense, the Biden administration released a memo in October related to national security use cases of AI; the memo’s clear priorities included promoting responsible and trustworthy AI and fostering international AI governance. The question is whether the Trump administration will look to break away from the Biden-era inclination to promote responsible defense AI development at home and abroad. It seems likely that Trump will pursue a less stringent regulatory environment for AI, a trend that could very easily bleed into government relationships with private organizations involved in developing defense technologies. This is particularly the case given the close relationships between Trump, JD Vance, and Big Tech players like Peter Thiel.

Moreover, prospects for international cooperation on defense-related AI governance are likely diminished as a consequence of an anticipated fraught relationship with international allies and partners. This could be exacerbated by the pressures of defense technology competition with big players such as Russia and China, further limiting the United States’ involvement in global defense AI governance as the new administration looks to avoid any international constraints on military technology development.

–Ian Reynolds, a pre-doctoral fellow at Stanford’s Center for International Security and Cooperation and the Stanford Institute for Human-Centered Artificial Intelligence and a PhD candidate at American University’s School of International Service

Misinformation, disinformation, and social media

Given the Trump campaign’s history of spreading political disinformation, its long-running animus toward social media platforms that banned users for spreading COVID misinformation and election-denial content (bans it claims were an attempt to censor conservatives), and congressional Republicans’ targeting of social media researchers, any reforms that deal with harmful content related to politically sensitive topics will likely be impossible for the next four years.

In the short term, the impact of continued federal inaction depends greatly on the social media companies themselves. While we should expect X (formerly Twitter) to continue to be a hotbed for conspiracy theories and other harmful content given owner Elon Musk’s close association with the Trump campaign, other platforms may choose to hold the line on their content-moderation policies, if for no other reason than that it makes good business sense. As X’s financial freefall demonstrates, advertisers do not want their ads sandwiched between harmful posts.

In the longer term, much depends on how the Trump administration and Congress choose to challenge social media companies’ content-moderation policies. In his first term, President Trump took umbrage at Section 230 of the 1996 Communications Decency Act, which establishes that online platforms are not considered publishers and are therefore not legally responsible for user-generated content; he signed an executive order that attempted to reinterpret Section 230 in a way that would make platforms liable for political censorship.

Any new attempts to restrict platforms’ ability to moderate their own content will likely be fought out in federal courts for years to come. But bipartisan reforms centered on foreign influence campaigns and platform accountability for non-politicized harmful content may still be possible. Foreign influence campaigns present a clear opportunity for bipartisan consensus: The spread of disinformation by geopolitical foes such as China and Iran represents a national security risk that concerns a number of Republicans as well as Democrats, and both parties could cooperate in advancing legislation that confronts this threat. Similarly, Section 230 reform could be carried out in a positive, bipartisan way that holds platforms accountable not just for foreign influence campaigns but for a broad array of harmful content that is not mired in partisan politics, such as content that promotes violence, eating disorders, or suicide.

Improving our ability to understand online harms is another area for possible bipartisan consensus; information-environment researchers desperately need large investments in scientific infrastructure that can speed policy-relevant research, create consistent measurements, and offer privacy-preserving data access. Additionally, there is hope that regulatory action elsewhere will have positive effects on the US information environment.

The European Union’s Digital Services Act, which entered into force in February 2024, contains a host of provisions designed to protect against harmful content, including mis- and disinformation, as well as provisions liberating data for researchers working on identifying and mitigating systemic online harms. Similarly, US states have begun to take a more active role in regulating the information environment. Two such examples come from New York State, which recently passed the Child Data Protection Act, restricting the processing of personal information of minors under 18 without consent (a higher threshold than the federal age of 13), and the Stop Addictive Feeds Exploitation for Kids Act, which requires social media companies to obtain parental consent to show algorithmically generated feeds to children under 18.

Given the difficulty of tailoring platform policy to the regulations of multiple countries and US states, regulation elsewhere may de facto result in platforms applying the same policy responses within the United States—even in the absence of significant regulatory action at the federal level. While bipartisan reforms and non-federal regulatory action are unlikely to go as far as those concerned about the information ecosphere may have hoped, limited progress towards a healthier information environment remains possible over the next four years.

–Sean Norton, a postdoctoral fellow at the Princeton School of Public and International Affairs

Tech policies: Innovation or corporate overreach?

In his second term, Donald Trump promises to bring a wave of deregulation to the tech world, relying on corporate interests to push American innovation forward. Depending on his cabinet appointments, which may not resemble a traditional bureaucratic crowd, the incoming administration’s approach may embrace reducing oversight on AI, easing antitrust rules, and reducing regulations on cryptocurrency. This noninterventionist approach raises important questions about how well innovation, profit-seeking, and the public interest can be balanced.

Trump’s AI policy will prioritize private sector gains. With the help of the new Congress, the incoming administration will likely concentrate on rapid progress over strict regulations. This less-strict regulatory climate might help companies move fast—but without adequate safeguards. Trusting businesses to self-regulate is risky and naive; without regulatory and ethical boundaries, prioritizing speed and competition with China could end up sidelining consumer safety and privacy.

Regarding antitrust, Trump’s administration is likely to take a softer stance than the Biden administration, allowing Big Tech to further consolidate. Lawsuits against major players like Google and Apple may continue, but Trump’s view on mergers and acquisitions could enable corporate giants to expand without much interference. Yet narrowed market diversity could make it harder for smaller companies to compete. Monopolies tend to hinder innovation by curbing competition, and the new administration, with many friends in the sector, is expected to assume that these tech giants will still drive growth. This major blind spot overlooks how monopolies prioritize market share and profit over real advancement.

Trade and tariff policies add another layer of complexity. Trump has signaled heavy tariffs on Chinese tech imports to reduce dependency on Beijing and support domestic manufacturing. Depending on how hawkish the new administration proves to be on China, however, consumer costs will likely rise, deepening economic inequality. In the end, the average American household will take the hit for any significant increase in tariffs.

As for cryptocurrency, Trump’s expected rollback of regulations could be a boon for the sector, encouraging investment. But there are risks: Deregulated financial products often raise red flags about consumer protection and financial stability. Financial deregulation has led to crises in the past, and trusting private interests to self-regulate hasn’t always worked out well for the average investor.

Supporters of this laissez-faire approach argue that it will spark a new era of American innovation, but placing so much trust in corporate interests to act in the public’s best interest is fraught with risk. Unchecked corporate power has consistently shown a tendency to prioritize shareholder profit over the common good. In the tech industry, the stakes are exceptionally high, ranging from deteriorated digital privacy to a destabilized financial world.

Ultimately, the extent to which Trump’s promises materialize will depend partly on his cabinet choices. Conflicting interests—such as Elon Musk’s ambitions versus traditional oil interests or hawkish China policies—could battle to dictate priorities. Deregulation may promise growth (a questionable argument in and of itself), but without safeguards it risks leaving Americans unprotected, to the benefit of profit-seeking firms.

–Yusuf Can, a coordinator for the Middle East Program (MEP) at the Wilson Center

Artificial intelligence governance

Governance and regulation of AI may not be the first policy priority for the second Trump administration, but it will likely look to put its own stamp on how the US government approaches AI risks amid international technology competition. American strategic goals related to AI are unlikely to change under the Trump administration: The United States will still focus on leading globally in AI development, pursuing military applications, and advancing AI progress amid competition with China. The means the Trump administration will use to advance governance, however, remain to be seen.

Executive branch efforts may look familiar. The first Trump administration left office before major breakthroughs in large language models grabbed headlines, but President Trump did sign executive orders calling for the development of technical AI standards and testing, tasking the Office of Management and Budget with providing guidance to agencies on regulating AI where applicable, and promoting the use of trustworthy AI in the Federal Government. The Biden administration carried on in this vein but provided much greater specificity in tasking agencies to explore governance in its executive order on AI and October’s National Security Memorandum on AI, which included instructing agencies to use existing authorities to control AI applications. The Trump administration may cancel the Biden executive order and National Security Memorandum on AI but could allow agencies and bodies like the AI Safety Institute or the National Institute of Standards and Technology to continue implementing Biden’s detailed efforts. Whether such initiatives retain the political (and actual) capital necessary to flourish is to be determined.

State-level regulatory initiatives and voluntary company standards may ultimately become more important for addressing near-term harms from AI, depending on whether the administration chooses to set guardrails for the private sector (both the first Trump and the Biden administrations largely pursued voluntary measures with leading AI firms). Personnel will shape policy: A focus on long-term risks from foundation models could come at the expense of shorter-term challenges like bias or disinformation. The major question will be whether Trump 2.0 builds on or disrupts existing AI governance progress.

–Owen Daniels, associate director of analysis and Andrew W. Marshall Fellow at the Georgetown Center for Security and Emerging Technology (CSET)

Military drones

I would expect the next US president to pay attention to maintaining, and further developing, the technological edge of military drones. I would therefore anticipate an interest in expanding drone capabilities across all operating domains. This is especially likely to be the case regarding maritime drones, which seem to promise versatile applications. The new US leadership is also likely to be enthusiastic about further experimentation with artificial intelligence to enhance the autonomy of uncrewed vehicles. I would be surprised if, under the new administration, drones did not become agents of algorithmic warfare. And I would expect this without US endorsement of much international regulation.

However, major challenges lie ahead in countering small drones. The United States Armed Forces will need to stay ahead of adversaries and improve the protection of American military bases at home and abroad against drone threats, both those that spy and those that kill. At the same time, the pressure to keep the cost of new drone capabilities low should force the new president to create more Replicator-like projects. This will also require navigating the intricacies of the nascent defense technology industry and adjusting procurement processes for software-heavy innovations in drone technologies.

–Dominika Kunertova, research scientist in international security and emerging technologies

Light touch on tech policy and regulation

I think we have every reason to expect that a Trump administration is going to have a light touch in terms of tech policy and regulation. Trump has received support from tech accelerationists, has a vice president who was a Silicon Valley venture capitalist, and is being advised by supporters like Elon Musk who have expressed concerns about the onerousness of regulations on innovation. Those supporters are motivated by backlash against a Biden administration that they viewed as skeptical toward tech, whether through FTC Chair Lina Khan’s reach, that of SEC Chairman Gary Gensler on cryptocurrency, or the Biden Executive Order on Artificial Intelligence. I suspect Trump will appoint more tech-friendly individuals to those roles, and he has said he would repeal the Biden executive order. However, I think we have reason to believe he will keep the export controls on the semiconductor chips that are at the center of the tech and geopolitical competition with China and will continue to use tariffs—as in his first term and in the Biden administration—to encourage domestic manufacturing, including in the tech space. Overall, though, I would predict greater friendliness toward the tech sector, manifested as fewer regulations and fewer antitrust cases against the tech industry.

–Sarah Kreps, the John L. Wetherill Professor in the Department of Government, adjunct professor of law, and director of the Tech Policy Institute at Cornell University

An all-hazard approach to AI

The future of artificial intelligence will crystallize over the next four years, coinciding with a second Trump administration. It could be a period marked by more powerful AI models and a geopolitical and corporate race to develop the most advanced AI. These capabilities could be increasingly incorporated within weapons systems, critical infrastructure, and broader society. This period could also be marked by barriers to continued unfettered progress. Onerous energy requirements, talent shortages, semiconductor constraints, and unforeseen limitations on algorithmic improvements might hold developers back. Perhaps it will be some combination of the two. Regardless, choices made by President Trump and his team will shape this path.

The Trump Administration will need to face the national and economic security threats posed and heightened by AI. Increasingly powerful AI systems could worsen proliferation of chemical and biological weapons, disrupt already-weak nuclear stability arrangements, feed into a hyperactive and muddied information ecosystem, and disempower the very working class that Trump seeks to protect. These challenges are not partisan issues. And the need for the national security community to manage them will not dissipate with a change in Administration.

The prior Trump Administration had the foresight to establish a body of AI policy before artificial intelligence garnered significant attention following the release of ChatGPT and other models. For example, in one White House memorandum, agencies were encouraged to “be mindful of any potential safety and security risks and vulnerabilities, as well as the risk of possible malicious deployment and use of AI applications” and to “consider, where relevant, any national security implications raised by the unique characteristics of AI and AI applications and take actions to protect national security.” The Trump Administration also agreed to the OECD AI Principles, including that “AI systems should be robust, secure and safe throughout their entire lifecycle.” A continuation of these policies as a baseline for addressing AI safety and security should be expected.

Even if the next Administration revokes or revises other existing executive orders on AI, the risks of AI development will not disappear. In many ways, agencies were building on the original Trump policies. For example, the Department of Homeland Security released a report on the intersection of AI with chemical, biological, radiological, and nuclear (CBRN) threats. It found that “As AI technologies advance, the lower barriers to entry for all actors across the sophistication spectrum may create novel risks to the homeland from malign actors’ enhanced ability to conceptualize and conduct CBRN attacks.” These concerns have been raised across the political spectrum, and the Department of Homeland Security can capitalize on this work to institute other measures that reduce AI risk.

The future of AI governance will also include policies that are, perhaps counterintuitively, not specific to artificial intelligence. The laws and policies needed to reduce the risk of an AI-related catastrophe are also relevant to all hazards—many of which require their own upgrading and reform given the various global threats we face. These could include crisis planning, resilience of critical infrastructure, and emergency management. For example, the Federal Emergency Management Agency develops Federal Interagency Operational Plans (FIOPs), which lay out the roles and responsibilities, coordination mechanisms, and guidance for responding to a range of crisis scenarios. The Global Catastrophic Risk Management Act requires these plans to be updated to better consider global catastrophic risk, including from AI. The expectation should be that these efforts will continue in the next Trump Administration, as exemplified by the valuable leadership displayed in reforming core all-hazard policies related to continuity of operations and continuity of government in the final months of his last term.

–Rumtin Sepasspour, cofounder and director of policy at Global Shield, an international organization advocating for policy action to reduce global catastrophic risk

