
Will the Paris artificial intelligence summit set a unified approach to AI governance—or just be another conference?

By Mia Hoffmann, Mina Narayanan, Owen J. Daniels | February 6, 2025

Stakeholders at the Paris summit will need to parse how different actors understand the concept of governance. Image: Sophie Animes - stock.adobe.com

Early next week, Paris will host the French Artificial Intelligence Action Summit, yet another global convening focused on harnessing the power of AI for a beneficial future. One of the conference’s key themes is devising structures to employ AI for good, with the primary aim being “to clarify and design a shared and effective governance framework with all relevant actors.”

While the summit’s intent is admirable, this goal has been attempted numerous times with limited success, given the challenges of getting nations with different priorities on the same AI page. It also ties into a broader concern for artificial intelligence in 2025: how (and even whether) governments and the companies creating AI will approach developing and controlling powerful new AI systems in a responsible way. In the past few weeks alone, China’s DeepSeek R1, a model approaching OpenAI’s o1 performance at a reportedly much lower cost, hit the market; President Trump announced the OpenAI-SoftBank-Oracle Stargate Project, a $500 billion plan to build data and computing infrastructure in the United States; and his administration quickly rescinded the Biden administration’s executive order focused on AI safety and testing standards.

New models are arriving on the scene, and massive business interests hope to drive AI advancements forward, full steam ahead. Safety has largely been given lip service, if even that.

Stakeholders at the Paris summit will certainly have a full agenda. They will need to parse how different actors understand the concept of governance. They will draw on insights from the European Union, the United States, and beyond about the opportunities and limits of regulatory, safety-based, and voluntary governance commitments, and whether wholesale or patchwork approaches are preferable. The organizers appear set on presenting an internationally unified way forward, attempting to set the global agenda for the year on safely and responsibly harnessing AI. However, the unique ways in which AI is evolving around the globe and governments’ different preferences for harnessing private sector innovations may make reaching international consensus on AI governance challenging.

Governance: What’s in a word? A key question in the year ahead will be how key actors in the AI ecosystem interpret governance. At present, governance is a malleable, catchall term that governments, companies, and other stakeholders have used to refer, alternately, to regulating AI models (or the weights that shape their behavior) and the companies that produce them; technically assessing safety and risk through testing and evaluation; or even controlling the diffusion of the components needed to create foundational AI models, which are large-scale, general-purpose neural networks. Deciphering how companies and governments interpret AI governance can show whether they agree about how to safely harness powerful new systems.

As summit organizers rightly point out, countries and multilateral groupings have developed governance frameworks in parallel, including the G7’s Hiroshima AI Process, the UK’s Bletchley Declaration, and the private sector-driven Tech Accord to Combat Deceptive Use of AI in 2024 Elections. These efforts are similar in some areas, such as naming broad objectives for mitigating risks to societies and humanity, but differ enough in their technical specificity and target audiences that they are not easily integrated. Despite the summit’s lofty goals, trying to arrive at a single internationally recognized framework for AI governance could prove both challenging and frustrating. Governments that take different approaches to regulating their private sectors may not agree on a single understanding of what regulatory governance looks like, and companies have incentives to interpret AI governance in line with their business interests. These are challenges that international summit meetings will continue to struggle to resolve. Acknowledging the strategic ambiguity of the term and sharing specific best practices on governance may be more productive for making progress on various regulations, standards, or voluntary commitments around AI.

How will the EU’s regulatory approach fare? For the European Union, governance has primarily meant regulation. Last year, the European Union forged ahead by adopting the world’s first comprehensive regulatory framework for the development and deployment of AI. The AI Act marks a significant milestone for artificial intelligence governance and, given the lack of comparable regulations, provides a unique blueprint for other nations: South Korea and Brazil proposed subsequent regulations with strong parallels to the European Union framework. The European Union has gained a leadership position in governance by being the first to lay down rules of the road, significantly influencing how AI might be regulated even beyond its borders. While other nations are still weighing their strategies toward governance, representatives from the EU and its member states can leverage their unified vision at the summit and use it to push for an international governance framework that adopts key characteristics from the AI Act.

The European Union’s attention has now shifted toward implementation. The AI Act relies heavily on technical standards and codes of practice to define what compliance looks like for developers. From an AI governance perspective, the ongoing efforts to define compliance requirements will be critical for determining whether the act can successfully mitigate risk and protect European citizens from algorithmic harms, such as exposure to AI-generated disinformation or discriminatory bias from systems trained on unrepresentative data sets. From a foreign policy perspective, implementing the regulation offers the European Union another opportunity to expand its influence over global AI governance efforts. As the United States government reassesses its approach to AI oversight and the future of the US AI Safety Institute remains uncertain, the European Union is poised to fill the gap by shaping global interpretations of what “trustworthy AI” looks like through standards and best practices.

What is the new American approach? Across the Atlantic, the change to a new administration in the United States has left the future of federal AI policy in flux. President Trump has rescinded the Biden administration’s Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of AI and directed a review to assess where the order was at odds with enhancing America’s global AI dominance and “human flourishing, economic competitiveness, and national security.” Given that Biden’s executive order represented a meaningful step toward developing robust government standards and testing protocols, the findings of the Trump administration’s review may illuminate how AI safety standards and evaluations, and the bodies that conduct them, fit into an “America First” vision.

Executive Order 14110 distributed responsibilities for AI testing and evaluation across the US government. The US AI Safety Institute, under the Department of Commerce, has developed guidance and best practices for supporting safe and trustworthy AI. Under the previous administration’s policies, the institute was tasked with conducting voluntary testing of private sector frontier AI models before and after public deployment and with testing for AI risks alongside other government agencies, including the departments of Energy and Homeland Security. These bodies focus primarily on evaluations of AI national security risks (including chemical, biological, radiological, and nuclear risks), which may incentivize the Trump administration to allow their work to continue. More clues about whether the AI Safety Institute persists and which aspects of AI safety are salient to the Trump team will unfold in the months ahead. The United States’ willingness (or unwillingness) to commit to proposals presented in Paris may hint at the extent to which safety figures into the administration’s larger AI strategy and whether the new administration views governance as a unilateral or multilateral exercise.

A patchwork quilt of governance? As centralized American governance measures undergo a period of transition, states may take up the mantle of regulation. Over the past year, several American states have passed AI-related laws and others are poised to move ahead in 2025. These laws can provide meaningful oversight but also form a patchwork of state regulations that may unevenly protect against risks.

Although the evolution, and ultimate demise, of California’s SB 1047 grabbed headlines in 2024, California also enacted a series of bills that crack down on certain deepfakes, protect performers’ digital likenesses, and promote digital literacy. Colorado enacted a law mirroring parts of the EU AI Act, requiring developers and deployers of high-risk systems to use reasonable care to protect consumers from foreseeable risks of algorithmic discrimination. Tennessee legislators passed the ELVIS Act, giving individuals property rights over the use of their name, photograph, likeness, or voice. Early this year, New York legislators introduced a bill inspired by SB 1047 that provides whistleblower protections and requires AI companies to develop model safety plans, and New York’s Department of Labor is planning to require that employers disclose when mass layoffs are related to AI adoption. Other states will likely impose additional requirements over the course of the year.

The state-based approach showcases how regulation may be approached on an issue-by-issue basis, which has the advantage of targeting specific types of AI harm but could create gaps or compliance difficulties for firms across state lines. Considering the merits and drawbacks of such an approach at the Action Summit could offer insights for the way ahead.

The opportunities and limits of a voluntary approach. When regulation and legislation are politically infeasible, governance through voluntary commitments from the private sector offers an alternative path. However, the value of such commitments may be limited. Signatories are often incentivized to advocate for measures in areas where they invest anyway. For instance, commitments made by leading tech firms at the White House in 2023 covered practices companies already engaged in, like red-teaming and cybersecurity. One year later, companies had expanded their efforts in those areas, while commitments to transparency and information sharing saw less progress. This follows basic business logic: Enhancing product safety and protecting intellectual property are more aligned with business interests than sharing insights into limitations and internal processes.

But voluntary commitments that reframe existing business practices as meaningful action risk being perceived as participation trophies. They also fail to advance progress on more ambitious governance goals, like safety measures, energy efficiency, or labor standards in data supply chains. Legislators need to think harder about how governments and firms can ensure that voluntary steps entail real, significant investments toward harnessing AI responsibly.

There is no shortage of AI governance topics to tackle in the year ahead. DeepSeek R1 has raised policy questions, for instance, about the potential misuses and security issues that could arise from developing powerful open-source models and the extent to which they should be controlled. The Stargate Project, a four-year, multi-billion-dollar initiative, points toward shifting market dynamics as the AI ecosystem braces for the new administration’s approach. Meanwhile, the private sector appears intent on developing AI agents, aiming to imbue large language models and other advanced models with sophisticated embodied or digital capabilities. The Artificial Intelligence Action Summit can be a proving ground of ideas for national and state governments looking to implement AI governance in the year ahead, but stakeholders must be willing to explore different modes of governing these systems if they are to develop truly actionable recommendations. If not, the summit risks becoming yet another stop on the conference circuit.

