
On November 5, AI is also on the ballot

By Ali Nouri | October 31, 2024

Image: a ballot with the word "AI" written on it going into a ballot box. The choice Americans make this November will determine whether they will continue to lead a collaborative effort to shape AI's future according to democratic principles. Illustration edited by Erik English; original from DETHAL via Adobe.

Artificial intelligence represents one of the most consequential technologies of our time, promising tremendous benefits while posing serious risks to the nation's security and democracy. The 2024 election will determine whether America leads in ensuring AI develops safely and in alignment with democratic values, or retreats from that crucial role.

AI promises extraordinary benefits—from accelerating scientific discoveries to improving healthcare to boosting productivity across our economy. But realizing these benefits requires what experts call "safe innovation": developing AI in ways that protect American safety, security, and values.

Despite its benefits, the varied risks associated with artificial intelligence are significant. Unregulated AI systems could amplify societal biases, leading to discrimination in crucial decisions about jobs, loans, and healthcare. The security challenges are even more daunting: AI-powered attacks could probe power grids for vulnerabilities thousands of times per second, launched by individuals or small groups rather than requiring the resources of nation-states. During public health or safety emergencies, AI-enabled misinformation could disrupt critical communications between emergency services and the public, undermining life-saving response efforts. Perhaps most alarming, AI can lower the barriers to developing chemical and biological weapons, putting devastating capabilities within reach of individuals and groups who previously lacked the expertise or research skills.

Recognizing these risks, the Biden-Harris administration developed a comprehensive approach to AI governance, including the landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The administration's framework directs federal agencies to address the full spectrum of AI challenges. It establishes new guidelines to prevent AI discrimination, promotes research that serves the public good, and creates new initiatives across government to help society adapt to AI-driven changes. The framework also tackles the most serious security risks by ensuring that powerful AI models undergo rigorous testing so that safeguards can be developed to block their potential misuse—such as aiding in the creation of cyberattacks or bioweapons—in ways that threaten public safety. These safeguards preserve America's ability to lead the AI revolution while protecting our security and values.

Critics who claim this framework would stifle innovation would do well to consider other transformative technologies. The rigorous safety standards and air traffic control systems developed through international cooperation didn't inhibit the airline industry; they made it possible. Today, millions of people board planes without a second thought because they trust in the safety of air travel. Aviation became a cornerstone of the global economy precisely because nations worked together to create standards that earned the public's confidence. Similarly, catalytic converters didn't hold back the auto industry: They helped cars meet growing global demands for both mobility and environmental protection.

Just as the Federal Aviation Administration ensures safe air travel, dedicated federal oversight in collaboration with industry and academia can ensure responsible use of artificial intelligence applications. Through the recently released National Security Memorandum, the White House has now established the AI Safety Institute within the National Institute of Standards and Technology (NIST) as the primary US government liaison for private sector AI developers. This institute will facilitate voluntary testing—both before and after public deployment—to ensure the safety, security, and trustworthiness of advanced AI models. But since threats like bioweapons and cyberattacks don't respect borders, policymakers must think globally. That's why the administration is building a network of AI safety institutes with partner nations to harmonize standards worldwide. This isn't about going it alone; it's about leading a coalition of like-minded nations to ensure AI develops in ways that are both transformative and trustworthy.

Former President Trump's approach would be markedly different from the current administration's. The Republican National Committee platform proposes to "repeal Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology." This position contradicts the public's growing concerns about technology risks. For instance, Americans have witnessed the dangers children face from unregulated social media algorithms. That's why the US Senate recently came together in an unprecedented show of bipartisan support to pass the Kids Online Safety Act by a vote of 91-3. The bill provides young people and parents with tools, safeguards, and transparency to protect against online harms. The stakes with AI are even greater. And for those who think establishing guardrails on technology will hurt America's competitiveness, the opposite is true: Just as travelers came to favor safer aircraft and consumers demanded cleaner vehicles, they will insist on trustworthy AI systems. Companies and countries that develop AI without adequate safeguards will find themselves at a disadvantage in a world where users and businesses demand assurance that their AI systems won't spread misinformation, make biased decisions, or enable dangerous applications.

The Biden-Harris Executive Order on AI establishes a foundation that must be built upon. Strengthening the United States' role in setting global AI safety standards and expanding international partnerships is essential for maintaining American leadership. This requires working with Congress to secure strategic investments for AI safety research and oversight, as well as investments in defensive AI systems that protect the nation's digital and physical infrastructure. As automated AI attacks become more sophisticated, AI-powered defenses will be crucial to protect power grids, water systems, and emergency services.

The window for establishing effective global governance of AI is narrow. The current administration has built a burgeoning ecosystem for safe, secure, and trustworthy AI—a framework that positions America to lead in this critical technology. To step back now and dismantle these carefully constructed safeguards would surrender not just America's technological edge but the ability to ensure AI develops in alignment with democratic values. Countries that don't share the United States' commitment to individual rights, privacy, and safety would then have a greater voice in setting the standards for technology that will reshape every aspect of society. This election represents a critical choice for America's future. The right standards, developed in partnership with allies, won't inhibit AI's development—they'll ensure it reaches its full potential in service of humanity. The choice Americans make this November will determine whether they will continue to lead a collaborative effort to shape AI's future according to democratic principles or surrender that future to those who would use AI to undermine our nation's security, prosperity, and values.

