There were plenty of alarming and dispiriting developments in the world of nuclear weapons policy this year, but none garnered as much public attention as a historical film that exploded onto the big screen in late July. Christopher Nolan’s biopic Oppenheimer dramatized the life of J. Robert Oppenheimer, the physicist who led the development of the atomic bomb at New Mexico’s Los Alamos Laboratory in the 1940s.
The critically acclaimed film was a box-office smash hit, introducing a new generation to the history of nuclear weapons. It didn’t hurt that Oppenheimer opened on the same day as Greta Gerwig’s playful satire Barbie. The striking dissimilarity between the two movies became the basis for jokes, memes, and other “Barbenheimer” comparisons that began on the internet and quickly became a cultural phenomenon that motivated many moviegoers to see both films.
Contributors to the Bulletin’s “Voices of Tomorrow” section, which features essays and opinion pieces by rising experts, offered some thoughtful reflections on the movie event of the year. They were among the best pieces published in the section in 2023.
This year’s sampler leans heavily toward nuclear weapons, which seem to be on the minds of more young people than in other recent years. But the collection also includes some pieces at the intersection of nuclear weapons and two other Bulletin core topics, artificial intelligence and climate:
By Emily Strasser
The author, whose grandfather worked at the Oak Ridge site where the US government enriched uranium for the Hiroshima bomb, expresses hope that Oppenheimer will build support for nuclear abolition. But she criticizes the film for showing no images of Hiroshima or Nagasaki, for neglecting the impact of fallout from the Trinity test, and for depicting the horror of nuclear war only through Oppenheimer’s imagination.
By Emily Faux
In this entertaining essay, the author dissects the “Barbenheimer” phenomenon and offers readers thought-provoking insights into how nuclear weapons and war are perceived in popular culture. She writes about why pink and black are associated with Barbie and Oppenheimer, respectively; what lies behind humorous discussions of the proper order in which to view the two movies; and what “Barbenheimer” reveals about the gender divide in the nuclear realm.
By Louis Reitmann and Sneha Nair
This piece makes a well-argued case for including LGBTQ+ people in nuclear policy discussions—not to advance any social agenda, but rather to improve nuclear security and reduce nuclear risks. The authors explain why the perspectives of queer people are relevant and beneficial for nuclear policy outcomes.
By Kayla Lucero-Matteucci
Regular readers of the Bulletin know that it focuses on existential threats such as nuclear weapons, climate change, biological pathogens, and artificial intelligence. The Bulletin also pays special attention to how these threats increasingly intersect with one another in ways that can create new risks and amplify existing ones. The author of this “Voices” piece recognizes this growing convergence, warns that the nuclear field is too insular, and calls for a cross-disciplinary approach to researching and mitigating major global risks.
By Cameron Vega
This piece examines the intersection of two enormous threats to humanity: nuclear risk and climate change. Experts and policymakers have long recognized that nuclear war would have devastating climate effects, triggering a years-long “nuclear winter” that would disrupt food and agriculture systems. But those climate impacts have not been incorporated into US nuclear weapons policies and strategies, even though they have major implications for any contemplated nuclear weapons use. Climate science can shed fresh light on the dangers of nuclear weapons and revitalize the field of nuclear ethics, the author writes.
By Peter Rautenbach
This writer also explores the intersection of two threats: nuclear and artificial intelligence. The prospect of fully automated military systems that remove humans from decision making has already raised concerns about “killer robots.” This piece goes a step further, warning that the proposed solution—keeping humans in the loop—is not enough to make AI systems safe, and in some cases could even make AI systems less safe. The dangers include the human tendency to become overly reliant on automated systems and to unconsciously assume that they’re working correctly.