
What the G7 countries should do at their next summit to regulate AI

By Elena Simperl, Johanna Walker | January 31, 2024

Artificial intelligence will be on the G7 Summit’s agenda this summer. (Photo: G7 Italia sign with flags of participating nations. Image by FlyOfSwallow via Adobe Stock)

With last fall’s UK-hosted AI Safety Summit now firmly in the rearview mirror, this summer’s G7 Summit is on the horizon, where artificial intelligence promises to again be on the agenda. And rightly so: The impact of AI on humankind is set to dwarf previous step changes in technology. Artificial intelligence offers both tremendous potential and immense risk. Accordingly, its regulation is important business for governments at national and international levels.

At November’s summit, countries across the world—including the United States and China—signed the Bletchley Declaration. In it, nations pledged to tackle “frontier AI” together, identifying AI safety risks of shared concern and collaborating on policies to mitigate these risks. Frontier AI is defined as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.”

International collaboration is undeniably needed to ensure safety in AI systems, so it is encouraging that the declaration had global reach, included nations like China, and builds on ongoing cross-government work by the OECD, the Global Partnership on Artificial Intelligence (GPAI), and other fora. If agreement on AI can be reached in an international setting like the upcoming G7 Summit, real change can follow at national levels too.

However, an agreement on frontier AI, which is primarily concerned with large language models, is not enough by itself. The focus is currently on frontier AI as opposed to foundational AI (models trained to perform a wide variety of tasks). Further, the concerns surrounding the existential risks of frontier AI are at best overblown and at worst a distraction. Instead, regulation should require that safety and responsibility be ingrained in all AI systems, so that they address the social inequalities that plague training data, the people who produce and label that data, and the people who rely on AI systems. Such regulation must also include provisions to educate the public as these systems continue to develop, so that end users understand the capabilities and limitations of AI applications; in other words, so they become more AI literate.

New forms of generative AI, primarily OpenAI’s ChatGPT and Google’s Bard, have come to dominate media headlines recently. Yet a whole range of very powerful AI systems in use over the last decade, and even earlier, had already been causing real harms and exacerbating existing inequalities in society. For example, major age and race biases exist in autonomous vehicle detection systems: A person is more likely to be struck by a self-driving car if they are young and Black than if they are white and middle-aged. These biases exist because the data car firms use to train their models are often unrepresentative, skewed towards white people in middle age.

Studies have also shown that some AI facial recognition systems, which have existed in some form since the 1960s but are far more widely used in this century, detected Black women far more accurately when they wore a white mask, revealing a bias towards white male faces. Pedestrian detection systems associate being white with being a pedestrian, resulting in unfair and potentially life-threatening outcomes.
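To make these disparities measurable, a common first step is disaggregated evaluation: scoring a detection model separately for each demographic group rather than reporting a single overall accuracy. The sketch below is a minimal illustration of that idea in Python, not a method from any of the studies cited; the detector, the example records, and the group labels are all hypothetical placeholders.

from collections import defaultdict

def disaggregated_recall(examples, detector):
    # examples: iterable of (image, group_label, contains_pedestrian) tuples
    # detector: any callable returning True when it detects a pedestrian
    # Both are hypothetical placeholders used only for illustration.
    hits, totals = defaultdict(int), defaultdict(int)
    for image, group, contains_pedestrian in examples:
        if not contains_pedestrian:
            continue  # recall only counts images that really contain a pedestrian
        totals[group] += 1
        if detector(image):
            hits[group] += 1
    # Recall per group; a large gap between groups is the kind of disparity described above.
    return {group: hits[group] / totals[group] for group in totals}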

These kinds of safety problems need to be tackled now, not at some point in the future.

AI regulation should deal not just with data bias, but also with data appropriateness. If you train an AI model on data that is unrepresentative, of poor quality, or not matched to the model’s function, the results will suffer. It is not enough for data to be clean or representative: It also needs to be fit for its intended purpose. In a recent experiment, a GPT-3.5 model trained on 140,000 Slack channel messages was asked to write content. The system replied, “I shall work on that in the morning.” The response reflected what users were saying in their work chats when asked to do the same thing. Instead of writing emails, blogs, and speeches as requested, the model echoed the concerns and habits of its dataset, putting the work off until the next day. By using a fundamentally unsuitable dataset, albeit one that superficially appeared appropriate, the model performed an entirely different function than anticipated.

To avoid these pitfalls and perils, experts must move to a more data-centric view of AI. This means applying the best-practice principles used to manage risks in AI models to the datasets on which those models are trained. Experts then need to take active steps to establish the tools, standards, and best practices that ensure this data is accurate, representative, and free of bias.
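As a rough illustration of what such a data-centric check might look like in practice, the sketch below compares a training dataset’s demographic make-up against a reference population and flags under-represented groups. It is our own minimal example, not a tool named in this article; the column name, group labels, and reference shares are placeholders.

import pandas as pd

# Hypothetical reference shares for the population the model is meant to serve.
REFERENCE_SHARES = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

def audit_representation(df, column="demographic_group", tolerance=0.05):
    # Share of each group actually present in the training data.
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in REFERENCE_SHARES.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "dataset_share": share,
            "reference_share": expected,
            # Flag groups that fall short of their reference share by more than the tolerance.
            "under_represented": share < expected - tolerance,
        })
    return pd.DataFrame(rows)

# Example use: audit_representation(pd.read_csv("training_metadata.csv"))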

How AI is handled or trained is a social issue, as well as a technological one. The organizations involved in creating or enriching data must bring in underrepresented groups to ensure they’re making fair decisions and that their models can be used safely and responsibly. Furthermore, this ethical approach needs to be ingrained across the data value chain. For executive boards this means understanding (and mitigating) the social impact of their products; for the gig workers who are actually labelling data to train AI systems this means fairer working conditions. Embedding fairness for AI end users will need regulation of practices all along this chain.

The demand for reliable and fair data for AI training purposes is urgent. Researchers predict that big tech will run out of high-quality training data by 2026, so the need for representative, ethical, and usable data will only become more pressing. Engineering groups that straddle academia and industry, such as MLCommons, an AI engineering consortium, are working to create open and diverse datasets for the commercial applications end users are most likely to interact with. But there is much more to do.

So, what about the existing frameworks of AI ethics? Ethical principles for the design, development, and deployment of AI systems have been established by organizations like UNESCO, but studies with AI engineers show time and again that there is too much ambiguity in how to operationalize them.

Global standards on AI ethics, such as UNESCO’s 2021 “Recommendation on the Ethics of Artificial Intelligence,” adopted by all 193 member states, look to enshrine a “human rights approach to AI” built on principles of responsibility, transparency, privacy, and explainability. Experts have attempted to empower AI practitioners to audit their work against established ethical frameworks, including by incorporating active ethical monitoring alongside existing legal regulations such as the General Data Protection Regulation. However, low research funding suggests that governmental support has so far not extended beyond establishing frameworks to operationalizing them. To ensure that AI regulation goes beyond the splash of a new framework announcement, the work needs to be put in at the operational level.

Ahead of the 2024 G7 meeting, lawmakers thinking about effective AI regulation should also focus on AI literacy. AI systems are an increasingly large presence in human lives, whether that’s in automated human resources systems, self-driving vehicles, or generative AI such as ChatGPT. But do users really know how these systems work, or what the hidden costs behind them might be?

As AI usage becomes widespread, more energy is required to train and run these ever-larger systems. Research from the University of Pennsylvania suggests that if humanity continues on its current trajectory of AI usage, global electricity consumed by computers could rise by anywhere between 8 and 21 percent by 2030, exacerbating the current energy crisis.

One study even found that training a single large language model generated 626,155 pounds of carbon dioxide, approximately the amount emitted by 125 round-trip flights between New York City and Beijing.
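To put those figures in more familiar units, the quick back-of-the-envelope calculation below uses only the two numbers quoted above; the per-flight share and the metric-ton conversion are our own arithmetic, not the study’s.

# Figures quoted in the text above.
TRAINING_EMISSIONS_LBS = 626_155   # CO2 from training one large language model
ROUND_TRIPS = 125                  # New York City to Beijing round-trip flights

LBS_PER_METRIC_TON = 2_204.62

per_flight_lbs = TRAINING_EMISSIONS_LBS / ROUND_TRIPS            # about 5,009 lbs per round trip
total_metric_tons = TRAINING_EMISSIONS_LBS / LBS_PER_METRIC_TON  # about 284 metric tons in total

print(f"{per_flight_lbs:,.0f} lbs per round trip, {total_metric_tons:,.0f} metric tons overall")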

Similarly, when people use AI, they are often unaware of how it arrives at its conclusions, some of which may be inaccurate. Increasingly, artificial intelligence is being used to decide whether an individual qualifies for a mortgage or will be covered by health insurance, but it is not clear to the average consumer when exactly this is occurring or how the model is reaching its conclusions, let alone whether those conclusions are fair.

It is the job of AI practitioners to help individuals on this journey: to clearly communicate the scope and benefits of AI technologies and to be transparent about their costs. Empowering people to use AI with confidence will help adoption become widespread enough that the benefits truly reach everyone.

Ultimately, the G7 Summit is an opportunity for global leaders to once again try to get AI regulation right. To do so, they must move beyond the flash and noise of large language models and tackle the AI systems already doing harm today. This will involve looking beyond the technical aspects of AI to its social aspects as well, and to how humans as a society want to integrate diversity, trust, safety, and fairness into the datasets they use.

The task is a difficult one, but the cost of failure will be a world in which the AI systems of this year and the next fail to deliver on the promise of a better world.


