Public health agencies are using AI chatbots to ease workloads. Is it a good idea?

By Kimberly Ma | December 21, 2023

The World Health Organization (WHO) headquarters. Credit: ©Yann Forget / Wikimedia Commons / CC-BY-SA.

Public health agencies may lose 130,000 workers by 2025, as low salaries, burnout, and other factors drive employees away. Better funding, scaled to the real risk of future pandemics, would help keep programs running smoothly, but government investment in public health has historically followed a boom-and-bust cycle—and it looks like that will continue for the foreseeable future. As a result, health departments are looking for ways to do more with less. Increasingly, they may be turning to a problematic but perhaps effective solution: artificial intelligence (AI) chatbots.

One prominent state-and-local-public-health association has been encouraging practitioners to consider AI’s potential, especially its ability to increase the public’s access to information. AI has long had an important role in data analysis. But now that powerful chatbots can mimic human language and turn data into coherent writing, a recent survey found that departments were already using AI to generate content, “including text generation for reports, first draft communications, [and] drafting job descriptions.” Understaffed and under-resourced public health agencies may soon be relying on AI systems like ChatGPT to produce both the reports that guide policy and action and the messages the public sees and hears. At the same time, while AI may do a good job in some circumstances, officials will have to grapple with how the technology can be used to spread health misinformation and disinformation, and with its well-documented capacity to simply make things up.

There’s a real risk that large language models like ChatGPT will contribute to online disinformation and misinformation. In a call earlier this year for the safe and ethical use of AI, the World Health Organization (WHO) worried that AI responses “can appear authoritative and plausible to an end user” but be “completely incorrect or contain serious errors, especially for health-related” matters. Similarly, the organization warned that AI may be “misused to generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content.” Just as media organizations have been caught publishing AI-generated content riddled with inaccuracies, public health workers need to ensure they are not accidentally producing well-intentioned deliverables with critical errors. And in an environment where adversarial countries, antivaxxers, and politicians operate individually or in networks to spread disinformation online, public health agencies will be up against bad actors wielding the same technology they do.

Researchers have already found bots and trolls spreading false messages about vaccines. It is not hard to imagine how increasingly “human-sounding” and authoritative bots or trolls could undermine risk communication in the next public health emergency. For its part, the WHO wants concerns about AI to be addressed “and clear evidence of benefit be measured before their widespread use in routine health care and medicine—whether by individuals, care providers or health system administrators and policy-makers.”


Researchers, however, are also looking into turning AI against malicious actors: using it to fact-check claims, identify trolls from behavioral cues and other data, and scan the web for proliferating misinformation.
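As a hedged illustration of that last idea, the toy sketch below trains a simple text classifier to flag posts that resemble known vaccine misinformation. The handful of example posts, the labels, and the flagging threshold are all invented for this demonstration; real systems combine far larger labeled corpora with behavioral and network signals.

```python
# Toy sketch: flag posts that resemble known vaccine misinformation.
# The training examples and threshold below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Vaccines cause more harm than the diseases they prevent",
    "The vaccine was approved after randomized controlled trials",
    "Doctors are hiding the truth about vaccine injuries",
    "Side effects are usually mild, such as a sore arm or fatigue",
]
labels = [1, 0, 1, 0]  # 1 = misinformation, 0 = reliable

# TF-IDF features over unigrams and bigrams, fed to a logistic regression
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "They don't want you to know what's really in these shots"
score = model.predict_proba([new_post])[0][1]
if score > 0.5:  # threshold chosen arbitrarily for the demo
    print(f"Flag for human review (score={score:.2f})")
```

Nothing this simple would survive contact with real disinformation networks, but it shows the basic pattern: models surface candidates, and humans make the call.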

I tested ChatGPT with questions on a range of topics—including infectious diseases, vaccinations, and even gun safety—just to see what it would generate. I even tried, repeatedly and intentionally, to trick it into producing incorrect answers. My experience suggested that, overall, ChatGPT’s information quality is robust; as a backstop, it also often redirects users to the original data source (for example, the websites of the Centers for Disease Control and Prevention, the Food and Drug Administration, and the WHO). While anecdotal, my experience suggests ChatGPT may have a role to play in public health communications.

The strength of ChatGPT and other large language models lies in their ability to ingest and digest amounts of data that would take a human far longer to work through. One study (interestingly, one in which the researchers used AI to write about AI) found that large language models can help produce literature reviews, summarize public health data, generate predictive models of public health outcomes, and even identify the over-prescription of certain medications. These tasks are all part of the day-to-day work of public health agencies. They also feed administrative efforts like grant-writing, which requires arduous reporting of both numbers and qualitative achievements.
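To make the drafting idea concrete, here is a minimal sketch of how an agency might ask a large language model to turn raw program statistics into a first-draft report section. It assumes the OpenAI Python client and an API key in the environment; the model name and the statistics are placeholders, and any draft would still need review by public health staff before use.

```python
# Minimal sketch: ask a large language model to draft a report section
# from raw program statistics. Assumes the OpenAI Python client (v1+)
# and an API key in OPENAI_API_KEY; the model name and the statistics
# below are placeholders for illustration.
from openai import OpenAI

client = OpenAI()

program_stats = """
Q3 vaccination clinic visits: 4,812 (up 12% from Q2)
Flu vaccine doses administered: 3,240
Community outreach events held: 17
"""

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You draft concise, factual summaries for a county "
                    "health department. Do not add numbers that are not "
                    "in the input."},
        {"role": "user",
         "content": f"Summarize for a quarterly grant report:\n{program_stats}"},
    ],
)

draft = response.choices[0].message.content
print(draft)  # a staff member reviews the draft before it goes anywhere
```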

If a large language model could quickly summarize the efforts of a public health agency and cut some of the time spent generating reports or assisting with grant-writing, that would free up time and resources for the existing workforce to provide the human touch where it is needed—be it staffing vaccination clinics or working directly with community members to address vaccine hesitancy.


Researchers and companies are also looking into using large language models for translation, an important part of public health work in many areas. As the models improve, they could break down language barriers and strengthen agencies’ ability to deliver risk communications and information to populations who speak less-commonly translated languages, including refugee and other immigrant communities.
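A hedged sketch of what that might look like: the snippet below asks a model to translate one advisory into several languages. It makes the same assumptions as the earlier sketch (OpenAI Python client, placeholder model name), and because machine-translation quality varies widely by language, review by bilingual staff remains essential.

```python
# Sketch: translate one advisory into several languages for community
# distribution. Same assumptions as the earlier sketch (OpenAI client,
# placeholder model name); bilingual staff should review every output.
from openai import OpenAI

client = OpenAI()
advisory = "Boil tap water for at least one minute before drinking."

for language in ["Spanish", "Haitian Creole", "Somali"]:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Translate this public health advisory into {language}: {advisory}",
        }],
    )
    print(f"{language}: {response.choices[0].message.content}")
```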

Large language models also have the potential to generate public health news directly and independently, though doing so will likely require human reviewers. While most public health agencies appear not to have created policies around such content generation yet, it could enhance their ability to communicate clearly and efficiently with their constituents.

This is not to say that AI should replace the human public health workforce. Health security scholars and public health workers should understand that accepting or incorporating AI into their world is not the same as allowing it to replace humans. Rather, with the right safeguards in place, including data security and the protection of personally identifiable information, AI assistants may be able to accomplish necessary but burdensome tasks more quickly.

Though there are indeed risks associated with large language models, AI also has real potential to improve rapid, widespread, and accurate public health information campaigns. And in an era when public health continues to struggle with insufficient funding and other problems, AI can lend the CDC and state and local health departments an extra hand as a force multiplier.

The question is not whether AI is coming to public health departments; the reality is that it is already here, with California and Pennsylvania openly announcing their intent to incorporate AI into their state agency operations. Officials will need to figure out how to capitalize on AI’s strengths and decrease its harms.

The views expressed in this article are my own and do not necessarily represent the views of the National Security Commission on Emerging Biotechnology, the Commissioners, Congress, or the United States Government.



