By Julie George, May 16, 2023
ChatGPT is here, and its user base is growing faster than TikTok’s.
In a 2017 survey by the AI software company Pega, only 34 percent of respondents thought they had used technology that incorporates artificial intelligence (AI). However, the actual number was much higher: 84 percent of the respondents—who included people in the United States, Britain, France, Germany, the Netherlands, and Australia—had used an AI-powered service such as a virtual home assistant, a chatbot, or software that makes predictive suggestions.
The United States needs better AI literacy and a deeper understanding of this emerging technology, as AI touches many aspects of American life, including defense. For example, in October 2022 the Defense Department issued a formal request for information—often the first step in the process of calling for bids and issuing a government contract—to identify sources for its AI, machine learning, and data science workforce. In its words: “As the [Defense Department] expands its workforce in the AI workspace, it is crucial that it maintains a qualified and experienced workforce that can match industry innovations both in speed and execution.”
This illustrates how AI has become a national security issue—and how an AI-educated workforce (and an AI-savvy Defense Department) is the way forward. As AI expands, it will be imperative to sharpen the United States’ thinking about how humans and AI can learn from one another, and to understand the opportunities, risks, and challenges of AI innovation. Although AI can be a complex technology to grasp, the Defense Department and other federal agencies can “decode” AI through training, initiatives, and investments that will help the country prepare for a future in which humans inevitably use AI on a greater scale.
To do so, the United States needs “human-centered AI,” which involves humans throughout the research, design, training, testing, and decision-making processes of AI systems. This approach leverages both machine and human intelligence.
But the country also needs AI-centered humans.
The national security link to AI. The 2018 Department of Defense Artificial Intelligence Strategy defined AI as the ability of machines to perform tasks that typically require human intelligence. The Pentagon expects AI to strengthen the military, increase the effectiveness and efficiency of operations, and enhance homeland security.
The US government invested $2.5 billion in AI research and development in fiscal year 2022, but the United States is not the only country ramping up federal spending on artificial intelligence. With respect to military funding of AI, an October 2021 report by the Center for Security and Emerging Technology estimated that annual Chinese military spending on AI was “in the low billions of US dollars.” According to the National Defense Industrial Association’s magazine National Defense, this level of funding of AI is on a par with the Pentagon’s investments. Other countries that are leading in AI investment activity include Israel, the United Kingdom, Canada, India, Japan, Germany, Singapore, and France. From both a national and global vantage point, it is clear that interest in artificial intelligence is expanding rapidly.
Many people already use AI regularly without realizing it—for example, through popular virtual assistants like Apple’s Siri and Google Assistant, quick language translation, the recommendation algorithms of major online platforms such as Amazon and YouTube, and the automatic tagging of objects or people in images. AI does all this without becoming the dystopian superintelligence that critics have been warning about for decades. Yet AI has its pitfalls. For example, research has shown that training datasets can amplify biases; algorithmic decisions can lack transparency and accountability; and biased criminal-justice algorithms make questionable predictions about sentencing.
Regardless of whether one is a technophile, technophobe, or in-between, we all need to recognize more nuance in the relationship between AI and humans. AI needs humans, and humans need AI.
AI needs humans. In recent months we’ve seen Elon Musk make drastic changes to Twitter. He disbanded the Human Rights Team led by Shannon Raj Singh and laid off thousands of Twitter employees—including much of the content moderation force. Behind computer screens, these teams worked to combat misinformation and disinformation, increase accessibility for people with disabilities, and protect users facing human rights violations worldwide. One team worked on ethical AI and algorithmic transparency.
Humans are crucial in these dynamic social settings, in both civilian life and the military. Ultimately, AI and its algorithms are constrained. For example, algorithms cannot understand parody, sarcasm, satire, or context the way a human can. Indeed, humans are fundamental to coding processes, AI systems, and platforms.
In 2020, the Defense Department adopted a set of ethical principles for AI, which apply to both combat and non-combat functions. Then-Secretary of Defense Mark T. Esper said, “The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order.” The principles focus on five critical areas: responsibility, equitability, traceability, reliability, and governability. At the core of each of these principles is the critical role of humans, who will exercise judgment and work to minimize unintended consequences and bias.
Humans need AI. AI can complete tasks and outperform humans in several notable areas. For example, AI could potentially provide more accurate medical diagnoses, especially in the fields of radiology and pathology, because of its ability to train on large sets of images, extract patterns through data mining, and identify specific, relevant features for diagnosis. Some research has shown that an AI program was able to detect breast cancer in mammograms, notably in the early stages of the disease. AI can also translate speech while preserving the speaker’s voice, as Google’s AI does; transcribe audio quickly; and proofread written work.
Moreover, AI can learn from AI. For example, Google’s AutoML and Microsoft’s DeepCoder can help build the next generation of AI. These two machine-learning systems can not only work with the code researchers give them but also explore how pieces of code fit together, how they work, and how to assemble new ones. In simple terms, AI can absorb large amounts of data, pick up on patterns, and produce relevant outputs at an incredible pace.
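To make that idea concrete, here is a minimal sketch of the search loop at the heart of AutoML-style systems: propose a candidate model design, score it, and keep the best one found. The dataset, search space, and parameter values below are illustrative assumptions for this sketch, not details of Google’s or Microsoft’s actual systems.

```python
# A toy illustration of the "AI building AI" loop behind AutoML-style systems
# (a hedged sketch, not Google's or Microsoft's actual method): randomly
# propose candidate neural-network configurations, score each by
# cross-validation, and keep the best design found.
import random

from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # small, built-in image dataset

# Hypothetical search space of model "designs" to explore.
search_space = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32), (128, 64)],
    "alpha": [1e-4, 1e-3, 1e-2],         # regularization strength
    "learning_rate_init": [1e-3, 1e-2],  # initial step size
}

best_score, best_config = -1.0, None
for _ in range(10):  # evaluate 10 random candidate designs
    config = {key: random.choice(values) for key, values in search_space.items()}
    model = MLPClassifier(max_iter=300, random_state=0, **config)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_config = score, config

print(f"Best design found: {best_config} (accuracy ~ {best_score:.3f})")
```

Real systems replace the random choices with far more sophisticated search strategies, but the core pattern of one program evaluating and improving other models is the same.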
AI is not just widely used in everyday life; society also cannot ignore the expanding use of artificial intelligence in warfare and future conflicts. Semi-autonomous drones, guided by human operators, are already being used in the Russia-Ukraine war for surveillance and target identification. One can imagine that AI and human operators will increasingly work together in such conflict settings, especially with more advanced drones. For example, the US Switchblade 600 requires a human operator to choose targets while viewing a live video feed.
One of the reasons people have distrusted AI is that its underlying algorithms are perceived as a “black box.” The lack of explanation for coding decisions, as well as for the datasets used to train the algorithms, creates the potential for bias in AI. Gaps in necessary skills, limits on data quality, and fear of the unknown are additional obstacles to bridging the divide between humans and machines.
While the barriers to adopting AI are challenging, they are not insurmountable. With increased AI literacy, present and future adopters of AI can work to develop, deploy, and use the technology in responsible ways.
Putting humans in the driver’s seat. “AI-centered humans” flips the concept of “human-centered AI” on its head. Rather than having humans interact at different stages of the decision-making process, AI-centered humans would, rightly, take the driver’s seat. For example, when the US Defense Department adopted its five ethical principles for the use of AI, it brought together AI experts from industry, government, and academia, along with members of the general public. Additionally, Stanford University’s Institute for Human-Centered Artificial Intelligence hosted its inaugural Congressional Boot Camp on AI last year, at which 25 congressional staff members from both parties discussed recent developments in AI. Dialogues like these are not siloed in the technical community. With diverse perspectives and expertise, and an increased understanding and awareness of AI and its applications, humans can better assess the risks, opportunities, and limitations of AI.
There have already been some significant initiatives on this front. For example, in June the Association for Computing Machinery will hold its sixth annual cross-disciplinary Conference on Fairness, Accountability, and Transparency, bringing together computer scientists, social scientists, legal scholars, statisticians, ethicists, and others who are interested in fairness, accountability, and transparency in socio-technical systems. The association is the world’s largest computing society, and its conferences are widely considered among the most prestigious in the field.
Germany has taken an AI-centered-human approach that is inclusive, evidence-based, and focused on capacity building. Specifically, the German Ministry for Economic Affairs and Energy funded a free online course, The Elements of AI, to increase AI literacy. Users can work through the course at their own pace, without prior coding experience or specialized math skills. This is a step in the right direction.
Moving forward, the United States should devote more national attention, financial resources, and programming to strengthening AI education across federal agencies and civil society. Perhaps more important, the US federal government should formalize an AI education strategy with timeline-specific goals, highlighting both short-term and long-term aims. Specifically, US policy makers need to prioritize an AI-informed society, ensure transparency, and better support the military.
There has been some progress along these lines; for example, the Pentagon’s 2020 AI Education Strategy highlights priority areas and skills required to accelerate AI adoption, from software and coding to data management and infrastructure. The strategy focuses on how to build up AI capabilities, raise AI awareness among senior leaders, and provide training on the responsible use of AI. While this is a good initial step, the strategy lacks a specific timeline.
In the past year, the Joint Artificial Intelligence Center rolled out AI education pilot courses for thousands of Defense Department employees, ranging from education for general officers to coding boot camps. It would be beneficial to extend these initiatives beyond the Defense Department and to organize them into annual, five-year, and ten-year plans. The United States would greatly benefit from robust educational initiatives and AI investments across its departments—especially Defense, Education, Homeland Security, and State—to strengthen the country’s national security.
In March 2021, former Google CEO Eric Schmidt and former US Deputy Secretary of Defense Bob Work, who led the National Security Commission on AI, wrote in the commission’s final report: “America is not prepared to defend or compete in the AI era.” However, this does not have to be the United States’ future when it comes to AI. Decoding AI through AI literacy is a critical national security issue. AI infiltrates almost all aspects of our daily lives in the United States. Governments, Big Tech, and the general public all have a vested interest in AI and its societal implications.
This entire op-ed was written by ChatGPT. Just kidding! Julie George (a human) wrote it.