Military Applications of AI

Defending against “The Entertainment”

By William Regli, April 25, 2018

Amid the published angst about AI and its hypothetical threats, more attention ought to be given to the threat that AI-enabled entertainment poses to our brains and our civilization.

“the so-called perfect Entertainment… that danger of Entertainment so fine that it will kill the viewer… The Entertainment exists.”

Infinite Jest (1996), David Foster Wallace, pp. 318–319

 

I have yet to try out any serious augmented reality games. Stuff like “Pokémon Go” actually scares me a bit.

I have a memory from some years ago about another hot video game. I’d heard about it from a friend who is a medical doctor: “It’s called Doodle Jump, it’s addictive!” he warned me. Indeed, in navigating to its entry in the Apple iPhone App Store, one sees the sort of label normally deployed by the Surgeon General for products like cigarettes: “Doodle Jump. BE WARNED: Insanely Addictive!” and “WARNING! Doodle Jump is addictive and will cause you to lose large amounts of your day.”

With a recommendation by a trusted friend to try this new game and not wanting to be left out of an “everyone’s doing it” moment, I downloaded Doodle Jump, which I later learned was the product of two Croatian brothers (Igor and Marko Pusenjak) working out of their garage. I saw that it featured an adorable central protagonist, hilarious antagonists, and rhythmic gameplay that gently stroked the dopamine receptors in my brain. I played. And I played. And I played. How long has it been? My doodler falls, and I start over again. Surely I could do better?

Doodle Jump was released on April 6, 2009 … or April 6 in the “Year of the Depend Adult Undergarment” (YDAU), if we go by the “revenue enhancing subsidized time” chronology of the late David Foster Wallace’s (aka ‘DFW’) 1996 magnum opus Infinite Jest. Infinite Jest is a masterful post-modern commentary on the human condition, with central motifs about addiction, tennis, and something called The Entertainment. The Entertainment is a movie directed by an auteur as his final film, and it is so compelling, so mind-bendingly addictive, that it completely captivates those who view it, to the point that they forget about all else, are rendered catatonic, and eventually die as a result of just staring at the screen. I have begun to wonder if we, on this nonfiction side of the line, have reached the era of The Entertainment and, if so, how concerned should we be.

Doodle Jump, suitable for ages 4-plus, has been downloaded over 10 million times since its release. Reports indicate that Pokémon Go was downloaded more than 15 million times in its first seven days of availability.

Considerable and increasing press coverage and academic discourse have fretted about the potential existential threat posed to humanity by lethal autonomous weapons (sometimes called “killer robots”), runaway artificial general intelligence (AGI), and the possibility of machine sentience. Some even attempt to draw an analogical equivalence between lethal AI or AGI and the threat to humanity posed by the existence of nuclear weapons. Indeed, the package in which this article appears was created in coordination with a workshop that examined the issues that might arise if AI ever meets nuclear weapons. The trajectory of advances in AI and machine learning points to the possible (some would say “likely”) creation of weapons systems that are more autonomous and less completely under the control of humans. Still, while these are matters for grave concern, we can find plenty of evidence that the age of existentially threatening autonomy has not yet arrived (see http://www.rand.org/pubs/research_reports/RR443-2.html, for example) and, in the opinion of a similarly large and vocal cohort of scientists (see https://rodneybrooks.com/blog/), is likely to still be some ways off.

What I do find concerning, amid all of the published angst about AI and its hypothetical threats, is how little chatter there is (especially in the mainstream computer science literature) regarding the potential threat to humanity posed by AI-enabled entertainment, or about what the computing and communications technology enabling The Entertainment might be doing to our brains and our civilization. Indeed, might AI-empowered, big-e Entertainment be worthier of our immediate collective concern than any potential threat from killer robots or future AGI alone?

Events like the 2016 release of Pokémon Go add to the evidence that we are already living in the age of The Entertainment: TV shows are specifically designed to be binge-watched, websites are clickstream-optimized to maximize our clicking, and injected advertising hijacks our eyeballs. Products are recommended to us, relieving us of having to think to purchase them ourselves. Our social media feeds and the text of news articles are automatically synthesized. The rise of widely accessible augmented and virtual reality has created new opportunities. DFW’s hypothetical future state of a life-overtaking Entertainment has jumped across the fiction/nonfiction line. The Entertainment exists.

Would that we had a better scientific understanding of the technologies we unleash. I have begun to think computer scientists need to assume appropriate responsibility for what is unfolding. Where to start?

One might take the position that the study of issues like Internet addiction, cognitive processes associated with computer-immersive behavior, the physiology of identity formation, and the like is a matter for psychiatrists, anthropologists, and medical doctors—not computer scientists. Rather than addressing these issues in an analytical manner, we are often naïvely rushing toward them, as the highly addictive nature of our technical creations makes for great business plans. Demonstrating you can monetize a new form of digital addiction could, after all, make for a great round of ‘Series A’ fundraising—no matter how dangerous or unexpected that addiction could be to society.

Something is wrong here, and I would argue these issues should be core computer science questions. Consider the early days of nuclear science and how our understanding of such phenomena has evolved. Marie Curie kept radium in her desk drawer, and Enrico Fermi, with famous nonchalance, operated the first nuclear pile under the University of Chicago football stadium. After many lessons learned, nuclear scientists themselves developed an understanding of nuclear materials, and defined how those materials should be regulated and what training was needed to handle them safely.

In computing, we are beginning to see such controls around matters of cryptography, cyber security, and cyber weapons. But I know of no such regulations and controls on the distribution of The Entertainment and the central role AI and machine learning technologies are playing in its creation and distribution.

In today’s computer science field, what form might such controls and safety measures take? To even get that question on the table will require convincing software engineers that their programming products (a game, an online advertising technique, etc.) may need to be treated as a potential danger on par with bio-engineered life forms and radiological materials. Although not all software poses a potential health hazard, the basic skills of programming required to create safe or “hazardous” software are the same. Any education of software and programming professionals needs to explore these issues within the technological and scientific context of the discipline.

Many of these computer- and device-anchored technologies present moral quandaries that strike at the heart of ethical codes for our professional societies. For example, rule #1 of the Institute of Electrical and Electronics Engineers (IEEE) code of ethics obliges members “to accept responsibility in making decisions consistent with the safety, health, and welfare of the public, and to disclose promptly factors that might endanger the public or the environment.” The Association for Computing Machinery’s code says, in its very first sentence, “As an ACM member I will … [c]ontribute to society and human well-being,” and later, “[a]void harm to others.” The moral and social implications of our technological creations need to become pillars of the education and training of computing professionals. Additionally, we need to create some means of discourse by which we can objectively evaluate such technologies and the societally challenging issues they might create—and do so in a manner consistent with the value we place on personal liberty and rights to free expression.

So here is where the need to expand the traditional notion of “computer science” becomes evident: Can we even answer questions about the safety, health, and welfare of the public when it comes to computation-based Entertainment? By what means would we objectively label something “hazardous” or assign it a rating? And by rating I do not mean a subjective rating (PG, G, NC-17, etc.), nor do I mean the broad but controversial research literature on the dangers of video games, television, movies, and the like. Given that the Church-Turing Thesis is a founding tenet of computer science, we can acknowledge that the human brain is a type of computing machine. When we deploy The Entertainment, what we really are doing is programming a human-machine system—part of the program runs on silicon in the computer, and part of it runs on the carbon and chemicals in our human brains. Other areas of engineering have ratings based on rigorous scientific standards: The bridges we drive on are rated for loads; the crashworthiness of vehicles is simulated and physically measured; medicines have recommended dosages; our aircraft are subject to rigorous flight-testing. We need to develop the computer science needed to rate potential risks and harms when we are programming the human-machine system.

Let’s choose an optimistic viewpoint, at least for the moment. Given that we are at the precipice of the age of symbiotic human-machine systems, how might the technologies of The Entertainment be deployed in a positive way, perhaps better channeling humanity’s collective intellectual energies into things that are awesome? During the early nuclear era, President Eisenhower had his “Atoms for Peace” initiative, and a similar positive spirit is certainly the focus of those looking seriously at human computation, crowdsourcing, and other human-machine teaming approaches. One of Pokémon Go’s successes, after all, is getting kids and adults out and running around, interacting in the physical world instead of just gluing themselves to a screen. I see a vast opportunity landscape, but fully exploring it requires that we expand the notion of “computer” to include the carbon-based ones: We are no longer just programming computing machines; rather, we are programming a human-machine system.

As we run this global-scale, uncontrolled social science experiment that intricately links our lives to artificially intelligent computers, what changes to our species will result? I’m reminded of a famously Strangelovian footnote to the Manhattan Project: The scientists observed that it was scientifically plausible that the fission reaction of an atomic bomb could trigger a fusion reaction among the nitrogen nuclei in the Earth’s atmosphere and, as a consequence, create an inferno that would incinerate the entire planet. While such a result was considered only remotely likely, the scientists at Los Alamos actually studied the question, used their science to calculate what might happen, and ultimately dismissed this particular apocalyptic possibility.

Yet, while some worry about nonexistent and potentially impossible-to-create sentient AI (AGI), we as a society are pouring tens to hundreds of millions of dollars into digital technologies that distract, manipulate, program, and Entertain us. Fifteen million downloads of Pokémon Go in one week? What science could have calculated that, or assessed what the end state of this societal chain reaction might be? What might it look like if an Entertainment were weaponized? We might imagine a system that aims to create a lasting effect on the human side of the human-machine system, chemically altering our brains, manufacturing opinions, reinforcing patterns, and exacerbating behaviors that are not in our individual or collective interest. Frightfully, such a scenario does not require imagination. The Entertainment exists.

I’d like to hope the results of our thus-far-uncontrolled experiment in Entertainment will be a net positive. But if the potential threat posed by AGI and autonomous weapons technologies merits the level of discussion and sense of urgency it currently receives, it would seem that we also need to have an even more urgent and vigorous discussion about the technology and morality of The Entertainment. We have to make the development of the new computer science and the moral and ethical norms of The Entertainment immediate priorities for the discipline of computer science and for our society.

And we need to start now … right after I get that Pikachu off my front stoop.
