By Annie Dorsen, October 27, 2022
In the mid-1960s, artists started using computers to expand the possibilities of visual art, music, and poetry. That generation learned how to code by reading phone-book-sized manuals and committed themselves to programming. Algorithmic artist Roman Verostko, a member of this early group, drew a contrast between the process an artist develops to create an original algorithm and the process of using an already developed set of instructions to generate an output. It is, he explained, “the inclusion of one’s own algorithms that make the difference.”
I’ve experienced this difference firsthand. In my theater works to date—a combination of algorithmic art and performance—I’ve collaborated with programmers to design and write the code that generates each evening’s show. For an upcoming piece, however, I wanted to explore text- and image-generating software like OpenAI’s GPT-3, DALL-E, and Midjourney. The results were as I suspected. Writing one’s own code feels like any other art-making process: full of uncertainty and trial and error, as the artist gropes around for a method that will produce the desired results. Plugging prompts into artificial intelligence, or AI, models, on the other hand, feels more like playing the slots, robbing the creator of a process that stretches the imagination.
Promoters of AI tools like to talk about user creativity. But I’ve found the relationship between my prompts and the results to be unsatisfyingly obscure. It’s almost impossible to know what difference using one input word versus another makes, since the algorithms that produce the resulting text or images are entirely locked down and inaccessible. Further, all algorithms have computational bias: They allow certain possibilities and preclude others, and this bias is often hidden from the user. If users don’t know why the model responds as it does, or what data it’s using, or how adjusting the code might affect the outcomes, the only real pleasure is in seeing the result. And then seeing what they get if they try it again. What will it make if they do it now? Or what if they change a word and try again? Pay another nickel and pull the lever one more time.
In a recent New York Magazine article, David Holz, the founder of Midjourney, commented on his model’s “default styles.” He described the aesthetics in vague, catch-all terms, such as “imaginative, surreal, sublime, and whimsical,” before getting down to brass tacks: “It likes to use teal and orange.”
Much has been written about the cheesy aesthetics of AI-generated art. A bigger issue is the exploitation of living artists, who are neither credited nor compensated for the use of the existing works that feed the programs’ datasets. (It’s also worth noting that even as these companies scrape millions of images from the internet, appropriating the work of others for their own commercial ends, the code running these models is protected by copyright.) Others have also pointed out the enormous energy consumption of AI models, and the massive amounts of user-generated data collected by them. All of that is true.
But there should be an even deeper concern: These tools represent the complete corporate capture of the imagination, that most private and unpredictable part of the human mind. Professional artists aren’t a cause for worry. They’ll likely soon lose interest in a tool that makes all the important decisions for them. The concern is for everyone else. When tinkerers and hobbyists, doodlers and scribblers—not to mention kids just starting to perceive and explore the world—have this kind of instant gratification at their disposal, their curiosity is hijacked and extracted. For all the surrealism of these tools’ outputs, there’s a banal uniformity to the results. When people’s imaginative energy is replaced by the drop-down menu “creativity” of big tech platforms, on a mass scale, we are facing a particularly dire form of immiseration.
By immiseration, I’m thinking of the late philosopher Bernard Stiegler’s coinage, “symbolic misery”—the disaffection produced by a life that has been packaged for, and sold to, us by commercial superpowers. When industrial technology is applied to aesthetics, “conditioning,” as Stiegler writes, “substitutes for experience.” That’s bad not just because of the dulling sameness of a world of infinite but meaningless variety (in shades of teal and orange). It’s bad because a person who lives in the malaise of symbolic misery is, like political philosopher Hannah Arendt’s lonely subject who has forgotten how to think, incapable of forming an inner life. Loneliness, Arendt writes, feels like “not belonging to the world at all, which is among the most radical and desperate experiences of man.” Art should be a bulwark against that loneliness, nourishing and cultivating our connections to each other and to ourselves—both for those who experience it and those who make it.