You Don’t Hate AI, You Hate Capitalism
I first tried AI the way many people did: I fell prey to a viral marketing trend. In late 2022, photo-editing app Lensa briefly broke the internet when users flooded social media with its uncanny AI-generated avatars. As my feeds overflowed with yassified portraits of friends in bootleg Marvel gear, I couldn’t help but think about my mother. What would Mom look like as an Avenger? I’d been making images of her in my art practice for more than 15 years, using every modality I could manage. AI seemed like a funny, if unexpected, next step.
My first reaction was simple amazement. The representational prowess of AI was shocking. But while the avatars in my feed were all GigaChads and Stacies in space, the renderings of my mom were remarkably more morose. The algorithm didn’t seem to know what to do with an androgynous barefaced older woman. The best it could manage were clichés of a cougar or a crone. I looked at my fierce friends and I looked at my mutated mom—all the normative visual tropes were there, but they were twisted, roughly hewn. Everything was so eerily familiar and yet so indescribably weird.
I suddenly thought, I’ve finally found the perfect tool.
Up until then, photography had been my primary medium, in large part for its dual ability to alienate a subject while also drawing it more into view. I had a little catchphrase: Taking a picture of trash is like crowning a prince. Conversely, making a postcard of a landscape can be a kind of dethroning, an unceremonious flattening. The mechanisms of this transformation have slowly revealed themselves to be one of my most important and bedeviling subjects (something I explore at length in my book Hello Chaos: a Love Story).
In photographing my mother, I had to confront the social conventions that underlie every aestheticized gesture. I crammed her into pitiless high heels and bound her breasts in forced masculinity. What voice did these visual conventions have, and how did they speak through the singular subject of my mother? How did my mom and my representations of her make these social and aesthetic expectations sing—or scream, or laugh, or cry? What assumptions do we draw upon to categorize what we see, and what are their political implications?
AI has, in essence, scaled up this analytical approach to representation and taken it to the next dimension. While photography represents what something is, AI represents how something works. It models and performs the visual connections we make and the embedded expectations that shape these connections. By turns elegantly and awkwardly, AI emulates the social conventions, visual expectations, and organizational structures that give pictures their meaning.
Mom (Blonde), 2015.
By trawling through the vast store of human production on the internet, AI systems have crystallized a unique form of collective knowledge. This machine form of intelligence, inaccessible in total to any individual artist or author, reflects, embodies, and amplifies the analytical intelligence that arises from our collective labor. When we look at the output of AI, we see alternately yassified and mutilated glimpses of ourselves and our communal structures. AI images are funhouse reflections of a sociopolitical reality receding in the rearview mirror.
Many people are unsettled by what they see in this warped reflection.
When I began sharing my experiments with generative AI—culminating in Cursed, a book of images that deliberately embraced the tool’s inherent visual distortions—I was quickly drawn into the heated debate on AI and creativity. Critics saw the wrongness depicted in my images as a direct analogy to the wrongness of the technology itself. In their view, creativity has an inherent, fixed morality that AI is poised to corrupt. They argue that AI’s crude interpretation of our collective consciousness threatens both the essence of creativity and the livelihoods of those who depend on it.
This raises fundamental questions: what exactly is creativity, and to whom does it belong? Who is its rightful beneficiary? And in defending it, what are we truly protecting, and what is at stake?
The most vehement criticisms of AI art revolve around two key, interrelated issues. First, critics contend that AI diminishes the creative process by replacing human imagination—which is inherently unpredictable and contextually rich—with a formulaic approach. In other words, the machine is doing all the meaningful work of art, and doing it wrong. Second, they argue that AI models are trained on ‘stolen’ artwork, which makes them fundamentally illegitimate. Essentially, AI art is cheating on the test and allows users to cheat too. It is both a spiritual and literal theft.
These criticisms are grounded in a deeply ingrained belief that originality is the definitive criterion for assessing creative value. AI art is often labeled derivative, seen as inherently secondary to the superior creativity of humans. True creative expression is posited as a sudden and novel rupture or disruption, a big bang of creativity, rather than a cumulative, collaborative process. There is a prevailing belief that the artist with the best ‘receipts’—the claim to being the first to express an idea in a particular form—should have sovereign rights over that expression.
Whose labor matters?
Mom (Goldfish), 2016.
The issue of originality raises further fundamental questions of identity and belonging: What differentiates me from others? How do I recognize myself and how do others recognize me?
These were central concerns in my work with my mom. As I relentlessly photographed her over the years, her image became tied to my public identity as an artist, and vice versa. People routinely mistook her for me in public, telling her that they loved her “self-portraits.” Who, then, was the author of this project? Who should get the artistic credit and the capital it generates—me or Mom (she was the one doing the thankless work of wearing heels, after all)? Or is it the audience that validates our idiosyncratic familial dynamic as a form of art? And what about the people who made my camera, my computer, or my editing software? Whose labor is most valuable?
Alan Turing, the grandfather of artificial intelligence, predicted that, in the face of the mechanical reproduction of their roles, the “masters [experts with specialized knowledge or skills] would surround the whole of their work with mystery and make excuses, couched in well-chosen gibberish, whenever any dangerous suggestions were made.” Creative labor has long been shrouded in such mystery. In the context of capitalism, art has always had to appeal to mysticism to justify its fundamentally unproductive, experiential nature. It is seen as an ineffable sacred act that supersedes the other labor that attends it. This has led to a personality cult of the individual creative genius who holds exclusive ownership of some magical artistic impulse. We celebrate Jeff Koons, not the assistants and fabricators who construct his work.
It makes sense, then, that some artists would be skeptical of a technology like AI that appears to be attempting, rather successfully, to lay bare the constituent parts of expression, potentially undermining the mystique that has long protected the authority of this individual creative genius. AI both erases and exposes everyone’s receipts, revealing whose contributions to art-making are, and are not, being valued.
While much attention is given to the exploitation of artwork in AI training, less focus is placed on other forms of labor. For instance, OpenAI employed Kenyan workers to label harmful content such as pornography, violence, and hate speech in order to train its content moderation AI systems. These laborers were paid less than minimum wage for their challenging and often traumatizing work, which was crucial to making the AI models commercially viable. The role of such labor practices in shaping AI algorithms has been largely overlooked, while the exploitation of artwork and other expressive data in training sets dominates the AI discourse. The disparity suggests that the labor of these workers is less valuable—less inherently “human”—than the labor of art-making, even as their critical role in the machine learning process demonstrates otherwise.
The critique of AI on the grounds of artistic theft reveals the embedded labor hierarchy of mental over manual—and expressive over formulaic—labor. This is neatly encapsulated in a popular tweet-cum-meme: I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes. (There is also an interesting parallel here to the status of craft in the art world, which has historically been marginalized because of its association with women’s work and domestic labor.) Some forms of labor are valorized more than others, and the metrics of this valorization are often unstated and rooted in murky, discriminatory logic.
To make machines (and masters) seem intelligent and original, it is crucial to hide the labor and workers that enable their operation. This invisibility and lack of credit is the very exploitation people protest when they accuse AI developers of stealing artwork to train their generative models. The data that fuels AI’s collective intelligence is stripped of authorship and monopolized by the organizations that own the algorithms. Similarly, artists are often seen as the sole creators and owners of their work, despite the many forms of labor involved in bringing their ideas to life and giving those ideas meaning.
A problem of scale
Untitled from Cursed, 2024.
To the extent that AI diminishes creativity, it is that, in the eyes of the algorithm, the output of a conventional artist (a photographer, say) and the output of anyone else (a meme shitposter, say) have the same value; they differ only in register. AI is accelerating an ongoing institutional collapse of authorship and taste. The high-culture museum has been exploded into an open-air county fair, and the elites—the masters—are scrambling to retain their special status.
But who are the masters of this newly consolidated county fair?
A common thread in the critiques of AI is the fear that the machines are siphoning our creative energies to fuel their own activity. AI acts as an insatiable autonomous engine, indiscriminately consuming intellectual property and natural resources while offering nothing in return, or something we neither need nor want. We are increasingly living inside the corporate imagination of algorithms designed to maximize the profits of the Big Tech companies that engineer them. Our thoughts, desires, and identities are mediated by the mechanisms of the market and anodyne commercial morality, which attempt to enclose common sense in order to control and exploit it.
While market competition and labor hierarchies can spur innovation and reward excellence, they become unjust when the rankings and rewards, which should be provisional and contingent, become rigid and fixed—or mechanized and automated. AI systems embedded with the incentives of platform capitalism threaten to solidify and amplify existing inequalities, pushing capitalism toward an even more despotic and dangerous phase.
As technology analyst Benedict Evans observes, “a difference in scale can be a difference in principle.” Those with greater power and capital bear more responsibility for their impact. Likewise, responsibility for the social and environmental consequences of AI falls on those who develop and maintain it, as well as the governments that regulate it. Instead of fixating on the individual fragments captured by AI, we should harness our collective power to advocate for greater public oversight and involvement in AI research and development. At the same time, we must hold Big Tech and the institutions that enable their disproportionate influence accountable. This does not absolve individuals of responsibility, but we must avoid expecting more moral purity from them than from corporations and governments, simply because individuals are easier targets for criticism.
In critiquing AI, often fairly, we back our way into a critique of our own current value systems. In confronting the potential hegemony of AI systems and the companies that can unfairly leverage them, we confront our own internal hegemonic impulses to lay claim to value that should by rights be distributed in society. Under capitalism, which strives to privatize and commodify everything from shoes to attention, artists need to appeal to a notion of individual creative supremacy in order to survive, but such supremacy should not be turned into a universal ethical or moral right.
To truly understand, challenge, and reinvent technologies, we must engage with the social relationships that shaped them. Everyone contributes labor to society, and it is the collective energy of this labor that constitutes the powerful algorithms and the meaning these algorithms have within the society that created them. By that logic, it is not only the artists whose work has trained, or “inspired,” the AI models who should be protected and remunerated, but every person alive. And to take this logic to its conclusion, not only should all living persons be compensated for their existential labor, but so should everything on earth: nonhuman life and the environment that nourishes and enables our flourishing should be seen as equal in value and worthy of systemic protections. Basic rights and safety should not be predicated on one’s position in any hierarchy.
AI can be used for our benefit, not just our exploitation
M0M (Moth), 2023.
Let us not forget how truly weird AI is. As much as it seems to be ushering us on to a conveyor belt of deadening sameness—a Marvel universe-ification of the mind—it also surpasses our individual imaginative capacity; it offers us something literally superhuman. It presents us with radical opportunities to self-reflect, confront uncertainty, and change.
Which brings me back to the Marvel-ing of my mother. As I photographed her over and over, and subsequently fed her images into AI, the social and aesthetic expectations at play in the pictures started to loosen their grip. They became exposed and then weathered, familiar yet grotesque, manageable and movable. We saw, for example, how a simple tube of lipstick could radically transform Mom’s image while remaining a disposable object. It was simultaneously an authoritative symbol—containing an entire history of femininity—and a common plaything, as powerful and deterministic as it was interpretive and arbitrary. We were as subject to its power as it was to ours.
We must approach AI with the same mix of curiosity and rigor, humility and assertiveness, recognizing that it too is not a monolith but a tool continually shaped by our collective activity.
Just as my work with Mom revealed the configurability and contingency of social conventions, AI challenges us to confront the shifting dynamics of technology and the equally fluid systems of value it reflects. Our task is to navigate this unfolding technological and cultural landscape with political integrity, focusing the emotional energy that AI has activated in us on the society that employs it.
It feels fitting to close with borrowed words from a collective voice, the 2017 manifesto of Logic(s) magazine:
“We want to ask the right questions. How do the tools work? Who finances and builds them, and how are they used? Whom do they enrich, and whom do they impoverish? What futures do they make feasible, and which ones do they foreclose?”
“We’re not looking for answers. We’re looking for logic.”