These past few months have made me feel like my defense of AI is one of my most controversial opinions, which says a lot about the quality of discourse around it, given the rest of the things I believe in. So I decided to put together the arguments against AI usage, from the most sensible to the most annoying, and address each of them.
Not extremely environmentally costly
One of the most common negative beliefs about AI I have ever come across is that it takes much more energy than any other form of media and is therefore uniquely bad for the environment (“boiling whales alive”, as one Twitter user put it). But that is not true: engaging with AI is in line with other kinds of computer usage in terms of energy cost. The comparisons in the following several paragraphs are taken from this independent research post. Please give it a read and feel free to redo the calculations yourself.
First of all, it’s important to differentiate between the amount of energy put into producing an AI model (a model is basically a finished program) and the amount of energy that goes into image/text generation later. Training a brand new model is costly – but it only needs to be done once. Training one AI model takes about as much energy (40,000 kWh to 60,000 kWh) as manufacturing between 2 and 4 cars (17,200 kWh each). But an AI user is not training models.
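To make the comparison concrete, here is the back-of-the-envelope arithmetic behind that claim – a rough sketch using the post’s own figures; real numbers vary by model and methodology:

```python
# Back-of-the-envelope check of the training-vs-cars comparison.
training_energy_kwh = (40_000, 60_000)  # energy to train one model, from the post
car_energy_kwh = 17_200                 # energy to manufacture one car

for e in training_energy_kwh:
    print(f"{e:,} kWh of training = {e / car_energy_kwh:.1f} cars")
# 40,000 kWh of training = 2.3 cars
# 60,000 kWh of training = 3.5 cars
```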
When it comes to working with AI, generating images is one of the most energy-intensive operations (between 3 Wh and 7.5 Wh per image). But the alternative, creating an image by drawing digitally, is not better. Photoshop burns through the same 7.5 Wh after only about two minutes of use. I do not use Photoshop, but most art programs are fairly comparable in terms of energy usage, and most of my drawings take me about two hours to complete, so go figure.
But it is also true that most dedicated AI artists spend much longer refining their art than casual users do. Running Stable Diffusion (one of the most popular AI image generators) for one hour uses the same amount of energy as working in Photoshop for between one and two hours. Assuming a similar duration of usage, AI is a little more costly than common art programs, but not by much.
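The per-image comparison works the same way. The two-minute figure above implies Photoshop draws roughly 225 W in active use; treating that as an assumption, a quick sketch:

```python
# How long Photoshop must run to use as much energy as one generated image.
image_energy_wh = (3.0, 7.5)  # per-image energy range, from the post
photoshop_watts = 225         # assumed active power draw, backed out of
                              # the post's "about two minutes" estimate

for e in image_energy_wh:
    minutes = e / photoshop_watts * 60
    print(f"{e} Wh = {minutes:.1f} minutes of Photoshop")
# 3.0 Wh = 0.8 minutes of Photoshop
# 7.5 Wh = 2.0 minutes of Photoshop
```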
So, no, generating an AI image is not a big energy waste.
Not a cause of harassment
There have been multiple scandals surrounding AI deepfake pornography and similar subjects. In these situations AI has been used by abusers as a tool of victimization.
Many other people have addressed similar arguments made about other media – e.g. kodocon porn and grooming, or social media and online abuse as a whole. Shutting down a whole means of self-expression, used by most in a benign fashion, just because some repurpose it for abuse, is not a solution.
I am against the spread of fake media in which a real, nonconsenting person is depicted sexually. I do not, however, believe in taking away the entire technology that can be used to create such media (be that AI, Photoshop, or anything else).
Any new technology gives abusers new ways to abuse and gives victims new ways to cope. You can trace human history as far back as you want, and you will always find people who were stalked and sexually harassed. The only way to resolve this is to tackle the culture that validates the harassment.
Not stealing from artists
A lot of people have the wrong idea about how AI image generation actually functions. They believe that the images used during the training stage are still stored in some kind of database, and that the model pulls them out of it based on keywords and presents a collage. That is not true. The information in the following paragraphs is taken from here and here.
It is true that training an image generator requires sample images. But these images are not stored inside the model. Instead, they are used to teach the model how to create concepts from scratch. In a very simplified way, it learns that “red” means hex code FF0000 and “ball” means a circular area on the image, so it can produce a picture of a red ball without pulling an existing reference from anywhere.
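To illustrate that point, here is a minimal generation sketch using the open-source diffusers library (the checkpoint name is just one common example). Everything the model “knows” lives in its downloaded weights; there is no image database to query:

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# The downloaded checkpoint contains learned weights only -- none of the
# training images are stored in it; output is synthesized from random noise.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")  # assumes a GPU; "cpu" works too, just slowly

# The prompt conditions the denoising steps; nothing is looked up
# from a collection of existing pictures.
image = pipe("a red ball").images[0]
image.save("red_ball.png")
```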
This is a tough idea to process, because concepts like “learn” and “understand” traditionally imply sentience. If you prefer, you can say AI works via associations between a concept’s most typical appearance (visual elements repeated across lots and lots of sample images) and certain keywords. But I personally don’t see a problem with saying AI learns or understands; I think shutting the door on the prospect of computer sentience is a fear response.
So, no, the result you get while generating an image is not a mashup of some human-made images.
Some do believe that this process is still a form of art theft, because the artists whose art was used for training were not financially compensated. This is a more realistic concern, and it opens a discussion about inspiration and derivative work. I learned to draw cat elbows (a very underrated part of cat anatomy) by watching the DeviantArt artist Vialir, and I do not think I need to pay them for it (or credit them every time I draw a cat). I do, however, agree that a commercial company deliberately using someone’s art as educational material should warrant some payment – but not because it makes the end result inherently unoriginal or more derivative.
However, many people are currently approaching this matter from a completely different angle and promoting the extension of copyright law to art styles. If such laws pass, they will uniquely favor big art studios (e.g. Disney) over small artists and ultimately make things worse. Another concern at the core of this initiative: if someone makes art in my style, they won’t commission me. But framing profits you did not gain as money you lost is an error we’ve already been through while discussing why movie piracy is not theft. As with many other things, we can say that the problem is capitalism, not the technology.
And in any case, it is possible to train an AI model entirely on Creative Commons images or on images from consenting artists.
Not “rotting your brain”
People who like to complain about society’s presumed degeneracy have been using AI as their bogeyman. They often attribute common long Covid effects to it – brain fog, low attention span, increased tiredness – and it works, because many would rather not think about Covid and pretend that a mental disability is a punishment for laziness.
The idea that engaging too much with a new medium causes regression is a classic – they said it about smartphones, regular phones, computers, TVs, newspapers, some genres of books, and books as a whole. If society actually regressed every time this was said, we wouldn’t have any new technologies at all.
But in this particular case, it is especially ridiculous, because the same people who say students are mentally regressing because of AI are also saying that students need to attend schools and universities in person. Covid did not disappear.
As for the actual impact of AI on people’s cognitive functions, it isn’t black and white, and scientific research into this area highlights both benefits and drawbacks. When exploring potential drawbacks, researchers point out parallels with the impacts of previous technological breakthroughs and recommend solutions focused on mitigating the risks rather than quitting AI usage.
Not “soulless”
“Soulless” is a catch-all term that anti-AI people use when they want to talk about how AI art lacks artistic merit.
They want to claim there is no intention and no creative process behind AI art, but they forget the person who writes the prompts, and how complex and multilayered that procedure can actually get. AI images do not appear by themselves; there is still someone who wants the image to exist and who refines its description. That is, by itself, a creative process.
It all circles back to a desire to define art through the amount of effort the artist put in, a belief that “artist” is a status you earn rather than a description of what you do.
As a manual digital artist who struggles a lot with art (and not for lack of practice), I oppose this classification. It is unfair and does not even reflect how people evaluate art in their day-to-day lives, when they’re not actively thinking about AI. I know people who create much more visually appealing images than I ever could, with far more ease and speed. Am I supposed to get angry at them, to claim that I am worth more because I spent more time and pain on something that looks like their 5-minute sketch? Some people do that, but I don’t want to.
People are also awful hypocrites when they pretend they think all human-drawn art is beautiful and valid. The online artist community has its preferred/trendy art styles and will largely overlook everything else. One of the outcomes of the anti-AI crusade is that manual artists have started getting harassed for drawing “like an AI”. There is no set definition of “soul” or “meaning” in art, nor should there be.
But the soul argument seems to be the core argument against AI for many of its opponents, because, once everything else has been addressed, it always comes up. It may point at the actual reason AI art bothers people so much: they do not take well to losing the divinity they constructed around art as a process. There is a general consensus that there are some things a robot is just not supposed to do.
Which brings me back to the question of AI sentience that I mentioned previously. We are not at that point yet – there are no truly sentient AIs. But inevitably there will be. What then? It doesn’t seem like society grows more eager to accept AI the closer it gets to behaving like a human; it seems like the exact opposite is happening. Are we heading towards a new form of oppression against a new form of life, like sci-fi predicted? I feel like it’s the right time to take a stance.