First off, I'd like to make it clear that my goal with this blog is, in general, to promote my own work and talk about things within the world of art and entertainment that interest me, so I intend to steer away from these types of critical opinion pieces as much as I can. This article will be a (hopefully) very rare exception, but one that I felt I needed to make, as AI has not only become a controversial subject within the art community, but also one fraught with misconceptions that need to be put in the spotlight. Also note that I will generally be focusing this article on Generative AI and text programs like ChatGPT, since these are the only variants of this broad, and often vaguely defined, new technology that are actually affecting the art community.
Let's start with how the name itself, Artificial Intelligence, is a blatant misnomer. It's a marketing ploy. It sounds cool, like something out of a Sci-Fi movie, and it makes people interested in the product. It sounds futuristic. It promises progress. We may not have the flying cars that cartoons promised us when we were kids, but look! Here's artificial intelligence, just like they had in Star Trek and numerous other Sci-Fi movies, TV series and books. But when you look behind the curtain, you don't find some brilliant supercomputer that knows the secrets of the universe. There's no KITT, Data, or even HAL 9000 waiting to greet us. Granted, there is no little man hiding back there either. There is in fact a machine (in the broad sense of the word), but we must be honest with ourselves, look beyond the misleading and hyperbolic marketing, and accept the fact that there is nothing remotely intelligent going on here.
So what's up with my strange title for this article? It is a reference to the Infinite Monkey Theorem, which, in short, states that "a monkey hitting keys at random on a typewriter for an infinite amount of time will almost surely type any given text, including the complete works of William Shakespeare." The core idea is basically that if you type random letters for long enough you will eventually, by pure chance, produce complete words. Continue to do this indefinitely, and eventually you should get a complete text seemingly made with creative intent.
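To put some rough numbers on the theorem, here is a small back-of-the-envelope sketch of my own in Python, under the simplifying assumptions of a 27-key typewriter (26 letters plus a space bar) and purely random, independent keystrokes. Even a single short phrase takes an absurd amount of time to appear by chance, which is exactly why the theorem has to invoke infinity.

```python
# A rough back-of-the-envelope illustration of the Infinite Monkey Theorem.
# Assumes a simplified 27-key typewriter (26 letters plus a space bar) and
# purely random, independent keystrokes.

keys = 27                      # letters a-z plus space
target = "to be or not to be"  # a short Shakespearean phrase, 18 characters

# Probability that a given run of len(target) keystrokes matches exactly
p = (1 / keys) ** len(target)

# Expected number of keystrokes before the phrase shows up (roughly 1/p)
expected_keystrokes = 1 / p

print(f"Probability per attempt: {p:.3e}")
print(f"Expected keystrokes needed: {expected_keystrokes:.3e}")
# Even at a million keystrokes per second, this would take far longer than
# the current age of the universe -- hence the "infinite" in the theorem.
```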
You might recall having seen this on an early episode of The Simpsons, where Mr. Burns shows Homer that he has a room with a thousand monkeys chained to a thousand typewriters while proudly stating that "soon, they'll have written the greatest novel known to man." He then inspects what one of the monkeys has written and reads: "It was the best of times, it was the blurst of times." Needless to say, Burns was not happy with the outcome of his scheme.
Before ChatGPT the best you could do was a room full of monkeys. Copyright © 20th Century Fox Television
So what does this have to do with AI? Well, when you peek behind the curtain and look at text-writing software like ChatGPT or "art" software like Midjourney, what you will find is that so-called artificial intelligence is really just the algorithmic equivalent of a thousand monkeys on a thousand typewriters, typing for a thousand years until one of them accidentally, by pure mathematical chance, types out the entirety of Hamlet, or at least something quite similar. The only real difference here is that a computer, by virtue of being a machine, can do this thousand-year process in less than a second.
And since it is designed to store data, such as a complete lexicon and thesaurus, and receive feedback from its human users, it was from the get-go given plenty of proper words to assemble in a random order, thereby sparing its programmers from having to sift through complete nonsense like "blurst." Apply this same basic logic to pixels, where instead of a lexicon the software was given access to a vast library of images (sometimes gathered through social media platforms), and after many years of failure and feedback it could create facsimiles of paintings, photos, etc.
And this is by no means some crazy theory of mine; this is exactly what has been going on for the past ten years. It's just that now that these programs have reached a point where they work to a mostly satisfactory degree, even compared to just a few years ago, the people trying to sell them to you tend to skip this part of the story and present the software as something bordering on a magic genie. But if you've been paying attention to the development of this technology you might recall just how little AI imagery resembled anything at all less than ten years ago (as of writing this article in 2024).
AI imagery ten years ago was a blurry, warped, nightmarish mess that was more mocked than admired by the small percentage of the general public that was paying attention to it. A prompter might type something as simple as "man standing in a room" and the virtual monkeys would produce some vaguely human-shaped blob standing in front of a mess of straight-ish lines that was the closest it could get to depicting a room. But eventually these blobs began to take on more human characteristics, and the rooms became genuine rooms.
So how did this change come about? What was this software even doing in the first place? Well, as already suggested, it was simply fed a plethora of images and told what those images depicted. Then it was fed even more images and told what those depicted. And this went on and on, the thousand monkeys working at a thousand typewriters every second, storing their human keepers' feedback on what worked and what didn't, until the programs had eventually gathered enough data to begin to recognise patterns in the photos. They began to find commonalities that corresponded to the text being given by their human operators. So when the computer detected the word "mouth" there was always a thin slit within the oval shape with the two smaller shapes and the long thing in the middle. When the humans typed the word "nose" it was always underneath the two round things and above the mouth. When they typed "eyes" there were usually two of them and they were above the nose and the mouth. And so on, and so on, until the software eventually "figured out" what a face was. Though, even just a few years ago these programs could still not recognise what a face seen from the side was supposed to look like, because now there was suddenly only one eye, the nose was in profile, the mouth was a different shape, etc., so they had to accumulate this data as well. They didn't recognise it as a face; they simply had to accumulate a new data set.
Now this is all admittedly a gross simplification of the complex data gathering process that went on, but what I am trying to demonstrate here is that the software never actually "learned" anything. It can't think in terms of "this is a face," or "this is a hand." Especially not the latter, as AI still tends to add too many or too few fingers to hands. But when the humans type the word "face" it has merely gathered enough images to recognise similarities in the pixels and their arrangements so that it can copy/paste this into new contexts. There is nothing intelligent going on here, just a lot, and I mean a LOT of data gathering.
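If you want a feel for just how mindless this kind of pattern-matching is, here is a deliberately tiny, hypothetical sketch in Python. It is my own toy caricature, not how Midjourney or any real image generator actually works: it merely averages the pixel values humans have labelled "face" or "room" and tags new images by whichever stored average they sit closest to. There is no understanding here, only stored numbers and comparison.

```python
# A toy illustration of "finding commonalities" in labelled data.
# NOT how real image generators work -- just a minimal caricature of
# correlating pixel patterns with the words humans attach to them.

# Each "image" is a flat list of pixel brightness values (0-255).
labelled_examples = {
    "face": [[200, 180, 190, 60], [210, 170, 185, 55], [195, 175, 200, 65]],
    "room": [[90, 95, 100, 240], [85, 100, 105, 235], [95, 90, 110, 245]],
}

def average_pattern(images):
    """Store the average pixel values seen for one label -- pure bookkeeping."""
    count = len(images)
    return [sum(pixel_values) / count for pixel_values in zip(*images)]

# "Training" is nothing more than averaging what the humans labelled.
stored_patterns = {label: average_pattern(images)
                   for label, images in labelled_examples.items()}

def guess_label(image):
    """Pick whichever stored average the new image is numerically closest to."""
    def distance(label):
        pattern = stored_patterns[label]
        return sum((a - b) ** 2 for a, b in zip(image, pattern))
    return min(stored_patterns, key=distance)

print(guess_label([205, 178, 188, 58]))  # prints "face"
print(guess_label([88, 97, 102, 238]))   # prints "room"
```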
You know how when you text on your phone it will always suggest the next word? This can often be quite useful in speeding up the process, especially when you're on the go, but it can also be quite frustrating as it often makes really silly suggestions. Why is that? Well, it certainly doesn't have any clue as to what you are actually thinking; it's hardly a mind reader. But after having read through people's texts, including your own, the simple program installed on your phone will start to correlate the commonalities in your texting habits. Again, not through any actual thinking process, but simply by logging the order in which you tend to type words and then trying to predict the most likely outcome whenever you type a word or letter it recognises.
AI is no more "intelligent" than the predictive text software on your phone.
For example: if you type "I" it remembers that previously when you (and other people) typed "I" it was followed up by words like "am" or "will" or "did" or "didn't", etc. And if you then choose "am" it decides that the next word, again based merely on frequency, is likely an action like "going." If you pick "going" the next word is usually a locale, like "home." And suddenly you can type a complete sentence like "I am going home" without initially having to actually type more than the letter "I". Of course, the moment you decide to write something out of the ordinary, that is, something completely different from what you've typed before, not to mention something much more complex, the program will get stuck and turn into more of an annoyance than an aid.
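For the technically curious, here is roughly what that frequency logging looks like in code. This is a bare-bones sketch of my own in Python, not any phone's actual software: it counts which word follows which in a handful of example texts and always suggests the most frequent follower.

```python
# A bare-bones next-word suggester based purely on frequency counting --
# a toy caricature of predictive text, not any phone's actual software.
from collections import Counter, defaultdict

# Pretend this is the history of texts the phone has "read".
history = [
    "I am going home",
    "I am going out",
    "I will call you later",
    "I am so tired",
]

# Log which word follows which, and how often.
followers = defaultdict(Counter)
for message in history:
    words = message.split()
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word][next_word] += 1

def suggest(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in followers:
        return None  # something "out of the ordinary" -- the program is stuck
    return followers[word].most_common(1)[0][0]

print(suggest("I"))      # "am" (seen three times vs "will" once)
print(suggest("am"))     # "going" (twice) beats "so" (once)
print(suggest("going"))  # "home" -- ties are broken by whichever was logged first
```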
Whenever you see some AI "art" or some complex ChatGPT text online, you are effectively looking at the same technology as the predictive text software on your phone, just far, far more complex. But at its core it is the exact same thing: give a computer enough data, be it text or pixels, and it can now create something very detailed and seemingly elaborate. That is, until you start scratching the surface.
AI is by its very nature incapable of genuine Creativity; it can only throw words or pixels together until it hits on something that resembles the real deal, and the more it learns to eliminate the undesirable outcomes, the more efficient this essentially mindless process becomes. But, and this is a big but, it can never truly make something new. It can only imitate what has already been made.
So, if AI isn't actually intelligent but simply a very cleverly programmed copy/paste software, why are so many angry about it? Why all the heated debates?
Well, I can mostly just speak for myself and convey my own concerns, so here are my problems with this technology, or rather with how it's being used, as well as my worries about the long-term consequences.
MISANTHROPY PEDDLED AS PROGRESS
Art may be a business, but regardless of any commercial aspect it may take on, it is still a form of human expression to some degree. We all know the cliches, and we have unfortunately begun taking them for granted, especially in today's social media world where we are exposed to more things than our brains can properly process. But unless you are extremely cynical about all of this, we all know, quite intuitively, that art is meant as a form of expression as much as it can be simple entertainment. We all know that a good book, or a good painting, is more than just typed words on a piece of paper or globs of colourful goop on a canvas, or pixels on a screen for that matter. We all know that if it's good it is not only produced with skill, but with passion, with intent, and that it transcends the simplicity of the process, the material it was made on, or the tools with which it was made.
But then why can't AI simply be used to help speed up the process? Well, simply put, because there is no process. There is no tool, there is no thought, and there is certainly no skill. It's just data. The only human aspect involved is the initial idea, usually in the form of a handful of typed words or phrases. And anyone who has taken a creative writing class will have heard some variation of the phrase: "Ideas are a dime a dozen. The real value lies in what you make of them."
Claiming ownership of an AI image or text is a bit like commissioning an artist or writer, waiting for them to finish the work for you, then slapping your name on it when you get the finished product. Of course, this has happened many times; ghostwriters are very much a thing, and several famous historical painters have claimed the work of their students as their own. But at least in those cases, once found out, we can simply credit the real artists. With AI we have no such luxury. That is, unless we consider where the software got its data in the first place, but more on this later.
One of my biggest issues with the way AI is being used to "create" art is that it is completely unearned. We humans intuitively appreciate hard work; we admire people with skills, be they artists or athletes, because we know it took a long time for them to get to that level. Learning, growing, improving ourselves, these are some of our greatest virtues. Just as we admire someone for getting fit by rigorously working out, we admire the artist for honing their skills over time. With AI, however, this all goes away. All you need is an idea—by far the easiest part of the process—and with a bit of basic typing you can basically get to the finish line in seconds. To me this is effectively the art-world equivalent of using a cheat code in a video game, and it is antithetical to the creative process.
If athletes started using mechanical legs to "improve" their performance, would you still care?
In recent months I've watched AI "art" flood social media, be it Facebook or art pages like DeviantArt. The sheer frequency is obviously the result of how quick and easy it is to do. Some may be indifferent to this, but to me it is the equivalent of a scenario where gamers would have to endure streaming services like Twitch being flooded by anti-gamers who turn on every imaginable cheat code and employ an algorithm that plays the game for them. Imagine if a "gamer" were to stream themselves simply turning on a game, activating a "play it for me" algorithm, then putting down the controller, leaning back in their chair with a smug expression, and streaming themselves not playing the game. Imagine then if 50% or more of every gaming stream you scrolled through featured people like this. No challenge, no skill, plainly and simply nothing. This is what it often feels like being an artist or an art appreciator scrolling through social media and art sites nowadays.
But AI isn't just unearned; it is simply a way for lazy people to feel accomplished while doing hardly anything, and at the expense of the art community at that. I have already seen several artists online lament that they can no longer make a living off their art because people are producing similar imagery through AI programs (more on this later). Which brings me to my next point: commodification.
Social media set the stage for this.
Even before AI "art" became a thing I had noticed a disturbing trend online. Art was becoming commodified; it was becoming more and more generic and soulless. I couldn't quite put my finger on what precisely was wrong until I read a post by an artist who announced she would no longer post her work on social media. Her reasoning was quite interesting. She explained that the more she listened to the endless stream of feedback from anonymous strangers, the more her art had started to change, until it had begun to morph into something that she eventually realised was no longer hers. She had let an anonymous collective dictate her style and output through likes, through comments, through which images got large amounts of views and which ones didn't, and in a strange way her art stopped being hers. Instead, it was made by the collective, and the result was that it started to become generic. It started to lose its soul, because she was no longer really in charge of it.
Social media has become as much a curse as a potential blessing to many modern artists.
Now let's remove the artist entirely, and let the algorithm take over. Let's replace the intuition of this individual with an algorithm that can scan the internet and combine elements from thousands upon thousands of images within less than a second, picking the "best" elements not from any real understanding of them, but simply by evaluating their popularity based on how many "views" or "likes" they have, and other arbitrary metrics, and then compiling it all into a facsimile that loosely matches a few typed words. What you get is so-called AI "art."
And though what you get seems elaborate and skilled at first glance, once you look closely, once you start paying attention to the details, and especially if you're already someone who possesses a scrutinising eye for art, you will begin to get an eerie feeling while viewing these algorithmic imitations. I know I do, and I am far from alone in this. These images simply feel lifeless and generic in a way that I have never witnessed before. The overall sensation is quite uncanny, especially once you start to develop an intuitive response to it. If you're familiar with the concept of "the uncanny valley" you know what I mean.
This all reminds me of a rather sad comment I read online recently:
"AI accidentally made me believe in the concept of a human soul by showing me what art looks like without it."
Now, regardless of whether you believe in a literal soul or not, I think it is quite tragic that there are those who are willing to sell humanity so short the way I've heard some people do recently by claiming, whether directly or indirectly, that creative thinking can be broken down into a mere algorithm.
This is why I have decided to be so bold as to call AI a form of misanthropy, because it is by its very nature anti-human, anti-individualistic, and anti-skill. It is a cheat code to life peddled as progress.
A PENCIL IS A TOOL, AI IS A REPLACEMENT
Some will obviously take umbrage at the accusation that AI is merely a way to cheat yourself to success, and there is one counter argument that is consistently, and I must say often reflexively, brought up in response: "It's only a tool."
Now, I do understand where this sentiment is coming from. All digital software that has been used to produce art or some form of image has at one point received criticism and been accused of being "fake" to some degree, and the counter argument has consistently been some variation of "it's only another tool." But does this really apply to AI? It's another piece of digital software people can use, yes, but can you actually consider it a tool like Clip Studio Paint or Photoshop (before it began including generative AI, that is)?
Isn't the whole point of AI software like ChatGPT and Midjourney that it will do nearly all the work for you? Isn't that their primary selling point? How then can it be classified as a tool? Even with a digital painting where the canvas, the brushes and the paints are all simulated, you still need a human hand to operate the mouse in order to put the brushstrokes on the blank page. Even a 3D modeler needs to move a cursor around to mould the desired shapes in the virtual world. There is intuition and skill involved. There is as much trial and error as there would be with a physical painting. With AI you have none of this.
"It doesn't matter what tools you're using" is a poor argument when you've eliminated the creative process altogether. |
Having an AI make you a painting based on an idea is like a kid convincing his dad to do his homework for him, then handing it in to the teacher the next day and claiming it as his own. The kid didn't really do anything, and neither do prompters who rely on AI. Just as the kid didn't solve his math problems, they didn't make any decisions on composition or the choice of colour; the software did it all for them. And where did the software get it all from in the first place? It got it from accumulated data gathered from millions of images online, images made by actual people through hard work.
To be completely clear, I personally don't care if people make their art using a pencil, a paintbrush, or a computer mouse, but when someone sets out to create I expect them to Create. To let the computer do 99% of the work is to neglect one's moral responsibility to produce something for oneself. And if they have no interest in being an artist, if they feel no need to spend the time to learn the required skills, then that's fine, but then they should at least have the decency to employ the people who do.
AI INCENTIVISES GRIFTING AND DISCOURAGES GROWTH
In an age of spam bots and seemingly endless online scams it is unfortunate that AI should have arrived on the scene when it did as it has only multiplied these issues tenfold. However, since my focus here is related to art and creativity, I will not stray into this territory. Suffice it to say, there is more than hollow art to be concerned about.
One of the most frustrating aspects of the extreme corner-cutting that AI provides people who aren't willing to put in the work is that so many have quickly adopted it into dishonest business practices. It is not only common for people to create false artist names and sell their "art" online, presenting themselves as a sort of art dealer for artists who don't even exist, but what is especially heinous is when people like this specifically tell AI programs to imitate the style of current artists. I briefly touched upon this earlier, but feel it is important to elaborate on the point.
Within just a year or two of AI image-generating programs becoming advanced enough to produce serviceable imagery, I started seeing posts online by well-established artists lamenting the fact that they could no longer support themselves financially on their art. The reason was, somewhat ironically, partially due to their own popularity online. You see, these artists had been in high demand for several years, but when AI came along many people started simply telling the software to make images in the style of these artists, thereby eliminating both the time and cost of a commission, or even the far smaller cost of buying a simple print from them. There have even been attempts by people to make a profit off these "in the style of..." images.
As we've already established, AI cannot Create anything on its own; it can only accumulate data and reproduce facsimiles. So when a prompter asks for an image in a specific style, what effectively happens is that the algorithm searches the internet for the specified artist, "downloads" all of their work, or at least the pieces that fit the specific criteria of the prompt, and copies as much of the gathered information as it needs in order to produce something that to the untrained eye can pass for their work. I don't think I need to tell you why this is a huge problem.
Many will exploit any situation that leads to easy money, and it doesn't get much easier than with AI.
And if you think the AI is merely taking "inspiration" from it, as many have claimed, remember: it can only accumulate data and reproduce it according to established patterns. It can't think in such human terms. Don't believe me? It is not uncommon for AI to reproduce an artist's signature, for the simple reason that it was part of the accumulated images. In some cases the signatures are even legible; in others, the letters are scrambled or warped, but the style of the hand is still recognisable. Now why would the software do this? Simple. It cannot think, it cannot reason, it can only copy and redistribute individual elements according to its programming.
In short, AI "art" is not only unearned, lazy, and lacking in true Creativity, it can be straight-up theft. Unfortunately, there are currently no copyright laws that can take such a complex and indirect form of copying into account, but hopefully this will one day change. On a somewhat more positive note, there are at the very least many people invested in AI imagery who follow an "honour code" of never including living artists in their prompts. I think this is very commendable, but it still ignores too many of the larger problems with AI software, and it is unfortunately not a widespread principle yet.
I also want to make it clear that many of the people who use or promote AI insist that they aren't against artists or human creativity, and I am more than willing to give them the benefit of the doubt. I think they are quite sincere, and I'm not trying to present them as villains. But I do think there is a lot of naïveté at work here, and an unwillingness to look at the bigger picture.
In my experience AI users seem to fit into two categories: blatant grifters, of whom there's not much more to say, and those who are simply fascinated by a new technology and don't think much about the potential long-term consequences.
One of my biggest fears relating to AI is the potential repercussions it will have on the current generation of children, who will grow up with the knowledge that hard work is no longer required for a saleable or functional product, be it a "painting," a facsimile of a novel, and so on. Will anyone even bother to learn new skills when you can simply type a few words (presumably with spell-check on) and get a detailed painting within mere seconds, or a full novel, or even a full-on movie, as some companies are currently trying to make a reality? Will they have the incentive to try, fail, try again, and work hard to acquire a skill when the end result (even if it is an algorithmic knockoff) is so close at hand?
Hopefully not a vision of things to come. WALL·E (2008). Copyright © Pixar Animation Studios.
I hope the answer is yes. I hope that my athlete metaphor from before holds up, and that people will continue to prioritise skills over easy gains and look down upon any process that is effectively a form of cheating. But still, you can't blame me, and many others, for being concerned about this. People have an unfortunate tendency to choose the path of least resistance, and AI is the greatest cheat code for life we have yet created.
THE CHECKMATE MACHINE
I am continually fascinated, and frankly a bit baffled, by our collective tendency to always try and make machines that can replace us.
This whole current debacle about AI reminds me a bit of the Deep Blue chess-playing computer from the 1990s. Considered a milestone in artificial intelligence, it was the first machine to win a match against a reigning world champion, and since then we have simply taken it for granted that a computer will always be able to outperform virtually any human at chess. And yet, once the novelty of this machine had died down, so did the use of the technology itself... more or less. Sure, chess-playing software is still a thing, though it is largely used to train humans, often by effectively "dumbing it down" so that it is actually beatable, and even this is only so that one can eventually take on a real human opponent. There are no longer any headline matches pitting computers against human champions. No one considers chess to be a dead sport; quite the contrary, it's as popular as ever.
In the end, Deep Blue has mostly become a historical curiosity. It may be considered an important milestone in computing, but it is a small chapter, if not a straight-up footnote, in the history of chess. And in the grand scheme of things it is still not considered all that important. Perhaps our view of humanity was simply different back in the 1990s, but people back then seemed to have intuitively understood that there was nothing of societal value to be gained from Deep Blue. And this makes complete sense. Why would anyone need a machine that can always beat you at a game? Where's the fun in that?
As is often the case, we are more concerned with "Can I?" than with the much more important "Should I?"
And that's why I've even brought up Deep Blue in the first place, because I hope this is what AI "art" will become in the future. A historical novelty. A curious footnote. Or at least something that is repurposed into something more useful. I hope we will all one day come to our senses, as so many already have, and acknowledge that there is no real value in images or text made by data-collecting algorithms. That this easy-way-out takes the "fun" out of life, just as the checkmate computer did for chess three decades ago.
IN CONCLUSION
And finally, since I have been focusing on AI as seen from the viewpoint of the art world, I want to clarify that yes, there certainly are fields in which AI can be a genuine boon to humanity. Namely, the sciences.
AI is, after all, undeniably great at one thing: data collecting. That is what it was made for. Even its ability to analyse imagery can be quite useful when processing large amounts of data. I know from experience how pleased archivists are with AI, as it now allows them to scour through an inhuman amount of stored data, making it easier for researchers to find documents that previously would have been buried so deep (both literally and figuratively) that it would have been a time-consuming and labour-intensive project just to dig them out. AI can likewise be used to transcribe old documents, to identify damage on old analogue film (though people still have to evaluate what to remove or not), to comb through telescope imagery to identify new planets or potentially dangerous asteroids, to help with gene mapping, and so forth. But it is worth mentioning that in these situations AI is doing what machines are actually supposed to do: remove tedium so that humans can step in to do the creative and intellectual work that gives us all purpose in life.
There's a quote that's been going around lately that I feel cuts right to the heart of the matter quite succinctly (and humorously).
"I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes." - Joanna Maciejewska
Granted, letting machines remove all "hard" work is in and of itself a potential issue as well, and something which also warrants further discussion in the coming years, but the important thing here is that if there is one thing that pretty much everyone seemingly agreed upon until recently, it is that our creativity and ingenuity are a fundamental part of what makes us human, and that removing them from our lives can only lead to problems.
PS. I also want to make it clear that my monkey analogy is just that, an analogy. It is meant to illustrate AI's lack of a proper intellectual understanding of the essence of art and information in a somewhat relatable way. Actual monkeys, unlike machines, by virtue of being sentient animals, do have some kind of creative intuition and genuine understanding, though obviously not in a way comparable to us homo sapiens. A fish might have been a better analogy, but they can't type, and as far as I know there is no such thing as 'the infinite fish theorem.'