Infinity Is Trash (And That's OK)
Note: this was originally posted on Cohost in July 2023, and has been migrated over.
A Short Hike is a perfect videogame. I will tell anyone who will listen about this. That doesn't mean it's better than your favourite videogame, or that all videogames should be like it, but it *is* perfect1. I've often said that something I want more than anything is a DLC for another season on the island - like autumn or winter. I would pay *anything* to play more in this world and see and do more things. The desire for more of a thing we like is very natural - it's why we get excited about sequels, remasters, mods, fanfiction and fanart. But like a lot of human desires and drives, it's one that's dangerous to overindulge. This is a short blog post about generative AI, the myth of infinite content, and the joy - and skill - of making art out of trash.
Comfort and Credibility
A lot of research has been done into what it takes to make a new technology stick. For businesses, for example, Tidd and Trewhella say that it's a combination of comfort (how easy it is to integrate the new technology) and credibility (how much tangible benefit it appears to provide). Society isn't a business, but I think we can extend this thinking pretty easily - we'll call credibility something else instead, maybe 'value' for want of a generic word. For example, virtual reality had a fair amount of value to people - it's cool, people wanted to play with it - but lacked comfort - it was expensive, it needed a lot of space, it cut you off from the outside world and it was super tiring to use for long periods. For a flipped example, NFTs were pretty comfortable - at some point they became actually pretty easy to use, trade and buy. But they lacked any value whatsoever. Most people didn't understand the point of them, weren't interested in them, and couldn't be excited about their potential use in the future.
Whether you consider VR or NFTs to have failed (please do not @ me, I cannot stress this enough), we can definitely agree that both technologies have had a bumpy ride. Generative AI is now emerging from 'thing you read about in science articles' to 'thing you read about in tech articles' and is trying to become 'thing you read about in product descriptions'. It faces the same challenges: comfort and credibility. In terms of comfort, most generative AI systems are doing okay, for now. Lots of people play with Midjourney, ChatGPT and Copilot every day, their interfaces are largely straightforward (even if you need to know a bit to get the most out of them) and they're cheap to use (for now). This is at least partly because comfort is being bought - companies like OpenAI are pouring money into both making these products and distributing them for cheap. We don't really know how long this can be maintained, and what comfort will feel like then, but that is another discussion. Let's assume for now that's not changing.
Value, or credibility, is another issue entirely. Generative AI systems have a kind of credibility at the moment in that they're just very playful and fun to use, which is an end in and of itself. People enjoyed using AI filter apps because they were unusual and novel, and talking to ChatGPT entertained many people when it first launched. Their long-term value is unclear still, though. ChatGPT is being used by a huge percentage of students to help complete homework, it's being used by media companies to write articles, and it's being plugged into a thousand different existing platforms. But there's also a lot of evidence that it's not really fit for purpose - it's frequently making mistakes, sometimes legally dangerous ones; it's been used in completely inappropriate and legally sketchy ways; and there hasn't really been much of a widespread analysis of whether these systems do more good than harm.
If you've staked money or credibility on the success of generative AI, part of your argument has to be that a bigger, more amazing future is coming. Whatever failures these systems are exhibiting now are just quirks, and once they get ironed out we'll be faced with something amazing - the value, or credibility, of this technology long-term. But what will it be?
Infinity and Spelunky
I've wanted to write this blog post for over a year now, since reading this post about a hypothetical future in which generative AI systems create an infinite amount of Spider-man content for people. This is a very common claim about generative AI systems. A variant of the claim promises not an infinite amount of content, but a scale so huge that it is effectively the same thing - recently John Riccitiello of Unity talked about generative AI and said “Somebody is going to make a Godfather game. They're going to put 100,000 NPCs in an environment in Brooklyn and they're going to be autonomous.” This is not actually infinite, but to the average human player it would feel equivalent to infinity.
Infinity isn't a number - it's a concept. We can't experience an infinite amount of something in the same way we can experience all three Lord of the Rings movies, so if we want to talk about infinity or infinite things as they appear in entertainment or art, we need to think about what role the infinity is playing. In Minecraft, for example, the pseudoinfinite surfaces of the worlds it generates are there to give the player certain feelings - the feeling that there is always more to explore, the feeling of being lost, the feeling of there being something new over the next horizon. The infinite nature of Minecraft's worlds isn't there so that we can consume all of it, or because we might run out of space - it's there because the *knowledge* that it is infinite does things to the way we play.
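If 'pseudoinfinite' sounds abstract, here's a toy sketch of the idea - my own illustration, nothing to do with Minecraft's actual terrain code. The terrain at any coordinate is derived deterministically from a single seed, so the world is unbounded in practice while being fully described by a finite amount of information:

```python
import hashlib

def height_at(seed: int, x: int, z: int) -> int:
    # Toy deterministic terrain: the height at any coordinate is derived
    # from the world seed, so chunks can be generated on demand, forever,
    # without ever needing to be stored in advance.
    digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return digest[0] % 32  # heights in the range 0..31

seed = 8675309
print([height_at(seed, x, 0) for x in range(10)])
print(height_at(seed, 10**9, -10**9))  # a billion blocks away: still well-defined
```

The player never hits an edge, but nothing genuinely new is being created as they walk - everything unfolds from the seed.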
Another example of this is Spelunky, which I would argue is probably the game that brought the idea of procedural generation - using algorithms to automatically create game content - into the modern wave of indie games. There are lots of other games that use procedural generation that were very famous around or slightly before Spelunky launched, but Spelunky showed how to integrate procedural generation to change the underlying play experience of a genre of game. In his book about developing the game, Derek Yu noted that when he was coming up with the idea of Spelunky, he reflected on what he didn't like about platformers:
What didn’t I like about platformers? I didn’t like the repetitiveness of playing the same levels over and over again, and the reliance on memorizing level layouts to succeed. What did I like about roguelikes? I liked the variety that the randomly-generated levels offer and how meaningful death is in them.
Spelunky showed a clear template for how to use procedural content in a game. Procedural content could change a rote learning experience into an improvisational one. That doesn't mean it was better or worse than before - instead it was a different kind of experience. The number of levels in Spelunky is not relevant - what is important is that:
- The player does not know which level they are about to play.
- The player is not able to play so often they can remember the layout of every level.
A game with two levels which randomly picks between them every time would satisfy the first property, but not the second. Players who play a lot of Spelunky will eventually begin to detect patterns and feel the flow of the generator, but generally the second property still holds even after a lot of play. Learning and reading generative content is a topic for another blog post. Infinite content - and Spelunky only has pseudoinfinite content, you could theoretically play every Spelunky level - is just here to have a secondary effect on the player. Infinity is not the point.
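To make those two properties concrete, here's a throwaway sketch - my own toy example, not Spelunky's real generator. A game that just picks between two hand-made layouts satisfies the first property but not the second; a generator that assembles levels from even a small set of room templates satisfies both, because the space of possible layouts is far too large to memorise:

```python
import random

HANDMADE_LEVELS = ["layout_a", "layout_b"]

def pick_handmade_level() -> str:
    # Property 1 holds (you can't predict which level is next), but
    # property 2 fails: after a handful of runs you've memorised both.
    return random.choice(HANDMADE_LEVELS)

ROOM_TEMPLATES = ["corridor", "drop", "ladder_shaft", "treasure_room", "spike_pit", "shop"]

def generate_level(rooms: int = 16) -> list[str]:
    # Both properties hold: even this crude toy has 6**16 (about 2.8 trillion)
    # possible layouts, far too many for any player to learn by heart.
    return [random.choice(ROOM_TEMPLATES) for _ in range(rooms)]

print(pick_handmade_level())
print(generate_level())
```

Real generators - Spelunky's included - are far more carefully constrained than this, but the scale argument is the same.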
Infinite Content for Fun and Profit
Some3 generative AI salespeople are pitching something different right now: the aforementioned idea that you can generate an infinite amount of content that is as good as your favourite TV show. A lot of people are critical or suspicious of this claim, but it can be hard to put into words why we feel this way. There doesn't seem to be a reason on paper why this isn't possible - it just feels weird. But you might also be looking at some of the cherrypicked results from generative AI systems and thinking, well, maybe it is possible? Maybe I don't like this thing, but it's going to happen anyway. Sometimes technological progress isn't what we want it to be, and that's sad, but we can't stop it.
Here's the problem, though: *once you make a type of content infinite, you turn it into trash*.
I'm using the word 'trash' here not as a marker of quality, but instead in the same way we might describe TV as 'trash'. Trash TV isn't actually bad - people love it, they enjoy watching it and get a lot out of it. But it is designed to be consumed in a different way; we don't really care about it as a long-term experience, and it's more about how it fits into other experiences (like relaxing at the end of a long day, or getting some comfort in a difficult time). Some games are trash. Some food is trash. Trash is an important part of the rhythm of our everyday lives.
One of the important things trash can do is add texture to another experience or allow us to focus more on something else. You can talk to a friend over trash TV, or flick it on halfway through when you get home from work, or miss a month's worth of episodes, or keep half an eye on it while you do something else. Spelunky's levels becoming trash means people rarely, if ever, discuss a particular level design in Spelunky. Instead the level design gets out of the way and shifts the player's focus onto other parts of the game: thinking about interactions, possibilities, what might come next. Trash is enabling something specific about Spelunky's design; it could not exist and achieve its goals without having this trashy aspect to its content.
Sometimes this can actually help us enjoy some kinds of passive content more, too. For example, Minecraft's infinite worlds mean that I can watch someone play Minecraft without knowing what will happen next, or having seen the level/world/map before. This is great for Twitch streamers, because it puts part of the game itself into the background, and shifts focus onto our experience of the streamer's personality and playstyle - I'm enjoying their reaction to a new experience, and I don't know what they're going to experience next either so it's novel even if I've seen someone else play it before. I've often heard game designers talk about how good procedural content generation is for the age of Twitch streaming, and this is one of the reasons why.
However, unless you have a plan to use this trash in service of something else - like improvisational roguelike gameplay or dynamic streamer challenges - then turning your content into an infinite stream will just leave you with trash. Netflix series don't work like Spelunky or Minecraft. I'm not watching someone else react to the series, and I'm not looking to have my attention diverted to some other part of the watching experience. For shows like The Witcher2, or movie franchises like the Spider-Verse, I am trying to sit down and properly engage with something that has a beginning, a middle, and an end.
More importantly, converting it into trash might actually harm some of the strengths of these existing experiences. Something I've said for years when talking to journalists about AI systems that generate games is that personalised content in particular completely destroys the way we share our experiences of media and culture. I don't want to customise my Netflix show so it features my face or has an ending I wrote - I want to go on the internet as soon as the new episode of my favourite anime drops to see everyone else posting frantically about it4. Infinite quantities of content rob us of the ebb and flow of culture, the periods of development and growth, discussions and waiting. Hollow Knight: Silksong was announced in 2019, and the continued lack of a concrete release date has become a running joke in the community. But the pain and humour and joy of waiting for it - although we might not want to admit it - are part of the joy of being a fan of it.
The End
I want to be clear - endless generation, and the generation of trash specifically, has a role to play in our culture. My entire career has been dedicated to this idea, in fact. I believe procedural generation and generative AI can be used for myriad amazing things, to tell stories we couldn't tell without them, to create new experiences like Spelunky did. Unfortunately, as is common with technology that is either new or newly rediscovered by some people, the most common tendency is to simply apply it naively to existing things. What if the thing we already had, but we slapped this new technology on it? It's the lowest-effort approach to the idea, and in this case it's never going to materialise5.
There are also a number of other angles on infinite media that I didn't want to work into this already quite long post. For example, the infinite media proposal is often made under the assumption that an infinite number of unique storylines exist. I've never asked a writer their opinion on this, but I don't think that's really true. You can make a 'new' story by changing the colour of someone's hat or making the villain use ice powers instead of fire powers, but endlessly churning over stories is exactly what turns something into trash in the first place. Again, it doesn't seem like a meaningfully sustainable idea5. Something that we rarely think about is that some things are simply impossible because of the fundamental nature of the universe. There may simply be no way to generate infinite, perceptually unique episodes of a single TV show. Just because we can imagine technology doing something does not make it actually possible.
Writing about AI has become increasingly difficult lately. The backlash against generative AI is powerful and emotive and heartfelt, but it has also grown so strong that it often overreaches. I don't blame anyone involved for this - they've been lied to, misled and now feel attacked on top of this. But it makes it hard to pick out the important AI topics to critique in this space. There is so much wrong with what is happening in AI and so much awful stuff being endorsed and supported by people who ought to know better. However, actually talking about this can often lead to these criticisms being overinterpreted as a blanket critique of algorithmic generation. I now genuinely struggle with how to phrase my research when people ask me what I do, because I feel very proud of my work about procedural generation and game design. But 'generative AI' in the large-model machine learning sense now dominates 99% of the discourse, and has fully poisoned it.
Generative AI techniques are like a set of oil paints, and today most people are trying to use them to replicate the effects of watercolours. We don't need to do that. We have watercolours, and the artists who use them make beautiful work. Instead we should be asking: what new things can we paint with these beautiful colours?
Thanks to Florence Smith Nicholls, Chris Allen and Lisa Kasatkina for giving feedback on an earlier draft of this.
Footnotes
1 In The Man Who Loved Only Numbers, Paul Hoffman's biography of mathematician Paul Erdős, Hoffman describes Erdős' analogy of The Book. Erdős didn't believe in god but would refer to a 'book' that the non-existent god had in which was written the most perfect and beautiful proof of every mathematical statement. There were lots of ways to prove any given mathematical truth, but when Erdős saw one he thought was particularly beautiful or elegant he would describe it as being 'from the Book'. I like this analogy a lot, and I think about it in the context of things which aren't provably optimal or perfect in an objective sense, but have a sense of being perfectly-formed. A Short Hike is from the Book of Videogames.
2 Florence Smith Nicholls pointed out to me that Netflix, who commissioned The Witcher, are increasingly interested in what they call 'ambient TV', which is related to this idea of trash. They also pointed out that this highlights why the Writers' Strike has come to a head in the way that it has - people want to make and enjoy non-trash as well as trash content, but commercialised generative AI incentivises the creation of the latter, at the expense of everyone involved.
3 So many people I know now have or work for AI startups that I feel obliged to add little disclaimers here - I don't know every AI company or product out there, I'm sure some are great! If you think you are working for a good company you probably are, I dunno man. I write these posts trying to aim for a bigger picture, and the bigger picture is full of snake oil salesmen on steroids.
4 I am being a total poser here, I don't watch any anime series, but imagine I did and I was cool.
5 I should also add, most people proposing this are complete shysters who don't actually believe in their claims. A lot of people's approach to looking for fame, funding, new jobs or new opportunities is just to continually make stuff up, and sometimes it works. Basically, talk is cheap, and anyone claiming they can make infinite high-quality TV is more than welcome to try. The real issue we've hit with the AI boom is not the nature of the claims being made by people, it's that government policy, industrial production and research directions are now being affected by claims alone, rather than practice.
Posted November 4th, 2024