Dagstuhl - Forward Models for Game Design
Posted Dec 18th, 2019
This week I’m at Dagstuhl, a research center in Germany that hosts computer science seminars where people come together to discuss the future of fields like artificial intelligence. I’m meeting with games researchers, designers, mathematicians and more to think about what the future of games and AI will look like. This is the writeup of the second workgroup I was in, about building forward models of game design, led by Georgios Yannakakis. I’ve also included a (very) short summary of what the other groups did at the end of the post!
When we talk about forward models in games research, we usually mean the models AI agents use to predict what will happen next. For example, if I’m looking at a monster in DOOM and I press shoot, a good forward model will tell me that this will kill the monster (and maybe give me some reward - to let me know that killing the monster was good). We use forward models all the time in game-playing, and lots of fancy AI systems these days actually build their own forward models by testing the world and seeing what happens.
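To make that concrete, here’s a minimal sketch of what a forward model’s interface looks like, using a toy stand-in for the DOOM example. Everything here (the state keys, the reward value) is invented purely for illustration - the point is just that the model predicts a next state and reward without touching the real game.

```python
def forward_model(state, action):
    """Predict the next state and reward, without running the real game."""
    next_state = dict(state)  # copy, so the real game state is untouched
    reward = 0
    if action == "shoot" and state["monster_visible"]:
        # The model predicts: shooting a visible monster kills it...
        next_state["monster_alive"] = False
        next_state["monster_visible"] = False
        reward = 10  # ...and that killing the monster was good
    return next_state, reward

state = {"monster_visible": True, "monster_alive": True}
predicted, reward = forward_model(state, "shoot")
```

An agent with a model like this can "imagine" the outcomes of its actions and pick the one with the best predicted reward, rather than trying everything for real.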
Georgios was interested in building forward models for game design rather than playing. So if we have a game design, and then we take some action (adding a rule, tweaking the level design), can we build a model that simulates (or predicts) what will happen to the game design? Will it become harder? Easier? More playful? Less strategic? Building models like this might allow us to build design assistants, model the differences between designers, or build more interesting autonomous game designers like ANGELINA!
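The same interface can be sketched one level up, over *designs* rather than game states: take a design, apply a design action (tweak a rule), and predict a property of the result. In this hypothetical sketch the design is just a dictionary of parameters, and "difficulty" is estimated by running random playouts - the game and the heuristic are both invented for illustration, not anyone’s actual method.

```python
import random

def estimate_difficulty(design, playouts=200):
    """Estimate difficulty as the fraction of random playouts the player loses."""
    losses = 0
    for _ in range(playouts):
        hp = design["player_hp"]
        for _ in range(design["num_monsters"]):
            hp -= random.randint(0, design["monster_damage"])
        if hp <= 0:
            losses += 1
    return losses / playouts

def design_forward_model(design, design_action):
    """Apply a design tweak and predict the difficulty of the resulting game."""
    new_design = {**design, **design_action}
    return new_design, estimate_difficulty(new_design)

base = {"player_hp": 10, "num_monsters": 3, "monster_damage": 4}
harder, difficulty = design_forward_model(base, {"num_monsters": 5})
```

A design assistant built on a model like this could answer questions such as "if I add two more monsters, roughly how much harder does the game get?" before anyone playtests the change.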
So we started off chatting about what this kind of model might look like and where it might come from. What we’d really like is data about a game design and how someone built it, piece by piece. But we generally see games as finished objects; we get occasional glimpses - patch notes, postmortems, special events like game jams - but for the most part we only get to play a game once it’s done.
Left: Alex makes a change to the game as Julian and Georgios watch. Right: Julian making a move (and probably a joke about it).
As we discussed this, I mentioned a boardgame I’ve been working on, in which players design a game using cards. Because this is done step-by-step, a game like this would be a great way to gather data about how games are incrementally designed. I’d even brought all the bits of the prototype with me to Dagstuhl! So after lunch we broke it out, and hacked together a single-player version where one person would design a game (and then watch two people play it). Whereas my original design was heavily adversarial, this single-player version had a weird collaborative element, where the individual rounds of a game were 1v1, but the overall process was a 4-player co-operative design game.
This turned out to be great fun, as it allowed us all to think about our own personal, real, human forward models of game design work. We’d watch two of us play out the game, and then debate over what change should be made - was this game bad because it was too easy for blue, or too hard for red? Should we make the game faster and more explosive, or pull back some of its more unpredictable factors? We spent a long time debating how rules might work, and spent over two hours tweaking and evolving a very simple game in very subtle ways.
Because the game space is so small - a 4x4 grid, with a 4 turn limit on movement - it was often very easy to figure out how the game was broken, or which player it was biased in favour of. But it was also just complicated enough that often it would take us a few minutes. Sometimes we would put forward theories about who would win before the game had even begun; other times a player would pause after their second turn and instantly request a restart (we took this to be a good sign - replaying meant they had rethought their strategy and estimated better odds); and sometimes we would realise something about an earlier version of the game while playing what we thought was a ‘better’ version.
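A design space this small is also small enough to solve exhaustively by machine. This hypothetical sketch brute-forces a toy race game (invented here, not our actual card game) with minimax, to answer the kind of question we kept debating by hand: which player is a given rule set biased towards?

```python
from functools import lru_cache

MOVES = (1, 2)   # a player may advance 1 or 2 squares per move
TRACK = 4        # squares to the goal, echoing the 4x4 grid
TURN_LIMIT = 4   # each player gets at most 4 moves

@lru_cache(maxsize=None)
def winner(blue, red, turns_left, blue_to_move):
    """Return 'blue' or 'red' under perfect play, or 'draw' at the turn limit."""
    if blue >= TRACK:
        return "blue"
    if red >= TRACK:
        return "red"
    if turns_left == 0:
        return "draw"
    me = "blue" if blue_to_move else "red"
    results = []
    for m in MOVES:
        if blue_to_move:
            results.append(winner(blue + m, red, turns_left - 1, False))
        else:
            results.append(winner(blue, red + m, turns_left - 1, True))
    if me in results:
        return me  # the mover can force a win
    if "draw" in results:
        return "draw"
    return results[0]

print(winner(0, 0, 2 * TURN_LIMIT, True))  # who wins if blue moves first?
```

In this toy rule set the first mover always wins - exactly the sort of bias we were sniffing out by playtesting, here detected in milliseconds. Of course, the interesting (and hard) part is doing this for design spaces too big to enumerate.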
We plan to write up our findings about this process, and hopefully look at how to develop a game like this into a tool for collecting data about how people design. Thanks so much to Georgios, Julian, Alex and Mark for being in the group with me.
Other Workgroups
At the end of each day we all give a short report on what we got up to. I took a few notes on each group so that you can get a rough flavour of the topics being discussed here. I’ve also noted down the group leader in each case, so you can contact them if you want more info.
Search in Videogames (Mark Winands)
A discussion group on new strategies for search in games, using heavyweight examples like StarCraft 2. A lot of discussion about how to abstract a game before searching through it, and what that loses (or gains).
The One About Abstract Forward Models (Diego Perez Liebana)
A lot of games are too complicated to create formal models of - there’s too much data, much of it irrelevant (but not always). What could we do instead? This group hacked Pommerman (a GVGAI game) to artificially limit the AI agent’s field of view (essentially adding fog of war to the game), which shrinks the scope of the forward model at the cost of losing some information that might make the agent a better player.
Open-Ended Skill Discovery (Sebastian Risi)
A very high-level and mind-expanding group focused on open-ended problems and identifying new tasks or abilities in a space that is fairly unconstrained. Can we build systems that we leave alone to explore spaces, to try things out and figure out what can be achieved, rather than explicitly setting goals and techniques for the AI to use to achieve them? Very interesting, very complex.
AI Support for LARP (Christoph Salge)
A discussion-based group that looked at opportunities for AI in the LARP (as well as tabletop RPG) space. LARP in particular has strange and extreme problems for AI to solve, like co-ordinating thousands of people’s stories simultaneously and across physical space, as well as exciting new opportunities, like giving people wearable technology or a sword that talks!
Human-AI Co-operation and Competition (Setareh Maghsudi)
This group got up to a lot, although I was pretty tired and it was hard to follow the technical aspects of the presentation! One highlight was the implementation of a co-operation game where two people must write down the same number with no communication other than periodically revealing the most recent number they wrote. They’re looking to build infrastructure to analyse human strategies here.
Zero Learning (Olivier Teytaud)
Another group where the technical elements were a bit hard to understand during the 6pm coffee crash, but this group discussed zero learning (as seen in AlphaZero, but also many other projects) in which “zero” human knowledge is involved, in the sense that learning is primarily done offline. A lot of interesting anecdotes and projects mentioned, and what sounded like a cool advanced discussion group.
Explainable AI (Nathan Sturtevant)
A very exciting-sounding group that looked at different aspects of explainable AI (what do we want to explain, who to, in what level of detail, for what purpose) and even built some prototype visualisation systems, as well as demoing work in the area done by some group members. This is a topic that could probably sustain its own Dagstuhl week; the group looked like they’d enjoyed their discussion a lot.