21 August, 2018
Building on yesterday’s post, let’s look at how Storybricks could have worked. I’ll go into a little bit of the design philosophy as well as some of the initial design we had about an original IP to showcase the Storybricks AI technology.
I figure it’s been a couple years and some people might find this interesting enough. Of course, time dims memories so bear with me.
Modeling character emotions and motivations
The first question is how to model characters on an emotional level. It turned out to be really easy, because psychologists have done a lot of heavy lifting in this area. There is a model called the “Big Five” or OCEAN model, standing for: Openness to experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. These five areas, measured on a continuum, explain a personality very well.
Taking Openness to experience as an example: a high value describes someone who wants to go out and explore, find the hidden things in the world, and dig deeper than others. A low value indicates someone who enjoys the comfort of routine and hates to break out of it. Of course, these traits sit on a continuum, and most people fall somewhere in the middle. For example, I’m hesitant to visit a new restaurant, but once there I love trying out new foods that sound interesting.
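To make that concrete, here’s a minimal sketch of what a personality record might look like, with each trait as a value on a continuum rather than a binary switch. The names and numbers are my own illustration, not Storybricks code:

```python
from dataclasses import dataclass

# Each OCEAN trait is a 0.0-1.0 continuum value, not an on/off flag.
@dataclass
class Personality:
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

# A middle-of-the-road character: hesitant to try a new restaurant,
# but happy to order something adventurous once there.
cautious_foodie = Personality(
    openness=0.55,
    conscientiousness=0.6,
    extraversion=0.4,
    agreeableness=0.7,
    neuroticism=0.5,
)
```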
(And as an aside, you can use this model to look at player motivations for playing games. It’s an interesting angle to consider when thinking about what excites people about games and why they play.)
But you might look at that and think it’s too simplistic. Indeed, that’s why each of the five domains has six facets (PDF warning), for a total of 30 measurements. A domain’s total measurement is made up of its individual facet measurements, so one’s interest in the fantastical is one element of Openness to experience: if you have a vivid imagination you’re more likely to be open to experiences, but if you rarely daydream you’re probably going to be less so.
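A domain score built from facets could look something like this sketch. The facet names loosely follow the standard Big Five facets for Openness, but the values and aggregation (a simple average) are my own assumptions:

```python
# Hypothetical facet measurements for one character's Openness domain.
# "Vivid imagination" and "prefers routine" pull the total in
# opposite directions, just as described above.
openness_facets = {
    "fantasy": 0.9,       # vivid imagination, daydreams often
    "aesthetics": 0.5,
    "feelings": 0.6,
    "actions": 0.3,       # prefers familiar routine
    "ideas": 0.7,
    "values": 0.5,
}

def domain_score(facets):
    """Average the individual facet measurements into one domain value."""
    return sum(facets.values()) / len(facets)

print(domain_score(openness_facets))  # roughly 0.58
```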
Then you have emotions. This is a bit trickier to model, but I was fond of Plutchik’s Wheel of Emotions. This breaks down emotions into four pairs of opposites and helps us understand how emotions influence actions. For example, we derive “joy” from gaining something new and valuable. We want to repeat this experience, so we tend to accumulate things of worth to make us joyful. On the other hand, losing a valued object causes “sadness”, so we try to avoid that. Someone who is “hooked” on the feeling of joy might be said to be greedy.
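One simple way to represent those opposite pairs in code is a signed axis per pair, so gaining a valued object pushes a character toward joy and losing one pushes toward sadness. This is my own simplification of Plutchik’s wheel, not how Storybricks actually stored it:

```python
# Plutchik's four pairs of opposites, each modeled as one signed axis:
# +1.0 is fully the first emotion, -1.0 is fully its opposite.
OPPOSITES = {
    "joy": "sadness",
    "trust": "disgust",
    "fear": "anger",
    "surprise": "anticipation",
}

class EmotionalState:
    def __init__(self):
        self.axes = {primary: 0.0 for primary in OPPOSITES}

    def feel(self, emotion, amount):
        """Nudge the matching axis toward the felt emotion, clamped to [-1, 1]."""
        for primary, opposite in OPPOSITES.items():
            if emotion == primary:
                self.axes[primary] = min(1.0, self.axes[primary] + amount)
            elif emotion == opposite:
                self.axes[primary] = max(-1.0, self.axes[primary] - amount)

state = EmotionalState()
state.feel("joy", 0.6)      # gained a valued object
state.feel("sadness", 0.2)  # ...then lost part of its value
# state.axes["joy"] is now about 0.4: still net joyful
```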
Using both of these we can get a good picture of someone and model their personality, emotional state, and desires.
Putting it into practice
A lot of this relies on AI programming. Although I don’t shy away from technical detail on this blog, I don’t think I want to give a full tutorial on the basics of game AI. Luckily Ben Sizer, aka Kylotan and a former Storybricks programmer, wrote such a primer for me to link to. This is a fairly comprehensive look at the basics of game AI.
The initial version of our work used two main features explained in the primer I linked: Influence Maps and Utility Systems.
Influence Maps are a way to represent the state of the world. Something that can satisfy a particular need “emits” information about itself. So a valued object might put out an “aura” of information to attract characters who want the joy of collecting the object. This becomes even more interesting when we understand that the aura’s size might vary based on what a character knows. A character might know of a very valuable prize in a distant land only from rumors or stories, yet that knowledge still influences their actions; after all, what adventurer worth their salt isn’t drawn by the promise of treasure in a distant land?
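A bare-bones influence map might be sketched like this, assuming a flat 2D grid (the actual Storybricks representation isn’t something I’m describing here). Each emitter’s strength falls off with distance, and the radius parameter stands in for how widely a character’s knowledge of the object spreads its aura:

```python
def influence_map(width, height, emitters):
    """Build a grid of influence values.

    emitters: list of (x, y, strength, radius) sources. Each source
    adds influence that falls off linearly out to its radius.
    """
    grid = [[0.0] * width for _ in range(height)]
    for ex, ey, strength, radius in emitters:
        for y in range(height):
            for x in range(width):
                dist = ((x - ex) ** 2 + (y - ey) ** 2) ** 0.5
                if dist <= radius:
                    grid[y][x] += strength * (1.0 - dist / radius)
    return grid

# A treasure known only from rumor can still cast a wide aura:
rumored_treasure = (8, 8, 1.0, 10.0)
grid = influence_map(10, 10, [rumored_treasure])
assert grid[8][8] == 1.0   # full strength at the source itself
```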
Of course, there may be other emotions in play as well: fear of the repercussions of taking the object if it appears to belong to someone else, for example. By weighing these emotions against their personality, the character can decide what to do to satisfy their needs and make themselves happy. But how do you calculate this?
Utility Systems are how you calculate a complex set of weights like this. A fearful character might ignore an item that gives off a lot of value because they fear being punished for taking it. A character with high conscientiousness might not want to break the law by taking something they think belongs to someone else. But a character who would derive enough joy from the item might take it despite those other emotions and influences. This is where some great stories come from: the character who steals an object that caught their eye, only to have to deal with (or maybe simply run away from) the immediate consequences.
So for every available action, we calculate the emotional result of taking it. Once calculated, the character chooses an action from the options available. If an action doesn’t satisfy enough needs at a low enough cost, the result is often “pursue some other goal that satisfies more needs”.
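The whole weigh-and-choose step can be sketched as a tiny utility system. The scoring function and its weights are illustrative assumptions, not the Storybricks implementation; the point is just that joy pulls toward an action while fear and conscientiousness push away, and the highest-scoring option wins:

```python
def score_take_item(joy_from_item, fear_of_punishment, conscientiousness):
    """Utility of taking an item: joy attracts, fear and conscience repel."""
    return joy_from_item - fear_of_punishment - 0.5 * conscientiousness

def choose_action(actions):
    """actions: dict mapping action name -> utility score. Pick the best."""
    return max(actions, key=actions.get)

# A greedy character: the item's pull outweighs the risk...
actions = {
    "take_item": score_take_item(0.9, 0.2, 0.3),  # about 0.55
    "walk_away": 0.1,
}
assert choose_action(actions) == "take_item"

# ...while a fearful, conscientious character leaves it alone.
actions = {
    "take_item": score_take_item(0.4, 0.6, 0.8),  # about -0.6
    "walk_away": 0.1,
}
assert choose_action(actions) == "walk_away"
```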
But where would it all go?
The interesting thing is when you put this into a setting, and we were working on one before we started working on EverQuest Next. But perhaps that’s best discussed tomorrow!