GDC 2009: Monday

Man, I’m busy and getting my mind blown. I have a lot to write but haven’t mastered the blog short form or found the time to write it yet. But here are some of my notes from Monday, as I catch up. The usual applies – these are at-the-time stream-of-consciousness notes, unedited. Pure live-blogging, with the added disadvantage of not actually being live. I have to write quickly to keep up, and probably missed things – apologies to speakers I just didn’t understand. It’s been a fantastic conference so far, and you’ve all been a part of it. This is nowhere near as good as having actually been here or having the actual audio, I’m afraid. But it’s something to remind us all with.

GDC 2009

Monday.

Animating in a Complex World: Integrating AI and Animation

Alex Champandard and Christian Gyrling

GDC 2009 is here! I’m liveblogging the start of the AI Summit. I’ve very much been looking forward to this one – so much of what I do these days is animation related, and these guys love it too. Alex starts off digging into locomotion systems. He points out locomotion is hard because there are so many dynamic factors – angle, timing, changing targets, different possible paths, different postures. The navigation system needs to deal with all of these, in a complex and dynamic environment. He assumes a provided direction and a speed. Ideally, a full path to follow, but if all you get is a target point that’s fine. Also, a blend tree (animation playback, procedural motion, cross-fades, etc.). So in his version one, you go from idle to a walk loop and just start sliding towards your point. Then, if interrupted, you switch to a run loop and go to the new cover target and a standing cover anim. But we’ve got foot-skating. So next consider animation-driven motion. But this is constrained by the animation lengths themselves, so we have to do special alignment work on the motions. Now this looks perfect when going forward, but some blends are awkward, such as idle to walk. So let’s add transitions. First, assume the mover is modular, and then that it can query the animation system for tags, bone queries, and the current animation. Here we’ll need special annotations or tags in the animation to line up the transition times correctly. It’s great when you can automate this with motion graphs. Now, to get good turns, let’s add parametric motion. This helps provide continuous coverage throughout all possible animations. Still, we can’t arrive at a point smoothly, and we can’t interrupt smoothly. So let’s use parametric transitions. This really means better information – footplant constraints. Using these you can pick transitions that match those tricky conditions. Character locomotion is fundamentally about taking steps, and because we need all these steps, we’ll need tools to help automate as much of this as possible. Mirroring can save a lot of time too, but it’s still life-sucking. It doesn’t scale well.
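
Not from the talk, but to make the transition-tag idea concrete, here’s a minimal sketch of a mover querying the animation system for tagged transition windows – the structure and all the names are my own guesses, not Alex’s code:

```cpp
// Hypothetical sketch: a mover asking the animation system for the next
// tagged point where a transition lines up (names are illustrative).
#include <string>
#include <vector>

struct TransitionTag {
    std::string name;   // e.g. "left_footplant"
    float       time;   // seconds into the clip where the transition aligns
};

struct AnimClip {
    std::string                name;
    float                      duration;
    std::vector<TransitionTag> tags;   // authored or auto-generated annotations
};

// Find the next tagged point in the current clip where we can blend into
// the target clip without breaking a footplant constraint.
const TransitionTag* FindTransitionWindow(const AnimClip& current,
                                          float currentTime,
                                          const std::string& requiredTag) {
    for (const TransitionTag& tag : current.tags) {
        if (tag.time >= currentTime && tag.name == requiredTag)
            return &tag;   // earliest upcoming aligned transition point
    }
    return nullptr;        // no valid window left; wait or force a cross-fade
}
```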

Another approach to try is step-based planning. Since steps are so important, split your motion into left-to-right steps and right-to-left steps. With a bit of runtime CPU, you can then use your A-star planner, when you do your path plan, to actually plan out the steps as well. It creates a step-based plan that the AI tries to match with anims. Lesson #1: two-phase motion synthesis makes the animation look better – you can spread the error over the path. Lesson #2: it’s still hard and requires performance (4-8 ms). This hasn’t been tried in a shipped game yet, but instead of doing the search at runtime, you can precompute the search into a lookup that’s optimal enough to use as a controller. We can use Reinforcement Learning to put together the search field. So you can leverage the control system with tools before the game ships. Ultimately we can hope to see continuous planning – a paper is mentioned that covers where this is going. Alex summarizes by saying that up to parametric animation, those things are pretty straightforward. Getting into parametric transitions and beyond gets trickier, so it helps to have a lot of experience, and you might need it.
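
Again my own sketch, not Alex’s: one way the second phase might turn a smoothed path into alternating footsteps, spreading alignment error along the whole path – the stride handling and every name here are invented:

```cpp
// Two-phase idea, phase 2: convert an already-planned path into discrete
// steps, alternating left/right support feet (all names illustrative).
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

enum class SupportFoot { Left, Right };

struct Step {
    Vec2        position;   // where the stepping foot lands
    SupportFoot foot;       // which foot lands here
};

std::vector<Step> PlanSteps(const std::vector<Vec2>& path, float strideLength) {
    std::vector<Step> steps;
    SupportFoot foot = SupportFoot::Left;
    float carried = 0.0f;   // distance carried over between path segments
    for (size_t i = 1; i < path.size(); ++i) {
        Vec2 a = path[i - 1], b = path[i];
        float segLen = std::hypot(b.x - a.x, b.y - a.y);
        // Place a step every strideLength along this segment, continuing
        // from wherever the previous segment left off.
        for (float d = strideLength - carried; d <= segLen; d += strideLength) {
            float t = d / segLen;
            steps.push_back({ { a.x + (b.x - a.x) * t,
                                a.y + (b.y - a.y) * t }, foot });
            foot = (foot == SupportFoot::Left) ? SupportFoot::Right
                                               : SupportFoot::Left;
        }
        carried = std::fmod(segLen + carried, strideLength);
    }
    return steps;   // the AI then matches these against step animations
}
```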

Christian steps up to talk about architecture integration. He calls out controlling animation directly from the AI – it’s a bad interface and bad code. Classically the AI is the puppet master: decision talks to locomotion, tight coupling between what and how. His new integration is more drill sergeant and private – the AI gives orders and is proactive, the Character is reactive, but there’s no coupling back to the AI. The Character just provides a request handle back for an order, and the AI can request the status for that order using the handle. They drove towards this separation because they had so many different controller needs – AI, scripting, and animators all needed a different interface into locomotion than the decision engine did. So what is the Character? The Character has Character Logic and Animation Logic (for locomotion). The Navigation system is separate from the Character, though; it’s an autonomous processor. The Character handles AI orders. It does world/navigation queries. Christian then says it does pathfinding too, so it’s not clear to me how it’s separate. Characters have animation modules that control animation choices, as well as voluntary and involuntary movement, aka walk vs. hit reactions. The Character is modeled like a doll, just receiving orders. This separation is so nice and simple they are considering moving all the decision logic to script, because it’s basically exactly what the designer wants. Navigation and animation complexity is now localized to the Character. The AI can decouple its update and frame rate from the animation, reducing CPU load and going multi-core. Plus, they can share their behavior logic. One of his biggest wins is how it integrates scripting. He exposes behavior parameters, but there are also exposed script behaviors as well as full script control for things like in-game cinematics.
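
To pin down the drill-sergeant interface as I understood it, here’s a tiny sketch of the order/handle decoupling – the types and names are mine, not Naughty Dog’s:

```cpp
// Invented sketch: the AI issues an Order, gets an opaque handle back,
// and polls status instead of reaching into locomotion or animation.
#include <cstdint>
#include <unordered_map>

using OrderHandle = std::uint32_t;

enum class OrderStatus { Pending, InProgress, Completed, Failed };

struct MoveToCoverOrder {
    float x, y, z;   // cover position in world space
};

class Character {
public:
    OrderHandle Request(const MoveToCoverOrder& order) {
        OrderHandle h = nextHandle_++;
        status_[h] = OrderStatus::Pending;
        // ...kick off nav-map job, choose entry animation, etc...
        return h;
    }
    OrderStatus Status(OrderHandle h) const {
        auto it = status_.find(h);
        return it != status_.end() ? it->second : OrderStatus::Failed;
    }
private:
    OrderHandle nextHandle_ = 1;
    std::unordered_map<OrderHandle, OrderStatus> status_;
};

// AI side stays proactive but decoupled: issue the order, then each AI
// update just check character.Status(handle) to decide what's next.
```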

He also calls out his pathfinding mesh: a dynamic navigation map – every frame, rasterize the nav mesh into a high-resolution axis-aligned grid. It’s generated at runtime to include both static geometry and dynamic objects, obviating the need for a steering behavior completely. He shows an example that starts with a decision to go to cover, then a static path find that says cover is available and kicks off a nav map job. The AI just waits for completion. The next frame the rasterized nav mesh comes back, and now hazards and dynamic obstacles are identified. Why steer when you can path find? So he does a dynamic path find. Once movement is started, the Character starts the run to cover and reserves the cover spot. A lot of bugs in Uncharted came from cover “leaks” – not allocating/releasing correctly. This gives them one place, and one place only, to handle cover. As the Character gets close to cover, it does a special entry animation, re-pathing to where that animation starts. Half a meter away, it adjusts speed and direction to hide the animation switch. Christian contradicts Alex by calling out that he doesn’t need pose matching, due to the speed of the Character – an animator taught him that. Then the animation starts and the character ends in cover. The blending is still ongoing when the Order is reported complete to the AI, so the AI can choose the next action before the blend ends. To end, Christian calls out how much abstractions and well-defined interfaces help – in particular a Character abstraction to hide the complexity from the AI. Now they have Character programmers as well as AI programmers – only 3 programmers to do animation and AI. Finally, he calls out scripting as essential for a game, so embrace it in your architecture.
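
My rough sketch of what the per-frame rasterized nav map might look like as a data structure – the grid resolution and API are invented for illustration, not from the talk:

```cpp
// Invented sketch of a per-frame occupancy grid: stamp the static nav
// mesh in as walkable, then stamp dynamic obstacles back in as blocked.
#include <vector>

struct NavGrid {
    int width = 0, height = 0;
    float cellSize = 0.25f;            // high resolution, e.g. 25 cm cells
    std::vector<bool> blocked;         // width * height occupancy

    void Clear(int w, int h) {
        width = w; height = h;
        blocked.assign(static_cast<size_t>(w) * h, true);  // blocked until rasterized
    }
    void MarkWalkable(int x, int y) { blocked[static_cast<size_t>(y) * width + x] = false; }
    void MarkObstacle(int x, int y) { blocked[static_cast<size_t>(y) * width + x] = true;  }
};

// Per frame (as a background job): rasterize the static nav mesh via
// MarkWalkable, stamp each dynamic obstacle's footprint via MarkObstacle,
// then run a plain grid path find — which is why steering isn't needed.
```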

Question from Borut: ending the animation, did you do any special AI transition handling? Alex recommends a queue. What if the Order is the wrong order and the Character realizes it? Christian says there’s no such thing as a bad Order, just bad times for Orders. (I’m not so sure about that – there can be situations that invalidate an Order mid-stride.) Asked again, Christian says the Character can have the Order changed mid-destination, but it should be done by the AI. Handling multiple orders simultaneously, like in dialogue? Christian: the Character feeds information to the AI to choose the right things, and will support multiple things (facing, etc.), but the Character reserves the right to ignore Orders if it thinks they would look bad.

2008 AI Postmortems: Spore, Gears of War 2, and BioShock

Neil Kirby, Eric Grundstrom, Matt Tonks, John Abercrombie

Some lightning Q&A from the various AI programmers. First up, Eric on Spore. Spore’s too big for 5 minutes, so he focuses on behavior trees. Some challenges: they avoided scripting – no great debugging, and engineers end up doing most of the work anyway. Instead they wrote a system in C++: state machines with actions and behaviors, but after a year they ran into a bit of trouble. Groups were a challenge (how to do leaders?), and simulation vs. avatar was tricky (predator/prey models, avatar pacing, 5 different levels/modes of play). Some successes: behavior trees, player vs. environment rather than a living world, and giving obvious feedback about what the AI is doing, particularly relationships with the player – avoiding the n-squared complexity of player feedback. Very similar to his AIIDE 2008 talk. Very happy with using proven AI tech, avoided extra risk. Focus on the player’s experience and visualizing the player’s mental model. Keep the simulation under the hood, but don’t simulate what the player can’t see. Keep the game transparent – either show what the AI is doing or simplify it down.
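
For reference, here’s a minimal behavior-tree node in the C++ spirit Eric described – this is the textbook shape, not Spore’s actual system:

```cpp
// Minimal behavior-tree sketch: a selector composite tries children in
// priority order until one doesn't fail. Names are illustrative.
#include <memory>
#include <vector>

enum class Status { Running, Success, Failure };

class Behavior {
public:
    virtual ~Behavior() = default;
    virtual Status Tick() = 0;
};

// Selector: classic priority fallback over child behaviors.
class Selector : public Behavior {
public:
    void Add(std::unique_ptr<Behavior> child) { children_.push_back(std::move(child)); }
    Status Tick() override {
        for (auto& c : children_) {
            Status s = c->Tick();
            if (s != Status::Failure) return s;   // Running or Success stops the scan
        }
        return Status::Failure;                   // every child failed
    }
private:
    std::vector<std::unique_ptr<Behavior>> children_;
};
```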

Matt starts up the Gears of War 2 AI. Some winners: AI commands – an FSM hierarchy, stack based, basically the UnrealScript state stack without the sticky mess, and using multiple files/objects. A single AI controller with n states underneath it and transitions. Each state is broken out into its own object, using inheritance where necessary. Similar to actions in a plan, using a stack.
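
A quick sketch of the stack-based command idea as I read it – these are not Epic’s actual classes, just the shape of the pattern:

```cpp
// Invented sketch: states/commands are objects pushed and popped like
// actions in a plan; the controller updates whatever is on top.
#include <memory>
#include <vector>

class AICommand {
public:
    virtual ~AICommand() = default;
    virtual void Resume()  {}   // called when we become top of stack again
    virtual bool Update() = 0;  // return false when finished or aborted
};

class AIController {
public:
    void Push(std::unique_ptr<AICommand> cmd) { stack_.push_back(std::move(cmd)); }
    void Update() {
        // Pop finished commands and resume whatever is underneath.
        while (!stack_.empty() && !stack_.back()->Update()) {
            stack_.pop_back();
            if (!stack_.empty()) stack_.back()->Resume();
        }
    }
private:
    std::vector<std::unique_ptr<AICommand>> stack_;
};
```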

Another winner: debugging tools, as many as they could think of – AI logs per character for every state transition, “BugItAI” automagic! so that testers could better pinpoint where the problems were coming from, and a toolchest of AI debug commands. Man, this talk is completely packed; there’s no room at all to see or write. Another winner: constraint-based path planning. Using object queries to find valid path plan destinations and a goal evaluator to choose the best results. Flexible/reusable from script. Helps support multiple AI types that need different kinds of pathing. And last – “punchlist meetings” – getting stakeholders in a room together to play through the level and find the list of bugs. The output: everyone was involved, everyone was invested. Can more easily set priorities and stay focused.
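
Here’s roughly what I imagine the constraint-plus-goal-evaluator query looks like – the signatures are my guesses at the shape of the idea, not Epic’s API:

```cpp
// Invented sketch: gather candidate destinations, filter by constraints,
// score the survivors with a goal evaluator, take the best.
#include <functional>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };

using Constraint    = std::function<bool(const Vec3&)>;  // e.g. "has cover from threat"
using GoalEvaluator = std::function<float(const Vec3&)>; // higher is better

Vec3 ChooseDestination(const std::vector<Vec3>& candidates,
                       const std::vector<Constraint>& constraints,
                       const GoalEvaluator& score) {
    Vec3 best{};
    float bestScore = -std::numeric_limits<float>::infinity();
    for (const Vec3& c : candidates) {
        bool valid = true;
        for (const Constraint& con : constraints)
            if (!con(c)) { valid = false; break; }   // constraint rejected it
        if (valid && score(c) > bestScore) { bestScore = score(c); best = c; }
    }
    return best;   // caller should handle the "no valid candidate" case
}
```

Because the constraints and evaluator are just callables, different AI types (or script) can plug in their own pathing criteria without new C++ query code – which seems to be the flexible/reusable point.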

On to non-winners: epic battles between scripting and AI. Script overrides fought with needed AI behaviors, plus streaming bugs. They used a debugging tool that highlighted guys running script, so they knew who was failing because of script. Procrastinated behaviors – that was another non-winner. Deferring them meant they had to redesign things late in the game. Totally preventable; they just needed better information about what the purpose was. And the last thing: it’s difficult to do smart things when the environment doesn’t allow it (particularly in a cover game). Most of Gears 2 had big scope and scale, and so environments were less open. So few side flanks were available, which made the AI look less smart.

Question: Yes, sticking with FSMs. Question from Dave: what if the current action on the stack is invalidated? Yeah, people die, etc. – they do a lot of popping, and send messages that start aborting actions. Question: Yes, precompute everything you possibly can, because they have more memory headroom than CPU headroom.

John from BioShock goes up last. They had an AI strike team: one AI lead, one animation programmer, one AI contractor, two animators and an animation lead, as well as about half a designer. All sat together. Invaluable setup – no throwing things over the wall. Another thing that went right: rewrite as little as possible. Start off with a solid construction and only build what you need on top of that. Try and reuse your stuff, and don’t reinvent the wheel where it’s not necessary. Third, they scheduled experimentation in – in their case, Little Sister/Big Daddy interactions. They hadn’t done it before, so they added 2 months of programmer time to experiment. But don’t screw your schedule to do it. Fourth thing: the ragdoll recovery system (get-up system). They used Havok Animation initially to do it, but sofas and stairwells didn’t respect cylinders that well. Proactive solutions didn’t handle edge cases really well; a reactive system worked best. Test the cylinder when the ragdoll is done. If it’s in geometry – the bad case – get the closest waypoints and do reachability tests to see how close you can get to the mesh. Then move the physics cylinder to the waypoint, and interpolate the mesh to the cylinder as it gets up. The system was nearly cut for time, but a number of weapons needed it, and they got it working.
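
My sketch of that reactive get-up flow as John described it, with stand-in engine queries since I obviously don’t have BioShock’s internals:

```cpp
// Invented sketch of reactive ragdoll recovery. The three queries are
// trivial stand-ins for real engine calls.
#include <vector>

struct Vec3 { float x, y, z; };

// Stand-ins: the real versions would hit physics and the waypoint graph.
bool CylinderPenetratesGeometry(const Vec3&)       { return false; }
std::vector<Vec3> GetClosestWaypoints(const Vec3&) { return {}; }
bool IsReachable(const Vec3&, const Vec3&)         { return true; }

// Only fix things up if the settled ragdoll's test cylinder is actually
// inside geometry (the bad case); otherwise just get up in place.
bool FindRecoverySpot(const Vec3& ragdollPos, Vec3& outSpot) {
    if (!CylinderPenetratesGeometry(ragdollPos)) {
        outSpot = ragdollPos;                 // easy case
        return true;
    }
    for (const Vec3& wp : GetClosestWaypoints(ragdollPos)) {  // closest first
        if (IsReachable(wp, ragdollPos)) {    // reachability test back to mesh
            outSpot = wp;                     // move the physics cylinder here,
            return true;                      // blend mesh to it during get-up
        }
    }
    return false;                             // nowhere safe found
}
```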

What went wrong? They thought they had very little scripting, because they wanted non-linear game spaces. Oops. Random spawners, items, moving AI patrols – they thought they were systematic, which made it difficult to control scripted sequences. But they didn’t plan for scripting in their simulation, and the training level, demos, and worse, the E3 press build all needed scripting. Also, interfacing with designers was poor. No designer on the team meant level designers didn’t know how to use the AI. If the designers had told the AI team what they were trying to do, the AI team could have helped set that up. Plus, attributes were tuned outside the AI strike team, so gameplay was different from what the AI strike team intended. Lastly, AI performance was always slow. Sometimes a lot of ranged-attack line checks needed to be done in a single frame. That was a huge mistake. They had to optimize the heck out of it, but lesson learned – be ready for asynchronous checks.
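
And a sketch of that “be ready for asynchronous checks” lesson – queue line checks with a callback and budget how many get serviced per frame; everything here is illustrative, not Irrational’s code:

```cpp
// Invented sketch: instead of blocking on raycasts, AI queues requests
// and reacts via callback a frame later; the budget caps per-frame cost.
#include <deque>
#include <functional>
#include <utility>

struct Vec3 { float x, y, z; };

struct LineCheckRequest {
    Vec3 from, to;
    std::function<void(bool hit)> onDone;   // AI reacts when the result arrives
};

class AsyncLineChecker {
public:
    void Queue(LineCheckRequest req) { pending_.push_back(std::move(req)); }

    // Service at most `budget` checks per frame, rather than letting one
    // AI update issue dozens of synchronous raycasts in a single frame.
    void Update(int budget,
                const std::function<bool(const Vec3&, const Vec3&)>& raycast) {
        for (int i = 0; i < budget && !pending_.empty(); ++i) {
            LineCheckRequest req = std::move(pending_.front());
            pending_.pop_front();
            req.onDone(raycast(req.from, req.to));
        }
    }
private:
    std::deque<LineCheckRequest> pending_;
};
```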

AI and Designers: Mind the Gap

Alex Hutchinson, Soren Johnson, Joshua Mosqueira, Adam Russell, Tara Teich

Man, these guys are getting set up. Both moderators called them up for a bruising. First Soren asks: where’s the line? Alex says AI is a player service. Good AI is like your dad playing you in tennis. How does the AI lose in a fun way? Tara: we want the AI to look smart – not just fold over. Programmers have to make it look smart. This is somehow turning into a lecture from Creative Directors; not sure what’s new here yet. Adam: Fable AI was ambient player experience, but they spent a lot of time simulating the town life without the player there. They weren’t designing the AI correctly at all because they weren’t using the player’s perspective. Tara: we want the simulation, but we don’t need it. So much for a fight here. Soren: Sid (Meier) used to ask “are you making the game fun for the player? Or for yourself?” E.g. Civ has these deviant units (subs, spies, nukes) that are so different from the rest of the systems that the AI work for them is disproportionate – better spent on better AI somewhere else. Tara: and random can be good enough – players will attribute stories to it. Alex: yes, AI can be story-generating, decorative. Soren: but some people play games for mastery – for other reasons that need AI. (Finally! Push back!) Alex: it’s important to always think about game design as systematic. You can’t do sequences of special events – no learning, and the schedule blows out. That said, the outliers in systems are the things people remember. Soren: how do we train designers more systematically? Alex: no one’s figured out how to train designers at all. Everyone’s learning the hard way. Designers need to be headstrong and driven, and are really difficult. They have to learn to compromise. Tara: programmers are more focused on rules. Alex: distressingly common – engineers knowing things are wrong but proving it by doing, and wasting the company’s money. Joshua: you want the friction, because 2 perspectives can give good stuff – he likes agile, strike teams. Alex: being able to think like an engineer is a core design thing. “Engineers don’t listen to me” really means designers can’t talk to the engineers; it’s just gibberish. It’s a problem on the design side. Frequently the designers just go around the engineer and find somebody they like instead. He’s trying just doing 80% of the design and handing it off to the engineer that way – a relationship where the designer is doing more than teaching engineers to suck eggs. Over-designing just leads to engineer fury – yes, you can get engineers through the fury by the end of it, but it’s not clear it’s worth it.

Tara: Most programmers are poor communicators, but they have to interface with people. Good AI programmers are designers, and designers have to trust them. Adam: It’s like tools programming. Are AI programmers just tools programmers, then? Joshua: 2 perspectives working together gives better design, day to day. Alex: The AI programmer ideal rarely matches the level design, even in script-heavy games. Adam: A lot of AI is becoming about presentation. Many AI programmers are just running to keep up. Alex: Wants a customer service relationship, but force the designer to discuss and debate. He always wants to think “I want to agree 110% every time.” Soren: Problems with AI programmers? Joshua: They tend to be logical optimizers. They can miss the fun experience. A bit too right. Adam: We don’t have a plausible model for how AI makes plausible mistakes.

Soren: Cheating? Making the AI competitive in RTS games, some cheating works just fine, but other types really drive players crazy. How do we work together on this? Tara: No need to cheat when you can do the right thing. The value is in trying to map real human intuition, as long as the player can’t tell. Adam: I completely believe in cheating in the code. Alex: In-fiction cheating is very different from out-of-fiction cheating. Joshua: Don’t get caught, don’t get caught, don’t get caught. Adam: Experience design is all about cheating. Soren: Cheating is why AI programmers think like designers – it’s all about player experiences. E.g. Puzzle Quest – people swear the AI cheats with gems, and he thinks the game would be more enjoyable if they went into the code and cheated to make sure that it couldn’t happen. Joshua: Player perception is reality – even if the AI isn’t cheating, if the player thinks it is then it might as well be cheating. Alex: But I hate rubberbanding – I want room for player expression and player ability, player failure. Adam: I hate doing that in academia too – rubberbanding the students diminishes the education. If they’re complaining, that means they still care.

Soren: What would you redo? Tara: Design happening early without programmers being present. Engineers need to learn not to say no a lot that early, but they need to be there. Adam: Scripters working separately from the AI guys – 24 hours a day, script would override the AI simulation in Fable towns. Joshua: Company of Heroes – they didn’t have people working together on squad AI at the strategic level as well. Alex: Spore – they didn’t learn the lesson of caring about the things the player cares about early enough. They solved a difficult problem no one cared about – making the creature planet and the space planet look the same – for not a strong enough reason.

Question: Soren: Yes, the more you can show of your AI the better. Very conflicted feelings about fog of war, for example. Dave Mark: AI designers? Alex: Kind of like level designers in a systems-based game. That job’s been around forever – it’s the level designer. Adam: Programmers who make the switch get loved. Soren: Thinks we’ll see more and more of this overlap, like Technical Artists. Q: Collaborative AI? Alex: Social AIs, definitely – a very different problem space, more room to play without failing. Adam: Companion design really does expose AI design in a more meaningful way.

On to my panel on human characters.  Wish me luck!
