GDC 2009: Tuesday

Again, the usual applies – these are at-the-time stream of consciousness notes, unedited.  Pure live-blogging with the added disadvantage of not actually being live.  I have to write quickly to keep up, probably missed everything, apologies to speakers I just didn’t understand.  It was a fantastic conference, and you’ve all been a part of it.  My personal additions are in ().  This is nowhere near as good as having actually been here or having the actual audio, I’m afraid.  But here’s something to remind us all with.

Breaking the Cookie-Cutter: Modelling Individual Personality, Mood, and Emotion in Characters

Dave Mark, Phil Carlisle, and Richard Evans

Wow!  The Summit is packed.  I wasn’t able to sit down through Richard and Dave’s talk. Both focused on personality traits and using rules to create gameplay around manipulating AI. Phil is up talking on emotion. He’s citing a lot of references, slides are probably very good, grab them when they go up. One was the book by Ortony, Clore, and Collins on appraisal. Seemed to be quite inspirational. He says so much of what he needs comes down to knowledge representation, about objects, agents, events. Now on to Darwin – emotions are partly evolved. Forms the basis of fight or flight. He noted that involuntarily his physiology forces him away from danger. Trigger mechanism, subconscious. Dr. Paul Ekman spent his life studying evolutionary facial expressions – the FACS coding system. They were looking for universal facial expressions, as proof of evolution. But discovered it’s not the complete evolution picture. Facial expressions communicate feelings, and game faces tend to look dead. Scott McCloud explored this as well in “Understanding Comics” – using iconism to understand faces and emotion can work even better than realism. And universality as well. Studies show children have a fuzzy image of faces at birth anyway; we may get our iconic comprehension from our developing brains. W. Scott Neil Reilly’s thesis is really good too, Ken Perlin’s stuff, lots of similar research that can help.

His implementation uses blackboards for knowledge representation, with a separate appraisal unit. The blackboard can actually talk directly to animation to communicate game data. The blackboard is a great way to get started, although I’m not clear why.
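
(A minimal sketch of what such a blackboard might look like – a keyed store the appraisal unit writes to and the animation layer reads from; the key names and structure here are my own invention, not Phil’s actual code.)

// Minimal blackboard sketch: a keyed store that an appraisal unit writes to
// and an animation layer reads from. Names and structure are illustrative only.
#include <cstdio>
#include <map>
#include <string>

class Blackboard {
public:
    void SetFloat(const std::string& key, float value) { floats_[key] = value; }
    float GetFloat(const std::string& key, float fallback = 0.0f) const {
        auto it = floats_.find(key);
        return it != floats_.end() ? it->second : fallback;
    }
private:
    std::map<std::string, float> floats_;
};

int main() {
    Blackboard bb;
    // Appraisal unit writes emotional state...
    bb.SetFloat("emotion.fear", 0.8f);
    // ...and the animation layer reads it directly to pick posture/face.
    if (bb.GetFloat("emotion.fear") > 0.5f)
        std::printf("play cowering posture\n");
    return 0;
}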

Bodies have involuntary emotional reflections. Communicate via our animation modeling – posture, face, gesture. And it’s 2-way in real life – getting someone to smile actually makes them happy.

Question: Getting more? Dave calls out my point that the blocker is time. Phil says indies are going to do a lot here fast. He’s surprised more people haven’t copied the Sims. Question: Sims 3 using planning for actions? Richard: Locally, but they cut the long-term planning. We have a lot of Sims in the game. Question: Isn’t this an n-squared feedback problem, too deep? Richard: You only see as much as you want to see. We can present the data and players will discover pieces of it. Dave: play to human innate knowledge and intuition, so that personality and emotion work the way expected.

When Good AI Goes Bad:  Tools, Techniques, and Strategies for Testing and Debugging AI

John Abercrombie, Phil Carlisle, Alex Champandard

Alex: How important is design? Phil: Important to know the feature set, everyone, all the way down to test. Alex: When to start? John: Day 1 – when you need them it’s likely too late to make them. Alex: Not many people think about testing/debugging when designing code. John: On Bioshock, problem 1 is that I didn’t write it down up front, problem 2 is that test didn’t know what should happen. Make sure to write it down and share it! Phil: If we do iterative design, you have to be able to do iterative test as well. John: Documentation usually can’t keep up, so wait till the iteration stabilizes. Phil: Accept responsibility for taking care of it personally. If you do Scrum, get testers on the team. Fight designers just wanting to sit together. John: Yes, don’t let them throw it over the fence. Phil: Particularly if you have an external publisher testing.

Alex: Building test plan? Phil: Do it just in time. Fill out the matrix as a team and break up the work. John: Make test plans at least by alpha, at least when you have a realized AI design. Phil: Don’t depend on docs, depend on people. John: Hard when teams are huge. Phil: Lean on agile, Scrum methods to handle scale.

Alex: Programming techniques for debugging? John: Start with logs. College style. Script debugging is particularly hard. Try and get a debugger. Otherwise you’ll end up wasting a lot of time. Phil: Scripting languages are great; if you can get other people productive it can save you a lot of time. Alex: Takes quite a long time to get a scripting language environment mature, errors fixed. Phil: Some designers are great at script, but be wary of giving everyone scripting languages. Give them training. John: Give designers enough rope, but not enough to hang themselves. Alex: On CoD, scripters are almost coders. Phil: Build debugging tools for scripters because you’ll need them too. John: Regrets not having better documentation. Let them go wild but be there to help.

Alex: Visual editing of behavior trees? Phil: It’s great. Giving other people tools can make you really productive. Having really productive tools can get you farther than even extra programmers. Alex: Yeah, a tree view is the most structured and flexible way to do tool work. Alex: Kismet script for behaviors? John: Not sure. It’s easier for logic, if laid out well. Not just a ball of text. Does make coding it more difficult.

Alex: Automated testing? Phil: Yeah, made a version of Worms that could run with no controller. Made it platform independent because we got away from the platform. Then we were able to automate AI smoke tests over night and get a lot of data. John: Would run Bioshock “war maps” where AI would just fight and the player would randomly teleport around, looking for crashes and AI problems. Phil: Basically substituting AI for playtesters. Alex: What kind of logging? John: Debug logs were one of the first things we did – categories, debug level. Used for speech, movement, behaviors, everything. Alex: Seems like the first place you go for bug info. Phil: Of 2 minds on logs. Kinda have to do it, but think there’s a better way to find bugs. Can be hard to see bugs. Filters themselves are very helpful. Alex: Also per-NPC narrows it down. John: Great to get logs from testers. Alex: Asserts? John: Ignore what you can. Avoid putting yourself where you have to assert. Usually in AI they are content-related, so you should be able to continue. Phil: Easy for people to just ignore the asserts. John: Use automated submission of bugs to track these. (Great idea!) Have your bug solution handle many reports of 1 bug well.
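
(A toy version of the kind of categorized, levelled, per-NPC-filtered debug log being described – the categories, levels, and API are my guesses, not Bioshock’s actual system.)

// Toy categorized debug log with levels and a per-NPC filter,
// in the spirit of the logging described above (details are invented).
#include <cstdarg>
#include <cstdio>

enum LogCategory { LOG_SPEECH = 1 << 0, LOG_MOVEMENT = 1 << 1, LOG_BEHAVIOR = 1 << 2 };

struct LogFilter {
    unsigned categoryMask = LOG_SPEECH | LOG_MOVEMENT | LOG_BEHAVIOR;
    int minLevel = 0;       // 0 = verbose, higher = more important
    int npcFilter = -1;     // -1 = all NPCs, otherwise only this NPC id
};

static LogFilter g_filter;

void AILog(int npcId, LogCategory cat, int level, const char* fmt, ...) {
    if (!(g_filter.categoryMask & cat)) return;
    if (level < g_filter.minLevel) return;
    if (g_filter.npcFilter >= 0 && npcId != g_filter.npcFilter) return;
    std::printf("[npc %d] ", npcId);
    va_list args;
    va_start(args, fmt);
    std::vprintf(fmt, args);
    va_end(args);
    std::printf("\n");
}

int main() {
    g_filter.npcFilter = 7;                 // narrow the log down to one NPC
    AILog(7, LOG_BEHAVIOR, 1, "entering combat state");
    AILog(3, LOG_MOVEMENT, 0, "this one is filtered out");
    return 0;
}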

Alex: Pause/Play/Step? Phil: Useful. Alex: Debug views? John: Paths, grenade arcs, ability to focus the camera on an AI but still be able to move the player with the stick, also could pause AIs but keep the sim running, feet location history. Phil: Yeah, that’s really useful. Particularly for screenshots. John: Debug views usually done as part of the initial test pass of the original work. Expectations on the design side for what’s visible. Like sense cones. John: Can use views to also prove you’ve got work done.

Alex: Designer metrics? Phil: Coverage testing – like what weapon is selected, can really inform what design work needs to be done. Hard data. Have an alternate automated way to store this data, and design for it up front. Can build it architecturally with access functions like Insomniac attributes.

Alex: Builds? John: Validate your content using verification. Phil: Build these automated tests into your checkin tools. CPU power is like free, really, so exploit it. Journaling is huge too. Architect so that you can reproduce from beginning to end. Alex: Yeah, reproducing bugs? John: We’d get logs and screenshots. Tricky because loading shaders can change timings and lose an oscillation bug. So we paused before saving out a saved game and that helped retain the problem. Saving the pause state really helped. Phil: We built using timers and events so we wouldn’t run into that problem.

Question: Test-driven development? Alex: Uses it for aigamedev.com. Doesn’t find it helps catch as many bugs. Still need QA to see the behaviors. Phil: Unit tests are great. Anything you can do that takes effort away from yourself is a win. John: Also is great because it tests for breaks in other systems. Audience: Yank log files into Excel – gives a lot of leverage to handle formatting. Statement from Christian: Asserts – use emails to alert the team on asserts without just halting the build. Group by the same callstack and email it. (Similar to the auto bug reporting. I’ve had this on multiple projects with good success). Question: Documentation? Phil: Don’t document in groups of only 5-6. Just do daily meetings, it’s not a good use of time. John: Documentation is tied to distance, more for farther away.

From the Ground Up:  AI Architecture and Design Patterns

Brett Laming

Why care about “design patterns”? Not enough time, and patterns can give us reliability and production speed. He’s looking for reusability. Building on hierarchy, algorithms, commutability of ideas. He’s built his ideas from observation, introspection, generalizations, bad experience, background. Best Practices: prototype new ideas where possible, prove or disprove the concept. Quick and dirty programming. Play to people’s strengths – use physics guys for math, collision guys for nav mesh help. Get designers writing soak tests and unit tests. Please! – one co-ordinate system. Maximize workflow, particularly as a lead – exploit other team members in other areas. Build a debugging suite with instant pause, flyable camera, layered information, action histories, navigation and behavior info, logs for debug kits. Dump the logs to memory on the kits and look at them later. And maintain player immersion – limit ourselves to what the player would know or act on.

The Think-Act Loop – mimic the player. Use sensory data, think about it, send it to the controller, and act on it. Use blackboards to share sensory arcs, positions, etc. He recommends using a “virtual yoke” – an independent controller component that can clean itself up every frame – as the Act interface and the input to the Think step next frame. He’s storing most game data in it – different kinds of position, vehicle state. Because it’s modular you can change your yoke down the line but still apply it the same way.
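
(My rough reading of the virtual yoke idea as a small sketch – the Think step writes intentions into the yoke, the Act step reads them, and the yoke clears itself each frame. Field names are invented.)

// Rough sketch of the Sense-Think-Act loop with a "virtual yoke":
// Think writes intentions into the yoke, Act reads them, and the yoke
// resets itself each frame. Field names are my own guesses.
#include <cstdio>

struct Yoke {
    float steer = 0.0f;      // desired turn, -1..1
    float throttle = 0.0f;   // desired speed, 0..1
    bool fire = false;
    void Clear() { *this = Yoke(); }   // yoke cleans itself up every frame
};

struct Agent {
    float enemyBearing = 0.0f;   // filled in by Sense
    Yoke yoke;

    void Sense(float worldEnemyBearing) { enemyBearing = worldEnemyBearing; }
    void Think() {
        yoke.steer = enemyBearing > 0.0f ? 1.0f : -1.0f;
        yoke.throttle = 0.5f;
        yoke.fire = (enemyBearing > -0.1f && enemyBearing < 0.1f);
    }
    void Act() {
        std::printf("steer %.1f throttle %.1f fire %d\n",
                    yoke.steer, yoke.throttle, yoke.fire ? 1 : 0);
        yoke.Clear();   // ready for next frame's Think
    }
};

int main() {
    Agent a;
    a.Sense(0.05f);   // enemy is almost straight ahead
    a.Think();
    a.Act();
    return 0;
}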

Use dt in your frame calculations, even if using fixed time steps. Gives you pause and level of detail for free. It allows you to decouple time from your AI work.
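
(A tiny illustration of the point, assuming a per-agent cooldown timer – scaling by dt means pause and LOD fall out naturally.)

// Tiny illustration of scaling AI updates by dt even with a fixed step:
// pause (dt = 0) and LOD (larger dt, fewer updates) then come for free.
#include <cstdio>

struct Cooldown {
    float remaining = 2.0f;   // seconds until the AI may fire again
    void Update(float dt) { if (remaining > 0.0f) remaining -= dt; }
};

int main() {
    Cooldown c;
    float dt = 1.0f / 30.0f;        // fixed step
    c.Update(dt * 0.0f);            // paused: no time passes
    c.Update(dt * 4.0f);            // low-LOD agent updated 4x less often
    std::printf("remaining %.3f\n", c.remaining);
    return 0;
}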

Using a functional approach, we build on feeling, knowledge, goals, beliefs, needs. An example of a plan for getting treasure in an RPG. This is goal-based reasoning. Broken into hierarchical small tasks, with actions broken into container objects. The whys come from beliefs and needs. He points out there is an atomic level of actions – actions that just generate results. Points out external commands can come in and break these decisions, and that’s OK. Just don’t mix the data across. Search-based planning has a lot of good here – mimicking our introspective reasoning. But it’s bad at knowledge representation. Hard to use and track so much data in planning.

Instead, consider procedural approaches, aka behavior trees. It’s hierarchical, simple, a limited language with basic transitions. Doesn’t take into account competing children well. In his shopping example, if he needs 2 of something, it’s hard to tell you can buy them at the same time. Still, he thinks this is a good way to go. Moving on, how do we stall on interrupts? HFSMs in particular don’t handle it well. Behavior trees may be easy to visualize as long as they’re simple, but how well can they handle interrupts? It’s a limitation. Another issue is the limitations of goto for behavior trees, which he doesn’t drill into. He proposes a new model to deal with this – MARPO – which seems to be a localized behavior tree that tracks states on 3 stacks – regular, reactive, and immediate. The stacks are prioritized to pick the best current action. It doesn’t have transitions; missed how. He uses the stack to keep the decision logic in memory. It does require scripted parameterized building blocks, not tools. Because it’s a stack, it has an interesting winding ability (that operates a bit like behavior trees recalculating). He does say he would add stacks as he needs to add depth. The stacks are primarily for priority suppression. He is definitely dealing with a memory/CPU limitation on the DS here. (I fear though he’s working around the priority problems. It’s a shame he hasn’t spent more time on highlighting the specific challenges he’s trying to solve here. He’s pushing through them and solving them purely with “what would a human do?” Almost too much detail in the slides to effectively understand and compare MARPO to other solutions.) He moves into pathfinding, covering much of what’s well-known. He’s trying to dig into the animation/locomotion architecture problem, but I’m having trouble following his core argument about the differences between them. There’s a key argument here between him and Alex. He’s going quite wide-ranging here, hitting almost every topic in AI. Almost proposing a method of approach to architecture, a philosophy, with a bit of a summary on how to apply it. (But I do wish he went over more of the controversies/tricky things people are doing differently rather than treating the entirety of architecture as one all-or-nothing problem. I was having trouble tracking what was argument and what was common sense or general knowledge.)

(I asked Brett more about why he was doing stacks here, since it seemed like an odd choice.  He said it was specifically to solve the scripting problem – if script control is interrupted, his stacks can store the script state without anyone else needing to know.)
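
(A loose sketch of the three prioritized stacks as I understood them – this is my own reading, not Brett’s actual MARPO implementation.)

// Loose sketch of the three prioritized task stacks (long-term, reactive,
// immediate) as I understood them -- not Brett's actual MARPO implementation.
#include <cstdio>
#include <stack>
#include <string>

struct Task { std::string name; };

struct TaskStacks {
    std::stack<Task> longTerm;    // scripted / strategic goals
    std::stack<Task> reactive;    // responses to game events
    std::stack<Task> immediate;   // must-handle interrupts (damage, physics)

    // Highest-priority non-empty stack wins; its top task runs this frame.
    const Task* Current() const {
        if (!immediate.empty()) return &immediate.top();
        if (!reactive.empty())  return &reactive.top();
        if (!longTerm.empty())  return &longTerm.top();
        return nullptr;
    }
};

int main() {
    TaskStacks s;
    s.longTerm.push({"follow scripted patrol"});
    s.reactive.push({"investigate noise"});
    // The scripted patrol stays on its stack, suspended, while the
    // reactive task runs; pop the reactive task and the patrol resumes.
    std::printf("running: %s\n", s.Current()->name.c_str());
    s.reactive.pop();
    std::printf("running: %s\n", s.Current()->name.c_str());
    return 0;
}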

The Photoshop of AI: Debating the Structure vs. Style Decomposition of Game AI

Chris Hecker, Steve Rabin, Damian Isla, Stuart Reynolds, Borut Pfeifer

Chris Hecker’s original talk is available on his website. Started with triangles, waveform synthesis. AI for games is a hard problem. Chris claims interactivity in computer games requires a computer in the loop. And yet, there’s the emotional side that forms an art form. You need a human in the loop. Human vs. Computer, in a sense, and the structure mediates statically, as a kind of contract. The style is the filling in of those degrees of freedom in the contract. It should be blendable, efficient, etc. (list from his original talk). In all of his examples, code is always on the structure side, except on the AI side. Anecdote about the Sims and Half-Life – how everyone is leaving their own scripting language and going to everyone else’s – visual/text. It’s not like Photoshop is accessible, that’s not the right word. The mastery is in the aesthetics. It’s not ease of use. Maya is a mess of heterogeneous structures. But it still has a tight feedback loop – click the mouse and the pixel turns green. Not sending it over the network to your console. On the millisecond basis. We’re very far away from that in code right now. Can code be a successful degree of freedom for an aesthetic problem? (aka does it require code? I still maintain that there’s a data approach possible here as well).

Steve: It’s an incredibly seductive argument. Need to verify that we didn’t just miss it. Seems like it needs an if statement, which means code. But he does see potential in the structure of an actor with sliders or a list of goals. The style comes in after much of the AI work has already been done.

Damian: Stole his idea! Photoshop has a thousand years of art history to draw from. 3D sculpting. A few paradigms of behavior modification to draw from, and improvisational acting seems like the source for AI to draw upon. Reverse engineering scripts to create performance. There’s a tremendous amount of back and forth on the stage – the director and actor work out exactly what each line/blocking is supposed to mean. Great example drawing on a Halo-esque scenario. Shows how you could parallel the creation of a space marine as an improv actor.

Stuart: What about machine learning? Right now AI dev is so long that you lose sight of what you’re making. Why not do behavior recording? Would give better design control.

Borut: Consider Facade? It’s the high-water mark for much of game AI. How would that work with…

Chris: Wait, behavior capture? Compare it to motion capture. That means we’ve already figured out the degrees of freedom. But there’s also the cleanup problem. Stuart: Can edit out behavior, stitch them together. We’re recording as much of the game state as possible. Can choose what people pay attention to, as you do the recording, playing the game. The training data is the specification, it’s what you’re trying to get, and I argue it’s even simpler than motion capture. All: Hmmm… Borut: Can you elaborate on the combining? Stuart: You can blend together aggressivity and fire. Chris: What are the floating point numbers? How do you interpret the designer’s gameplay? Stuart: Give examples, give situations. Chris: How is this not “if we had a Turing-complete AI, this would be easy”? How do you give it the interpretation? Damian: Yes, the hard part is figuring out the structure, the actual parameters. Chris: What that slider bar adjusts is the hard part. Stuart: Yes, you still have to pick it, but anyone can do it once you’ve chosen the structure. Chris: Excel isn’t very good structure.

Borut: Are we doomed through specificity? Chris: Yes, you can dodge through your special case code, clearly works, not much reuse. Damian: It’s not hard to hand-code this if it’s just 5 numbers. Chris: Neural nets aren’t a good structure because they are too opaque a structure. (It’s interesting, because if there is a standard set of sliders, a structure, it’s probably fairly complex and fairly tied to robotics.) Chris: Everything just seems to come back to sliders and switching what code you run.

Borut: What about the Sims’ personalities? Damian: Look for orthogonalities that we can tune independently. Particularly orthogonalities across disciplines. Chris: Can’t just do it on the animation level, because it doesn’t read – just changing animations to change style. Don’t need fully improvisational actors, just need to separate out the what and the how. The adverbs thing from Richard Evans is the perfect thing, those could be our structure. Damian: Michael Mateas would call out that the core of AI is procedural, so the core is code. Chris: Huge fan of code, but on Spore’s procedural animation system we wanted animators to animate. We somehow want actors to act.

Stuart: Game AI is an extreme example of trying to special case everything. Chris: We talked about standardizing AI. To me it’s like, uh, no. Graphics started with NURBS and quadratic surfaces. We want there to be a lot of competition, Wild West, right now. Because we can’t answer these questions yet.

Question: The strength of Photoshop is it’s a simple primitive and then it has transforms. It’s the complexity that can transform the data. Chris: That’s the whole topic of the panel, the focus is on tools because that’s how you manipulate the structure. (Maybe orthogonal basic behaviors with environmental transforms). Stuart: Researchers use decisions as basic primitives and you look at the world through decisions. Chris: That’s the if-statement, right? (Wonder how contexts fit into these basics as well, consider a war context versus a home context – the AI shouldn’t change. Maybe the context is the transform? Need to ask Borut).

Beyond Behavior: An Introduction to Knowledge Representation

Damian Isla, Peter Gorniak

We spend a lot of time on what our AIs do but very little on what they know. One of those great neglected problems. He’s coming to the conclusion that all these decision techniques – behavior trees, HFSMs – are all really the same thing. Let’s assume it’s solved, and look at the inputs. Define behavioral knowledge: when to run, shoot, flank. Does an ant know where an anthill is? Really, it knows its scent trail, so let’s look more at its state knowledge. First, our internal representation of things is not the same as the thing itself. It should be really different. Outside objects should get translated by the agent and pulled by the behavior to “make decisions”. Now, all of AI is about making decisions. KR is about perception and interpretation decisions. So why KR? Because it’s fun to exploit KR mistakes, and there are many new modes of interaction. It makes AI more lifelike, easier to reason about, and make emotional reactions to. It’s important to recognize we’re already doing this, but we’re not getting the full bang for our buck. But KR is about better representations, the search for more expressive power. Build better behavior out of better primitives. We should be dealing with Java, not assembly. Much higher representations of things.

Damian starts by looking at time scales – things we might call facts at infinite time. And then instant knowledge – I have 3 bullets, a car is coming towards me. There’s probably stuff in the middle that is true for a long time but not guaranteed. All of these might possibly have different representations; time might be the way to divvy it up. 3 other key concepts: confidence in the knowledge, salience of the data (importance), and then prediction – based on what I’ve seen, what do I think will happen next, maybe through extrapolation functions or learning.

He shows a really neat basic prototype of an AI that doesn’t represent the player’s location as X/Y – he uses an occupancy map to represent the probability of where the opponent is. He ducks around corners and the AI investigates different corners. It’s really simple behavior, Doom 1 era AI, with a sophisticated KR. So simple a designer could write it. But there’s some subtle stuff, like a confused() which adds a pause when he’s really lost the target. Confusion can be represented as: something I was confident in is confirmed FALSE. Surprise is when something you thought unlikely becomes TRUE. Both of these can be represented in the occupancy map, in the probabilities that the player is in any space. Fairly interesting, simple way of doing surprise.
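
(A simplified sketch of an occupancy map, 1-D for brevity – probability mass diffuses, visible cells get cleared, and surprise/confusion fall out of the probabilities. Illustrative only, not Damian’s code.)

// Simplified occupancy-map sketch (1-D for brevity): probability mass for
// "where the player might be" diffuses each frame, visible cells are cleared,
// and surprise/confusion fall out of the probabilities. Illustrative only.
#include <cstdio>
#include <vector>

struct OccupancyMap {
    std::vector<float> p;   // p[i] = probability the target is in cell i

    explicit OccupancyMap(int cells) : p(cells, 1.0f / cells) {}

    void Diffuse() {        // target may have moved to a neighboring cell
        std::vector<float> next(p.size(), 0.0f);
        for (size_t i = 0; i < p.size(); ++i) {
            next[i] += p[i] * 0.6f;
            if (i > 0)            next[i - 1] += p[i] * 0.2f;
            if (i + 1 < p.size()) next[i + 1] += p[i] * 0.2f;
        }
        p.swap(next);
    }
    void ObserveEmpty(int cell) {   // we can see this cell and it's empty
        p[cell] = 0.0f;
        Normalize();
    }
    void Normalize() {
        float sum = 0.0f;
        for (float v : p) sum += v;
        if (sum > 0.0f) for (float& v : p) v /= sum;
    }
    int MostLikelyCell() const {
        int best = 0;
        for (size_t i = 1; i < p.size(); ++i) if (p[i] > p[best]) best = (int)i;
        return best;
    }
};

int main() {
    OccupancyMap m(8);
    m.Diffuse();
    m.ObserveEmpty(3);           // looked where we expected him -- not there
    // "Confusion": a cell we were confident in turned out to be empty.
    // "Surprise": the player shows up in a cell with very low probability.
    std::printf("search cell %d next\n", m.MostLikelyCell());
    return 0;
}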

Target lists, a form of knowledge representation, are just changing objects into local “targets”. Usually position, action, hitpoints – outside perceived data. This is interesting because we can ascribe confidence to this data and decay that over time. Maybe different decays per data piece. Allows the AI to make mistakes. There’s probably internally derived data as well – threat level, target intentions, target weighting. Another example – how hiding can get the AI to switch weapons when it thought it was close, and be surprised at the range the player actually is at. The player can trick the AI.
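
(A sketch of a target-list entry with per-field confidence decay – the fields and decay rates are invented for illustration.)

// Sketch of a target-list entry with per-field confidence that decays over
// time, letting the AI act on stale beliefs and be wrong. Fields are invented.
#include <cstdio>

struct TargetEntry {
    float lastKnownX = 0.0f, lastKnownY = 0.0f;
    float positionConfidence = 0.0f;   // 1 = just seen, 0 = no idea
    float healthEstimate = 100.0f;
    float healthConfidence = 0.0f;

    void See(float x, float y, float health) {
        lastKnownX = x; lastKnownY = y; positionConfidence = 1.0f;
        healthEstimate = health;        healthConfidence = 1.0f;
    }
    void Update(float dt) {
        // Different decay rates per piece of data: position goes stale
        // faster than our estimate of the target's health.
        positionConfidence -= 0.20f * dt;
        healthConfidence   -= 0.05f * dt;
        if (positionConfidence < 0.0f) positionConfidence = 0.0f;
        if (healthConfidence < 0.0f)   healthConfidence = 0.0f;
    }
};

int main() {
    TargetEntry t;
    t.See(10.0f, 4.0f, 35.0f);
    for (int i = 0; i < 90; ++i) t.Update(1.0f / 30.0f);   // 3 seconds pass
    std::printf("position confidence %.2f, health confidence %.2f\n",
                t.positionConfidence, t.healthConfidence);
    return 0;
}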

Memory. There’s working memory – volatile current state. There’s short term memory, such as these target lists. Better would be some sort of interpreted curve so you could do some sort of extrapolation. And episodic memory, which Damian hasn’t thought about enough, other than that it’s probably particularly tied to salience.

2 obvious challenges to this problem already – #1 representational versatility. There’s too much variety on screen to represent it all the same way. He proposes a polymorphic solution. At the Media Lab they used a Percept DAG, a tree where each node made decisions whether to handle the object. Interestingly this works quite well with neural nets. Challenge #2 – performance. Could share the process of converting between all agents (aka blackboard). In a zombie game or with the Flood you could probably share. Could do some sort of dynamically shortened KR for each agent, or maybe do half of each, depending on object type, like enemies are always local. Or some enemy traits are shared and some are local. He prefers the salience threshold approach – per agent, represent only the objects that have high salience for someone. This helps us to do load-balancing.

Limitations of target lists. One, they don’t do relational information well. Where does “behind” live? There’s also the wholes and parts representation – a car’s wheel, a weapon in a guy’s arm, a mob of guys? Could fall back on the old semantic net paradigm, but Damian claims it’s representational wankery, this relationship graph. Maybe we could do lazy representation? Fixate on a guy, instantiate the data tied to him, and as attention moves that representation collapses. Again, use salience to determine fidelity.

Peter steps up. First, it’s perfectly fine to have different kinds of representations of different things. Maybe spatial, maybe long term, maybe group. Don’t forget to have clear boundaries and know how to convert between them. Plus, where possible, it’s not duplication to represent things separately in your simulation. Remember, too, it’s not just a collection of facts, KR requires navigation and algorithms to manipulate the data. Also, KR is an old AI tech, so we can use old AI research that robotics has thrown out.

Predicate AI Knowledge. A list of predicates, basically a list of facts. Plus a collection of algorithms that checks for truths, or a formalism to do searches. And well-defined external calls to handle things like distance checks well. Gives you a more powerful way to prove out pre-conditions. Ex: where attacking requires querying the gun and finding the weapon is out of range, so it would normally close distance. It’s declarative, implies a depth-first search query to prove, and we can backtrack on failure. So we can make better queries when out of range. Be more highly expressive. Essentially a declarative scripting language. At the price of efficiency. To make it efficient, don’t compile dynamic problems, aka game situations. Could still compile the domains and map arbitrary names to C++ enums, pre-allocate memory, just have to deal with the depth-first search. Could separate into a thread, and make it interruptible by tracking the current path so that you don’t have to restart every time. Advantages: can make up knowledge modules, well-specified, clean interface, capture knowledge history and remember, and do unit testing on your captured reasoning. Can reproduce the situation the AI is dealing with outside of the game.
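
(A toy version of the idea – facts as triples, and a conjunction query proved depth-first with backtracking over variable bindings. The representation is my own, much simpler than what Peter described.)

// Toy predicate knowledge base: facts are (predicate, subject, object) strings,
// and a query is a conjunction of patterns proved depth-first with backtracking
// over variable bindings ("?X"-style variables). Purely illustrative.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct Fact { std::string pred, subj, obj; };
using Bindings = std::map<std::string, std::string>;

static bool IsVar(const std::string& t) { return !t.empty() && t[0] == '?'; }

static bool Unify(const std::string& pattern, const std::string& value, Bindings& b) {
    if (!IsVar(pattern)) return pattern == value;
    auto it = b.find(pattern);
    if (it != b.end()) return it->second == value;
    b[pattern] = value;
    return true;
}

// Try to prove goals[i..] against the fact list, backtracking on failure.
static bool Prove(const std::vector<Fact>& kb, const std::vector<Fact>& goals,
                  size_t i, Bindings& b) {
    if (i == goals.size()) return true;
    for (const Fact& f : kb) {
        Bindings trial = b;   // checkpoint for backtracking
        if (Unify(goals[i].pred, f.pred, trial) &&
            Unify(goals[i].subj, f.subj, trial) &&
            Unify(goals[i].obj, f.obj, trial) &&
            Prove(kb, goals, i + 1, trial)) {
            b = trial;
            return true;
        }
    }
    return false;
}

int main() {
    std::vector<Fact> kb = {
        {"holds", "guard", "pistol"},
        {"range", "pistol", "short"},
    };
    // "Is the guard holding some weapon ?W whose range is short?"
    std::vector<Fact> query = {{"holds", "guard", "?W"}, {"range", "?W", "short"}};
    Bindings b;
    if (Prove(kb, query, 0, b))
        std::printf("yes, with ?W = %s\n", b["?W"].c_str());
    return 0;
}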

On to a different representation: considering Neverwinter Nights companions and understanding the player. Peter claims that we can get away without really processing the full sentence. He’s looking at different contexts described with minimal language. Between humans, we are usually just disambiguating between 2 or 3 things, more a trigger of when to do it. Players share history and context. If we had a KR of intention recognition we could do the same thing. He recorded people’s interplay language. If you want the AI to open the door for you, and you say “Help” near the door, why can’t the AI figure it out? He used grammars that described rule sets. And then parsed the grammar to figure out the rules that can be started by the player at any given point. Basically a world representation of what rules govern the world. Can make the whole thing probabilistic and so can get a list prioritized with most likely. Objects can imply predictions about themselves to the AI.

Parallelism in AI: Multithreading Strategies and Opportunities for Multi-core Architectures

Julien Hamaide

Now we have only a few cores and not a lot of places to put our tasks. In not too long, we’ll have hundreds or thousands of cores – we’ll need to split tasks into smaller pieces. Is this the AI programmer’s job? Yes, we have to convert to multi-thread similar to how we converted to C++. There are two styles of architectures – homogeneous (one main memory) and heterogeneous (multiple memories, DMA across). OK, breaking up Sense, Think, Act – Think takes the least amount of CPU. Spread the Sense and Act work over multiple threads.

First, programming for multithreading is programming for cache first. Cache problems are just worse, plus several CPUs are on the same bus, so memory access, ugh. When you change one line of cache it will cause a miss on all other CPUs accessing that line. So, separate read from written data, and keep your data as small as possible – so you can fetch less. Separate your frequently read data from rare stuff – the names of entities should be in a separate cache line from position. Lay out your data in the same order you read it, and align your data to the cache. Remember to keep your data place-independent. This is all really about an optimization problem. Try and inline your classes, for example, and use the “virtual pointer” at the top to do a pointer offset to where the class is. Making your own inline virtual table. Also, SPUs can use shorts for pointer sizes, saving a lot of memory. All this makes it easy to send through DMA, avoiding multiple allocations and cache misses. Plus, it’s easy to serialize in the end. To make IDs easier, write pointer-to-ID conversion classes that call reinterpret_cast on base_pointer+ID. Something that it helps to set up front as you build your classes.
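
(A sketch of the pointer-to-ID idea – storing an offset from a block’s base instead of a raw pointer, so the data stays valid after it’s been moved or DMA’d. The types here are invented.)

// Sketch of place-independent "pointers": store an offset (ID) from a block's
// base address instead of a raw pointer, and reinterpret_cast base + ID back
// into a pointer after the block has been moved (e.g. DMA'd). Illustrative.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <new>

struct Enemy { float x, y; int health; };

template <typename T>
struct OffsetPtr {
    uint32_t id;   // byte offset from the block base, valid in any address space
    T* Resolve(void* base) const {
        return reinterpret_cast<T*>(static_cast<char*>(base) + id);
    }
};

int main() {
    // One contiguous block, as you'd lay out for a DMA transfer.
    alignas(16) char block[256];
    Enemy* e = new (block + 64) Enemy{1.0f, 2.0f, 100};
    OffsetPtr<Enemy> ref{64};

    // Simulate the block being copied somewhere else (different addresses).
    alignas(16) char copy[256];
    std::memcpy(copy, block, sizeof(block));

    // The offset still resolves correctly against the new base.
    std::printf("health = %d\n", ref.Resolve(copy)->health);
    e->~Enemy();
    return 0;
}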

Next, avoid polymorphic classes. The first data is the vtable pointer, which is specific to the executable, so it varies from main CPU to SPU. It’s not valid after the DMA transfer, unless you do extra patching. The vtable is not local to the data, so it surely raises a cache miss too. Oops.

Looking at code and data, avoid ::malloc and ::new. It locks, starving CPUs. Use thread local allocators, particularly a stack allocator that’s easy to implement lock-free. They only take an index to increment (as long as all memory is released at end of frame and can reset the index). Or could just pre-allocate. Or you could just do local variables.
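
(A minimal sketch of a lock-free frame allocator along these lines – one pre-allocated buffer, one atomic index to bump, reset at end of frame. Sizes and names are illustrative.)

// Sketch of a lock-free frame allocator: a pre-allocated buffer and a single
// atomic index to bump; everything is thrown away at end of frame by
// resetting the index. Sizes and names are illustrative.
#include <atomic>
#include <cstddef>
#include <cstdio>
#include <vector>

class FrameAllocator {
public:
    explicit FrameAllocator(size_t bytes) : buffer_(bytes), offset_(0) {}

    void* Allocate(size_t bytes) {
        size_t aligned = (bytes + 15) & ~size_t(15);            // 16-byte align
        size_t start = offset_.fetch_add(aligned, std::memory_order_relaxed);
        if (start + aligned > buffer_.size()) return nullptr;   // out of space
        return buffer_.data() + start;
    }
    void ResetForNextFrame() { offset_.store(0, std::memory_order_relaxed); }

private:
    std::vector<char> buffer_;
    std::atomic<size_t> offset_;
};

int main() {
    FrameAllocator alloc(1024);
    float* scratch = static_cast<float*>(alloc.Allocate(4 * sizeof(float)));
    if (scratch) { scratch[0] = 1.0f; std::printf("got %p\n", (void*)scratch); }
    alloc.ResetForNextFrame();   // all frame allocations released at once
    return 0;
}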

So should we use locks? Yes – avoid lock-free, at least the first time. Lock-free algorithms are not easy to make, and will break all the time. Heavily depends on architecture. First try locks, it’s not always worse performance. Choose the best places to go lock-free. Even more, it’s just not fair to other threads – with a lock at least you’re consistent. Some collections are simple and worth the try to do lock-free: priority queue, stack, FIFO list.

OK, some high level ways to set this all up. 4 proposals. First, double buffering – all tasks in parallel with no dependency stall, because you don’t want any task to wait. Each task can only read from the previous frame’s results – so there’s no scheduled order. Last, each task can only write to itself, for the same reason. Otherwise you might get stomped. Thus, double buffered. Only data read from outside must be double buffered. But keep it cache friendly, keep read and written data separate, and order it smart. Switching buffers must be viewed as an atomic operation on all entities. He defines linearizability – the current buffer can be switched when you have linearizability points. So all changes appear immediately, nothing changes while the swap is going on.
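
(A sketch of the double-buffering scheme – each task reads only last frame’s buffer and writes only its own slot in the current one, with the swap as the single synchronization point. My own simplification.)

// Sketch of the double-buffered update: every task reads only last frame's
// state and writes only its own entry in the current buffer, so tasks can run
// in parallel with no locking; the buffer swap is the one synchronization point.
#include <cstdio>
#include <utility>
#include <vector>

struct AgentState { float x = 0.0f; };

int main() {
    std::vector<AgentState> buffers[2] = {
        std::vector<AgentState>(3), std::vector<AgentState>(3) };
    int read = 0, write = 1;

    for (int frame = 0; frame < 2; ++frame) {
        const std::vector<AgentState>& prev = buffers[read];
        std::vector<AgentState>& next = buffers[write];
        // This loop body could run on any thread, one agent per task:
        // it reads only prev (shared, immutable this frame) and writes
        // only next[i] (owned by this task).
        for (size_t i = 0; i < prev.size(); ++i)
            next[i].x = prev[i].x + 1.0f;
        std::swap(read, write);   // the "linearizability point" at frame end
    }
    std::printf("agent 0 at x = %.1f\n", buffers[read][0].x);
    return 0;
}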

His second proposal is a message system. Because you can’t write to other agents’ spaces, the only way to communicate is to use messages. But we can only dispatch at one point in the beginning of the next frame. So you can get two competing messages, and thus you’ll need a picker for each message that can deterministically pick a message to handle, such as a distance comparison. It’s easy to implement – use a lock-free linked list for inserting sent messages, and keep it thread safe by doing it all at once on message update.
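
(A sketch of the message idea – senders post during the frame, everything is dispatched at one point at the start of the next frame, and a deterministic picker resolves competing messages. I’ve used a mutex where he described a lock-free list.)

// Sketch of a per-agent message inbox: senders append concurrently during
// the frame, and all messages are dispatched at one point at the start of the
// next frame, with a deterministic "picker" choosing between competitors.
#include <algorithm>
#include <cstdio>
#include <mutex>
#include <vector>

struct Message { int senderId; float distance; };

class Inbox {
public:
    void Post(const Message& m) {           // called from any thread mid-frame
        std::lock_guard<std::mutex> lock(mutex_);   // a lock-free list also works
        pending_.push_back(m);
    }
    void Dispatch() {                       // called once, at frame start
        if (pending_.empty()) return;
        // Deterministic picker: closest sender wins, ties broken by id.
        auto best = std::min_element(pending_.begin(), pending_.end(),
            [](const Message& a, const Message& b) {
                if (a.distance != b.distance) return a.distance < b.distance;
                return a.senderId < b.senderId;
            });
        std::printf("handling message from sender %d\n", best->senderId);
        pending_.clear();
    }
private:
    std::mutex mutex_;
    std::vector<Message> pending_;
};

int main() {
    Inbox inbox;
    inbox.Post({1, 5.0f});
    inbox.Post({2, 3.0f});
    inbox.Dispatch();   // deterministically picks sender 2
    return 0;
}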

Third, a job scheduling system – using a command pattern on priorities to create a “future object”. A command object is just a construct that defines what a task is trying to do. The CommandInterface defines a CommandObject that you can SPUify. The catch is tasks can have dependencies and block each other. Here, future objects act as temporary query holders that contain preliminary data, and prevent you from queuing off the same task multiple times. It’s like an early “yes”. You can even choose to have the future object block until the query is done, to debug in single thread mode.
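
(A sketch of the future-object idea, built here on std::packaged_task purely for illustration – the query runs elsewhere and the AI holds a handle it can poll or block on for single-threaded debugging.)

// Sketch of the "future object" idea: wrap a query as a command, hand it to a
// worker, and get back a handle you can poll or (for single-threaded debugging)
// block on. Built on std::packaged_task purely for illustration.
#include <cstdio>
#include <future>
#include <thread>

struct PathQuery { int from, to; };

int FindPathLength(const PathQuery& q) {   // stand-in for the expensive work
    return (q.to - q.from > 0) ? (q.to - q.from) : (q.from - q.to);
}

int main() {
    PathQuery q{2, 9};
    std::packaged_task<int()> task([q] { return FindPathLength(q); });
    std::future<int> result = task.get_future();   // the "future object"
    std::thread worker(std::move(task));           // a job queue would own this

    // The AI can carry on and poll, or block here when debugging single-threaded.
    std::printf("path length = %d\n", result.get());
    worker.join();
    return 0;
}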

Last, an asynchronous request system. There are constant AI queries about the world. When queries are multithreaded, the AI is pushing them all the time, while code waits for the answer, even though we always know we need the answer. Yes, we can always work around the data, but we’re always one frame behind. This approach registers requests with a frame frequency, and you get a request object that you can treat as the result value. You don’t need to write special code for when the data is not available, it’s just always being processed.
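
(A sketch of a registered request refreshed every N frames – the AI reads the last computed answer instead of issuing a query and waiting. Names and refresh policy are my own.)

// Sketch of a registered, refreshed-every-N-frames request: the AI reads the
// last computed answer instead of issuing a query and waiting, accepting that
// the data may be a frame or two old. Names and policy are invented.
#include <cstdio>

struct CoverRequest {
    int refreshEveryNFrames;
    int framesUntilRefresh = 0;
    float bestCoverScore = 0.0f;   // last computed answer, always readable

    void Update(int frame) {
        if (framesUntilRefresh-- <= 0) {
            framesUntilRefresh = refreshEveryNFrames;
            // In a real system this would be queued to another thread;
            // here we just fake the expensive analysis.
            bestCoverScore = 0.1f * (frame % 10);
        }
    }
};

int main() {
    CoverRequest req{5};            // refresh the cover analysis every 5 frames
    for (int frame = 0; frame < 12; ++frame) {
        req.Update(frame);
        // The AI just reads the latest result, never waits on it.
    }
    std::printf("best cover score = %.1f\n", req.bestCoverScore);
    return 0;
}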

OK, so which techniques does he recommend for the sensor systems? Sensor systems are easy to thread and have few dependencies. For pathfinding, he likes a job queue, but for cover analysis and terrain strategies he recommends asynchronous requests.

There are still a lot of improvements coming for multithreaded code; languages and architecture will be changing for at least 5 years. Transactional memory, for example, is a new kind of memory that makes lock-free writes easier. Also, AI is not infinitely divisible. Everything we have is just the same with higher precision. So it’s a great opportunity to try new things with AI with the extra tasks we will get. Animation will be able to get subtle, higher quality, more emotion. But we will run into memory data limits fast. We’ll have the CPU for speech synthesis and possibly recognition. You can do something pretty good nowadays with speech synthesis. And it will be easier to do tons of entities, like crowds. Who knows what else we can try?

Alex asked him, would you sacrifice determinism for better performance? Not deterministic does not mean not predictable. We want deterministic. It’s a real challenge for multi-threaded AI. We depend on determinism for designer control. We don’t have to lose the determinism. 8 core demos may be 8x faster, but the simulation doesn’t run correctly. So don’t sacrifice if you don’t have to. References Herb Sutter at www.gotw.ca, who’s doing new C++ help for multi-threading programmers, as well as aigamedev.com and the SPU programming talk from Insomniac Games at GDC 2008.

Question: Experience with lock-free? Not portable between consoles, really hard to unit test. Can take years to find the issues, and if it’s difficult to reproduce once… plus, you can’t attach the debugger to catch the error in atomic code. Question: Do we want 100% utilization? Consumers want us to use every ounce of power they bought – always more and more, so we should target that. Order your job priorities based on distance from the player. Question: GPUs? Much easier, solved the problem for years with better fixed definitions. Can use the GPU some for AI – rasterizing the nav mesh, for example. But he’s anticipating unified architectures. Question: Build it into the architecture or specialize the solution per system? His solution is to make an external set of multi-threading systems anyone can use and schedule through. Question: Debugging CPU? PIX. Otherwise it’s really rare.


5 thoughts on “GDC 2009: Tuesday”

  1. Great report! Didn’t even notice you typing away during the summit 🙂

    One addition about unit testing, indeed I don’t “find it helps catch as many bugs” but it does help you build better modular bug-free code in the first place 🙂

    Alex

  2. Wow.. didn’t realise you were blogging :-).

    Anyway apologies about lecture pacing; totally skipped the bit where I tell everybody to take away the ideas and look at the slides later :-).

    To answer some questions, the history behind the stack approach is partly to aid prioritisation but also, from an engineering perspective, to prevent namespace pollution.

    I’ve inherited many code bases over the years, and all generally have some notion of behaviour modification by member variable parameterisation.

    The problem I found is people ended up storing multiple copies of the same information, plugged into information that wasn’t current and in some cases creating contradictory information. It made debugging very difficult and became the major engineering problem that needed addressing.

    Dynamically allocated tasks mean that information exists only as long as it is needed thereby removing the temptation :-). By moving parameterisation inside dynamically allocated tasks, memory footprint dropped significantly, previously messy classes became cleaner and information that you can access is always live.

    If you want more information on this approach I’d recommend reading AI GP Wisdom 4: section 3.3 🙂

    In terms of why we keep the top most goal around on a thread (I think that’s what you asked) this is to maintain the parameterisation of that goal, be that scripted or game assigned. So kill flavoured with don’t use cover and don’t advance behaviour is still remembered.

    Anyway keep up the good work (I missed Pete’s lecture so it was great to get the jist :-)). Sorry as well for not writing sooner but I’ve just returned and am now playing catch up.

    Great, I’ve created an essay again 🙂

    Brett

