01 Mar 2011

GDC 2011: AI SUMMIT

I spent the entire day in one room in Moscone West, the home of this year's AI Summit. I may be a designer, but I still have some years of programming experience under my belt: it's where my career started, and obviously I still do a fair amount of it for my independent projects. Plus I think it's important for designers to have a working knowledge of other disciplines... but I'll leave the details of that for a planned post-GDC article/rant.

Component Architectures For AI

The summit opened with Brett Laming (Rockstar Leeds), Joel McGinnis (CCP N.A.), and Alex Champandard (AiGameDev.com) discussing component architectures. This is an approach that favors breaking AI problems down into lightweight, encapsulated pieces, and recombining and parameterizing them as necessary. Components might be things like:

  • BehaviorTree
  • Perception
  • MoveController
  • Targeting
  • HeadTracking

Those examples are intrinsic to an AI, but it also makes sense to write components for external triggers as well, for example:

  • TriggerVolume
  • CoverPoint
  • Targetable
  • FocalPoint

Each component implements a narrowly-defined, encapsulated, lightweight piece of logic. Components share an I/O architecture which was likened to elements in an electrical circuit: data comes in, is transformed, and then a result is spat out. It's reminiscent of the recent trend toward data-oriented design, where logic exists primarily to transform data. In this implementation, the data tells us things about how the AI should behave: where to move, what animation to play, whether or not to fire a weapon, what state to switch to.

This is better, they say, than the more traditional way of doing things, which has commonly been inheritance-driven and resulted in large, complex class hierarchies with bloated high-level parent classes. There you often run into the catch-all CObject class which implements a bunch of data and logic which some of its child classes simply don't need. That's wasteful and confusing to work with.

With a component architecture you gain a lot of flexibility. You can swap out parts of behaviors to get very different "feels" for new enemy classes, with very little work and without rearchitecting the system. This gives AI programmers - and designers, if your component architecture is data-driven rather than code-driven - the ability to iterate on behaviors quickly and safely, even late into production.

And you can sort, filter, and query your scene based on components - iterate over all Targetable entities, for example - without having to ask any questions of the entity itself, or even be aware of it: you simply look at all the component instances, which ideally have registered themselves with a central manager for fast lookup. You can even use this sorting, with an appropriately-written manager, to batch and parallelize component operations: for example, vectorizing multiple ThreatComponent calculations into a single SIMD instruction.
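To make that concrete, here's a tiny sketch of what such a setup might look like. To be clear, every name in it (Component, ComponentRegistry, the threat field) is my own invention for illustration, not anything shown in the session:

    #include <vector>
    #include <algorithm>

    // Narrow, encapsulated, lightweight piece of logic with a circuit-like
    // update: data comes in, is transformed, and a result goes out.
    struct Component {
        virtual ~Component() {}
        virtual void update(float dt) = 0;
    };

    // Central manager: component instances register themselves here, so the
    // scene can be sorted/filtered/queried by component type without asking
    // the owning entities anything.
    template <typename T>
    struct ComponentRegistry {
        static std::vector<T*>& all() {
            static std::vector<T*> instances;
            return instances;
        }
    };

    struct Targetable : Component {
        Targetable()  { ComponentRegistry<Targetable>::all().push_back(this); }
        ~Targetable() {
            auto& v = ComponentRegistry<Targetable>::all();
            v.erase(std::remove(v.begin(), v.end(), this), v.end());
        }
        void update(float) override {}
        float threat = 0.0f;
    };

    // "Iterate over all Targetable entities" becomes a flat loop over the
    // registry -- which is also the natural hook for batching/SIMD later.
    void decayAllThreats() {
        for (Targetable* t : ComponentRegistry<Targetable>::all())
            t->threat *= 0.99f;  // placeholder per-frame decay
    }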

Visualizing AI Debug Info

The second session was all about debugging AI behaviors. This is a tricky class of debugging problem because there's generally not a specific point in code where things are failing: rather, it's a confluence of factors creating a specific state in which some logic no longer produces the desired result, and sometimes this can be very subtle indeed. Also, that state is often quite complex and doesn't lend itself well to textual inspection, e.g. via a standard debug log.

Thus, Michael Dawe (38 Studios, Big Huge Games), David "Rez" Graham (EA), and Brian Schwab took to the stage to show off some of the visual tools they've concocted to solve this problem and - this being GDC - to impart some best practices.

The question at hand is: how much is your time worth? Sure, you could dig through the mountain of data in your debugger, and eventually you could figure out what's going on and put in a fix. Or you could look at a picture of the problem, which is much easier and faster to parse and understand. Additionally, you're not always going to have access to your debugger, or debug symbols, because with AI you'll often be looking at failure cases on a designer's machine or on a test kit in the QA room. Wouldn't it be nice to be able to pull up useful debug information on the spot, from within the game itself?

Of course, building debug visualizations into the game isn't a new idea. The focus of this session, however, was how to build debug visualizations right. And the key takeaways in that regard were:

  • Build a robust library of basic shapes: lines, arcs, circles, and polygons
  • Allow easy filtering of different types of debug info
  • Keep history for everything!

History was an oft-repeated point. Being able to step back in time and look at the permutations which got the AI into its current state often gets you much closer to the actual problem than inspecting the current state in isolation. In every use case presented, history was limited to 100 or so frames, which was sufficient for nearly all problem cases.
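Here's roughly how I'd structure that history myself: a ring buffer of per-frame primitive lists, capped at ~100 frames. DebugPrim and DebugDrawHistory are names I just made up; the presenters' actual libraries surely look different:

    #include <cstddef>
    #include <deque>
    #include <vector>

    struct DebugPrim {         // one basic shape: line, arc, circle, polygon...
        int   channel;         // filter category, e.g. PERCEPTION vs. PATHING
        float x0, y0, x1, y1;  // enough for a line; real shapes carry more data
    };

    class DebugDrawHistory {
    public:
        explicit DebugDrawHistory(std::size_t maxFrames = 100)
            : maxFrames_(maxFrames) {}

        void beginFrame() {
            frames_.push_back({});
            if (frames_.size() > maxFrames_) frames_.pop_front();
        }
        void addLine(int channel, float x0, float y0, float x1, float y1) {
            frames_.back().push_back({channel, x0, y0, x1, y1});
        }
        // Step back in time: 0 = current frame, 1 = one frame ago, etc.
        const std::vector<DebugPrim>* frame(std::size_t framesAgo) const {
            if (framesAgo >= frames_.size()) return nullptr;
            return &frames_[frames_.size() - 1 - framesAgo];
        }
    private:
        std::size_t maxFrames_;
        std::deque<std::vector<DebugPrim>> frames_;
    };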

A point which was not raised explicitly, but which I felt was an undercurrent through the whole session (just based on the patterns I saw in the examples), is that visual design matters. A lot. It's not so much about making things look pretty - although well-designed debug visualizations do, in fact, look quite attractive - but it's about making them readable and intuitive. Good visual design is sometimes considered the domain of artists, but I don't think that's true: if you look outside of games you'll discover that visual design is in fact an entire professional field unto itself, and a fairly technical one at that. I see no reason why an AI programmer wouldn't enjoy and appreciate the insights in, say, Tufte's "Envisioning Information" just as much as an artist or graphic designer. And it's just such insights that are needed to produce truly effective debug visualizations.

An interesting question was raised during the Q&A at the end of the session: how do these guys have the time to do all this? Certainly many of us have the experience of working against very tight time constraints, in environments where work that's not directly demonstrable to the publisher, or of immediate and indispensable necessity, simply doesn't make it onto the schedule. The answer was fascinating: all three of the presenters are, quite simply, in charge of their own schedules at their respective studios.

Would that it were so for all of us. :P

Navigation: It Isn't Solved Yet

The third session of the day dealt with AI navigation, a problem that's only getting more complex as game worlds get bigger and more detailed.

First, James Anhalt (Blizzard) described the navigation approach used on StarCraft 2. His team employed Constrained Delaunay Triangulation (I'm gonna have to Google that...) to build a navmesh at load-time, based on the locations of cliffs and buildings in the scene.

Mobile units were not considered by the navmesh builder. They were addressed not as a pathfinding problem, but as a steering problem. Units in SC2 employ a variant of Boids steering coupled with a "sight horizon" analysis: basically, plot out all the obstacles on the horizon from the unit's point of view, then aim for the gaps. Units could also push and be pushed according to a set of rules, like "most health wins" or "idling units always lose". This combined approach enables units in SC2 to steer around even concave obstacles, deal effectively with crowds, and flock realistically, and is more performant than spamming a ton of A* checks or polling for lots of world state information.
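Here's my rough reading of the "aim for the gaps" part, sketched out. The projection math and every name here are mine, not Blizzard's actual implementation, and I'm ignoring angle wrap-around for brevity:

    #include <array>
    #include <cmath>
    #include <vector>
    #include <algorithm>

    struct Arc { float lo, hi; };  // blocked angular interval, in radians

    // Project each circular obstacle (x, y, radius) onto the unit's "sight
    // horizon" as a blocked arc, then head for the gap nearest the goal.
    float pickHeading(float ux, float uy, float goalAngle,
                      const std::vector<std::array<float, 3>>& obstacles) {
        std::vector<Arc> blocked;
        for (const auto& o : obstacles) {
            float dx = o[0] - ux, dy = o[1] - uy;
            float d = std::sqrt(dx * dx + dy * dy);
            if (d <= o[2]) continue;  // overlapping the obstacle; skip it
            float center = std::atan2(dy, dx);
            float half = std::asin(std::min(1.0f, o[2] / d));
            blocked.push_back({center - half, center + half});
        }
        // If the straight shot at the goal is unblocked, just take it.
        bool clear = true;
        for (const Arc& a : blocked)
            if (goalAngle > a.lo && goalAngle < a.hi) { clear = false; break; }
        if (clear) return goalAngle;
        // Otherwise aim for the blocked-arc edge (gap boundary) nearest the goal.
        float best = goalAngle, bestDiff = 1e9f;
        for (const Arc& a : blocked) {
            float edges[2] = {a.lo, a.hi};
            for (float edge : edges) {
                float diff = std::fabs(edge - goalAngle);
                if (diff < bestDiff) { bestDiff = diff; best = edge; }
            }
        }
        return best;
    }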

Alexander Kring (Nihilistic Software) then discussed his team's approach to the problem, which involves partitioning the world into manually-defined "clusters" (rather like setting up vis portals in Radiant or zones in Unreal), then solving paths in two phases: an abstract phase which uses entire clusters as single nodes in an A* graph, then a local phase which builds an A* path from the current position just to the nearest cluster boundary on the abstract path.

This approach yields a couple of benefits. First, CPU time and memory cost are reduced because paths are only calculated and stored to the limit of the current cluster. Because the longest possible path is simply the longest path through any single cluster, and we know about all our clusters up front, it becomes a simple matter to allocate memory for the path planner in advance and avoid dynamic allocations and fragmentation issues down the road. And because the pathfinder and the steering component are two different things, they can be parallelized effectively: the steering component can be guiding the unit through his current cluster while the pathfinder is simultaneously validating and updating the path.
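Structurally, I picture the abstract phase looking something like this. Cluster and planAbstract are my inventions, and I've swapped in a plain breadth-first search for the A* since all cluster hops cost the same in this toy:

    #include <algorithm>
    #include <queue>
    #include <vector>

    struct Cluster { std::vector<int> neighbors; };  // hand-authored adjacency

    // Phase 1 (abstract): route from cluster to cluster, treating each whole
    // cluster as a single node. Phase 2 (local, not shown) would then A* only
    // as far as the next cluster boundary on this route -- which is what
    // bounds the stored path length and lets you preallocate planner memory.
    std::vector<int> planAbstract(const std::vector<Cluster>& graph,
                                  int from, int to) {
        std::vector<int> prev(graph.size(), -1);
        std::queue<int> open;
        open.push(from);
        prev[from] = from;
        while (!open.empty()) {
            int c = open.front(); open.pop();
            if (c == to) break;
            for (int n : graph[c].neighbors)
                if (prev[n] == -1) { prev[n] = c; open.push(n); }
        }
        std::vector<int> route;
        if (prev[to] == -1) return route;  // unreachable
        for (int c = to; ; c = prev[c]) {
            route.push_back(c);
            if (c == from) break;
        }
        std::reverse(route.begin(), route.end());
        return route;  // {from, ..., to}
    }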

Finally, Nathan Sturtevant (consultant, Dragon Age: Origins) discussed BioWare's approach on that game. Much of it was similar to the previous presenters' material, but two things stood out.

First was the idea of "recursive graph abstraction" (my attempt at giving an otherwise-elusive concept some kind of name). This builds upon Kring's presentation of the abstract vs. local graph by suggesting that you could actually wrap an even more abstract graph on top of that; or, going the other way, you could have a more granular graph beneath the "local" graph. At this point, the terms "abstract" and "local" become relative: "local" is what you're steering on, and "abstract" is what you're planning on. To me this seemed a lot like tessellation: you're basically planning at some low tessellation level N, and steering on N+1.

His second interesting contribution was using differential heuristics (ALT) for unrolling certain kinds of pathing scenarios into 1D representations better suited to the kind of heuristic-based traversal that A* prefers. The example case was a floor of a tower, shaped like a nearly-closed donut. The start point was on one side of the split, and the end point just a few feet away, but on the other side of the split. With a basic distance heuristic, this is pretty much the worst case for A*: the goal is the closest node, but the actual next node takes you further from the goal for much of the duration of the path (since you have to go all the way around the donut). Differential heuristics effectively "unwrapped" the donut into a straight line, then did the A* traversal in that space, where the distance heuristic suddenly makes perfect sense. (If I wasn't typing this on an iPad I'd put together a sexy graphic to illustrate the point.)

Honestly I didn't totally follow what was going on under the hood with the differential heuristics thing, and that probably had some to do with Sturtevant running out of time at the end, so things got a bit rushed. But this is what Google is for: further reading. ;)
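For what it's worth, my best guess at the core trick (to be confirmed by said further reading): precompute true shortest-path distances from a handful of landmark nodes offline, then use the difference of those distances as the heuristic; a landmark placed past the donut's split is what effectively "unwraps" it. A tiny sketch, with my own names:

    #include <cmath>
    #include <vector>
    #include <algorithm>

    // distFromLandmark[L][n] = precomputed shortest-path distance from
    // landmark L to node n (computed offline, e.g. one Dijkstra per landmark).
    float altHeuristic(const std::vector<std::vector<float>>& distFromLandmark,
                       int n, int goal) {
        float h = 0.0f;
        for (const auto& d : distFromLandmark)
            h = std::max(h, std::fabs(d[n] - d[goal]));  // triangle inequality
        return h;  // admissible: never overestimates, so A* stays optimal
    }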

(Contraction hierarchies made it in there somewhere, too, and that sounds interesting, but got totally cut off at the end. So yeah.)

Influence Maps

These are just cool. 8)

Alex Champandard returned to the stage, along with Kevin Dill (Lockheed Martin) and Damian Isla (Moonshot Games) to discuss influence maps. The basic idea is this:

  • Break up the world into cells
  • Assign each cell an influence value
  • Propagate influence across cells
  • Make decisions based on this map

Alex actually went through his entire chunk of the session (the first third) on the assumption that we all just "got" what influence even means in this context. Maybe I'm just behind the times, but I was kinda lost for a bit, so I'll not subject you to the same fate!

A good example of influence would be the combat strength of a unit, at that unit's position. That number would propagate to neighboring cells, decaying with distance until it reaches zero. (This can be more complex, but we'll get to that in a minute.) Then a unit on the other team can query the influence value of a cell he wants to go to, and that gives him an idea how much danger he might be in from the former unit if he goes there.

The whole point of influence maps is to encode lots of data, from lots of sources, across varied and interesting spaces, into a form that's easy to reason about. So you wouldn't just propagate influence from one unit, you'd push the entire team into the map. The influence from units near each other would be summed, creating an area of very intense influence. The visualization looks exactly like a heatmap, and the underlying math is reminiscent of the calculations you might do for a 2D metaball implementation.
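As a concrete (and deliberately naive) sketch of all that, here's the sort of thing I have in mind. The grid representation and the linear falloff are my own assumptions, not specifics from the session:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct InfluenceMap {
        int w, h;
        std::vector<float> cells;
        InfluenceMap(int w_, int h_) : w(w_), h(h_), cells(w_ * h_, 0.0f) {}

        // Stamp one unit in: full strength at (ux, uy), decaying linearly to
        // zero at radius. Summing stamps from units near each other is what
        // produces those intense, heatmap-style hot spots.
        void addUnit(int ux, int uy, float strength, float radius) {
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x) {
                    float d = std::sqrt(float((x-ux)*(x-ux) + (y-uy)*(y-uy)));
                    cells[y*w + x] += strength * std::max(0.0f, 1.0f - d/radius);
                }
        }
        float at(int x, int y) const { return cells[y*w + x]; }
    };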

In fact, that underlying math is where the power of influence maps really comes to light. Summing influences and decaying them over distance and time is great, but why not take it further? Apply a threshold, and you can have a very powerful influence stop suddenly at a given range: this might represent a strong melee unit, whose combat range is necessarily limited. Instead of deriving influence from a unit's combat strength, you could derive it from the unit's projected travel time to that cell, which might be appropriate for opponent avoidance in a sports game. Or you could derive it from distance to the nearest obstruction, and you get a map that differentiates between good cover locations (sticking to walls) and "out in the open".

What do you do with this data? The most direct application is steering: an AI moves toward the neighbor cell with the highest (or lowest) value, depending on what it wants to do. In a team game, such as CTF, this is especially useful if you map negative values to one team's influence and positive values to the other team's influence. Now: want to attack? Move toward the lowest-valued neighbor cell; as you keep doing that, you're getting further and further into "enemy territory". Want to run away? Move toward the highest-valued neighbor cell; you're following your own team's influence at that point.
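Building on the InfluenceMap sketch above, that movement decision can be as simple as the following; pickNeighbor is my name, and the four-way neighborhood is my simplification:

    #include <utility>

    // Stamp one team in with positive strength and the other with negative,
    // then: attack = walk downhill, flee = walk uphill.
    std::pair<int,int> pickNeighbor(const InfluenceMap& m, int x, int y,
                                    bool attacking) {
        std::pair<int,int> best{x, y};
        float bestVal = m.at(x, y);
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        for (int i = 0; i < 4; ++i) {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || ny < 0 || nx >= m.w || ny >= m.h) continue;
            float v = m.at(nx, ny);
            if (attacking ? (v < bestVal) : (v > bestVal)) {
                bestVal = v;
                best = {nx, ny};
            }
        }
        return best;  // the cell to step toward
    }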

Kevin Dill outlined how this approach led to great emergent behaviors in Kohan, a game which was lauded for its excellent strategic AI. The best example manifested as a feint. It worked thus:

  • Enemy is looking for lowest player influence where there is also zero enemy influence.
  • This is how the enemy attacks where the player is weak and undefended.
  • Enemy streams a bunch of units toward this weak target.
  • Player notices the first arrivals and moves his forces to intercept.
  • This changes the influence map!
  • Enemies at the back of the stream re-path to the new area of less player influence.

And what did the player see?

  • Enemy attacked a weak spot.
  • Player moved to intercept.
  • Larger enemy force attacked the spot the player vacated.
  • "Enemy just faked me out! That's smart AI!"

Influence maps are a new concept to me, but they feel familiar because they look so much like the 2D metaball experiments I was doing in December, and the potential field navigation being used by the units in Fail-Deadly. I'm looking forward to doing some thought-experiments - and possibly actual experiments - with this technique in the future!

Casting Your Extras

This session was all about the fast, cheap, disposable AI you need to populate the background: pedestrians in cities, generic townspeople in an RPG, etc.

First up was Dr. Paul A. Kruszewski (GRIP), who broke the problem down into five components:

  • Flows
  • Obstacle avoidance
  • Action stations
  • Spawners
  • Brains

Flows are hand-tagged navmesh sections, such as the navmesh polys along a sidewalk. AI can slide through these areas without pathfinding, because they don't have (or need) a goal. A variant of Boids or potential fields could be employed for steering and obstacle avoidance within these constraints (or you could build an influence map ~cough~).

Action stations are locations with an associated animation, such as an ATM at which the AI plays a "withdraw money" animation. Action stations have a trigger volume, hand-authored, that intersects flows. This interaction may be shallow or deep. If a "flowing" NPC hits the trigger volume, the action station takes over his brain, pulls him off the flow, runs the animation, returns him to the flow, and releases his brain again. Action stations are generally exclusive; that is, only one NPC may use one, and if it's in use, other NPCs will ignore the trigger volume. Some action stations were tagged to queue up NPCs, so you could form a line for the ATM, or wait for a crossing signal at a crosswalk.
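My guess at the shape of that handoff in code, with Npc, ActionStation, and the fixed animation timer all being my inventions:

    struct Npc { bool brainSuspended = false; };

    class ActionStation {
    public:
        // Called when a "flowing" NPC enters the hand-authored trigger volume.
        bool tryClaim(Npc& npc) {
            if (occupant_) return false;  // exclusive: in use, others ignore it
            occupant_ = &npc;
            npc.brainSuspended = true;    // the station takes over the brain
            timeLeft_ = animLength_;
            return true;
        }
        void update(float dt) {
            if (!occupant_) return;
            timeLeft_ -= dt;              // the "withdraw money" animation plays
            if (timeLeft_ <= 0.0f) {
                occupant_->brainSuspended = false;  // released back onto the flow
                occupant_ = nullptr;
            }
        }
    private:
        Npc*  occupant_   = nullptr;
        float animLength_ = 3.0f;  // seconds; per-station data in a real game
        float timeLeft_   = 0.0f;
    };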

Kruszewski ran out of time partway through his segment and didn't get a chance to go over spawners or brains in detail, suggesting only briefly that spawners can be used to ensure the desired population density in gathering areas (such as a park), and that brains are basically just the (simple!) behavior tree component of the AI.

Ben Sunshine-Hill (University of Pennsylvania) was up next, and he really has the best name ever. :D

His primary topic was alibi generation, which addresses the issue where a player follows one of your background AI around for long enough that it becomes apparent that the AI is really just wandering aimlessly (if gracefully). By way of example, he showed a video of his pursuit of an armored car in GTA4. His expectation was that at some point it would stop to drop off or pick up money, during which time it would be vulnerable. He discovered, to his dismay, that armored cars in GTA4 have far less sophisticated behavior.

Alibi generation kicks in based on a heuristic that says, "This AI is about to start looking stupid." Ben glossed over the details here, presumably leaving them as an exercise for the audience. I imagine you might build up an "interest" score as long as the AI is in the player's FOV and decay it when out of view, and if it rises above a threshold then generate an alibi for that AI.
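Sketching that speculation out (to be clear, this is entirely my invention, not anything from Ben's talk):

    #include <algorithm>

    struct AlibiCandidate {
        float interest = 0.0f;
        bool  hasAlibi = false;
    };

    void updateAlibiHeuristic(AlibiCandidate& ai, bool inPlayerFov, float dt) {
        const float growRate = 1.0f, decayRate = 0.5f, threshold = 10.0f;
        if (inPlayerFov)
            ai.interest += growRate * dt;  // the player is watching this AI
        else
            ai.interest = std::max(0.0f, ai.interest - decayRate * dt);
        if (!ai.hasAlibi && ai.interest > threshold)
            ai.hasAlibi = true;  // about to look stupid: generate an alibi now
    }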

So what's an alibi? Simply put, it's a goal: go to the restaurant, go to a house (the player will probably project "home"), etc.

How do we choose that goal? We could pick some random place, but an observant player will recognize the randomness pretty quickly; for a truly convincing simulation we need a better solution.

What if we could fully simulate every AI in the entire city, simultaneously and in depth, all the time? That would give us very realistic behavior: every AI could plan paths from goal to goal that represent a sensible daily schedule. But of course this is computationally impossible: it's just too many AI and too deep a simulation.

But what if we could somehow turn all that computation into data? Then the problem is reduced to little more than a lookup.

Ben's proposed solution works like so:

  • Offline: Run a full simulation of all NPCs for some period
  • Save off a bunch of probabilities that describe relationships between goals
  • At runtime, use the saved probabilities to influence random behaviors

The probability generation encodes things like, "There's a 70% chance that when leaving a restaurant, an AI will go to a movie theater." We can save off as large or as small a set of these as is necessary to describe the action-space of our game.

At runtime, we apply our heuristic to figure out which AI need an alibi generated and when. To generate an alibi, we pick the first goal at random, then weight subsequent goals based on our saved probabilities. Now if the player follows this AI around, he'll see a sequence of actions which makes sense and is informed by the results of our offline full simulation.
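The lookup itself could then be as cheap as sampling a row of the saved table. The row-per-goal representation here is my assumption about how you might store it (e.g. transitions[RESTAURANT][MOVIE_THEATER] = 0.7):

    #include <random>
    #include <vector>

    // transitions[current][next] = probability saved off from the offline sim.
    int nextGoal(const std::vector<std::vector<float>>& transitions,
                 int currentGoal, std::mt19937& rng) {
        const std::vector<float>& row = transitions[currentGoal];
        std::discrete_distribution<int> pick(row.begin(), row.end());
        return pick(rng);  // weighted by what the full simulation would have done
    }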

I'm not sure if I'm convinced this level of development effort is necessary for background AI in most games, but at the very least it's a fascinating academic study.

Conclusion: Day One

This was a very informative day for me and somewhat outside my usual fare, being more technical than I'm used to. It was a nice change of pace. ;)

Among things that are less good, however: blogging this much on an iPad. I thought my iPad would be great for GDC on account of its power and portability, and for taking notes it does fit the bill rather handily, but for this amount of writing I'm finding myself fighting the auto-correct way, way too often ("Wasserstein" when I want "was", I mean, come on people, WTF) and managing any punctuation other than a period or comma, when it appears as frequently as it has in this piece, is just a chore.

So... I'm probably not going to write up the other sessions this week in quite as much detail as I did today. Perhaps I'll do some followup posts when I get home to a proper computer.

Catch y'all tomorrow. :)
