31 Jan 2014
January 2014 - The Animation Project

This month's theme is Respawn, and I'm choosing to take it in a somewhat metaphorical sense. This is the start of year two of 1GAM, so it seems a logical time to revisit what I was doing at this time last year.

I actually ended up with two projects this month, because one thing I was doing last January was throwing together a quick and dirty web site to host my 1GAM projects. Since I was only figuring on this being a one-year project at the time, the site really wasn't designed to accommodate another year. But here I am again, so I had to make some behind-the-scenes changes to make this rig a little less rickety.

Web site upgrades aside, I went back and thought about what I was trying to learn last January, and used that to decide what I'd like to return to and focus on, and to see how different things feel now with a year of experience under my belt. What I ended up with is a tech demo that I'm just referring to as The Animation Project.

January 2013 - X.T.

So last year, I'd just come down from the celebration of my first NaNoWriMo win in November 2012, and I heard about this 1GAM business in late December. I'd been toying with some game development off and on over the preceding year, but the last game I'd completed and distributed to others to play had been back in high school. I was in my thirties. That's a lot of no-game-finishing. If NaNoWriMo worked for me, maybe 1GAM would too. So off I went.

I'd experimented with learning Unity off and on, so I decided that would be a good place to start 1GAM. The results and post-mortem writeup can be found right this way, but looking back I remembered that among the many new things I had to learn quickly, I had really focused on how to get a couple of specific things working in that project:

One of my non-Unity projects leading into 1GAM had involved writing my own 2D engine and toolset in Java sitting on top of LWJGL, and a major basic component of game development I had learned as part of that process was the various pathfinding algorithms and techniques. It seemed logical that I needed to get my head wrapped around that early in Unity.
Mecanim was the hot new technology in Unity around that time, and it seemed prudent to dive straight into it as my primary approach to animation in Unity.

Revisiting The Past

So, with all that in mind, I decided on my ingredients for January 2014:

A few months ago, much of Unity's NavMesh capability was unlocked in the free version. This is definitely something I need to get experience with, but haven't had time for in recent months. Specifically, I decided I wanted to get click-to-move ARPG-style NavMesh navigation coexisting with direct player control. I suspect playing the PC and console versions of Diablo is what originally got me thinking about this - the two schemes aren't terribly friendly to each other, so integrating both should be an interesting challenge.
I've experimented with varying levels of Mecanim complexity, but this time I decided it was time to bite the bullet, drop a little cash on some good animations, and see what I could accomplish with a rich animation set. For those interested, you can accumulate an impressive collection of multipurpose Mecanim-compatible animations from the Asset Store for well under $100. Without the burden of struggling to find or create good animations, I could focus on how to assemble a rich state machine and connect it to the game world. Mecanim had also recently gained some features (animation events and triggers) that I suspected were going to significantly change how I worked with it.
New Rules
1GAM Year Two has some new rules, one of which is a shift to allowing game-related projects that aren't full games on their own to count as monthly projects. With that in mind, I decided not to even attempt to put together a full playable game this month. The goal was a tech demo, which allowed me to focus all of my efforts on learning.

NavMesh

This was definitely the easy part. Sure enough, Unity's NavMesh tools are quite solid, and getting the basics working was very simple. This was top priority, since NavMesh would also allow non-player entities to navigate intelligently, but the trick here was getting NavMesh to happily coexist with direct player controls.

Should you ever need to get this working, in my experience the key was to totally ignore NavMesh and build the whole control system on direct user control first. Get everything working there, including the hooks into animation and collisions and everything, before you even think about NavMesh. Once all of that is working, start setting up a NavMesh system as usual (as luck would have it, Unity even covered NavMesh in a Live Training session this month, in case you need an intro).

The trick is then to use the NavMeshAgent solely as a logic component. The NavMeshAgent has a couple of properties that for whatever reason aren't available in the inspector - updatePosition and updateRotation. Set both of those to false, and the NavMeshAgent will no longer do any navigation, but will still perform all of its usual calculations as if it were moving around normally. (Note that setting the NavMeshAgent's Speed and Angular Speed to zero will NOT accomplish this - the calculations are based on the agent's speed, so a speed of zero will make it think it can't navigate successfully.) For this next part to work, you'll need to normalize your player input so diagonal movement is handled appropriately, but you were going to do that anyway, right?
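To make that concrete, here's a minimal sketch of the setup, assuming a Unity C# script. The updatePosition and updateRotation properties are real NavMeshAgent members; the class name is just illustrative (and in newer Unity versions NavMeshAgent lives in the UnityEngine.AI namespace):

```csharp
using UnityEngine;

// Sketch: turn a NavMeshAgent into a pure pathfinding "brain".
// It keeps computing paths and advancing its internal position,
// but no longer moves or rotates the transform itself.
[RequireComponent(typeof(NavMeshAgent))]
public class AgentAsLogicOnly : MonoBehaviour
{
    NavMeshAgent agent;

    void Awake()
    {
        agent = GetComponent<NavMeshAgent>();
        agent.updatePosition = false;
        agent.updateRotation = false;
    }
}
```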

Now that your NavMeshAgent is thinking but not doing, you can use it as a driver for virtual input. Whenever click-to-move is needed, just identify the destination with a Raycast, set that point as that NavMeshAgent's destination, and set a flag in your script indicating that your character is autonavigating. Now whenever you would normally check for player input, if the autonavigating flag is set, you instead determine where the NavMeshAgent wants to go and simulate player input to move that direction. Assuming the NavMeshAgent's path is valid, it will expose nextPosition as a Vector3. Subtract the player's position from that position, zero out the Y value, and normalize the resulting Vector - the X and Z coordinates are now your simulated Horizontal and Vertical Axis inputs. Wherever you would use player inputs, use those values instead. Your NavMeshAgent is now driving your player by simulating player input, and all of your movement logic remains exactly the same. Now all you have to do is identify the situations when you should stop autonavigating (such as the player using direct controls again), and unset that flag to stop the simulated input.
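Put together, the whole driver might look something like this sketch. The NavMeshAgent members (destination, nextPosition, updatePosition, updateRotation) are Unity's; MoveCharacter and the overall structure are illustrative stand-ins for your own existing movement code:

```csharp
using UnityEngine;

// Sketch of the click-to-move driver described above, assuming the
// NavMeshAgent has already been reduced to a logic-only component.
[RequireComponent(typeof(NavMeshAgent))]
public class ClickToMoveDriver : MonoBehaviour
{
    NavMeshAgent agent;
    bool autoNavigating;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        agent.updatePosition = false; // logic only - no movement
        agent.updateRotation = false;
    }

    void Update()
    {
        // A click identifies the destination and engages autonavigation.
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                agent.destination = hit.point;
                autoNavigating = true;
            }
        }

        float h = Input.GetAxis("Horizontal");
        float v = Input.GetAxis("Vertical");

        // Any direct input takes over and cancels autonavigation.
        if (h != 0f || v != 0f)
            autoNavigating = false;

        Vector3 input = new Vector3(h, 0f, v);
        if (autoNavigating)
        {
            // Simulate player input from where the agent wants to go:
            // subtract position, flatten Y, normalize; the resulting
            // X and Z act as the Horizontal and Vertical axes.
            Vector3 toNext = agent.nextPosition - transform.position;
            toNext.y = 0f;
            input = toNext.normalized;
        }
        else if (input.sqrMagnitude > 1f)
        {
            input.Normalize(); // keep diagonal movement speed consistent
        }

        MoveCharacter(input);
    }

    // Stand-in for your existing direct-control movement/animation code.
    void MoveCharacter(Vector3 input) { /* ... */ }
}
```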

There are still some edge cases I'd need to sort out if including this in a full game, but overall this seems to be working really well. Very promising for future projects.

Mecanim

I may have gotten a little out of hand here.

I've used some publicly-available motion capture data in past projects, but it seems that one of the recent Unity updates changed how animations are imported, specifically in terms of translation and rotation of bones. As a result, those animations are no longer usable as-is - even the ones I'd previously used successfully, so I guess I won't be rebuilding those projects without a major overhaul. This was very disappointing, since I'd been hoping to work with Mecanim this month, but it was enough to drive me to actually spend some money on animations, and I luckily found Props Animations. I have no idea how this library is so inexpensive, and the price is going up as the creator adds more animations, but it's still a no-brainer at its current price for anyone looking for a core set of animations to work with. Very clean, very good stuff.

Using those animations as a starting point, I started working on by far the most complicated Mecanim state tree I'd ever attempted. I was happy to see that Unity has added a Trigger parameter type to the Animator, as it solves a problem I'd had to code around (with mixed results) in earlier projects, and the addition of animation events made it feasible to synchronize game events with animations without driving myself crazy.
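For anyone who hasn't used them, firing a Trigger from gameplay code is a one-liner. This is a generic sketch, not my actual controller - the "Attack" parameter name is hypothetical, while Animator.SetTrigger is the standard Unity API:

```csharp
using UnityEngine;

// Sketch: setting a Mecanim Trigger parameter from gameplay code.
// A Trigger resets itself automatically once a transition consumes it,
// so there's no boolean to manually clear afterwards.
[RequireComponent(typeof(Animator))]
public class AttackInput : MonoBehaviour
{
    Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        if (Input.GetButtonDown("Fire1"))
            animator.SetTrigger("Attack");
    }
}
```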

At last count, my animator state machine for this project consists of 57 states (7 of them blend trees) across three layers, linked by 148 transitions. Most of the transitions have custom blend timings that all had to be manually tested and tweaked to get them looking right while also hitting the important animation points and events.

Speaking of events, I don't care to count them, but I'm sure there are well over 100. Wherever possible, I tried to link visual changes directly to animation events to make them as smooth and realistic as possible - for example, the sword switches from its idle hook to its wield hook at exactly the moment the character's hand grabs the hilt. Many events are included for potential future use but aren't used yet, like footfalls.
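The hook-swap trick can be sketched roughly like this. An animation event placed on the draw clip calls the handler at the exact frame the hand closes on the hilt; all of the names here (idleHook, wieldHook, OnGrabHilt, OnSheathe) are illustrative, while the reparenting itself is plain Unity transform manipulation:

```csharp
using UnityEngine;

// Sketch: reparenting a prop between attachment points ("hooks")
// in response to animation events fired from the clips themselves.
public class SwordHookSwap : MonoBehaviour
{
    public Transform sword;
    public Transform idleHook;  // e.g. a bone on the hip or back
    public Transform wieldHook; // a bone in the hand

    void Attach(Transform hook)
    {
        sword.parent = hook;
        sword.localPosition = Vector3.zero;
        sword.localRotation = Quaternion.identity;
    }

    // Called by animation events on the draw and sheathe clips.
    void OnGrabHilt() { Attach(wieldHook); }
    void OnSheathe()  { Attach(idleHook); }
}
```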

A highlight camera is included in the tech demo specifically to focus on the animations.

Art Assets

There was really no time to worry about art assets this month. The player character is a quickly-rigged and slightly-tweaked version of Ranger by Enthymeme and the target of his aggression is WarWolf by Thetank0meter, both from Blendswap.

The Verdict

Easily the most convincingly "real game" character controller and animations I've set up. Watching this guy run around gives me warm happy feelings of Baldur's Gate: Dark Alliance. Maybe I'll make a shameless clone of, or spiritual successor to, Gauntlet: Dark Legacy using this as a starting point. That's something that needs to be done by somebody, and at this rate maybe that'll be me.

Note: the following builds are available for anyone who wants to try them, but I don't have the ability to test them myself. These are Unity builds, so I don't anticipate any serious issues, but I can't vouch for their correctness or performance.