Stumbling Toward 'Awesomeness'

A Technical Art Blog

Wednesday, April 25, 2012

RigPorn: Halo4 Skeleton and Loco Debug

Found a screen of 343's in-game locomotion debug, for anyone interested.

posted by admin at 12:47 AM  

Saturday, April 21, 2012

Maya: Walking the Line

I am still finding my feet in Maya. On my project, some files have grown to 800MB in size; things get corrupt, and hand-editing MAs is common, so I am really learning some of the internals.

In the past week I have had to do a lot of timeline walking to switch coordinate spaces and get baked animations into and out of hierarchies. In 3dsMax you can loop and evaluate a node 'at time i' with no redraw or anything; I didn't know how to do this in Maya.

I previously did this by looping cmds.currentTime(i) and 'walking the timeline'; however, you can set the time node directly like so: cmds.setAttr("time1.outTime", int(i))

Unparenting a child with keyed compensation (1200 frames):
10.03 sec – currentTime
2.02 sec – setAttr

There are some caveats: whereas in a currentTime loop you can just call cmds.setKeyframe(node), I now have to call cmds.setKeyframe(node, time=i). But when grabbing a matrix I don't need to pass a time and it still works (I don't think you can anyway); I guess it gets the time from the time node.

Here's a sample loop that makes a locator and copies a node's animation to world space:

import maya.cmds as cmds

# function feeds in start, end, node
def copyAnimToWorldLocator(node, start=None, end=None):
	if not start: start = cmds.playbackOptions(q=1, minTime=1)
	if not end: end = cmds.playbackOptions(q=1, maxTime=1)
	loc = cmds.spaceLocator(name='parentAlignHelper')[0]
	for i in range(int(start), int(end) + 1):
		cmds.setAttr("time1.outTime", int(i))
		# query the node's worldspace matrix at this frame and key it onto the locator
		matrix = cmds.xform(node, q=1, ws=1, m=1)
		cmds.xform(loc, ws=1, m=matrix)
		cmds.setKeyframe(loc, time=i)
	return loc
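
For comparison's sake, here is a minimal sketch of how you could time the two approaches yourself; the function name and test node are just illustrative, not the exact test I ran:

import time
import maya.cmds as cmds

def timeWalk(node, start, end, useSetAttr=True):
	# step through the frame range with either time1.outTime or cmds.currentTime,
	# querying a worldspace matrix at each frame, and return the elapsed seconds
	t0 = time.time()
	for i in range(int(start), int(end) + 1):
		if useSetAttr:
			cmds.setAttr("time1.outTime", int(i))
		else:
			cmds.currentTime(i)
		cmds.xform(node, q=1, ws=1, m=1)
	return time.time() - t0

# e.g. timeWalk('pCube1', 0, 1200) vs timeWalk('pCube1', 0, 1200, useSetAttr=False)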
posted by admin at 11:44 AM  

Tuesday, October 4, 2011

Question: Rigging with MetaData?

As many of you know, I feel the whole 'autorigging' schtick is a bit overrated. Bungie gave a great talk at GDC09 (Modular Procedural Rigging), and DICE was slated to give one this year at SIGGRAPH (Modular Rigging in Battlefield 3), but never showed up for the talk.

At Crytek we are switching our animation department from 3dsMax to Maya. This forces us to build a pipeline there from scratch; in 3dsMax we had 7 years of script development focused on animation and rigging tools. So I am looking at quite a bit of Maya work, and I have spent the past two weeks focusing on a 'rigging system' that I guess could be thought of as 'procedural' but is not really an 'autorigger'. My past experience was always regenerating rigs with MEL commands.

Things I would like to solve:

  • Use one set of animator tools for many rigs – common interfaces, rig block encapsulation (oh god I said 'block')
  • Abstract things away, thinking of rigging ‘units’ and character ‘parts’ instead of individual rig elements, break reliance on naming, version out different parts
  • Be fluid enough to regenerate the ‘rigging’ at any time

First Weekend: Skeleton ‘Tagging’

I created a wrapper around the common rigging tools that I used; this way, when I rigged, it would automagically mark up the skeleton/elements as I went. This looked like so:

The foundation of this was marking up the skeleton or cons with message nodes that pointed to things or held metadata. This was cool, and I still like how simple it was; however, it didn't really create the layer of abstraction I was after. There wasn't the idea of a limb that I could tell to switch from FK to IK.
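
A minimal sketch of the tagging idea might look like this; the attribute names (rigTag, rigTarget) are hypothetical, not the ones from my tools:

import maya.cmds as cmds

def tagElement(node, tag, target=None):
	# stamp a string attr holding metadata, plus an optional message connection
	# pointing at a related node (e.g. an IK handle pointing back at its chain root)
	if not cmds.objExists(node + '.rigTag'):
		cmds.addAttr(node, longName='rigTag', dataType='string')
	cmds.setAttr(node + '.rigTag', tag, type='string')
	if target:
		if not cmds.objExists(node + '.rigTarget'):
			cmds.addAttr(node, longName='rigTarget', attributeType='message')
		cmds.connectAttr(target + '.message', node + '.rigTarget', force=True)

# e.g. tagElement('ikHandle1', 'ikChain', target='joint1')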

Second Weekend: Custom Nodes

That Bungie talk got a lot of us excited, and Roman went and created a really cool custom node plugin that does way more than we spec'd it out to do. I rewrote the rigging tools to create 'rigPart' nodes, which could be something like an IK chain, a set of twist joints, an expression, or a constraint. Together these could form a 'charPart' like an arm or leg. All these nodes sat under a main 'character' node. I realize that many companies abstract their characters into 'blocks' or 'parts', but I had never seen a system that had another layer underneath that. Roman also whipped up a way that when an attr on a customNode changes, you can evaluate a script. So whether it's a human arm or an alien tentacle arm, the 'charPart' node can have one FK/IK enum. I am still not sure if this is a better idea, because of the sheer legwork involved…
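
Since Roman's plugin isn't something I can paste here, here is a rough approximation of the layout using stock 'network' nodes and message connections (all names hypothetical); the real nodes did much more, like evaluating a script when an attr changes, but the parent/child wiring is the gist:

import maya.cmds as cmds

def createMetaNode(name, metaType='rigPart'):
	# stand-in for the custom plugin node: a network node with a type attr
	node = cmds.createNode('network', name=name)
	cmds.addAttr(node, longName='metaType', dataType='string')
	cmds.setAttr(node + '.metaType', metaType, type='string')
	cmds.addAttr(node, longName='children', attributeType='message', multi=True)
	cmds.addAttr(node, longName='parent', attributeType='message')
	return node

def connectMeta(parent, child):
	# wire a rigPart under a charPart, or a charPart under the character node
	index = cmds.getAttr(parent + '.children', size=True)
	cmds.connectAttr(child + '.message', '%s.children[%d]' % (parent, index), force=True)
	cmds.connectAttr(parent + '.message', child + '.parent', force=True)

# character = createMetaNode('grunt', 'character')
# leg = createMetaNode('grunt_leg', 'charPart')
# legIK = createMetaNode('grunt_leg_ik', 'rigPart')
# connectMeta(character, leg)
# connectMeta(leg, legIK)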

Third Weekend: A Mix of Both?

So a class like 'charParts.gruntLeg()' not only knew how to build the leg rigParts, but could also rebuild only the leg 'rigging' if needed. This works pretty well, but the above was pretty hard to read. I took some of my favorite things about the tree-view-based system and created a 'character' outliner of sorts. This made it much easier to visualize the rigParts that made up individual 'systems' of the character, like leg, spine, arm, etc. I did it as a test, but in a way that I can easily swap it out with the treeWidget in the rigging tools dialog.
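
Building on the sketch above, the 'character' outliner is basically just a walk of those connections, something like:

import maya.cmds as cmds

def walkMeta(node, depth=0):
	# recursively print the charParts/rigParts wired under a character node;
	# this is the same data a treeWidget/outliner view would display
	metaType = cmds.getAttr(node + '.metaType')
	print('    ' * depth + '%s (%s)' % (node, metaType))
	children = cmds.listConnections(node + '.children', source=True, destination=False) or []
	for child in children:
		walkMeta(child, depth + 1)

# walkMeta('grunt')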

So how do you guys solve some of these issues?

posted by Chris at 4:00 AM  

Tuesday, July 19, 2011

SIGGRAPH 2011

I am volunteering again in the Studio and giving three small talks at SIGGRAPH; drop me a line if you will be in Vancouver.

Rigging Characters for CryENGINE

How to rig, skin, and export a character for CryENGINE 3. Topics include physics setup, building characters from many skinned meshes, and creating Character Definitions and Character Parameter files. These rigging basics are applicable to most run-time game engines.

Introduction to Python Scripting

In this introduction to Python, a powerful scripting language used by many 3D applications, attendees learn the basics and explore small example scenarios gleaned from actual game and film productions. The sessions are taught in a way that should empower attendees to immediately begin creating time-saving Python scripts and applications.

World Creation in CryENGINE

Have you ever wanted to make a videogame? This session shows how to build a small level in the freely available CryENGINE 3 SDK. Topics include: world building and tools (FlowGraph, CryENGINE’s visual scripting language, and Trackview, the camera sequencing and directing tools). In less than an hour, attendees create their own playable video games.

posted by admin at 9:32 AM  

Wednesday, April 7, 2010

RigPorn: Uncharted 2

My friends Judd and Rich gave a talk on some of the Character Tech behind Uncharted 2. Here are the slides.

posted by admin at 8:32 PM  

Wednesday, March 4, 2009

Common Character Issues: Attachments

I love this picture. It illustrates a few large problems with video games, one of which I have wanted to talk about for a while: attachments, of course. I am talking about the sword (yes, there is a sword; follow her arm to the left side of the image).

Attaching props to a character that has to be seen dynamically from every angle through thousands of animations can be difficult. So difficult that people often give up. This was a promotional image for an upcoming Soul Calibur title, which goes to show you how difficult the issue is. Or maybe no one even noticed she was holding a sword. So let's look at a promotional image from another game:

Why does it happen?

Well, props are often interchangeable. Many different props are supposed to attach to the same location throughout the game. This is generally done by marking up the prop and the skeleton with two attachment points that snap to one another.
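
In Maya terms, a bare-bones sketch of that markup might be: the prop is frozen with its pivot at its attach point, the skeleton carries an attach joint (or locator), and attaching is just a constraint. Everything below is made up for illustration:

import maya.cmds as cmds

def attachProp(propRoot, charAttachJoint):
	# assumes the prop's transform pivot *is* its attach point (the prop-side markup)
	# and the skeleton has an attach joint/locator (the character-side markup)
	cmds.parentConstraint(charAttachJoint, propRoot, maintainOffset=False)

# e.g. attachProp('sword_grp', 'R_weapon_attach_jnt')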

In this case you often have one guy modeling the prop, one guy placing the skeleton, and one guy creating the animation. All these people have to work together.

How can we avoid these problems?

This problem is most noticeable at the end of the line: you would really only see it in the game. But this is one of the few times you will hear me say that checking it ‘in the engine’ is a bad idea. It’s hard enough to get animators to check their animation, much less test all the props in a ‘prop test level’ of sorts.

I feel problems like this mainly show up in magazines and final games because you are leaving it up to QA and other people who don't know what to look for. There is a saying I developed while at Crytek when trying to impart some things to new tech art hires: "Does QA know what your alien should deform like? And should they?" The answer is no, and it also goes for the things above. Who knows how Robotnik grips his bow? You do, the guy rigging the character.

So in this case I am all for systems that allow animators to instantly load any common weapons and props from the game directly onto the character in the DCC app. You need a system that lets animators attach any commonly used prop at any time during any animation (especially movement anims).

Order of operations

Generally I would say:

  1. The animator picks a pivot point on the character. They will be animating/pivoting around this.
  2. The tech artist ‘marks up’ the skeleton with the appropriate offset transform.
  3. The modeler ‘marks up’ his prop and tests it (iteratively) on one character.
  4. The tech artist adds the marked-up prop (or a low-res version) to a special file that is streamlined for automagically merging in items, then adds a UI element that lets the animator pick the prop from a drop-down and see it imported and attached to the character (see the sketch after this list).
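
For step 4, a minimal sketch of the merge-and-attach piece might look like the following; the library paths and names are purely illustrative, and the UI is then just a drop-down populated from the library keys:

import maya.cmds as cmds

# hypothetical mapping of drop-down names to streamlined, anim-resolution prop files
PROP_LIBRARY = {
	'Sword':  'props/animRes/sword.ma',
	'Pistol': 'props/animRes/pistol.ma',
}

def loadAndAttachProp(propName, charAttachJoint):
	# merge the marked-up prop file into the scene, find its root, and constrain it
	nodes = cmds.file(PROP_LIBRARY[propName], i=True, returnNewNodes=True, namespace=propName)
	propRoot = cmds.ls(nodes, assemblies=True)[0]
	cmds.parentConstraint(charAttachJoint, propRoot, maintainOffset=False)
	return propRoot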

Complications

I can remember many heated discussions about problems like this. The more people that really care about the final product, and the more detailed or realistic games and characters get, the more things like this will be scrutinized.

This is a relatively simple problem that just takes care and diligence, whereas things like multiple hand positions and hand poses are a little more difficult, as are attachments that attach via a physics constraint in the engine. There are also other, much more difficult issues in this realm, like the exact positioning of AI characters interacting with each other and the environment, which is another tough 'snap me into the right place' problem that involves marking up a character and an item in the world to interact with.

posted by Chris at 11:25 AM  

Wednesday, December 17, 2008

Kavan et al Have Done It!

Ladislav Kavan is presenting a paper on skinning arbitrary deformations, entitled ‘Automatic Linearization of Nonlinear Skinning‘, at the 2009 Symposium on Interactive 3D Graphics and Games! Run over to his site and check it out. In my opinion, this is the holy grail of sorts. You rig any way you want, have complex deformation that can only solve at 1 frame an hour? No problem, bake a range of motion to pose-driven, procedurally placed, animated, and weighted joints. People, Kavan included, have presented papers in the past with systems somewhat like this, but nothing this polished and final. I have talked to him about this stuff in the past and it's great to see the stuff he's been working on, and that it really is all I had hoped for!

This will change things.

posted by Chris at 12:16 PM  

Wednesday, September 17, 2008

Making of the Image Metrics ‘Emily’ Tech Demo

I have seen some of the other material in the SIGGRAPH Image Metrics presskit posted online [Emily Tech Demo] [‘How To’ video], but not the video that shows the making of the Emily tech demo. So here’s that as well:

At the end, there's a quote from Peter Plantec about how Image Metrics has finally 'crossed the uncanny valley', but seriously, am I the only one who thinks the shading is a bit off? And besides that, what's the point of laying a duplicate of a face directly on top of one in a video? Shouldn't they have shown her talking in a different setting? Maybe shown how they can remap the animation to a different face? There is no reason not to just use the original plate in this example.

posted by Chris at 4:44 PM  

Thursday, July 31, 2008

MGS4 Character Pipeline

Character Creation Pipeline

Thanks to my brother, Mike, for translating this from the original Japanese [here].

The hero, Snake, and nearly all other characters that are animated on the PS3 and make an appearance in the game are kept to a range of about 5 thousand to 1 million polygons (including the face). Also, the same resolution polygon characters are used in both gameplay and cutscenes. This allows for seamless transitions between gameplay and cutscenes and makes it easier for the player to get emotionally involved in the reality of the game.

Furthermore, for all other characters except crowds, the same resolution polygon characters are used in game as well as in cutscenes. Separately from the in-game resolution models used on the PS3, high-res data is modeled at the same time to generate a normal map. Wrinkles in clothing and other details are expressed through this normal map, created from the high-res model.

Of all the bones within the character's body, the number that contain and are driven by animation data is roughly 21. But in reality a number of helper (auxiliary) bones are used to supplement motions like twisting in the knees, elbows, arms, and legs. These, however, are not driven by animation data; instead, they reference the values of the basic animation-driven joints and move in like manner.


The same method is employed on the PS3, not just in XSI; all you have to do is extract the helper bones' definition files from the XSI data and you can achieve the same kind of control on the PS3 as well. (Awesome! Rig syncing of constraints and driven bones between the DCC app and the game engine.)

Since there is no actual motion data stored inside the driven bones, you not only limit the data volume, but even in the event that you need to add or delete helper bones there's no need to reconvert the motion data; you can just adjust the model data instead.
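
The article doesn't show the actual setup, but as a hedged sketch of the general idea in Maya terms (rather than XSI): a helper bone just reads the animated joint's values and applies a fraction of them. All names below are hypothetical:

import maya.cmds as cmds

def addTwistHelper(driverJoint, helperJoint, weight=0.5):
	# the helper never gets keyed; it only references the animated joint's twist
	# and applies a fraction of it, so no extra motion data needs to be stored
	md = cmds.createNode('multiplyDivide', name=helperJoint + '_twist_md')
	cmds.setAttr(md + '.input2X', weight)
	cmds.connectAttr(driverJoint + '.rotateX', md + '.input1X')
	cmds.connectAttr(md + '.outputX', helperJoint + '.rotateX', force=True)

# e.g. addTwistHelper('wrist_L_jnt', 'forearmTwist_L_jnt')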

posted by Chris at 8:27 AM  

Thursday, July 31, 2008

MGS4 Facial Animation

Shockingly Realistic Facial Animation

Thanks to my brother, Mike, for translating this from the original Japanese [here].

One of the most notable things about MGS4 is its world-leading cutting edge facial animation. Exactly how were these real-to-life facial expressions created?

Since the Metal Gear Solid series is lip-synced for localization, from a workload standpoint voice analysis software is employed.

In MGS4, for example, lip-syncing for Japanese and English was done separately with different voice analysis software. Other emotions and expressions besides lip-syncing were animated by hand. In nearly all cases, the expression and phoneme elements were worked on together simultaneously, reducing interference and allowing MGS4 to achieve its simultaneous world release.

When doing voice analysis, it's necessary to set parameters for both expression components (e.g., anger, smile) and phoneme components (all languages) separately. After setting this up, we need to see how it behaves as a rig. It's possible to use parameters for the rotation and movement of bones; however, the rig can become more complicated, and it can also become more difficult to predict how the bones will transform/change once enveloped. In other words, when facial animation is done by only controlling the bones, the designer's job becomes unintuitive and more difficult, and he runs into the following two problems: 1) expressing the behavior of bones, and 2) setting parameters for phonemes.

However, with shape animation (even though it has the drawback of linear interpolation) it's extremely easy to set up parameters for all your phonemes and expressions. Most of all, it's advantageous in that the designer will be able to intuitively predict the result.

For these reasons, this time on our rig we used bone-driven animation based on the results of various parameter shapes.

With this setup, using voice-analysis-automated animation (not just the mouth, but automatic animation of the tongue and throat phonemes as well) and hand animation for emotions, we are able to achieve an abundance of realistic expressions.
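
To make the 'shape-style parameters driving bones' idea concrete, here is a hedged sketch in Maya terms (the MGS4 tools were XSI-based, and all names and values below are made up): one phoneme or expression slider drives several face bones through driven keys, so the designer still works with simple, predictable parameters while the deformation stays bone-based:

import maya.cmds as cmds

def drivePhonemeBones(ctrl, attr, boneTargets):
	# one 0-1 phoneme/expression slider drives several face bones via driven keys
	driver = '%s.%s' % (ctrl, attr)
	for bone, channel, value in boneTargets:
		cmds.setDrivenKeyframe(bone, attribute=channel, currentDriver=driver,
			driverValue=0.0, value=0.0)
		cmds.setDrivenKeyframe(bone, attribute=channel, currentDriver=driver,
			driverValue=1.0, value=value)

# e.g. drivePhonemeBones('face_ctrl', 'phonemeAa',
#	[('jaw_jnt', 'rotateX', 12.0), ('lipCorner_L_jnt', 'translateY', -0.2)])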

In the following Flash movie you can see how smooth muscular expressions are achieved through superb rig setup:

Flash Movie:

Facial rig setup pipeline

————————————————
1. Lo-poly model driven by shape animation
2. Above that, the constrained bones
3. Polygonal mesh enveloped to the bones
4. Tangent color
5. OpenGL display (wrinkles expressed also with normal map)

————————————————

Expressions, phonemes, eyes (eyebrows), and shader-driven wrinkle animation are all tab-selectable. Through the combination of various parameters we can create life-like expressions like those shown above.

The most surprising thing is, we developed a tool that automatically sets up this facial rig that allows such sophisticated control. In other words, if you enter the facial model data and run the tool, it will automatically identify the optimal position for bones; in this system the tool will create controls that include the preset parameters for emotions (a smiley face, an angry face, etc.). To perform the automated facial rigging, the facial data's topology information needs to be standardized ahead of time. If you adhere to this one rule, your setup can be done automatically, and all that's left is for the designer to fine-tune the controls; you have an environment where you can get right into your facial animation.

Next, a rig that controls the movement of the eyeball and surrounding muscles can also be generated automatically using this tool. Since the area around the eye, like the area surrounding the mouth, is controlled by the simultaneous usage of shapes and bones, when you move the eyeball locator you get smooth muscular movement. What's more, even if you edit the shape, or redefine the configuration of the outline of the eye, it doesn't disrupt the expression of brow wrinkles or the blinking of the eye in any way.

Behind all the characters that make an appearance in this game and appeal to the player's emotions, we have implemented this setup and animation system, and through it we are able to increase and maintain a high-quality user experience.

posted by Chris at 1:14 AM  

Monday, July 28, 2008

Gleaning Data from the 3dsMax ‘Reaction Manager’

This is something we had been discussing over at CGTalk; we couldn't find a way to figure out Reaction Manager links through MaxScript. It just is not exposed. Reaction Manager is like Set Driven Key in Maya or a Relation Constraint in MotionBuilder. In order to sync rigging components between the packages, you need to be able to query these driven relationships.

I set about doing this by checking dependencies, and it turns out it is possible. It’s a headache, but it is possible!

The problem is that even though slave nodes have controllers with names like “Float_Reactor”, the master nodes have nothing that distinguishes them. I saw that if I got dependents on a master node (its controllers, specifically the one that drives the slave), there was something called ‘ReferenceTarget:Reaction_Master‘:

refs.dependents $.position.controller
#(Controller:Position_Rotation_Scale, ReferenceTarget:Reaction_Master, Controller:Position_Reaction, ReferenceTarget:Reaction_Set, ReferenceTarget:Reaction_Manager, ReferenceTarget:ReferenceTarget, ReferenceTarget:Scene, Controller:Position_Rotation_Scale, $Box:box02 @ [58.426544,76.195091,0.000000], $Box:box01 @ [-42.007244,70.495964,0.000000], ReferenceTarget:NodeSelection, ReferenceTarget:ReferenceTarget, ReferenceTarget:ReferenceTarget)

This is actually a class, as you can see below:

exprForMAXObject (refs.dependents $.position.controller)[2]
"<<Reaction Master instance>>"
 
getclassname (refs.dependents $.position.controller)[2]
"Reaction Master"

So now we get the dependents of this ‘Reaction Master’, and it gives us the node that it is driving:

refs.dependentNodes (refs.dependents $.position.controller)[2]
#($Box:box02 @ [58.426544,76.195091,0.000000])

So here is a fn that gets Master information from a node:

fn getAllReactionMasterRefs obj =
(
	local nodeRef
	local ctrlRef
	-- walk every subAnim, looking for a controller that has a Reaction_Master dependent
	for n = 1 to obj.numSubs do
	(
		ctrl = obj[n].controller
		if (ctrl!=undefined) then
		(
			for item in (refs.dependents ctrl) do
			(
				if item as string == "ReferenceTarget:Reaction_Master" then
				(
					-- the node(s) this master drives, and the controller doing the driving
					nodeRef = (refs.dependentNodes item)
					ctrlRef = ctrl
				)
			)
			-- recurse into nested subAnims
			getAllReactionMasterRefs obj[n]
		)
	)
	return #(nodeRef, ctrlRef)
)

Running this on the node above returns:

getAllReactionMasterRefs $
#(#($Box:box02 @ [58.426544,76.195091,0.000000]), Controller:Position_Rotation_Scale)

The first item is an array of the referenced node, and the second is the controller that is driving *some* aspect of that node.

You now loop through this node looking for ‘Float_Reactor‘, ‘Point3_Reactor‘, etc, and then query them as stated in the manual (‘getReactionInfluence‘, ‘getReactionFalloff‘, etc) to figure out the relationship.

Here is an example function that prints out all reaction data for a slave node:

fn getAllReactionControllers obj =
(
	local list = #()
	for n = 1 to obj.numSubs do
	(
		ctrl = obj[n].controller
		if (ctrl!=undefined) then
		(
			--print (classof ctrl)
			-- check the controller against each of the reactor controller classes
			if (classof ctrl) == Float_Reactor \
			or (classof ctrl) == Point3_Reactor \
			or (classof ctrl) == Position_Reactor \
			or (classof ctrl) == Rotation_Reactor \
			or (classof ctrl) == Scale_Reactor then
			(
				-- reactorDumper is a helper (not shown here) that prints the reaction data below
				reactorDumper obj[n].controller data
			)
		)
		getAllReactionControllers obj[n]
	)
)

Here is the output from ‘getAllReactionControllers $Box2‘:

[Controller:Position_Reaction]
ReactionCount - 2
ReactionName - My Reaction
    ReactionFalloff - 1.0
    ReactionInfluence - 100.0
    ReactionStrength - 1.2
    ReactionState - [51.3844,-17.2801,0]
    ReactionValue - [-40.5492,-20,0]
ReactionName - State02
    ReactionFalloff - 2.0
    ReactionInfluence - 108.665
    ReactionStrength - 1.0
    ReactionState - [65.8385,174.579,0]
    ReactionValue - [-48.2522,167.132,0]

Conclusion

So, once again, no free lunch here. You can loop through the scene looking for Masters, then derive the slave nodes, then dump their info. It shouldn't be too difficult, as you can only have one Master, but if you have multiple reaction controllers in each node affecting the other, it could be a mess. I threw this together in a few minutes just to see if it was possible, not to hand out a polished, working implementation.

posted by Chris at 4:42 PM  

Sunday, June 15, 2008

RigPorn: Kung Fu Panda

Here are some screens of animation rigs from Kung Fu Panda:

In a shot:

posted by Chris at 2:20 AM  
