Stumbling Toward 'Awesomeness'

A Technical Art Blog

Sunday, June 29, 2008

Debugging a Bluescreen

This is a tip a coworker (Tetsuji) showed me a year or so ago. I was pretty damn sure my ATI drivers were bluescreening my system, but I wanted to hunt down proof. So you have just had a bluescreen and your PC rebooted; here’s how to hunt down what happened.

First thing you should see when you log back in is this:

It’s really important that you not do anything right now; especially don’t click one of those buttons. Click the ‘click here‘ text and then you will see this window.

Ok, so this doesn’t tell us much at all. We want to get the ‘technical information’, so click the link for that and you will see something like this:

Here is why we did not click those buttons before: when you click them, these files get deleted. So copy this path, go to that folder, copy the contents elsewhere, and then close all those windows. You should now have these three files:

The ‘dmp’ file (dump file) will tell us what bluescreened our machine, but we need some tools to read it. Head over to the Microsoft site and download ‘Debugging Tools for Windows’ (x32, x64). Once installed, run ‘WinDbg‘, select File->Open Crash Dump… and point it at your DMP file. Once it opens, scroll down and look for something like this:

In this example the culprit was ‘pgfilter.sys‘, something installed by ‘Peer Guardian’, a hacky privacy protection tool I use at home. There is a better way to cut through a dump file: you can also type ‘!analyze -v‘, which will generate something like this:
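
If you just want to get to the culprit quickly, the usual sequence in WinDbg’s command window is roughly the following; the local symbol cache path is just an example, and you need to be online so WinDbg can pull symbols from Microsoft’s symbol server:

.symfix c:\symbols
.reload
!analyze -v

The verbose analysis prints fields like MODULE_NAME and IMAGE_NAME, which point at the offending driver.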

In the example above you can see that it’s an ATI driver issue, which I fixed by replacing the card with an NVIDIA and tossing the ATI into our IT parts box (junkbox).

posted by Chris at 5:01 PM  

Sunday, June 29, 2008

You Suck At Photoshop

You Suck at Photoshop always cracks me up; you might like it as well.

posted by Chris at 1:47 PM  

Monday, June 23, 2008

Under the Hood: The Inner Workings of Animation on Assassin’s Creed

Under the Hood: The Inner Workings of Animation on Assassin’s Creed

Sylvain Bernard, Animation Director, Ubisoft

Animation:

  • All animation was done in 3dsMax with Biped
    • ‘Our animators do not like MotionBuilder for creating animation’
    • Would have meant porting all their tools to MotionBuilder
  • MotionBuilder was only used to clean mocap
  • They decided to ignore foot sliding in order to concentrate on a better performance and gameplay experience
  • They stressed the importance of Technical Animators
  • Up to 15 animators worked on Assassin’s Creed
  • 40% of all animation was hand keyed
  • There is no procedural animation (not counting blending)
  • They showed the entire move tree
    • sprint, run, walk, jog, slow walk, banking, strafe, 4 idles
    • 168 ground animations in the Altair locomotion group
    • 122 anims in the climbing group

Production:

  • 90% of work was integrating animation into the environment
  • The key was pairing animators with programmers
    • Sit them together
  • Before they started one main goal of the project was ‘to do as much animation as we could’
    • They saw Next Gen as an animation showcase
  • They prototype gameplay in max to show programmers how the game should look/feel
    • How AI should react
    • How a character should interact with the environment
  • ‘In the beginning designers were given free rein to make anything they wanted; in the end we had to make a 20-page document telling them how to create levels’
    • Too much freedom leads to chaos
  • Stressed the need to involve animators in animation system development

Pipeline/Rigging:

  • All characters share the same skeleton (male/female npc, altair)
    • ‘the art director wanted characters of different heights; we said no’
    • made mocking things up easy
  • They call their movement locator the ‘magic bug’
    • Locators ‘joined together’ when two characters interacted
  • NPCs use simple hinge constraints for ponytails and things
  • They had ‘no working AI for almost the first two years‘ of the project
  • They do edge detection on the collision mesh
  • Auto nav mesh generation
  • Auto ‘animation object’ placement
posted by Chris at 12:34 PM  

Sunday, June 22, 2008

3D Models not Subject to Copyright

I saw this over at slashdot:

“The US Court of Appeals for the Tenth Circuit has affirmed (PDF) a ruling that a plain, unadorned wireframe model of a Toyota vehicle is not a creative expression protected under copyright law. The court analogized the wire-frame models to photographs: the owner of an object does not have a copyright in all images of the object, but a photographer may have a limited copyright over a particular image based on artistic choices such as costumery, lighting, posing, etc. Thus, the modelers could only copyright any ‘incremental contribution’ they made to Toyota’s vehicles; in the case of plain models, there was nothing new to protect. This could be a two-edged sword — companies that produce goods may not be able to stop modelers from imaging those products, but modelers may not be able to prevent others from copying their work.”

This will have some interesting ramifications. And I don’t just mean for the Limbo of the Lost guys. (j/k)

posted by Chris at 11:09 PM  

Sunday, June 22, 2008

AutoDesk Masterclass: Python for MotionBuilder Artists

In 2007, my friend Jason gave an Autodesk Masterclass entitled Python Scripting for MotionBuilder Artists. It has since been made available online, and I would like to mention it for anyone interested in Python and MotionBuilder.

Here’s what you get for only 40 bucks:

118 page PowerPoint presentation
72 page Full Documentation
21 Scripts
6 Scenes
2 text files
8 .mov videos capturing 1 hour 20 minute lecture

Buy it here: Python Scripting for MotionBuilder Artists

posted by Chris at 1:35 PM  

Saturday, June 21, 2008

Facial Stabilization in MotionBuilder using Python

Facial motion capture stabilization is basically where you isolate the movement of the face from the movement of the head. This sounds pretty simple, but it is actually a really difficult problem. In this post I will talk about the general process and give you an example facial stabilization python script.

Disclaimer: The script I have written here is loosely adapted from a MEL script in the book Mocap for Artists, and not something proprietary to Crytek. This is a great book for people of all experience levels, and has a chapter dedicated to facial mocap. Lastly, this script is not padded out or optimized.

To follow this you will need some facial mocap data; there is some freely downloadable at www.mocap.lt. Grab the FBX file.

andy serkis - weta head stabilization halo

Stabilization markers

Get at least 3 markers on the actor that do not move when they move their face. These are called ’stabilization markers’ (STAB markers). You will use these markers to create a coordinate space for the head, so it is important that they not move. STAB markers are commonly found on the left and right temples and the nose bridge. Using a headband and creating virtual markers from multiple solid left/right markers works even better. Headbands move, though, and it’s good to keep this in mind; above you see a special head rig used on Kong to create stable markers.

It is a good idea to write some tools to help you out here. At work I have written tools to parse a performance and tell me the most stable markers at any given time; if you have this data, you can also blend between them.

Load up the facial mocap file you have downloaded; it should look something like this:

In the data we have, you can delete the root and the headband markers; 1-RTMPL, 1-LTMPL, and 1-MNOSE could all be considered STAB markers.

General Pipeline

As you can see, mocap data is just a bunch of translating points. So what we want to do is create a new coordinate system that has the motion of the head, and then use this to isolate the facial movement.

This will take some processing, and also an interactive user interface. You may have seen my tutorial on Creating Interactive MotionBuilder User Interface Tools. You should familiarize yourself with that because this will build on it. Below is the basic idea:

You create a library, ‘myLib’, that you load into MotionBuilder’s Python environment. This is what does the heavy lifting; I say this because you don’t want to do things like send the position of every marker, every frame, to your external app via telnet. I also load pyEuclid, a great vector library, because I didn’t feel like writing my own vector class. (MBuilder has no vector class.)

Creating ‘myLib’

So we will now create our own library that sits inside MBuilder; this will essentially be a ‘toolkit’ that we communicate with from the outside. Your ‘myLib’ can be called anything, but it should be the place you store the functions that do the real processing jobs; you will feed into them from the external UI later. The first thing you will need inside the MB Python environment is something to cast FBVector3d types into pyEuclid. This is fairly simple:

#myLib.py needs pyfbsdk and pyEuclid available in MBuilder's python path
from pyfbsdk import *
from euclid import *
 
#casts anything with 3 indexable components (e.g. FBVector3d) to a pyEuclid vector
def vec3(point3):
	return Vector3(point3[0], point3[1], point3[2])
 
#casts a pyEuclid vector to FBVector3d
def fbv(point3):
	return FBVector3d(point3.x, point3.y, point3.z)

Next is something that will return a list of models when given an array of names; this is important later when we want to feed in model lists from our external app:

#returns an array of models when given an array of model names
#useful with external apps/telnetlib ui
def modelsFromStrings(modelNames):
	output = []
	for name in modelNames:
		output.append(FBFindModelByName(name))
	return output

Now, if you take these snippets and save them as a file called myLib.py in your MBuilder directory tree (MotionBuilder75 Ext2\bin\x64\python\lib), you can load them into the MBuilder environment. (You should have also placed pyEuclid here.)

casting fbvectors to pyeuclid

It’s always good to mock up code in telnet because, unlike the Python console in MBuilder, it supports copy/paste, etc.

In the image above, I get the position of a model in MBuilder, which comes back as an FBVector3d. I then import myLib and pyEuclid and use our function above to ‘cast’ the FBVector3d to a pyEuclid vector. It can now be added, subtracted, multiplied, and more; all things that are not possible with the default MBuilder Python tools. Our other function, ‘fbv()‘, casts pyEuclid vectors back to FBVector3d so that MBuilder can read them.
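
In script form the same round-trip looks something like this; a minimal sketch, and the marker name is just an example (use whatever your markers are called):

#inside the telnet window or MBuilder's python environment
from pyfbsdk import *
from euclid import *
from myLib import *
 
m = FBFindModelByName('1-MNOSE')    #example marker name, adjust to your data
v = vec3(m.Translation)             #FBVector3d -> pyEuclid Vector3
v = v + Vector3(0.0, 10.0, 0.0)     #vector math now works
m.Translation = fbv(v)              #cast back so MBuilder can read it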

So we can now do vector math in MotionBuilder! Next we will add some code to ‘myLib’ that stabilizes the face.

Adding Stabilization-Specific Code to ‘myLib’

One thing we will need to do a lot is generate ‘virtual markers’ from the existing markers. To do this, we need a function that returns the average position of however many vectors (marker positions) it is fed.

#returns average position of an FBModelList as FBVector3d
def avgPos(models):
	mLen = len(models)
	if mLen == 1:
		return models[0].Translation
	total = vec3(models[0].Translation)
	for i in range (1, mLen):
		total += vec3(models[i].Translation)
	avgTranslation = total/mLen
	return fbv(avgTranslation)

Here is an example of avgPos() in use:
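
For instance, something like this, using the STAB marker names mentioned above (a quick sketch; adjust the names to your data):

stabMarkers = modelsFromStrings(['1-RTMPL', '1-LTMPL', '1-MNOSE'])
print avgPos(stabMarkers)    #prints an FBVector3d: the average position of the three markers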

Now onto the stabilization code:

#stabilizes face markers; input: right/left/center/markers model lists, plus leaveOrig for leaving the original markers
def stab(right,left,center,markers,leaveOrig):
 
	pMatrix = FBMatrix()
	lSystem=FBSystem()
	lScene = lSystem.Scene
	newMarkers = []
 
	def faceOrient():
		lScene.Evaluate()
 
		Rpos = vec3(avgPos(right))
		Lpos = vec3(avgPos(left))
		Cpos = vec3(avgPos(center))
 
		#build the coordinate system of the head
		faceAttach.GetMatrix(pMatrix)
		xVec = (Cpos - Rpos)
		xVec = xVec.normalize()
		zVec = ((Cpos - vec3(faceAttach.Translation)).normalize()).cross(xVec)
		zVec = zVec.normalize()
		yVec = xVec.cross(zVec)
		yVec = yVec.normalize()
		facePos = (Rpos + Lpos)/2
 
		pMatrix[0] = xVec.x
		pMatrix[1] = xVec.y
		pMatrix[2] = xVec.z
 
		pMatrix[4] = yVec.x
		pMatrix[5] = yVec.y
		pMatrix[6] = yVec.z
 
		pMatrix[8] = zVec.x
		pMatrix[9] = zVec.y
		pMatrix[10] = zVec.z
 
		pMatrix[12] = facePos.x
		pMatrix[13] = facePos.y
		pMatrix[14] = facePos.z
 
		faceAttach.SetMatrix(pMatrix,FBModelTransformationMatrix.kModelTransformation,True)
		lScene.Evaluate()
 
	#keys the translation and rotation of an animNodeList
	def keyTransRot(animNodeList):
		for lNode in animNodeList:
			if (lNode.Name == 'Lcl Translation'):
				lNode.KeyCandidate()
			if (lNode.Name == 'Lcl Rotation'):
				lNode.KeyCandidate()
 
	Rpos = vec3(avgPos(right))
	Lpos = vec3(avgPos(left))
	Cpos = vec3(avgPos(center))
 
	#create a null that will visualize the head coordsys, then position and orient it
	faceAttach = FBModelNull("faceAttach")
	faceAttach.Show = True
	faceAttach.Translation = fbv((Rpos + Lpos)/2)
	faceOrient()
 
	#create new set of stabilized nulls, non-destructive, this should be tied to 'leaveOrig' later
	for obj in markers:
		new = FBModelNull(obj.Name + '_stab')
		newTran = vec3(obj.Translation)
		new.Translation = fbv(newTran)
		new.Show = True
		new.Size = 20
		new.Parent = faceAttach
		newMarkers.append(new)
 
	lPlayerControl = FBPlayerControl()
	lPlayerControl.GotoStart()
	FStart = int(lPlayerControl.ZoomWindowStart.GetFrame(True))
	FStop = int(lPlayerControl.ZoomWindowStop.GetFrame(True))
 
	animNodes = faceAttach.AnimationNode.Nodes
 
	for frame in range(FStart,FStop):
 
		#build proper head coordsys
		faceOrient()
 
		#update stabilized markers and key them
		for m in range (0,len(newMarkers)):
			markerAnimNodes = newMarkers[m].AnimationNode.Nodes
			newMarkers[m].SetVector(markers[m].Translation.Data)
			lScene.Evaluate()
			keyTransRot(markerAnimNodes)
 
		keyTransRot(animNodes)
 
		lPlayerControl.StepForward()

We feed our ‘stab‘ function FBModelLists of right, left, and center stabilization markers, and it creates virtual markers from these groups. ‘markers‘ is all the markers to be stabilized. ‘leaveOrig‘ is an option I usually add that allows for non-destructive use; in this example I have just made the function always leave the originals (as I favor this), so the option does nothing, but you could wire it up. With the original markers left, you can immediately see if there was an error in your script. (The new motion should match the original.)
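
If you want to test ‘stab‘ without any external UI, you can also call it directly from MBuilder’s Python environment; a rough sketch, where the face marker names are hypothetical placeholders (use the names in your own file):

from myLib import *
 
right  = modelsFromStrings(['1-RTMPL'])
left   = modelsFromStrings(['1-LTMPL'])
center = modelsFromStrings(['1-MNOSE'])
face   = modelsFromStrings(['1-RBROW', '1-LBROW', '1-CHIN'])    #hypothetical face marker names
stab(right, left, center, face, False)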

Creating an External UI that Uses ‘myLib’

Earlier I mentioned Creating Interactive MotionBuilder User Interface Tools, where I explain how to screenscrape/use the telnet Python Remote Server to create an interactive external UI that floats as a window in MotionBuilder itself. I also use the libraries mentioned in the above article.

The code for the facial stabilization UI I have created is here: [stab_ui.py]

I will now step through code snippets pertaining to our facial STAB tool:

def getSelection():
	selectedItems = []
	mbPipe("selectedModels = FBModelList()")
	mbPipe("FBGetSelectedModels(selectedModels,None,True)")
	for item in (mbPipe("for item in selectedModels: print item.Name")):
		selectedItems.append(item)
	return selectedItems

This returns a list of strings that are the currently selected models in MBuilder. This is the main thing our external UI does: the person needs to interactively choose the right, left, and center markers, then all the markers that will be stabilized.

At the left here you see what the UI looks like. To add some feedback to the buttons, you can make them change to reflect that the user has selected markers. We do so by changing the button text.

Example:

def rStabClick(self,event):
	self.rStabMarkers = getSelection()
	print str(self.rStabMarkers)
	self.rStab.Label = (str(len(self.rStabMarkers)) + " Right Markers")

This also stores the markers the user has chosen in the variable ‘rStabMarkers‘. Once we have all the marker sets, we need to send them to ‘myLib‘ in MBuilder so that it can run our ‘stab‘ function on them. This happens when the user clicks ‘Stabilize Markerset‘.

def stabilizeClick(self,event):
	mbPipe('from euclid import *')
	mbPipe('from myLib import *')
	mbPipe('rStab = modelsFromStrings(' + str(self.rStabMarkers) + ')')
	mbPipe('lStab = modelsFromStrings(' + str(self.lStabMarkers) + ')')
	mbPipe('cStab = modelsFromStrings(' + str(self.cStabMarkers) + ')')
	mbPipe('markerset = modelsFromStrings(' + str(self.mSetMarkers) + ')')
	mbPipe('stab(rStab,lStab,cStab,markerset,False)')

Above we use ‘modelsFromStrings‘ to feed ‘myLib’ the names of the selected models. When you run this on thousands of frames, it will actually hang for up to a minute or two while it does all the processing; I discuss optimizations below. Here is a video of what you should have when stabilization is complete:


Kill the keyframes on the root (faceAttach) to remove head motion

Conclusion: Debugging/Optimization

Remember: Your stabilization will only be as good as your STAB markers. It really pays off to create tools to check marker stability.
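
As a starting point, here is a rough sketch of one such check; this is not the tool I use at work, just the general idea that the distance between two rigid STAB markers should stay nearly constant over the take. It could live in ‘myLib‘ next to the functions above (and it uses the same slow StepForward/Evaluate approach discussed below):

#reports how much the distance between two markers drifts over the take
def stabCheck(modelA, modelB):
	lPlayer = FBPlayerControl()
	lScene = FBSystem().Scene
	lPlayer.GotoStart()
	FStart = int(lPlayer.ZoomWindowStart.GetFrame(True))
	FStop = int(lPlayer.ZoomWindowStop.GetFrame(True))
	dists = []
	for frame in range(FStart, FStop):
		lScene.Evaluate()
		dists.append(abs(vec3(modelA.Translation) - vec3(modelB.Translation)))
		lPlayer.StepForward()
	print modelA.Name + ' <-> ' + modelB.Name + ' drifts by ' + str(max(dists) - min(dists))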

Sometimes the terminal/screen scraping runs into issues. The mbPipe function can be padded out a lot and made more robust; what is here was just an example. If you look at the external Python console, you can see exactly what mbPipe is sending to MBuilder, and what it is receiving back through the terminal:

Sending>>> selectedModels = FBModelList()
Sending>>> FBGetSelectedModels(selectedModels,None,True)
Sending>>> for item in selectedModels: print item.Name
['Subject 1-RH1', 'Subject 1-RTMPL']
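
For completeness, a bare-bones version of that kind of pipe function might look like the sketch below. This is an assumption about its general shape, not the exact code from the interactive UI article; it just pushes single lines of Python through telnetlib to MotionBuilder’s Python Remote Server (port 4242 by default) and returns whatever gets printed back:

import telnetlib
 
_conn = telnetlib.Telnet('127.0.0.1', 4242)    #MBuilder's Python Remote Server
_conn.read_until('>>>')                        #wait for the first prompt
 
#send one line of python to MBuilder, return the lines it prints back
def mbPipe(command):
	print 'Sending>>> ' + command
	_conn.write(command + '\n')
	reply = _conn.read_until('>>>')            #echo, printed output, next prompt
	lines = [l.strip() for l in reply.splitlines()]
	return [l for l in lines if l and l != command.strip() and not l.startswith('>>>')]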

All of the above can be padded out and optimized. For instance, you could try to do everything without a single lPlayerControl.StepForward() or lScene.Evaluate(), but this takes a lot of MotionBuilder/programming know-how; it involves using only the keyframe data to generate your matrices, positions, etc., and never querying a model.

posted by Chris at 10:10 PM  

Friday, June 20, 2008

360 Degree Streaming Video

This is a video from a company called Immersive Media. It’s a 360 degree streaming video you can pan around and even zoom in on. Awesome stuff; their hardware even does realtime stitching, and they have an underwater housing. Check out the site for more vids; they have been to some great locations.

posted by Chris at 12:01 PM  

Friday, June 20, 2008

A Functional MotionBuilder Python Console

I was talking to my friend Marco the other day. As he is a real programmer, he is somewhat equipped with the skills required to decode MotionBuilder’s procedurally-generated Python documentation. We were both frustrated, fighting with the ‘Python Console Tool’, when I showed him the telnet interface and he was like “why don’t you just use that?”

And this is what I started doing. I now do much of my testing and work in the telnet console because, unlike the built-in console that MotionBuilder offers, the telnet window at least offers copy/paste, and you can press the up arrow to cycle through previous commands you have entered. I would suggest using this until Autodesk adds usable features to their ‘Python Console Tool’.

Here’s an example:
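
Something like this, assuming the Python Remote Server is enabled in MotionBuilder’s preferences (it listens on port 4242 by default) and you connect with any telnet client:

telnet localhost 4242
 
>>> from pyfbsdk import *
>>> cube = FBModelCube('testCube')
>>> cube.Show = True
>>> cube.Translation = FBVector3d(0, 50, 0)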

posted by Chris at 1:08 AM  

Friday, June 20, 2008

Quickly Graphing Python Data in MotionBuilder

I have been researching quick ways to output MotionBuilder data visually, which I might post about later (doing some matplotlib tests here at home). The following is probably a ‘no-brainer’ to people with a programming background, but I found it interesting. Below I am using simple hashes to graph values visually in the console.

data = [20, 15, 10, 7, 5, 4, 3, 2, 1, 1, 0]
for i in data: print '#' * i

This will output something like so:

####################
###############
##########
#######
#####
####
###
##
#
#

Here’s a better example referencing some data names, and its output in the MB pyConsole:

#data1 is a matching list of labels, one name per value in 'data'
for i in range(0,len(data)): print data1[i] + ' ' + ('#' * data[i])

python graph

posted by Chris at 12:36 AM  

Tuesday, June 17, 2008

RIP Stan Winston

One of my heroes passed away today. I never knew the guy but it made me very sad and hollow to hear he had passed. He was responsible for many of the creatures in films that made me eventually want to be a Technical Director.

posted by Chris at 12:00 AM  

Monday, June 16, 2008

High Speed Photography with the Casio EX-F1

At work we got the Casio EX-F1 for animation reference. It’s a really great, cheap solution for those looking to record high speed reference (300/600/1200fps) or HD (1080) video. Here are some videos I took a few weeks ago:

posted by Chris at 5:27 PM  

Sunday, June 15, 2008

RigPorn: Kung Fu Panda

Here are some screens of animation rigs from Kung Fu Panda:

In a shot:

posted by Chris at 2:20 AM  

Sunday, June 15, 2008

Building a J1 Remote Trigger for Vicon Datastations

Remote Trigger? Why Would I Want That?

Vicon Datastations allow you to wire up a remote trigger, which lets you start and stop a motion capture take with a physical button. This means you could also start/stop motion capture with sensors or anything else. In our case, we wanted to start/stop another device at the exact same time and have it synced with the mocap data, and also allow one person to run both the device and the mocap session.

Disclaimer: I am aware that the remote interface is the same for the V8i/612/624/460/V6 Datastations, but I built this for the V8i, which looks like this:

This is what the ‘J1 REMOTE‘ port looks like on the back of your Datastation:

RTFM

Here is the description of the J1 Remote in the Vicon hardware manual:

Located directly below the camera interface connectors, the J1 connector function is to allow the remote control of data capture from external switches or photoelectric sensors. Connecting Start (pin 3) or Stop (pin 5) to Ground (pin 7) will initiate the selected function. Pin 1 generates a negative going TTL gated reference signal, which is aligned to the camera Horizontal Synchronisation (HD) signal and present when data capture is being performed.

The hardware manual will tell you that the J1 Remote Interface Connector is a Lemo part (FGG.1B.307.CLAD52), so you will have to order this (follow the link). Below is the pinout from the manual; it’s pretty simple stuff:

Building the Trigger

Working with Relays

So, what we want to do is make a start and a stop button, or you could make an on/off switch. I made a button. The button flips a relay, which is like a switch. Below you see 5 pins, labeled ‘start‘, ‘stop‘, ‘grnd‘ and ‘coil‘. When you apply power to the coil, it will connect the grnd from stop to start and vice versa. Because it’s a magnet that flips the switch, nothing from the inner circuitry of the trigger can send any interference to the Vicon Datastation.

Below you see two relays: one triggers start/stop, the other triggers an LED. You can get relays that flip multiple poles at once; if you wanted to start/stop other devices with the same buttons, you would add more relays or use a multi-pole relay. In my example below I was sure to get relays and LEDs that work with a 9v battery; this way you do not need resistors or anything else to alter the voltage.

The Altogether

This is what a final remote trigger can look like: green starts, red stops, and the green LED can be on while capturing. The relay above will flip the light on/off based on button contact, even if red is pressed first, so you may want to go a different route if someone has butter fingers. The cord is durable microphone cable, as we only need 3 wires (start/stop/grnd; mic cable = left/right/grnd).

Note: The J1 Remote Trigger works in Vicon Workstation; however, when Vicon updated its software to IQ, they did not want to spend the time to continue support of the remote trigger. IQ supports newer technology like the ‘MX Remote’ made by Vicon, which they would rather have you purchase. So yes, if you update your Vicon software, certain features of your Vicon hardware will become useless.

posted by Chris at 1:50 AM  

Tuesday, June 10, 2008

Poor Man’s Mocap

This year my friend Judd gave a talk entitled Uncharted Animation: An In-depth Look at the Character Animation Workflow and Pipeline. In the talk, he showed what they call ‘poor man’s mocap’, where the animator can load up a sequence of frames that is synced with the timeline in Maya, so as someone scrubs an animation, it scrubs the frames of the video. I have duplicated this in a small maxscript available in cryTools. You can grab it in the [Tutorials/Files] section.

posted by Chris at 2:24 PM  
