Stumbling Toward 'Awesomeness'

A Technical Art Blog

Monday, April 19, 2010

Dealing with File Sequences in Python

I have been parsing through the files of other people a lot lately, and finally took the time to make a little function to give me general information about a sequence of files. It uses regex to yank the numeric parts out of a filename and figure out the padding, and glob to tell you how many files are in the sequence. Here’s the code and an example usage:

import os
import re
import glob

#returns [base name, padding, filetype, number of files, first file, last file]
def getSeqInfo(file):
	dir = os.path.dirname(file)
	file = os.path.basename(file)
	#the last run of digits in the filename is the frame number
	segNum = re.findall(r'\d+', file)[-1]
	numPad = len(segNum)
	baseName = file.split(segNum)[0]
	fileType = file.split('.')[-1]
	#build a glob pattern with one '?' wildcard per digit of padding
	globString = baseName + '?' * numPad
	theGlob = glob.glob(os.path.join(dir, globString + file.split(segNum)[1]))
	#glob does not guarantee order, so sort before grabbing first/last
	theGlob.sort()
	numFrames = len(theGlob)
	firstFrame = theGlob[0]
	lastFrame = theGlob[-1]
	return [baseName, numPad, fileType, numFrames, firstFrame, lastFrame]

Here is an example of usage:

print getSeqInfo('E:\\data\\data\\Games\\Project\\CaptureOutput\\Frame000547.jpg')
>>['Frame', 6, 'jpg', 994, 'E:\\data\\data\\Games\\Project\\CaptureOutput\\Frame000000.jpg', 'E:\\data\\data\\Games\\Project\\CaptureOutput\\Frame000993.jpg']

I know this is pretty simple, but I looked around a bit online and didn’t see anything readily available showing how to deal with different numbered file sets. I have needed something like this for a while that will work with anything from OBJs sent from external contractors, to images from After Effects…
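If you want to go a step further and survey a whole directory of mixed sequences, the same "last run of digits" regex trick works on a list of names. Here is a hypothetical helper (not part of getSeqInfo above) that groups filenames into sequences keyed by base name, padding, and suffix:

```python
import re
from collections import defaultdict

def groupSequences(names):
	#group names like 'Frame0001.jpg' by (base, padding, suffix)
	seqs = defaultdict(list)
	for name in names:
		runs = list(re.finditer(r'\d+', name))
		if not runs:
			continue
		#the last run of digits is treated as the frame number
		m = runs[-1]
		key = (name[:m.start()], len(m.group()), name[m.end():])
		seqs[key].append(name)
	return dict((k, sorted(v)) for k, v in seqs.items())
```

Feeding this the contents of a capture folder gives you one entry per sequence, so you can run getSeqInfo (or anything else) per group.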

posted by admin at 6:49 PM  

Monday, April 12, 2010

Drop Files on a Python Script

So I have always wondered how you can create something almost like a ‘droplet’, to steal the Photoshop lingo, from a python script. A while ago I came across some sites showing how to edit shellex keys in the registry so that files dropped on any python script are fed to it as args (Windows).

It’s really simple, you grab this reg file [py_drag_n_drop.reg] and install it.
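If you are curious what the reg file actually does before merging it: it registers the standard Windows Script Host drop handler for the Python file type, so Explorer passes dropped files to the script as arguments. It boils down to something like this (the exact ProgID, shown here as Python.File, depends on your Python install):

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Python.File\shellex\DropHandler]
@="{60254CA5-953B-11CF-8C96-00AA00B8708C}"
```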

Now when you drop files onto a python script, their filenames will be passed as args. Here’s a simple script to test it:

import sys
f = open('c:\\tmp.txt', 'w')
for arg in sys.argv:
    f.write(arg + '\n')
f.close()

When you save this and drop files onto its icon, it will create tmp.txt, which will look like this:

X:\photos\2010.04 - easter weekend\fuji\DSCF9048.MPO
X:\photos\2010.04 - easter weekend\fuji\DSCF9049.MPO
X:\photos\2010.04 - easter weekend\fuji\DSCF9050.MPO
X:\photos\2010.04 - easter weekend\fuji\DSCF9051.MPO
X:\photos\2010.04 - easter weekend\fuji\DSCF9052.MPO

The script itself is the first arg, then all the files follow. This way you can easily create scripts that accept drops to do things like convert files, upload files, etc.
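To sketch that idea, here is a minimal hypothetical ‘droplet’ using only the standard library: dropping files onto it copies each one into a backup folder (the destination path is made up for the example; the work lives in a function so it is easy to test):

```python
import os
import shutil
import sys

def backupFiles(paths, destDir):
	#copy each dropped file into destDir, returning the new paths
	if not os.path.isdir(destDir):
		os.makedirs(destDir)
	copied = []
	for p in paths:
		if os.path.isfile(p):
			target = os.path.join(destDir, os.path.basename(p))
			shutil.copy2(p, target)
			copied.append(target)
	return copied

if __name__ == '__main__':
	#sys.argv[0] is the script itself; the dropped files follow
	backupFiles(sys.argv[1:], 'c:\\backup')
```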

posted by admin at 12:33 AM  

Wednesday, April 7, 2010

PyQt4 UIC Module Example

I keep amazing myself at how much knowledge I have forgotten in the past five or six months… Most of the work I did in the past year utilized the UIC module to load UI files directly, but I can find very little information about this online. I was surprised to see that even the trusty old Rapid GUI Programming with Python and Qt book doesn’t cover loading UI files with the UIC module.

So, here is a tiny script with UI file [download] that will generate a pyqt example window that does ‘stuff’:

import sys
from PyQt4 import QtGui, QtCore, uic

class TestApp(QtGui.QMainWindow):
	def __init__(self):
		QtGui.QMainWindow.__init__(self)
		self.ui = uic.loadUi('X:/projects/2010/python/pyqt_tutorial/pyqt_tutorial.ui')
		self.ui.show()
		self.connect(self.ui.doubleSpinBox, QtCore.SIGNAL("valueChanged(double)"), spinFn)
		self.connect(self.ui.comboBox, QtCore.SIGNAL("currentIndexChanged(QString)"), comboFn)
		self.connect(self.ui.pushButton, QtCore.SIGNAL("clicked()"), buttonFn)

def spinFn(value):
	win.ui.doubleSpinBoxLabel.setText('doubleSpinBox is set to ' + str(value))
def buttonFn():
	#update the window title when the button is pressed
	win.ui.setWindowTitle('pushButton pressed')
def comboFn(value):
	win.ui.comboBoxLabel.setText(str(value) + ' is selected')

if __name__ == "__main__":
	app = QtGui.QApplication(sys.argv)
	win = TestApp()
	sys.exit(app.exec_())

Change the path to reflect where you have saved the UI file, and when you run the script you should get this:

EDIT: A few people have asked me to update this for other situations

PySide Inside Maya:

import sys
from PySide.QtUiTools import *
from PySide.QtCore import *
from PySide.QtGui import *

class TestApp(QMainWindow):
	def __init__(self):
		QMainWindow.__init__(self)
		loader = QUiLoader()
		self.ui = loader.load('c:/pyqt_tutorial.ui')
		self.ui.show()
		self.connect(self.ui.doubleSpinBox, SIGNAL("valueChanged(double)"), spinFn)
		self.connect(self.ui.comboBox, SIGNAL("currentIndexChanged(QString)"), comboFn)
		self.connect(self.ui.pushButton, SIGNAL("clicked()"), buttonFn)

def spinFn(value):
	win.ui.doubleSpinBoxLabel.setText('doubleSpinBox is set to ' + str(value))
def buttonFn():
	#update the window title when the button is pressed
	win.ui.setWindowTitle('pushButton pressed')
def comboFn(value):
	win.ui.comboBoxLabel.setText(str(value) + ' is selected')

win = TestApp()

PyQT Inside Maya:

import sys
from PyQt4 import QtGui, QtCore, uic

class TestApp(QtGui.QMainWindow):
	def __init__(self):
		QtGui.QMainWindow.__init__(self)
		self.ui = uic.loadUi('c:/pyqt_tutorial.ui')
		self.ui.show()
		self.connect(self.ui.doubleSpinBox, QtCore.SIGNAL("valueChanged(double)"), spinFn)
		self.connect(self.ui.comboBox, QtCore.SIGNAL("currentIndexChanged(QString)"), comboFn)
		self.connect(self.ui.pushButton, QtCore.SIGNAL("clicked()"), buttonFn)

def spinFn(value):
	win.ui.doubleSpinBoxLabel.setText('doubleSpinBox is set to ' + str(value))
def buttonFn():
	#update the window title when the button is pressed
	win.ui.setWindowTitle('pushButton pressed')
def comboFn(value):
	win.ui.comboBoxLabel.setText(str(value) + ' is selected')

win = TestApp()
posted by admin at 11:54 PM  

Tuesday, March 30, 2010

32K Sistine Chapel CubeMap [Python How-To]

The Vatican recently put up an interactive Sistine Chapel flash application. You can pan around the entire room and zoom in and out in great detail.

The Vatican is not very open with its art; the reason they scream ‘NO PHOTO’ when you pull a camera out in the chapel is that they sold the ability to take photos of it to a Japanese TV station (Nippon TV) for 4.2 million dollars. Because the ceiling has long been in the public domain, the only way they can sell ‘the right to photograph’ the ceiling is by screwing over us tourists who visit. If you take a photo, they have no control over that image, because they don’t own the copyright of the work.

Many of you who know me, know I am a huge fan of Michelangelo’s work, this data was just too awesomely tempting and when I saw it posted publicly online, I really wanted to get my hands on the original assets.

Here is a python script to grab all of the image tiles that the flash app reads, and then generate the 8k faces of the cubemap. In the end you will have a 32,000 pixel cubemap.

First we copy the swatches from the website:

def getSistineCubemap(saveLoc):
	import urllib
	#define the faces of the cubemap, using their own lettering scheme
	faces = ['f','b','u','d','l','r']
	#location of the images
	url = ''
	#copy all the swatches to your local drive
	for face in faces:
		for x in range(1,9):
			for y in range(1,9):
				file = (face + '_' + str(y) + '_' + str(x) + '.jpg')
				urllib.urlretrieve((url + file), (saveLoc + file))
				print "saved " + file

Next we use PIL to stitch them together:

def stitchCubeMapFace(theImage, x, y, show):
	from PIL import Image
	from os import path
	file = theImage.split('/')[-1]
	fileSplit = file.split('_')
	#create an 8k face from the first swatch
	im = Image.open(theImage)
	im = im.resize((8000, 8000), Image.NEAREST)
	thePath = path.split(theImage)[0]
	xPixel = 0
	yPixel = 0
	#loop through the swatches, stitching them together
	for y_ in range(1, x+1):
		for x_ in range(1, y+1):
			if yPixel == 8000:
				yPixel = 0
			nextImage = (thePath + '/' + fileSplit[0] + '_' + str(x_) + '_' + str(y_) + '.jpg')
			print ('Merging ' + nextImage + ' @' + str(xPixel) + ',' + str(yPixel))
			loadImage = Image.open(nextImage)
			im.paste(loadImage, (xPixel, yPixel))
			yPixel += 1000
		xPixel += 1000
	saveImageFile = (thePath + '/' + fileSplit[0] + '_face.jpg')
	print ('Saving face: ' + saveImageFile)
	#save the image
	im.save(saveImageFile, 'JPEG')
	#load the image in the default image viewer for checking
	if show == True:
		import webbrowser
		webbrowser.open(saveImageFile)
Here is an example of the input params:

stitchCubeMapFace('D:/sistineCubeMap/r_1_1.jpg', 8, 8, True)
posted by admin at 7:42 PM  

Wednesday, July 8, 2009

Buggy Camera Issues In Maya on x64

Many, many people are having weird, buggy camera issues where you rotate a view and it snaps back to the pre-tumbled state (the view does not update properly). There are posts all over, and Autodesk’s official response is “Consumer gaming videocards are not supported”. Really? That’s basically saying all consumer video cards, gaming or not, are unsupported. I have had this issue on my laptop, which is surely not a ‘gaming’ machine. Autodesk says the ‘fix’ is to upgrade to an expensive pro-level video card. But what they might tell you if they weren’t partnered with nVidia is: it’s an easy fix!

Find your Maya ENV file:

C:\Documents and Settings\Administrator\My Documents\maya\2009-x64\Maya.env

And add this environment variable to it:

MAYA_GEFORCE_SKIP_OVERLAY = 1
Autodesk buried this information in their Maya 2009 Late Breaking Release Notes, and it fixes the issue completely! However, even on their official forum, Autodesk employees and moderators reply to these draw errors as follows:

Maya 2009 was tested with a finite number of graphics cards from ATI and Nvidia, with drivers from each vendor that provided the best performance with the least amount of issues (at the time of product launch). A list of officially qualified hardware can be found here. Maya is not qualified/supported on consumer gaming cards. Geforce card users can expect to have issues. This is clearly stated in the official qualification charts mentioned above.

posted by admin at 10:43 AM  

Wednesday, September 17, 2008

Visualizing MRI Data in 3dsMax

Many of you might remember the fluoroscopic shoulder carriage videos I posted on my site about 4 years ago. I always wanted to do a sequence of MRIs of the arm moving around. Thanks to Helena, an MRI tech that I met through someone, I did just that: I was able to get ~30 mins of idle time on the machine while on vacation.

The data that I got was basically image data: slices along an axis. I wanted to visualize this data in 3D, but they did not have software to do this at the hospital. I really wanted to see the muscles and bones posed in three-dimensional space as the arm went through different positions, so I decided to write some visualization tools myself in maxscript.

At left is a 512×512 MRI of my shoulder, arm raised (image downsampled to 256, animation on 5’s). The MRI data has some ‘wrap around’ artifacts because it was a somewhat small MRI (3 tesla) and I am a big guy; when things are close to the ‘wall’ they get these artifacts, and we wanted to see my arm. I am uploading the raw data for you to play with, you can download it from here: [data01] [data02]

Volumetric Pixels

Above is an example of 128×128 10 slice reconstruction with greyscale cubes.

I wrote a simple tool called ‘mriView’. I will explain how I created it below and you can download it and follow along if you want. [mriView]

The first thing I wanted to do was create ‘volumetric pixels’, or ‘voxels’, from the data. I decided to do this by going through all the images, culling what I didn’t want, and creating grayscale cubes out of the rest. There was a great example in the maxscript docs called ‘How To … Access the Z-Depth channel’ which I picked some pieces from; it basically shows you how to efficiently read an image and generate 3d data from it.

But we first need to get the data into 3dsMax. I needed to load sequential images, and I decided the easiest way to do this was load AVI files. Here is an example of loading an AVI file, and treating it like a multi-part image (with comments):

on loadVideoBTN pressed do
(
	--ask the user for an avi
	f = getOpenFileName caption:"Open An MRI Slice File:" filename:"c:/" types:"AVI(*.avi)|*.avi|MOV(*.mov)|*.mov|All|*.*|"
	if f == undefined then (return undefined)
	mapLoc = f
	map = openBitMap f
	--get the width and height of the video
	heightEDT2.text = map.height as string
	widthEDT2.text = map.width as string
	--get how many frames the video has
	vidLBL.text = (map.numFrames as string + " slices loaded.")
	loadVideoBTN.text = getFilenameFile f
	imageLBL.text = ("Full Image Yield: " + (map.height*map.width) as string + " voxels")
	slicesEDT2.text = map.numFrames as string
	threshEDT.text = "90"
)

We now have the height in pixels, the width in pixels, and the number of slices. This is enough data to begin a simple reconstruction.

We will do so by visualizing the data with cubes, one cube per pixel that we want to display. Be careful, though: a simple 256×256 video is already potentially 65,536 cubes per slice! In the tool, you can see that I put in the original image values, but allow the user to crop out a specific area.

Below we go through each slice, then go row by row, looking pixel by pixel looking for ones that have a gray value above a threshold (what we want to see), when we find them, we make a box in 3d space:

height = 0.0
--this loop iterates through all slices (frames of video)
for frame = (slicesEDT1.text as integer) to (slicesEDT2.text as integer) do
(
	--seek to the frame of video that corresponds to the current slice
	map.frame = frame
	--loop that traverses y, which corresponds to the image height
	for y = mapHeight1 to mapHeight2 do
	(
		voxels = #()
		currentSlicePROG.value = (100.0 * y / totalHeight)
		--read a line of pixels
		pixels = getPixels map [0,y-1] totalWidth
		--loop that traverses x, the line of pixels across the width
		for x = 1 to totalWidth do
		(
			--if the pixel is brighter than the threshold, make a cube with its color in 3d space
			if (greyscale pixels[x]) >= threshold then
			(
				b = box width:1 length:1 height:1 name:(uniqueName "voxel_")
				b.pos = [x,-y,height]
				b.wirecolor = color (greyscale pixels[x]) (greyscale pixels[x]) (greyscale pixels[x])
				append voxels b
			)
		)
	)
	--garbage collection is important on large datasets
	gc()
	--increment the height to bump your cubes to the next slice
	height += 1
	progLBL.text = ("Slice " + (height as integer) as string + "/" + (totalSlices as integer) as string + " completed")
	slicePROG.value = (100.0 * (height/totalSlices))
)

Things really start to choke when you are using cubes, mainly because you are generating so many entities in the world. I added the option to merge all the cubes row by row, which sped things up, and helped memory, but this was still not really the visual fidelity I was hoping for…

Point Clouds and ‘MetaBalls’

I primarily wanted to generate meshes from the data, so the next thing I tried was making a point cloud, then using that to generate a ‘BlobMesh’ (metaball) compound geometry type. In the example above, you see the head of my humerus and the tissue connected to it. Below is the code; it is almost simpler than the boxes, it just takes finessing editable poly. I have only commented the changes:

I make a plane and then delete all the verts to give me a ‘clean canvas’ of sorts; if anyone knows a better way of doing this, let me know:

p = convertToPoly(Plane lengthsegs:1 widthsegs:1)
p.name = "VoxelPoint_dataSet"
polyop.deleteVerts $VoxelPoint_dataSet #(1,2,3,4)

That and when we created a box before, we now create a point:

polyop.createVert $VoxelPoint_dataSet [x,-y,height]

This can get really time- and resource-intensive. As a result, I would let some of these go overnight. This was pretty frustrating because it slowed iteration time down a lot, and the blobMesh modifier was very slow as well.

Faking Volume with Transparent Planes

I was talking to Marco at work (Technical Director) and showing him some of my results, and he asked me why I didn’t just try to use transparent slices. I told him I had thought about it, but I really know nothing about the material system in 3dsMax, much less its maxscript exposure. He said that was a good reason to try it, and I agreed.

I started by making one material per slice. This worked well until I realized that the 3dsMax material editor has a limit of 24 material slots. Instead of raising this limit, Autodesk added ‘multi-materials’, which can have n sub-materials. So I adjusted my script to use sub-materials:

--here we set the number of sub-materials to the number of slices
meditMaterials[matNum].materialList.count = totalSlices
--you also have to properly set the materialIDList
for m=1 to meditMaterials[matNum].materialList.count do
     meditMaterials[matNum].materialIDList[m] = m

Now we iterate through, generating the planes, assigning sub-materials to them with the correct frame of video for the corresponding slice:

p = plane name:("slice_" + frame as string) pos:[0,0,frame] width:totalWidth length:totalHeight
p.lengthsegs = 1
p.widthsegs = 1
p.material = meditMaterials[matNum][frame]
p.castShadows = off
p.receiveshadows = off
meditMaterials[matNum].materialList[frame].twoSided = on
meditMaterials[matNum].materialList[frame].selfIllumAmount = 100
meditMaterials[matNum].materialList[frame].diffuseMapEnable = on
newMap = meditMaterials[matNum].materialList[frame].diffuseMap = Bitmaptexture filename:mapLoc
newmap.starttime = frame
newmap.playBackRate = 1
newmap = meditMaterials[matNum].materialList[frame].opacityMap = Bitmaptexture fileName:mapLoc
newmap.starttime = frame
newmap.playBackRate = 1
showTextureMap p.material on
mat += 1

This was very surprising: it not only runs fast, it looks great. Of course you are generating no geometry, but it is a great way to visualize the data. The below example is a 512×512 MRI of my shoulder (arm raised) rendered in realtime. The only problem I had was an alpha-test render error when viewed directly from the bottom, but this looks to be a 3dsMax issue.

I rendered the slices cycling from bottom to top. In one MRI the arm is raised, in the other, the arm lowered. The results are surprisingly decent. You can check that video out here. [shoulder_carriage_mri_xvid.avi]

You can also layer multiple slices together; above I have isolated the muscles and soft tissue from the skin, cartilage, and bones. I did this by looking for pixels in certain luminance ranges. In the image above I am ‘slicing’ away the white layer halfway down the torso; below you can see a video of this in realtime as I search for the humerus. This is a really fun and interesting way to view it:

Where to Go From here

I can now easily load up any of the MRI data I have and view it in 3d, though I would like to be able to better create meshes from specific parts of the data, in order to isolate muscles or bones. To do this I need to allow the user to ‘pick’ a color from part of the image, and then use this to isolate just those pixels and remesh just that part. I would also like to add something that allows you to slice through the planes from any axis. That shouldn’t be difficult, just will take more time.

posted by Chris at 3:48 PM  

Monday, July 28, 2008

Gleaning Data from the 3dsMax ‘Reaction Manager’

This is something we had been discussing over at CGTalk; we couldn’t find a way to figure out Reaction Manager links through maxscript. It just is not exposed. Reaction Manager is like Set Driven Key in Maya, or a Relation Constraint in MotionBuilder. In order to sync rigging components between the packages, you need to be able to query these driven relationships.

I set about doing this by checking dependencies, and it turns out it is possible. It’s a headache, but it is possible!

The problem is that even though slave nodes have controllers with names like “Float_Reactor”, the master nodes have nothing that distinguishes them. I saw that if I got dependents on a master node (its controllers, specifically the one that drives the slave), there was something called ‘ReferenceTarget:Reaction_Master‘:

refs.dependents $.position.controller
#(Controller:Position_Rotation_Scale, ReferenceTarget:Reaction_Master, Controller:Position_Reaction, ReferenceTarget:Reaction_Set, ReferenceTarget:Reaction_Manager, ReferenceTarget:ReferenceTarget, ReferenceTarget:Scene, Controller:Position_Rotation_Scale, $Box:box02 @ [58.426544,76.195091,0.000000], $Box:box01 @ [-42.007244,70.495964,0.000000], ReferenceTarget:NodeSelection, ReferenceTarget:ReferenceTarget, ReferenceTarget:ReferenceTarget)

This is actually a class, as you can see below:

exprForMAXObject (refs.dependents $.position.controller)[2]
"<<Reaction Master instance>>"
getclassname (refs.dependents $.position.controller)[2]
"Reaction Master"

So now we get the dependents of this ‘Reaction Master’, and it gives us the node that it is driving:

refs.dependentNodes (refs.dependents $.position.controller)[2]
#($Box:box02 @ [58.426544,76.195091,0.000000])

So here is a fn that gets Master information from a node:

fn getAllReactionMasterRefs obj =
(
	local nodeRef
	local ctrlRef
	for n = 1 to obj.numSubs do
	(
		ctrl = obj[n].controller
		if (ctrl != undefined) then
		(
			for item in (refs.dependents ctrl) do
			(
				if item as string == "ReferenceTarget:Reaction_Master" then
				(
					nodeRef = (refs.dependentNodes item)
					ctrlRef = ctrl
				)
			)
			getAllReactionMasterRefs obj[n]
		)
	)
	return #(nodeRef, ctrlRef)
)

The node above returns:

getAllReactionMasterRefs $
#(#($Box:box02 @ [58.426544,76.195091,0.000000]), Controller:Position_Rotation_Scale)

The first item is an array of the referenced node, and the second is the controller that is driving *some* aspect of that node.

You now loop through this node looking for ‘Float_Reactor‘, ‘Point3_Reactor‘, etc, and then query them as stated in the manual (‘getReactionInfluence‘, ‘getReactionFalloff‘, etc) to figure out the relationship.

Here is an example function that prints out all reaction data for a slave node:

fn getAllReactionControllers obj =
(
	local list = #()
	for n = 1 to obj.numSubs do
	(
		ctrl = obj[n].controller
		if (ctrl != undefined) then
		(
			--print (classof ctrl)
			if (classof ctrl) == Float_Reactor \
			or (classof ctrl) == Point3_Reactor \
			or (classof ctrl) == Position_Reactor \
			or (classof ctrl) == Rotation_Reactor \
			or (classof ctrl) == Scale_Reactor then
			(
				--reactorDumper is a helper (not shown) that prints the reaction data
				reactorDumper obj[n].controller data
			)
		)
		getAllReactionControllers obj[n]
	)
)

Here is the output from ‘getAllReactionControllers $Box2‘:

ReactionCount - 2
ReactionName - My Reaction
    ReactionFalloff - 1.0
    ReactionInfluence - 100.0
    ReactionStrength - 1.2
    ReactionState - [51.3844,-17.2801,0]
    ReactionValue - [-40.5492,-20,0]
ReactionName - State02
    ReactionFalloff - 2.0
    ReactionInfluence - 108.665
    ReactionStrength - 1.0
    ReactionState - [65.8385,174.579,0]
    ReactionValue - [-48.2522,167.132,0]

So, once again, no free lunch here. You can loop through the scene looking for Masters, then derive the slave nodes, then dump their info. It shouldn’t be too difficult, as you can only have one Master; but if you have multiple reaction controllers in each node affecting the other, it could be a mess. I threw this together in a few minutes just to see if it was possible, not to hand out a polished, working implementation.

posted by Chris at 4:42 PM  

Monday, July 28, 2008

Fixing Clipboard Problems in Photoshop

Over the past few years I have noticed that Photoshop often, usually after it has been left idling for a few hours or days, no longer imports the Windows clipboard.

Here is a fix if you don’t mind getting your hands dirty in the registry:


The above is for Photoshop CS2; depending on your version you will have to look in different registry locations. There is also a problem when an incoming clipboard image hits a ‘size limit’ and Photoshop dumps it. This can also be circumvented by editing the registry:

posted by Chris at 10:24 AM  

Friday, July 11, 2008

Simple Perforce Animation Browser/Loader for MotionBuilder

This is a simple proof-of-concept showing how to implement a perforce animation browser via python for MotionBuilder. Clicking an FBX animation syncs it and loads it.

The script can be found here: []; it requires the [wx] and [p4] libraries.

Clicking directories descends into them; clicking fbx files syncs them and loads them in MotionBuilder. This is just a test; the ‘[..]’ doesn’t even go up a directory. Opening an animation does not check it out. There is good documentation for the p4 python lib, you can start there; it’s pretty straightforward and easy, and it sure beats screen scraping p4 terminal output.

You will see the following; you should replace this with the p4 location of your animations, as it will act as the starting directory.

	info = p4i.run("info")
	print info[0]['clientRoot']

That should about do it. There are plenty of P4 tutorials out there, and my code is pretty straightforward. The only problem was where I instanced it: be sure to instance P4 with a name other than ‘p4’. I did this and it did not work; using ‘p4i’ it worked without incident:

p4i = P4.P4()
posted by Chris at 6:45 PM  

Sunday, June 29, 2008

Debugging a Bluescreen

This is a tip that a coworker (Tetsuji) showed me a year or so ago. I was pretty damn sure my ATI drivers were bluescreening my system, but I wanted to hunt down proof. So: you have just had a bluescreen and your pc rebooted. Here’s how to hunt down what happened.

First thing you should see when you log back in is this:

It’s really important that you not do anything right now; especially don’t click one of those buttons. Click the ‘click here‘ text and then you will see this window.

Ok, so this doesn’t tell us much at all. We want to get the ‘technical information’, so click the link for that and you will see something like this:

Here is why we did not click those buttons before: when you click them, these files get deleted. So copy this path and go to this folder, copy the contents elsewhere, and close all those windows. You now have these three files:

The ‘dmp’ file (dump file) will tell us what bluescreened our machine, but we need some tools to read it. Head over to the Microsoft site and download ‘Debugging Tools for Windows’ (x32, x64). Once installed, run ‘WinDbg‘. Select File->Open Crash Dump… and point it at your DMP file. Once it opens, scroll down and look for something like this:

In this example the culprit was ‘pgfilter.sys‘, something installed by ‘Peer Guardian’, a hacky privacy protection tool I use at home. There is a better way to cut through a dump file: you can also type ‘!analyze -v‘, which will generate something like this:

In this example above you see that it’s an ATI driver issue, which I fixed by replacing the card with an nvidia and tossing the ATI into our IT parts box (junkbox).

posted by Chris at 5:01 PM  

Sunday, June 29, 2008

You Suck At Photoshop

You Suck at Photoshop always cracks me up; you might like it as well.

posted by Chris at 1:47 PM  

Saturday, June 21, 2008

Facial Stabilization in MotionBuilder using Python

Facial motion capture stabilization is basically where you isolate the movement of the face from the movement of the head. This sounds pretty simple, but it is actually a really difficult problem. In this post I will talk about the general process and give you an example facial stabilization python script.

Disclaimer: The script I have written here is loosely adapted from a MEL script in the book Mocap for Artists, and not something proprietary to Crytek. This is a great book for people of all experience levels, and has a chapter dedicated to facial mocap. Lastly, this script is not padded out or optimized.

To follow this you will need some facial mocap data; there is some freely downloadable here. Grab the FBX file.

(Image: Andy Serkis in Weta’s head-stabilization ‘halo’ rig.)

Stabilization markers

Get at least 3 markers on the actor that do not move when they move their face. These are called ’stabilization markers’ (STAB markers). You will use these markers to create a coordinate space for the head, so it is important that they not move. STAB markers are commonly found on the left and right temples and the nose bridge. Using a headband and creating virtual markers from multiple solid left/right markers works even better. Headbands move; it’s good to keep this in mind. Above you see a special head rig used on Kong to create stable markers.

It is a good idea to write some tools to help you out here. At work I have written tools to parse a performance and tell me the most stable markers at any given time, if you have this data, you can also blend between them.

Load up the facial mocap file you have downloaded, it should look something like this:

In the data we have, you can delete the root and the headband markers; 1-RTMPL, 1-LTMPL, and 1-MNOSE could all be considered STAB markers.

General Pipeline

As you can see, mocap data is just a bunch of translating points. So what we want to do is create a new coordinate system that has the motion of the head, and then use it to isolate the facial movement.
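The head coordinate system we build later boils down to plain vector math: take a few stable points, form two direction vectors from them, and complete an orthonormal basis with cross products. Here is a standalone, plain-Python sketch of that idea (hypothetical helper names, no MotionBuilder types):

```python
import math

def normalize(v):
	#scale a 3-tuple to unit length
	l = math.sqrt(sum(c * c for c in v))
	return tuple(c / l for c in v)

def cross(a, b):
	#standard 3d cross product
	return (a[1]*b[2] - a[2]*b[1],
	        a[2]*b[0] - a[0]*b[2],
	        a[0]*b[1] - a[1]*b[0])

def headBasis(rTemple, lTemple, noseBridge):
	#x axis: from the right temple toward the nose bridge
	x = normalize(tuple(c - r for c, r in zip(noseBridge, rTemple)))
	#z axis: perpendicular to x and the temple-to-temple direction
	z = normalize(cross(tuple(l - r for l, r in zip(lTemple, rTemple)), x))
	#y completes the right-handed orthonormal basis
	y = cross(x, z)
	return x, y, z
```

The real script does this with pyEuclid vectors inside MotionBuilder, but the math is identical: once you have the basis and an origin, you can express every facial marker in head space, which is what stabilization means.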

This will take some processing and also an interactive user interface. You may have seen my tutorial on Creating Interactive MotionBuilder User Interface Tools; you should familiarize yourself with that, because this builds on it. Below is the basic idea:

You create a library ‘myLib’ that you load into MotionBuilder’s python environment. This is what does the heavy lifting; I say this because you don’t want to do things like send the position of every marker, every frame, to your external app via telnet. I also load pyEuclid, a great vector library, because I didn’t feel like writing my own vector class. (MBuilder has no vector class)

Creating ‘myLib’

So we will now create our own library that sits inside MBuilder; this will essentially be a ‘toolkit’ that we communicate with from the outside. Your ‘myLib’ can be called anything, but it should be the place you store functions that do the real processing jobs; you will feed them from the outside UI later. The first thing you will need inside the MB python environment is something to cast FBVector3d types into pyEuclid. This is fairly simple:

#casts an FBVector3d (or anything indexable) to a pyEuclid Vector3
def vec3(point3):
	return Vector3(point3[0], point3[1], point3[2])
#casts a pyEuclid vector back to an FBVector3d
def fbv(point3):
	return FBVector3d(point3.x, point3.y, point3.z)

Next is something that will return an FBModelList of models from an array of names, this is important later when we want to feed in model lists from our external app:

#returns an array of models when given an array of model names
#useful with external apps/telnetlib ui
def modelsFromStrings(modelNames):
	output = []
	for name in modelNames:
		output.append(FBFindModelByName(name))
	return output

Now, if you were to take these snippets and save them as a file called myLib.py in your MBuilder directory tree (MotionBuilder75 Ext2\bin\x64\python\lib), you can load them into the MBuilder environment. (You should have also placed pyEuclid here.)

casting fbvectors to pyeuclid

It’s always good to mock up code in telnet because, unlike the python console in MBuilder, it supports copy/paste, etc.

In the image above, I get the position of a model in MBuilder; it returns an FBVector3d. I then import myLib and pyEuclid and use our function above to ‘cast’ the FBVector3d to a pyEuclid vector. It can now be added, subtracted, multiplied, and more; all things that are not possible with the default MBuilder python tools. Our other function, ‘fbv()‘, casts pyEuclid vectors back to FBVector3d so that MBuilder can read them.

So we can now do vector math in motionbuilder! Next we will add some code to our ‘myLib’ that stabilizes the face.

Adding Stabilization-Specific Code to ‘myLib’

One thing we will need to do a lot is generate ‘virtual markers’ from the existing markers. To do this, we need a function that returns the average position of however many vectors (marker positions) it is fed.

#returns average position of an FBModelList as FBVector3d
def avgPos(models):
	mLen = len(models)
	if mLen == 1:
		return fbv(vec3(models[0].Translation))
	total = vec3(models[0].Translation)
	for i in range (1, mLen):
		total += vec3(models[i].Translation)
	avgTranslation = total/mLen
	return fbv(avgTranslation)

Here is an example of avgPos() in use:
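Since avgPos() itself only runs inside MotionBuilder, here is a plain-Python sketch of the same idea, with a hypothetical FakeModel standing in for a real FBModel:

```python
# FakeModel stands in for an FBModel; its Translation mimics FBVector3d
class FakeModel:
	def __init__(self, translation):
		self.Translation = translation

# plain-list version of avgPos(): average the Translation of each model
def avg_pos(models):
	total = [0.0, 0.0, 0.0]
	for m in models:
		for i in range(3):
			total[i] += m.Translation[i]
	return [c / len(models) for c in total]

# three 'markers' collapse into one virtual marker at their average position
markers = [FakeModel([0, 0, 0]), FakeModel([2, 4, 6]), FakeModel([4, 8, 12])]
print(avg_pos(markers))  # -> [2.0, 4.0, 6.0]
```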

Now onto the stabilization code:

#stabilizes face markers; input is four FBModelList arrays, plus leaveOrig for leaving the original markers
def stab(right,left,center,markers,leaveOrig):
	pMatrix = FBMatrix()
	lSystem = FBSystem()
	lScene = lSystem.Scene
	newMarkers = []
	def faceOrient():
		Rpos = vec3(avgPos(right))
		Lpos = vec3(avgPos(left))
		Cpos = vec3(avgPos(center))
		#build the coordinate system of the head
		xVec = (Cpos - Rpos)
		xVec = xVec.normalize()
		zVec = ((Cpos - vec3(faceAttach.Translation)).normalize()).cross(xVec)
		zVec = zVec.normalize()
		yVec = xVec.cross(zVec)
		yVec = yVec.normalize()
		facePos = (Rpos + Lpos)/2
		pMatrix[0] = xVec.x
		pMatrix[1] = xVec.y
		pMatrix[2] = xVec.z
		pMatrix[4] = yVec.x
		pMatrix[5] = yVec.y
		pMatrix[6] = yVec.z
		pMatrix[8] = zVec.x
		pMatrix[9] = zVec.y
		pMatrix[10] = zVec.z
		pMatrix[12] = facePos.x
		pMatrix[13] = facePos.y
		pMatrix[14] = facePos.z
	#keys the candidate translation and rotation of an animNodeList
	def keyTransRot(animNodeList):
		for lNode in animNodeList:
			if (lNode.Name == 'Lcl Translation'):
				lNode.KeyCandidate()
			if (lNode.Name == 'Lcl Rotation'):
				lNode.KeyCandidate()
	Rpos = vec3(avgPos(right))
	Lpos = vec3(avgPos(left))
	Cpos = vec3(avgPos(center))
	#create a null that will visualize the head coordsys, then position and orient it
	faceAttach = FBModelNull("faceAttach")
	faceAttach.Show = True
	faceAttach.Translation = fbv((Rpos + Lpos)/2)
	#create new set of stabilized nulls, non-destructive; this should be tied to 'leaveOrig' later
	for obj in markers:
		new = FBModelNull(obj.Name + '_stab')
		newTran = vec3(obj.Translation)
		new.Translation = fbv(newTran)
		new.Show = True
		new.Size = 20
		new.Parent = faceAttach
		newMarkers.append(new)
	lPlayerControl = FBPlayerControl()
	FStart = int(lPlayerControl.ZoomWindowStart.GetFrame(True))
	FStop = int(lPlayerControl.ZoomWindowStop.GetFrame(True))
	animNodes = faceAttach.AnimationNode.Nodes
	for frame in range(FStart,FStop):
		lPlayerControl.Goto(FBTime(0,0,0,frame))
		lScene.Evaluate()
		#build proper head coordsys and snap faceAttach to it
		faceOrient()
		faceAttach.SetMatrix(pMatrix)
		lScene.Evaluate()
		keyTransRot(animNodes)
		#update stabilized markers and key them
		for m in range (0,len(newMarkers)):
			pos = FBVector3d()
			markers[m].GetVector(pos)
			newMarkers[m].SetVector(pos)
			lScene.Evaluate()
			markerAnimNodes = newMarkers[m].AnimationNode.Nodes
			keyTransRot(markerAnimNodes)

We feed our ‘stab’ function FBModelLists of right, left, and center stabilization markers, and it creates virtual markers from these groups. ‘markers’ is the full set of markers to be stabilized. ‘leaveOrig’ is an option I usually add to allow for non-destructive use; in this example I have made the function always leave the originals, as I favor this, so the flag does nothing, but you could wire it up. With the original markers left, you can immediately see if there was an error in your script. (new motion should match orig)
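The head coordinate system that faceOrient() builds can be sketched outside MotionBuilder with plain list-based vectors; the positions below are made up for illustration:

```python
import math

# Plain-Python sketch of the basis that faceOrient() builds; lists
# stand in for pyEuclid vectors, and the marker positions are made up.
def normalize(v):
	length = math.sqrt(sum(c * c for c in v))
	return [c / length for c in v]

def cross(a, b):
	return [a[1] * b[2] - a[2] * b[1],
	        a[2] * b[0] - a[0] * b[2],
	        a[0] * b[1] - a[1] * b[0]]

def sub(a, b):
	return [x - y for x, y in zip(a, b)]

def dot(a, b):
	return sum(x * y for x, y in zip(a, b))

# hypothetical positions: averaged right group, center group, and the faceAttach null
r_pos = [1.0, 0.0, 0.0]
c_pos = [0.0, 0.0, 1.0]
attach = [0.0, -1.0, 0.0]

# same construction as faceOrient(): x across the face, z out of the
# face, y completing the right-handed basis
x_vec = normalize(sub(c_pos, r_pos))
z_vec = normalize(cross(normalize(sub(c_pos, attach)), x_vec))
y_vec = cross(x_vec, z_vec)

# the three axes come out orthonormal, so they can fill the rotation
# part of a 4x4 transform, exactly as faceOrient() fills pMatrix
```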

Creating an External UI that Uses ‘myLib’

Earlier I mentioned Creating Interactive MotionBuilder User Interface Tools, where I explain how to screenscrape/use the telnet Python Remote Server to create an interactive external UI that floats as a window in MotionBuilder itself. I also use the libraries mentioned in the above article.

The code for the facial stabilization UI I have created is here: []

I will now step through code snippets pertaining to our facial STAB tool:

def getSelection():
	selectedItems = []
	mbPipe("selectedModels = FBModelList()")
	mbPipe("FBGetSelectedModels(selectedModels,None,True)")
	for item in (mbPipe("for item in selectedModels: print item.Name")):
		selectedItems.append(item)
	return selectedItems

This returns a list of strings: the names of the currently selected models in MBuilder. This is the main thing our external UI does. The user needs to interactively choose the right, left, and center markers, then all the markers that will be stabilized.

At the left here you see what the UI looks like. To give the buttons some feedback, you can make them change to reflect that the user has selected markers; we do so by changing the button text.


def rStabClick(self,event):
	self.rStabMarkers = getSelection()
	print str(self.rStabMarkers)
	self.rStab.Label = (str(len(self.rStabMarkers)) + " Right Markers")

This also stores all the markers the user has chosen into the variable ‘rStabMarkers‘. Once we have all the markers the user has chosen, we need to send them to ‘myLib‘ in MBuilder so that it can run our ‘stab‘ function on them. This will happen when they click ‘Stabilize Markerset‘.

def stabilizeClick(self,event):
	mbPipe('from euclid import *')
	mbPipe('from myLib import *')
	mbPipe('rStab = modelsFromStrings(' + str(self.rStabMarkers) + ')')
	mbPipe('lStab = modelsFromStrings(' + str(self.lStabMarkers) + ')')
	mbPipe('cStab = modelsFromStrings(' + str(self.cStabMarkers) + ')')
	mbPipe('markerset = modelsFromStrings(' + str(self.mSetMarkers) + ')')
	mbPipe('stab(rStab,lStab,cStab,markerset,True)')
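The strings passed to mbPipe() are ordinary Python statements built by concatenation; as a sketch, with a hypothetical two-marker selection:

```python
# Sketch of the command string the UI sends through mbPipe();
# rStabMarkers here is a hypothetical selection of two marker names.
rStabMarkers = ['Subject 1-RH1', 'Subject 1-RTMPL']
cmd = 'rStab = modelsFromStrings(' + str(rStabMarkers) + ')'
print(cmd)
# -> rStab = modelsFromStrings(['Subject 1-RH1', 'Subject 1-RTMPL'])
```

MotionBuilder then executes that string as a line of Python, so ‘rStab’ ends up existing inside MBuilder’s environment, ready for the ‘stab’ call.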

Above we use ‘modelsFromStrings’ to feed ‘myLib’ the names of the selected models. When you run this on thousands of frames, it will actually hang for up to a minute or two while it does all the processing; I discuss optimizations below. Here is a video of what you should have when stabilization is complete:

Kill the keyframes on the root (faceAttach) to remove head motion

Conclusion: Debugging/Optimization

Remember: Your stabilization will only be as good as your STAB markers. It really pays off to create tools to check marker stability.
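As an example of such a check (a sketch of my own, not a tool from this post): rigid STAB markers should keep constant distances to one another, so the spread of a pairwise distance over the clip is a cheap stability metric:

```python
import math

# Sketch of a quick marker-stability check: rigid STAB markers keep
# constant distances to each other, so the spread of a pairwise
# distance over time is a cheap quality metric. 0.0 means perfectly rigid.
def distance(a, b):
	return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def stability(frames_a, frames_b):
	# frames_a/frames_b: per-frame [x, y, z] positions of two markers
	dists = [distance(a, b) for a, b in zip(frames_a, frames_b)]
	return max(dists) - min(dists)

# two markers translating together: the distance never changes
rigid = stability([[0, 0, 0], [1, 0, 0]], [[0, 3, 0], [1, 3, 0]])
print(rigid)  # -> 0.0
```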

Sometimes the terminal/screen scraping runs into issues. The mbPipe function can be padded out a lot and made more robust; what is here is just an example. If you look at the external python console, you can see exactly what mbPipe is sending to MBuilder, and what it receives back through the terminal:

Sending>>> selectedModels = FBModelList()
Sending>>> FBGetSelectedModels(selectedModels,None,True)
Sending>>> for item in selectedModels: print item.Name
['Subject 1-RH1', 'Subject 1-RTMPL']

All of the above can be padded out and optimized. For instance, you could try to do everything without a single lPlayerControl.StepForward() or lScene.Evaluate(), but this takes a lot of MotionBuilder/programming know-how; it involves using only the keyframe data to generate your matrices, positions, etc., and never querying a model.

posted by Chris at 10:10 PM  
