Stumbling Toward 'Awesomeness'

A Technical Art Blog

Monday, March 5, 2018

uExport: A Simple Maya Tool for Exporting to UE4


I posted the skeleton of this a while ago in the post “Why does everyone write their own FBX exporter?“. Many people asked me for the exporter over the years, and I sent it to them. Since that post, uExport has become the way we get all characters into UE4 at Epic. It’s been crazy to see this little fledgling tool, developed largely in spare time to allow us to export non-A.R.T.v1 rigs (like the kite and deer in the first demo I worked on at Epic), become something we rely on so much.

It’s here on GitHub, there’s a lot more info in the readme.

posted by Chris at 9:05 PM  

Monday, November 13, 2017

The Mighty Message Attribute

I recently had a discussion about storing relationships in Maya, and hadn’t realized that the message attribute isn’t a universally cherished thing. In previous posts entitled ‘Don’t use string paths‘ and ‘Why Storing String Refs is Dangerous and Irresponsible‘ I outlined why string paths are the devil’s work, but in those posts I talked about the API, PyMel and message attrs. I didn’t really focus on why message attrs are so important: they serialize node relationships.

For quite some time I have advocated storing relationships with message attrs. At the Maya SIGGRAPH User Event, when they asked me to speak about our modular rigging system, I kind of detailed how we leveraged those at Crytek in CryPed.

msg

I am not quite sure when I started using message attrs to convey relationships, I’m no brainiac, it could have been after seeing this 2003 post from Jason Schleifer on CGTalk:

image

Or maybe I read it in the Maya docs (unlikely):

“Message attributes only exist to formally declare relationships between nodes. By connecting two nodes via message attributes, a relationship between those nodes is expressed.”

So why does Maya use this, and why should I?

As you read in the docs above, when Maya wants to declare a relationship between a camera and an image plane, it does so with a message attribute that connects them. This is important because the bond won’t be broken if the plane or its parent is renamed. As soon as you store the string path to a node in the DAG, that data is already stale; it’s no longer valid. When you query a message attribute, Maya returns the item itself, and its DAG path will be valid regardless of hierarchy or name changes.
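
A minimal sketch of those mechanics (the node names here are placeholders, not from any real scene): add a message attr, wire the camera into it, and query it later.

import maya.cmds as cmds

#declare the relationship: every node has a built-in .message plug we can wire
#into a message attr we add on the image plane (names are placeholders)
cmds.addAttr('myImagePlane', longName='ownerCamera', attributeType='message')
cmds.connectAttr('myCamera.message', 'myImagePlane.ownerCamera')

#later, no matter how either node has been renamed or reparented,
#this hands back the camera node itself:
cmds.listConnections('myImagePlane.ownerCamera')
#Result: ['myCamera']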

Jason’s example above is maybe the most simple; in my image (a decade later) you can see the messages declaring many relationships, abstracting the character at three main levels of interface: Character, CharacterPart and RigPart. I talked about the basic ideas here in a 2013 post about object oriented python in Maya.

Though Rob vigorously disagreed in the comments there, I am still doing this today. Here’s an example from the facial code we released in Epic’s ARTv1 rigging tools some time ago. The face is abstracted on two levels, the ‘face’ and the ‘mask’; here I am only displaying the message connecting them:

wiring

By using properties as described in that previous blog post, below I am accessing the system, creating a face instance, walking down the message connection to the mask node, and then asking it for the attach locations. It’s giving me these transforms, by querying the DAG, live:

msg

So, that property looks like this:

    @property
    def attachLocations(self):
        return cmds.listConnections(self.node + '.attachLocations')
    @attachLocations.setter
    def attachLocations(self, locs):
        for loc in locs:
            utils.msgConnect(self.node + '.attachLocations', loc + '.maskNode')

Setting the attach locations through python would look like this, and it would rebuild the message attrs:

face.mask.attachLocations = ['myLoc1', 'myLoc2']

Working like this, you have to think hard about what a rigger would want to access at what level and expose what’s needed. But in the end, as you see, through python, you have access to everything you need, and none of the data is stale.
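
The utils.msgConnect call in the setter above is an internal helper. As a guess at what such a helper might do (this is a sketch, not the actual ARTv1 code), it probably just ensures both message attrs exist and wires them together:

import maya.cmds as cmds

def msgConnect(srcAttr, dstAttr):
    #make sure both message attrs exist, then connect them
    for attr in (srcAttr, dstAttr):
        node, attrName = attr.split('.')
        if not cmds.attributeQuery(attrName, node=node, exists=True):
            cmds.addAttr(node, longName=attrName, attributeType='message')
    if not cmds.isConnected(srcAttr, dstAttr):
        cmds.connectAttr(srcAttr, dstAttr, force=True)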

How and when to use strings

There are times when the only way you can store a relationship is by using a string in some fashion. Here are some situations and how I have handled them in the past, feel free to leave a comment with your experiences.

  • Maya can’t store a relationship to something that doesn’t exist (it has been deleted), or to a node in a file that isn’t open. In these situations, instead of storing the name in an attr, I stamp the two nodes with a matching string attr to store the relationship, then query the scene for another node with the same stamped value (see the sketch after this list).
  • Many times you need to feed your class an initial interface node to build/wrap. Instead of feeding it a string name, you can query the scene for a node type. In the Ryse example above, the rigging and animation tools could query cmds.ls(type='CryCharacter'), which would return all characters in the scene. This meant all rigging and animation tools needed a common ‘working character’ combobox at the top to define which character the tool was operating on. If you don’t have a custom node type, you can use a special string attr to query for.
  • Sometimes you’re doing something like saving joint names to serialize skinning data. You can use message attrs to play it safe here as well. Some pseudocode: for character in characters, if the character identifier matches the file on disk, for mesh in character.meshes, if the mesh is in the file, skin it; for joint in character.joints, if it is in the file, add it to the skinCluster, etc. Here you’re validating all your serialized string data against your class, which is traversing the DAG live.
  • Message attrs can get SLOW if you’re tracking thousands of items; you should only be tracking important things you would want later. In CryPed, when we wanted to track all nodes that were created when a module was built, we would stamp them all with a string attr that was the function name that built the module. To track this kind of data, HarryZ at Crytek had the pragmatic idea of just doing a global ls of the scene when a buildout started and another at the end and booleaning them out (see the sketch below); this caught all the intermediate and utility nodes and everything else generated by the rigging code.
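
Here is a rough sketch of the stamping and scene-diff ideas from the list above; the attr name, value, and buildModule call are made up for illustration:

import maya.cmds as cmds

#stamp two nodes with a matching string attr so the relationship survives deletion or file close
def stamp(node, value, attr='relationshipId'):
    if not cmds.attributeQuery(attr, node=node, exists=True):
        cmds.addAttr(node, longName=attr, dataType='string')
    cmds.setAttr(node + '.' + attr, value, type='string')

#query the scene for anything carrying a matching stamp
def findStamped(value, attr='relationshipId'):
    hits = cmds.ls('*.' + attr, objectsOnly=True, recursive=True) or []
    return [n for n in hits if cmds.getAttr(n + '.' + attr) == value]

#the pragmatic 'diff the scene' approach to catching everything a module build created
before = set(cmds.ls(long=True))
buildModule()  #placeholder for whatever rigging code builds the module
createdNodes = set(cmds.ls(long=True)) - before
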
posted by Chris at 6:10 AM  

Monday, November 21, 2016

The Eyes

ct_eyes

Windows to the Soul

In CG, the eyes are unforgiving. For thousands of generations we have had to decipher the true intent of other human beings from their eyes; because of this, we have evolved to notice the most minute, 1mm shifts in eye shape. If there is one part of a digital character that is the most difficult to simulate and render, it is the eyes. Let’s talk about it.

Talking About the Eyes

anat

First, I am going to go over some eye terminology. There’s a lot out there, but I am only going to go over what is required to give meaningful feedback in dailies. :D  You don’t need to be an ophthalmologist to discuss why a character’s eyes don’t look right.

The Iris – This is the round, colored circle in the eye, it encompasses the pupil, but it is not the pupil.
The Pupil – The black dot, or the hole in the iris.
The Sclera – The ‘whites of your eyes’
The Cornea – This is the bulge over the iris

Vergence – The eyes converge or turn inward to aim at the object a person is gazing at. They diverge, or turn outward, when tracking something that is receding; if they diverge more than parallel, you can say the person is ‘walleyed’ (see: strabismus).

KEY TAKEAWAY – When viewing and reviewing eyes, it’s important to notice how the lids break across the iris and the shape of the negative space the lids and the iris create in the sclera. When we look at someone we are mainly seeing this shape. For more advanced readers, I picked the image above because you can really see the characteristics of the wetness meniscus, and eyelash refraction, but we’ll talk about that later.

Eye Placement

Initial eye placement is very important. Eyes too large or placed improperly will not rotate accurately when set in the face. There’s a lot you can learn by just looking at yourself in a mirror, looking at a friend, or scouring YouTube. There’s even more you can learn from reading forensic facial reconstruction textbooks!

There are a few books for forensic artists, and most have information about placing eyes for facial re-construction. In my previous post about the jaw, I talked about forensic facial reconstruction a bit, these are my go-to books when it comes to eye (and teeth) placement:

books
Face It: A Visual Reference for Multi-ethnic Facial Modeling
Forensic Art and Illustration
Forensic Analysis of the Skull
Facial Geometry

I use a Mary Kay Travel Mirror and have bought them for all riggers/animators on my teams, at six bucks you can’t go wrong.

If you were to draw a line from the upper and lower orbits of the eye socket, it would be in line with the back of the cornea. I don’t usually like to point to skeletal reference, but in this case the orbits are relatively bony parts of the face that can be seen in surface anatomy.

placement01

Here are some good anatomical images for centering the eye in the socket (click to enlarge):

tmp696744_thumb    tg-7-57c-modified    f2

And in practice: here’s eye placement on Marius, the hero character in the video game Ryse. Notice that the eyeball doesn’t even cover the entire ocular opening when viewed from the front in wireframe, as with the anatomical images above.

Marius, Ryse 2011, Crytek

Abdenour Bachir, Ryse 2011, Crytek (click to enlarge)

eye_blog

Lastly, here is a GIF I made from a video tutorial called ‘How to Paint the Human Eye‘ by Cat Reyto. It shows eye placement rather well.

So this has all been discussing eye placement in the socket, but what about eye socket placement in the face?  This is where proportions come into play. The books above, “A Visual Reference to Multi-Ethnic Facial Modeling” and “Facial Geometry” are very good at discussing facial proportions and showing you how those proportions can change based on ethnicity. Here is a page from “Facial Geometry” that shows the basic facial proportions, most importantly pupil and eye placement relative to the rest of the face:

face_prop

Here’s a page from the other book, which actually discusses 3D modeling. This is the chapter on eye placement, anyone interested in facial modeling should pick this up, it’s a great full color reference:

book_multiejpg

In my post about the jaw, I discussed jaw placement relative to the eyes and pupils; you can do the inverse: check the eye placement relative to teeth that you feel happy with. Notice that there’s a correlation between the molars or width of the upper teeth and the pupils.

face05face01

Sometimes art direction would like ‘larger’ eyes. This is sometimes attempted by making the eyeball larger, but then it can feel weird when the eye is too large for the socket, and it causes issues with the rotations of the eye. For reference, take a look at people who have been handed down a specific neanderthal gene for large eyes (or Axenfeld-Rieger syndrome), like the Ukrainian model Masha Tyelna. She has large eyeballs, but also the facial physiology to accept them:

eyes_Masha_Tyelna4000000110309-masha_tyelna-fiteyes_Masha_Tyelna3

Eye Movement

eye_rot

Range of Movement

Making digital humans, often from fiction, I find that these numbers are all relative, but the book Three Dimensional Rotations of the Eye has some good information, as does this chapter of another book: Physiology of the Ocular Movements. From the default pose I usually find that each eye can rotate on the horizontal plane 40 degrees in each direction (right and left) and 30 degrees in each direction on the vertical plane (up and down).
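
If you want those ranges enforced on the puppet, transform limits are a quick way to bake them in. A minimal sketch (joint names and which axis is horizontal vs vertical are assumptions; match your own eye joint orients):

import maya.cmds as cmds

for eye in ('eye_L', 'eye_R'):
    #clamp horizontal rotation to +/-40 degrees and vertical to +/-30 degrees
    cmds.transformLimits(eye, rotationY=(-40, 40), enableRotationY=(True, True))
    cmds.transformLimits(eye, rotationX=(-30, 30), enableRotationX=(True, True))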

Eye ‘Accommodation’ and Convergence
For the purpose of rigging, the eyes are at maximum divergence (or parallel) at about 1.5m or 6 feet. I have not come across a definitive eye gaze distance that results in maximum divergence; if you happen to have that information, let me know!

The eye adjusts its lens to focus on a near object; this is known as accommodation (the pupil actually constricts as part of the near response, it does not dilate). A standard young person’s eye can gaze/focus on objects from infinity to 6.5cm from the eyes. In action, eyes cannot converge/diverge faster than about 25 degrees a second.

“Why Does My Character Look Cross-Eyed”?

So, take a look at the MRI at the top of this page. Do you notice that the eyes are not looking forward or parallel? It’s because the eyes aren’t actually parallel when gazing at an infinite distance. Human eyes bow out a few degrees; the amount varies between roughly 4 and 6 degrees per eye. The angle an eyeball is bowed outward is called the ‘kappa angle’; check the diagram below:

getimage

Let’s go over some more terminology:

Pupillary Axis – A line drawn straight out the pupil.
Visual Axis – A line from the fovea to the fixation target or the item being gazed upon.
Angle Kappa – the angle between the pupillary and visual axis.

The Hirschberg Test

The way ophthalmologists determine if eyes are converging properly is with the Hirschberg test. It’s a simple test where they ask a child to look at a teddy bear and shine a penlight into their eyes; by looking at how this light source reflects off each cornea, they can tell if the child has a lazy eye or an issue converging their eyes on a point.

hberg

Doing a ‘CG Hirschberg’ test requires a renderer with accurate reflections. You place a point light directly behind the camera and have the rig fixate/gaze directly onto the camera.

For more information on Angle Kappa, as well as data on the average angle across groups of people, check out Pablo Artal’s blog posts.

KEY TAKEAWAY

Even though the eyes bow out a little bit, for all intents and purposes, as you see in the Hirschberg test, the pupil still seems like it is looking at the viewer. So yes, the pupillary axis is off by a few degrees, but with the refraction it’s not too noticeable (<1mm reflection offset from the center of the cornea). An angle kappa offset should be built into your rigs; you should not have the pupillary axis converging on the fixation target. That is what causes CG characters to look cross-eyed when viewed with scrutiny.
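
One way to sketch that offset in Maya: aim each eye at the fixation target, then bow it outward by the kappa angle so the visual axis, not the pupillary axis, converges on the target. The joint and locator names, aim axes, and exact kappa value below are assumptions to adapt to your own rig:

import maya.cmds as cmds

KAPPA_DEG = 5.0  #per-eye outward bow, varies roughly 4-6 degrees per person

for eye, side in (('eye_L', 1.0), ('eye_R', -1.0)):
    #the offset yaws the aim outward so the eye is not cross-eyed on the target
    cmds.aimConstraint('gaze_target', eye,
                       aimVector=(0, 0, 1), upVector=(0, 1, 0),
                       worldUpType='scene',
                       offset=(0, side * KAPPA_DEG, 0))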

“Why Don’t My Eyes Feel Real?”

untitled-8

Hanno Hagedorn, Crysis 2005, Crytek

A difficult issue with eyes in computer graphics is making them feel like they are really set in the face.

The eye has a very interesting soft transition where the sclera meets the lids.  Some of this is ambient occlusion and shadows from the brow and lids, and some of it is from reflection of the upper eyelashes.

In 2005, Hanno Hagedorn and I were working on Crysis, and we were having an issue with eyes that I called ‘game eye’: the sclera has a sharp contrast with the skin. At the time, there was no obvious way to solve this in realtime.

We (I still credit Hanno) solved this in a very pragmatic way; the following is from one of our slides at GDC 2007:

overlay1

We created an ‘eye overlay’, a thin film that sat on the eye and deformed with the fleshy eye deformation. A lot of games today, including Paragon, my current project at Epic, use this technique. Many years later, a famous VFX company actually tried to patent the technique.

Here are those same meshes, six years later on Ryse: Son of Rome:

eye_meshes

Wetness Meniscus

One thing added since Crysis is the tearline or wetness meniscus. This is very important: its purpose is to kick up small spec highlights and fake the area where the lid meets the sclera. You can see this line in the photo of the female eye where I outlined the initial anatomical terms.

overlay_wetness_lashes

Eye Rendering

All of these eye parts are not easy to shade correctly. Nicolas Schulz describes how Crytek implemented a forward pass in their deferred renderer to deal with eyes in his 2014 paper The Rendering Technology of Ryse. Nicolas has two slides dedicated to eye shading:

eye_rendering

The eye shader used on Ryse is documented in depth here [docs.cryengine.com]; the modified HLSL CFX shader code is here [github]. Some important features of the shader include:

  • Cornea refraction and scattering – this is important, it simulates the refraction of the liquid in the cornea
  • Iris color, depth, self shadowing, and SSS – Since there is no physical iris, we need to fake that there is a physical form there using a displacement map (POM)
  • Eye occlusion overlay depth bias – this is very important, it allows you to push/pull the depth of the overlay film, so that the eye doesn’t penetrate through it
  • Sclera SSS – not to be overlooked, or else the eye looks like a golfball.

eye_ball_textures

These were the images that fed into the shader features above, and below are the overlay spec and AO masks, and the spec texture for the wetness/tearline.

eye_occlusioneye_water_spec

This is an example of the iris displacement map ingame seen with a debug render view (Xbox One, 2011)

Abdenour Bachir, Ryse, Crytek 2011

Does the eye really need a corneal bulge?

On Ryse and Crysis, we had spherical eyes, and faked the corneal bulge with a shader and fleshy eye deformation. A corneal bulge means that you really have to have your shit together when it comes to the eye deformation: all of the layers mentioned above have to deform with the bulge, and it can be a lot to manage across all the deformation contexts of the eye (blink directionals, squint directionals, etc.). On Paragon, we have non-spherical eyes, and it’s very challenging to deform the eyes properly in the context of a 60Hz e-sports title with only one joint per lid. It’s actually pretty much impossible.

Altogether

Below, you see the eyes in Ryse: Son of Rome, they feel set in the face, and consistent with the world and hyper-real style of the game. (in-game mesh and rig, click to enlarge)

Abdenour Bachir, Ryse 2012, Crytek

Abdenour Bachir, Ryse 2012, Crytek

Scanning/Acquiring Eyes

capture

When scanning a character’s head it’s important to get their eyes fixed at a gaze distance you have recorded. I also use the scan to see how the iris breaks across the lid and infer information about how the eye will be set in the face and skull.

  • If you want to take your own eye texture reference, shoot through a ring or attach a light co-axial with the camera lens; try to get the highlight in the pupil, as you will discard this part of the image anyway.
  • Because the eye is shiny and its characteristics change as you move around it, photogrammetry often falls flat.
  • Because the eye is a complex translucent lens, cutting spec with cross-polarization also often falls flat. Light loses its polarization when bouncing around in there.

When it comes to scanning the eyes themselves, Disney Research has published an interesting paper on the High Quality Capture of Eyes.

How You Review Eyes

First off you should check eyeball depth and placement, as shown above.

  • You should be reviewing eyes with at least the FOV of a portrait lens: 80mm or a 25 degree FOV. When getting ‘all up in there’ I often use a 10 degree FOV.
  • Try to review them at the distance you will see them in your shipping product.
  • If you are embodying the person that is the fixation point of the digital human, you really need runtime look IK (for the eyes, but preferably feathered torso>head>neck>eyes). From some distance, you can tell if someone is looking at your eyes or your ear. Think about that. The slightest anim compression or issue in any joint from root to eyes can cause the gaze to be off a few degrees, and that’s all it takes.
  • If you’re doing a lot of work, build a debug view into your software that draws the pupillary axis and visual axis all the way to the fixation/gaze point; a minimal Maya sketch of the idea follows.
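
A minimal Maya-side sketch of that debug view (eye joint and target names are placeholders): draw a straight curve from each eye pivot to the gaze point.

import maya.cmds as cmds

def drawGazeDebug(eyes=('eye_L', 'eye_R'), target='gaze_target'):
    #draw a degree-1 curve from each eye pivot to the fixation point
    tgt = cmds.xform(target, q=True, ws=True, t=True)
    for eye in eyes:
        start = cmds.xform(eye, q=True, ws=True, t=True)
        cmds.curve(d=1, p=[start, tgt], name=eye + '_gazeDebug')
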
posted by Chris at 2:01 AM  

Tuesday, October 25, 2016

Skin Save/Load Tool

deformerweightsplus

Some had asked me to package the previous code together into a tool. I was reluctant because there were many situations where it just didn’t work. I made something to aid a discussion on the beta forum.

However, I seem to have accidentally gotten the code working. It’s odd, but the solution to skinClusters not being paintable after deformerWeights was used to load them was to try to make another skinCluster (which errors, but fixes the Artisan issue).

I have tossed this on GitHub here:
https://github.com/chrisevans3d/deformerWeightsPlus/

 

posted by Chris at 10:46 AM  

Monday, September 26, 2016

Save/Load SkinWeights 125x Faster

DISCLAIMER/WARNING: Trying to implement this in production, I have found other items on top of the massive list that do not work. The ignore names flag ‘ig’ causes a hard crash on file load, the weight precision flag ‘wp’ isn’t implemented though it’s documented, and the weight tolerance flag ‘wt’ causes files to hang indefinitely on load. When it does load weights properly, it often does so in a way that crashes the Paint Skin Weights tool. I have reported this in the Maya Beta forums.

Previously I discussed the promise of the Maya command ‘deformerWeights‘. The tool that ships with Maya was not very useful, but the code it calls is 125 times faster than Python if you use it correctly.

Let’s make a python class that can save and load skin weights. You hand it a few hundred skinned meshes (an average Paragon character), it saves the weights, you delete history on the meshes, and it loads the weights back on. What I just described is the process riggers go through when updating a rig or a mesh _every day_.

Below we begin the class. We import the XML parser that ships with Python (plus the Maya modules and timer the class will need later), and we say “if the user passed in a path, let’s parse it.”

#we import an xml parser that ships with python
import xml.etree.ElementTree
 
#maya modules and a timer the class will use later
import time
import maya.cmds as cmds
import maya.mel as mel
 
#this will be our class, which can take the path to a file on disk
class SkinDeformerWeights(object):
    def __init__(self, path=None):
        self.path = path
        #dict of SkinnedShape objects keyed by shape name, filled in by parseFile
        self.shapes = {}
 
        if self.path:
            self.parseFile(self.path)

Next, let’s make this parseFile function. Why is parsing the file important? In the last post we found out that there’s a bug that doesn’t appropriately apply saved weights unless you have a skinCluster with the *exact* same joints as were exported. We’re going to read the file and make a skinCluster that works.

#the function takes a path to the file we want to parse
def parseFile(self, path):
    root = xml.etree.ElementTree.parse(path).getroot()
 
    #set the header info
    for atype in root.findall('headerInfo'):
        self.fileName = atype.get('fileName')
 
    for atype in root.findall('weights'):
        jnt = atype.get('source')
        shape = atype.get('shape')
        clusterName = atype.get('deformer')

Now we’re getting some data here. We know that the format can save deformers for multiple shapes, so let’s make a shape class to store them.

class SkinnedShape(object):
    def __init__(self, joints=None, shape=None, skin=None, verts=None):
        self.joints = joints
        self.shape = shape
        self.skin = skin
        self.verts = verts

Let’s use that when we parse the file, storing the data we parsed in our new shape class:

#the function takes a path to the file we want to parse
def parseFile(self, path):
    root = xml.etree.ElementTree.parse(path).getroot()
 
    #set the header info
    for atype in root.findall('headerInfo'):
        self.fileName = atype.get('fileName')
 
    for atype in root.findall('weights'):
        jnt = atype.get('source')
        shape = atype.get('shape')
        clusterName = atype.get('deformer')
 
        if shape not in self.shapes.keys():
            self.shapes[shape] = SkinnedShape(shape=shape, skin=clusterName, joints=[jnt])
        else:
            s = self.shapes[shape]
            s.joints.append(jnt)

So now we have a dictionary of our shape classes, and each knows the shape, cluster name, and all influences. This is important because, if you read the previous post, the weights will only load onto a skinCluster with the exact same number and names of joints.
Now we write a method to apply the weight info we parsed:

def applyWeightInfo(self):
    for shape in self.shapes:
        #make a skincluster using the joints
        if cmds.objExists(shape):
            ss = self.shapes[shape]
            skinList = ss.joints
            skinList.append(shape)
            cmds.select(cl=1)
            cmds.select(skinList)
            cluster = cmds.skinCluster(name=ss.skin, tsb=1)
            fname = self.path.split('\\')[-1]
            dir = self.path.replace(fname,'')
            cmds.deformerWeights(fname , path = dir, deformer=ss.skin, im=1)

And there you go. Let’s also write a method to export/save the skinWeights from a list of meshes so we never have to use the Export DeformerWeights tool:

def saveWeightInfo(self, fpath, meshes, all=True):
    t1 = time.time()
 
    #get skin clusters
    meshDict = {}
    for mesh in meshes:
        sc = mel.eval('findRelatedSkinCluster '+mesh)
        #not using shape atm, mesh instead
        msh =  cmds.listRelatives(mesh, shapes=1)
        if sc != '':
            meshDict[sc] = mesh
        else:
            cmds.warning('>>>saveWeightInfo: ' + mesh + ' is not connected to a skinCluster!')
    fname = fpath.split('\\')[-1]
    dir = fpath.replace(fname,'')
 
    for skin in meshDict:
        cmds.deformerWeights(meshDict[skin] + '.skinWeights', path=dir, ex=1, deformer=skin)
 
    elapsed = time.time()-t1
    print 'Exported skinWeights for', len(meshes), 'meshes in', elapsed, 'seconds.'

You give this a folder and it’ll dump one file per skinCluster into that folder.
Here is the final class we’ve created [deformerWeights.py], and let’s give it a test run.

sdw = SkinDeformerWeights()
sdw.saveWeightInfo('e:\\gadget\\', cmds.ls(sl=1))
>>>Exported skinWeights for 214 meshes in 2.433 seconds.

Let’s now load them back, we will iterate through the files in the directory and parse each, applying the weights:

import os
t1=time.time()
path = "e:\\gadget\\"
files = 0
for file in os.listdir(path):
    if file.endswith(".skinWeights"):
        fpath = path + file
        sdw = SkinDeformerWeights(path=fpath)
        sdw.applyWeightInfo()
        files += 1
elapsed = time.time() - t1
print 'Loaded skinWeights for', files, 'meshes in', elapsed, 'seconds.'
>>> Loaded skinWeights for 214 meshes in 8.432 seconds.

“>>> Loaded skinWeights for 214 meshes in 8.432 seconds.”

So that’s a simple 50 line wrapper to save and load skinWeights using the deformerWeights command. No longer do we need to write C++ API plugins to save/load weights quickly.

posted by Chris at 1:27 AM  

Saturday, September 24, 2016

DeformerWeights Command, Cloaked Savior?

export-skin

This post was originally going to be entitled ‘Why Everyone Writes their Own Skin Exporter’. Maya’s smooth skin weight export tool hasn’t changed since Maya 4.0 when it was introduced 15 years ago. It saves out greyscale weightmaps in UV space, one image per joint influence per mesh. The only update they have done in 15 years is change the slider to go from a max of 1024 pixels to 4096, and now 8192!

Autodesk understands the need and importance of a skin weight exporter, they even ship it as a C++ example in their Maya Developer Kit / SDK. So why are they still shipping this abomination above?

got-this

Like the blind data discussed before, in pythonland we cannot set skin weights in one go; we must iterate through the entire mesh, setting skin weights one vertex at a time. This means a Maya feature or C++ plugin can save and load skin weights in seconds where Python takes 15 minutes.

I have written lots of skin weight save/load crap in my time, and as I sat down and started to do that very thing in an instructional blog post one night, we had an interesting discussion in the office the next day. ADSK added ‘Export Deformer Weights‘ in 2011, but it has never really worked. I don’t know a single person who uses it. But it does save and load ‘deformer’ weights via the C++ API –so there’s real promise here!

 

44521502

Save/load skin weights fast without writing a custom Maya plugin? This is kinda like the holy grail, which is ridiculous, but you should see the lengths people go to eke out a little more performance! My personal favorite was MacaroniKazoo back in 2010 reaching in and setting the skinCluster weight list directly by hand, using an unholy conglomeration of python API and MEL commands. Tyler Thornock has a post that builds on this here.
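
To give a flavor of the ‘reach in and set the weightList directly’ idea, here’s a toy version (not their actual code, and plain setAttr like this is still slower than the API plug tricks those posts use):

import maya.cmds as cmds

def setVertWeights(skinCluster, vertId, weights):
    #weights: one value per influence, in the skinCluster's influence order
    for i, w in enumerate(weights):
        cmds.setAttr('%s.weightList[%d].weights[%d]' % (skinCluster, vertId, i), w)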

So I asked the guys if anyone had looked at Export Deformer Weights recently, everyone either hadn’t heard of it, heard it was shit, or had some real first-hand experience of it being “A bit shit.” But still –the promise was there!

Export Deformer Weights: Broken and Backwards

So, first things first, I made an awesome test case. I am going to go over all the gotchas and things that are broken, but if you don’t want to take this voyage of discovery, skip this section.

skin01

I open the deformer weight export tool, and just wow.. I mean the UI team really likes its space:

export-def1

I save my skin weights to an XML file, delete history on the mesh, and open the import UI:

export-def2

Gotcha 1) It requires a deformer to load weights onto. You need to re-skin your mesh.

I re-skinned my mesh and loaded the XML using Index.. drumroll..

skin02

Well, this is definitely not applying the weights back by vertex index. I decided to try ‘Nearest’:

skin03

Gotcha 2) Of the options [Index, Nearest, Over], ‘Index’ is somehow lossy, and anything other than ‘Index’ seems to crash often, ‘Nearest’ seems totally borked. (above)

So this was when I just began to think this was a complete waste of my time. I was pretty annoyed that they even shipped a tool like this, something that is so needed and so important, yet crashes frequently and completely trashes your data when it does work.

this-is-fine

Not Taking ‘Broken and Backwards’ for an Answer

I am already invested and, not understanding how loading weights by point index could be lossy and broken, I decided to look at the XML file. The tool writes out one XML file per skinCluster, here’s a rundown of the file format:

Mesh Info (Shape) – the vertices of the shape are stored in local space as x, y, z with a corresponding index

<?xml version="1.0"?>
<deformerWeight>
  <headerInfo fileName="C:/Users/chris.evans/Desktop/test_sphere.xml" worldMatrix="1.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 1.000000 "/>
  <shape name="pSphereShape1" group="7" stride="3" size="382" max="382">
    <point index="0" value=" 0.148778 -0.987688 -0.048341"/>
    <point index="1" value=" 0.126558 -0.987688 -0.091950"/>
    <point index="2" value=" 0.091950 -0.987688 -0.126558"/>
    ...

Joints (Weights) – There is one block per joint that calls out each vertex it has influences for on the shape

  <weights deformer="skinCluster1" source="root" shape="pSphereShape1" layer="0" defaultValue="0.000" size="201" max="380">
    <point index="0" value="0.503"/>
    <point index="1" value="0.503"/>
    <point index="2" value="0.503"/>
    ...

And that’s it, not a lot of data, nothing about the skinCluster attributes or options, no support for spaces like UV or world. (odd, since it’s had UV support for 15 years)

Next I decided to run the tool again and see what command it was calling. I then looked up the command documentation, and here’s where it gets interesting; go ahead, take a look!

def_bary

So now I am hooked; someone is putting some thought into this, at least on some level.

image

I don’t at all understand why the UI has none of these options, but I need to get this working. If you read through the docs, the command also supports:

  • Exporting multiple skinClusters/shapes/deformers per XML file
  • Exporting skinCluster/deformer attributes like ‘skinningMethod’ and ‘envelope’
  • Local and world space positions with a positional tolerance

“Someone is putting some thought into this”

So I started trying to figure out why a file format that explicitly knows every influence of every vertex by index and influence name doesn’t load weights properly. After some trials I hit gotcha #3:

Gotcha 3) The weights only load properly if the skinCluster has the *exact* same influences it was saved with, which really makes no sense, because the file format has the name of every joint in the old skinCluster.

So now I had it working, time to wrap it and make it useful.

The Documentation is a Lie.

So, first things first, I did a speed test.

Importing with deformerWeights was about 125 times faster: Gold mine.

But I just couldn’t get some of the flags to work. I thought I was just a moron, until I finally tried the code example in the ADSK Maya documentation, which FAILS. Let’s first look at the -vc flag, which is required to load using ‘bilinear’ or ‘barycentric’ mapping/extrapolation:

cmds.deformerWeights ("testWeights.xml", ex=True, vc=True, deformer="skinCluster1")
 
# Error: Invalid flag 'vc'
# Traceback (most recent call last):
#   File "<maya console>", line 1, in <module>
# TypeError: Invalid flag 'vc' #

Gotcha 4) The python examples do not work. The -vertexConnections flag doesn’t work, and the -attribute flag doesn’t work, so there’s no saving skinCluster metadata like ‘skinningMethod’, etc. Because of that, ‘barycentric’ and other methods that need vertex connection info do not work. The ‘deformer’ flag documentation shows it taking a list of deformers and writing them all to one file, but this is not true; it takes a single string name of a deformer.

I now know why the UI doesn’t have all these cool options! –they don’t work!

Gotcha 5) It doesn’t take a full file path; to save a file to a specific location you need to specify the filename, and then the path separately.

cmds.deformerWeights ("testWeights.xml", path='d:\\myWeight\\export\\folder\\', ex=True, deformer="skinCluster1")

Perhaps someone fixed this stuff, documented it, and then reverted the fix, but this has been around since 2011. I tried the above in the latest Maya 2016 service pack, and all my links above are to that version of the documentation.

I wasn’t really intending to write this much, so now that we know this can import weights 125 times faster, we’ll make a tool to utilize it. Stay tuned!

posted by Chris at 2:15 AM  

Friday, September 2, 2016

SIGGRAPH Realtime Live Demo Stream

The stream of our SIGGRAPH Realtime Live demo is up on teh internets. If you haven’t seen the actual live demo, check it out!

It feels amazing to win the award for best Realtime Graphics amongst such industry giants. There are so many companies from so many industries participating now, and the event has grown so much. It feels really humbling to be honored with this for a third year; no pressure!

posted by Chris at 9:26 AM  

Tuesday, August 30, 2016

The Jaw

All ‘virtual’ joints that we place are based on virtual anatomical surface landmarks. By ‘virtual’ I mean the polygonal rendermeshes that our ‘puppet’ will drive. As a rigger, you have to be able to look at the surface anatomy (polygonal mesh) and determine where a joint should be placed, but the solution is not always obvious.

“The Dental Distress Syndrome” (Dr. A.C. Fonder). (1988)


The jaw is one of these tricky situations! Many people think the jaw rotates from the ‘socket’ or ‘fossa’ that the jaw (or mandible — or mandibular condyle) fits into, but this is not the case.

As riggers, we sometimes need to ignore internal anatomy and focus on surface anatomy, which is our final deliverable. This means thinking about the center of rotation for the entire mass of flesh (or vertices) that we’re moving. For the jaw of a human(oid), this pivot is under the earlobe when viewed from the side.


Rotating directly from the ‘socket’ would result in this incredible ‘derp‘ shown above, but the temporomandibular (TMJ) joint doesn’t work like this when we open our jaw/mouth.

tmj

Instead, the jaw/mandible slides forward as the mouth opens, like you see in this ‘live MRI’ slice above, resulting in this sexay jaw open below. You can see her mandible/jaw slide forward as she opens her mouth.
jaw_open_fluoro

This post is of course ignoring the fact that you can rig the jaw in a way that, through a combination of rotation and translation, uses the TMJ as a pivot while still rotating around the true jaw mass center of rotation described above. You can do virtually anything; you can drive with your feet; that doesn’t mean it’s a good idea. This post is probably most important for riggers creating characters fast, using tools that generate skeletons from user-placed signposts or locators.

Speaking of signposts and locators, another tool to help you with placement is to constrain a toroid or circle to your joint; it can help you visualize the jaw swing quickly:

jaw_swing
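
A quick sketch of that visualization aid (the jaw joint name and radius are placeholders):

import maya.cmds as cmds

#build a circle roughly the size of the jaw swing and pin it to the jaw joint
radius = 10.0  #approximate pivot-to-chin distance in scene units
swingVis = cmds.circle(normal=(1, 0, 0), radius=radius, name='jawSwing_vis')[0]
cmds.parentConstraint('jaw_jnt', swingVis, maintainOffset=False)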

For those interested in further investigation, the paper ‘Rotation and Translation of the Jaw During Speech’ by Jan Edwards and Katherine Harris (1990) can be downloaded [here].

Jaw and Teeth Placement (Modeling)

side

Quite a few people have told me they found this helpful. I would like to add that teeth placement is very important and something you should check before rigging.  For modelers, here are some radiograms showing teeth placement in reference to the facial surface anatomy (click to enlarge):


jaw_placement02   jaw_placement046dfd119e23d059ea798630e2b40b7567
Setting the teeth the correct distance from the lips is important; I urge any facial modelers to take an interest in forensic facial reconstruction. Books like Forensic Art and Illustration have lots of good data, like the Rhine Facial Soft Tissue Depth charts. Keeping in line with my post above, we need to know the upper and lower teeth depth in relation to our surface anatomy.

As you see above, we’re primarily interested in 6, 7, 20, and 21. Rhine et al have created charts for caucasoid and negroid Americans of varying builds. (Click below to enlarge)

 

NOTE: Soft tissue thickness charts for the face are also a great place to gut-check your sub surface scattering maps and profiles!

Up next, placing the teeth so that they are large enough or wide enough is also important. Below, notice the item marked ‘J’, this is the average upper lip line in relation to the upper teeth.

 

face_prop

Above is a forensic facial reconstruction proportion guideline. I like to augment that a bit to help with teeth placement. I find that it’s helpful to look at the incisors vs the nostrils, and the pupils vs the lip corners/molars. Here are some examples of that (click to enlarge):

face02face01face03face04face05

posted by Chris at 5:36 PM  

Tuesday, June 7, 2016

Displaying half the data: But twice as fast!

nodeEditor

In Maya 2016, by default, Node Editor cannot display more than 500 connections. Let’s put this in perspective. Let’s say you have two attributes and each drives 20 joints. BOOM, you’re over the limit bro.. what you doin? Why you breakin’ Maya?

The solution, the warning says, is to set an optionVar. Oh yeah, that’s a frickin’ charm; that’s like unlocking all the streets on my GPS by loosening a nut on the underside of my car.

I count on this visual editor to display connections; that’s all it needs to do. I have had rigs that make this update once every 10 seconds, but at least I could find what I was looking for and troubleshoot it. I would much rather have a drop in fidelity, like it stops drawing nice curves or something, than have it randomly not display data.

How is that even an option? Sometimes my mind wanders:

“Sir, we noticed that on some systems, the software can get slow if we display more than 500 connections.”
“How did you fix it?”
“Well we just decided not to show more than 500 connections.”
“Stellar work Raymond. Can this limit be adjusted in the Settings and Preferences? Maybe the first time Maya opens or the user opens Node Editor it can be set based on his hardware.”
“Well right now it’s an optionVar, I mean that works well enough for me.”

Here’s what you can enter into the MEL evaluation line in the lower left to set your connection scan limit to ten million:

optionVar -iv "editorConnectionScanLimit" 10000000;
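
Or the same thing from Python, if you prefer cmds:

import maya.cmds as cmds
cmds.optionVar(intValue=('editorConnectionScanLimit', 10000000))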

I think ADSK probably hates me by now. ¬.¬

posted by Chris at 12:11 AM  

Saturday, May 7, 2016

Simple Runtime Rigging in UE4

This is something that I found when cleaning my old computer out at work this week after an upgrade. This is an example of using a look at controller and a blendspace to drive a fleshy eye deformation in UE4 live at runtime.

This is the technique Jeremy Ernst used on Smaug’s eye in the demo we did with Weta over a year ago. Create a 2d blendspace for the eye poses. Export out left, right, up, down poses and connect them:

eye2

Grab the local transform from the eye, and set two vars with it:

image13

Create a Look At Controller and integrate the blendspace:

image06

posted by Chris at 7:40 AM  

Tuesday, April 19, 2016

ADSK Ships Pose Space Deformer, Ignores Clavicle

Oh the clavicle.. you’re like the bastard redheaded stepchild.. is this guy an animator or what?

Anyway, sixteen years after the publication of Pose Space Deformation: A Unified Approach to Shape Interpolation and Skeleton-Driven Deformation, and fourteen years after Michael Comet released the poseDeformer plugin for Maya, which became the industry standard: Autodesk has now implemented the feature into the software.

In what seems like my early 20’s being acted out before my eyes, ‘Combination Sculpting’, the way of associating a fixer shape when other shapes are in a certain configuration (pioneered by Bay Raitt on Gollum in 2002), is also not a feature that will ship in Maya.

Riggers rejoice! We have voted this up on all feedback forums for years, and ADSK created the tool with direct involvement from the rigging community; it’s great that it can finally see daylight. Now, all you Extension 2 users grab it and work out the remaining kinks before everyone gets it in Maya 2017!

And if you’re upgrading to EXT2 today, don’t forget, FBX anim export/bake is fixed!

posted by Chris at 3:41 PM  

Thursday, March 31, 2016

Why Does Everyone Write Their Own FBX Exporter?

fbx_export

When it comes to characters or character animation, any studio doing anything remotely complex has to write their own FBX exporter. But why?

send_to_unreal

It’s true that the FBX export options dialog is so convoluted that even ADSK has created a front end for exporting FBX to Unreal or Unity, but the main reason boils down to one issue: Maya FBX Export does not bake animation properly.

I *WANT* to use the FBX Exporter core (from Python). As a technical artist, I want a C++ plugin efficiently traversing the dag and baking/exporting animation. I do not want to dupe a skeleton and walk the timeline using python, calling scene update on the entire DAG every frame. The FBX Exporter doesn’t even have python exposure, so we have to eval MEL commands; it’s just not ideal to say the least.

Where Have the Attrs Gone?

When exporting characters elsewhere, you want to use FBX to get important data there as well, not just an animated transform. In this test file [fbx_test], you see a sphere skinned to a joint. That joint has an attribute called important_data. In games especially, we can use this scalar to drive a material parameter such as a wrinkle on a face or a blendshape, or to allow an animator to blend off a cloth solver, anything really. So here’s our important data, driven by a multiply-divide node:

Capture

Export this scene as an FBX, being sure to tell it to bake complex animation:

mel.eval("FBXExportBakeComplexAnimation -v true")

Now import it back into an empty Maya scene, your important_data is gone!

To make matters worse, in the FBX ascii file I see an anim curve for my important data:

;AnimCurveNode::important_data, Model::joint1
C: “OP”,1519473056,205108448, “important_data”

So I am at a loss as to why programs using the FBX SDK do not get this data on import, including Maya.

DISCLAIMER: On multiple occasions I have reached out to ADSK asking them to please address this issue, but it has not happened. From the image above (new game export options), you can see that, on some level, ADSK wants to make this experience better, so please echo me in asking them to fix this issue.

In line with the title of this post, I have written, and would gladly give out an exporter for UE4, but as it uses the C++ bake / FBX Exporter from Maya, I would prefer to wait until this gets fixed. The ‘Bake Root’ option was actually created by Kevin Vassey at Epic because UE4 imports custom attrs on the root joint, so we just bake that if needed. This is a little puny exporter; internally we use Jeremy’s A.R.T. Toolkit for exporting animation in production.

uExportt

 

UPDATE 1: Thanks for the shares and responses. Some of you have told me that you have also talked to ADSK about this, and others say it was fixed at one time but is now broken. ADSK has reached out in the comments and has created a bug number in their tracking system. I will update later on its priority/timeframe if I get more info!

UPDATE 2: I can verify, this is fixed if you grab FBX 2016.1.2 or later from here. It should be fixed in future versions of Maya, and the bug was fixed before I wrote this post, but was not in Maya 2016. In the comments, Chris Dardis said that the fix was available in a non public cut of Maya he was using. As the link is not compiled, I have uploaded a compiled Maya 2016 FBX 2016.1.2 plugin here.

posted by Chris at 10:01 PM  

Monday, July 13, 2015

RigPorn: Square Enix Optical Facial Pipeline

square_mask

posted by Chris at 6:29 PM  

Friday, January 23, 2015

BEWARE: Maya 2015 EXT / 2014 Constraint Incompatibility

incompat

I like to live on the edge when it comes to new versions of Maya, but rarely does it cause a major issue for me. A wise man once said ‘the moment you stop respecting something is when it bites you in the ass’. Well consider me bitten.

Anytime a rig I created with 2015 is loaded in 2014 it is irreparably broken, spewing tons of errors like this one:
// Error: file: C:/Users/chris.evans/Desktop/test.ma line 210: The orientConstraint ‘nurbsSphere1_orientConstraint1’ has no ‘w0’ attribute. //

Diffing two constrained spheres, I noticed that there’s a new flag called ‘DCB’ or Disconnect Behavior.  Batch delete that and it seems that 2015 rigs can work in 2014.

But as usual, be a better person than me and just don’t open a file with a version of Maya your animators aren’t on. 😀

posted by Chris at 10:57 PM  

Thursday, October 23, 2014

Destiny Rigging/Animation Slides and Videos Up

destiny

The Destiny talks at SIGGRAPH were really interesting, the material just went up and you should definitely check it out:

http://advances.realtimerendering.com/destiny/siggraph2014/animation/index.html

posted by Chris at 11:37 AM  

Thursday, September 18, 2014

Ryse SIGGRAPH 2014 Reel

I mentioned that we won “best Realtime Graphics” at this year’s SIGGRAPH conference, but I never linked the video:
https://www.youtube.com/watch?v=mLvUQNjgY7E

posted by Chris at 4:24 PM  

Tuesday, August 26, 2014

Multi-Resolution Facial Rigging

At SIGGRAPH we discussed a bit about our facial pipeline that we haven’t talked about before. Namely, facial LODs and multi-platform facial rigging.

I would like to start by saying that we spent a _LOT_ of time thinking about facial levels of detail on Ryse, and put a lot of effort into the area. I know this is a long post, but it’s an important one.

run_on_brian

Lowest Common Denominator

As the ‘next-generation’ seems to be largely defined by multi-platform titles, it seems valuable to focus on ways to increase fidelity on next generation hardware while still being able to target older hardware specs. That said, I have yet to see a pipeline that does this. Most next gen games have skeletons and animations limited by the lowest common denominator of the last generation, often the PlayStation 3.

When you wonder why your awesome next gen game doesn’t seem to have character models and animation like next-gen only titles, this is why.

It’s very easy to increase texture resolution by having a pipeline where you author high and bake to lower targets. It’s more complicated to author meshes high and publish to lower targets; we did this on Crysis 1 and 2, where high-end PC saw higher mesh resolution characters than Xbox 360. I would say it’s hardest of all to make rigs, deformers, and animations for a high-spec hardware target and create a process to publish lower fidelity versions. No one wants to have different character skeletons on each hardware platform.

facial_complexity

You Deserve an Explanation

When we released the specs of our faces, people understandably were a bit taken aback.  Why on earth would you need 250 blendshapes if you have 260 joints? This above is actually a slide from our asset creation SIGGRAPH course that resonated very well with the audience.

Let’s take a look at some goals:

  1. Cut-scene fidelity in gameplay at any time- no cut-scene rigs
  2. Up to 70 characters on screen
  3. Able to run on multiple hardware specs

The only way to achieve the first two is through a very aggressive and granular level of detail (LOD) scheme. Once having that LOD system in place, the third item will come for free, as it did on our previous titles. However, as we have LODed meshes and materials, we had never LODed rigs.

On a feature film, we wouldn’t use joints, we would have a largely blendshape-only face.

But this doesn’t LOD well, we need to be able to strip out facial complexity in the distance and on other platforms.

Facial Level of Detail

So to achieve these goals, we must aggressively LOD our character faces.

Let’s generate some new goals:

  • Improve LOD system to allow the swapping or culling of skinned meshes per-mesh, each at hand-tailored distances per-character instance
  • Not only swap meshes, but skinning algorithms, materials, cull blendshapes, etc..
  • One skeleton – all levels of detail stored in one nested hierarchy, disable/reveal joints at different LOD levels, as I mention above, no one wants multiple skeletons
  • One animation set – drives all layers of detail because it’s one hierarchy, only the enabled joints receive animation
  • All facial animations shareable between characters
  • Faces snapped onto bodies at runtime – “Cry parent constraint” of sorts snaps head, neck, spine4, clavs, and upper arms of facial rig to body, allowing dynamic LODing of face irrespective of body.

LOD_hierarchy

One Hierarchy to Rule them All

Before going into the meshes, skinning algorithms, culling, etc.. it’s important to understand the hierarchy of the face. At any given mesh LOD level, there are many joints that are not skinned. Above you see three layers of joints, 9 LOD0, 3 LOD1, and 1 LOD2.

To use a single hierarchy, but have it drive meshes at different levels, you need to accomplish the following:

  • Make sure you have three layers that can drive different facial LODs, we had something like 260/70/15 on Ryse.
  • Each layer must be driven, and able to deform that LOD alone. Meaning when creating rig logic, you must start from the highest LOD and move down the chain. The LOD0 joints above would only be responsible for skinning the details of the face at LOD0, their gross movement comes from their parent, LOD1.

Here you can see the Marius example video from our slides. Notice the ORANGE joints are responsible for gross movement and the YELLOW or GREEN leaf joints just add detail.

jaw_drop_skel

 

Why blendshapes? Isn’t 260 joints enough?

The facial hierarchy and rig is consistent between all characters. The rig logic that drives those joints is changed and tweaked, the skinning is tweaked, but no two faces are identical. The blendshapes serve two main purposes:

1) Get the joint rig back onto the scan. Whatever the delta is from the joint rig to the scan data associated with that solved pose from the headcam data, bridge it. This means fat around Nero’s neck, bags under his eyes, his eyebrow region, etc.

2) Add volume where it’s lost due to joint skinning. Areas like the lips, the cheeks, rig functions like lips together, sticky lips, etc, require blendshapes.

nero_corectives

Look at the image above: there just aren’t enough joints in the brow to catch that micro-expression on Nero’s face. It comes through with the blendshape, and he goes from looking like you kicked his dog to his accusatory surprise when he figures out that you are Damocles.

A Look Under the Hood: Ryse Facial LODing

Thanks to the hard work of graphics engineer Jerome Charles we were able to granularly LOD our faces. These values are from your buddy Vitallion, who was a hero and could be a bit less aggressive. Many of the barbarians you fight en masse blew through all their blendshapes in 2m not 4m.

Assets / Technologies (LOD) by distance:

  • 0-4m – CPU skinning, 8 inf, 260 joints, 230 blendshapes, tangent update, 5k tris across multiple meshes
  • 4-7m – CPU skinning, 8 inf, 260 joints, 3-5k tris across multiple meshes with small face parts culled
  • 7-10m – GPU skinning, 4 inf, 70 joints, 2k mesh with integrated eyes
  • 10m+ – GPU skinning, 4 inf, <10 joints, <1k mesh

 

Here’s a different table showing the face mesh parts that we culled and when:

Face parts culled or swapped, by distance:

  • 4m – Eyebrow meshes replaced, baked into facial texture
  • 3m – Eyelash geometry culled
  • 3m – Eye AO ‘overlay’ layer culled
  • 4m – Eye balls removed, replaced with baked-in eyes in head mesh
  • 2m – Eye ‘water’ meniscus culled
  • 3m – Eye tearduct culled
  • 3m – Teeth swapped for built-in mesh
  • 3m – Tongue swapped for built-in mesh

Why isn’t this standard?

Because it’s very difficult and very complicated, there aren’t many people who can pull something like this off. On Ryse we partnered with my friend Vlad at 3Lateral; after 4 months working on the initial Marius facial prototype, he and his team were able to deliver 23 more facial rigs at the same fidelity in just under three months!

But also, there’s the whole discussion about whether the time and effort spent on that last 5% really pays off in the end. Why not just use PS3 facial rigs on all platforms and spend a little more on marketing? It’s happening! And those guys probably aren’t going bankrupt any time soon..  ¬.¬

I am insanely proud of what the team accomplished on Ryse. Facial rigging is nothing without a great bunch of artists, programmers, animators, etc. Here’s some good moments where the performances really come through, these are all the in-game meshes and rigs:

DISCLAIMER: All of the info above and more is publicly available in our SIGGRAPH 2014 course notes.

posted by Chris at 4:40 AM  

Monday, August 25, 2014

Maya 2015: Poly Combine Skinned Meshes?

At Crytek, we have a plugin to preserve skinning when hacking up and uniting meshes, based on this old post here. It’s released in the Tools folder of CryENGINE if you grabbed the engine on Steam. Imagine my surprise when I saw this option in Maya 2015:

polyUniteSkinning

It fires off a new polyUnite command called polyUniteSkinned [Maya 2015 Docs], which can merge skinned meshes. Has anyone gotten this to work? It doesn’t seem to work properly through the UI: it passes ‘name’ as a flag and fails because they removed that flag (seriously). It does seem to work in simple situations, as shown here, though it didn’t work attaching a face to a body. At least it shows ADSK is moving in the right direction!

lice_gb_weights

posted by Chris at 12:42 AM  

Tuesday, July 15, 2014

RigPorn: The Last of Us

I realize most of you have seen this, but for those of you who haven’t, Judd walks people through TLOU rigs with a focus on facial as well. Really great stuff.

posted by Chris at 3:15 PM  

Thursday, July 10, 2014

RigPorn: Call of Duty: Advanced Warfare

[Click to enlarge images]

(All images taken from recent CoD marketing materials)

codaw11

Here you can see the first-person hands rig, complete with camera frustum tools, and animation controls.

codaw2

Close-up of the generic male rig, no face rig loaded at the moment, but still interesting.

codaw3

Here’s a great shot of their first-person-hands-picker. I always love seeing how animators want to work, I really never have worked on a team who wanted a picker, much less something like this, but it’s great to see.

codaw4

Another picker, maybe the full body one, or cinematics only.

Shooting at giant’s new studio in manhattan beach?

codaw5

Surprisingly, this looks like it is being shot at Giant’s Manhattan Beach facility, also looks like Giant hardware and marker layout -feel free to correct me if I am wrong. If the unique poured concrete construction doesn’t give it away, they also released images with giant Avatar banners in the background.

datei_1399619105

posted by Chris at 1:34 AM  

Monday, July 7, 2014

Geodesic Voxel Binding in Maya 2015

If you’re like me, your ears will perk up at any technology promising a better initial skin bind. So I took a look at the new geodesic voxel binding in Maya 2015. I couldn’t find much information about it online, so I decided to do the usual and write the post I would have wanted to find when I googled. I hope it’s useful.

Background

This new way of skin binding was presented by Autodesk at SIGGRAPH 2013.

nanosuit

Here’s a link to the SIGGRAPH 2013 white paper: Geodesic Voxel Binding for Production Character Meshes, definitely worth checking out. I do like how Autodesk is now using the word ‘Production’ a lot more. It seems they are no longer using simple test cases to test pipelines and workflows. Above, they used our Nanosuit, from the Crysis franchise. Here’s the full video that accompanies the talk: [LINK]

How It Works

voxelinfo

The basic idea is that it voxelizes characters into three types of voxels, skeleton, interior, and boundary. This way it tries to eliminate cross-talk. At ILM we had a binding solution in Zeno that used mesh normals and this eliminated crosstalk between manifold parts like fingers, but most of this paper focuses on skinning non-manifold meshes, meshes with intersecting parts, open holes, etc.

In Practice

Here’s the hand of the Marius bust we send out for rigging tests. Notice that when skinned with Closest in Hierarchy, there is some significant crosstalk:

lice_ch

Here’s an initial finger bind with the new algorithm; there’s still some crosstalk at 1024 voxel resolution (the highest possible), but it’s much better:

lice_gb
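For reference, here’s roughly how I kick off these test binds from script; as far as I can tell bindMethod 3 is the new geodesic voxel option, but treat these flag values as assumptions and check the skinCluster docs (the voxel resolution itself was set in the bind options dialog):

import maya.cmds as cmds

#hypothetical joint and mesh names; bindMethod=3 should select geodesic voxel
joints = ['root_jnt', 'hand_jnt', 'finger_01_jnt']
mesh = 'marius_hand_geo'
sc = cmds.skinCluster(joints, mesh, bindMethod=3, maximumInfluences=4, toSelectedBones=True)[0]
print sc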

As someone who is very nitpicky about my skinning, *any* crosstalk at all is unacceptable, and it takes me about the same amount of time to clean the tiniest values as it does these larger ones. Here’s a closer look at some of the crosstalk from the ‘gb’ binding:

lice_gb_weights_trim

Crosstalk isn’t just bad for deformation, but these tiny little values are inefficient and sloppy, especially if you are sending it to a game engine.

Another area that requires significant cleanup is the underarm, where the serratus anterior lies. Here I thought the new approach would work very well; unfortunately, the binding didn’t have a noticeable difference from previous methods.

click to enlarge

click to enlarge (Head mesh from CryENGINE Asset Pack on Steam)

Few things are more difficult to skin than the human face. Here you can see traditional vs geodesic. I will say it’s definitely better than the old bind, but it still has issues. This is one of the first initial skin binds on a closed-mouth neutral bind pose I have seen that has no cross-talk on one lip. For the traditional bind on the left, I tweaked the falloffs across three different binds.

Multi-Threaded?

voxelbind_crop

Another thing I like is a hint at a multi-threaded future. The binding process (voxel calculation, etc.) is multi-threaded. At Crytek, we even make hardware purchasing decisions based on Maya not being multi-threaded: we get animators the fastest 2-core CPUs, which gives them better interactive framerates and still leaves a second core for a headless mayapy to export a long linear cutscene or animation. It’s nice to see Autodesk begin to think about multi-threading tools and processes.

In Conclusion

The new Geodesic Bind algorithm from Autodesk is a step forward. There’s still no free lunch, but I will be using this as my default bind in the future. I will update this post if I run into any problems or benefits not outlined here. It would be great if there were a voxel debug view, or the ability to dynamically drive voxel resolution with an input like vertex colors, a map, or polygon density.

Backwards Compatibility: New Nodes and Attrs

If you just want to use the latest Maya to try the feature, here are some gotchas. There is a new geomBind node, and some attributes on shape nodes:

// Error: file: C:/Users/chris/Desktop/TechAnimationTest/TechAnimationTest/Head_Mesh_skin.ma line 28725: The skinCluster ‘skinCluster1’ has no ‘gb’ attribute. //
// Warning: file: C:/Users/chris/Desktop/crytek_sdk_head_a/head_a.ma line 27464: Unrecognized node type ‘geomBind’; preserving node information during this session. //
// Error: file: C:/Users/chris/Desktop/crytek_sdk_head_a/head_a.ma line 34: The mesh ‘eyes_MSH’ has no ‘.sdt’ attribute. //

The geomBind node stores ‘the post voxel validation state performed during the geodesic voxel bind algorithm’ and some other attributes. It has a message attr that connects to a skinCluster. The ‘sdt’ attr on shapes is not related to skinning; it is a new ‘Subdivision Method’ attr for the OpenSubdiv support.

geomBindNodes

That said, it seems to work fine for me if I just delete that stuff; the skin weights are fine.
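In case it helps, this is roughly what ‘delete that stuff’ looked like for me when opening one of these files in an older Maya: the geomBind node comes through as an unknown node, so clearing unknown nodes (and ignoring the missing-attribute warnings) was enough. Treat this as an assumption about your scene, not a general fix:

import maya.cmds as cmds

#in older Mayas the geomBind node is preserved as an 'unknown' node;
#deleting unknown nodes dropped it without touching the skin weights for me
unknown = cmds.ls(type='unknown') or []
if unknown:
    cmds.delete(unknown)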

 

posted by Chris at 1:29 AM  

Sunday, May 11, 2014

Maya: Vector Math by Example

BEFORE WE BEGIN

This post is about how to use vector math and trigonometric functions in Maya. It is not a linear algebra or vector math course; it should give you what you need to follow along in Maya while you learn with online materials. Khan Academy is a great online learning resource for math, and Mathematics for Computer Graphics and Linear Algebra and Its Applications are very good books. Gilbert Strang, the author of the latter, has his entire MIT Linear Algebra course lectures here in video form. Also, Volume 2 of Complete Maya Programming has some vector math examples in MEL and C++.

vector_wikipedia

VECTORS

Think of the white vector above as a movement. It does have three scalar values (ax, ay, az), sure, but do not think of a vector as a point or a position. When you see a vector, I believe it helps to imagine it as a movement from 0,0,0 – an origin. We don’t know where it started, we only know the movement.

A vector has been normalized, or is considered a unit vector, when its length is one. This is achieved by dividing each component by the length.
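As a quick sketch of what that means in code (the vector values here are arbitrary):

import math

v = (3.0, 4.0, 0.0)                               #arbitrary example vector
length = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)   #5.0
unit = (v[0]/length, v[1]/length, v[2]/length)
print unit                                        #(0.6, 0.8, 0.0)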

VECTOR LIBRARIES

There are many Python libraries dedicated to vector math, but none ship with Python itself. I have tried numPy, then pyEuclid, and finally piMath. It can definitely be a benefit to load the same vector class across multiple apps like Maya, MotionBuilder, etc. But I used those at a time when MotionBuilder had no vector class, and before Maya had a Python API. Today, I use the vector class built into the Maya Python API (2.0), which wraps the underlying Maya C++ code: MVector

I had to call out 2.0 above because, for those of you using the old API, you have to ‘cast’ your vectors to and from Maya types: classes like MVector (Maya’s vector class) don’t accept Python objects like lists or tuples. This is still the case with the 2014 SWIG implementation of the default API, but not with API 2.0. One solution is to override the MVector class so that it accepts Python lists and tuples, essentially casting them for you automatically:

class MVector(om.MVector):
    def __init__(self, *args):
        #accept a single list/tuple of three values as well as three scalars
        if len(args) == 1 and isinstance(args[0], (list, tuple)) and len(args[0]) == 3:
            om.MVector.__init__(self, args[0][0], args[0][1], args[0][2])
        else:
            om.MVector.__init__(self, *args)

But that aside, just use Maya Python API 2.0:

#import API 2.0
import maya.api.OpenMaya as om
#import old API
import maya.OpenMaya as om_old

 

CREATING VECTORS IN MAYA

Let’s first create two cubes and move them:

import maya.cmds as cmds
import maya.api.OpenMaya as om
cube1, cube2 = cmds.polyCube()[0], cmds.polyCube()[0]
cmds.xform(cube2, t=(1,2,3))
cmds.xform(cube1, t=(3,5,2))

Let’s get the translation of each, and store those as MVectors

t1, t2 = cmds.xform(cube1, t=1, q=1), cmds.xform(cube2, t=1, q=1)
print t1,t2
v1, v2 = om.MVector(t1), om.MVector(t2)
print v1, v2

This will return the translation in the form [x, y, z], and also the MVector, which will print: (x, y, z). In the old API it prints: <__main__.MVector; proxy of <Swig Object of type ‘MVector *’ at 0x000000002941D2D0> >. This is a SWIG-wrapped C++ object; API 2.0 prints the vector itself.

Note: I just told you to think of vectors as a movement and not as a position, and then the first example I give stores a translation in a vector. Maybe not the best, but remember that this translation is really being stored as a movement in space from an origin.

So let’s start doing stuff and things.
 

LENGTH / DISTANCE / MAGNITUDE

We have two translations, both stored as vectors; let’s get the distance between them. To do this, we want to make a new vector that describes a ray from one location to the other and then find its length, or magnitude. We get that vector by subtracting each component of v1 from v2:

v = v2-v1
print v

This results in ‘-2.0 -3.0 1.0’.

To get the length of the vector we take the square root of the sum of x, y, and z squared: sqrt(x^2 + y^2 + z^2). But as we haven’t covered the math module yet, let’s just ask the MVector for its ‘length’:

print om.MVector(v2-v1).length()

This will return 3.74165738677, which, if you snap a measure tool on the cubes, you can verify:

distance

Use Case: Distance Check

As every joint in a hierarchy is in its parent’s space, a joint’s ‘magnitude’ is its length. Let’s create a lot of joints, then select them by joint length.

import maya.cmds as cmds
import random as r
import maya.api.OpenMaya as om
 
root = cmds.joint()
jnts = []
 
for i in range(0, 2000):
    cmds.select(cl=1)
    jnt = cmds.joint()
    trans = (r.randrange(-100,100), r.randrange(-100,100), r.randrange(-100,100))
    cmds.xform(jnt, t=trans)
    jnts.append(jnt)
 
cmds.parent(jnts, root)

joint_dist

So we’ve created this cloud of joints, but let’s just select those joints with a joint length of less than 50.

sel = []
for jnt in jnts:
    v = om.MVector(cmds.xform(jnt, t=1, q=1))
    if v.length() < 50: sel.append(jnt)
 
cmds.select(sel)

 

DOT PRODUCT / ANGLE BETWEEN TWO VECTORS

The dot product is a scalar value obtained by multiplying the components of two vectors pairwise and summing the results (x1*x2 + y1*y2 + z1*z2). That definition alone doesn’t make much sense, so I will tell you that the dot product is extremely useful for finding the angle between two vectors, or checking which general direction something is pointing.

dot = v1*v2
print dot

USE CASE: Direction Test

direction

The dot product of two normalized vectors will always be between -1.0 and 1.0. If the dot product is greater than zero, the vectors are pointing in the same general direction; zero means they are perpendicular; less than zero means they point in opposite directions. So let’s loop through our joints and select those that are facing the x direction:

sel = []
for jnt in jnts:
    v = om.MVector(cmds.xform(jnt, t=1, q=1)).normal()
    dot = v*om.MVector([1,0,0])
    if dot > 0.7: sel.append(jnt)
cmds.select(sel)

USE CASE: Test World Colinearity

This one comes from last week in the office: one of my guys wanted to know how to check which way in the world something was facing. I believe it was to derive some information from arbitrary skeletons. This builds on the above by getting each axis vector of a node in world space.

def getLocalVecToWorldSpace(node, vec=om.MVector.kXaxisVector):
    matrix = om.MGlobal.getSelectionListByName(node).getDagPath(0).inclusiveMatrix()
    vec = (vec * matrix).normal()
    return vec
 
 
def axisVectorColinearity(node, vec):
    vec = om.MVector(vec)
 
    x = getLocalVecToWorldSpace(node, vec=om.MVector.kXaxisVector)
    y = getLocalVecToWorldSpace(node, vec=om.MVector.kYaxisVector)
    z = getLocalVecToWorldSpace(node, vec=om.MVector.kZaxisVector)
 
    #return the dot products
    return {'x': vec*x, 'y':vec*y, 'z':vec*z}
 
jnt = cmds.joint()
print axisVectorColinearity(jnt, [0,0,1])

You can rotate the joint around and you will see which axis is most closely pointing to the world space vector you have given as an input.

USE CASE: Angle Between Vectors

angle

When working with unit vectors, we can take the arc cosine of a dot product to derive the angle between the two vectors, but this requires trigonometric functions, which are not available in our vector class; for this we must import the math module. Scratching the code above, let’s find the angle between two joints:

import maya.cmds as cmds
import maya.api.OpenMaya as om
import math
 
jnt1 = cmds.joint()
cmds.select(cl=1)
jnt2 = cmds.joint()
cmds.xform(jnt2, t=(0,0,10))
cmds.xform(jnt1, t=(10,0,0))
cmds.select(cl=1)
root = cmds.joint()
cmds.parent([jnt1, jnt2], root)
 
v1 = om.MVector(cmds.xform(jnt1, t=1, q=1)).normal()
v2 = om.MVector(cmds.xform(jnt2, t=1, q=1)).normal()
 
dot = v1*v2
print dot
print math.acos(dot)
print math.acos(dot) * 180 / math.pi

So at the end here, the arc cosine of the dot product returns the angle in radians (1.57079632679), which we convert to degrees by multiplying it by 180 and dividing by pi (90.0). To check your work: there is no angle tool in Maya, but you can create a circle shape and set the sweep degrees to your result.

Now that you know how to convert radians to degrees, note that if you store the result of the above in an MAngle class, you can ask for it however you like:

print om.MAngle(math.acos(dot)).asDegrees()

Now that you know how to do this, there’s an even easier way: using the angle function of the MVector class, you can ask it for the angle given a second vector:

print v1.angle(v2)

There are also useful methods: v1.rotateBy(r, r, r) for an offset, and v1.rotateTo(v2). I say (r, r, r) in my example, but the rotateBy method takes angles or radians.

CHALLENGE: Can you write your own rad_to_deg and deg_to_rad utility methods?
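In case you want to check your work, here’s one possible answer using the standard 180/pi conversion (Python’s math module also ships math.degrees and math.radians, which do the same thing):

import math

def rad_to_deg(radians):
    return radians * 180.0 / math.pi

def deg_to_rad(degrees):
    return degrees * math.pi / 180.0

print rad_to_deg(math.pi)   #180.0
print deg_to_rad(90.0)      #1.5707963...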

USE CASE: Orient-Driver

poseDriver
Moving along, let’s apply these concepts to something more interesting. Let’s drive a blendshape based on the orientation of a joint. Since the dot product is a scalar value, we can pipe it right into a blendshape: when the dot product is 1.0, we know that the orientations match; when it’s 0, we know they are perpendicular.

vecPoseDriver

We will use a locator constrained to the child to help in deriving a vector. The fourByFourMatrix stores the original position of the locator. I tried using the holdMatrix node, which should store a cached version of the original locator matrix, but it kept updating. (?) We use the vectorProduct node in ‘dot product’ mode to get the dot product of the original vector and the current vector of the joint. We then pipe this value into the weight of the blendshape node.

Now, this simple example doesn’t take twist into account, and we aren’t setting a falloff or cone: the weight will be 1.0 when the vectors align (blendshape at 100%) and 0.0 when they’re perpendicular (blendshape at 0%). I also don’t clamp the dot product, so the blendshape input can go to -1.
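If you want to rough in this network from script, here’s a minimal sketch of the same idea. It is not the exact network in the image: to keep it short I cache the rest vector directly on the vectorProduct’s input1 instead of using a fourByFourMatrix, and the joint and blendShape names are placeholders:

import maya.cmds as cmds

jnt = 'joint1'                 #driving joint (placeholder name)
bs = 'faceShapes'              #existing blendShape node (placeholder name)

loc = cmds.spaceLocator(name='poseDriver_loc')[0]
cmds.parentConstraint(jnt, loc)

#cache the locator's current translation as the 'rest' vector
rest = cmds.xform(loc, q=1, ws=1, t=1)

vp = cmds.createNode('vectorProduct')
cmds.setAttr(vp + '.operation', 1)          #1 = dot product
cmds.setAttr(vp + '.normalizeOutput', 1)    #as I understand it, this gives us the cosine of the angle
cmds.setAttr(vp + '.input1X', rest[0])
cmds.setAttr(vp + '.input1Y', rest[1])
cmds.setAttr(vp + '.input1Z', rest[2])
cmds.connectAttr(loc + '.translate', vp + '.input2')

#the scalar lands in outputX; pipe it straight into the blendshape weight
cmds.connectAttr(vp + '.outputX', bs + '.weight[0]')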
 

CROSS PRODUCT / PERPENDICULAR VECTOR TO TWO VECTORS

The cross product results in a vector that is perpendicular to two vectors. Generally you would compute (v1.y*v2.z - v1.z*v2.y, v1.z*v2.x - v1.x*v2.z, v1.x*v2.y - v1.y*v2.x), but luckily the vector class manages this for us with the ‘^’ operator:

cross = v1^v2
print cross

USE CASE: Building a coordinate frame

crossProduct

If we take the cross product v1^v2 above, and then cross that result with v1 again, (v1 x v2) x v1, we now have a third perpendicular vector and can build a coordinate system, or ‘orthonormal basis’. A useful example would be aligning a node to a nurbs curve using the pointOnCurveInfo node.

crossProduct

In the example above, we are using two cross products to build a matrix from the tangent of the pointOnCurveInfo and its position, then decomposing this matrix to set the orientation and position of a locator.
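The screenshot shows this as a live node network (fourByFourMatrix into decomposeMatrix); as a minimal sketch of the same math done once in Python, assuming a curve shape named ‘curveShape1’ and using world Y as the up hint:

import maya.cmds as cmds
import maya.api.OpenMaya as om

poci = cmds.createNode('pointOnCurveInfo')
cmds.connectAttr('curveShape1.worldSpace[0]', poci + '.inputCurve')
cmds.setAttr(poci + '.parameter', 0.5)

pos = om.MVector(cmds.getAttr(poci + '.position')[0])
tangent = om.MVector(cmds.getAttr(poci + '.tangent')[0]).normal()

side = (tangent ^ om.MVector(0, 1, 0)).normal()   #first cross product
up = (side ^ tangent).normal()                    #second cross completes the orthonormal basis

#build a world matrix from the three axes and the position, and apply it to a locator
loc = cmds.spaceLocator(name='curveAlign_loc')[0]
cmds.xform(loc, ws=1, m=[tangent.x, tangent.y, tangent.z, 0,
                         up.x, up.y, up.z, 0,
                         side.x, side.y, side.z, 0,
                         pos.x, pos.y, pos.z, 1])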



Many people put content like this behind a paywall.
If you found this useful, please consider buying me a beer.

posted by Chris at 11:49 PM  

Saturday, August 24, 2013

Ryse at the Anaheim Autodesk User Event

I have been working on Ryse for almost two years now; it’s one of the most amazing projects I have had the chance to work on. The team we have assembled is just amazing, and it’s great to be in a position to show people what games can look like on next-gen hardware. Autodesk asked us to come out to Anaheim and talk about some of the pipeline work we have been doing, and it’s great to finally be able to share some of this stuff.

A lot of people have been asking about the fidelity, like ‘where are all those polygons?’. If you look at the video, you will see that the regular Romans actually have leather ties modeled that deform with the movement of the plates, and something that might never be noticed: deforming leather straps modeled and rigged underneath the plates, holding together every piece of Lorica Segmentata armor, and underneath that: a red tunic! Ryse is a labor of love!

We’re all working pretty hard right now, but it’s the kind of ‘pixel fucking’ that makes great art – we’re really polishing, and having a blast. We hope the characters and world we have created knock your socks off in November.

posted by Chris at 11:16 PM  

Monday, February 11, 2013

Object Oriented Python in Maya Pt. 1

I have written many tools at different companies; I taught myself and don’t have a CS degree. I enjoy shot-sculpting, skinning, and have been known to tweak parameters of on-screen visuals for hours; I don’t consider myself a ‘coder’; I still can’t allocate my own memory. I feel I haven’t really used OOP from an architecture standpoint. So I bought an OOP book, and set out on a journey of self improvement.

‘OOP’ In Maya

In Maya, you often use DG nodes as ‘objects’. At Crytek we have our own modular nodes that create meta-frameworks encapsulating the character pipeline at multiple levels (characters, characterParts, and rigParts). Without knowing it, we were using Object Oriented Analysis when designing our frameworks, and even had some charts that look quite a bit like UML. DG node networks are often connected with message attrs; this is akin to a pointer to the object in memory, whereas with a Python ‘object’ I felt it could always easily lose its mapping to the scene.

It is now possible with the OpenMaya C++ API to store a pointer to the DG node in memory and just request the full DAG path any time you want it; also, PyMel objects are Python classes and link to the DG node even when the string name changes.

“John is 47 Years Old and 6 Feet Tall”

Classes always seemed great for times when I had a bunch of data objects; the classic uses are books in a library, or customers: John is 47 years old and likes the color purple. Awesome. However, in Maya, all our data is in nodes already, those nodes have attributes, and those attributes serialize into a Maya file when I save: so I never really felt the need to use classes.

Still, all this ‘getting’, ‘setting’ and ‘listing’ really grows tiresome, even when you have custom methods to do it fairly easily.

It was difficult to find any really useful examples of OOP with classes in Maya. Most of our code is for ‘constructing’: building a rig, building a window, etc. Code that runs in a linear fashion and does ‘stuff’. There’s no huge architecture, the architecture is Maya itself.

Class Warfare

I wanted to package my information in classes and pass that back and forth in a more elegant way – at all times, not just while constructing things. So for classes to be useful to me, I needed them to synchronously exist with DG nodes.

I also didn’t want to have to get and set the information when syncing the classes with DG nodes, that kind of defeats the purpose of Python classes IMO.

Any time I opened a tool I would ‘wrap’ DG nodes in classes that harnessed the power of Python and OOP. Doing this meant diving a bit more into the deep end, but since that was what was useful to me, that’s what I want to talk about here.

To demonstrate, let’s construct this example:

#the setup
loc = cmds.spaceLocator()[0]
cons = [cmds.circle()[0], cmds.circle()[0]]
meshes = [cmds.sphere()[0], cmds.sphere()[0], cmds.sphere()[0]]
cmds.addAttr(loc, sn='controllers', at='message')
cmds.addAttr(cons, sn='rigging', at='message')
for con in cons: cmds.connectAttr(loc + '.controllers', con + '.rigging')
cmds.addAttr(loc, sn='rendermeshes', at='message')
cmds.addAttr(meshes, sn='rendermesh', at='message')
for mesh in meshes: cmds.connectAttr(loc + '.rendermeshes', mesh + '.rendermesh')

So now we have this little node network:

node_network

Now let’s wrap this network in a class. We are going to use @property to give us the functionality of an attribute, but it’s really a method that runs and returns a value (from the DG node) when the ‘attribute’ is queried. I believe using properties is key to harnessing the power of classes in Maya.

class GameThing(object):
	def __init__(self, node):
		self.node = node
 
	#controllers
	@property
	def controllers(self):
		return cmds.listConnections(self.node + ".controllers")

So now we can query the ‘controllers’ attribute/property, and it returns our controllers:

test = GameThing(loc)
print test.controllers
##>>[u'nurbsCircle2', u'nurbsCircle1']

Next up, we add a setter, which runs code when you set a property ‘attribute’:

class GameThing(object):
	def __init__(self, node):
		self.node = node
 
	#controllers
	@property
	def controllers(self):
		return cmds.listConnections(self.node + ".controllers")
 
	@controllers.setter
	def controllers(self, cons):
		#disconnect existing controller connections (listConnections returns None if there are none)
		for con in cmds.listConnections(self.node + '.controllers') or []:
			cmds.disconnectAttr(self.node + '.controllers', con + '.rigging')
 
		for con in cons:
			if cmds.objExists(con):
				if not cmds.attributeQuery('rigging', n=con, ex=1):
					cmds.addAttr(con, longName='rigging', attributeType='message', s=1)
				cmds.connectAttr((self.node + '.controllers'), (con + '.rigging'), f=1)
			else:
				cmds.error(con + ' does not exist!')

So now when we set the ‘controllers’ attribute/property, it runs a method that blows away all current message connections and adds new ones connecting your cons:

test = GameThing(loc)
print test.controllers
##>>[u'nurbsCircle2', u'nurbsCircle1']
test.controllers = [cmds.nurbsSquare()[0]]
print test.controllers
##>>[u'nurbsSquare1']

To me, something like properties makes classes infinitely more useful in Maya. For a short time we tried to engineer a DG node at Crytek that when an attr changed, could eval a string with a similar name on the node. This is essentially what a property can do, and it’s pretty powerful. Take a moment to look through code of some of the real ‘heavy lifters’ in the field, like zooToolBox, and you’ll see @property all over the place.

I hope you found this as useful as I would have.

posted by Chris at 1:03 AM  

Monday, January 7, 2013

Abusing ‘Blind Data’ in Maya

‘Blind data’ is custom data that you can store on any object or its components (vertex, edge, polygon, etc). The documentation says ‘Blind data is information stored with polygons which is not used by Maya in any way..’ I believe it is used when importing meshes from other apps that have properties that do not map to Maya, so that when you take them back to those apps, those properties remain.

Anyway, the important point here is that blind data is metadata (int, float, double, boolean, string, binary) that you can attach to any component. It matters not what happens to said component: you can extract a polygon from a mesh, its index will have changed, its object will have changed, but its blind data will remain with it. The only drawback is that it can be painfully slow to write this data, but we will get to that later.

Simple Example

First let’s create a blind data template; this is required to store the data later. We use the command ‘blindDataType’ to create a template for a string type called ‘skinningInfo’, or ‘skin’ for short, giving it the ID 12344. Then we query the ID and it returns the blind data attribs we have created.

cmds.blindDataType(id=12344, dt='string', ldn='skinningInfo', sdn='skin')
print cmds.blindDataType(id=12344, tn=1, q=1)
>>['skinningInfo', 'skin', 'string']

So now we have our template; let’s try using it. This is more focused on getting the idea across than speed:

#node is the skinned mesh, sc is its skinCluster (defined elsewhere)
#query vertex # of mesh
v = cmds.polyEvaluate(node, v=1)
#get influences (the same for every vertex)
infs = cmds.skinCluster(sc, inf=1, q=1)
#loop through vertices
for vtx in range(0, v):
    objVtx = node + ".vtx[" + str(vtx) + "]"
    #get weights
    vals = cmds.skinPercent(sc, objVtx, q=1, v=1)
    #build dict of influence:weight
    weightDict = {}
    for i in range(0, len(infs)):
        weightDict[infs[i]] = vals[i]
    #write value to blind data
    cmds.polyBlindData(objVtx, id=12344, at='vertex', ldn='skinningInfo', sd=str(weightDict))

So here you have saved a dictionary per vertex that has key/value pairs of influence/weight. You can query like so:

#I have a vertex selected in component mode
print cmds.polyQueryBlindData(cmds.ls(sl=1), id=12344, showComp=1)
['polySurface2.vtx[64].skin', "{u'joint2': 0.49755714634259796, 'joint3': 0.49755714634259784, 'joint1': 0.0048857073148042395}"]

Now On To Something More Useful

So let’s create a function to store skinning data per-vertex. As you may have seen with the above, that was painfully slow. If you have written any skinning tools, you know that the solution to this (other than learning C++) is to apply your change to all vertices at once. Below we build two lists, one of vertices and one of weights, then we write the blind data for all of them in a single call.

def storeBlindSkinning(mesh, sc):
    '''
    mesh is a skinned mesh, and sc is the skinCluster affecting the mesh
    '''
    vtxList = []
    vtxWeights = []
    v = cmds.polyEvaluate(mesh, v=1)
    infs = cmds.skinCluster(sc, inf=1, q=1)
    for vtx in range(0, v):
        objVtx = mesh + ".vtx[" + str(vtx) + "]"
        vtxList.append(objVtx)
        vals = cmds.skinPercent(sc, objVtx, q=1, v=1)
        #build the influence:weight dict for this vertex
        weightDict = {}
        for i in range(0, len(infs)):
            weightDict[infs[i]] = vals[i]
        vtxWeights.append(str(weightDict))
    #write all the blind data in one call
    cmds.polyBlindData(vtxList, id=12344, at='vertex', ldn='skinningInfo', sd=vtxWeights)

Setting all the data at once is about a third faster; however, setting this data still takes quite some time, and you may want to take the hit for a progress bar (break it up into groups). On ~60,000 vertices this took 10 minutes (15 minutes doing it inside the loop). I don’t mind that hit if it means that I can now detach/alter/slice my mesh without losing skinning data. You can even extract faces and the new vertices created will get the same blind data as their originals. (one becomes two)

As always, the C++ API is much faster; my colleague Bogdan speed-tested the above function, and 50,000 vertices took only a few milliseconds, compared to 10 minutes in pythonland.

Remember, there are other ways to store skinning data, using UVs, position, vertex color channels, etc. I just wanted to introduce people to blind data in Maya and show a potential use.
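To close the loop, here’s a rough sketch (not from the original post) of reading those per-vertex dictionaries back and applying them with skinPercent, assuming the same template ID of 12344:

import ast
import maya.cmds as cmds

def restoreBlindSkinning(mesh, sc):
    '''
    mesh has skinningInfo blind data stored per-vertex, sc is the skinCluster to write to
    '''
    v = cmds.polyEvaluate(mesh, v=1)
    for vtx in range(0, v):
        objVtx = mesh + ".vtx[" + str(vtx) + "]"
        data = cmds.polyQueryBlindData(objVtx, id=12344, showComp=1)
        if not data:
            continue
        #data comes back like ['mesh.vtx[64].skin', "{'joint2': 0.49, ...}"]
        weightDict = ast.literal_eval(data[1])
        cmds.skinPercent(sc, objVtx, transformValue=weightDict.items())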

posted by admin at 4:04 AM  