Stumbling Toward 'Awesomeness'

A Technical Art Blog

Monday, September 26, 2016

Save/Load SkinWeights 125x Faster

Previously I discussed the promise of the Maya command ‘deformerWeights‘. The tool that ships with Maya was not very useful, but the code it called was 125 times faster than Python if you used it correctly.

Let’s make a Python class that can save and load skin weights. You hand it a few hundred skinned meshes (the average Paragon character), it saves the weights; you delete history on the meshes, and it loads the weights back on. What I just described is the process riggers go through when updating a rig or a mesh _every day_.

Below we begin the class: we import a Python module to parse XML, and we say “if the user passed in a path, let’s parse it.”

#we import an xml parser that ships with python
import xml.etree.ElementTree
#we will need maya.cmds later, when we rebuild skinClusters
import maya.cmds as cmds
 
#this will be our class, which can take the path to a file on disk
class SkinDeformerWeights(object):
    def __init__(self, path=None):
        self.path = path
        #keyed by shape name, stores the SkinnedShape class we build below
        self.shapes = {}
 
        if self.path:
            self.parseFile(self.path)

Next, let’s make this parseFile function. Why is parsing the file important? In the last post we found a bug: saved weights are not applied properly unless the skinCluster has the *exact* same joints it was exported with. We’re going to read the file and make a skinCluster that works.

#the function takes a path to the file we want to parse
def parseFile(self, path):
    root = xml.etree.ElementTree.parse(path).getroot()
 
    #set the header info
    for atype in root.findall('headerInfo'):
        self.fileName = atype.get('fileName')
 
    for atype in root.findall('weights'):
        jnt = atype.get('source')
        shape = atype.get('shape')
        clusterName = atype.get('deformer')

Now we’re getting some data here, we know that the format can save deformers for multiple shapes, let’s make a shape class and store these.

class SkinnedShape(object):
    def __init__(self, joints=None, shape=None, skin=None, verts=None):
        self.joints = joints
        self.shape = shape
        self.skin = skin
        self.verts = verts

Let’s use that when we parse the file, and store the data we parsed in our new shape class:

#the function takes a path to the file we want to parse
def parseFile(self, path):
    root = xml.etree.ElementTree.parse(path).getroot()
 
    #set the header info
    for atype in root.findall('headerInfo'):
        self.fileName = atype.get('fileName')
 
    for atype in root.findall('weights'):
        jnt = atype.get('source')
        shape = atype.get('shape')
        clusterName = atype.get('deformer')
 
        if shape not in self.shapes.keys():
            self.shapes[shape] = SkinnedShape(shape=shape, skin=clusterName, joints=[jnt])
        else:
            s = self.shapes[shape]
            s.joints.append(jnt)

So now we have a dictionary of our shape classes, and each knows the shape, cluster name, and all influences. This is important because, if you read the previous post, the weights will only load onto a skinCluster with the exact same number and names of joints.
Now we write a method to apply the weight info we parsed:

def applyWeightInfo(self):
    for shape in self.shapes:
        #make a skincluster using the joints
        if cmds.objExists(shape):
            ss = self.shapes[shape]
            skinList = ss.joints
            skinList.append(shape)
            cmds.select(cl=1)
            cmds.select(skinList)
            cluster = cmds.skinCluster(name=ss.skin, tsb=1)
            fname = self.path.split('\\')[-1]
            dir = self.path.replace(fname,'')
            cmds.deformerWeights(fname , path = dir, deformer=ss.skin, im=1)

And there you go. Let’s also write a method to export/save the skinWeights from a list of meshes so we never have to use the Export DeformerWeights tool:

#note: this method uses the time and maya.mel modules (import time, import maya.mel as mel)
def saveWeightInfo(self, fpath, meshes, all=True):
    t1 = time.time()
 
    #get skin clusters
    meshDict = {}
    for mesh in meshes:
        sc = mel.eval('findRelatedSkinCluster '+mesh)
        #not using shape atm, mesh instead
        msh =  cmds.listRelatives(mesh, shapes=1)
        if sc != '':
            meshDict[sc] = mesh
        else:
            cmds.warning('>>>saveWeightInfo: ' + mesh + ' is not connected to a skinCluster!')
    fname = fpath.split('\\')[-1]
    dir = fpath.replace(fname,'')
 
    for skin in meshDict:
        cmds.deformerWeights(meshDict[skin] + '.skinWeights', path=dir, ex=1, deformer=skin)
 
    elapsed = time.time()-t1
    print 'Exported skinWeights for', len(meshes), 'meshes in', elapsed, 'seconds.'

You give this a folder and it’ll dump one file per skinCluster into that folder.
Here is the final class we’ve created [deformerWeights.py], and let’s give it a test run.

sdw = SkinDeformerWeights()
sdw.saveWeightInfo('e:\\gadget\\', cmds.ls(sl=1))
>>>Exported skinWeights for 214 meshes in 2.433 seconds.

Let’s now load them back; we iterate through the files in the directory, parse each, and apply the weights:

import os
t1=time.time()
path = "e:\\gadget\\"
files = 0
for file in os.listdir(path):
    if file.endswith(".skinWeights"):
        fpath = path + file
        sdw = SkinDeformerWeights(path=fpath)
        sdw.applyWeightInfo()
        files += 1
elapsed = time.time() - t1
print 'Loaded skinWeights for', files, 'meshes in', elapsed, 'seconds.'
>>> Loaded skinWeights for 214 meshes in 8.432 seconds.


So that’s a simple 50-line wrapper to save and load skinWeights using the deformerWeights command. No longer do we need to write C++ API plugins to save and load weights quickly.

posted by Chris at 1:27 AM  

Saturday, September 24, 2016

DeformerWeights Command, Cloaked Savior?

export-skin

This post was originally going to be entitled ‘Why Everyone Writes their Own Skin Exporter’. Maya’s smooth skin weight export tool hasn’t changed since Maya 4.0 when it was introduced 15 years ago. It saves out greyscale weightmaps in UV space, one image per joint influence per mesh. The only update they have done in 15 years is change the slider to go from a max of 1024 pixels to 4096, and now 8192!

Autodesk understands the need and importance of a skin weight exporter, they even ship it as a C++ example in their Maya Developer Kit / SDK. So why are they still shipping this abomination above?

got-this

Like the blind data discussed before, in pythonland we cannot set skin weights in one go; we must iterate through the entire mesh, setting weights one vertex at a time. This means a Maya feature or C++ plugin can save and load skin weights in seconds where Python takes 15 minutes.
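
To make that concrete, here’s a minimal sketch of the slow path (the mesh, cluster, and joint names are hypothetical); one skinPercent call per vertex is exactly why this takes minutes on a production mesh:

#a sketch of the slow per-vertex path; 'body_geo', 'skinCluster1' and 'root' are hypothetical names
import maya.cmds as cmds
 
def setWeightsTheSlowWay(mesh='body_geo', cluster='skinCluster1'):
    numVerts = cmds.polyEvaluate(mesh, vertex=1)
    for i in range(numVerts):
        #one skinPercent call per vertex -- this is the bottleneck
        cmds.skinPercent(cluster, '%s.vtx[%d]' % (mesh, i), transformValue=[('root', 1.0)])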

I have written lots of skin weight save/load crap in my time, and as I sat down one night to do that very thing for an instructional blog post, we had an interesting discussion in the office the next day. ADSK added ‘Export Deformer Weights‘ in 2011, but it has never really worked. I don’t know a single person who uses it. But it does save and load ‘deformer’ weights via the C++ API –so there’s real promise here!

 


Save/load skin weights fast without writing a custom Maya plugin? This is kinda like the holy grail, which is ridiculous, but you should see the lengths people go to eke out a little more performance! My personal favorite was MacaroniKazoo back in 2010 reaching in and setting the skinCluster weight list directly by hand using an unholy conglomeration of python API and MEL commands. Tyler Thornock has a post that builds on this here.

So I asked the guys if anyone had looked at Export Deformer Weights recently, everyone either hadn’t heard of it, heard it was shit, or had some real first-hand experience of it being “A bit shit.” But still –the promise was there!

Export Deformer Weights: Broken and Backwards

So, first things first: I made an awesome test case. I am going to go over all the gotchas and things that are broken, but if you don’t want to take this voyage of discovery, skip this section.

skin01

I open the deformer weight export tool, and just wow.. I mean the UI team really likes its space:

export-def1

I save my skin weights to an XML file, delete history on the mesh, and open the import UI:

export-def2

Gotcha 1) It requires a deformer to load weights onto. You need to re-skin your mesh.

I re-skinned my mesh and loaded the XML using Index.. drumroll..

skin02

Well, this is definitely not applying the weights back by vertex index. I decided to try ‘Nearest’:

skin03

Gotcha 2) Of the options [Index, Nearest, Over], ‘Index’ is somehow lossy, anything other than ‘Index’ seems to crash often, and ‘Nearest’ seems totally borked. (above)

So this was when I just began to think this was a complete waste of my time. I was pretty annoyed that they even shipped a tool like this, something that is so needed and so important, yet crashes frequently and completely trashes your data when it does work.

this-is-fine

Not Taking ‘Broken and Backwards’ for an Answer

I am already invested and, not understanding how loading weights by point index could be lossy and broken, I decided to look at the XML file. The tool writes out one XML file per skinCluster, here’s a rundown of the file format:

Mesh Info (Shape) – the vertices of the shape are stored in local space as x, y, z with a corresponding index

<?xml version="1.0"?>
<deformerWeight>
  <headerInfo fileName="C:/Users/chris.evans/Desktop/test_sphere.xml" worldMatrix="1.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 1.000000 "/>
  <shape name="pSphereShape1" group="7" stride="3" size="382" max="382">
    <point index="0" value=" 0.148778 -0.987688 -0.048341"/>
    <point index="1" value=" 0.126558 -0.987688 -0.091950"/>
    <point index="2" value=" 0.091950 -0.987688 -0.126558"/>
    ...

Joints (Weights) – there is one block per joint that calls out each vertex it has influences for on the shape

  <weights deformer="skinCluster1" source="root" shape="pSphereShape1" layer="0" defaultValue="0.000" size="201" max="380">
    <point index="0" value="0.503"/>
    <point index="1" value="0.503"/>
    <point index="2" value="0.503"/>
    ...

And that’s it: not a lot of data, nothing about the skinCluster attributes or options, and no support for spaces like UV or world. (odd, since the old tool has had UV support for 15 years)

Next I decided to run the tool again and see what command it was calling. I then looked up the command documentation, and here’s where it gets interesting; go ahead, take a look!

def_bary

So now I am hooked, someone is putting some thought into this –at least on some level.

image

I don’t at all understand why the UI has none of these options, but I need to get this working. If you read through the docs, the command also supports:

  • Exporting multiple skinClusters/shapes/deformers per XML file
  • Exporting skinCluster/deformer attributes like ‘skinningMethod’ and ‘envelope’
  • Local and world space positions with a positional tolerance

“Someone is putting some thought into this”

So I started trying to figure out why a file format that explicitly knows every influence of every vertex, by index and influence name, doesn’t load weights properly. After some trials I hit gotcha #3:

Gotcha 3) The weights only load properly if the skinCluster has the *exact* same influences it was saved with. This really makes no sense, because the file format has the name of every joint in the old skinCluster.
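
Given that, a small guard before import is cheap insurance; here’s a sketch, assuming you have already parsed the influence list out of the XML:

#a small guard sketch: 'xmlJoints' is the influence list parsed from the file
import maya.cmds as cmds
 
def influencesMatch(cluster, xmlJoints):
    current = cmds.skinCluster(cluster, q=1, influence=1)
    return sorted(current) == sorted(xmlJoints)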

So now I had it working, time to wrap it and make it useful.

The Documentation is a Lie.

So, first things first, I did a speed test.

Importing with deformerWeights was about 125 times faster: Gold mine.

But I just couldn’t get some of the flags to work. I thought I was just a moron, until I finally tried the code example in the ADSK Maya documentation, which FAILS. Let’s first look at the -vc flag, which is required to load using ‘bilinear’ or ‘barycentric’ mapping/extrapolation:

cmds.deformerWeights ("testWeights.xml", ex=True, vc=True, deformer="skinCluster1")
 
# Error: Invalid flag 'vc'
# Traceback (most recent call last):
#   File "<maya console>", line 1, in <module>
# TypeError: Invalid flag 'vc' #

Gotcha 4) The Python examples do not work. The -vertexConnections flag doesn’t work, the -attribute flag doesn’t work, so there’s no saving skinCluster metadata like ‘skinningMethod’, etc. Because of that, ‘barycentric’ and other methods that need vertex connection info do not work. The docs show the ‘deformer’ flag taking a list of deformers and writing them all to one file, but this is not true; it takes a single string name of a deformer.

I now know why the UI doesn’t have all these cool options! –they don’t work!

Gotcha 5) It doesn’t take a file path; to save a file to a specific location you specify the filename and the path separately.

cmds.deformerWeights ("testWeights.xml", path='d:\\myWeight\\export\\folder\\', ex=True, deformer="skinCluster1")

Perhaps someone fixed this stuff, documented it, and then reverted the fix, but this has been around since 2011. I tried the above in the latest Maya 2016 service pack, and all my links above are to that version of the documentation.

I wasn’t really intending to write this much, so now that we know this can import weights 125 times faster, we’ll make a tool to utilize it. Stay tuned!

posted by Chris at 2:15 AM  

Friday, September 2, 2016

SIGGRAPH Realtime Live Demo Stream

The stream of our SIGGRAPH Realtime Live demo is up on teh internets. If you haven’t seen the actual live demo, check it out!

It feels amazing to win the award for best Realtime Graphics amongst such industry giants. There are so many companies from so many industries participating now, and the event has grown so much. It feels really humbling to be honored with this for a third year; no pressure!

posted by Chris at 9:26 AM  

Wednesday, August 31, 2016

Calibrating the Alienware 13 R2 OLED Laptop

Last month Dell had the Black Friday in July sale and this beauty was on sale for 500 dollars off, plus 10% if you ordered by phone. I decided it was time to replace my beloved Lenovo x220t.

The Alienware 13 might be ugly and lack a Wacom digitizer, but it does have an nVidia GTX 965M and an OLED screen with 211% sRGB coverage! As the Lenovo Yoga X1 only has integrated graphics, I think the Alienware is the machine for 3D artists.

If you’re a gamer who landed here because you wondered how to get the most out of your amazing display, or wondered why everything is super-red-pink, it’s time to put your big boy pants on! Calibrating the monitor wasn’t so straightforward, but let’s jump into it.

argyll

We are going to use an open source Color Management toolkit called ArgyllCMS [Download it here]. It can use many different hardware calibration devices, I have used it with the xRite Huey and the Spyder5.

One thing that’s important to know is that all the sensors are the same; you only pay for software features. If you don’t own a calibrator, you can buy the cheapest Spyder, because it’s the same sensor and you are using this software, not the OEM software.

displayCAL

Next we’re going to use a GUI front end built to make ArgyllCMS more user friendly. It’s called DisplayCAL, but it requires a lot of libs (numPy, wxWidgets, etc) so I recommend downloading this zero install that has everything.

displaycalui

Be sure to set ‘White level drift compensation’ on. You will need to ignore the RGB adjustment it first asks you to fuss with, because there is no RGB adjustment on your monitor.

When you are through, you will see the following (or something like it!):

spyder
SRGB

Note: DisplayCAL can also output a 3d LUT for madVR, which works with any number of video playback apps. Check here to force your browser to use your color management profile. If it’s useful, I can make another post detailing all the different settings and color-managed apps to make use of your monitor.

I hope you found this useful, it should apply to the Lenovo Yoga X1 and any other OLED laptops in the coming months.

posted by Chris at 9:23 PM  

Tuesday, August 30, 2016

The Jaw

All ‘virtual’ joints that we place are based on virtual anatomical surface landmarks. By ‘virtual’ I mean the polygonal rendermeshes that our ‘puppet’ will drive. As a rigger, you have to be able to look at the surface anatomy (polygonal mesh) and determine where a joint should be placed, but the solution is not always obvious.

“The Dental Distress Syndrome”, Dr. A.C. Fonder (1988)

The jaw is one of these tricky situations! Many people think the jaw rotates from the ‘socket’ or ‘fossa’ that the jaw (or mandible — or mandibular condyle) fits into, but this is not the case.

As riggers, we sometimes need to ignore internal anatomy and focus on surface anatomy, which is our final deliverable. This means thinking about the center of rotation for the entire mass of flesh (or vertices) that we’re moving. For the jaw of a human(oid), this pivot is under the earlobe when viewed from the side.


Rotating directly from the ‘socket’ would result in this incredible ‘derp‘ shown above, but the temporomandibular (TMJ) joint doesn’t work like this when we open our jaw/mouth.

tmj

Instead, the jaw/mandible slides forward as the mouth opens, like you see in this ‘live MRI’ slice above, resulting in this sexay jaw open below. You can see her mandible/jaw slide forward as she opens her mouth.
jaw_open_fluoro

This post is of course ignoring the fact that you can rig the jaw in a way that, through a combination of rotation and translation uses the TMJ as a pivot and rotates around the true jaw mass center of rotation described above. You can do virtually anything, you can drive with your feet; that doesn’t mean it’s a good idea. This post is probably most important for riggers creating characters fast, using tools that generate skeletons from user-placed signposts or locators.

Speaking of signposts and locators, another trick to help with placement is to constrain a toroid or circle to your joint; it can help you visualize the jaw swing quickly:

jaw_swing
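
Here’s a quick sketch of that setup; the joint name ‘jaw’ is hypothetical:

import maya.cmds as cmds
 
#build a circle and pin it to the jaw joint so you can eyeball the swing arc
circle = cmds.circle(name='jawSwing_vis', normal=(1, 0, 0), radius=10)[0]
cmds.parentConstraint('jaw', circle, maintainOffset=False)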

For those interested in further investigation, the paper ‘Rotation and Translation of the Jaw During Speech’ by Jan Edwards and Katherine Harris (1990) can be downloaded [here].

Jaw and Teeth Placement (Modeling)

side

Quite a few people have told me they found this helpful. I would like to add that teeth placement is very important and something you should check before rigging.  For modelers, here are some radiograms showing teeth placement in reference to the facial surface anatomy (click to enlarge):


jaw_placement02   jaw_placement04

The upper lip placement relative to the teeth is very important. Next, making sure the teeth are large enough and wide enough also matters. I find it helpful to compare the incisors to the nostrils, and the pupils to the lip corners/molars. Here are some examples of that (click to enlarge):

face01 face02 face03 face04 face05

posted by Chris at 5:36 PM  

Tuesday, June 7, 2016

Displaying half the data: But twice as fast!

nodeEditor

In Maya 2016, by default, Node Editor cannot display more than 500 connections. Let’s put this in perspective. Let’s say you have two attributes and each drives 20 joints. BOOM, you’re over the limit bro.. what you doin? Why you breakin’ Maya?

The solution, the warning says, is to set an optionVar. Oh yeah, that’s a frickin’ charm; that’s like unlocking all streets on my GPS by loosening a nut on the underside of my car.

I count on this visual editor to display connections; that’s all it needs to do. I have had rigs that make this update once every 10 seconds, but at least I could find what I was looking for and troubleshoot it. I would much rather have a drop in fidelity, like stop drawing nice curves or something, than randomly not displaying data.

How is that even an option? Sometimes my mind wanders:

“Sir, we noticed that on some systems, the software can get slow if we display more than 500 connections.”
“How did you fix it?”
“Well we just decided not to show more than 500 connections.”
“Stellar work Raymond. Can this limit be adjusted in the Settings and Preferences? Maybe the first time Maya opens or the user opens Node Editor it can be set based on his hardware.”
“Well right now it’s an optionVar, I mean that works well enough for me.”

Here’s what you can enter into the MEL evaluation line in the lower left to set your max display depth to ten million:

optionVar -iv "editorConnectionScanLimit" 10000000;
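
Or, if you’d rather stay in Python, the equivalent should be:

import maya.cmds as cmds
cmds.optionVar(intValue=('editorConnectionScanLimit', 10000000))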

I think ADSK probably hates me by now. ¬.¬

posted by Chris at 12:11 AM  

Saturday, May 7, 2016

Simple Runtime Rigging in UE4

This is something that I found when cleaning my old computer out at work this week after an upgrade. This is an example of using a look at controller and a blendspace to drive a fleshy eye deformation in UE4 live at runtime.

This is the technique Jeremy Ernst used on Smaug’s eye in the demo we did with Weta over a year ago. Create a 2d blendspace for the eye poses. Export out left, right, up, down poses and connect them:

eye2

Grab the local transform from the eye, and set two vars with it:

image13

Create a Look At Controller and integrate the blendspace:

image06

posted by Chris at 7:40 AM  

Tuesday, April 19, 2016

ADSK Ships Pose Space Deformer, Ignores Clavicle

Oh the clavicle.. you’re like the bastard redheaded stepchild.. is this guy an animator or what?

Anyway, sixteen years after the publication of Pose Space Deformation: A Unified Approach to Shape Interpolation and Skeleton-Driven Deformation, and fourteen years after Michael Comet released the poseDeformer plugin for Maya, which became the industry standard, Autodesk has now implemented the feature in the software.

In what seems like my early 20’s being acted out before my eyes, ‘Combination Sculpting’, the practice of associating a fixer shape with a certain configuration of other shapes (pioneered by Bay Raitt on Gollum in 2002), is also not a feature that will ship in Maya.

Riggers rejoice! We have voted this up on all the feedback forums for years, and ADSK created the tool with direct involvement from the rigging community; it’s great that it can finally see daylight. Now, all you Extension 2 users, grab it and work out the remaining kinks before everyone gets it in Maya 2017!

And if you’re upgrading to EXT2 today, don’t forget, FBX anim export/bake is fixed!

posted by Chris at 3:41 PM  

Monday, April 4, 2016

Off on a Tangent

Plutarch tells that Alexander the Great visited the Libyan Sibyl in search of the correct constellation of checkboxes and steps to get meshes from Max to Maya.

I have spent many years of my life in studios where characters are modeled in a package other than Maya (often Max) and imported into Maya via FBX. Having worked alongside great character artists like Hanno Hagedorn, Abdenour Bachir, and most recently Kevin Lanning and his team here at Epic, I cannot tell you how many hours of our lives were devoted to trying to get mesh tangents into the final product that resembled what they were in the original sculpt/bake. Not to mention brilliant pipeline programmers like Bogdan Coroi or James Goulding‘s team here at Epic; many hours have been spent trying to solve this issue.

Sometimes it seemed like mystical channeling, whereby some constellation of export or import checkboxes, along with maybe layering an Edit Mesh modifier on top of your character before export to Maya, worked. Other times the solution seemed to be exporting only triangulated meshes to Maya, which required a (fragile) pipeline to keep a quaded mesh for skinning and a triangulated one from Max for export.

Well, as it turns out, Maya has always ignored all mesh tangent data on FBX import.

I hope this post saves you some headache. At Crytek we looked to change the pipeline to store all normal maps in world space. Another, more pragmatic solution, proposed by Jeremy Ernst here at Epic, is to give the engine the static mesh from Max and the skinned mesh from Maya and just transfer the original data. Scott Parrish told me that his team bakes against the skinned FBX as it comes out of UE4, which is another way of solving the issue.

I also understand this is not a simple issue; all DCC packages work differently. Max allows users to turn edges, etc. But it’s good to know that we’re also not crazy.

posted by Chris at 9:46 PM  

Thursday, March 31, 2016

Why Does Everyone Write Their Own FBX Exporter?

fbx_export

When it comes to characters or character animation, any studio doing anything remotely complex has to write their own FBX exporter. But why?

send_to_unreal

It’s true that the FBX export options dialog is so convoluted that even ADSK has created a front end for exporting FBX to Unreal or Unity, but the main reason boils down to one issue: Maya FBX Export does not bake animation properly.

I *WANT* to use the FBX Exporter core (from Python). As a technical artist, I want a C++ plugin efficiently traversing the DAG and baking/exporting animation. I do not want to dupe a skeleton and walk the timeline in Python, calling scene update on the entire DAG every frame. The FBX Exporter doesn’t even have Python exposure, so we have to eval MEL commands; it’s not ideal, to say the least.
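
For reference, this is the kind of MEL-eval wrapping I mean; a minimal sketch, with a hypothetical output path:

import maya.cmds as cmds
import maya.mel as mel
 
#make sure the FBX plugin is loaded, then eval the MEL the exporter expects
cmds.loadPlugin('fbxmaya', quiet=True)
mel.eval('FBXExportBakeComplexAnimation -v true')
mel.eval('FBXExport -f "d:/test/export.fbx" -s')  #-s exports the selection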

Where Have the Attrs Gone?

When exporting characters elsewhere, you want to use FBX to carry important data as well, not just an animated transform. In this test file [fbx_test], you see a sphere skinned to a joint. That joint has an attribute called important_data. In games especially, we can use this scalar to drive a material parameter such as a wrinkle on a face, a blendshape, or to let an animator blend off a cloth solver; anything really. So here’s our important data, driven by a multiply-divide node:

Capture

Export this scene as an FBX, being sure to tell it to bake complex animation:

import maya.mel as mel
mel.eval("FBXExportBakeComplexAnimation -v true")

Now import it back into an empty Maya scene: your important_data is gone!
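
A quick way to check for yourself after re-import (a one-liner sketch; the node and attr names come from the test file):

import maya.cmds as cmds
print cmds.attributeQuery('important_data', node='joint1', exists=True)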

To make matters worse, in the FBX ascii file I see an anim curve for my important data:

;AnimCurveNode::important_data, Model::joint1
C: “OP”,1519473056,205108448, “important_data”

So I am at a loss as to why programs using the FBX SDK do not get this data on import, including Maya.

DISCLAIMER: On multiple occasions I have reached out to ADSK asking them to please address this issue, but it has not happened. From the image above (new game export options), you can see that, on some level, ADSK wants to make this experience better, so please echo me in asking them to fix this issue.

In line with the title of this post, I have written, and would gladly give out, an exporter for UE4, but as it uses the C++ bake/FBX Exporter from Maya, I would prefer to wait until this gets fixed. The ‘Bake Root’ option was actually created by Kevin Vassey at Epic because UE4 imports custom attrs on the root joint, so we just bake that if needed. This is a little puny exporter; internally we use Jeremy’s A.R.T. Toolkit for exporting animation in production.

uExport

 

UPDATE 1: Thanks for the shares and responses. Some of you have told me that you have also talked to ADSK about this, and others say it was fixed at one time but is now broken. ADSK has reached out in the comments and has created a bug number in their tracking system. I will update later on its priority/timeframe if I get more info!

UPDATE 2: I can verify, this is fixed if you grab FBX 2016.1.2 or later from here. It should be fixed in future versions of Maya, and the bug was fixed before I wrote this post, but was not in Maya 2016. In the comments, Chris Dardis said that the fix was available in a non public cut of Maya he was using. As the link is not compiled, I have uploaded a compiled Maya 2016 FBX 2016.1.2 plugin here.

posted by Chris at 10:01 PM  

Monday, July 13, 2015

RigPorn: Square Enix Optical Facial Pipeline

square_mask

posted by Chris at 6:29 PM  

Friday, January 23, 2015

BEWARE: Maya 2015 EXT / 2014 Constraint Incompatibility

incompat

I like to live on the edge when it comes to new versions of Maya, but rarely does it cause a major issue for me. A wise man once said ‘the moment you stop respecting something is when it bites you in the ass’. Well consider me bitten.

Anytime a rig I created with 2015 is loaded in 2014, it is irreparably broken, spewing tons of errors like this one:
// Error: file: C:/Users/chris.evans/Desktop/test.ma line 210: The orientConstraint ‘nurbsSphere1_orientConstraint1’ has no ‘w0’ attribute. //

Diffing two constrained spheres, I noticed that there’s a new flag called ‘DCB’ or Disconnect Behavior. Batch delete that and it seems that 2015 rigs can work in 2014.
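
If you need to salvage a file, something like this untested sketch could batch the delete; it assumes the flag shows up in the .ma ASCII as ‘-dcb 0’, so diff your own files and verify before trusting it:

#untested sketch: strip the disconnect-behavior flag from a Maya ASCII file
#assumes the flag is written as ' -dcb 0'; check a diff of your own files first
def stripDcbFlag(srcPath, dstPath):
    with open(srcPath) as f:
        text = f.read()
    with open(dstPath, 'w') as f:
        f.write(text.replace(' -dcb 0', ''))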

But as usual, be a better person than me and just don’t open a file with a version of Maya your animators aren’t on. 😀

posted by Chris at 10:57 PM  

Monday, January 5, 2015

An Epic Change of Venue

Epic_Logo

In November I started at Epic Games in North Carolina, joining Jeremy Ernst and his team to focus on high quality characters and related technologies. For quite some time I have been enamored with not only the decisions and business practices of Epic, but the engine itself. Whether it’s putting the code on github, or the deep implementations of things like Blueprint; I am constantly amazed that they seem to remain focused, always putting the user first.

Look for some cool things coming down the pipe this year; we have an amazing team of really talented people, and a huge user-base that spans all industries.

Not many people know that I was born in South Carolina, grew up in Florida, and went to school in Georgia. I always felt a bit out of place on the West Coast; it’s great to finally be back in a place that feels like home.

posted by Chris at 9:34 AM  

Thursday, October 23, 2014

Destiny Rigging/Animation Slides and Videos Up

destiny

The Destiny talks at SIGGRAPH were really interesting; the material just went up, and you should definitely check it out:

http://advances.realtimerendering.com/destiny/siggraph2014/animation/index.html

posted by Chris at 11:37 AM  

Friday, October 17, 2014

Embedding Icons and Images In Python with XPM

xpm1

As technically-inclined artists, we often like to create polished UIs, but we have to balance this against not wanting to complicate the user experience (the fewer files the better). I tend not to use too many icons with my tools, and my Maya tools often just steal existing icons from Maya, because I know they will be there.

However, you can embed bitmap icons into your PySide/PyQt apps by using the XPM file format and storing it as a string array. Often I will just place images at the bottom of the file; this keeps icons inside your actual tool, and you don’t need to distribute multiple files or link to external resources.

Here’s an example XPM file:

/* XPM */
static char *heart_xpm[] = {
/* width height num_colors chars_per_pixel */
"7 6 2 1",
/* colors */
"` c None",
". c #e2385a",
/* pixels */
"`..`..`",
".......",
".......",
"`.....`",
"``...``",
"```.```"
};

This is a small heart; you can actually see it: in the header the ‘.’ maps to pink, and you can make out the ‘.’ pattern of a heart in the pixel block. The XPM format is like C; the comments tell you what each block does.
Here’s an example in PySide that generates the above button with heart icon:

import sys
from PySide import QtGui, QtCore
 
def main():
    app = QtGui.QApplication(sys.argv)
    heartXPM = ['7 6 2 1','N c None','. c #e2385a','N..N..N',\
    '.......','.......','N.....N','NN...NN','NNN.NNN']
    w = QtGui.QWidget()
    w.setWindowTitle('XPM Test')
    w.button = QtGui.QPushButton('XPM Bitmaps', w)
    w.button.setIcon(QtGui.QIcon(QtGui.QPixmap(heartXPM)))
    w.button.setIconSize(QtCore.QSize(24,24))
    w.show()
 
    sys.exit(app.exec_())
 
if __name__ == '__main__':
    main()

You need to feed Qt a string array with everything else stripped out. Gimp can save XPM, but you can also load an image into XnView and save as XPM.
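
If you’d rather not strip the C boilerplate by hand, a small helper sketch like this can pull the quoted strings out of a saved .xpm file (a regex convenience, not a full XPM parser):

import re
 
def xpmToStringArray(path):
    #grab every double-quoted string; these are the lines QPixmap wants
    with open(path) as f:
        return re.findall(r'"(.*?)"', f.read())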

Addendum: Robert pointed out in the comments that pyrcc4, a tool that ships with PyQt4, can compile .qrc files into python files that can be imported. I haven’t tried it, but if they can be imported and are .py files, I guess they can be embedded as well. He also mentioned base64-encoding bitmap images into strings and parsing them. Both of these solutions could make your files much larger than XPM though.

posted by Chris at 3:23 PM  

Thursday, October 16, 2014

Tracing and Visualizing Driven-Key Relationships

sdk

Before I get into the colossal mess that is setting driven keys in Maya, let me start off by saying: when I first made an ‘SDK’ in college, back in 1999, never did I think I would still be rigging like this 15 years later. (Nor did I know that I was setting a ‘driven key’, or a ‘DK’, which sounds less glamorous.)

How The Mess is Made

simple

Grab this sample scene [driven_test]. In the file, a single locator with two attributes drives the translation and rotation of two other locators. Can’t get much simpler than that, but look at this spaghetti mess above! This simple driven relationship created 24 curves, 12 blendWeighted nodes, and 18 unitConversion nodes. So let’s start to take apart this mess. When you set a driven-key relationship, Maya uses an input and a curve to drive another attribute:

curves2

When you have multiple attributes driving a node, Maya creates ‘blendWeighted’ nodes; these blend the driven inputs to one scalar output, as you see with the translateX below:

curves

Blending scalars is fairly straightforward, but for rotations, get ready for this craziness: a blendWeighted node cannot take the output of an animCurveUA (angle output); the value must first be converted to radians, then blended. But the final node cannot take radians, so the result must be converted back to an euler angle. This happens for each channel.

craziness

If you think this is ridiculous, welcome to the club. It is a very cool usage of general-purpose nodes in Maya, but you wouldn’t think so much of rigging was built on that, would you? When you create a driven-key, Maya basically makes a bunch of nodes and doesn’t track an actual relationship; because of this, you can’t even reload a relationship into the SDK tool later to edit it! (unless you traverse the spaghetti or take notes or something)

I am in love with the Node Editor, but by default Hypergraph did hide some of the spaghetti; it used to hide unitConversions as ‘auxiliary nodes’:

auxnode

The Node Editor shows unitConversions regardless of whether aux nodes are enabled. I would rather see too much and know what’s going on than have things hidden, but maybe that’s just me. You can actually define which nodes are considered aux nodes and whether ‘editors’ show them, but I am way off on a tangent here.

So just go in there, delete the unitConversion yourself, and wire the euler angle output (which you would think is a float) into the blendWeighted input, which takes floats. Maya won’t let you; it creates the unitConversion for you, because animCurveUA outputs angles, not floats.

This is why our very simple example file generated over 50 nodes. On Ryse, Marius’ face has about 73,000 nodes of driven-key spaghetti. (47,382 curves, 1,420 blendWeighted, 24,074 unitConversion)

Finding and traversing

So how do we find this stuff and maybe query and deal with it? Does Maya have some built-in way of querying a driven relationship? No, not that I have ever found. You must become one with the spaghetti! setDrivenKeyframe has a query mode, but that only tells you what ‘driver’ or ‘driven’ nodes/attrs are in the SDK window if it’s open; it doesn’t query driver or driven relationships!

We know that these intermediate nodes that store the keys are curves, but they’re a special kind of curve, one that doesn’t take time as an input. Here’s how to search for those:

print len(cmds.ls(type=("animCurveUL","animCurveUU","animCurveUA","animCurveUT")))

So what are these nodes? I mentioned above that animCurveUA outputs an angle:

  • animCurveUU – curve that takes a double precision float and has a double output
  • animCurveUA – takes a double and outputs an angle
  • animCurveUL – takes a double and outputs a distance (length)
  • animCurveUT – takes a double and outputs a time
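
Once you have those curves, finding the driver side is just their upstream connection; here’s a minimal sketch:

import maya.cmds as cmds
 
#print the plug driving each driven-key curve in the scene
for curve in cmds.ls(type=("animCurveUL","animCurveUU","animCurveUA","animCurveUT")):
    driver = cmds.listConnections(curve + '.input', plugs=1, destination=0)
    if driver:
        print curve, 'is driven by', driver[0]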

When working with lots of driven-key relationships you frequently want to know what is driving what, and this can be very difficult because of all the intermediate node-spaghetti. Here’s what you often want to know:

  • What attribute is driving what – for instance, select all nodes an attr drives, so that you can add them back to the SDK tool. ugh.
  • What is being driven by an attribute

I wrote a small script/hack to query these relationships; you can grab it here [drivenKeyVisualizer]. Seriously, this is just a UI front end to a 100-line script snippet, so don’t let the awesomeness of QT fool you.

dkv1

The way I decided to go about it was:

  1. Find the driven-key curves
  2. Create a tiny curve class to store little ‘sdk’ objects
  3. List incoming connections (listConnections) to get the driving attr
  4. Get the future DG history as a list and reverse it (listHistory(future=1).reverse())
  5. Walk the reversed history until I hit a unitConversion or blendWeighted node
  6. Get its outgoing connection (listConnections) to find the plug that it’s driving
  7. Store all this as my sdk objects
  8. Loop across all my objects and generate the QTreeWidget

Here’s how I traversed that future history (in the file above):

 #search down the dag for all future nodes
 futureNodes = [node for node in cmds.listHistory(c, future=1, ac=1)]
 #reverse the list order so that you get farthest first
 futureNodes.reverse()
 drivenAttr = None
 
 #walk the list until you find either a unitConversion, blendWeighted, or nothing
 for node in futureNodes:
     if cmds.nodeType(node) == 'unitConversion':
         try:
             drivenAttr = cmds.listConnections(node + '.output', p=1)[0]
             break
         except:
             cmds.warning('unitConversion node with no output: ' + node)
             break
     elif cmds.nodeType(node) == 'blendWeighted':
         try:
             drivenAttr = cmds.listConnections(node + '.output', p=1)[0]
             break
         except:
             cmds.warning('blendWeighted node with no output: ' + node)
             break
 if not drivenAttr:
     drivenAttr = cmds.listConnections(c + '.output', p=1)[0]
 if drivenAttr:
     dk.drivenAttr = drivenAttr
 else:
     cmds.warning('No driven attr found for ' + c)

This of course won’t work if you have anything like ‘contextual rigging’ that checks the value of an attr and then uses it to blend between two banks of driven-keys, but because this is all general DG nodes, as soon as you enter nodes by hand, it’s no longer really a vanilla driven-key relationship that’s been set.

If you have a better idea, let me know; the above is just an approach that has been robust for me, but again, I mainly drive transforms.

What can you do?

Prune Unused Pasta

pruned

Pruned version of the driven_test.ma DAG

By definition, when rigging something with many driven transforms like a face, you are creating driven-key relationships based on what you MIGHT need. This goes for making the initial relationship, and for later when you may want to add detail. WILL I NEED to maybe translate an eyelid xform to get a driven pose I want.. MAYBE.. so you find yourself keying rot/trans on *EVERYTHING*. That’s what I did in my example, and the way the Maya SDK tool works, you can’t really choose which attrs per driven node you want to drive, so it’s best to ‘go hunting with a shotgun’ as we say. (shoot all the trees and hope something falls out)

Ok so let’s write some code to identify and prune driven relationships we didn’t use.

CAUTION: I would only ever do this in a ‘publish’ step, where you take your final rig and delete crap to make it faster (or break it) for animators. Also, I have never used this exact code below in production, I just created it while making this blog post as an example. I have run it on some of our production rigs and haven’t noticed anything terrible, but I also haven’t really looked. 😀

def deleteUnusedDriverCurves():
    for driverCurve in cmds.ls(type=("animCurveUL","animCurveUU","animCurveUA","animCurveUT")):
        #delete unused driven curves
        if not [item for item in cmds.keyframe(driverCurve, valueChange=1, q=1) if abs(item) > 0.0]:
            cmds.delete(driverCurve)
 
deleteUnusedDriverCurves()

This deletes any curves that do not have a change in value. You could have multiple keys, but if there’s no change in value, let’s get rid of it. Now that we have deleted some curves, we have blendWeighted nodes that aren’t blending anything, and unitConversions that are worthless creatures. Let’s take care of them:

def deleteUnusedBlendNodes():
    #rewire blend nodes that aren't blending anything
    for blendNode in cmds.ls(type='blendWeighted'):
        if len(cmds.listConnections(blendNode, destination=0)) == 1:
            curveOutput = None
            drivenInput = None
 
            #leap over the unitConversion if it exists and find the driving curve
            curveOutput = cmds.listConnections(blendNode, destination=0, plugs=1, skipConversionNodes=1)[0]
            #leap over the downstream unitConversion if it exists
            conns = cmds.listConnections(blendNode, source=0, plugs=1, skipConversionNodes=1)
            for conn in conns:
                if cmds.nodeType(conn.split('.')[0]) == 'hyperLayout': conns.pop(conns.index(conn))
            if len(conns) == 1:
                drivenInput = conns[0]
            else:
                cmds.warning('blendWeighted had more than one output? Node: ' + blendNode)
 
            #connect the values, delete the blendWeighted
            cmds.connectAttr(curveOutput, drivenInput, force=1)
            cmds.delete(blendNode)
            print 'Removed', blendNode, 'and wired', curveOutput, 'to', drivenInput
 
deleteUnusedBlendNodes()

We find all blendWeighted nodes with only one input, then traverse out from them and directly connect whatever the node was blending, then delete it. This is a bit tricky, and I still think I may have missed something because I wrote this example at 2am, but I ran it on some rigs and haven’t seen an issue.

Here are the results running this code on an example rig:

pruned_graph

Create a Tool To Track / Mirror / Select Driven Relationships

This is a prototype I was working on at home but shelved; I would like to pick it up again when I have some time, or would be open to tossing it on github if people would like to help. It’s not rocket science, it’s basically what Maya should have by default: you want to track the relationships you make, select all nodes driven by an attr, and also mirror their transforms across an axis (more geared toward driven transforms).

sdkWrangler

Write Your Own Driver Node

Many places create their own ‘driven node’ that just stores driven relationships. Judd Simantov showed a Maya node that was used on Uncharted 2 to store all facial poses/driven relationships:

facePoseNode

The benefits of making your own node are not just in DAG readability; check out the time spent evaluating all these unitConversion and blendWeighted nodes in a complex facial rig (using the new Maya 2015 EXT 1 Profiler tool) –that’s over 760ms! (click to enlarge)

profiler

It’s not enough to make a node like this, though; you also need a front end to manage it. Here’s the front end for this node:

poseFaceUI

Give Autodesk Feedback!

feedback

As the PSD request is ‘under review’, maybe when they make the driver, they can make a more general framework to drive things in Maya.

Conclusion

As you can see, there are many ways to try to manage and contain the mess generated by creating driven-key relationships. I would like to update this based on feedback; it’s definitely not an overview, and speaks more to people who have been creating driven-key relationships for quite some time. Also, if you would find a tool like my Maya SDK replacement useful, let me know, especially if you would like to help iron it out and/or test it.

posted by Chris at 3:16 PM  

Wednesday, October 15, 2014

Reminder: Maya saves ScriptEditor tabs on crash

backup

This is just a reminder that for some time now Maya has been saving all ScriptEditor tabs on crash. I frequently bump into people who don’t know or don’t remember this. If your Maya says that it’s attempting to save in your /Temp folder, it’s also saving all your ScriptEditor tabs.

posted by Chris at 11:56 AM  

Wednesday, September 24, 2014

Maya DAG nodes in QT tools (don’t use string paths)

fragile

 

String paths to Maya objects are _fragile_. This data is stale as soon as you receive it; long paths are better than short, and taking namespaces into account is better still. BUT when it comes down to it, the data is just old as soon as you get it, and there is a better way.

A Full Path that is Never Stale

If you store a node in a Maya Python API MDagPath object, you can ask at any time and get its full path, because it’s basically a pointer to the object in memory. Here’s an example:

import maya.cmds as cmds
import maya.OpenMaya as om
 
#create and populate an MSelectionList
sel = om.MSelectionList()
sel.add(cmds.ls(sl=1)[0])
 
#create and fill an MDagPath
mdag = om.MDagPath()
sel.getDagPath(0, mdag)
 
#At any time request the current fullpath to the node
print mdag.fullPathName()

CHALLENGE: Can you write a renaming tool that doesn’t use string path node representation? That works with all DG nodes? Without using PyMel? 😉
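
One direction you might take it (a sketch of the core trick, not a full tool): hang on to an MObject and rename through the function set, with no string paths stored anywhere. The new name here is hypothetical:

import maya.OpenMaya as om
 
#rename the first selected node without ever storing its string path
sel = om.MSelectionList()
om.MGlobal.getActiveSelectionList(sel)
obj = om.MObject()
sel.getDependNode(0, obj)
om.MFnDependencyNode(obj).setName('renamedNode')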

Embedding Maya Python Objects in PyQT (UserRole)

So.. if we can store a pointer to an object, how do we track that with QT UIs, pass it around, etc? It can be really difficult to be responsible here: you have all these lists and node representations, and in the past I would try to sync lists of python MDagPath objects, or make a dictionary that mirrors the listView… but there is a way to store arbitrary python objects on list and tree widgetItems!

In this code snippet, I traverse a selection and add all selected nodes to a list; I add ‘UserRole’ data to each and set that data to be an MDagPath:

    #'nodes' is a list of node names; 'listWid' is the parent QListWidget
    listWids = []
    for item in nodes:
        sel = om.MSelectionList()
        sel.add(item)
        mdag = om.MDagPath()
        sel.getDagPath(0, mdag)
        name = mdag.fullPathName()
        if not self.longNamesCHK.isChecked():
            name = name.split('|')[-1]
        wid = QtGui.QListWidgetItem(listWid)
        wid.setText(name)
        wid.setData(QtCore.Qt.UserRole, mdag)
        listWids.append(wid)

Now, later when we want to get the full path name of this widgetItem, we just ask it for this data. That returns the MDagPath object, and we can ask the object for the current full path to the node:

print self.drivenNodeLST.currentItem().data(QtCore.Qt.UserRole).fullPathName()

So this is a good way to have arbitrary data that travels with the QT node representation, which is usually some kind of widget.

posted by Chris at 12:40 PM  

Thursday, September 18, 2014

Ryse SIGGRAPH 2014 Reel

I mentioned that we won “best Realtime Graphics” at this year’s SIGGRAPH conference, but I never linked the video:
https://www.youtube.com/watch?v=mLvUQNjgY7E

posted by Chris at 4:24 PM  

Tuesday, August 26, 2014

Multi-Resolution Facial Rigging

At SIGGRAPH we discussed a bit of our facial pipeline that we haven’t talked about before, namely facial LODs and multi-platform facial rigging.

I would like to start by saying that we spent a _LOT_ of time thinking about facial levels of detail on Ryse, and put a lot of effort into the area. I know this is a long post, but it’s an important one.

run_on_brian

Lowest Common Denominator

As the ‘next generation’ seems to be largely defined by multi-platform titles, it seems valuable to focus on ways to increase fidelity on next-generation hardware while still being able to target older hardware specs. That said, I have yet to see a pipeline that does this. Most next-gen games have skeletons and animations limited by the lowest common denominator of the last generation, often the PlayStation 3.

When you wonder why your awesome next gen game doesn’t seem to have character models and animation like next-gen only titles, this is why.

It’s very easy to increase texture resolution by having a pipeline where you author high and bake to lower targets. It’s more complicated to author meshes high and publish to lower targets; we did this on Crysis 1 and 2, where high-end PC saw higher mesh resolution characters than Xbox 360. I would say it’s hardest of all to make rigs, deformers, and animations for a high-spec hardware target and create a process to publish lower-fidelity versions. No one wants to have different character skeletons on each hardware platform.

facial_complexity

You Deserve an Explanation

When we released the specs of our faces, people understandably were a bit taken aback. Why on earth would you need 250 blendshapes if you have 260 joints? The image above is actually a slide from our asset creation SIGGRAPH course that resonated very well with the audience.

Let’s take a look at some goals:

  1. Cut-scene fidelity in gameplay at any time- no cut-scene rigs
  2. Up to 70 characters on screen
  3. Able to run on multiple hardware specs

The only way to achieve the first two is through a very aggressive and granular level-of-detail (LOD) scheme. Once that LOD system is in place, the third item comes for free, as it did on our previous titles. However, while we had LODed meshes and materials before, we had never LODed rigs.

On a feature film, we wouldn’t use joints, we would have a largely blendshape-only face.

But this doesn’t LOD well, we need to be able to strip out facial complexity in the distance and on other platforms.

Facial Level of Detail

So to achieve these goals, we must aggressively LOD our character faces.

Let’s generate some new goals:

  • Improve LOD system to allow the swapping or culling of skinned meshes per-mesh, each at hand-tailored distances per-character instance
  • Not only swap meshes, but skinning algorithms, materials, cull blendshapes, etc..
  • One skeleton – all levels of detail stored in one nested hierarchy, disable/reveal joints at different LOD levels, as I mention above, no one wants multiple skeletons
  • One animation set – drives all layers of detail because it’s one hierarchy, only the enabled joints receive animation
  • All facial animations shareable between characters
  • Faces snapped onto bodies at runtime – “Cry parent constraint” of sorts snaps head, neck, spine4, clavs, and upper arms of facial rig to body, allowing dynamic LODing of face irrespective of body.

LOD_hierarchy

One Hierarchy to Rule them All

Before going into the meshes, skinning algorithms, culling, etc., it’s important to understand the hierarchy of the face. At any given mesh LOD level, there are many joints that are not skinned. Above you see three layers of joints: 9 at LOD0, 3 at LOD1, and 1 at LOD2.

To use a single hierarchy, but have it drive meshes at different levels, you need to accomplish the following:

  • Make sure you have three layers that can drive different facial LODs, we had something like 260/70/15 on Ryse.
  • Each layer must be driven, and able to deform that LOD alone. Meaning when creating rig logic, you must start from the highest LOD and move down the chain. The LOD0 joints above would only be responsible for skinning the details of the face at LOD0, their gross movement comes from their parent, LOD1.

Here you can see the Marius example video from our slides. Notice the ORANGE joints are responsible for gross movement and the YELLOW or GREEN leaf joints just add detail.

jaw_drop_skel

 

Why blendshapes? Isn’t 260 joints enough?

The facial hierarchy and rig are consistent between all characters. The rig logic that drives those joints is changed and tweaked, the skinning is tweaked, but no two faces are identical. The blendshapes serve two main purposes:

1) Get the joint rig back on scan. Whatever the delta from the joint rig to the scan data associated with that solved pose from the headcam data, bridge it. This means the fat around Nero’s neck, the bags under his eyes, his eyebrow region, etc.

2) Add volume where it’s lost due to joint skinning. Areas like the lips, the cheeks, rig functions like lips together, sticky lips, etc, require blendshapes.

nero_corectives

Look at the image above: there just aren’t enough joints in the brow to catch that micro-expression on Nero’s face. It comes through with the blendshape, and he goes from looking like you kicked his dog to his accusatory surprise when he figures out that you are Damocles.

A Look Under the Hood: Ryse Facial LODing

Thanks to the hard work of graphics engineer Jerome Charles, we were able to granularly LOD our faces. These values are from your buddy Vitallion, who was a hero and could be a bit less aggressive; many of the barbarians you fight en masse blew through all their blendshapes at 2m, not 4m.

Assets / Technologies (LOD) by distance:

  • 0-4m: CPU skinning, 8 inf, 260 joints, 230 blendshapes, tangent update, 5k tris across multiple meshes
  • 4-7m: CPU skinning, 8 inf, 260 joints, 3-5k tris across multiple meshes with small face parts culled
  • 7-10m: GPU skinning, 4 inf, 70 joints, 2k mesh with integrated eyes
  • 10m+: GPU skinning, 4 inf, <10 joints, <1k mesh

 

Here’s a different table showing the face mesh parts that we culled and when:

Face parts culled, by distance:

  • 4m: Eyebrow meshes replaced, baked into facial texture
  • 3m: Eyelash geometry culled
  • 3m: Eye AO ‘overlay’ layer culled
  • 4m: Eyeballs removed, replaced with baked-in eyes in head mesh
  • 2m: Eye ‘water’ meniscus culled
  • 3m: Eye tearduct culled
  • 3m: Teeth swapped for built-in mesh
  • 3m: Tongue swapped for built-in mesh

Why isn’t this standard?

Because it’s very difficult and very complicated; there aren’t many people who can pull something like this off. On Ryse we partnered with my friend Vlad at 3Lateral; after 4 months working on the initial Marius facial prototype, he and his team were able to deliver 23 more facial rigs at the same fidelity in just under three months!

But also, there’s the whole discussion about whether the time and effort spent on that last 5% really pays off in the end. Why not just use PS3 facial rigs on all platforms and spend a little more on marketing? It’s happening! And those guys probably aren’t going bankrupt any time soon..  ¬.¬

I am insanely proud of what the team accomplished on Ryse. Facial rigging is nothing without a great bunch of artists, programmers, animators, etc. Here’s some good moments where the performances really come through, these are all the in-game meshes and rigs:

DISCLAIMER: All of the info above and more is publicly available in our SIGGRAPH 2014 course notes.

posted by Chris at 4:40 AM  

Monday, August 25, 2014

Maya 2015: Poly Combine Skinned Meshes?

At Crytek, we have a plugin to preserve skinning when hacking up and uniting meshes, based on this old post here. It’s released in the Tools folder of CryENGINE if you grabbed the engine on Steam. Imagine my surprise when I saw this option in Maya 2015:

polyUniteSkinning

It fires off a new polyUnite command called polyUniteSkinned [Maya 2015 Docs], which can merge skinned meshes! Has anyone gotten this to work? It doesn’t seem to work properly through the UI; it passes ‘name’ as a flag and fails, as they removed that flag. (seriously) But it seems to work in simple situations, as shown here. It didn’t work attaching a face to a body, but at least it shows ADSK is moving in the right direction!

lice_gb_weights

posted by Chris at 12:42 AM  

Thursday, August 21, 2014

Adding Sublime ‘Build’ Support for KL

fabric_build_kl

I have been dabbling with KL, and the Fabric guys have some great introduction videos [here]. They have worked hard on some Sublime integration/highlighting, which you can find on GitHub [here].

While following these intro tutorials, instead of popping back and forth to your command line, you can compile/run your code in Sublime and see the results. To do this, go to Preferences > Browse Packages…, then open that Sublime-KL folder, and inside create a new file called ‘KL.sublime-build‘, the contents of which should be:

{
"cmd": ["kl", "$file"]
}

Then just select KL from the ‘Build System’ menu as shown above; now press CTRL+B and it will show results inside the Sublime console!

posted by Chris at 2:13 AM  

Sunday, August 10, 2014

RYSE AT SIGGRAPH 2014

ryse_sigg

Crytek has won the SIGGRAPH 2014 award for ‘Best Real-Time Graphics’ with Ryse: Son of Rome, check it out in the Electronic Theater or Computer Animation Festival this week at SIGGRAPH.

We are also giving multiple talks:

I will be speaking in the asset production talk, as will Sascha Herfort and Lars Martinsson. It’s also the first course we have done at Crytek where the entire course is devoted to one of our projects, and we have 50+ pages of course notes going into the ACM digital library.

posted by Chris at 12:54 AM  

Tuesday, July 15, 2014

RigPorn: The Last of Us

I realize most of you have seen this, but for those of you who haven’t, Judd walks people through TLOU rigs with a focus on facial as well. Really great stuff.

posted by Chris at 3:15 PM  

Thursday, July 10, 2014

RigPorn: Call of Duty: Advanced Warfare

[Click to enlarge images]

(All images taken from recent CoD marketing materials)

codaw11

Here you can see the first-person hands rig, complete with camera frustum tools, and animation controls.

codaw2

Close-up of the generic male rig, no face rig loaded at the moment, but still interesting.

codaw3

Here’s a great shot of their first-person-hands picker. I always love seeing how animators want to work. I have really never worked on a team that wanted a picker, much less something like this, but it’s great to see.

codaw4

Another picker, maybe the full body one, or cinematics only.

Shooting at Giant’s new studio in Manhattan Beach?

codaw5

Surprisingly, this looks like it is being shot at Giant’s Manhattan Beach facility; it also looks like Giant hardware and marker layout. Feel free to correct me if I am wrong. If the unique poured-concrete construction doesn’t give it away, they also released images with giant Avatar banners in the background.


posted by Chris at 1:34 AM  