Stumbling Toward 'Awesomeness'

A Technical Art Blog

Tuesday, August 26, 2014

Multi-Resolution Facial Rigging

At SIGGRAPH we discussed parts of our facial pipeline that we haven’t talked about before, namely facial LODs and multi-platform facial rigging.

I would like to start by saying that we spent a _LOT_ of time thinking about facial levels of detail on Ryse, and put a lot of effort into the area. I know this is a long post, but it’s an important one.

run_on_brian

Lowest Common Denominator

As the ‘next generation’ seems to be largely defined by multi-platform titles, it seems valuable to focus on ways to increase fidelity on next-generation hardware while still being able to target older hardware specs. That said, I have yet to see a pipeline that does this. Most next-gen games have skeletons and animations limited by the lowest common denominator of the last generation, often the PlayStation 3.

If you wonder why your awesome next-gen game doesn’t seem to have character models and animation like next-gen-only titles, this is why.

It’s very easy to increase texture resolution by having a pipeline where you author high and bake to lower targets. It’s more complicated to author meshes high and publish to lower targets; we did this on Crysis 1 and 2, where high-end PC saw higher mesh-resolution characters than Xbox 360. Hardest of all is making rigs, deformers, and animations for a high-spec hardware target and creating a process to publish lower-fidelity versions. No one wants to have different character skeletons on each hardware platform.

facial_complexity

You Deserve an Explanation

When we released the specs of our faces, people were understandably a bit taken aback: why on earth would you need 250 blendshapes if you have 260 joints? The image above is actually a slide from our SIGGRAPH asset creation course that resonated very well with the audience.

Let’s take a look at some goals:

  1. Cut-scene fidelity in gameplay at any time – no cut-scene rigs
  2. Up to 70 characters on screen
  3. Able to run on multiple hardware specs

The only way to achieve the first two is through a very aggressive and granular level of detail (LOD) scheme. Once that LOD system is in place, the third item comes for free, as it did on our previous titles. However, while we had LODed meshes and materials before, we had never LODed rigs.

On a feature film, we wouldn’t use joints, we would have a largely blendshape-only face.

But this doesn’t LOD well; we need to be able to strip out facial complexity in the distance and on other platforms.

Facial Level of Detail

So to achieve these goals, we must aggressively LOD our character faces.

Let’s generate some new goals:

  • Improve the LOD system to allow the swapping or culling of skinned meshes per mesh, each at hand-tailored distances per character instance
  • Not only swap meshes, but also skinning algorithms and materials, cull blendshapes, etc.
  • One skeleton – all levels of detail stored in one nested hierarchy, disabling/revealing joints at different LOD levels; as I mention above, no one wants multiple skeletons
  • One animation set – drives all levels of detail because it’s one hierarchy; only the enabled joints receive animation
  • All facial animations shareable between characters
  • Faces snapped onto bodies at runtime – a ‘Cry parent constraint’ of sorts snaps the head, neck, spine4, clavs, and upper arms of the facial rig to the body, allowing dynamic LODing of the face irrespective of the body

LOD_hierarchy

One Hierarchy to Rule them All

Before going into the meshes, skinning algorithms, culling, etc., it’s important to understand the hierarchy of the face. At any given mesh LOD level, there are many joints that are not skinned. Above you see three layers of joints: 9 at LOD0, 3 at LOD1, and 1 at LOD2.

To use a single hierarchy, but have it drive meshes at different levels, you need to accomplish the following:

  • Make sure you have three layers that can drive different facial LODs; we had something like 260/70/15 on Ryse.
  • Each layer must be driven, and able to deform that LOD alone. This means that when creating rig logic, you must start from the highest LOD and move down the chain. The LOD0 joints above are only responsible for skinning the details of the face at LOD0; their gross movement comes from their parent layer, LOD1. (A small sketch of tagging joints per LOD follows below.)
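
Here’s a minimal sketch of how one nested hierarchy could serve every LOD in Maya. The ‘lodLevel’ attribute and the filtering logic are hypothetical illustrations of the scheme described above, not our actual Ryse setup:

import maya.cmds as cmds

def tag_joint_lod(joint, lod):
    """Tag a joint with the coarsest LOD it participates in (hypothetical attr)."""
    if not cmds.attributeQuery('lodLevel', node=joint, exists=True):
        cmds.addAttr(joint, longName='lodLevel', attributeType='short')
    cmds.setAttr(joint + '.lodLevel', lod)

def joints_for_lod(root, lod):
    """Return the joints under root that are enabled at the given LOD.
    A joint tagged 2 (the gross-movement layer) is active at LODs 0-2,
    while a joint tagged 0 is detail-only and gets culled first."""
    joints = cmds.listRelatives(root, allDescendents=True, type='joint') or []
    return [j for j in joints
            if cmds.attributeQuery('lodLevel', node=j, exists=True)
            and cmds.getAttr(j + '.lodLevel') >= lod]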

Here you can see the Marius example video from our slides. Notice the ORANGE joints are responsible for gross movement and the YELLOW or GREEN leaf joints just add detail.

jaw_drop_skel

 

Why blendshapes? Aren’t 260 joints enough?

The facial hierarchy and rig are consistent between all characters. The rig logic that drives those joints is changed and tweaked, the skinning is tweaked, but no two faces are identical. The blendshapes serve two main purposes:

1) Get the joint rig back onto the scan. Whatever the delta is between the joint rig and the scan data associated with a solved pose from the headcam data, the blendshape bridges it. This means the fat around Nero’s neck, the bags under his eyes, his eyebrow region, etc.

2) Add volume where it’s lost due to joint skinning. Areas like the lips and the cheeks, and rig functions like lips-together and sticky lips, require blendshapes.

nero_corectives

Look at the image above: there just aren’t enough joints in the brow to catch that micro-expression on Nero’s face. It comes through with the blendshape, and he goes from looking like you kicked his dog to his accusatory surprise when he figures out that you are Damocles.

A Look Under the Hood: Ryse Facial LODing

Thanks to the hard work of graphics engineer Jerome Charles we were able to granularly LOD our faces. These values are from your buddy Vitallion; as a hero character he could be a bit less aggressive. Many of the barbarians you fight en masse blew through all their blendshapes at 2m, not 4m.

Distance  Assets / Technologies (LOD)
0-4m      CPU skinning, 8 inf, 260 joints, 230 blendshapes, tangent update, 5k tris across multiple meshes
4-7m      CPU skinning, 8 inf, 260 joints, 3-5k tris across multiple meshes with small face parts culled
7-10m     GPU skinning, 4 inf, 70 joints, 2k mesh with integrated eyes
10m+      GPU skinning, 4 inf, <10 joints, <1k mesh

 

Here’s a different table showing the face mesh parts that we culled and when:

Distance  Face parts
2m        Eye ‘water’ meniscus culled
3m        Eyelash geometry culled
3m        Eye AO ‘overlay’ layer culled
3m        Eye tearduct culled
3m        Teeth swapped for built-in mesh
3m        Tongue swapped for built-in mesh
4m        Eyebrow meshes replaced, baked into facial texture
4m        Eyeballs removed, replaced with baked-in eyes in head mesh

Why isn’t this standard?

Because it’s very difficult and very complicated, and there aren’t many people out there who can pull something like this off. On Ryse we partnered with my friend Vlad at 3Lateral; after 4 months working on the initial Marius facial prototype, he and his team were able to deliver 23 more facial rigs at the same fidelity in just under three months!

But also, there’s the whole discussion about whether the time and effort spent on that last 5% really pays off in the end. Why not just use PS3 facial rigs on all platforms and spend a little more on marketing? It’s happening! And those guys probably aren’t going bankrupt any time soon..  ¬.¬

I am insanely proud of what the team accomplished on Ryse. Facial rigging is nothing without a great bunch of artists, programmers, animators, etc. Here are some good moments where the performances really come through; these are all the in-game meshes and rigs:

DISCLAIMER: All of the info above and more is publicly available in our SIGGRAPH 2014 course notes.

posted by admin at 4:40 AM  

Monday, August 25, 2014

Maya 2015: Poly Combine Skinned Meshes?

At Crytek, we have a plugin to preserve skinning when hacking up and uniting meshes, based on this old post here. It’s released in the Tools folder of CryENGINE if you grabbed the engine on Steam. Imagine my surprise when I saw this option in Maya 2015:

polyUniteSkinning

It fires off a new polyUnite command called polyUniteSkinned [Maya 2015 Docs], which can merge skinned meshes. Has anyone gotten this to work? It doesn’t seem to work properly through the UI: it passes ‘name’ as a flag and fails because they removed that flag. (Seriously.) But it seems to work in simple situations, as shown here. It didn’t work attaching a face to a body, but at least it shows ADSK is moving in the right direction!

lice_gb_weights

posted by admin at 12:42 AM  

Thursday, August 21, 2014

Adding Sublime ‘Build’ Support for KL

fabric_build_kl

I have been dabbling with KL, and the Fabric guys have some great introduction videos [here]. They have worked hard on some Sublime integration/highlighting, which you can find on GitHub [here].

While following these intro tutorials, instead of popping back and forth to your command line, you can actually compile/run your code in Sublime and see the results. To do this, go to Preferences > Browse Packages…, open the Sublime-KL folder, and inside create a new file called ‘KL.sublime-build’, the contents of which should be:

{
"cmd": ["kl", "$file"]
}

Then just select KL from the ‘Build System’ menu as shown above. Now pressing CTRL+B will show the results inside the Sublime console!

posted by admin at 2:13 AM  

Sunday, August 10, 2014

RYSE AT SIGGRAPH 2014

ryse_sigg

Crytek has won the SIGGRAPH 2014 award for ‘Best Real-Time Graphics’ with Ryse: Son of Rome. Check it out in the Electronic Theater or Computer Animation Festival this week at SIGGRAPH.

We are also giving multiple talks:

I will be speaking in the asset production talk, along with Sascha Herfort and Lars Martinsson. It’s also the first course we have done at Crytek where the entire course is devoted to one of our projects, and we have 50+ pages of course notes going into the ACM digital library.

posted by admin at 12:54 AM  

Tuesday, July 15, 2014

RigPorn: The Last of Us

I realize most of you have seen this, but for those of you who haven’t, Judd walks people through TLOU rigs with a focus on facial as well. Really great stuff.

posted by admin at 3:15 PM  

Thursday, July 10, 2014

RigPorn: Call of Duty: Advanced Warfare

[Click to enlarge images]

(All images taken from recent CoD marketing materials)

codaw11

Here you can see the first-person hands rig, complete with camera frustum tools, and animation controls.

codaw2

Close-up of the generic male rig, no face rig loaded at the moment, but still interesting.

codaw3

Here’s a great shot of their first-person-hands picker. I always love seeing how animators want to work. I have really never worked on a team that wanted a picker, much less something like this, but it’s great to see.

codaw4

Another picker, maybe the full-body one, or one for cinematics only.

Shooting at Giant’s new studio in Manhattan Beach?

codaw5

Surprisingly, this looks like it is being shot at Giant’s Manhattan Beach facility; it also looks like Giant hardware and marker layout – feel free to correct me if I am wrong. If the unique poured-concrete construction doesn’t give it away, they also released images with giant Avatar banners in the background.

datei_1399619105

posted by admin at 1:34 AM  

Monday, July 7, 2014

Geodesic Voxel Binding in Maya 2015

If you’re like me, your ears perk up at any technology promising a better initial skin bind. So I decided to take a look at the new geodesic voxel binding in Maya 2015. I couldn’t find much information about it online, so I decided to do the usual and write the post I would have wanted to find when I googled. I hope it’s useful.

Background

This new way of skin binding was presented by Autodesk at SIGGRAPH 2013.

nanosuit

Here’s a link to the SIGGRAPH 2013 white paper, Geodesic Voxel Binding for Production Character Meshes; it’s definitely worth checking out. I do like how Autodesk is now using the word ‘Production’ a lot more; it seems they are no longer using simple test cases to test pipelines and workflows. Above, they used our Nanosuit from the Crysis franchise. Here’s the full video that accompanies the talk: [LINK]

How It Works

voxelinfo

The basic idea is that it voxelizes characters into three types of voxels (skeleton, interior, and boundary), and in this way tries to eliminate crosstalk. At ILM we had a binding solution in Zeno that used mesh normals, and it eliminated crosstalk between manifold parts like fingers; but most of this paper focuses on skinning non-manifold meshes: meshes with intersecting parts, open holes, etc.

In Practice

Here’s the hand of the Marius bust we send out for rigging tests. Notice that when skinned with Closest in Hierarchy, there is some significant crosstalk:

lice_ch

Here’s an initial finger bind with the new algorithm. There’s still some crosstalk at 1024 voxel resolution (the highest possible), but it’s much better:

lice_gb
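
As an aside, if you would rather kick this bind off from script than through the UI, here’s a minimal sketch. I’m assuming bindMethod=3 maps to the geodesic voxel method, as the Maya 2015 skinCluster docs describe, so double-check against your install:

import maya.cmds as cmds

#joint and mesh names here are hypothetical
sc = cmds.skinCluster(
    'joint1', 'pSphere1',
    bindMethod=3,            #3 = geodesic voxel, per the Maya 2015 docs
    maximumInfluences=4,     #cap influences, game-engine friendly
    obeyMaxInfluences=True,
)[0]
print sc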

As someone who is very nitpicky about my skinning, *any* crosstalk at all is unacceptable, and it takes me about the same amount of time to clean the tiniest values as it does these larger ones. Here’s a closer look at some of the crosstalk from the ‘gb’ binding:

lice_gb_weights_trim

Crosstalk isn’t just bad for deformation; these tiny little values are inefficient and sloppy, especially if you are sending the mesh to a game engine.
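
You don’t have to chase every one of those tiny values by hand, though; a quick prune pass like this sketch (the threshold is just an example value, and the names are hypothetical) zeroes out the noise before export:

import maya.cmds as cmds

def prune_tiny_weights(mesh, skin_cluster, threshold=0.005):
    """Zero out skin weights below threshold, then renormalize."""
    cmds.skinPercent(skin_cluster, mesh + '.vtx[*]', pruneWeights=threshold)
    cmds.skinCluster(skin_cluster, edit=True, forceNormalizeWeights=True)

prune_tiny_weights('pSphere1', 'skinCluster1')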

Another area that requires significant cleanup is the underarm, where the serratus anterior lies. Here I thought the new approach would work very well; unfortunately, the binding didn’t have a noticeable difference from previous methods.

(Head mesh from CryENGINE Asset Pack on Steam)

Few things are more difficult to skin than the human face. Here you can see traditional vs. geodesic. I will say it’s definitely better than the old bind, but it still has issues. This is one of the first initial skin binds I have seen on a closed-mouth neutral bindpose that has no crosstalk from one lip to the other. For the traditional bind on the left, I tweaked the falloffs across three different binds.

Multi-Threaded?

voxelbind_crop

Another thing I like is a hint at a multi-threaded future: the binding process (voxel calculation, etc.) is multi-threaded. At Crytek, we even make hardware purchasing decisions based on Maya not being multi-threaded. We get animators the fastest two-core CPUs, which gives them better interactive framerates while still leaving a core for a headless mayapy to export a long linear cutscene or animation. It’s nice to see Autodesk begin to think about multi-threading tools and processes.

In Conclusion

The new geodesic bind algorithm from Autodesk is a step forward. There’s still no free lunch, but I will be using this as my default bind in the future, and I will update this post if I run into any problems or benefits not outlined here. It would be great if there were a voxel debug view, or the ability to dynamically drive voxel resolution with an input like vertex colors, a map, or polygon density.

Backwards Compatibility: New Nodes and Attrs

If you just want to use the latest Maya to try the feature, here are some gotchas. There is a new geomBind node, and there are some new attributes on shape nodes:

// Error: file: C:/Users/chris/Desktop/TechAnimationTest/TechAnimationTest/Head_Mesh_skin.ma line 28725: The skinCluster 'skinCluster1' has no 'gb' attribute. //
// Warning: file: C:/Users/chris/Desktop/crytek_sdk_head_a/head_a.ma line 27464: Unrecognized node type 'geomBind'; preserving node information during this session. //
// Error: file: C:/Users/chris/Desktop/crytek_sdk_head_a/head_a.ma line 34: The mesh 'eyes_MSH' has no '.sdt' attribute. //

The geomBind node stores ‘the post voxel validation state performed during the geodesic voxel bind algorithm’ and some other attributes. It has a message attr that connects to a skinCluster. The .sdt attr on shapes is not related to skinning; it is a new ‘Subdivision Method’ attr for the OpenSubdiv support.

geomBindNodes

That said, it seems to work fine for me if I just delete that stuff; the skin weights are fine.
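
If you want to script that cleanup, something like this minimal sketch could do it. Note that in an older Maya the geomBind node comes in as an ‘unknown’ node, so the second loop is a much heavier hammer:

import maya.cmds as cmds

#in Maya 2015: delete geomBind nodes before saving for an older version
for node in cmds.ls(type='geomBind'):
    cmds.delete(node)

#in an older Maya: the type is unrecognized, so it shows up as 'unknown';
#make sure nothing else you care about is in that list before deleting
for node in cmds.ls(type='unknown'):
    cmds.delete(node)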

 

posted by admin at 1:29 AM  

Monday, June 30, 2014

Wasted Time, Sunken Cost, and Working In a Team

sunk

YOUR APE ANCESTORS

Let’s say that you want to do something, like watch a movie. You arrive, open your wallet to purchase a 10 dollar ticket, and notice you have lost a 10 dollar bill; the majority of people (88%) buy a movie ticket anyway.

Let’s examine a slightly different situation, where you arrive at the theater but have misplaced your ticket. Would you go buy another? Studies show that a majority of people (54%) would not re-purchase a ticket and watch the film. The situations are financially identical, but in the first you lost 10 dollars that wasn’t associated with the movie, while in the second you lost your ticket: 10 dollars that was specifically allotted to that task, and that loss stings.

This is a great example of the Sunk Cost Fallacy. Kahneman and Tversky are two researchers who have spent a lot of their careers looking at loss aversion and decision theory. The bottom line is, it’s human nature that the more you invest in something, the harder it is to abandon it. As a Technical Artist, you will find yourself in positions where you are the decision-maker; don’t let your ape ancestors trick you into making a poor decision.

..since all decisions involve uncertainty about the future the human brain you use to make decisions has evolved an automatic and unconscious system for judging how to proceed when a potential for loss arises. Kahneman says organisms that placed more urgency on avoiding threats than they did on maximizing opportunities were more likely to pass on their genes. So, over time, the prospect of losses has become a more powerful motivator on your behavior than the promise of gains. Whenever possible, you try to avoid losses of any kind, and when comparing losses to gains you don’t treat them equally. – You Are Not So Smart

51809459

IN PRODUCTION

As a Technical Artist in a position to lead or direct a team, you will often be the person signing off tools or features you and your team have requested. How many times have you been in the following situation:

A feature or tool is requested. Joe, a genius ‘lone wolf’ programmer, receives the task; he is briefed and told to update the customers periodically, or to ask them if he needs any clarification. Now, sometimes what happens is what my brother likes to call ‘The Grand Reveal’. It’s where, for whatever reason, Joe sits in his corner working hard on a task, not involving anyone, and on the last day he valiantly returns from the mountaintop and presents something that is unfortunately neither what was requested nor what was needed.

In this situation, you get together with his Lead and point out that what was delivered is not what was requested. He will more than likely reply, “But Joe spent four weeks on this! Surely we can just use this until Joe can rework it later?”

No, you really can’t. Joe needs to know he works on a team, and that people rely on his work. Nothing gets people to deliver what they are supposed to next time like being forced to redo their work. I guarantee you that next time, Joe will be at your team’s desks any time he has a question about the tool or feature he is working on. You know the needs of your project or team; it’s important that you do not compromise those because someone wasted time running off in the wrong direction or has problems working in a team.

I’m sure Joe is a really smart guy, but he’s also now four weeks behind.

 

HOW TO AVOID SINKING CASH IN WASTED EFFORT

Anything that is wasted effort represents wasted time. The best management of our time thus becomes linked inseparably with the best utilization of our efforts.
- Ted Engstrom

CREATE ‘FEATURE BRIEFS’

A Feature Brief is a one-page document that serves as a contract between the person requesting a feature and the one implementing it. My Feature Briefs outline three main things:

  1. A short description of the feature or tool
  2. Its function – how does it work, what are the expected results?
  3. Its justification – why is it needed? What is the problem that needs to be solved?

It’s important that work not begin until both parties agree on all terminology and requests in the feature brief; again, treat it as a contract. It’s worth mentioning that Feature Briefs aren’t always needed, but they’re a great way to make sure goals are clearly defined, everyone’s on the same page, and there’s zero wiggle room for interpretation.

GATED DEVELOPMENT

Work with Joe’s Lead or Manager to set up ‘gates’; it’s important that Joe gets feedback as early as possible if he’s going down the wrong track. I understand that bothering people halfway through a task may not be kosher in Agile development, but never just assume that someone will deliver what you need on the last day of a sprint.

dilbert

Break down the goal into tasks whose progress can be reviewed; it’s important that you, the primary stakeholder, are involved in signing off these gates. Any gated process is only as useful as the person signing off the work. The above comic may seem harsh, but it’s vitally important that the stakeholder is involved in reviewing work. Joe’s manager has a vested interest in Joe moving on to his next task; you have a vested interest in the tool or feature being what your team, the company, and whoever else needs.

Perhaps Joe will first present an outline; or maybe, after taking a detailed look at the problem, Joe has a better solution he would like to pitch, and you all agree to change the Feature Brief. The next gate would be to evaluate a working prototype. You probably know a lot about the feature, as you requested it: are there any gotchas, any things that just won’t work or have been overlooked? Last is usually a more polished implementation and a user interface.

check_progress

ALWAYS CHECK THE PROGRESS OF EVERYTHING

If Joe has a Lead or Manager, check with them; there’s no need to bother Joe, that’s what the others are there for. If you ask them for details about where he’s at, more often than not they will offer for you to speak with him or get you an update. It’s just important to understand that if Joe delivers something that’s not what you need, it’s your fault too. Joe is only a genius in the trenches; it’s your job to make sure that he’s not barking up the wrong tree and wasting company time. It may be tempting, but never allow these guys to shoot themselves in the foot: if you think he’s not on the right track, you need to do something about it. Even without gated development, frequently check the progress of items that are important to you. The atmosphere should be that of a relay race; you are ready to accept the baton, and it needs to be what was agreed upon or you all fail.

hh8ocms9

NEVER SETTLE FOR A HALF-BAKED TEMPORARY SOLUTION YOU CANNOT LIVE WITH

More often than not, whatever Joe did in the time he had allotted is going to be what you ship with. If you agree he will return to address the issues later, make sure that when this doesn’t happen, your team can still be successful. Nothing should be higher priority than a mistake that holds up another team. I am sure you feel this way when it’s your team: when a rig update from last week is causing all gun-holster keys to be lost on animation export, it’s important to address that before new work. The same can be said for Joe’s work. Don’t make it personal; he is now behind, your guys are relying on him, and it should be high priority for him to deliver the agreed-upon work.

posted by admin at 12:02 AM  

Tuesday, June 3, 2014

Undersea Creatures, 2013

After Ryse wrapped, Colleen and I went diving in Asia for a month. I finally finished the epic After Effects project; srsly, not for the faint of heart, I think I had over 200 layers. So yeah, my love for creatures has completely enveloped my spare time as well.

Colleen shot roughly half of this. While I am fiddling around with my aperture and strobes, she’s already gotten a video of the thing doing a backflip while waving to the camera. The little frogfish yawning, the tozeuma shrimp changing directions, and others are all hers.

posted by admin at 1:05 AM  

Tuesday, May 27, 2014

PyQt: Composite Widgets

customWid2

So, the past few nights I was racking my brain a bit to get multiple widgets added to a listview. I wanted to see a list of animations, and each item in the list needed to have clickable buttons and special labels.

I scoured the internets and dusted off my old trusty ‘Rapid GUI Programming with Python and Qt’ book. I got the idea for the above from the ‘Composite Widgets’ chapter subsection, though they don’t use setItemWidget to insert a composite widget into another widget.

Here is what my QtDesigner file looked like:

customWid

I wanted to dynamically load a UI file of a custom widget and compile it with the uic module. I first looked at making a delegate, but I just could not get that working; if you have done this with a delegate, let me know in the comments! (From the docs, it seems delegates cannot be composites of multiple widgets.)

In the end I used pyuic4 to compile the above UI file into Python code, then dumped it, minus the form/window code, into a class I derive from QWidget:

from PyQt4 import QtGui, QtCore

class animItemWidget(QtGui.QWidget):
    def __init__(self, parent=None):
        super(animItemWidget, self).__init__(parent)
        #layout code below was generated by pyuic4 from the .ui file
        self.horizontalLayout_4 = QtGui.QHBoxLayout(self)
        self.horizontalLayout_4.setSpacing(2)
        self.horizontalLayout_4.setMargin(3)
        #blah, blah, blah

At the bottom of that lengthy UI frenzy of an init, let’s connect a button to a function:

self.connect(self.button02, QtCore.SIGNAL("clicked()"), self.awesome)

Now define that function. Let’s just print that the animation whose button you clicked is AWESOME:

    def awesome(self):
        print self.label.text() + ' is awesome!'

This could do anything with the anim name or various data bound to this object, like check out/sync a file from Perforce, load a file in Maya, etc.

Now let’s make our main window. We are going to use setItemWidget to insert our animItemWidget into the QListWidget called ‘list’. Notice that I have access to every UI element in the composite widget.

from PyQt4 import QtGui, QtCore, uic

class uiControlTest(QtGui.QMainWindow):
    def __init__(self):
        super(uiControlTest, self).__init__()
        self.ui = uic.loadUi('uiControlTest.ui')
        self.ui.show()

        for i in range(0, 100):
            #the composite widget with our labels and buttons
            wid = animItemWidget()
            wid.label_2.setText('Last edited by chrise @2014.06.21:23:17')
            wid.label.setText('Animation ' + str(i))

            #a placeholder item, sized to fit, that hosts the composite widget
            wid2 = QtGui.QListWidgetItem()
            wid2.setSizeHint(QtCore.QSize(100, 40))
            self.ui.list.addItem(wid2)
            self.ui.list.setItemWidget(wid2, wid)

Now, of course, in my example I just quickly made a bunch of widgets, so their names are all default, but you get the idea. If you have a better way to do this, perhaps something more performant, please let me know in the comments.

Note: It looks like that book is freely available on a college class website, save yourself 50 bucks: http://www.cs.washington.edu/research/projects/urbansim/books/pyqt-book.pdf

posted by admin at 2:38 AM  

Sunday, May 11, 2014

Maya: Vector Math by Example

BEFORE WE BEGIN

This post is about how to use vector math and trigonometric functions in Maya; it is not a linear algebra or vector math course, but it should give you what you need to follow along in Maya while you learn from online materials. Khan Academy is a great online learning resource for math, and Mathematics for Computer Graphics and Linear Algebra and Its Applications are very good books. Gilbert Strang, the author of the latter, has his entire MIT Linear Algebra course available here in video form. Also, Volume 2 of Complete Maya Programming has some vector math examples in MEL and C++.

vector_wikipedia

VECTORS

Think of the white vector above as a movement. It does have three scalar values (ax, ay, az), sure, but do not think of a vector as a point or a position. When you see a vector, I believe it helps to imagine it as a movement from (0,0,0), an origin. We don’t know where it started; we only know the movement.

A vector has been normalized, or is considered a unit vector, when its length is one. This is achieved by dividing each component by the vector’s length.
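
Here’s what that looks like in plain Python, before we touch any Maya classes; a minimal sketch (as we’ll see below, MVector does this for you with .normal()):

import math

def normalize(v):
    """Return a unit-length copy of the 3d vector v (a list or tuple)."""
    length = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return [v[0] / length, v[1] / length, v[2] / length]

print normalize([3.0, 4.0, 0.0])
##>>[0.6, 0.8, 0.0]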

VECTOR LIBRARIES

There are many Python libraries dedicated to vector math, but none ship with Python itself. I have tried numPy, then pyEuclid, and finally piMath. It can definitely be a benefit to load the same vector class across multiple apps like Maya, MotionBuilder, etc., but I used those at a time when MotionBuilder had no vector class, and before Maya had the API. Today, I use the vector class built into the Maya Python API (2.0), which wraps the underlying Maya C++ code: MVector.

I had to call out 2.0 above because, in the old API, you have to ‘cast’ your vectors to and from Maya types: classes like MVector (Maya’s vector class) don’t accept Python objects like lists or tuples. This is still the case with the 2014 SWIG implementation of the default API, but not with API 2.0. One solution is to override the MVector class so that it accepts Python lists and tuples, essentially casting them for you automatically:

import maya.OpenMaya as om  #the old API

class MVector(om.MVector):
    def __init__(self, *args):
        #accept a single list or tuple of three values, else pass args through
        if args and isinstance(args[0], (list, tuple)) and len(args[0]) == 3:
            om.MVector.__init__(self, args[0][0], args[0][1], args[0][2])
        else:
            om.MVector.__init__(self, *args)

But that aside, just use Maya Python API 2.0:

#import API 2.0
import maya.api.OpenMaya as om
#import old API
import maya.OpenMaya as om_old

 

CREATING VECTORS IN MAYA

Let’s first create two cubes and move them:

import maya.cmds as cmds
import maya.api.OpenMaya as om
cube1, cube2 = cmds.polyCube()[0], cmds.polyCube()[0]
cmds.xform(cube2, t=(1,2,3))
cmds.xform(cube1, t=(3,5,2))

Let’s get the translation of each, and store those as MVectors:

t1, t2 = cmds.xform(cube1, t=1, q=1), cmds.xform(cube2, t=1, q=1)
print t1,t2
v1, v2 = om.MVector(t1), om.MVector(t2)
print v1, v2

This will return the translation in the form [x, y, z], and also the MVector, which will print (x, y, z). In the old API you would get <__main__.MVector; proxy of <Swig Object of type ‘MVector *’ at 0x000000002941D2D0> >, a SWIG-wrapped C++ object; API 2.0 prints the vector itself.

Note: I just told you to think of vectors as a movement and not as a position, and then in the first example I store translations in vectors. Maybe not the best, but remember: each translation is really stored as a movement in space from an origin.

So let’s start doing stuff and things.
 

LENGTH / DISTANCE / MAGNITUDE

We have two translations, both stored as vectors. Let’s get the distance between them. To do this, we make a new vector describing the ray from one location to the other, and then find its length, or magnitude; we subtract each component of v1 from v2:

v = v2-v1
print v

This results in (-2.0, -3.0, 1.0).

The length of a vector is the square root of the sum of its squared components, sqrt(x^2 + y^2 + z^2), but as we haven’t covered the math module yet, let’s just ask the MVector for its length:

print om.MVector(v2-v1).length()

This will return 3.74165738677, which you can verify by snapping a measure tool onto the cubes:

distance

Use Case: Distance Check

As every joint in a hierarchy lives in its parent’s space, a joint’s ‘magnitude’ is its length. Let’s create a lot of joints, then select them by joint length.

import maya.cmds as cmds
import random as r
import maya.api.OpenMaya as om
 
root = cmds.joint()
jnts = []
 
for i in range(0, 2000):
    cmds.select(cl=1)
    jnt = cmds.joint()
    trans = (r.randrange(-100,100), r.randrange(-100,100), r.randrange(-100,100))
    cmds.xform(jnt, t=trans)
    jnts.append(jnt)
 
cmds.parent(jnts, root)

joint_dist

So we’ve created this cloud of joints, but let’s just select those joints with a joint length of less than 50.

sel = []
for jnt in jnts:
    v = om.MVector(cmds.xform(jnt, t=1, q=1))
    if v.length() < 50: sel.append(jnt)
 
cmds.select(sel)

 

DOT PRODUCT / ANGLE BETWEEN TWO VECTORS

The dot product is a scalar value obtained by multiplying the corresponding components of two vectors and summing the results: x1*x2 + y1*y2 + z1*z2. On its own that doesn’t mean much, so I will tell you that the dot product is extremely useful for finding the angle between two vectors, or for checking which general direction something is pointing.

dot = v1*v2
print dot

USE CASE: Direction Test

direction

The dot product of two normalized vectors will always be between -1.0 and 1.0. If the dot product is greater than zero, the vectors are pointing in the same general direction; zero means they are perpendicular; less than zero means they point in opposite directions. So let’s loop through our joints and select those that are facing the x direction:

sel = []
for jnt in jnts:
    v = om.MVector(cmds.xform(jnt, t=1, q=1)).normal()
    dot = v*om.MVector([1,0,0])
    if dot > 0.7: sel.append(jnt)
cmds.select(sel)

USE CASE: Test World Colinearity

This one comes from last week in the office: one of my guys wanted to know how to check which way something was facing in the world. I believe it was to derive some information from arbitrary skeletons. This builds on the above by getting each axis vector of a node in world space.

def getLocalVecToWorldSpace(node, vec=om.MVector.kXaxisVector):
    matrix = om.MGlobal.getSelectionListByName(node).getDagPath(0).inclusiveMatrix()
    vec = (vec * matrix).normal()
    return vec
 
 
def axisVectorColinearity(node, vec):
    vec = om.MVector(vec)
 
    x = getLocalVecToWorldSpace(node, vec=om.MVector.kXaxisVector)
    y = getLocalVecToWorldSpace(node, vec=om.MVector.kYaxisVector)
    z = getLocalVecToWorldSpace(node, vec=om.MVector.kZaxisVector)
 
    #return the dot products
    return {'x': vec*x, 'y':vec*y, 'z':vec*z}
 
jnt = cmds.joint()
print axisVectorColinearity(jnt, [0,0,1])

You can rotate the joint around and you will see which axis is most closely pointing to the world space vector you have given as an input.

USE CASE: Angle Between Vectors

angle

When working with unit vectors, we can take the arc cosine of the dot product to derive the angle between the two vectors. This requires trigonometric functions that are not available in our vector class, so we must import the math module. Scrapping the code above, let’s find the angle between two joints:

import maya.cmds as cmds
import maya.api.OpenMaya as om
import math
 
jnt1 = cmds.joint()
cmds.select(cl=1)
jnt2 = cmds.joint()
cmds.xform(jnt2, t=(0,0,10))
cmds.xform(jnt1, t=(10,0,0))
cmds.select(cl=1)
root = cmds.joint()
cmds.parent([jnt1, jnt2], root)
 
v1 = om.MVector(cmds.xform(jnt1, t=1, q=1)).normal()
v2 = om.MVector(cmds.xform(jnt2, t=1, q=1)).normal()
 
dot = v1*v2
print dot
print math.acos(dot)
print math.acos(dot) * 180 / math.pi

So at the end here, the arc cosine of the dot product returns the angle in radians (1.57079632679), which we convert to degrees by multiplying by 180 and dividing by pi (90.0). To check your work: there is no angle tool in Maya, but you can create a circle shape and set its sweep degrees to your result.

Now that you know how to convert radians to an angle, if you store the result of the above in an MAngle class, you can ask for it however you like:

print om.MAngle(math.acos(dot)).asDegrees()

Now that you know how to do this, there is an even easier way: using the angle function of the MVector class, you can ask it for the angle to a second vector:

print v1.angle(v2)

There are also useful methods like v1.rotateBy(r,r,r) for an offset, and v1.rotateTo(v2). I say (r,r,r) in my example, but rotateBy takes angles or radians.

CHALLENGE: Can you write your own rad_to_deg and deg_to_rad utility methods?

USE CASE: Orient-Driver

poseDriver
Moving along, let’s apply these concepts to something more interesting: driving a blendshape based on the orientation of a joint. Since the dot product is a scalar value, we can pipe it right into a blendshape: when the dot product is 1.0, we know the orientations match; when it’s 0, we know they are perpendicular.

vecPoseDriver

We use a locator constrained to the child joint to help derive a vector. The fourByFourMatrix stores the original position of the locator. (I tried using the holdMatrix node, which should store a cached version of the original locator matrix, but it kept updating.) We use the vectorProduct node in ‘dot product’ mode to get the dot product of the original vector and the current vector of the joint, then pipe that value into the weight of the blendshape node.

Now, this simple example doesn’t take twist into account, and we aren’t setting a falloff or cone: the weight will be 1.0 when the vectors align and the blendshape is on 100%, and 0.0 when they’re perpendicular and the blendshape is on 0%. I also don’t clamp the dot product, so the blendshape input can go to -1.
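
For reference, here’s a rough script version of that network. The node and attribute names are hypothetical, and I cache the rest vector with setAttr instead of a fourByFourMatrix to keep the sketch short:

import maya.cmds as cmds

def build_orient_driver(locator, blendshape_node, target_index=0):
    """Wire a vectorProduct in dot product mode into a blendshape weight."""
    vp = cmds.createNode('vectorProduct')
    cmds.setAttr(vp + '.operation', 1)        #1 = dot product
    cmds.setAttr(vp + '.normalizeOutput', 1)  #keep the result in -1..1

    #cache the rest-pose vector as input1 (the post uses a fourByFourMatrix)
    rest = cmds.getAttr(locator + '.translate')[0]
    for axis, value in zip('XYZ', rest):
        cmds.setAttr(vp + '.input1' + axis, value)

    #the live vector: the locator rides the child joint via a constraint
    cmds.connectAttr(locator + '.translate', vp + '.input2')

    #for a dot product, the scalar result lands in outputX
    cmds.connectAttr(vp + '.outputX',
                     '%s.weight[%d]' % (blendshape_node, target_index))
    return vp

#hypothetical usage:
#build_orient_driver('loc_elbow', 'blendShape1')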
 

CROSS PRODUCT / PERPENDICULAR VECTOR TO TWO VECTORS

The cross product results in a vector that is perpendicular to two vectors. By hand, you would compute (v1.y*v2.z - v1.z*v2.y, v1.z*v2.x - v1.x*v2.z, v1.x*v2.y - v1.y*v2.x), but luckily the vector class manages this for us via the ‘^’ operator:

cross = v1^v2
print cross

USE CASE: Building a coordinate frame

crossProduct

If we take the cross product v1 ^ v2 above, and then cross that result with v1, i.e. (v1 x v2) x v1, we have a third perpendicular vector and can build a coordinate system, or ‘orthonormal basis’. A useful example is aligning a node to a NURBS curve using the pointOnCurveInfo node.

crossProduct

In the example above, we are using two cross products to build a matrix from the tangent and position of the pointOnCurveInfo, then decomposing this matrix to set the orientation and position of a locator.
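
The image shows the node network, but the same idea in script form looks like this minimal sketch (curve and node names are hypothetical, and I use world Y as the up hint, which degenerates if the tangent is parallel to it):

import maya.cmds as cmds
import maya.api.OpenMaya as om

def align_to_curve(curve_shape, node, parameter=0.5):
    """Build an orthonormal basis from a curve tangent and position."""
    poc = cmds.createNode('pointOnCurveInfo')
    cmds.connectAttr(curve_shape + '.worldSpace[0]', poc + '.inputCurve')
    cmds.setAttr(poc + '.parameter', parameter)

    pos = om.MVector(cmds.getAttr(poc + '.position')[0])
    tangent = om.MVector(cmds.getAttr(poc + '.tangent')[0]).normal()

    #first cross: tangent x world-up gives a side vector
    side = (tangent ^ om.MVector(0, 1, 0)).normal()
    #second cross: side x tangent gives the third perpendicular axis
    up = (side ^ tangent).normal()

    #rows of the 4x4 transform: x, y, z axes, then position
    cmds.xform(node, worldSpace=True, matrix=[
        tangent.x, tangent.y, tangent.z, 0,
        up.x,      up.y,      up.z,      0,
        side.x,    side.y,    side.z,    0,
        pos.x,     pos.y,     pos.z,     1])

#hypothetical usage:
#align_to_curve('curveShape1', cmds.spaceLocator()[0])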




Many people put content like this behind a paywall.
If you found this useful, please consider buying me a beer.

posted by admin at 11:49 PM  

Saturday, August 24, 2013

Ryse at the Anaheim Autodesk User Event

I have been working on Ryse for almost two years now; it’s one of the most amazing projects I have had the chance to work on. The team we have assembled is just amazing, and it’s great to be in a position to show people what games can look like on next-gen hardware. Autodesk asked us to come out to Anaheim and talk about some of the pipeline work we have been doing, and it’s great to finally be able to share some of this stuff.

A lot of people have been asking about the fidelity, like ‘where are all those polygons?’ If you look at the video, you will see that the regular Romans actually have leather ties modeled that deform with the movement of the plates, and something that might never be noticed: deforming leather straps underneath the plates, modeled and rigged, holding together every piece of Lorica Segmentata armor. And underneath that: a red tunic! Ryse is a labor of love!

We’re all working pretty hard right now, but it’s the kind of ‘pixel fucking’ that makes great art. We’re really polishing, and having a blast. We hope the characters and world we have created knock your socks off in November.

posted by admin at 11:16 PM  

Monday, June 10, 2013

WordPress Malware Massacre

army_of_darkness_02

Some of my friends alerted me that my site was listed in the Google malware database a week ago, but I was focused on E3 and hadn’t had time to look into it. As it turns out, a vulnerability in a WordPress theme that I didn’t even have active allowed a virus to completely hose all sites on my co-located server with spam and random shit.

I wrote a quick Python script [dirTools.py] that walks all files and directories on Linux and reports the following:

  • HTML infested with the Twitter iFrame code injection
  • Malicious PHP, and code injected into existing PHP, that evals strings obfuscated with
    • base_64
    • gzip
    • rot13
  • .htaccess files that use mod_rewrite to redirect your users to bogus sites and internal PHP files
  • Files with permissions set greater than 664 and folders greater than 755
  • Hidden directories

I wrote this this afternoon, and it’s focused only on this specific WordPress malware; it’s basically some example code that warns of the above and has two methods to remove the PHP and HTML code injections. Feel free to ask me questions, and use at your own risk: by default the fixer methods are commented out, so the script only reports issues. With them uncommented, it does make file edits to fix the code injections.
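
To give a rough idea of the kind of checks involved, here’s a simplified sketch, not the actual dirTools.py; the regex and the permissions test are illustrative stand-ins:

import os
import re
import stat

#eval() of base64/gzip/rot13-obfuscated strings is the classic injection tell
SUSPICIOUS = re.compile(r'eval\s*\(\s*(base64_decode|gzinflate|str_rot13)\s*\(')

def scan(root):
    for dirpath, dirnames, filenames in os.walk(root):
        for dirname in dirnames:
            if dirname.startswith('.'):
                print 'hidden directory:', os.path.join(dirpath, dirname)
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.endswith(('.php', '.html', '.htm')):
                with open(path, 'rb') as f:
                    if SUSPICIOUS.search(f.read()):
                        print 'possible code injection:', path
            #flag world-writable files, a rough stand-in for the 664 check
            if stat.S_IMODE(os.stat(path).st_mode) & stat.S_IWOTH:
                print 'loose permissions:', path

scan('/var/www')  #hypothetical web root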

posted by admin at 1:56 AM  

Monday, February 11, 2013

Object Oriented Python in Maya Pt. 1

I have written many tools at different companies; I taught myself, and don’t have a CS degree. I enjoy shot-sculpting and skinning, and have been known to tweak parameters of on-screen visuals for hours. I don’t consider myself a ‘coder’; I still can’t allocate my own memory. I feel I haven’t really used OOP from an architecture standpoint, so I bought an OOP book and set out on a journey of self-improvement.

‘OOP’ In Maya

In Maya, you often use DG nodes as ‘objects’. At Crytek we have our own modular nodes that create meta-frameworks encapsulating the character pipeline at multiple levels (characters, characterParts, and rigParts). Without knowing it, we were using object-oriented analysis when designing our frameworks, and we even had some charts that look quite a bit like UML. DG node networks are often connected with message connections; a message connection is akin to a pointer to the object in memory, whereas I felt a Python ‘object’ could always easily lose its mapping to the scene.

It is now possible with the OpenMaya C++ API to store a pointer to the DG node in memory and just request the full DAG path any time you want it; PyMEL objects are also Python classes and keep their link to the DG node even when the string name changes.

“John is 47 Years Old and 6 Feet Tall”

Classes always seemed great for times when I had a bunch of data objects; the classic uses are books in a library, or customers: John is 47 years old and likes the color purple. Awesome. However, in Maya, all our data is in nodes already, and those nodes have attributes that serialize into a Maya file when I save: so I never really felt the need to use classes.

All this ‘getting’, ‘setting’, and ‘listing’ really grows tiresome, though, even when you have custom methods to do it fairly easily.

It was difficult to find any really useful examples of OOP with classes in Maya. Most of our code is for ‘constructing’: building a rig, building a window, etc. Code that runs in a linear fashion and does ‘stuff’. There’s no huge architecture; the architecture is Maya itself.

Class Warfare

I wanted to package my information in classes and pass it back and forth in a more elegant way, at all times, not just while constructing things. So for classes to be useful to me, I needed them to exist in sync with DG nodes.

I also didn’t want to have to get and set the information when syncing the classes with DG nodes; that kind of defeats the purpose of Python classes, IMO.

Any time I opened a tool, I would ‘wrap’ DG nodes in classes that harnessed the power of Python and OOP. Doing this meant diving into the deep end a bit, but since that was what was useful to me, that’s what I want to talk about here.

To demonstrate, let’s construct this example:

import maya.cmds as cmds

#the setup: a locator with message connections to controllers and rendermeshes
loc = cmds.spaceLocator()
cons = [cmds.circle()[0], cmds.circle()[0]]
meshes = [cmds.sphere()[0], cmds.sphere()[0], cmds.sphere()[0]]
cmds.addAttr(loc, sn='controllers', at='message')
cmds.addAttr(cons, sn='rigging', at='message')
for con in cons: cmds.connectAttr(loc[0] + '.controllers', con + '.rigging')
cmds.addAttr(loc, sn='rendermeshes', at='message')
cmds.addAttr(meshes, sn='rendermesh', at='message')
for mesh in meshes: cmds.connectAttr(loc[0] + '.rendermeshes', mesh + '.rendermesh')

So now we have this little node network:

node_network

Now let’s wrap this network in a class. We are going to use @property to give us what feels like an attribute but is really a method that runs and returns a value (from the DG node) when the ‘attribute’ is queried. I believe using properties is key to harnessing the power of classes in Maya.

class GameThing(object):
	def __init__(self, node):
		self.node = node
 
	#controllers
	@property
	def controllers(self):
		return cmds.listConnections(self.node + ".controllers")

So now we can query the ‘controllers’ attribute/property, and it returns our controllers:

test = GameThing(loc)
print test.controllers
##>>[u'nurbsCircle2', u'nurbsCircle1']

Next up, we add a setter, which runs code when you set a property ‘attribute’:

class GameThing(object):
	def __init__(self, node):
		self.node = node
 
	#controllers
	@property
	def controllers(self):
		return cmds.listConnections(self.node + ".controllers")
 
	@controllers.setter
	def controllers(self, cons):
		#disconnect existing controller connections; listConnections
		#returns None when there are none, hence the 'or []'
		for con in cmds.listConnections(self.node + '.controllers') or []:
			cmds.disconnectAttr(self.node + '.controllers', con + '.rigging')
 
		for con in cons:
			if cmds.objExists(con):
				if not cmds.attributeQuery('rigging', n=con, ex=1):
					cmds.addAttr(con, longName='rigging', attributeType='message', s=1)
				cmds.connectAttr((self.node + '.controllers'), (con + '.rigging'), f=1)
			else:
				cmds.error(con + ' does not exist!')

So now when we set the ‘controllers’ attribute/property, it runs a method that blows away all current message connections and adds new ones connecting your cons:

test = GameThing(loc)
print test.controllers
##>>[u'nurbsCircle2', u'nurbsCircle1']
test.controllers = [cmds.nurbsSquare()[0]]
print test.controllers
##>>[u'nurbsSquare1']
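
The network we built also has that ‘rendermeshes’ connection, and extending the class is the same pattern again. A quick sketch (these methods would go inside GameThing above, and the same error handling from the controllers setter applies):

	#add these to the GameThing class above; same pattern as 'controllers'
	@property
	def rendermeshes(self):
		return cmds.listConnections(self.node + '.rendermeshes')

	@rendermeshes.setter
	def rendermeshes(self, meshes):
		#disconnect existing rendermesh connections
		for mesh in cmds.listConnections(self.node + '.rendermeshes') or []:
			cmds.disconnectAttr(self.node + '.rendermeshes', mesh + '.rendermesh')
		for mesh in meshes:
			if not cmds.attributeQuery('rendermesh', n=mesh, ex=1):
				cmds.addAttr(mesh, longName='rendermesh', attributeType='message')
			cmds.connectAttr(self.node + '.rendermeshes', mesh + '.rendermesh', f=1)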

To me, something like properties makes classes infinitely more useful in Maya. For a short time at Crytek, we tried to engineer a DG node that, when an attr changed, could eval a string with a similar name on the node. This is essentially what a property can do, and it’s pretty powerful. Take a moment to look through the code of some of the real ‘heavy lifters’ in the field, like zooToolBox, and you’ll see @property all over the place.

I hope you found this as useful as I would have.

posted by admin at 1:03 AM  

Friday, January 18, 2013

Moving to ‘Physically-Based’ Shading

damo_engine

At the SIGGRAPH Autodesk User Group we spoke a lot about our character technology and our switch to Maya. One area we haven’t spoken so much about is the next-gen updates to our shading and material pipeline; however, Nicolas and I have an interview out in Making Games where we talk about that in detail publicly for the first time, so I can mention it here. One of the reasons we have really focused on character technology is that it touches so many departments and is a very difficult issue to crack; at Crytek we already have a strong history of lighting and rendering.

What is ‘Physically-Based’ Shading?

The first time I ever encountered a physically-based pipeline was while working at ILM. The guys had gotten tired of having to create different light setups and materials per shot or per sequence. Moving to a more physically-based shading model meant we would not waste so much time re-lighting and tweaking materials, and would also get a more natural, better initial result, quicker. [Ben Snow's 2010 PBR SIGGRAPH Course Slides]

WHAT IS MEANT BY ‘PHYSICAL’


image credit: http://myphysicswebschool.blogspot.de/

A physically-based shading model reacts much more like real-world light. One of the biggest differences is that the amount of reflected light can never be more than the incoming amount that hit the surface; older lighting models tended to have overly bright and overly broad specular highlights, and with the Lambert/Blinn-Phong model it was possible to have many situations where a material emitted more light than it received. An interesting caveat of physically-based shading is that the user no longer has control over the specular response (more under ‘Difficult Transition’ below). Because the way light behaves is much more realistic and natural, materials authored for this shading model work equally well in all lighting environments.

Geek Stuff: ‘Energy conservation’ is a term you will often hear used in conjunction with physically-based lighting. Here’s a quote from the SIGGRAPH ’96 course notes that I always thought was a perfect explanation of reflected diffuse and specular energy:

“When light hits an object, the energy is reflected as one of two components; the specular component (the shiny highlight) and the diffuse (the color of the object). The relationship of these two components is what defines what kind of material the object is. These two kinds of energy make up the 100% of light reflected off an object. If 95% of it is diffuse energy, then the remaining 5% is specular energy. When the specularity increases, the diffuse component drops, and vice versa. A ping pong ball is considered to be a very diffuse object, with very little specularity and lots of diffuse, and a mirror is thought of as having a very high specularity, and almost no diffuse.”

PHYSICALLY-PLAUSIBLE

It’s important to understand that everything is a hack; whether it’s V-Ray or a game engine, we are just talking about different levels of hackery, and game engines often take the cake for approximations and hacks. As one of my guys once said, ‘Some people just remove spec maps from their pipeline and all of a sudden they’re ‘physically-based’’. It’s not just the way our renderers simulate light that is an approximation; it’s important to remember that we have to feed the shading model physically plausible data as well. When you make an asset, you are making a material that tries to mimic certain physical characteristics.

DIFFICULT TRANSITION

Once physics gets involved, you can cheat much less, and in film we cheeeeeaaat. Big time. Ben Snow, the VFX supervisor who ushered in the change to a physically-based pipeline at ILM, was quoted in VFXPro as saying: “The move to the new [pipeline] did spark somewhat of a holy war at ILM.” I mentioned before that the artist loses control of the specular response, and in general, artists don’t like losing control, or adopting new ways of doing things.

WHY IT IS IMPORTANT FOR GAMES AND REAL-TIME RENDERING

Aside from the more natural lighting and rendering, in an environment where the player determines the camera, and often the lighting, it’s important that materials work under all possible lighting scenarios. As the Product Manager of Cinebox, I was constantly having our renderer compared to mental ray, PRMan, and others; the team added BRDF support and paved the way for physically-based rendering, which we hope to ship with Ryse in 2013.

microcompare05

General Overview for Artists

At Crytek, we have always added great rendering features, but never really took a hard focus on consistency in shading and lighting. Like ILM in my example above, we often tweaked materials for the lighting environment they were to be placed in.

GENERAL RULES / MATERIAL TYPES

Before we start talking about the different maps and material properties, you should know that in a physically-based pipeline you will have two slightly different workflows, one for metals, and one for non-metals. This is more about creating materials that have physically plausible values.

Metals:

  • The specular color for metal should always be above sRGB 180
  • Metal can have colored specular highlights (for gold and copper, for example)
  • Metal has a black or very dark diffuse color; because metals absorb all light that enters beneath the surface, they have no ‘diffuse reflection’

Non-Metals:

  • Non-metal has monochrome/gray specular color. Never use colored specular for anything except certain metals
  • The sRGB color range for most non-metal materials is usually between 40 and 60. It should never be higher than 80/80/80
  • A good clean diffuse map is required

GLOSS

gloss_chart

At Crytek, we call the map that determines roughness the ‘gloss map’. It’s actually the inverse of roughness, but we found this easier to author. This is by far one of the most important maps, as it determines the size and intensity of specular highlights, but also the contrast of the cubemap reflection, as you see above. A good detail normal map can make a surface feel like it has a certain ‘roughness’, but you should start thinking about the gloss map as adding a ‘microscale roughness’. Look above at how, as the roughness increases, so does the breadth of the specular highlight. Here is an example from our CryENGINE documentation that was written for Ryse:

[gloss map comparison images]

DIFFUSE COLOR

Your diffuse map should be a texture with no lighting information at all. Think of a light with a value of ‘100’ shining directly onto a polygon with your texture: there should be no shadow or AO information in your diffuse map. As stated above, a metal should have a completely black diffuse color.

Geek Stuff: Diffuse can also be referred to as ‘albedo’; the albedo is the measure of diffuse reflectivity. This term is primarily used to scare artists.

SPECULAR COLOR

As previously discussed, non-metals should only have monochrome/gray-scale specular color maps. Specular color is a real-world physical value, and your map should be basically flat color; use existing values and do not introduce noise or variation. The spec color map is not a place to be artistic: it stores real-world values. You can find many tables online with plausible specular color values; here is an example:

Material               sRGB Color     Linear (Blend Layer)
Water                  38 38 38       0.02
Skin                   51 51 51       0.03
Hair                   65 65 65       0.05
Plastic / Glass (Low)  53 53 53       0.03
Plastic (High)         61 61 61       0.05
Glass (High) / Ruby    79 79 79       0.08
Diamond                115 115 115    0.17
Iron                   196 199 199    0.57
Copper                 250 209 194    N/A
Gold                   255 219 145    N/A
Aluminum               245 245 247    0.91
Silver                 250 247 242    N/A

If a non-metal material is not in the list, use a value between 45 and 65.

Geek Stuff: SPECULAR IS EVERYWHERE: In 2010, John Hable did a great post showing the specular characteristics of a cotton t-shirt and other materials that you wouldn’t usually consider having specular.

EXAMPLE ASSET:

Here you can see the maps that generate this worn, oxidized lion sculpture.

rust


rust2

EXAMPLES IN AN ENVIRONMENT

640x

See above how there is no variation in the solid colors of the specular color maps? And see how the copper items on the left have a black diffuse texture?

SETTING UP PHOTOSHOP

color_settings

In order to create assets properly, we need to set up our content creation software properly, in this case Photoshop. Go to Edit > Color Settings… and set the dialog as shown above. It’s important that you author your textures in sRGB.

Geek Stuff: We author in sRGB because it gives us more precision in darker colors and reduces banding artifacts. The eye has 4.5 million cones that perceive color, but 90 million rods that perceive luminance changes; humans are much more perceptive to contrast changes than color changes!

Taking the Leap: Tips for Leads and Directors

New technologies that require paradigm shifts, in how people work or how they think about reaching an end artistic result, can be difficult to integrate into a pipeline. At Crytek I am the Lead/Director in charge of the team that made the initial shift to physically-based lighting; I also led the reference trip, and managed the hardware requests to get key artists onto calibrated wide-gamut display devices. I only mention this to put the next items in some kind of context.

QUICK FEEDBACK AND ITERATION

It’s very important that your team be able to test their assets in multiple lighting conditions. The easiest route is to make a test level where you can cycle through lighting conditions from many different game levels, or through sampled lighting from multiple points in the game. The default light in this level should be broad daylight, IMO, as it’s the hardest to get right.

USE EXAMPLE ASSETS

I created one of the first example assets for the physically-based pipeline. It was a glass inlay table that I had at home, which had wood, concrete (grout), metal, and multi-colored glass inlay. This asset served as a reference for the art team. Try to find an asset that can properly show the team how to use gloss maps; IMO, understanding how roughness affects your asset’s surface characteristics is maybe the biggest challenge when moving to a physically-based pipeline.

TRAIN KEY PERSONNEL

As with rolling out any new feature, you should train a few technically-inclined artists to help their peers along. It’s also good to have the artists give feedback to the graphics team as they begin really cutting their teeth on the system. On Ryse, we did the above, but also dedicated a single technical artist to helping with environment-art-related technology and profiling.

CHEAT SHEET

It’s very important to have a ‘cheat sheet’. This is a sheet we created on the Ryse team that lets an artist use the color picker to sample common ‘plausible’ values.

SPEC_Range_new.bmp

click to enlarge

HELP PEOPLE HELP THEMSELVES

We have created a debug view that highlights assets whose specular color is not in a physically-plausible range. We are very much in favor of making tools that help people be responsible, and that validate or highlight work that isn’t. We also allowed people to set solid specular values in the shader to limit memory consumption on simple assets.
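Just to illustrate the idea (this is a rough sketch of mine, not our actual debug view, which lives in shader code), such a validator boils down to something like:

def validateSpecColor(rgb, isMetal=False):
    '''rgb is an 8-bit sRGB specular color; returns a warning string or None'''
    r, g, b = rgb
    #non-metals should be monochrome: all channels equal
    if not isMetal and not (r == g == b):
        return 'non-metal with colored specular'
    #plausible non-metal values run from water (38) to diamond (115)
    if not isMetal and (r < 38 or r > 115):
        return 'specular value %i outside plausible range' % r
    return None

print validateSpecColor((128, 64, 64))
#>>non-metal with colored specular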

CALIBRATION AND REFERENCE ACQUISITION

calibrate

Above are two things that I actually carry with me everywhere I go. The X-Rite ColorChecker Passport, and the Pantone Huey Pro monitor calibration toolset. Both are very small, and can be carried in a laptop bag. I will go into reference data acquisition in another post. On Ryse we significantly upgraded our reference acquisition pipeline and scanned a lot of objects/surfaces in the field.


TECHNICAL IMPROVEMENTS BASED ON PRODUCTION USE

Nicolas Schulz presented many improvements based on production use at GDC 2014; his slides are here. He details things like the importance of specular filtering to preserve highlights as objects recede into the distance, and why we decided to couple normals and roughness.

UPDATE: We’ve now shipped Ryse, and I have tried to update the post a little. I was the invited speaker at HPG 2014, where I touched on this topic a bit, and can now update this post with some details and images (see Tips for Leads and Directors). Nicolas also spoke at GDC 2014 and I have linked to his slides above. Though this post focuses on environments, in the end, with the amount of armor on characters, the PBR pipeline was really showcased everywhere. Here’s an image of multiple passes of Marius’ final armor:

marus_breackUp

click to enlarge

posted by admin at 7:26 PM  

Wednesday, January 9, 2013

Raucous Ball of Noise

email_overload

I can’t remember the last time I had a new year’s resolution. But this year I decided to go for it.

A friend and I were joking that we increasingly feel like Producers: how we spend a large chunk of our time just making sure that things are moving. That a meeting has action items, or minutes. That tasks are scoped, their dependencies tracked, have resources assigned, or have dates on a calendar. That a process has proper gates to allow for course correction, etc. I now spend a majority of my time writing emails, attending meetings, or talking at desks.

Death by Mail

But what is crippling is the email. I feel I have made a career out of always trying to be helpful, but I was surprised how readily I reply to anything someone sends me, and how willing people are to just ‘go hunting with a shotgun’ and mail 15 others instead of trying to have a discussion with the right person. Many of the mails I spent time on were threads involving many people and important topics; I felt the need to be involved, but we rarely seemed to come to solid decisions, just running commentary. These mails often had more than 10 people added in CC ‘for awareness’, but then those people felt the need to contribute their opinion in some way.

It turned the simplest discussion into a raucous ball of noise, which often then required the creation of a meeting to make a decision on how to progress.

The meetings were more successful, I think in part due to the fact that only the people who needed to be involved in the decision were invited. Unfortunately, I had often already spent time on the mail thread trying to avoid the need for a meeting, only to find myself reiterating my sentiments in a meeting the next day.

I looked for the day where I wrote the fewest mails; the number was ~35, and it was a recent sick day when I had stayed home.

Small Adjustment, Big Victory

So I decided to pull myself out of this; after all, it is somewhat self-induced. Of all the options, the best seemed to be limiting myself to 10 work emails a day. All other communication would be in person, in meetings, or on the phone.

I didn’t think this would have the impact it did.

From this, other things started to fall into place. I really disliked how standard operating procedure increasingly felt like constantly looking for dropped balls. I need to let dependencies and other departments drop their balls, and hope that they will learn from it, or that someone else is watching. In essence: trust people more, and as a by-product, spend more time being a Director and less a Producer.

10 emails a day forced me to choose carefully which email discussions I get involved in. I was not respecting my own time, and this arbitrary rule forced me to. As a result, I can spend more time on Art Technology initiatives, looking at the project, talking with my team, and giving proper direction.

I can’t reject meeting invites or ignore mails, but this little adjustment has really helped me more than I thought it would.

posted by admin at 2:42 AM  

Monday, January 7, 2013

Abusing ‘Blind Data’ in Maya

‘Blind data’ is custom data that you can store on any object or its components (vertex, edge, polygon, etc). The documentation says ‘Blind data is information stored with polygons which is not used by Maya in any way..’ I believe it is used when importing meshes from other apps that have properties that do not map to Maya, so that when you take them back to those apps, those properties remain.

Anyway, the important point here is that blind data is metadata (int, float, double, boolean, string, binary) that you can attach to any component. It matters not what happens to said component: you can extract a polygon from a mesh, its index will have changed, its object will have changed, but its blind data will remain with it. The only drawback is that it can be painfully slow to write this data, but we will get to that later.

Simple Example

First let’s create a blind data template; this is required to store the data later. We use the command ‘blindDataType’ to create a template for a string type called ‘skinningInfo’, or ‘skin’ for short, giving it the ID 12344. Then we query the ID and it returns the blind data attributes we have created.

cmds.blindDataType(id=12344, dt='string', ldn='skinningInfo', sdn='skin')
print cmds.blindDataType(id=12344, tn=1, q=1)
>>['skinningInfo', 'skin', 'string']

So now we have our template, let’s try using it. This example is more focused on getting the idea across than on speed:

import maya.cmds as cmds

#node is the skinned mesh, sc is the skinCluster deforming it
#query the vertex count of the mesh
v = cmds.polyEvaluate(node, v=1)
#the influences never change per-vertex, so query them once
infs = cmds.skinCluster(sc, inf=1, q=1)
#loop through the vertices
for vtx in range(0, v):
    #get the weights of this vertex
    objVtx = node + ".vtx[" + str(vtx) + "]"
    vals = cmds.skinPercent(sc, objVtx, q=1, v=1)
    #build a dict of influence:weight
    weightDict = {}
    for i in range(0, len(infs)):
        weightDict[infs[i]] = vals[i]
    #write the dict to blind data as a string
    cmds.polyBlindData(objVtx, id=12344, at='vertex', ldn='skinningInfo', sd=str(weightDict))

So here you have saved a dictionary per vertex that has key/value pairs of influence/weight. You can query like so:

#I have a vertex selected in component mode
print cmds.polyQueryBlindData(cmds.ls(sl=1), id=12344, showComp=1)
['polySurface2.vtx[64].skin', "{u'joint2': 0.49755714634259796, 'joint3': 0.49755714634259784, 'joint1': 0.0048857073148042395}"]

Now On To Something More Useful

So let’s create a function to store skinning data per-vertex. As you may have seen with the above, that was painfully slow. If you have written any skinning tools, you know that the solution (other than learning C++) is to apply your change to all vertices at once. Below we build two lists, one of vertex names and one of weight strings, then we write the blind data for all vertices in a single call:

def storeBlindSkinning(mesh, sc):
	'''
	mesh is a skinned mesh, and sc is the skincluster affecting the mesh
	'''
	vtxList = []
	vtxWeights = []
	v = cmds.polyEvaluate(mesh, v=1)
	#the influences are the same for every vertex, query them once
	infs = cmds.skinCluster(sc, inf=1, q=1)
	for vtx in range(0, v):
		objVtx = mesh + ".vtx[" + str(vtx) + "]"
		vtxList.append(objVtx)
		vals = cmds.skinPercent(sc, objVtx, q=1, v=1)
		#build a fresh influence:weight dict for this vertex
		weightDict = {}
		for i in range(0, len(infs)):
			weightDict[infs[i]] = vals[i]
		vtxWeights.append(str(weightDict))
	#write the data for all vertices in one call, outside the loop
	cmds.polyBlindData(vtxList, id=12344, at='vertex', ldn='skinningInfo', sd=vtxWeights)

Setting all the data at once is about a third faster; however, it still takes quite some time, and you may want to take a small hit to show a progress bar (break it up into groups). On ~60,000 vertices this took 10 minutes (15 minutes doing it inside the loop). I don’t mind that hit if it means I can now detach/alter/slice my mesh without losing skinning data. You can even extract faces, and the new vertices created will get the same blind data as their originals (one becomes two).
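Going the other way is just as easy. Here’s a minimal sketch of restoring the weights (applyBlindSkinning is my name for it, and it assumes the template and ID from above):

import ast
import maya.cmds as cmds

def applyBlindSkinning(mesh, sc):
	'''
	mesh is a mesh carrying 'skinningInfo' blind data, sc is the
	skinCluster to copy the stored weights back onto
	'''
	v = cmds.polyEvaluate(mesh, v=1)
	for vtx in range(0, v):
		objVtx = mesh + ".vtx[" + str(vtx) + "]"
		#returns [component string, stored dict string]
		data = cmds.polyQueryBlindData(objVtx, id=12344, showComp=1)
		if not data: continue
		#turn the stored string back into a dict
		weightDict = ast.literal_eval(data[1])
		#apply all influence/weight pairs in one call
		cmds.skinPercent(sc, objVtx, transformValue=weightDict.items())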

As always, the C++ API is much faster: my colleague Bogdan speed-tested the storeBlindSkinning function above, and 50,000 vertices took only a few milliseconds, compared to 10 minutes in pythonland.

Remember, there are other ways to store skinning data, using UVs, position, vertex color channels, etc. I just wanted to introduce people to blind data in Maya and show a potential use.

posted by admin at 4:04 AM  

Sunday, January 6, 2013

My 2012 in Review

2012 blew by incredibly fast. If I had to sum the year up into three categories it would be:

  • RYSE / CINEBOX: At Crytek, for the first time ever I broke away from the Crysis franchise and have been working on Ryse with my old friend Hanno Hagedorn, who returned to Crytek this year. In my ’20% time’ I am still on CINEBOX, which saw some of its first production use this year on some high-profile film and game projects, but I can’t say much more than that.
  • MAYA: New project, new team, and new software/pipeline! I mentioned this in my SIGGRAPH class, which cleared PR, so no issue mentioning it here: Ryse is the first Maya project at Crytek. In the past year, with the help of Crytek UK, we have been building up a Maya pipeline from scratch. The 3dsMax pipeline was ~10 years old and had a lot of legacy stuff.  Any Maya studio I have worked at always had a legacy pipeline, and I had a mental checklist of things we all would have done differently ‘if we could rewrite everything’. It has been really fun working with the Ryse TechArt team to build this pipeline, we have some really great guys (and gal!), but I won’t out them here. (to the dismay of recruiters everywhere)
  • DIVING: This year I spent a lot more time in the water! Not only diving, but I stepped up my photography; nothing raises your pulse like changing lenses out over a 200m dropoff! Colleen and I were lucky enough to get to Indonesia, Malaysia, and Egypt. (video, photos) We stayed on an old oil rig off Sipadan where we met a group of great photographers, one of which was Sin Hwa, who took that photo of me above.
posted by admin at 2:27 AM  

Monday, July 16, 2012

CINEBOX SIGGRAPH Talk and Studio Workshops

CRYENGINE CINEBOX

I am giving a talk at SIGGRAPH 2012 entitled ‘Film/Game Convergence: What’s Taking So Long?’ where I discuss the inherent differences between games and film and go over a few case studies of projects that attempted to use a game engine for film previs. I also talk a bit about the development of our CINEBOX application, the decisions we had to make, and how we dealt with many of the issues previous attempts have run into.

STUDIO WORKSHOPS

I will be giving two more Studio Workshops this year; the first is a followup to last year’s Introduction to Python, entitled ‘Python Scripting in Maya’. The other workshop is ‘Building a Game Level’, which is the same basic workshop I gave last year, where I show people how to make a playable game level in CryEngine in an hour. Studio Workshops are hands-on sessions where each attendee has a computer and follows along with the instructor. It’s a great chance for people of all ages to learn new things.

posted by admin at 8:06 PM  

Thursday, July 12, 2012

Not Dead Yet

Click to Enlarge

I have been really busy on Ryse, but this past weekend I found some time to wrap the XNA import methods I had written in a UI. I will post it soon in an un-padded form for the people asking for it.

For those who don’t know what I am referring to, a while back I wrote some python to import XNA character files (from retail discs) into Maya as textured characters with original joint names, skinning, etc. I hit some snags on the UV, texturing, and then viewport 2.0 stuff. It’s really great to see topology, bind pose, weighting, joint layout, etc.. of your favorite characters. Great reference!

I would also like to make a post about Viewport 2.0 in the next week or so; that whole system is such a complete piece of frustrating garbage, hopefully you can benefit from my aimless bumping into walls in the darkness. Anyway, I’ve got to start ramping up for SIGGRAPH, so that might have to wait.

posted by admin at 1:36 AM  

Wednesday, April 25, 2012

RigPorn: Halo4 Skeleton and Loco Debug

Found a screenshot of 343's in-game locomotion debug for anyone interested (click to enlarge)

posted by admin at 12:47 AM  

Saturday, April 21, 2012

Crytek Cinema Sandbox, FMX Talk

I can finally talk about something I have been working on for the past two years. One of the reasons I returned to Crytek was to push the use of game engines in linear content creation like film and television. On Avatar I saw how much time and effort went into layout, blocking, virtual sets, etc. The tools were archaic, and the feedback loop was abysmal at times. In games we have to lay out massive levels that people can roam through for 8-15 hours or more, and CryEngine’s tools are some of the best for that.

I have been working as Product Manager with a small team of great guys, where I basically define the goals and backlog. It’s thrilling to finally see things like Catmull-Clark subd in runtime, multi-channel EXR output, and Alembic support. It’s been really fun to define what the product is and prioritize features largely without external dependencies or politics; I thank Crytek for trusting me to helm such a project.

We had a live demo kiosk at GDC; check out the Cinema Sandbox Website for more info.

I will be speaking at FMX about CineBox and the whole idea of using game engines for previs and virtual production: The Long Road to Film / Game Convergence

posted by admin at 12:35 PM  

Saturday, April 21, 2012

Maya: Walking the Line

I am still finding my feet in Maya. On my project, some files have grown to 800mb in size; things get corrupt, and hand-editing MAs is common. I am really learning some of the internals.

In the past week I have had to do a lot of timeline walking to switch coord spaces and get baked animations into and out of hierarchies. In 3dsMax you can do a loop and evaluate a node ‘at time i’, and there is no redraw or anything. I didn’t know how to do this in Maya.

I previously did this by looping cmds.currentTime(i) and ‘walking the timeline’; however, you can set the time node directly, like so: cmds.setAttr("time1.outTime", int(i))

Unparenting a child with keyed compensation (1200 frames)
10.0299999714 sec – currentTime
2.02 sec – setAttr

There are some caveats: whereas in a currentTime loop you can just call cmds.setKeyframe(node), I now have to call cmds.setKeyframe(node, time=i). But when grabbing a matrix, I don’t need to pass the time and it still works; I don’t think you can pass it anyway, I guess xform gets the time from the time node.

Here’s a sample loop that makes a locator and copies a node’s animation to world space:

import maya.cmds as cmds

#wrapped in a function; the name is arbitrary, feed in node, start, end
def copyWorldAnim(node, start=None, end=None):
	#default to the playback range if no start/end passed in
	if not start: start = int(cmds.playbackOptions(minTime=1, q=1))
	if not end: end = int(cmds.playbackOptions(maxTime=1, q=1))
	loc = cmds.spaceLocator(name='parentAlignHelper')
	for i in range(start, (end+1)):
		#set the time node directly; no viewport redraw
		cmds.setAttr("time1.outTime", int(i))
		#grab the node's world-space matrix at this time
		matrix = cmds.xform(node, q=1, ws=1, m=1)
		cmds.xform(loc, ws=1, m=matrix)
		cmds.setKeyframe(loc, time=i)
	return loc
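For example, to bake a node’s world-space motion across the playback range (copyWorldAnim is just what I named the wrapper above):

#'pCube1' is a stand-in for any animated node
loc = copyWorldAnim('pCube1')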
posted by admin at 11:44 AM  

Tuesday, October 25, 2011

Quick Note About Range(), Modulus, and Step

Maybe it’s me, but I often find myself parsing weird ascii text files from others. Sometimes the authors knew what the data was, so there’s no real markup. Take this joint list for example:

143 # bones
root ground
-1
0 0 0
root hips
0
0 0.9512207 6E-08
spine 1
1
4E-08 0.9522207 1.4E-07
spine 2
2
3E-07 1.0324 8.3E-07
spine 3
3
5.6E-07 1.11357 1.53E-06
spine 4
4
8.2E-07 1.194749 2.22E-06
head neck lower

So the first line is the number of joints, then the file proceeds in three-line intervals from the root outwards: joint name, parent index, position. I used to parse this with a pretty obtuse loop using the modulus operator. Basically, modulus is the remainder left over after division: X%Y gives you the remainder of X divided by Y. Here’s an example:

for i in range(0,20+1):
	if i%2 == 0: print i
#>> 0
#>> 2
#>> 4
#>> 6
#>> 8
#>> 10
#>> ...

The smart guys out there see where this is going… I never knew range() had a ‘step’ argument. (Or I believe I did, I think I actually had this epiphany maybe two years ago, but my memory is that bad.) Here’s a quick demo of step:
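print range(0, 21, 2)
#>>[0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

So parsing the above is as simple as this: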

jointScale = 1.0
#'jointList.txt' is a stand-in path for the ascii file above
lines = open('jointList.txt', 'r').readlines()
#the first line is the joint count: '143 # bones'
numJnts = int(lines[0].split(' ')[0])
jnts = []
#step through the file three lines at a time
for i in range(1, numJnts*3+1, 3):
	jnt = lines[i].strip()
	parent = int(lines[i+1].strip())
	posSplit = lines[i+2].strip().split(' ')
	pos = (float(posSplit[0])*jointScale, \
	float(posSplit[1])*jointScale, float(posSplit[2])*jointScale)
	jnts.append([jnt, parent, pos])

Thanks to phuuchai on #python (efnet) for nudging me to RTFM!

posted by admin at 1:42 AM  

Wednesday, October 12, 2011

SIGGRAPH 2011: Intro To Python Course

I gave a workshop/talk at SIGGRAPH geared toward introducing people to Python. There were ~25 people on PCs following along, and awkwardly enough, many more than that standing and watching. I prefaced my talk with the fact that I am self-taught and by no means an expert. That said, I have created many python tools people use every day at industry-leading companies.

Starting from zero, in the next hour I aimed to not only introduce them to Python, but get them doing cool, usable things like:

  • Iterating through batches/lists
  • Reading / writing data to excel files
  • Wrangling data from one format to another in order to create a ‘tag cloud’

Many people have asked for the notes, but I only had rough notes. I love Python, and I work with this stuff every day, so I have gone back and fleshed out some of what I talked about. This tutorial has a lot less of the general chit-chat and information; I apologize for that.

Installation / Environment Check


Let’s check that you have the tools properly installed: open the command prompt and type ‘python’.
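You should see something like this (the exact version and build details will differ):

C:\>python
Python 2.7.3 (default, ...) on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>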

So Python is correctly installed. For the following, you can either follow along in the cmd window (more difficult) or in IDLE, the IDE that Python ships with (easier). IDLE can be found by typing IDLE into the start menu:

Variables


Variables are pieces of information you store in memory. I will talk a bit about the different types of variables.

Strings

Strings are pieces of text. I assume you know that, so let’s just go over some quick things:

string = 'this is a string'
print string
#>>this is a string
num = '3.1415'
print num
#>>3.1415

One thing to keep in mind: the above is a string, not a number. You can see this by:

print num + 2
#>>Traceback (most recent call last):
#>>  File "basics_variables.py", line 5, in
#>>    print num + 2
#>>TypeError: cannot concatenate 'str' and 'int' objects

Python is telling you that you cannot add a number to a string of text. It does not know that ‘3.1415’ is a number. So let’s convert it to a number; this is called ‘casting’, and we will ‘cast’ the string to a float and back:

print float(num) + 2
#>>5.1415
print str(float(num) + 2) + ' addme'
#>>5.1415 addme

Lists

Lists are the simplest way to store pieces of data. Let’s make one by breaking up a string:

txt = 'jan tony senta michael brendon phillip jonathon mark'
names = txt.split(' ')
print names
#>>['jan', 'tony', 'senta', 'michael', 'brendon', 'phillip', 'jonathon', 'mark']
for item in names: print item
#>>jan
#>>tony
#>>senta
#>>michael
...

Split breaks up a string into pieces. You tell it what to break on; above, I told it to break on spaces: txt.split(' '). So all the people are stored in a list, which is like an Array or Collection in some other languages.
You can call up an item by its number, starting with zero:

print names[0], names[5]
#>>jan phillip

TIP: the [-1] index will return the last item in a list; here’s a quick way to get a file name from a path:

path = 'D:\\data\\dx11_PC_(110)_05_09\\Tools\\CryMaxInstaller.exe'
print path.split('\\')[-1]
#>>CryMaxInstaller.exe
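The os module will also do this for you; on Windows, os.path.basename returns the last piece of a path:

import os
print os.path.basename(path)
#>>CryMaxInstaller.exe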

Dictionaries

These store keys, and the keys reference different values. Let’s make one:

dict = {'sascha':'tech artist', 'harry': 142.1, 'sean':False}
print dict['sean']
#>>False

So this is good when you know which key you want, but often you want to walk through everything in the dictionary. Here’s how to do that using .keys():

dict = {'sascha':'tech artist', 'harry': 142.1, 'sean':False}
for key in dict.keys(): print key, 'is', dict[key]
#>>sean is False
#>>sascha is tech artist
#>>harry is 142.1
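Dictionaries also have .items(), which hands you key and value together:

for key, value in dict.items(): print key, 'is', value
#>>sean is False
#>>sascha is tech artist
#>>harry is 142.1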

So, dictionaries are a good way to store simple relationships of key and value pairs. In case you hadn’t noticed, I used some ‘floats’ and ‘ints’ above. A float is a number with a decimal, like 3.1415, and an ‘int’ is a whole number, like 10.

Creating Methods (Functions)


A method or function is like a little tool that you make. These building blocks work together to make your program.

Let’s say you have to do something many times: you want to re-use this code and not copy/paste it all over. Let’s use the names example above and make a function that takes a big string of names and returns an ordered list:

def myFunc(input):
	people = input.split(' ')
	people = sorted(people)
	return people
txt = 'jan tony senta michael brendon phillip jonathon mark'
orderedList = myFunc(txt)
print orderedList
#>>['brendon', 'jan', 'jonathon', 'mark', 'michael', 'phillip', 'senta', 'tony']

Basic Example: Create A Tag Cloud From an Excel Document


So we have an excel sheet, and we want to turn it into a hip ‘tag cloud’ to get people’s attention.
If we go to http://www.wordle.net/ you will see that, in order to create a tag cloud, we need to feed it each sentence multiple times, with a tilde between the words of the sentence. We can automate this with Python!

First, download the excel sheet from me here: [info.csv] The CSV filetype is a great way to read/write simple docs that you can give to others; they load in excel easily.

file = 'C:\\Users\\chris\\Desktop\\intro_to_python\\info.csv'
f = open(file, 'r')
lines = f.readlines()
f.close()
print lines
#>> ['always late to work,13\n', 'does not respect others,1\n', 'does not check work properly,5\n', 'does not plan properly,4\n', 'ignores standards/conventions,3\n']

‘\n’ is a line break character; it means ‘new line’. We want to get rid of that, and we want to store just the items and how many times each was listed.

file = 'C:\\Users\\chris\\Desktop\\intro_to_python\\info.csv'
f = open(file, 'r')
lines = f.readlines()
f.close()
dict = {}
for line in lines:
	split = line.strip().replace(' ','~').split(',')
	dict[split[0]] = int(split[1])
print dict
#>>{'ignores~standards/conventions': 3, 'does~not~respect~others': 1, 'does~not~plan~properly': 4, 'does~not~check~work~properly': 5, 'always~late~to~work': 13}

Now we have the data in memory in an easily readable way; let’s write it out to disk.

output = ''
for key in dict.keys():
	for i in range(0,dict[key]): output += (key + '\n')
f = open('C:\\Users\\chris\\Desktop\\intro_to_python\\test.txt', 'w')
f.write(output)
f.close()
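The first lines of test.txt now look like this; each item appears once per count (the order may vary, since dictionaries are unordered):

ignores~standards/conventions
ignores~standards/conventions
ignores~standards/conventions
does~not~respect~others
does~not~plan~properly
...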


There we go. In one hour you have learned to:

  • Read and write excel files
  • Iterate over data
  • Convert data sets into new formats
  • Write, read and alter ascii files

If you have any questions, or I left out any parts of the presentation you liked, reply here and I will get back to you.

posted by admin at 5:12 AM  