Stumbling Toward 'Awesomeness'

A Technical Art Blog

Monday, November 13, 2017

The Mighty Message Attribute

I recently had a discussion about storing relationships in Maya, and realized that the message attribute isn’t the universally cherished thing I had assumed it was. In previous posts entitled ‘Don’t Use String Paths’ and ‘Why Storing String Refs is Dangerous and Irresponsible’ I outlined why string paths are the devil’s work, but in those posts I talked about the API, PyMEL and message attrs. I didn’t really focus on why message attrs are so important: they serialize node relationships.

For quite some time I have advocated storing relationships with message attrs. At the Maya SIGGRAPH User Event, when they asked me to speak about our modular rigging system, I kind of detailed how we leveraged those at Crytek in CryPed.

msg

I am not quite sure when I started using message attrs to convey relationships, I’m no brainiac, it could have been after seeing this 2003 post from Jason Schleifer on CGTalk:

image

Or maybe I read it in the Maya docs (unlikely):

“Message attributes only exist to formally declare relationships between nodes. By connecting two nodes via message attributes, a relationship between those nodes is expressed.”

So why does Maya use this, and why should I?

As you read in the docs above, when Maya wants to declare a relationship between a camera and an image plane, it does so with a message attribute that connects them. This is important because the bond won’t be broken if the plane or its parent is renamed. As soon as you store the string path to a node in the DAG, that data is already stale; it’s no longer valid. When you query a message attribute, Maya returns the item, and its DAG path will be valid regardless of hierarchy or name changes.
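If you want to declare a relationship like that yourself, it only takes a couple of commands (the node names here are just for illustration):

    import maya.cmds as cmds

    # Add a message attr to the image plane and wire the camera's built-in
    # '.message' plug into it; no string paths are stored anywhere.
    cmds.addAttr('myImagePlane', longName='linkedCamera', attributeType='message')
    cmds.connectAttr('myCamera.message', 'myImagePlane.linkedCamera')

    # Later, resolve the relationship live; renaming or re-parenting the camera
    # doesn't matter, Maya returns whatever node is on the other end.
    linked_camera = cmds.listConnections('myImagePlane.linkedCamera')[0]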

Jason’s example above is maybe the simplest; in my image (a decade later) you can see messages declaring many relationships, abstracting the character at three main levels of interface: Character, CharacterPart and RigPart. I talked about the basic ideas here in a 2013 post about object-oriented Python in Maya.

Though Rob vigorously disagreed in the comments there, I am still doing this today. Here’s an example from the facial code we released in Epic’s ARTv1 rigging tools some time ago. The face is abstracted on two levels, the ‘face’ and the ‘mask’; here I am only displaying the message connecting them:

wiring

By using properties as described in that previous blog post, below I am accessing the system, creating a face instance, walking down the message connection to the mask node, and then asking it for the attach locations. It’s giving me these transforms, by querying the DAG, live:

msg

So, that property looks like this:

    @property
    def attachLocations(self):
        # Walk the live message connections; the returned names are valid
        # DAG paths no matter how the locators have been renamed or re-parented.
        return cmds.listConnections(self.node + '.attachLocations')

    @attachLocations.setter
    def attachLocations(self, locs):
        # Wire each attach locator back to the mask node via message attrs.
        for loc in locs:
            utils.msgConnect(self.node + '.attachLocations', loc + '.maskNode')

Setting the attach locations through python would look like this, and it would rebuild the message attrs:

face.mask.attachLocations = ['myLoc1', 'myLoc2']
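The utils.msgConnect call in the setter above is a helper from the toolset and isn’t shown in the post; here’s a rough sketch of what such a helper might look like (my assumption, not the ARTv1 source):

    import maya.cmds as cmds

    def msgConnect(source_attr, dest_attr):
        # Hypothetical helper: make sure both message attrs exist, then connect
        # them. A guess at the behavior, not the shipped ARTv1 implementation.
        for attr in (source_attr, dest_attr):
            node, attr_name = attr.split('.', 1)
            if not cmds.attributeQuery(attr_name, node=node, exists=True):
                cmds.addAttr(node, longName=attr_name, attributeType='message')
        cmds.connectAttr(source_attr, dest_attr, force=True)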

Working like this, you have to think hard about what a rigger would want to access at what level and expose what’s needed. But in the end, as you see, through python, you have access to everything you need, and none of the data is stale.

How and when to use strings

There are times when the only way you can store a relationship is by using a string in some fashion. Here are some situations and how I have handled them in the past, feel free to leave a comment with your experiences.

  • Maya can’t store a relationship to something that doesn’t exist (it has been deleted), and it can’t store a relationship to a node in a file that isn’t open. In these situations, instead of storing the name in an attr, I stamp the two nodes with a matching string attr to store the relationship, then query the world for another node with a matching stamped attr.
  • Many times you need to feed your class an initial interface node to build or wrap. Instead of feeding it a string name, you can query the world for a node type; in the Ryse example above, the rigging and animation tools could query cmds.ls(type='CryCharacter'), which would return all characters in the scene. This meant all rigging and animation tools needed a common ‘working character’ combobox at the top to define the character the tool is operating on. If you don’t have a custom node type, you can use a special string attr to query for.
  • Sometimes you’re doing something like saving joint names to serialize skinning data. You can use message attrs to play it safe here as well. Some pseudocode: for each character in the scene, if the character identifier matches the file on disk, then for each of character.meshes that is in the file, skin it; for each of character.joints that is in the file, add it to the skincluster, and so on. Here you’re validating all your serialized string data against your class, which is traversing the DAG live.
  • Message attrs can get SLOW if you’re tracking thousands of items; you should only be tracking important things you would want later. In CryPed, when we wanted to track all the nodes that were created when a module was built, we would stamp them all with a string attr named after the function that built the module. To track this kind of data, HarryZ at Crytek had the pragmatic idea of just doing a global ls of the world when a buildout started and another at the end, and booleaning them out; this caught all the intermediate and utility nodes and everything else generated by the rigging code (sketched below).
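Here’s a minimal sketch of that before/after diff idea (the idea only, not the CryPed code):

    import maya.cmds as cmds

    def build_and_track(build_fn, *args, **kwargs):
        # Run a module build function and return everything it created,
        # by diffing the scene contents before and after.
        before = set(cmds.ls(long=True))
        result = build_fn(*args, **kwargs)
        created = set(cmds.ls(long=True)) - before
        return result, created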
posted by Chris at 6:10 AM  

Tuesday, August 26, 2014

Multi-Resolution Facial Rigging

At SIGGRAPH we discussed a bit about our facial pipeline that we haven’t talked about before. Namely, facial LODs and multi-platform facial rigging.

I would like to start by saying that we spent a _LOT_ of time thinking about facial levels of detail on Ryse, and put a lot of effort into the area. I know this is a long post, but it’s an important one.

run_on_brian

Lowest Common Denominator

As the ‘next generation’ seems to be largely defined by multi-platform titles, it seems valuable to focus on ways to increase fidelity on next-generation hardware while still being able to target older hardware specs. That said, I have yet to see a pipeline that does this. Most next-gen games have skeletons and animations limited by the lowest common denominator of the last generation, often the PlayStation 3.

When you wonder why your awesome next gen game doesn’t seem to have character models and animation like next-gen only titles, this is why.

It’s very easy to increase texture resolution by having a pipeline where you author high and bake to lower targets. It’s more complicated to author meshes high and publish to lower targets; we did this on Crysis 1 and 2, where high-end PC saw higher-resolution character meshes than Xbox 360. Hardest of all is making rigs, deformers, and animations for a high-spec hardware target and creating a process to publish lower-fidelity versions. No one wants different character skeletons on each hardware platform.

facial_complexity

You Deserve an Explanation

When we released the specs of our faces, people were understandably a bit taken aback. Why on earth would you need 250 blendshapes if you have 260 joints? The image above is actually a slide from our asset creation SIGGRAPH course that resonated very well with the audience.

Let’s take a look at some goals:

  1. Cut-scene fidelity in gameplay at any time (no cut-scene rigs)
  2. Up to 70 characters on screen
  3. Able to run on multiple hardware specs

The only way to achieve the first two is through a very aggressive and granular level of detail (LOD) scheme. Once that LOD system is in place, the third item comes for free, as it did on our previous titles. However, while we had always LODed meshes and materials, we had never LODed rigs.

On a feature film, we wouldn’t use joints, we would have a largely blendshape-only face.

But this doesn’t LOD well; we need to be able to strip out facial complexity in the distance and on other platforms.

Facial Level of Detail

So to achieve these goals, we must aggressively LOD our character faces.

Let’s generate some new goals:

  • Improve LOD system to allow the swapping or culling of skinned meshes per-mesh, each at hand-tailored distances per-character instance
  • Not only swap meshes, but skinning algorithms, materials, cull blendshapes, etc..
  • One skeleton – all levels of detail stored in one nested hierarchy, disable/reveal joints at different LOD levels, as I mention above, no one wants multiple skeletons
  • One animation set – drives all layers of detail because it’s one hierarchy, only the enabled joints receive animation
  • All facial animations shareable between characters
  • Faces snapped onto bodies at runtime – “Cry parent constraint” of sorts snaps head, neck, spine4, clavs, and upper arms of facial rig to body, allowing dynamic LODing of face irrespective of body.

LOD_hierarchy

One Hierarchy to Rule them All

Before going into the meshes, skinning algorithms, culling, etc., it’s important to understand the hierarchy of the face. At any given mesh LOD level, there are many joints that are not skinned. Above you see three layers of joints: 9 at LOD0, 3 at LOD1, and 1 at LOD2.

To use a single hierarchy, but have it drive meshes at different levels, you need to accomplish the following:

  • Make sure you have three layers that can drive different facial LODs; we had something like 260/70/15 on Ryse.
  • Each layer must be driven and able to deform that LOD alone, meaning that when creating rig logic you must start from the highest LOD and move down the chain. The LOD0 joints above would only be responsible for skinning the details of the face at LOD0; their gross movement comes from their parent, LOD1. (A toy sketch of this nesting follows below.)
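As a toy illustration of that nesting (just the structure, nothing like the real Ryse build code): one LOD2 joint drives three LOD1 children, each of which drives three LOD0 leaf joints, and every joint is tagged with its LOD level so tools can enable or ignore it.

    import maya.cmds as cmds

    def tag_lod(joint, level):
        # Stamp a joint with the LOD layer it belongs to.
        cmds.addAttr(joint, longName='lodLevel', attributeType='short', defaultValue=level)

    cmds.select(clear=True)
    jaw_lod2 = cmds.joint(name='jaw_LOD2')
    tag_lod(jaw_lod2, 2)

    for i in range(3):
        cmds.select(jaw_lod2)
        lod1 = cmds.joint(name='jaw_%d_LOD1' % i)
        tag_lod(lod1, 1)
        for j in range(3):
            cmds.select(lod1)
            lod0 = cmds.joint(name='jaw_%d_%d_LOD0' % (i, j))
            tag_lod(lod0, 0)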

Here you can see the Marius example video from our slides. Notice the ORANGE joints are responsible for gross movement and the YELLOW or GREEN leaf joints just add detail.

jaw_drop_skel

 

Why blendshapes? Isn’t 260 joints enough?

The facial hierarchy and rig are consistent between all characters. The rig logic that drives those joints is changed and tweaked, and the skinning is tweaked, but no two faces are identical. The blendshapes serve two main purposes:

1) Get the joint rig back onto the scan. Whatever the delta is between the joint rig and the scan data associated with that solved pose from the headcam data, the blendshape bridges it. This means the fat around Nero’s neck, the bags under his eyes, his eyebrow region, etc.

2) Add volume where it’s lost due to joint skinning. Areas like the lips and the cheeks, and rig functions like lips-together and sticky lips, require blendshapes.

nero_corectives

Look at the image above: there just aren’t enough joints in the brow to catch that micro-expression on Nero’s face. It comes through with the blendshape, and he goes from looking like you kicked his dog to his accusatory surprise when he figures out that you are Damocles.

A Look Under the Hood: Ryse Facial LODing

Thanks to the hard work of graphics engineer Jerome Charles we were able to granularly LOD our faces. These values are from your buddy Vitallion, who as a hero character could be a bit less aggressive. Many of the barbarians you fight en masse blew through all their blendshapes at 2m, not 4m.

Distance   Assets / Technologies (LOD)
0-4m       CPU skinning, 8 influences, 260 joints, 230 blendshapes, tangent update, 5k tris across multiple meshes
4-7m       CPU skinning, 8 influences, 260 joints, 3-5k tris across multiple meshes with small face parts culled
7-10m      GPU skinning, 4 influences, 70 joints, 2k mesh with integrated eyes
10m+       GPU skinning, 4 influences, <10 joints, <1k mesh

 

Here’s a different table showing the face mesh parts that we culled and when:

Distance Face parts
4m Eyebrow meshes replaced, baked into facial texture
3m Eyelash geometry culled
3m Eye AO ‘overlay’ layer culled
4m Eye balls removed, replaced with baked in eyes in head mesh
2m Eye ‘water’ meniscus culled
3m Eye tearduct culled
3m Teeth swapped for built-in mesh
3m Tongue swapped for built-in mesh

Why isn’t this standard?

Because it’s very difficult and very complicated; there aren’t many people out there who can pull something like this off. On Ryse we partnered with my friend Vlad at 3Lateral; after 4 months working on the initial Marius facial prototype, he and his team were able to deliver 23 more facial rigs at the same fidelity in just under three months!

But also, there’s the whole discussion about whether the time and effort spent on that last 5% really pays off in the end. Why not just use PS3 facial rigs on all platforms and spend a little more on marketing? It’s happening! And those guys probably aren’t going bankrupt any time soon..  ¬.¬

I am insanely proud of what the team accomplished on Ryse. Facial rigging is nothing without a great bunch of artists, programmers, animators, etc. Here’s some good moments where the performances really come through, these are all the in-game meshes and rigs:

DISCLAIMER: All of the info above and more is publicly available in our SIGGRAPH 2014 course notes.

posted by Chris at 4:40 AM  

Sunday, August 10, 2014

RYSE AT SIGGRAPH 2014

ryse_sigg

Crytek has won the SIGGRAPH 2014 award for ‘Best Real-Time Graphics’ with Ryse: Son of Rome, check it out in the Electronic Theater or Computer Animation Festival this week at SIGGRAPH.

We are also giving multiple talks:

I will be speaking in the asset production talk, along with Sascha Herfort and Lars Martinsson. It’s also the first course we have done at Crytek where the entire course is devoted to one of our projects, and we have 50+ pages of course notes going into the ACM digital library.

posted by Chris at 12:54 AM  

Friday, January 18, 2013

Moving to ‘Physically-Based’ Shading

damo_engine

At the SIGGRAPH Autodesk User Group we spoke a lot about our character technology and our switch to Maya. One area that we haven’t spoken so much about is the next-gen updates to our shading and material pipeline; however, Nicolas and I have an interview out in Making Games where we talk about that in detail publicly for the first time, so I can mention it here. One of the reasons we have really focused on character technology is that it touches so many departments and is a very difficult issue to crack; at Crytek we have a strong history of lighting and rendering.

What is ‘Physically-Based’ Shading?

The first time I ever encountered a physically-based pipeline was when working at ILM. The guys had gotten tired of having to create different light setups and materials per shot or per sequence. Moving to a more physically-based shading model meant that we wouldn’t waste so much time re-lighting and tweaking materials, and would also get a more natural, better initial result, quicker. [Ben Snow’s 2010 PBR SIGGRAPH Course Slides]

WHAT IS MEANT BY ‘PHYSICAL’


image credit: http://myphysicswebschool.blogspot.de/

A physically-based shading model behaves much more like real-world light: one of the biggest differences is that the amount of reflected light can never be more than the incoming amount that hit the surface. Older lighting models tended to have overly bright and overly broad specular highlights; with the Lambert/Blinn-Phong model it was possible to have many situations where a material emitted more light than it received. An interesting caveat of physically-based shading is that the user no longer has control over the specular response (more under ‘Difficult Transition’ below). Because the way light behaves is much more realistic and natural, materials authored for this shading model work equally well in all lighting environments.

Geek Stuff: ‘Energy conservation’ is a term that you might often hear used in conjunction with physically-based lighting. Here’s a quote from the SIGGRAPH ’96 course notes that I always thought was a perfect explanation of reflected diffuse and specular energy:

“When light hits an object, the energy is reflected as one of two components; the specular component (the shiny highlight) and the diffuse (the color of the object). The relationship of these two components is what defines what kind of material the object is. These two kinds of energy make up the 100% of light reflected off an object. If 95% of it is diffuse energy, then the remaining 5% is specular energy. When the specularity increases, the diffuse component drops, and vice versa. A ping pong ball is considered to be a very diffuse object, with very little specularity and lots of diffuse, and a mirror is thought of as having a very high specularity, and almost no diffuse.”
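To make that split concrete, here’s the arithmetic from the quote as a tiny function (just an illustration of energy conservation, not a shading model):

    def split_reflected_energy(incoming, specular_fraction):
        # Diffuse + specular always sum to 100% of what came in: 95% diffuse
        # leaves 5% specular, and vice versa.
        specular_fraction = max(0.0, min(1.0, specular_fraction))
        specular = incoming * specular_fraction
        diffuse = incoming * (1.0 - specular_fraction)
        return diffuse, specular

    # A ping pong ball: almost all diffuse.  A mirror: almost all specular.
    print(split_reflected_energy(1.0, 0.05))  # (0.95, 0.05)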

PHYSICALLY-PLAUSIBLE

It’s important to understand that everything is a hack; whether it’s V-Ray or a game engine, we are just talking about different levels of hackery. Game engines often take the cake for approximations and hacks. One of my guys once said: ‘Some people just remove spec maps from their pipeline and all of a sudden they’re “physically-based”.’ It’s not just the way our renderers simulate light that is an approximation; it’s important to remember that we feed the shading model with physically plausible data as well. When you make an asset, you are making a material that is trying to mimic certain physical characteristics.

DIFFICULT TRANSITION

Once physics get involved, you can cheat much less, and in film we cheeeeeaaat. Big time. Ben Snow, the VFX supervisor who ushered in the change to a physically-based pipeline at ILM, was quoted in VFXPro as saying: “The move to the new [pipeline] did spark somewhat of a holy war at ILM.” I mentioned before that the artist loses control of the specular response; in general, artists don’t like losing control or adopting new ways of doing things.

WHY IT IS IMPORTANT FOR GAMES AND REAL-TIME RENDERING

Aside from the more natural lighting and rendering, in an environment where the player determines the camera, and often the lighting, it’s important that materials work under all possible lighting scenarios. As the Product Manager of Cinebox, I was constantly having our renderer compared to Mental Ray, PRMAN and others; the team added BRDF support and paved the way for physically-based rendering, which we hope to ship in 2013 with Ryse.

microcompare05

General Overview for Artists

At Crytek, we have always added great rendering features, but never really took a hard focus on consistency in shading and lighting. Like ILM in my example above, we often tweaked materials for the lighting environment they were to be placed in.

GENERAL RULES / MATERIAL TYPES

Before we start talking about the different maps and material properties, you should know that in a physically-based pipeline you will have two slightly different workflows, one for metals, and one for non-metals. This is more about creating materials that have physically plausible values.

Metals:

  • The specular color for metal should always be above sRGB 180
  • Metal can have colored specular highlights (for gold and copper for example)
  • Metal has a black or very dark diffuse color; because metals absorb all light that enters beneath the surface, they have no ‘diffuse reflection’

Non-Metals:

  • Non-metal has monochrome/gray specular color. Never use colored specular for anything except certain metals
  • The sRGB color range for most non-metal materials is usually between 40 and 60. It should never be higher than 80/80/80
  • A good clean diffuse map is required

GLOSS

gloss_chart

At Crytek, we call the map that determines the roughness the ‘gloss map’; it’s actually the inverse of roughness, but we found this easier to author. This is by far one of the most important maps, as it determines the size and intensity of specular highlights, but also the contrast of the cube map reflection, as you see above. A good detail normal map can make a surface feel like it has a certain ‘roughness’, but you should start thinking about the gloss map as adding a ‘microscale roughness’. Look above at how, as the roughness increases, so does the breadth of the specular highlight. Here is an example from our CryENGINE documentation that was written for Ryse:

(gloss examples from the CryENGINE documentation, click to enlarge)
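Since gloss here is described as simply the inverse of roughness, the relationship is a one-liner (taking that description literally; the engine’s exact remapping may differ):

    def gloss_to_roughness(gloss):
        # Gloss is authored as the inverse of roughness (per the description above).
        return 1.0 - max(0.0, min(1.0, gloss))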

DIFFUSE COLOR

Your diffuse map should be a texture with no lighting information at all. Think a light with a value of ‘100’ shining directly onto a polygon with your texture. There should be no shadow or AO information in your diffuse map. As stated above, a metal should have a completely black diffuse color.

Geek Stuff: Diffuse can also be referred to as ‘albedo’; the albedo is the measure of diffuse reflectivity. This term is primarily used to scare artists.

SPECULAR COLOR

As previously discussed, non-metals should only have monochrome/gray-scale specular color maps. Specular color is a real-world physical value and your map should be basically flat color; you should use existing values and not introduce noise or variation. The spec color map is not a place to be artistic, it stores real-world values. You can find many tables online that have plausible values for specular color; here is an example:

Material sRGB Color Linear (Blend Layer)
Water 38 38 38 0.02
Skin 51 51 51 0.03
Hair 65 65 65 0.05
Plastic / Glass (Low) 53 53 53 0.03
Plastic High 61 61 61 0.05
Glass (High) / Ruby 79 79 79 0.08
Diamond 115 115 115 0.17
Iron 196 199 199 0.57
Copper 250 209 194 N/A
Gold 255 219 145 N/A
Aluminum 245 245 247 0.91
Silver 250 247 242 N/A
If a non-metal material is not in the list, use a value between 45 and 65.
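The ‘Linear (Blend Layer)’ column lines up with the standard sRGB-to-linear transfer function, so you can sanity-check or extend the table yourself (a quick sketch):

    def srgb_to_linear(value_8bit):
        # Convert an 8-bit sRGB value to linear intensity,
        # e.g. 51 -> ~0.03 (skin), 65 -> ~0.05 (hair), 245 -> ~0.91 (aluminum).
        c = value_8bit / 255.0
        if c <= 0.04045:
            return c / 12.92
        return ((c + 0.055) / 1.055) ** 2.4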

Geek Stuff: SPECULAR IS EVERYWHERE: In 2010, John Hable did a great post showing the specular characteristics of a cotton t-shirt and other materials that you wouldn’t usually consider having specular.

EXAMPLE ASSET:

Here you can see the maps that generate this worn, oxidized lion sculpture.

rust


rust2

EXAMPLES IN AN ENVIRONMENT

640x

See above how there is no variation in the specular color maps? Notice how the copper items on the left have a black diffuse texture, and how the specular color maps are solid, flat colors.

SETTING UP PHOTOSHOP

color_settings

In order to create assets properly, we need to set up our content creation software properly, in this case Photoshop. Go to Edit > Color Settings… and set the dialog like the above. It’s important that you author textures in sRGB.

Geek Stuff: We author in sRGB because it gives us more precision in darker colors, and reduces banding artifacts. The eye has 4.5 million cones that can perceive color, but 90 million rods that perceive luminance changes. Humans are much more perceptive to contrast changes than color changes!

Taking the Leap: Tips for Leads and Directors

New technologies that require paradigm shifts in how people work or how they think about reaching an end artistic result can be difficult to integrate into a pipeline. At Crytek I am the Lead/Director in charge of the team that is making that initial shift to physically-based lighting, I also lead the reference trip, and managed the hardware requests to get key artists on calibrated wide gamut display devices. I am just saying this to put the next items in some kind of context.

QUICK FEEDBACK AND ITERATION

It’s very important that your team be able to test their assets in multiple lighting conditions. The easiest route is to make a test level where you can cycle lighting conditions from many different game levels, or sampled lighting from multiple points in the game. The default light in this level should be broad daylight IMO, as it’s the hardest to get right.

USE EXAMPLE ASSETS

I created one of the first example assets for the physically-based pipeline. It was a glass inlay table that I had at home, which had wooden, concrete (grout), metal, and multi-colored glass inlay. This asset served as a reference asset for the art team. Try to find an asset that can properly show the guys how to use gloss maps; IMO, understanding how roughness affects your asset’s surface characteristics is maybe the biggest challenge when moving to a physically-based pipeline.

TRAIN KEY PERSONNEL

As with rolling out any new feature, you should train a few technically-inclined artists to help their peers along. It’s also good to have the artists give feedback to the graphics team as they begin really cutting their teeth on the system. On Ryse, we are doing the above, but also dedicating a single technical artist to helping with environment art-related technology and profiling.

CHEAT SHEET

It’s very important to have a ‘cheat sheet’; this is a sheet we created on the Ryse team to allow an artist to use the color picker to sample common ‘plausible’ values.

SPEC_Range_new.bmp


HELP PEOPLE HELP THEMSELVES

We have created a debug view that highlights assets whose specular color is not in a physically-plausible range. We are very much in favor of making tools that help people be responsible, and that validate or highlight work that is not. We also allowed people to set solid specular values in the shader to limit memory consumption on simple assets.
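The debug view itself lives in the engine, but the same kind of check is easy to do on the tools side; here’s a rough sketch using the ranges from earlier in the post (an illustration, not our shipped validation):

    def spec_color_is_plausible(srgb):
        # Flag specular colors outside the plausible ranges described above:
        # metals sit above ~180 sRGB (colored spec allowed), non-metals are
        # monochrome and roughly 40-80 sRGB.
        r, g, b = srgb
        if min(r, g, b) >= 180:
            return True  # metal range
        monochrome = max(r, g, b) - min(r, g, b) <= 2
        return monochrome and 40 <= r <= 80

    print(spec_color_is_plausible((51, 51, 51)))   # True  (skin)
    print(spec_color_is_plausible((255, 0, 0)))    # False (colored non-metal spec)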

CALIBRATION AND REFERENCE ACQUISITION

calibrate

Above are two things that I actually carry with me everywhere I go. The X-Rite ColorChecker Passport, and the Pantone Huey Pro monitor calibration toolset. Both are very small, and can be carried in a laptop bag. I will go into reference data acquisition in another post. On Ryse we significantly upgraded our reference acquisition pipeline and scanned a lot of objects/surfaces in the field.

 

TECHNICAL IMPROVEMENTS BASED ON PRODUCTION USE

Nicolas Schulz presented many improvements made based on production use at GDC 2014. His slides are here. He details things like the importance of specular filtering to preserve highlights as objects recede into the distance, and why we decided to couple normals and roughness.

UPDATE: We’ve now shipped Ryse, and I have tried to update the post a little. I was the invited speaker at HPG 2014, where I touched on this topic a bit, and can now update this post with some details and images (see ‘Tips for Leads and Directors’). Nicolas also spoke at GDC 2014 and I have linked to his slides above. Though this post focuses on environments, in the end, with the amount of armor on characters, the PBR pipeline was really showcased everywhere. Here’s an image of multiple passes of Marius’ final armor:

marus_breackUp


posted by Chris at 7:26 PM  
