Stumbling Toward 'Awesomeness'

A Technical Art Blog

Tuesday, August 26, 2014

Multi-Resolution Facial Rigging

At SIGGRAPH we discussed a few parts of our facial pipeline that we haven’t talked about before: namely, facial LODs and multi-platform facial rigging.

I would like to start by saying that we spent a _LOT_ of time thinking about facial levels of detail on Ryse, and put a lot of effort into the area. I know this is a long post, but it’s an important one.


Lowest Common Denominator

As the ‘next generation’ seems to be largely defined by multi-platform titles, it seems valuable to focus on ways to increase fidelity on next-generation hardware while still being able to target older hardware specs. That said, I have yet to see a pipeline that does this. Most next-gen games have skeletons and animations limited by the lowest common denominator of the last generation, often the PlayStation 3.

When you wonder why your awesome next gen game doesn’t seem to have character models and animation like next-gen only titles, this is why.

It’s very easy to increase texture resolution by having a pipeline where you author high and bake to lower targets. It’s more complicated to author meshes high and publish to lower targets; we did this on Crysis 1 and 2, where high-end PC saw higher-resolution character meshes than Xbox 360. Hardest of all is making rigs, deformers, and animations for a high-spec hardware target and creating a process to publish lower-fidelity versions. No one wants to have different character skeletons on each hardware platform.


You Deserve an Explanation

When we released the specs of our faces, people understandably were a bit taken aback. Why on earth would you need 250 blendshapes if you have 260 joints? The image above is actually a slide from our SIGGRAPH asset creation course that resonated very well with the audience.

Let’s take a look at some goals:

  1. Cut-scene fidelity in gameplay at any time; no cut-scene rigs
  2. Up to 70 characters on screen
  3. Able to run on multiple hardware specs

The only way to achieve the first two is through a very aggressive and granular level of detail (LOD) scheme. Once that LOD system is in place, the third item comes for free, as it did on our previous titles. However, while we had LODed meshes and materials before, we had never LODed rigs.

On a feature film, we wouldn’t use joints, we would have a largely blendshape-only face.

But this doesn’t LOD well; we need to be able to strip out facial complexity in the distance and on other platforms.

Facial Level of Detail

So to achieve these goals, we must aggressively LOD our character faces.

Let’s generate some new goals:

  • Improve the LOD system to allow swapping or culling of skinned meshes per mesh, each at hand-tailored distances per character instance
  • Not only swap meshes, but also swap skinning algorithms and materials, cull blendshapes, etc.
  • One skeleton – all levels of detail stored in one nested hierarchy; disable/reveal joints at different LOD levels. As I mention above, no one wants multiple skeletons
  • One animation set – drives all levels of detail because it’s one hierarchy; only the enabled joints receive animation
  • All facial animations shareable between characters
  • Faces snapped onto bodies at runtime – a “Cry parent constraint” of sorts snaps the head, neck, spine4, clavicles, and upper arms of the facial rig to the body, allowing dynamic LODing of the face irrespective of the body
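To make that last point a bit more concrete, here is a loose Python sketch of the snapping idea. The joint names and the flat dict “skeleton” are invented for illustration; the real runtime works on engine-side transforms, not Python dicts.

```python
# A loose sketch of the "Cry parent constraint" idea above: the facial
# rig's overlap joints are snapped to the body skeleton at runtime, so
# the face can LOD independently of the body. Joint names and the flat
# dict "skeleton" are invented for illustration.

SNAP_JOINTS = ["spine4", "neck", "head",
               "clav_L", "clav_R", "upperarm_L", "upperarm_R"]

def snap_face_to_body(face_rig, body_rig):
    """Copy the body's world transforms onto the face rig's shared joints."""
    for joint in SNAP_JOINTS:
        face_rig[joint] = body_rig[joint]
    return face_rig
```

Because only the overlap joints are copied, the body skeleton never needs to know which facial LOD is currently active.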


One Hierarchy to Rule them All

Before going into the meshes, skinning algorithms, culling, etc., it’s important to understand the hierarchy of the face. At any given mesh LOD level, there are many joints that are not skinned. Above you see three layers of joints: 9 at LOD0, 3 at LOD1, and 1 at LOD2.

To use a single hierarchy, but have it drive meshes at different levels, you need to accomplish the following:

  • Make sure you have three layers that can drive different facial LODs; we had something like 260/70/15 joints on Ryse.
  • Each layer must be driven and able to deform that LOD alone. This means that when creating rig logic, you must start from the highest LOD and move down the chain. The LOD0 joints above are only responsible for skinning the details of the face at LOD0; their gross movement comes from their parent layer, LOD1.
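The layering rule above can be sketched in a few lines of Python. The 1-D “transform” and joint names are simplifications for illustration, not the actual rig code; the point is just that a disabled detail joint stops adding its local offset while still inheriting gross motion from enabled ancestors.

```python
# Illustrative sketch of the layered hierarchy: detail joints (LOD0)
# are children of gross-motion joints (LOD1), so culling a layer just
# stops evaluating its local offsets. Transforms are 1-D integers for
# brevity; names are invented.

class Joint:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.local = 0          # local animated offset
        self.enabled = True

    def world(self):
        base = self.parent.world() if self.parent else 0
        # a disabled joint contributes no detail; its children inherit
        # only the gross motion from enabled ancestors
        return base + (self.local if self.enabled else 0)

lod2 = Joint("head")                        # the single LOD2 joint
lod1 = Joint("brow", parent=lod2)           # gross-movement layer
lod0 = Joint("brow_detail", parent=lod1)    # detail layer

lod2.local, lod1.local, lod0.local = 10, 5, 1
full = lod0.world()       # all layers enabled: 10 + 5 + 1
lod0.enabled = False      # cull the detail layer at lower LODs
reduced = lod0.world()    # gross motion survives: 10 + 5
```

Since it’s all one hierarchy, the same animation drives every tier; disabling a layer never changes what the remaining joints do.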

Here you can see the Marius example video from our slides. Notice the ORANGE joints are responsible for gross movement and the YELLOW or GREEN leaf joints just add detail.



Why blendshapes? Isn’t 260 joints enough?

The facial hierarchy and rig is consistent between all characters. The rig logic that drives those joints is tweaked per character, as is the skinning, but no two faces are identical. The blendshapes serve two main purposes:

1) Get the joint rig back onto the scan. Whatever the delta is between the joint rig and the scan data associated with that solved pose from the headcam data, the blendshape bridges it. This means the fat around Nero’s neck, the bags under his eyes, his eyebrow region, etc.

2) Add volume where it’s lost due to joint skinning. Areas like the lips, the cheeks, rig functions like lips together, sticky lips, etc, require blendshapes.
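Purpose (1) boils down to a per-vertex delta. Here’s a hypothetical sketch with invented vertex data: the corrective shape is whatever difference remains between the scan and the joint-skinned result for that pose, layered back on top.

```python
# Minimal sketch of a corrective blendshape, assuming vertex lists of
# (x, y, z) tuples. The vertex data is invented for illustration.

def corrective_delta(scan_verts, joint_rig_verts):
    """Per-vertex delta that bridges the joint rig back to the scan."""
    return [(sx - jx, sy - jy, sz - jz)
            for (sx, sy, sz), (jx, jy, jz)
            in zip(scan_verts, joint_rig_verts)]

def apply_shape(verts, delta, weight):
    """Layer the corrective on top of the joint-skinned result."""
    return [(x + weight * dx, y + weight * dy, z + weight * dz)
            for (x, y, z), (dx, dy, dz) in zip(verts, delta)]

scan = [(0.0, 1.5, 0.0), (1.0, 0.75, 0.25)]   # solved headcam pose
rig  = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]     # joint rig alone
delta = corrective_delta(scan, rig)
```

At full weight, rig + shape reproduces the scan; at zero weight you’re back to the pure joint result, which is exactly what lets the shapes be culled at distance.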


Look at the image above: there just aren’t enough joints in the brow to catch that micro-expression on Nero’s face. It comes through with the blendshape, and he goes from looking like you kicked his dog to his accusatory surprise when he figures out that you are Damocles.

A Look Under the Hood: Ryse Facial LODing

Thanks to the hard work of graphics engineer Jerome Charles we were able to granularly LOD our faces. These values are from your buddy Vitallion; as a hero character, his LODs could be a bit less aggressive. Many of the barbarians you fight en masse blew through all their blendshapes by 2m, not 4m.

Distance   Assets / Technologies (LOD)
0-4m       CPU skinning, 8 influences, 260 joints, 230 blendshapes, tangent update, 5k tris across multiple meshes
4-7m       CPU skinning, 8 influences, 260 joints, 3-5k tris across multiple meshes with small face parts culled
7-10m      GPU skinning, 4 influences, 70 joints, 2k mesh with integrated eyes
10m+       GPU skinning, 4 influences, <10 joints, <1k mesh
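In code, picking a tier from that table is just a distance lookup. This sketch uses the bands quoted above; the structure and names are illustrative, not the actual runtime.

```python
# Hypothetical sketch: select a facial LOD tier from camera distance,
# using the Ryse-style bands described in the table above.

FACIAL_LODS = [
    (4.0,  "CPU skin, 8 inf, 260 joints, 230 blendshapes, tangent update"),
    (7.0,  "CPU skin, 8 inf, 260 joints, small face parts culled"),
    (10.0, "GPU skin, 4 inf, 70 joints, integrated eyes"),
    (float("inf"), "GPU skin, 4 inf, <10 joints"),
]

def pick_lod(distance):
    """Return the first tier whose max distance the camera is inside."""
    for max_dist, desc in FACIAL_LODS:
        if distance < max_dist:
            return desc
    return FACIAL_LODS[-1][1]
```

Per-instance tuning (a hero like Vitallion vs. a background barbarian) would amount to swapping in a different threshold list per character.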


Here’s a different table showing the face mesh parts that we culled and when:

Distance   Face parts
4m         Eyebrow meshes replaced, baked into facial texture
3m         Eyelash geometry culled
3m         Eye AO ‘overlay’ layer culled
4m         Eyeballs removed, replaced with baked-in eyes in head mesh
2m         Eye ‘water’ meniscus culled
3m         Eye tearduct culled
3m         Teeth swapped for built-in mesh
3m         Tongue swapped for built-in mesh
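Here’s the same table as data, for illustration: given a camera distance, return the face parts still drawn. The part names are paraphrased from the table; the code shape is mine, not the engine’s.

```python
# The cull table above as data (names paraphrased, distances from the
# table). A part is drawn only while the camera is closer than its
# cull distance.

CULL_DISTANCES = {
    "eye_water_meniscus": 2.0,
    "eyelashes": 3.0,
    "eye_ao_overlay": 3.0,
    "eye_tearduct": 3.0,
    "teeth_hires": 3.0,
    "tongue_hires": 3.0,
    "eyebrow_meshes": 4.0,
    "eyeballs": 4.0,
}

def parts_drawn(distance):
    """Face parts still rendered at the given camera distance."""
    return sorted(p for p, d in CULL_DISTANCES.items() if distance < d)
```

Past 4m everything here is gone or baked in, which is why the swap to the integrated-eye head mesh happens right around that band.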

Why isn’t this standard?

Because it’s very difficult and very complicated; there aren’t many people who can pull something like this off. On Ryse we partnered with my friend Vlad at 3Lateral; after 4 months working on the initial Marius facial prototype, he and his team were able to deliver 23 more facial rigs at the same fidelity in just under three months!

But also, there’s the whole discussion about whether the time and effort spent on that last 5% really pays off in the end. Why not just use PS3 facial rigs on all platforms and spend a little more on marketing? It’s happening! And those guys probably aren’t going bankrupt any time soon..  ¬.¬

I am insanely proud of what the team accomplished on Ryse. Facial rigging is nothing without a great bunch of artists, programmers, animators, etc. Here are some good moments where the performances really come through; these are all the in-game meshes and rigs:

DISCLAIMER: All of the info above and more is publicly available in our SIGGRAPH 2014 course notes.

posted by Chris at 4:40 AM  


  1. A nice digest of the feat you guys pulled off here. The complexity you have outlined is most definitely why this isn’t common practice, and that I will agree with. Kudos to you and your team at Crytek and the 3Lateral folks. The result is of course quite impressive and for sure something to be proud of.

    I do still chuckle when I see “Faces” slide, one day I won’t.

    Comment by Randall Hess — 2014/08/26 @ 5:44 AM

  2. That is fascinating! Very realistic facial animation.

    Comment by Strob — 2014/08/27 @ 10:57 PM

  3. This is some really amazing work, Chris. Congrats to you and the rest of the team. And thank you for taking all of the time to put together this thorough explanation of it!

    Comment by Ben Cloward — 2014/08/28 @ 2:05 AM

  4. Great work on the facial rig but I don’t think the 5% extra spent on that caused Crytek’s problem. Ryse was a stunning game, but very repetitive gameplay wise.

    Comment by Johnson — 2014/08/28 @ 6:36 AM

  5. @Johnson – I didn’t mean 5% resources, I meant the last 5% in quality. Sorry if I was unclear.

    Comment by admin — 2014/08/28 @ 10:14 AM

  6. […] […]

    Pingback by Steven Bender » RYSE Facial Technical Animation Breakdown from Chris Evans — 2014/08/28 @ 10:14 AM

  7. I would imagine the 5% extra quality took more than 5% extra on the resources ;]

    Comment by ëRiC — 2014/08/28 @ 12:18 PM

  8. It’s incredible what you guys have achieved with Ryse! And thanks for sharing.

    Is the head geo with eyes, gums, tongue etc “only” 5k triangles all in all at lod0?

    May I ask how many “poses” you have in the rig in total? Is it the same number as you have blendshapes or do you have some poses that are bones only?

    If performance was a non-issue would you still vouch for the hybrid joint/blendshape setup or would you go the blendshape only route? I’m a little curious as to why blendshapes wouldn’t scale well, as you decrease the mesh resolution the blendshapes would become linearly cheaper and many shapes could probably be removed completely. Is it because of memory?

    Comment by Nils Lerin — 2014/09/01 @ 7:45 PM

  9. Very interesting article.

    Am I correct in interpreting from the table that at close range, facial animation is calculated entirely on the CPU?

    Also did you ever consider doing the morph target animation on the GPU? If so what made you decide to go CPU only.

    Comment by Tim — 2016/01/29 @ 12:16 PM

  10. This was discussed in the talk. We were GPU bound day one. From the start of the project we looked at doing the new deformers on CPU. This includes the runtime wrap deformer for cloth, the >4 skin influences, and the blendshapes. For Ryse 2 we were looking to migrate the blendshape code to the GPU, but alas, Ryse 2 was not meant to be and the team of talented Ryse developers are largely scattered around the world working at different studios.

    Comment by admin — 2016/02/16 @ 9:55 PM

  11. […] “No one wants to have different character skeletons on each hardware platform.” […]

    Pingback by Multi-Resolution Facial Rigging - GAME ANIM — 2016/02/21 @ 6:36 PM

  12. Hi Chris — could you possibly expand on the “Tangent update” a bit more? Is that a normal map per blendshape corrective? Or is it an engine side magic thing?

    Comment by John — 2017/05/25 @ 12:52 PM

  13. I am at Epic now, we recently added this to Unreal Engine 4. Basically, when meshes deform, either by joint translation or by blendshape, the mesh tangents do not update in most game engines. In Maya you can enable your mesh tangents to view them, they should deform with the mesh.

    You have seen this if you ever tried to skin a parachute or something, or you open a mouth and the corners still shade dark. The mesh has deformed, but the tangents, or surface shading is still unchanged from before the deformation.

    Comment by Chris — 2017/06/06 @ 9:48 PM

  14. Thanks, Chris! Just gave it a shot in 4.16 and it works really, really well. Helped fix some nasty correctives. Found a funky UV seam problem with it, and bugged that in the Answers portal.

    Hope all is well. Cheers!

    Comment by John — 2017/06/07 @ 5:49 PM
