At SIGGRAPH we discussed parts of our facial pipeline that we hadn’t talked about before: namely, facial LODs and multi-platform facial rigging.
I would like to start by saying that we spent a _LOT_ of time thinking about facial levels of detail on Ryse, and put a lot of effort into the area. I know this is a long post, but it’s an important one.
Lowest Common Denominator
As the ‘next generation’ seems to be largely defined by multi-platform titles, it seems valuable to focus on ways to increase fidelity on next-generation hardware while still being able to target older hardware specs. That said, I have yet to see a pipeline that does this. Most next-gen games have skeletons and animations limited by the lowest common denominator of the last generation, often the PlayStation 3.
When you wonder why your awesome next-gen game doesn’t seem to have character models and animation like next-gen-only titles, this is why.
It’s very easy to increase texture resolution by having a pipeline where you author high and bake to lower targets. It’s more complicated to author meshes high and publish to lower targets; we did this on Crysis 1 and 2, where high-end PC saw higher-resolution character meshes than Xbox 360. Hardest of all is making rigs, deformers, and animations for a high-spec hardware target and creating a process to publish lower-fidelity versions. No one wants different character skeletons on each hardware platform.
You Deserve an Explanation
When we released the specs of our faces, people were understandably a bit taken aback. Why on earth would you need 250 blendshapes if you have 260 joints? The image above is actually a slide from our SIGGRAPH asset creation course that resonated very well with the audience.
Let’s take a look at some goals:
- Cut-scene fidelity in gameplay at any time; no separate cut-scene rigs
- Up to 70 characters on screen
- Able to run on multiple hardware specs
The only way to achieve the first two is through a very aggressive and granular level of detail (LOD) scheme. Once that LOD system is in place, the third item comes for free, as it did on our previous titles. However, while we had LODed meshes and materials before, we had never LODed rigs.
On a feature film, we wouldn’t use joints, we would have a largely blendshape-only face.
But this doesn’t LOD well; we need to be able to strip out facial complexity in the distance and on other platforms.
Facial Level of Detail
So to achieve these goals, we must aggressively LOD our character faces.
Let’s generate some new goals:
- Improve LOD system to allow the swapping or culling of skinned meshes per-mesh, each at hand-tailored distances per-character instance
- Not only swap meshes, but also swap skinning algorithms and materials, cull blendshapes, etc.
- One skeleton – all levels of detail stored in one nested hierarchy, with joints disabled/revealed at different LOD levels; as I mentioned above, no one wants multiple skeletons
- One animation set – it drives all levels of detail because it’s one hierarchy; only the enabled joints receive animation
- All facial animations shareable between characters
- Faces snapped onto bodies at runtime – “Cry parent constraint” of sorts snaps head, neck, spine4, clavs, and upper arms of facial rig to body, allowing dynamic LODing of face irrespective of body.
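The “Cry parent constraint” itself lives engine-side, but the idea behind the last goal can be sketched in a few lines. This is a minimal illustration, not Ryse’s actual runtime code; the joint names and the flat transform representation are assumptions.

```python
# Hypothetical sketch: snap a facial rig onto a body skeleton at runtime by
# copying the world transforms of the shared joints each frame. Transforms are
# represented here as simple (x, y, z) tuples for illustration only.

ATTACH_JOINTS = ["head", "neck", "spine4", "l_clav", "r_clav",
                 "l_upperarm", "r_upperarm"]

def snap_face_to_body(face_rig, body_rig, joints=ATTACH_JOINTS):
    """Make the face rig's attachment joints follow the body skeleton."""
    for name in joints:
        face_rig[name] = body_rig[name]  # face joint inherits body joint's world transform
    return face_rig

# The body is animated as usual; the face rig just follows it.
body = {"head": (0.0, 1.7, 0.0), "neck": (0.0, 1.6, 0.0), "spine4": (0.0, 1.5, 0.0),
        "l_clav": (-0.1, 1.5, 0.0), "r_clav": (0.1, 1.5, 0.0),
        "l_upperarm": (-0.2, 1.45, 0.0), "r_upperarm": (0.2, 1.45, 0.0)}
face = {name: (0.0, 0.0, 0.0) for name in ATTACH_JOINTS}
snap_face_to_body(face, body)
```

Because the face only reads the body’s transforms and is never parented into its hierarchy, the facial rig can switch LOD tiers on its own schedule, irrespective of what the body is doing.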
One Hierarchy to Rule them All
Before going into the meshes, skinning algorithms, culling, etc., it’s important to understand the hierarchy of the face. At any given mesh LOD level, there are many joints that are not skinned. Above you see three layers of joints: 9 at LOD0, 3 at LOD1, and 1 at LOD2.
To use a single hierarchy, but have it drive meshes at different levels, you need to accomplish the following:
- Make sure you have three layers that can drive different facial LODs, we had something like 260/70/15 on Ryse.
- Each layer must be driven, and able to deform that LOD alone. This means that when creating rig logic, you must start from the highest LOD and move down the chain. The LOD0 joints above are only responsible for skinning the details of the face at LOD0; their gross movement comes from their parent, LOD1.
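The nested layering above can be sketched as follows. This is a toy illustration, assuming joint names and a jaw/chin chain that are not from the actual Ryse rig: each LOD0 joint is parented under an LOD1 joint, which is parented under an LOD2 joint, and switching LOD just disables the finer layers while the hierarchy stays intact.

```python
# Minimal sketch of a single nested LOD joint hierarchy (names illustrative).
# Finer joints are children of coarser ones, so disabling them still leaves
# the gross movement driven by their parents.

class Joint:
    def __init__(self, name, lod, parent=None):
        self.name, self.lod, self.parent = name, lod, parent
        self.enabled = True

def set_face_lod(joints, lod_level):
    """Enable only joints whose layer is coarse enough for this LOD level."""
    for j in joints:
        j.enabled = j.lod >= lod_level  # the LOD2 joint stays on at every level
    return [j.name for j in joints if j.enabled]

jaw = Joint("jaw", lod=2)                               # LOD2: gross movement
chin = Joint("chin", lod=1, parent=jaw)                 # LOD1: regional movement
chin_crease = Joint("chin_crease", lod=0, parent=chin)  # LOD0: fine detail

joints = [jaw, chin, chin_crease]
print(set_face_lod(joints, 1))  # at LOD1 the detail joint is disabled
```

Because the animation targets the one hierarchy, no retargeting is needed when the LOD changes; disabled joints simply stop receiving animation.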
Here you can see the Marius example video from our slides. Notice the ORANGE joints are responsible for gross movement and the YELLOW or GREEN leaf joints just add detail.
Why blendshapes? Isn’t 260 joints enough?
The facial hierarchy and rig are consistent between all characters. The rig logic that drives those joints is changed and tweaked, and the skinning is tweaked, but no two faces are identical. The blendshapes serve two main purposes:
1) Get the joint rig back onto the scan. Whatever the delta is between the joint rig and the scan data associated with that solved pose from the head-cam data, bridge it. This means the fat around Nero’s neck, the bags under his eyes, his eyebrow region, etc.
2) Add volume where it’s lost due to joint skinning. Areas like the lips and cheeks, and rig functions like lips-together and sticky lips, require blendshapes.
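The first purpose comes down to simple per-vertex deltas. The sketch below uses made-up vertex data and is not the production tooling: the corrective shape stores scan-minus-joint-rig for a solved pose, so applying it at full weight lands the face back on the scan.

```python
# Sketch of a corrective blendshape: store the delta between the scan result
# and the joint-rig result for a solved pose, then add it back weighted.
# Vertex positions here are invented for illustration.

def make_corrective(scan_verts, rig_verts):
    """Per-vertex delta needed to bridge the joint rig back onto the scan."""
    return [(sx - rx, sy - ry, sz - rz)
            for (sx, sy, sz), (rx, ry, rz) in zip(scan_verts, rig_verts)]

def apply_corrective(rig_verts, delta, weight):
    """Add the weighted corrective delta on top of the joint-skinned result."""
    return [(rx + weight * dx, ry + weight * dy, rz + weight * dz)
            for (rx, ry, rz), (dx, dy, dz) in zip(rig_verts, delta)]

scan = [(0.0, 1.0, 0.2), (0.1, 1.1, 0.25)]    # scan mesh for a solved pose
rig  = [(0.0, 0.95, 0.18), (0.1, 1.05, 0.22)] # joint rig's best attempt
delta = make_corrective(scan, rig)
corrected = apply_corrective(rig, delta, 1.0)  # full weight: back on the scan
```

At LOD, these shapes are exactly what gets culled first: drop the corrective and the joints alone still give a plausible, cheaper face.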
Look at the image above; there just aren’t enough joints in the brow to catch that micro-expression on Nero’s face. It comes through with the blendshape, and he goes from looking like you kicked his dog to his accusatory surprise when he figures out that you are Damocles.
A Look Under the Hood: Ryse Facial LODing
Thanks to the hard work of graphics engineer Jerome Charles, we were able to granularly LOD our faces. These values are for your buddy Vitallion, who as a hero could be LODed a bit less aggressively. Many of the barbarians you fight en masse blew through all their blendshapes by 2m, not 4m.
| Assets / Technologies | LOD distance |
| --- | --- |
| CPU skinning, 8 influences, 260 joints, 230 blendshapes, tangent update, 5k tris across multiple meshes | 0-4m |
| CPU skinning, 8 influences, 260 joints, 3-5k tris across multiple meshes with small face parts culled | 4-7m |
| GPU skinning, 4 influences, 70 joints, 2k mesh with integrated eyes | 7-10m |
| GPU skinning, 4 influences, <10 joints, <1k mesh | 10m+ |
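Picking the tier at runtime is then a plain distance lookup. A minimal sketch using Vitallion’s thresholds from the table above; hard-coding the values and the boundary handling (strict `<` at each cutoff) are my assumptions, not the engine’s actual logic:

```python
# Sketch: choose a facial LOD tier from camera distance, using the
# Vitallion-style thresholds described in the table above.

FACE_LOD_TIERS = [
    (4.0,          "CPU skin, 8 inf, 260 joints, 230 blendshapes, tangent update"),
    (7.0,          "CPU skin, 8 inf, 260 joints, small face parts culled"),
    (10.0,         "GPU skin, 4 inf, 70 joints, integrated eyes"),
    (float("inf"), "GPU skin, 4 inf, <10 joints"),
]

def pick_face_lod(distance_m, tiers=FACE_LOD_TIERS):
    """Return the first tier whose max distance the camera is still inside."""
    for max_dist, description in tiers:
        if distance_m < max_dist:
            return description
    return tiers[-1][1]

print(pick_face_lod(2.5))   # close-up: full blendshape rig
print(pick_face_lod(12.0))  # far away: minimal GPU-skinned head
```

Because the thresholds are per-character-instance data rather than code, a hero and a mass-combat barbarian can share the logic with different numbers.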
Here’s a different table showing the face mesh parts that we culled and when:
| Distance | Face mesh part change |
| --- | --- |
| 4m | Eyebrow meshes replaced, baked into facial texture |
| 3m | Eyelash geometry culled |
| 3m | Eye AO ‘overlay’ layer culled |
| 4m | Eyeballs removed, replaced with baked-in eyes in head mesh |
| 2m | Eye ‘water’ meniscus culled |
| 3m | Eye tear duct culled |
| 3m | Teeth swapped for built-in mesh |
| 3m | Tongue swapped for built-in mesh |
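The per-part culling can be data-driven the same way. A sketch using the distances above; the part names are illustrative placeholders, and a real implementation would reference mesh handles and also handle the swap-to-baked cases rather than a plain cull:

```python
# Sketch: per-part cull/swap distances for the separate face meshes,
# using the distances from the table above (part names illustrative).

PART_CULL_DISTANCES = {
    "eye_water_meniscus": 2.0,
    "eyelashes": 3.0,
    "eye_ao_overlay": 3.0,
    "tearducts": 3.0,
    "teeth": 3.0,
    "tongue": 3.0,
    "eyebrows": 4.0,
    "eyeballs": 4.0,
}

def parts_to_keep(distance_m, table=PART_CULL_DISTANCES):
    """Return the separate face-part meshes still rendered at this distance."""
    return sorted(name for name, cull_at in table.items() if distance_m < cull_at)

print(parts_to_keep(2.5))  # meniscus already gone; the rest still render
```

Keeping these distances per part (and per character instance) is what makes the LOD scheme granular: each small mesh disappears exactly when it stops contributing pixels.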
Why isn’t this standard?
Because it’s very difficult and very complicated, and there aren’t many people out there who can pull something like this off. On Ryse we partnered with my friend Vlad at 3Lateral; after four months working on the initial Marius facial prototype, he and his team were able to deliver 23 more facial rigs at the same fidelity in just under three months!
But also, there’s the whole discussion about whether the time and effort spent on that last 5% really pays off in the end. Why not just use PS3 facial rigs on all platforms and spend a little more on marketing? It’s happening! And those guys probably aren’t going bankrupt any time soon.. ¬.¬
I am insanely proud of what the team accomplished on Ryse. Facial rigging is nothing without a great bunch of artists, programmers, animators, etc. Here are some good moments where the performances really come through; these are all the in-game meshes and rigs:
DISCLAIMER: All of the info above and more is publicly available in our SIGGRAPH 2014 course notes.