Stumbling Toward 'Awesomeness'

A Technical Art Blog

Saturday, December 1, 2018

The Asset Registry: Finding and Iterating Through Assets with UE4 Python

One thing that you will want to do right away is iterate through a bank of existing assets or find assets in a build. In UE4, your main window into the ‘content browser’ is the ‘asset registry’. You can use it to find all kinds of assets, iterate through assets in a folder, etc.

Let’s go ahead and get an instance of it and take a look; now would be a good time to open the unreal.AssetRegistryHelpers UE4 Python API docs in another tab! Also, I am running this in the UE4 4.21 release, with the free Paragon Marketplace assets.

Walking Assets In A Directory

Let’s ask it for all the assets in a certain path.

asset_reg = unreal.AssetRegistryHelpers.get_asset_registry()
assets = asset_reg.get_assets_by_path('/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes')

The method to get the asset registry has returned an unreal.AssetRegistry class. If you look at this class, you can see some really useful calls, like get_assets_by_path, that I used on the next line.

Let’s take a look at the assets:

for asset in assets: print asset

This yields:

LogPython: <Struct 'AssetData' (0x000001ADF8564560) {object_path: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes/Morigesh_Skeleton.Morigesh_Skeleton", package_name: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes/Morigesh_Skeleton", package_path: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes", asset_name: "Morigesh_Skeleton", asset_class: "Skeleton"}>
LogPython: <Struct 'AssetData' (0x000001ADF8566A90) {object_path: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes/Orion_Proto_Retarget.Orion_Proto_Retarget", package_name: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes/Orion_Proto_Retarget", package_path: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes", asset_name: "Orion_Proto_Retarget", asset_class: "Rig"}>
LogPython: <Struct 'AssetData' (0x000001ADF8564560) {object_path: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes/Morigesh_Cyl_Shadows.Morigesh_Cyl_Shadows", package_name: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes/Morigesh_Cyl_Shadows", package_path: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes", asset_name: "Morigesh_Cyl_Shadows", asset_class: "PhysicsAsset"}>
LogPython: <Struct 'AssetData' (0x000001ADF8566A90) {object_path: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes/Morigesh_Physics.Morigesh_Physics", package_name: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes/Morigesh_Physics", package_path: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes", asset_name: "Morigesh_Physics", asset_class: "PhysicsAsset"}>
LogPython: <Struct 'AssetData' (0x000001ADF85654B0) {object_path: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes/Morigesh.Morigesh", package_name: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes/Morigesh", package_path: "/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes", asset_name: "Morigesh", asset_class: "SkeletalMesh"}>

It has returned Python objects of the ‘unreal.AssetData’ type; this class has a lot of things we can query, like class type, name, full path, etc. Let’s print the class name for each:

for asset in assets:
    print asset.asset_class

Let’s only look at skeletal meshes, and then let’s do something to them. In order to manipulate them, we need to load them. First, look at what the get_full_name function returns:

for asset in assets:
    #you could use isinstance with unreal.SkeletalMesh, but let's build on what we learned
    if asset.asset_class == 'SkeletalMesh':
        print asset.get_full_name()
#>SkeletalMesh'/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes/Morigesh.Morigesh'

We need to split that output, then load the asset:

for asset in assets:
    if asset.asset_class == 'SkeletalMesh':
        full_name = asset.get_full_name()
        path = full_name.split(' ')[-1]
        skelmesh = unreal.load_asset(path)

Now this returned an unreal.SkeletalMesh class and we can ask it for its skeleton:

skeleton = skelmesh.skeleton
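
Putting the pieces together, here is a minimal sketch (assuming the same Paragon path as above) that loads every skeletal mesh in the folder and prints the skeleton it references:

import unreal

asset_reg = unreal.AssetRegistryHelpers.get_asset_registry()
assets = asset_reg.get_assets_by_path('/Game/ParagonMorigesh/Characters/Heroes/Morigesh/Meshes')
for asset in assets:
    if asset.asset_class == 'SkeletalMesh':
        # get_full_name returns "<Class> <ObjectPath>", so take the last token
        skelmesh = unreal.load_asset(asset.get_full_name().split(' ')[-1])
        print(skelmesh.skeleton)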

Finding Assets

Let’s say someone gives you a list of problematic assets, but they’re not long paths, just asset names! You want to be able to find the long path for all assets in the list so that you can do something with them. The AssetRegistry can help!

Let’s build a dictionary of all assets in the build; the keys will be the short names, and the values will be the long paths:

def get_asset_dict(asset_type=None):
    asset_list = None
    if asset_type:
        asset_list = unreal.AssetRegistryHelpers.get_asset_registry().get_assets_by_class(asset_type)
    else:
        asset_list = unreal.AssetRegistryHelpers.get_asset_registry().get_all_assets()
    asset_dict = {}
    for asset in asset_list:
        asset_name = str(asset.asset_name)
        obj_path = asset.object_path
        if asset_name not in asset_dict:
            asset_dict[asset_name] = [str(obj_path)]
        else:
            asset_dict[asset_name].append(str(obj_path))
 
    return asset_dict

This takes a second or two to build, but you now have an index of all assets by short name that you can query for their full paths. It’s a bit faster if you query only assets of a certain type you know you’re looking for. You will also know when there is more than one asset with a given name, because its list will have multiple entries. (That’s why we store what we find in a list: there could be multiple assets with the same name.)
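
As a quick usage sketch, suppose someone hands you a list of short names (the names below are just placeholders); you can resolve each one to its full object path(s) and load it:

problem_assets = ['Morigesh', 'Orion_Proto_Retarget']  # hypothetical short names
asset_dict = get_asset_dict()
for name in problem_assets:
    for obj_path in asset_dict.get(name, []):
        print(obj_path)
        # asset = unreal.load_asset(obj_path)  # ...and do something with it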

posted by Chris at 10:12 PM  

Thursday, August 30, 2018

UE4 Python API Documentation

The team has worked really hard to get this building and automagically updated as we expose more useful stuff:

https://api.unrealengine.com/INT/PythonAPI/

posted by Chris at 4:56 PM  

Friday, April 6, 2018

Vector Math Examples in UE4 Python

One of the most visited/indexed posts on this site was the brief Maya vector math post; now that we are releasing Python in UE4, I thought I would port those examples to UE4 Python.

Creating Vectors

Let’s query the bounding box extents of a skeletal mesh actor. This will return two points in world space as vectors:

bbox = mesh.get_actor_bounds(False)
v1, v2 = bbox[0], bbox[1]

If we print the first vector, we see it’s a struct of type Vector

print v1
#<Struct 'Vector' (0x000001EACECEFE78) {x: 540.073303, y: 32.021194, z: 124.710869}>

If you want the vector as a tuple or something to export to elsewhere, you just make a tuple of its components:

print (v1.x, v1.y, v1.z)
#(540.0733032226562, 32.02119445800781, 124.71086883544922)

If you want to create your own vector, you simply do:

my_vec3 = unreal.Vector(1,2,3)
print my_vec3
#<Struct 'Vector' (0x000001EACECECA40) {x: 1.000000, y: 2.000000, z: 3.000000}>

Length / Distance / Magnitude

The Vector struct supports many mathematical operations. Let’s say we want to get the distance from v1 to v2: we subtract them, which returns a new vector, then we get the length (‘size’ in this case) of that vector:

new_vec = v2-v1
print new_vec.size()
#478.868011475

The vector struct has a lot of convenience functions built in (check out the docs here). For instance, let’s get the distance from v1 to v2 (the diagonal across our bounding box) without doing the above, by calling the dist() function:

print v1.dist(v2)
#478.868011475

There is also a dist_squared function if you just want to quickly compare which distance is greater without calculating the real distance.
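
Here is a quick sketch; I am assuming the PEP8 exposure of that function is named dist_squared, so double-check the Vector docs:

origin = unreal.Vector(0, 0, 0)
a = unreal.Vector(1, 2, 3)
b = unreal.Vector(4, 5, 6)
# comparing squared distances avoids the square root
if origin.dist_squared(a) < origin.dist_squared(b):
    print('a is closer to the origin than b')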

USE CASE: FIND SMALL ACTORS

Using what we got, let’s play in the default scene:

We can iterate through the scene and find actors under a certain size/volume:

for actor in static_meshes:
    mesh_component = actor.get_component_by_class(unreal.StaticMeshComponent)
    v1, v2 = mesh_component.get_local_bounds()
    if v1.dist(v2) < 150:
        print actor.get_name()
#Statue

I am querying the bounds of the static mesh components because get_bounds() on the actor class returns the unscaled bounds; you may notice that many things in the scene, including the statue and floor, are scaled. This is reflected in the get_local_bounds() of the mesh component.

DOT PRODUCT / ANGLE BETWEEN TWO VECTORS

If we wanted to know the angle between two vectors, we would use the dot product. Let’s create two new vectors, v1 and v2, because our previous ones were not really vectors per se, but point locations. Just like in UE4, you use the ‘|’ pipe to do a vector dot product.

v1 = unreal.Vector(0,0,1)
v2 = unreal.Vector(0,1,1)
v1.normalize()
v2.normalize()
print v1 | v2
print v1 * v2

Notice that the asterisk multiplies the vectors component-wise, while the pipe returns the sum of the component-wise products, i.e. the dot product / scalar product, as a float.

For the angle between, let’s import python’s math library:

import math
dot = v1|v2
print math.acos(dot) #returns 0.785398180512
print math.acos(dot) * 180 / math.pi
#returns 45.0000009806

Above I used the Python math library, but there’s also an Unreal math library available, unreal.MathLibrary. You should check that out; it has vector math functions that don’t exist in the Vector class:

dot = v1|v2
print unreal.MathLibrary.acos(dot) #returns 0.785398180512
print unreal.MathLibrary.acos(dot) * 180 / math.pi #returns 45.0000009806
print unreal.MathLibrary.radians_to_degrees(unreal.MathLibrary.acos(dot)) #returns: 45.0

USE CASE: CHECK ACTOR COLINEARITY

Let’s use the dot product to check if chairs are colinear, or facing the same direction. Let’s dupe one chair and call it ‘Chair_dupe’:

fwd_vectors = {}
for actor in static_meshes:
    if 'Chair' in  actor.get_name():
        actor_vec = actor.get_actor_forward_vector()
        for stored_actor in fwd_vectors:
            if actor_vec | fwd_vectors[stored_actor] == 1.0:
                print actor.get_name(), 'is colinear to', stored_actor
        fwd_vectors[actor.get_name()] = actor_vec
#returns: Chair_dupe is colinear to Chair
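
One caveat: the comparison above relies on an exact floating point match of 1.0, which only works here because the duped chair’s forward vector is identical. A slightly safer sketch swaps the condition inside the same loop for a small tolerance:

if abs((actor_vec | fwd_vectors[stored_actor]) - 1.0) < 1e-4:
    print(actor.get_name() + ' is colinear to ' + stored_actor)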

I hope this was helpful, I can post some other examples later.

posted by Chris at 1:23 AM  

Tuesday, April 3, 2018

Stumbling Into UE4 Python

Out of the gate, it’s important to understand that the Python exposure wraps Blueprint, not the C++ SDK; if something isn’t exposed to Blueprint, you cannot access it with Python.

A Note on Naming

There are notable differences that should be pointed out between the Python exposure and the UE4 programming documentation. UE4 class names have dropped the prefix: an FRotator becomes unreal.Rotator, an FVector becomes unreal.Vector, etc. Functions on those classes have been converted to PEP8-style names that differ from their C++ counterparts; for example, Actor.GetActorBounds is Actor.get_actor_bounds, just FYI.

Detective Work

If you have ever used an initial alpha or beta embedded Python implementation in an application like Maya or MotionBuilder (if you’re old like me!), you know the drill. There’s no documentation yet, but you can use existing Blueprint/C++ documentation and native Python built-in functions like dir, help, and type to try and navigate.

DIR()
Python’s dir command returns a ‘directory’ of all attributes of an object. This is extremely useful. Let’s query the attributes of a static mesh actor:

print dir(actor)

This returns a massive list; if you print each item, you’ll see something like this:

...
get_remote_role
get_squared_distance_to
get_tickable_when_paused
get_typed_outer
get_velocity
get_vertical_distance_to
get_world
has_authority
hidden
initial_life_span
instigator
is_actor_being_destroyed
...

HELP()
Jamie Dale has put a lot of work into making help work as expected. Through dir we saw the actor has a get_velocity property; let’s ask help about it:

help(actor.get_velocity)

Help returns not only info about get_velocity, but tells us what a function takes and returns:

Help on built-in function get_velocity:
get_velocity(...)
    x.get_velocity() -> Vector -- Returns velocity (in cm/s (Unreal Units/second) of the rootcomponent if it is either using physics or has an associated MovementComponent
    param: return_value (Vector)

TYPE()
Type just returns the type of object, but it’s useful when you’re not sure exactly what something is. Let’s ask what type our actor is:

print type(actor)
#<type 'StaticMeshActor'>

Now let’s ask it about the get_velocity above; we know it’s a function because I just used it, but as an example:

print type(actor.get_velocity)
#<type 'builtin_function_or_method_with_closure'>

With these core concepts, you can really begin stumbling around and making useful scripts and tools!

In Practice

Let’s query all StaticMeshes in a level (the default UE4 map). First thing is to load the level:

import unreal
 
level_path = '/Game/StarterContent/Maps/Minimal_Default'
level = unreal.find_asset(None, name=level_path)

Next we use get_all_actors_of_class to query all the static meshes:

static_meshes = unreal.GameplayStatics.get_all_actors_of_class(level, unreal.StaticMeshActor)
print static_meshes
#returns: [StaticMeshActor'"/Game/StarterContent/Maps/Minimal_Default.Minimal_Default:PersistentLevel.Table"', StaticMeshActor'"/Game/StarterContent/Maps/Minimal_Default.Minimal_Default:PersistentLevel.Statue"', StaticMeshActor'"/Game/StarterContent/Maps/Minimal_Default.Minimal_Default:PersistentLevel.Floor_14"', StaticMeshActor'"/Game/StarterContent/Maps/Minimal_Default.Minimal_Default:PersistentLevel.Floor"', StaticMeshActor'"/Game/StarterContent/Maps/Minimal_Default.Minimal_Default:PersistentLevel.Chair_15"', StaticMeshActor'"/Game/StarterContent/Maps/Minimal_Default.Minimal_Default:PersistentLevel.Chair"']

That returned a long list of class objects; let’s make it more readable by calling the get_name function of each with a simple list comprehension:

print [mesh.get_name() for mesh in static_meshes]
#returns: ['Table', 'Statue', 'Floor_14', 'Floor', 'Chair_15', 'Chair']

How did I know there was a get_name function? I saw it when I ran dir on the static mesh actor: dir(actor).

Let’s move the table:

for actor in static_meshes:
    if actor.get_name() == 'Table':
        xform = actor.get_actor_transform()
        location = actor.get_actor_location()
        print xform
        print location
        break
#returns:
#<Struct 'Transform' (0x0000000095E5D3C0) {rotation: {x: 0.000000, y: 0.000000, z: 0.000000, w: 1.000000}, translation: {x: -180.000000, y: 0.000000, z: 32.000000}, scale3d: {x: 1.000000, y: 1.000000, z: 1.000000}}>
#<Struct 'Vector' (0x00000000B8ACE5C0) {x: -180.000000, y: 0.000000, z: 32.000000}>

Let’s change the location and use set_actor_location to set a new location:

location.z += 28 #location = 60
actor.set_actor_location(location)
#returns:
#Traceback (most recent call last):
#TypeError: Required argument 'sweep' (pos 2) not found

Wow, so that fails; we need to find out why. Let’s ask help:

help(actor.set_actor_location)
Help on built-in function set_actor_location:
 
set_actor_location(...)
    x.set_actor_location(new_location, sweep, teleport) -> HitResult or None -- Move the Actor to the specified location.
    param: new_location (Vector) -- The new location to move the Actor to.
    param: sweep (bool) -- Whether we sweep to the destination location, triggering overlaps along the way and stopping short of the target if blocked by something. Only the root component is swept and checked for blocking collision, child components move without sweeping. If collision is off, this has no effect.
    param: teleport (bool) -- Whether we teleport the physics state (if physics collision is enabled for this object). If true, physics velocity for this object is unchanged (so ragdoll parts are not affected by change in location). If false, physics velocity is updated based on the change in position (affecting ragdoll parts). If CCD is on and not teleporting, this will affect objects along the entire swept volume.
    param: sweep_hit_result (HitResult) -- The hit result from the move if swept.
    param: return_value (bool)
    return: Whether the location was successfully set (if not swept), or whether movement occurred at all (if swept).

Ok, awesome, so I’ll tell it I don’t want it to sweep, and I do want it to teleport:

location.z += 28 #location.z = 60
actor.set_actor_location(location, False, True)

You should see the table move!

NOTE: Careful, in order for your Python changes to be added to the undo stack, you need to use unreal.ScopedEditorTransaction.
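
Here is a minimal sketch of wrapping the move in a transaction so it shows up in the editor’s undo history (assuming the same actor variable as above):

with unreal.ScopedEditorTransaction('Move Table Up') as trans:
    location = actor.get_actor_location()
    location.z += 28
    actor.set_actor_location(location, False, True)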

I hope this was helpful, I can post some other examples later.

posted by Chris at 1:28 AM  

Tuesday, March 20, 2018

Python Ships in Unreal Engine 4.19

ADDING THE PLUGIN

For anyone with version 4.19 of the engine or later, you now have access to Python. It’s marked ‘experimental’, enable it under plugins:

You can then enter Python through the command line:

As the Python implementation wraps Blueprint and Blutilities, you only have access to things exposed through Blueprint.

INSTALLING THIRD PARTY LIBRARIES

The team has made it a lot easier to install 3rd party libraries by shipping with pip.exe; it’s in this folder: <build>\Engine\Source\ThirdParty\Python\Win64\Scripts

Here’s an example of installing PySide into the build:

>>pip install --target=Y:\Build\UE_4.19\Engine\Source\ThirdParty\Python\Win64\Lib\site-packages Pyside
Collecting Pyside
  Using cached PySide-1.2.4-cp27-none-win_amd64.whl
Installing collected packages: Pyside
Successfully installed Pyside-1.2.4
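
Once installed, a quick way to check that the package is visible from the UE4 Python console (just a sanity check, nothing engine-specific):

import PySide
print(PySide.__version__)
#1.2.4
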
posted by Chris at 10:15 AM  

Tuesday, June 6, 2017

Crysis Technologies

In 2006, the team at Crytek was hard at work trying to come up with ways to ship Crysis; we had definitely bitten off more than we could chew. When cleaning out my HD I found some videos that are now a decade old, but it’s interesting to see, that’s for sure!

Facial Editor

We didn’t have a clue how we would animate all the lines of dialog that were required for the game. 3D Studio Max, which we used for animation at the time, had no real means of animating faces; now, a decade later, Max and Maya still have zero offering to help with facial rigging or animation. So... we decided to write our own. Stephen Bender (Animation Lead) and I worked with Timur Davidenko and Michael Smith (Programmers) on this tool. Marco Corbetta wrote the 2.5D head/facial tracker. Here’s a video:

 

The user would feed the system a text file, an audio file, and a webcam video of themselves. It would generate the mouth phonemes from the text/audio, and the upper two-thirds of the face from the video. The system would generate this animation in the same interface the animators used to animate, so it was easily editable. It shipped with the MS speech DLL, but you could swap that for Annosoft if you licensed it. Crysis shipped with all characters having 98 blendshapes, driven by Facial Editor curves/animation using non-linear expressions. Imagine shipping a game today without having animators touch a face in a DCC app!

SequencePane

click to enlarge

PhotoBump

Many people know that Crytek released the first commercially available normal map generator, PolyBump, but rarely has anyone heard of its companion: PhotoBump. This was created by Marco Corbetta around the same time, but released only to CryEngine licensees in 2005. It was probably one of the first commercial photogrammetry apps, and definitely one of the first uses of photogrammetry in games. Much of the rocky terrain in Crysis was created with the help of PhotoBump! Marco also stamped/derived high frequency details from the diffuse, which I hadn’t seen others do until sometime after.

SIGGRAPH Best Realtime Graphics 2007

Here’s the SIGGRAPH ET reel from the year we released Crysis. I still can’t believe some of this stuff, like the guy pathfinding across the bridge of constrained boards and pieces of rope! I actually cut and edited this video myself back then, rendering it all out from the engine as well!

posted by Chris at 10:25 PM  

Friday, September 2, 2016

SIGGRAPH Realtime Live Demo Stream

The stream of our SIGGRAPH Realtime Live demo is up on teh internets. If you haven’t seen the actual live demo, check it out!

It feels amazing to win the award for best Realtime Graphics amongst such industry giants. There are so many companies from so many industries participating now, and the event has grown so much. It feels really humbling to be honored with this for a third year; no pressure!

posted by Chris at 9:26 AM  

Saturday, May 7, 2016

Simple Runtime Rigging in UE4

This is something that I found when cleaning my old computer out at work this week after an upgrade. This is an example of using a look at controller and a blendspace to drive a fleshy eye deformation in UE4 live at runtime.

This is the technique Jeremy Ernst used on Smaug’s eye in the demo we did with Weta over a year ago. Create a 2d blendspace for the eye poses. Export out left, right, up, down poses and connect them:

eye2

Grab the local transform from the eye, and set two vars with it:

image13

Create a Look At Controller and integrate the blendspace:

image06

posted by Chris at 7:40 AM  

Monday, April 4, 2016

Off on a Tangent

Plutarch tells that Alexander the Great visited the Libyan Sibyl in search of the correct constellation of checkboxes and steps to get meshes from Max to Maya.

I have spent many years of my life in studios where characters are modeled in a package other than Maya (often Max) and imported into Maya via FBX. Having worked alongside great character artists like Hanno Hagedorn, Abdenour Bachir, and most recently Kevin Lanning and his team here at Epic, I cannot tell you how many hours of our lives were devoted to trying to get mesh tangents into the final product that resembled what they were in the original sculpt/bake. Not to mention brilliant pipeline programmers like Bogdan Coroi, or James Goulding’s team here at Epic; many hours have been spent trying to solve this issue.

Sometimes it seemed like some mystical channeling, whereby some constellation of export or import checkboxes, along with maybe layering an Edit Mesh modifier on top of your character before export to Maya, worked. Sometimes the solution seemed to be exporting only triangulated meshes to Maya, which required a (fragile) pipeline to keep a quaded mesh for skinning and a triangulated mesh from Max for export.

Well, as it turns out, Maya has always ignored all mesh tangent data on FBX import.

I hope this post saves you some headache. At Crytek we looked to change the pipeline to store all normal maps in world space. Another, more pragmatic solution, proposed by Jeremy Ernst here at Epic, is to give the engine the static mesh from Max and the skinned mesh from Maya and just transfer the original data. Scott Parrish told me that his team bakes against the skinned FBX as it comes out of UE4, which is another way of solving the issue.

I also understand this is not a simple issue; all DCC packages work differently. Max allows users to turn edges, etc. But it’s good to know that we’re also not crazy.

posted by Chris at 9:46 PM  

Thursday, March 31, 2016

Why Does Everyone Write Their Own FBX Exporter?

fbx_export

When it comes to characters or character animation, any studio doing anything remotely complex has to write their own FBX exporter. But why?

send_to_unreal

It’s true that the FBX export options dialog is so convoluted that even ADSK has created a front end for exporting FBX to Unreal or Unity, but the main reason boils down to one issue: Maya FBX Export does not bake animation properly.

I *WANT* to use the FBX Exporter core (from Python). As a technical artist, I want a C++ plugin efficiently traversing the DAG and baking/exporting animation. I do not want to dupe a skeleton and walk the timeline using Python, calling scene update on the entire DAG every frame. The FBX Exporter doesn’t even have Python exposure, so we have to eval MEL commands; it’s just not ideal, to say the least.

Where Have the Attrs Gone?

When exporting characters elsewhere, you want to use FBX to get important data there as well, not just an animated transform. In this test file [fbx_test], you see a sphere skinned to a joint. That joint has an attribute called important_data. In games especially, we can use this scalar to drive a material parameter such as a wrinkle on a face, a blendshape, or to allow an animator to blend off a cloth solver; anything really. So here’s our important data, driven by a multiply-divide node:

Capture
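
If you would rather rebuild the test scene than download the file, here is a rough Maya Python sketch of the setup described above (node names are placeholders):

import maya.cmds as cmds

joint = cmds.joint(name='joint1')
sphere = cmds.polySphere(name='pSphere1')[0]
cmds.skinCluster(joint, sphere)

# the custom, keyable attribute we want to survive the FBX round trip
cmds.addAttr(joint, longName='important_data', attributeType='double', keyable=True)

# drive it with a multiply-divide node, as in the image above
md = cmds.createNode('multiplyDivide', name='important_data_driver')
cmds.setKeyframe(md + '.input1X', time=1, value=0)
cmds.setKeyframe(md + '.input1X', time=24, value=10)
cmds.connectAttr(md + '.outputX', joint + '.important_data')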

Export this scene as an FBX, being sure to tell it to bake complex animation:

mel.eval("FBXExportBakeComplexAnimation -v true")
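
For completeness, a hedged sketch of the rest of the export from Python (the output path is a placeholder; select the joint and sphere first, since -s exports the selection):

import maya.cmds as cmds
import maya.mel as mel

cmds.select(['joint1', 'pSphere1'])
mel.eval('FBXExportBakeComplexAnimation -v true')
mel.eval('FBXExport -f "C:/temp/fbx_test.fbx" -s')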

Now import it back into an empty Maya scene: your important_data is gone!

To make matters worse, in the FBX ascii file I see an anim curve for my important data:

;AnimCurveNode::important_data, Model::joint1
C: "OP",1519473056,205108448, "important_data"

So I am at a loss as to why programs using the FBX SDK do not get this data on import, including Maya.

DISCLAIMER: On multiple occasions I have reached out to ADSK asking them to please address this issue, but it has not happened. From the image above (new game export options), you can see that, on some level, ADSK wants to make this experience better, so please echo me in asking them to fix this issue.

In line with the title of this post, I have written, and would gladly give out, an exporter for UE4, but as it uses the C++ bake / FBX Exporter from Maya, I would prefer to wait until this gets fixed. The ‘Bake Root’ option was actually created by Kevin Vassey at Epic because UE4 imports custom attrs on the root joint, so we just bake that if needed. This is a little puny exporter; internally we use Jeremy’s A.R.T. Toolkit for exporting animation in production.

uExportt

 

UPDATE 1: Thanks for the shares and responses. Some of you have told me that you have also talked to ADSK about this, and others say it was fixed at one time, but is now broken. ADSK has reached out in the comments and has created a bug number in their tracking system. I will update later on its priority/timeframe if I get more info!

UPDATE 2: I can verify that this is fixed if you grab FBX 2016.1.2 or later from here. It should be fixed in future versions of Maya; the bug was fixed before I wrote this post, but the fix was not in Maya 2016. In the comments, Chris Dardis said that the fix was available in a non-public cut of Maya he was using. As the link is not a compiled plugin, I have uploaded a compiled Maya 2016 FBX 2016.1.2 plugin here.

posted by Chris at 10:01 PM  

Monday, January 5, 2015

An Epic Change of Venue

Epic_Logo

In November I started at Epic Games in North Carolina, joining Jeremy Ernst and his team to focus on high quality characters and related technologies. For quite some time I have been enamored with not only the decisions and business practices of Epic, but the engine itself. Whether it’s putting the code on GitHub, or the deep implementations of things like Blueprint, I am constantly amazed that they seem to remain focused, always putting the user first.

Look for some cool things coming down the pipe this year; we have an amazing team of really talented people, and a huge user base that spans all industries.

Not many people know that I was born in South Carolina; I grew up in Florida and went to school in Georgia. I always felt a bit out of place on the West Coast, and it’s great to finally be back in a place that feels like home.

posted by Chris at 9:34 AM  

Thursday, September 18, 2014

Ryse SIGGRAPH 2014 Reel

I mentioned that we won “best Realtime Graphics” at this year’s SIGGRAPH conference, but I never linked the video:
https://www.youtube.com/watch?v=mLvUQNjgY7E

posted by Chris at 4:24 PM  

Tuesday, August 26, 2014

Multi-Resolution Facial Rigging

At SIGGRAPH we discussed a bit of our facial pipeline that we haven’t talked about before: namely, facial LODs and multi-platform facial rigging.

I would like to start by saying that we spent a _LOT_ of time thinking about facial levels of detail on Ryse, and put a lot of effort into the area. I know this is a long post, but it’s an important one.

run_on_brian

Lowest Common Denominator

As the ‘next generation’ seems to be largely defined by multi-platform titles, it seems valuable to focus on ways to increase fidelity on next-generation hardware while still being able to target older hardware specs. That said, I have yet to see a pipeline to do this. Most next-gen games have skeletons and animations limited by the lowest common denominator of the last generation, often the PlayStation 3.

When you wonder why your awesome next gen game doesn’t seem to have character models and animation like next-gen only titles, this is why.

It’s very easy to increase texture resolution by having a pipeline where you author high and bake to lower targets. It’s more complicated to author meshes high and publish to lower targets; we did this on Crysis 1 and 2, where high-end PC saw higher mesh resolution characters than Xbox 360. I would say it’s hardest of all to make rigs, deformers, and animations for a high-spec hardware target and create a process to publish lower fidelity versions. No one wants to have different character skeletons on each hardware platform.

facial_complexity

You Deserve an Explanation

When we released the specs of our faces, people understandably were a bit taken aback. Why on earth would you need 250 blendshapes if you have 260 joints? The image above is actually a slide from our asset creation SIGGRAPH course that resonated very well with the audience.

Let’s take a look at some goals:

  1. Cut-scene fidelity in gameplay at any time- no cut-scene rigs
  2. Up to 70 characters on screen
  3. Able to run on multiple hardware specs

The only way to achieve the first two is through a very aggressive and granular level of detail (LOD) scheme. Once that LOD system is in place, the third item comes for free, as it did on our previous titles. However, while we had LODed meshes and materials before, we had never LODed rigs.

On a feature film, we wouldn’t use joints; we would have a largely blendshape-only face.

But this doesn’t LOD well; we need to be able to strip out facial complexity in the distance and on other platforms.

Facial Level of Detail

So to achieve these goals, we must aggressively LOD our character faces.

Let’s generate some new goals:

  • Improve LOD system to allow the swapping or culling of skinned meshes per-mesh, each at hand-tailored distances per-character instance
  • Not only swap meshes, but skinning algorithms, materials, cull blendshapes, etc..
  • One skeleton – all levels of detail stored in one nested hierarchy, disable/reveal joints at different LOD levels, as I mention above, no one wants multiple skeletons
  • One animation set – drives all layers of detail because it’s one hierarchy, only the enabled joints receive animation
  • All facial animations shareable between characters
  • Faces snapped onto bodies at runtime – “Cry parent constraint” of sorts snaps head, neck, spine4, clavs, and upper arms of facial rig to body, allowing dynamic LODing of face irrespective of body.

LOD_hierarchy

One Hierarchy to Rule them All

Before going into the meshes, skinning algorithms, culling, etc., it’s important to understand the hierarchy of the face. At any given mesh LOD level, there are many joints that are not skinned. Above you see three layers of joints: 9 at LOD0, 3 at LOD1, and 1 at LOD2.

To use a single hierarchy, but have it drive meshes at different levels, you need to accomplish the following:

  • Make sure you have three layers that can drive different facial LODs; we had something like 260/70/15 on Ryse.
  • Each layer must be driven, and able to deform that LOD alone. Meaning when creating rig logic, you must start from the highest LOD and move down the chain. The LOD0 joints above would only be responsible for skinning the details of the face at LOD0, their gross movement comes from their parent, LOD1.

Here you can see the Marius example video from our slides. Notice the ORANGE joints are responsible for gross movement and the YELLOW or GREEN leaf joints just add detail.

jaw_drop_skel

 

Why blendshapes? Isn’t 260 joints enough?

The facial hierarchy and rig is consistent between all characters. The rig logic that drives those joints is changed and tweaked, the skinning is tweaked, but no two faces are identical. The blendshapes serve two main purposes:

1) Get the joint rig back onto the scan. Whatever the delta is from the joint rig to the scan data associated with that solved pose from the headcam data, bridge it. This means fat around Nero’s neck, bags under his eyes, his eyebrow region, etc.

2) Add volume where it’s lost due to joint skinning. Areas like the lips, the cheeks, rig functions like lips together, sticky lips, etc, require blendshapes.

nero_corectives

Look at the image above: there just aren’t enough joints in the brow to catch that micro-expression on Nero’s face. It comes through with the blendshape, and he goes from looking like you kicked his dog to his accusatory surprise when he figures out that you are Damocles.

A Look Under the Hood: Ryse Facial LODing

Thanks to the hard work of graphics engineer Jerome Charles, we were able to granularly LOD our faces. These values are from your buddy Vitallion, who was a hero and could be a bit less aggressive; many of the barbarians you fight en masse blew through all their blendshapes at 2m, not 4m.

Assets / Technologies (LOD) | Distance
CPU skinning, 8 inf, 260 joints, 230 blendshapes, tangent update, 5k tris across multiple meshes | 0-4m
CPU skinning, 8 inf, 260 joints, 3-5k tris across multiple meshes with small face parts culled | 4-7m
GPU skinning, 4 inf, 70 joints, 2k mesh with integrated eyes | 7-10m
GPU skinning, 4 inf, <10 joints, <1k mesh | 10m+

 

Here’s a different table showing the face mesh parts that we culled and when:

Distance | Face parts
4m | Eyebrow meshes replaced, baked into facial texture
3m | Eyelash geometry culled
3m | Eye AO ‘overlay’ layer culled
4m | Eye balls removed, replaced with baked-in eyes in head mesh
2m | Eye ‘water’ meniscus culled
3m | Eye tearduct culled
3m | Teeth swapped for built-in mesh
3m | Tongue swapped for built-in mesh

Why isn’t this standard?

Because it’s very difficult and very complicated, and there aren’t many people who can pull something like this off. On Ryse we partnered with my friend Vlad at 3Lateral; after 4 months working on the initial Marius facial prototype, he and his team were able to deliver 23 more facial rigs at the same fidelity in just under three months!

But also, there’s the whole discussion about whether the time and effort spent on that last 5% really pays off in the end. Why not just use PS3 facial rigs on all platforms and spend a little more on marketing? It’s happening! And those guys probably aren’t going bankrupt any time soon..  ¬.¬

I am insanely proud of what the team accomplished on Ryse. Facial rigging is nothing without a great bunch of artists, programmers, animators, etc. Here are some good moments where the performances really come through; these are all the in-game meshes and rigs:

DISCLAIMER: All of the info above and more is publicly available in our SIGGRAPH 2014 course notes.

posted by Chris at 4:40 AM  

Sunday, August 10, 2014

RYSE AT SIGGRAPH 2014

ryse_sigg

Crytek has won the SIGGRAPH 2014 award for ‘Best Real-Time Graphics’ with Ryse: Son of Rome, check it out in the Electronic Theater or Computer Animation Festival this week at SIGGRAPH.

We are also giving multiple talks:

I will be speaking in the asset production talk, along with Sascha Herfort and Lars Martinsson. It’s also the first course we have done at Crytek where the entire course is devoted to one of our projects, and we have 50+ pages of course notes going into the ACM Digital Library.

posted by Chris at 12:54 AM  

Monday, June 30, 2014

Wasted Time, Sunken Cost, and Working In a Team

sunk

YOUR APE ANCESTORS

Let’s say that you want to do something, like watch a movie. When you arrive and open your wallet to purchase a 10 dollar ticket, you notice you have lost a 10 dollar bill; in this situation, the majority of people (88%) buy a movie ticket anyway.

Let’s examine a slightly different situation, where you arrive at the theater but have misplaced your ticket; would you go buy another? Studies show that a majority of people (54%) would not re-purchase a ticket and watch the film. The situations are financially identical, but in the first you lost 10 dollars that wasn’t associated with the movie, while in the second you lost your ticket, 10 dollars that was specifically allotted to that task, and that loss stings more.

This is a great example of the Sunk Cost Fallacy. Kahneman and Tversky are two researchers who have spent a lot of their careers looking at loss aversion and decision theory. The bottom line is, it’s human nature that the more you invest in something, the harder it is to abandon it. As a Technical Artist, you will find yourself in a position where you are the decision-maker; don’t let your ape ancestors trick you into making a poor decision.

..since all decisions involve uncertainty about the future the human brain you use to make decisions has evolved an automatic and unconscious system for judging how to proceed when a potential for loss arises. Kahneman says organisms that placed more urgency on avoiding threats than they did on maximizing opportunities were more likely to pass on their genes. So, over time, the prospect of losses has become a more powerful motivator on your behavior than the promise of gains. Whenever possible, you try to avoid losses of any kind, and when comparing losses to gains you don’t treat them equally. – You Are Not So Smart

51809459

IN PRODUCTION

As a Technical Artist in a position to lead or direct a team, you will often be the person signing off tools or features you and your team have requested. How many times have you been in the following situation:

A feature or tool is requested. Joe, a genius ‘lone wolf’ programmer, receives the task; he is briefed and told to update the customers periodically or ask them in case he needs any clarification. Now, sometimes what happens is what my brother likes to call ‘The Grand Reveal’. It’s where, for whatever reason, Joe sits in his corner working hard on a task, not involving anyone, and on the last day he valiantly returns from the mountain top and presents something that is unfortunately neither what was requested nor what was needed.

In this situation, you get together with his Lead and point out that what is delivered is not what was requested; he will more than likely reply “But Joe spent four weeks on this! Surely we can just use this until Joe can later rework it?”

No, you really can’t. Joe needs to know he works on a team, and that people rely on his work. Nothing gets people to deliver what they are supposed to next time like being forced to redo their work. I guarantee you, next time Joe will be at your team’s desks any time he has a question about the tool or feature he is working on. You know the needs of your project or team; it’s important that you do not compromise those because someone wasted time running off in the wrong direction or has problems working in a team.

I’m sure Joe is a really smart guy, but he’s also now four weeks behind.

 

HOW TO AVOID SINKING CASH IN WASTED EFFORT

Anything that is wasted effort represents wasted time. The best management of our time thus becomes linked inseparably with the best utilization of our efforts.
– Ted Engstrom

CREATE ‘FEATURE BRIEFS’

A Feature Brief is a one page document that serves as a contract between the person requesting a feature and the one implementing it. My Feature Briefs outline three main things:

  1. A short description of the feature or tool
  2. Its function – how does it work, what are the expected results?
  3. Its justification – why is it needed? What is the problem that needs to be solved?

It’s important that work not begin until both parties agree on all terminology and requests in the feature brief; again, treat it as a contract. It’s worth mentioning that Feature Briefs aren’t always needed, but they’re a great way to make sure that goals are clearly defined, everyone’s on the same page, and there is zero wiggle room for interpretation. Here is an example Feature Brief for the first Pose Driver we developed at Crytek.

GATED DEVELOPMENT

Work with Joe’s Lead or Manager to set up ‘Gates’; it’s important that he gets the feedback as early as possible if he’s going down the wrong track. I understand that bothering people halfway through a task may not be kosher in Agile development, but never just assume that someone will deliver what you need on the last day of a sprint.

dilbert

Break down the goal into tasks whose progress can be reviewed; it’s important that you, the primary stakeholder, are involved in signing off these gates. Any gated process is only as useful as the person signing off the work. The above comic may seem harsh, but it’s vitally important that the stakeholder is involved in reviewing work. Joe’s manager has a vested interest in Joe moving on to his next tasks; you have a vested interest in the tool or feature being what your team, the company, and whomever else needs.

Perhaps Joe will first present an outline, or maybe after taking a detailed look at the problem, Joe has a better solution he would like to pitch and you all agree to change the Feature Brief. The next gate would be to evaluate a working prototype. You probably know a lot about the feature as you requested it: are there any gotchas, any things that just won’t work or have been overlooked? Last is usually a more polished implementation and a user interface.

check_progress

ALWAYS CHECK THE PROGRESS OF EVERYTHING

If Joe has a Lead or Manager, check with them; no need to bother Joe, that’s what the others are there for. If you ask them details about where he’s at, more often than not they will offer for you to speak with him or get you an update. It’s just important to understand that if Joe delivers something that’s not what you need, it’s your fault too. Joe is only a genius in the trenches; it’s your job to make sure that he’s not barking up the wrong tree and wasting company time. It may be tempting, but never allow these guys to shoot themselves in the foot; if you think he’s not on the right track, you need to do something about it. Even without gated development, frequently check the progress of items that are important to you. The atmosphere should be that of a relay race: you are ready to accept the baton, and it needs to be what was agreed upon or you all fail.

hh8ocms9

NEVER SETTLE FOR A HALF-BAKED TEMPORARY SOLUTION YOU CANNOT LIVE WITH

More often than not, whatever Joe did in the time he had allotted is going to be what you ship with. If you agree he will return to address the issues later, make sure that when this doesn’t happen, your team can still be successful. Nothing should be higher priority than a mistake that holds up another team. I am sure you feel this way when it’s your team: when a rig update from last week is causing all gun holster keys to be lost on animation export, it’s important to address that before new work. The same can be said for Joe’s work; don’t make it personal, but he is now behind, your guys are relying on him, and it should be high priority for him to deliver the agreed-upon work.

posted by Chris at 12:02 AM  

Saturday, August 24, 2013

Ryse at the Anaheim Autodesk User Event

I have been working on Ryse for almost two years now; it’s one of the most amazing projects I have had the chance to work on. The team we have assembled is just amazing, and it’s great to be in the position to show people what games can look like on next-gen hardware. Autodesk asked us to come out to Anaheim and talk about some of the pipeline work we have been doing, and it’s great to finally be able to share some of this stuff.

A lot of people have been asking about the fidelity, like ‘where are all those polygons?’ If you look at the video, you will see that the regular Romans actually have leather ties modeled that deform with the movement of the plates, and something that might never be noticed: deforming leather straps underneath the plates, modeled and rigged, holding together every piece of Lorica Segmentata armor, and underneath that: a red tunic! Ryse is a labor of love!

We’re all working pretty hard right now, but it’s the kind of ‘pixel fucking’ that makes great art -we’re really polishing, and having a blast. We hope the characters and world we have created knock your socks off in November.

posted by Chris at 11:16 PM  

Friday, January 18, 2013

Moving to ‘Physically-Based’ Shading

damo_engine

At the SIGGRAPH Autodesk User Group we spoke a lot about our character technology and our switch to Maya. One area that we haven’t spoken so much about is the next-gen updates to our shading and material pipeline; however, Nicolas and I have an interview out in Making Games where we talk about that in detail publicly for the first time, so I can mention it here. One of the reasons we have really focused on character technology is that it touches so many departments and is a very difficult issue to crack; at Crytek we already have a strong history of lighting and rendering.

What is ‘Physically-Based’ Shading?

The first time I ever encountered a physically-based pipeline was when working at ILM. The guys had gotten tired of having to create different light setups and materials per shot or per sequence. Moving to a more physically-based shading model meant that we would not waste so much time re-lighting and tweaking materials, but would also get a more natural, better initial result, quicker. [Ben Snow’s 2010 PBR SIGGRAPH Course Slides]

WHAT IS MEANT BY ‘PHYSICAL’

image credit: http://myphysicswebschool.blogspot.de/

A physically-based shading model reacts much more like a real-world light simulation. One of the biggest differences is that the amount of reflected light can never be more than the incoming amount that hit the surface; older lighting models tended to have overly bright and overly broad specular highlights. With the Lambert/Blinn-Phong model it was possible to have many situations where a material emitted more light than it received. An interesting caveat of physically-based shading is that the user no longer has control over the specular response (more under ‘Difficult Transition’ below). Because the way light behaves is much more realistic and natural, materials authored for this shading model work equally well in all lighting environments.

Geek Stuff: ‘Energy conservation’ is a term that you might often hear used in conjunction with physically-based lighting; here’s a quote from the SIGGRAPH ’96 course notes that I always thought was a perfect explanation of reflected diffuse and specular energy:

“When light hits an object, the energy is reflected as one of two components; the specular component (the shiny highlight) and the diffuse (the color of the object). The relationship of these two components is what defines what kind of material the object is. These two kinds of energy make up the 100% of light reflected off an object. If 95% of it is diffuse energy, then the remaining 5% is specular energy. When the specularity increases, the diffuse component drops, and vice versa. A ping pong ball is considered to be a very diffuse object, with very little specularity and lots of diffuse, and a mirror is thought of as having a very high specularity, and almost no diffuse.”

PHYSICALLY- PLAUSIBLE

It’s important to understand that everything is a hack; whether it’s V-Ray or a game engine, we are just talking about different levels of hackery. Game engines often take the cake for approximations and hacks; one of my guys once said, ‘Some people just remove spec maps from their pipeline and all of a sudden they’re physically-based’. It’s not just the way our renderers simulate light that is an approximation; it’s important to remember that we feed the shading model with physically plausible data as well. When you make an asset, you are making a material that is trying to mimic certain physical characteristics.

DIFFICULT TRANSITION

Once physics get involved, you can cheat much less, and in film we cheeeeeaaat. Big time. Ben Snow, the VFX Supe who ushered in the change to a physically-based pipeline at ILM, was quoted in VFXPro as saying: “The move to the new [pipeline] did spark somewhat of a holy war at ILM.” I mentioned before that the artist loses control of the specular response; in general, artists don’t like losing control, or adopting new ways of doing things.

WHY IT IS IMPORTANT FOR GAMES AND REAL-TIME RENDERING

Aside from the more natural lighting and rendering, in an environment where the player determines the camera, and often the lighting, it’s important that materials work under all possible lighting scenarios. As the Product Manager of Cinebox, I was constantly having our renderer compared to Mental Ray, PRMan, and others; the team added BRDF support and paved the way for physically-based rendering, which we hope to ship in 2013 with Ryse.

microcompare05

General Overview for Artists

At Crytek, we have always added great rendering features, but never really took a hard focus on consistency in shading and lighting. Like ILM in my example above, we often tweaked materials for the lighting environment they were to be placed in.

GENERAL RULES / MATERIAL TYPES

Before we start talking about the different maps and material properties, you should know that in a physically-based pipeline you will have two slightly different workflows, one for metals, and one for non-metals. This is more about creating materials that have physically plausible values.

Metals:

  • The specular color for metal should always be above sRGB 180
  • Metal can have colored specular highlights (for gold and copper for example)
  • Metal has a black or very dark diffuse color, because metals absorb all light that enters underneath the surface, they have no ‘diffuse reflection’

Non-Metals:

  • Non-metal has monochrome/gray specular color. Never use colored specular for anything except certain metals
  • The sRGB color range for most non-metal materials is usually between 40 and 60. It should never be higher than 80/80/80
  • A good clean diffuse map is required

GLOSS

gloss_chart

At Crytek, we call the map that determines the roughness the ‘gloss map’; it’s actually the inverse of roughness, but we found this easier to author. This is by far one of the most important maps, as it determines the size and intensity of specular highlights, but also the contrast of the cube map reflection, as you see above. A good detail normal map can make a surface feel like it has a certain ‘roughness’, but you should start thinking about the gloss map as adding a ‘microscale roughness’. Look above at how, as the roughness increases, so does the breadth of the specular highlight. Here is an example from our CryENGINE documentation that was written for Ryse:

click to enlarge

click to enlarge

click to enlarge

click to enlarge

DIFFUSE COLOR

Your diffuse map should be a texture with no lighting information at all. Think of a light with a value of ‘100’ shining directly onto a polygon with your texture. There should be no shadow or AO information in your diffuse map. As stated above, a metal should have a completely black diffuse color.

Geek Stuff: Diffuse can also be referred to as ‘albedo’; the albedo is the measure of diffuse reflectivity. This term is primarily used to scare artists.

SPECULAR COLOR

As previously discussed, non-metals should only have monochrome/gray-scale specular color maps. Specular color is a real-world physical value, and your map should be basically flat color; you should use existing values and not introduce noise or variation. The spec color map is not a place to be artistic: it stores real-world values. You can find many tables online that have plausible values for specular color; here is an example:

Material | sRGB Color | Linear (Blend Layer)
Water | 38 38 38 | 0.02
Skin | 51 51 51 | 0.03
Hair | 65 65 65 | 0.05
Plastic / Glass (Low) | 53 53 53 | 0.03
Plastic (High) | 61 61 61 | 0.05
Glass (High) / Ruby | 79 79 79 | 0.08
Diamond | 115 115 115 | 0.17
Iron | 196 199 199 | 0.57
Copper | 250 209 194 | N/A
Gold | 255 219 145 | N/A
Aluminum | 245 245 247 | 0.91
Silver | 250 247 242 | N/A

If a non-metal material is not in the list, use a value between 45 and 65.

Geek Stuff: SPECULAR IS EVERYWHERE: In 2010, John Hable did a great post showing the specular characteristics of a cotton t-shirt and other materials that you wouldn’t usually consider to have specular.

EXAMPLE ASSET:

Here you can see the maps that generate this worn, oxidized lion sculpture.

rust

click to enlarge

rust2

EXAMPLES IN AN ENVIRONMENT

640x

See above how there are no variations in the specular color map? See how the copper items on the left have a black diffuse texture? Notice there is no variation in the solid colors of the specular color maps.

SETTING UP PHOTOSHOP

color_settings

In order to create assets properly, we need to set up our content creation software properly, in this case Photoshop. If you go to Edit > Color Settings…, set the dialog like the above. It’s important that you author textures in sRGB.

Geek Stuff: We author in sRGB because it gives us more precision in darker colors, and it reduces banding artifacts. The eye has 4.5 million cones that can perceive color, but 90 million rods that perceive luminance changes. Humans are much more sensitive to contrast changes than color changes!

Taking the Leap: Tips for Leads and Directors

New technologies that require paradigm shifts in how people work, or how they think about reaching an end artistic result, can be difficult to integrate into a pipeline. At Crytek I am the Lead/Director in charge of the team that is making that initial shift to physically-based lighting; I also led the reference trip and managed the hardware requests to get key artists on calibrated wide-gamut display devices. I am just saying this to put the next items in some kind of context.

QUICK FEEDBACK AND ITERATION

It’s very important that your team be able to test their assets in multiple lighting conditions. The easiest route is to make a test level where you can cycle lighting conditions from many different game levels, or sampled lighting from multiple points in the game. The default light in this level should be broad daylight IMO, as it’s the hardest to get right.

USE EXAMPLE ASSETS

I created one of the first example assets for the physically-based pipeline. It was a glass inlay table that I had at home, which had wooden, concrete (grout), metal, and multi-colored glass inlay. This asset served as a reference asset for the art team. Try to find an asset that can properly show the guys how to use gloss maps; IMO, understanding how roughness affects your asset’s surface characteristics is maybe the biggest challenge when moving to a physically-based pipeline.

TRAIN KEY PERSONNEL

As with rolling out any new feature, you should train a few technically-inclined artists to help their peers along. It’s also good to have the artists give feedback to the graphics team as they begin really cutting their teeth on the system. On Ryse, we are doing the above, but also dedicating a single technical artist to helping with environment art-related technology and profiling.

CHEAT SHEET

It’s very important to have a ‘cheat sheet’; this is a sheet we created on the Ryse team to allow an artist to use the color picker to sample common ‘plausible’ values.

SPEC_Range_new.bmp

click to enlarge

HELP PEOPLE HELP THEMSELVES

We have created a debug view that highlights assets whose specular color is not in a physically-plausible range. We are very much in favor of making tools to help people be responsible, and to validate/highlight work that is not. We also allowed people to set solid specular values in the shader to limit memory consumption on simple assets.

CALIBRATION AND REFERENCE ACQUISITION

calibrate

Above are two things that I actually carry with me everywhere I go: the X-Rite ColorChecker Passport, and the Pantone Huey Pro monitor calibration toolset. Both are very small, and can be carried in a laptop bag. I will go into reference data acquisition in another post. On Ryse we significantly upgraded our reference acquisition pipeline and scanned a lot of objects/surfaces in the field.

 

TECHNICAL IMPROVEMENTS BASED ON PRODUCTION USE

Nicolas Shulz has presented many improvements made based on production use at GDC 2014. His slides are here. He details things like the importance of specular filtering to preserve highlights as objects recede into the distance, and why we decided to couple normals and roughness.

UPDATE: We’ve now shipped Ryse, and I have tried to update the post a little. I was the invited speaker at HPG 2014, where I touched on this topic a bit, so I can now update this post with some details and images (see Tips for Leads and Directors above). Nicolas also spoke at GDC 2014, and I have linked to his slides above. Though this post focuses on environments, in the end, with the amount of armor on the characters, the PBR pipeline was really showcased everywhere. Here’s an image of multiple passes of Marius’ final armor:

[Image: breakdown of multiple render passes on Marius’ final armor]

posted by Chris at 7:26 PM  

Wednesday, January 9, 2013

Raucous Ball of Noise


I can’t remember the last time I had a new year’s resolution. But this year I decided to go for it.

A friend and I were joking that we increasingly feel like Producers: how we spend a large chunk of our time just making sure that things are moving. That a meeting has action items, or minutes. That tasks are scoped, their dependencies tracked, have resources assigned, or have dates on a calendar. That a process has proper gates to allow for course correction, etc. I now spend a majority of my time writing emails, attending meetings, or talking at desks.

Death by Mail

But what is crippling is the email. I feel I have made a career out of always trying to be helpful, but I was surprised by how easily I reply to anything someone sends me, and by how willing people are to just ‘go hunting with a shotgun’ and mail 15 others instead of trying to have a discussion with the right person. Many of the mails I saw myself spending time on were threads involving many people and important topics; I felt the need to be involved, but we rarely seemed to come to solid decisions, just running commentary. These mails often had more than 10 people added in CC ‘for awareness’, but then those people felt the need to contribute their opinion in some way.

It turned the simplest discussion into a raucous ball of noise, which often then required a meeting to decide how to progress.

The meetings were more successful, I think in part due to the fact that only the people who needed to be involved in the decision were invited. Unfortunately, I had often spent time on the mail thread to avoid the need for a meeting, only to find myself reiterating my points in a meeting the next day.

I looked for the day on which I wrote the fewest mails: the number was ~35, and it was a recent sick day when I had stayed home.

Small Adjustment, Big Victory

So I decided to pull myself out of this; after all, it is somewhat self-induced. Of all the options, the best seemed to be limiting myself to 10 work emails a day. All other communication would be in person, in meetings, or on the phone.

I didn’t think this would have the impact it did.

From this, other things started to fall into place. I really disliked how standard operating procedure increasingly felt like constantly looking for dropped balls. I need to let dependencies and other departments drop their balls, and hope that they will learn from it, or that someone else is watching. In essence: trust people more, and as a by-product, spend more time being a Director and less a Producer.

10 emails a day forced me to choose carefully which email discussions I want to be involved in. I was not respecting my own time, and this arbitrary rule forced me to do so. As a result, it lets me spend more time on Art Technology initiatives, looking at the project, talking with my team, and giving proper direction.

I can’t reject meeting invites, or ignore mails, but this little adjustment has really helped me more than I thought it would.

posted by Chris at 2:42 AM  

Saturday, April 21, 2012

Crytek Cinema Sandbox, FMX Talk

I can finally talk about something I have been working on for the past two years. One of the reasons I returned to Crytek was to push the use of game engines in linear content creation like film and television. On Avatar I saw how much time and effort went into layout, blocking, virtual sets, etc. The tools were archaic, and the feedback loop was abysmal at times. In games we have to lay out massive levels that people can roam through for 8-15 hours or more, and CryEngine’s tools are some of the best for that.

I have been working as Product Manager with a small team of great guys, where I basically define the goals and backlog. It’s thrilling to finally get to see things like Catmull-Clark subdivision at runtime, multi-channel EXR output, and Alembic support. It’s been really fun to define what the product is and prioritize features largely without external dependencies or politics; I thank Crytek for trusting me to helm such a project.

We had a live demo kiosk at GDC; check out the Cinema Sandbox Website for more info.

I will be speaking at FMX about CineBox and the whole idea of using game engines for previs and virtual production: The Long Road to Film / Game Convergence

posted by admin at 12:35 PM  

Thursday, August 26, 2010

Perforce Triggers in Python (Pt 1)

Perforce is a wily beast. A lot of companies use it, but I feel few people outside of the IT department really have to deal with it much. As I work myself deeper and deeper into the damp hole that is asset validation, I have been writing a lot of Python to deal with certain issues, but always scripts that work from the outside.

Perforce has a system that allows you to write scripts that are run, server side, when any number of events are triggered. You can use many scripting languages, but I will only touch on Python.

Test Environment

To follow along here, you should set up a test environment. Perforce is freely downloadable, and free to use with 2 users. Of course, you are going to need Python and P4Python. So get your server running and add two users: a regular user and an administrator.

Your First Trigger

Let’s create the simplest possible Python script. It will be a submit trigger that says ‘Hello World’ and then passes or fails. If it passes, the item will be checked in to Perforce; if it fails, it will not. Exiting with a return code of ‘1’ is considered a fail, ‘0’ a pass.

import sys

print 'Hello World!'
print 'No checkin for you!'
sys.exit(1)  # returning 1 denies the submit

Ok, so save this file as hello_trigger.py. Now go to a command line and enter ‘p4 triggers’; this will open a text document. Edit that document to point to your trigger, like so (but point to the location of your script on disk):

Triggers:
	hello_trigger change-submit //depot/... "python X:/projects/2010/p4/hello_trigger.py"

Close/save the trigger TMP file; you should see ‘Triggers saved.’ echoed at the prompt. Now, when we try to submit a file to the depot, we will get this:

So: awesome, you just DENIED your first check-in!

Connecting to Perforce from Inside a Trigger

So we are now denying check-ins, but let’s try some other things. Let’s connect to Perforce from inside a trigger.

import sys
from P4 import P4, P4Exception

p4 = P4()

try:
    #use whatever your admin l/p was
    #this isn't the safest, but it works at this beginner level
    p4.user = "admin"
    p4.password = "admin"
    p4.port = "1666"
    p4.connect()
    info = p4.run("info")
    print info
    sys.exit(1)

#this will return any errors
except P4Exception:
    for e in p4.errors: print e
    sys.exit(1)

So now when you try to submit a file to the depot, you will get this:

Passing Info to the Trigger

Now we are running triggers and accepting or denying check-ins, but we really don’t know much about them. Let’s try to get enough info that we could make a decision about whether or not we want the file to pass validation. Let’s make another Python trigger, test_trigger.py, and query something from the Perforce server in the submit trigger. To do this we need to edit our trigger file like so:

Triggers:
	test change-submit //depot/... "python X:/projects/2010/p4/test_trigger.py %user% %changelist%"

This will pass the user and changelist number into the Python script as args, the same way dragging/dropping passed args to Python in my previous example. So let’s set that up: save the script from before as ‘test_trigger.py’ as shown above, and add the following:

import sys
from P4 import P4, P4Exception
 
p4 = P4()
describe = []
 
try:
	p4.user = "admin"
	p4.password = "admin"
	p4.port = "1666"
	p4.connect()
 
except P4Exception:
	for e in p4.errors: print e
	sys.exit(1)
 
print str(sys.argv)
describe = p4.run('describe',sys.argv[2])
print str(describe)
 
p4.disconnect()
sys.exit(1)

So, as you can see, it has returned the user and changelist number:

However, for this changelist to be useful, we query p4, asking the server to describe the changelist. This returns a lot of information about the changelist.

Where to Go From here

The few simple things shown here really give you the tools to do many more things. Here are some examples of triggers that can be created with the know-how above (the first of these is sketched just after the list):

  • Deny check-ins of a certain filetype (like deny compiled source files/assets)
  • Deny check-ins whose hash digest matches an existing file on the server
  • Deny/allow a certain type of file check-in from a user in a certain group
  • Email a lead any time a file in a certain folder is updated
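
For instance, here is a rough sketch of that first bullet: a change-submit trigger that denies compiled files. It reuses the beginner-level admin credentials from above; the extension list is just a placeholder, and I'm assuming P4Python's tagged output, where the changelist's files come back from describe under a 'depotFile' key.

import sys
from P4 import P4, P4Exception

# extensions we never want checked in (placeholder list, adjust to taste)
DENIED_EXTS = ('.pyc', '.obj', '.exe')

p4 = P4()
try:
    p4.user = "admin"
    p4.password = "admin"
    p4.port = "1666"
    p4.connect()
except P4Exception:
    for e in p4.errors: print e
    sys.exit(1)

# args come from the trigger line: %user% %changelist%
changelist = sys.argv[2]
describe = p4.run('describe', changelist)[0]

# with tagged output, the files in the change are assumed to be listed under 'depotFile'
for depot_file in describe.get('depotFile', []):
    if depot_file.lower().endswith(DENIED_EXTS):
        print 'Denied: %s looks like a compiled file, please do not submit it.' % depot_file
        p4.disconnect()
        sys.exit(1)

p4.disconnect()
sys.exit(0)

Hook it up with a change-submit line in ‘p4 triggers’ just like the earlier examples, and any submit containing one of those extensions will be rejected.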

Did you find this helpful? What creative triggers have you written?

posted by admin at 12:33 AM  

Wednesday, April 7, 2010

PyQt4 in wSciTE

I have gotten back into some PyQt in my spare time, just because it’s what I used on a daily basis at the last place I worked. However, I had trouble getting it to run in my text editor of choice (SciTE).

I couldn’t find a solution even with 45 minutes of googling. When trying to import PyQt4, it would give me a DLL error, but I could paste the code into IDLE and it would execute fine. I found a solution by editing the Python preferences of SciTE. I noticed that it wasn’t running Python scripts the way IDLE was, but compiling them (?). I edited the last line to just run the script, and voila! It worked.

Find this line (usually the last):

command.1.*.py=python -c "import py_compile; py_compile.compile(r'$(FilePath)')"

And change it to:

command.1.*.py=python "$(FilePath)"

I don’t really know if this messes anything else up, but it does allow the PyQt4 libs to load and do their thing.
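
If you want a quick sanity check that the change took, a minimal PyQt4 script along these lines (nothing SciTE-specific) should now run from the editor without the DLL error:

import sys
from PyQt4 import QtGui

# the smallest possible PyQt4 app: if this window appears, the libs loaded fine
app = QtGui.QApplication(sys.argv)
label = QtGui.QLabel('PyQt4 loaded from SciTE!')
label.show()
sys.exit(app.exec_())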

posted by admin at 8:04 PM  

Thursday, December 31, 2009

Avatar: Aspect Ratio Note

[Image: Avatar aspect ratio comparison]

Size Matters.

Theaters presenting Avatar in 2D and Real3D show a cropped 2.35:1 version, while IMAX 3D shows the original work at 1.85:1. You might not think that this matters, but you are losing a lot of the image in the crop: at the same width, going from 1.85:1 to 2.35:1 trims roughly 21% of the picture height (1 - 1.85/2.35 ≈ 0.21). If you want to see it as the artists/director intended, it looks like IMAX 3D is your only option.

posted by admin at 7:41 PM  
