Stumbling Toward 'Awesomeness'

A Technical Art Blog

Wednesday, July 5, 2017

Skin Weights Savior

Lots of people were interested in Deformer Weights and ways to save/load skin weights faster. Trowbridge pointed out that the Maya Python API now allows you to get and set all weights in one call, no loops needed, just like the C++ call. Aaron Carlisle on our team here at Epic had noticed the same thing, and took the time to write up a post going over it here:

Using GetWeights and SetWeights in the Maya Python API
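
For those who want to try it, here is a minimal sketch of that one-call get/set using the Python API 2.0. The node and mesh names are placeholders, and it assumes the skinCluster already exists:

# a minimal sketch: grab every weight in one call, push them back in one call
import maya.api.OpenMaya as om
import maya.api.OpenMayaAnim as oma

def roundtrip_weights(skin_name, mesh_name):
    sel = om.MSelectionList()
    sel.add(skin_name)
    sel.add(mesh_name)
    skin_fn = oma.MFnSkinCluster(sel.getDependNode(0))
    mesh_dag = sel.getDagPath(1)

    # one component object covering every vertex on the mesh
    comp_fn = om.MFnSingleIndexedComponent()
    components = comp_fn.create(om.MFn.kMeshVertComponent)
    comp_fn.setCompleteData(om.MFnMesh(mesh_dag).numVertices)

    # flat MDoubleArray of (vertexCount * influenceCount) weights, no loops
    weights, influence_count = skin_fn.getWeights(mesh_dag, components)

    influences = om.MIntArray(list(range(influence_count)))
    skin_fn.setWeights(mesh_dag, components, influences, weights, True)
    return weights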

Also, it looks like you can get/set blind data in one go now… 😮

posted by Chris at 11:20 PM  

Tuesday, June 6, 2017

Crysis Technologies

In 2006, the team at Crytek was hard at work trying to come up with ways to ship Crysis –we had definitely bitten off more than we could chew. While cleaning out my HD, I found some videos that are now a decade old; it’s interesting to see, that’s for sure!

Facial Editor

We didn’t have a clue how we would animate all the lines of dialog that were required for the game. 3D Studio Max, which we used for animation at the time, offered no real means of animating faces; now, a decade later, Max and Maya still have zero offering to help with facial rigging or animation. So we decided to write our own. Stephen Bender (Animation Lead) and I worked with Timur Davidenko and Michael Smith (Programmers) on this tool. Marco Corbetta wrote the 2.5D head/facial tracker. Here’s a video:

 

The user fed the system a text file, an audio file, and a webcam video of themselves. It generated the mouth phonemes from the text/audio, and the upper two-thirds of the face from the video. The system generated this animation on the same interface the animators used to animate, so it was easily editable. It shipped with the MS speech DLL, but you could swap that for Annosoft if you licensed it. Crysis shipped with every character having 98 blendshapes, driven by Facial Editor curves/animation using non-linear expressions. Imagine shipping a game today without having animators touch a face in a DCC app!

SequencePane


PhotoBump

Many people know that Crytek released the first commercially available normal map generator, PolyBump, but rarely has anyone heard of its companion: PhotoBump. It was created by Marco Corbetta around the same time, but released only to CryEngine licensees in 2005. It was probably one of the first commercial photogrammetry apps, and definitely one of the first uses of photogrammetry in games. Much of the rocky terrain in Crysis was created with the help of PhotoBump! Marco also stamped/derived high frequency details from the diffuse, which I hadn’t seen others do until sometime after.

SIGGRAPH Best Realtime Graphics 2007

Here’s the SIGGRAPH ET reel from the year we released Crysis. I still can’t believe some of this stuff, like the guy pathfinding across the bridge of constrained boards and pieces of rope! I actually cut and edited this video myself back then, rendering it all out from the engine as well!

posted by Chris at 10:25 PM  

Friday, September 2, 2016

SIGGRAPH Realtime Live Demo Stream

The stream of our SIGGRAPH Realtime Live demo is up on teh internets. If you haven’t seen the actual live demo, check it out!

It feels amazing to win the award for Best Realtime Graphics amongst such industry giants. There are so many companies from so many industries participating now, and the event has grown so much. It feels really humbling to be honored with this for a third year; no pressure!

posted by Chris at 9:26 AM  

Thursday, October 23, 2014

Destiny Rigging/Animation Slides and Videos Up


The Destiny talks at SIGGRAPH were really interesting; the material just went up, and you should definitely check it out:

http://advances.realtimerendering.com/destiny/siggraph2014/animation/index.html

posted by Chris at 11:37 AM  

Sunday, August 10, 2014

RYSE AT SIGGRAPH 2014


Crytek has won the SIGGRAPH 2014 award for ‘Best Real-Time Graphics’ with Ryse: Son of Rome. Check it out in the Electronic Theater or Computer Animation Festival this week at SIGGRAPH.

We are also giving multiple talks:

Sascha Herfort, Lars Martinsson, and I will be speaking in the asset production talk. It’s also the first course we have done at Crytek where the entire course is devoted to one of our projects, and we have 50+ pages of course notes going into the ACM Digital Library.

posted by Chris at 12:54 AM  

Tuesday, July 15, 2014

RigPorn: The Last of Us

I realize most of you have seen this, but for those of you who haven’t, Judd walks people through TLOU rigs with a focus on facial as well. Really great stuff.

posted by Chris at 3:15 PM  

Monday, July 7, 2014

Geodesic Voxel Binding in Maya 2015

If you’re like me, your ears will perk up at any technology promising a better initial skin bind. So I took a look at the new geodesic voxel binding in Maya 2015. I couldn’t find much information about it online, so I decided to do the usual and write the post I would have wanted to find when I googled. I hope it’s useful.

Background

This new way of skin binding was presented by Autodesk at SIGGRAPH 2013.

(image: the Nanosuit from the Crysis franchise)

Here’s a link to the SIGGRAPH 2013 white paper: Geodesic Voxel Binding for Production Character Meshes, definitely worth checking out. I do like how Autodesk is now using the word ‘Production’ a lot more. It seems they are no longer using simple test cases to test pipelines and workflows. Above, they used our Nanosuit, from the Crysis franchise. Here’s the full video that accompanies the talk: [LINK]

How It Works

(image: voxel classification diagram)

The basic idea is that it voxelizes characters into three types of voxels: skeleton, interior, and boundary. This way it tries to eliminate cross-talk. At ILM we had a binding solution in Zeno that used mesh normals, and this eliminated crosstalk between manifold parts like fingers, but most of this paper focuses on skinning non-manifold meshes: meshes with intersecting parts, open holes, etc.
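
If you want to drive it from script rather than the bind options dialog, here is a minimal sketch. The joint/mesh names are placeholders, and the flags are as I remember them from the Maya 2015 docs, so double-check them against your version:

# a rough sketch of a geodesic voxel bind from Python (Maya 2015+)
import maya.cmds as cmds

def geodesic_bind(joints, mesh, resolution=1024):
    return cmds.skinCluster(
        joints, mesh,
        toSelectedBones=True,                    # only use the joints passed in
        bindMethod=3,                            # 3 = geodesic voxel in 2015+
        geodesicVoxelParams=(resolution, True),  # voxel resolution, post validation
        maximumInfluences=4,                     # typical game-engine influence cap
        obeyMaxInfluences=True)

# usage: geodesic_bind(['root_JNT', 'hand_JNT'], 'marius_hand_MSH')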

In Practice

Here’s the hand of the Marius bust we send out for rigging tests. Notice that when it is skinned with Closest in Hierarchy, there is some significant crosstalk:

(image: finger weights from a Closest in Hierarchy bind)

Here’s an initial finger bind with the new algorithm. There’s still some crosstalk at 1024 voxel resolution (the highest possible), but it’s much better:

(image: finger weights from the geodesic voxel bind)

As someone who is very nitpicky about skinning, I find *any* crosstalk at all unacceptable, and it takes me about the same amount of time to clean the tiniest values as it does these larger ones. Here’s a closer look at some of the crosstalk from the geodesic bind:

(image: close-up of residual crosstalk weights from the geodesic bind)

Crosstalk isn’t just bad for deformation; these tiny little values are also inefficient and sloppy, especially if you are sending the mesh to a game engine.

Another area that requires significant cleanup is the underarm, where the serratus anterior lies. I thought the new approach would work very well here; unfortunately, the binding didn’t show a noticeable difference from previous methods.


(image: head mesh from the CryENGINE Asset Pack on Steam)

Few things are more difficult to skin than the human face. Here you can see traditional vs geodesic. I will say it’s definitely better than the old bind, but it still has issues. This is one of the first initial skin binds on a closed-mouth neutral bindpose I have seen that has no cross-talk between the lips. For the traditional bind on the left, I tweaked the falloffs across three different binds.

Multi-Threaded?


Another thing I like is a hint at a multi-threaded future. The binding process (voxel calculation, etc.) is multi-threaded. At Crytek, we even make hardware purchasing decisions based on Maya not being multi-threaded: we get animators the fastest dual-core CPUs, which gives them better interactive framerates and still leaves a second core free for a headless mayapy to export a long linear cutscene or animation. It’s nice to see Autodesk begin to think about multi-threading tools and processes.
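
That headless export is nothing exotic; here is a minimal sketch of the idea, with placeholder paths and an FBX export standing in for whatever your pipeline actually calls:

# a bare-bones mayapy batch export (run with mayapy, not the Maya UI)
import maya.standalone
maya.standalone.initialize(name='python')

import maya.cmds as cmds
cmds.loadPlugin('fbxmaya')  # needed for the 'FBX export' file type below

cmds.file('X:/cinematics/cutscene_010.ma', open=True, force=True)
cmds.file('X:/export/cutscene_010.fbx', exportAll=True, type='FBX export', force=True)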

In Conclusion

The new Geodesic Bind algorithm from Autodesk is a step forward. There’s still no free lunch, but I will be using this as my default bind in the future. I will update this post if I run into any problems or benefits not outlined here. It would be great if there was a voxel debug view, or the ability to dynamically drive voxel resolution with an input like vertex colors, a map, or polygon density.

Backwards Compatibility: New Nodes and Attrs

If you just want to use the latest Maya to try the feature, here are some gotchas. There is a new geomBind node, and some new attributes on shape nodes:

// Error: file: C:/Users/chris/Desktop/TechAnimationTest/TechAnimationTest/Head_Mesh_skin.ma line 28725: The skinCluster ‘skinCluster1’ has no ‘gb’ attribute. //
// Warning: file: C:/Users/chris/Desktop/crytek_sdk_head_a/head_a.ma line 27464: Unrecognized node type ‘geomBind’; preserving node information during this session. //
// Error: file: C:/Users/chris/Desktop/crytek_sdk_head_a/head_a.ma line 34: The mesh ‘eyes_MSH’ has no ‘.sdt’ attribute. //

The geomBind node stores ‘the post voxel validation state performed during the geodesic voxel bind algorithm’ and some other attributes. It has a message attr that connects to a skinCluster. The sdt attr on shapes is not related to skinning; it is a new ‘Subdivision Method’ attr for the OpenSubdiv support.


That said, it seems to work fine for me if I just delete that stuff; the skin weights are unaffected.
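
If you want to hand a 2015 scene back to an older Maya, this is all I do before saving (a quick sketch; it just assumes you no longer need the validation data the node stores):

# strip the 2015-only geomBind nodes so older versions stop complaining
import maya.cmds as cmds

geom_bind_nodes = cmds.ls(type='geomBind') or []
if geom_bind_nodes:
    cmds.delete(geom_bind_nodes)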

 

posted by Chris at 1:29 AM  

Monday, June 30, 2014

Wasted Time, Sunken Cost, and Working In a Team


YOUR APE ANCESTORS

Let’s say that you want to do something, like watch a movie. When you arrive and open your wallet to purchase a 10 dollar ticket, you notice you have lost a 10 dollar bill. The majority of people (88%) buy a movie ticket anyway.

Let’s examine a slightly different situation, where you arrive at the theater but have misplaced your ticket. Would you go buy another? Studies show that a majority of people (54%) would not re-purchase a ticket and watch the film. The situations are financially identical, but in the first you lost 10 dollars that wasn’t associated with the movie; in the second you lost your ticket, 10 dollars that was specifically allotted to that task, and that loss stings more.

This is a great example of the Sunk Cost Fallacy. Kahneman and Tversky are two researchers who have spent a lot of their careers looking at loss aversion and decision theory. The bottom line is, it’s human nature that the more you invest in something, the harder it is to abandon it. As a Technical Artist, you will find yourself in a position where you are the decision-maker, don’t let your ape ancestors trick you into making a poor decision.

..since all decisions involve uncertainty about the future the human brain you use to make decisions has evolved an automatic and unconscious system for judging how to proceed when a potential for loss arises. Kahneman says organisms that placed more urgency on avoiding threats than they did on maximizing opportunities were more likely to pass on their genes. So, over time, the prospect of losses has become a more powerful motivator on your behavior than the promise of gains. Whenever possible, you try to avoid losses of any kind, and when comparing losses to gains you don’t treat them equally. – You Are Not So Smart


IN PRODUCTION

As a Technical Artist in a position to lead or direct a team, you will often be the person signing off tools or features you and your team have requested. How many times have you been in the following situation:

A feature or tool is requested. Joe, a genius ‘lone wolf’ programmer, receives the task; he is briefed and told to update the customers periodically, or to ask them in case he needs any clarification. Now, sometimes what happens is what my brother likes to call ‘The Grand Reveal’. It’s where, for whatever reason, Joe sits in his corner working hard on a task, not involving anyone, and on the last day he valiantly returns from the mountain top and presents something that is unfortunately neither what was requested nor what was needed.

In this situation, you get together with his Lead and point out that what was delivered is not what was requested, and he will more than likely reply: “But Joe spent four weeks on this! Surely we can just use this until Joe can rework it later?”

No, you really can’t. Joe needs to know he works on a team, and that people rely on his work. Nothing gets people to deliver what they are supposed to next time like being forced to redo their work. I guarantee you that next time Joe will be at your team’s desks any time he has a question about the tool or feature he is working on. You know the needs of your project or team; it’s important that you do not compromise those because someone wasted time running off in the wrong direction or has problems working in a team.

I’m sure Joe is a really smart guy, but he’s also now four weeks behind.

 

HOW TO AVOID SINKING CASH IN WASTED EFFORT

Anything that is wasted effort represents wasted time. The best management of our time thus becomes linked inseparably with the best utilization of our efforts.
– Ted Engstrom

CREATE ‘FEATURE BRIEFS’

A Feature Brief is a one page document that serves as a contract between the person requesting a feature and the one implementing it. My Feature Briefs outline three main things:

  1. A short description of the feature or tool
  2. Its function – how does it work, and what are the expected results?
  3. Its justification – why is it needed? What problem needs to be solved?

It’s important that work not begin until both parties agree on all terminology and requests in the feature brief – again, treat it as a contract. It’s worth mentioning that Feature Briefs aren’t always needed, but they’re a great way to make sure goals are clearly defined, everyone’s on the same page, and there’s zero wiggle room for interpretation. Here is an example Feature Brief for the first Pose Driver we developed at Crytek.

GATED DEVELOPMENT

Work with Joe’s Lead or Manager to set up ‘gates’; it’s important that he gets feedback as early as possible if he’s going down the wrong track. I understand that bothering people halfway through a task may not be kosher in Agile development, but never just assume that someone will deliver what you need on the last day of a sprint.

(Dilbert comic)

Break down the goal into tasks whose progress can be reviewed; it’s important that you, the primary stakeholder, are involved in signing off these gates. Any gated process is only as useful as the person signing off the work. The above comic may seem harsh, but it’s vitally important that the stakeholder is involved in reviewing work. Joe’s manager has a vested interest in Joe moving on to his next tasks; you have a vested interest in the tool or feature being what your team, the company, and whomever else needs.

Perhaps Joe will first present an outline, or maybe, after taking a detailed look at the problem, Joe has a better solution he would like to pitch and you all agree to change the Feature Brief. The next gate would be to evaluate a working prototype. You probably know a lot about the feature, as you requested it –are there any gotchas, any things that just won’t work or have been overlooked? Last is usually a more polished implementation and a user interface.


ALWAYS CHECK THE PROGRESS OF EVERYTHING

If Joe has a Lead or Manager, check with them; no need to bother Joe, that’s what the others are there for. If you ask them details about where he’s at, more often than not they will offer for you to speak with him or get you an update. It’s just important to understand that if Joe delivers something that’s not what you need, it’s your fault too. Joe is only a genius in the trenches; it’s your job to make sure that he’s not barking up the wrong tree and wasting company time. It may be tempting, but never allow these guys to shoot themselves in the foot; if you think he’s not on the right track, you need to do something about it. Even without gated development, frequently check the progress of items that are important to you. The atmosphere should be that of a relay race: you are ready to accept the baton, and it needs to be what was agreed upon or you all fail.


NEVER SETTLE FOR A HALF-BAKED TEMPORARY SOLUTION YOU CANNOT LIVE WITH

More often than not, whatever Joe did in the time he had allotted is going to be what you ship with. If you agree he will return to address the issues later, make sure that when this doesn’t happen, your team can still be successful. Nothing should be higher priority than a mistake that holds up another team. I am sure you feel this way when it’s your team; when a rig update from last week is causing all gun holster keys to be lost on animation export, it’s important to address that before new work. The same can be said for Joe’s work. Don’t make it personal: he is now behind, your guys are relying on him, and it should be high priority for him to deliver the agreed upon work.

posted by Chris at 12:02 AM  

Saturday, August 24, 2013

Ryse at the Anaheim Autodesk User Event

I have been working on Ryse for almost two years now; it’s one of the most amazing projects I have had the chance to work on. The team we have assembled is just amazing, and it’s great to be in the position to show people what games can look like on next-gen hardware. Autodesk asked us to come out to Anaheim and talk about some of the pipeline work we have been doing, and it’s great to finally be able to share some of this stuff.

A lot of people have been asking about the fidelity, like ‘where are all those polygons?’ If you look at the video, you will see that even the regular Romans have leather ties modeled that deform with the movement of the plates, and something that might never be noticed: deforming leather straps underneath the plates, modeled and rigged, holding together every piece of Lorica Segmentata armor, and underneath that, a red tunic! Ryse is a labor of love!

We’re all working pretty hard right now, but it’s the kind of ‘pixel fucking’ that makes great art -we’re really polishing, and having a blast. We hope the characters and world we have created knock your socks off in November.

posted by Chris at 11:16 PM  

Friday, January 18, 2013

Moving to ‘Physically-Based’ Shading


At the SIGGRAPH Autodesk User Group we spoke a lot about our character technology and switch to Maya. One area that we haven’t spoken so much about is the next-gen updates to our shading and material pipeline; however, Nicolas and I have an interview out in Making Games where we talk about that in detail publicly for the first time, so I can mention it here. One of the reasons we have really focused on character technology is that it touches so many departments and is a very difficult issue to crack; at Crytek we already have a strong history of lighting and rendering.

What is ‘Physically-Based’ Shading?

The first time I ever encountered a physically-based pipeline was when working at ILM. The guys had gotten tired of having to create different light setups and materials per shot or per sequence. Moving to a more physically-based shading model meant that we would not waste so much time re-lighting and tweaking materials, and would also get a more natural, better initial result, quicker. [Ben Snow’s 2010 PBR SIGGRAPH Course Slides]

WHAT IS MEANT BY ‘PHYSICAL’


image credit: http://myphysicswebschool.blogspot.de/

A physically-based shading model reacts much more like real-world light. One of the biggest differences is that the amount of reflected light can never be more than the incoming amount that hit the surface; older lighting models tended to have overly bright and overly broad specular highlights. With the Lambert/Blinn-Phong model it was possible to have many situations where a material emitted more light than it received. An interesting caveat of physically-based shading is that the user no longer has control over the specular response (more under ‘Difficult Transition’ below). Because the way light behaves is much more realistic and natural, materials authored for this shading model work equally well in all lighting environments.

Geek Stuff: ‘Energy conservation’ is a term that you might often hear used in conjunction with physically-based lighting. Here’s a quote from the SIGGRAPH ’96 course notes that I always thought was a perfect explanation of reflected diffuse and specular energy:

“When light hits an object, the energy is reflected as one of two components; the specular component (the shiny highlight) and the diffuse (the color of the object). The relationship of these two components is what defines what kind of material the object is. These two kinds of energy make up the 100% of light reflected off an object. If 95% of it is diffuse energy, then the remaining 5% is specular energy. When the specularity increases, the diffuse component drops, and vice versa. A ping pong ball is considered to be a very diffuse object, with very little specularity and lots of diffuse, and a mirror is thought of as having a very high specularity, and almost no diffuse.”
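
Putting that quote into symbols (my own shorthand, not anything from the course notes): if k_d is the diffuse fraction and k_s the specular fraction of the reflected light, then

k_d + k_s = 1 \qquad \text{and} \qquad L_{\text{reflected}} \le L_{\text{incoming}}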

PHYSICALLY PLAUSIBLE

It’s important to understand that everything is a hack; whether it’s V-Ray or a game engine, we are just talking about different levels of hackery. Game engines often take the cake for approximations and hacks; one of my guys once said, ‘Some people just remove spec maps from their pipeline and all of a sudden they’re “physically-based”.’ It’s not just the way our renderers simulate light that is an approximation; it’s important to remember that we also feed the shading model physically plausible data. When you make an asset, you are making a material that tries to mimic certain physical characteristics.

DIFFICULT TRANSITION

Once physics gets involved, you can cheat much less, and in film we cheeeeeaaat. Big time. Ben Snow, the VFX Supe who ushered in the change to a physically-based pipeline at ILM, was quoted in VFXPro as saying: “The move to the new [pipeline] did spark somewhat of a holy war at ILM.” I mentioned before that the artist loses control of the specular response; in general, artists don’t like losing control, or adopting new ways of doing things.

WHY IT IS IMPORTANT FOR GAMES AND REAL-TIME RENDERING

Aside from the more natural lighting and rendering, in an environment where the player determines the camera, and often the lighting, it’s important that materials work under all possible lighting scenarios. As the Product Manager of Cinebox, I constantly had our renderer compared to mental ray, PRMan, and others; the team added BRDF support and paved the way for physically-based rendering, which we hope to ship in 2013 with Ryse.


General Overview for Artists

At Crytek, we have always added great rendering features, but never really took a hard focus on consistency in shading and lighting. Like ILM in my example above, we often tweaked materials for the lighting environment they were to be placed in.

GENERAL RULES / MATERIAL TYPES

Before we start talking about the different maps and material properties, you should know that in a physically-based pipeline you will have two slightly different workflows, one for metals, and one for non-metals. This is more about creating materials that have physically plausible values.

Metals:

  • The specular color for metal should always be above sRGB 180
  • Metal can have colored specular highlights (for gold and copper for example)
  • Metal has a black or very dark diffuse color; because metals absorb all light that enters beneath the surface, they have no ‘diffuse reflection’

Non-Metals:

  • Non-metal has monochrome/gray specular color. Never use colored specular for anything except certain metals
  • The sRGB color range for most non-metal materials is usually between 40 and 60. It should never be higher than 80/80/80
  • A good clean diffuse map is required

GLOSS

(image: gloss chart)

At Crytek, we call the map that determines the roughness the ‘gloss map’; it’s actually the inverse of roughness, but we found this easier to author. This is by far one of the most important maps, as it determines the size and intensity of specular highlights, but also the contrast of the cube map reflection, as you see above. A good detail normal map can make a surface feel like it has a certain ‘roughness’, but you should start thinking of the gloss map as adding a ‘microscale roughness’. Look above at how, as the roughness increases, so does the breadth of the specular highlight. Here is an example from our CryENGINE documentation that was written for Ryse:

(images: gloss examples from the CryENGINE documentation)

DIFFUSE COLOR

Your diffuse map should be a texture with no lighting information at all. Think of a light with a value of ‘100’ shining directly onto a polygon with your texture: there should be no shadow or AO information in your diffuse map. As stated above, a metal should have a completely black diffuse color.

Geek Stuff: Diffuse can also be referred to as ‘albedo’; the albedo is the measure of diffuse reflectivity. This term is primarily used to scare artists.

SPECULAR COLOR

As previously discussed, non-metals should only have monochrome/gray-scale specular color maps. Specular color is a real-world physical value, and your map should be basically flat color; use existing measured values and don’t introduce noise or variation. The spec color map is not a place to be artistic: it stores real-world values. You can find many tables online that have plausible values for specular color; here is an example:

Material                  sRGB Color     Linear (Blend Layer)
Water                     38 38 38       0.02
Skin                      51 51 51       0.03
Hair                      65 65 65       0.05
Plastic / Glass (Low)     53 53 53       0.03
Plastic (High)            61 61 61       0.05
Glass (High) / Ruby       79 79 79       0.08
Diamond                   115 115 115    0.17
Iron                      196 199 199    0.57
Copper                    250 209 194    N/A
Gold                      255 219 145    N/A
Aluminum                  245 245 247    0.91
Silver                    250 247 242    N/A

If a non-metal material is not in the list, use a value between 45 and 65.
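
The linear column appears to be just the standard sRGB-to-linear conversion of the values on the left; here is a quick sketch you can use to sanity-check a value that isn’t in the table (plain Python, nothing engine-specific):

# convert an 8-bit sRGB channel to linear; matches the 'Linear (Blend Layer)' column
def srgb_to_linear(value_8bit):
    c = value_8bit / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

for name, srgb in [('Water', 38), ('Skin', 51), ('Hair', 65), ('Iron', 199)]:
    print('%-6s %.2f' % (name, srgb_to_linear(srgb)))
# Water  0.02
# Skin   0.03
# Hair   0.05
# Iron   0.57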

Geek Stuff: SPECULAR IS EVERYWHERE: In 2010, John Hable did a great post showing the specular characteristics of a cotton t-shirt and other materials that you wouldn’t usually consider having specular.

EXAMPLE ASSET:

Here you can see the maps that generate this worn, oxidized lion sculpture.

(images: the lion sculpture and the texture maps that drive it)

EXAMPLES IN AN ENVIRONMENT

(image: in-engine scene shown with its diffuse and specular color maps)

See above how there is no variation in the solid colors of the specular color maps? And how the copper items on the left have a black diffuse texture?

SETTING UP PHOTOSHOP

(image: Photoshop Color Settings dialog)

In order to create assets properly, we need to set up our content creation software properly, in this case Photoshop. Go to Edit > Color Settings… and set the dialog like the above. It’s important that you author textures in sRGB.

Geek Stuff: We author in sRGB because it gives us more precision in darker colors and reduces banding artifacts. The eye has around 4.5 million cones that perceive color, but around 90 million rods that perceive luminance changes. Humans are much more sensitive to contrast changes than to color changes!

Taking the Leap: Tips for Leads and Directors

New technologies that require paradigm shifts in how people work, or how they think about reaching an end artistic result, can be difficult to integrate into a pipeline. At Crytek I am the Lead/Director in charge of the team that is making that initial shift to physically-based lighting; I also led the reference trip, and managed the hardware requests to get key artists on calibrated wide-gamut display devices. I am just saying this to put the next items in some kind of context.

QUICK FEEDBACK AND ITERATION

It’s very important that your team be able to test their assets in multiple lighting conditions. The easiest route is to make a test level where you can cycle lighting conditions from many different game levels, or sampled lighting from multiple points in the game. The default light in this level should be broad daylight IMO, as it’s the hardest to get right.

USE EXAMPLE ASSETS

I created one of the first example assets for the physically-based pipeline. It was a glass inlay table that I had at home, which had wooden, concrete (grout), metal, and multi-colored glass inlay. This asset served as a reference asset for the art team. Try to find an asset that can properly show the guys how to use gloss maps; IMO, understanding how roughness affects your asset’s surface characteristics is maybe the biggest challenge when moving to a physically-based pipeline.

TRAIN KEY PERSONNEL

As with rolling out any new feature, you should train a few technically-inclined artists to help their peers along. It’s also good to have the artists give feedback to the graphics team as they begin really cutting their teeth on the system. On Ryse, we are doing the above, but also dedicating a single technical artist to helping with environment art-related technology and profiling.

CHEAT SHEET

It’s very important to have a ‘cheat sheet’. This is a sheet we created on the Ryse team to allow an artist to use the color picker to sample common ‘plausible’ values.

(image: specular range cheat sheet)

HELP PEOPLE HELP THEMSELVES

We have created a debug view that highlights assets whose specular color is not in a physically-plausible range. We are very much in favor of making tools that help people be responsible, and that validate/highlight work that is not. We also allowed people to set solid specular values in the shader to limit memory consumption on simple assets.
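
The same kind of check is easy to run offline on the textures themselves. Here is a small sketch of the idea (using Pillow; the 40 to 80 range comes from the non-metal rules earlier in the post, and this has nothing to do with our in-engine debug view):

# flag a non-metal spec map whose values leave the plausible sRGB range
from PIL import Image

def check_spec_map(path, low=40, high=80):
    img = Image.open(path).convert('L')   # non-metal spec should be monochrome anyway
    lo, hi = img.getextrema()
    problems = []
    if lo < low:
        problems.append('darkest value %d is below %d' % (lo, low))
    if hi > high:
        problems.append('brightest value %d is above %d' % (hi, high))
    return problems

# usage: print(check_spec_map('lion_spec.tif'))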

CALIBRATION AND REFERENCE ACQUISITION

(image: X-Rite ColorChecker Passport and Pantone Huey Pro)

Above are two things that I actually carry with me everywhere I go: the X-Rite ColorChecker Passport, and the Pantone Huey Pro monitor calibration toolset. Both are very small and can be carried in a laptop bag. I will go into reference data acquisition in another post. On Ryse we significantly upgraded our reference acquisition pipeline and scanned a lot of objects/surfaces in the field.

 

TECHNICAL IMPROVEMENTS BASED ON PRODUCTION USE

Nicolas Shulz presented many improvements made based on production use at GDC 2014. His slides are here. He details things like the importance of specular filtering to preserve highlights as objects recede into the distance, and why we decided to couple normals and roughness.

UPDATE: We’ve now shipped Ryse, and I have tried to update the post a little. I was an invited speaker at HPG 2014, where I touched on this topic a bit, and can now update this post with some details and images (see ‘Tips for Leads and Directors’ above). Nicolas also spoke at GDC 2014, and I have linked to his slides above. Though this post focuses on environments, in the end, with the amount of armor on characters, the PBR pipeline was really showcased everywhere. Here’s an image of multiple passes of Marius’ final armor:

(image: render passes of Marius’ final armor)

posted by Chris at 7:26 PM  

Thursday, August 26, 2010

Perforce Triggers in Python (Pt 1)

Perforce is a wily beast. A lot of companies use it, but I feel few outside of the IT department really have to deal with it much. As I work myself deeper and deeper into the damp hole that is asset validation, I have been writing a lot of Python to deal with certain issues, but always scripts that work from the outside.

Perforce has a system that allows you to write scripts that are run, server side, when any number of events are triggered. You can use many scripting languages, but I will only touch on Python.

Test Environment

To follow along here, you should set up a test environment. Perforce is freely downloadable, and free to use with two users. Of course you are also going to need Python and P4Python. So get your server running and add two users: a user and an administrator.

Your First Trigger

Let’s create the simplest Python script. It will be a submit trigger that says ‘Hello World’ then passes or fails. If it passes, the item will be checked in to Perforce; if it fails, it will not. Exiting with a return code of ‘1’ is considered a fail, ‘0’ a pass.

# hello_trigger.py -- deny every submit, just to prove the trigger runs
import sys

print 'Hello World!'
print 'No checkin for you!'
sys.exit(1)   # non-zero exit rejects the changelist

Ok, so save this file as hello_trigger.py. Now go to a command line and enter ‘p4 triggers’. This will open a text document; edit that document to point to your trigger, like so (but point to the location of your script on disk):

Triggers:
	hello_trigger change-submit //depot/... "python X:/projects/2010/p4/hello_trigger.py"

Close/save the trigger TMP file; you should see ‘Triggers saved.’ echoed at the prompt. Now, when we try to submit a file to the depot, we will get this:

So: awesome, you just DENIED your first check-in!

Connecting to Perforce from Inside a Trigger

So we are now denying check-ins, but let’s try to do some other things; let’s connect to Perforce from inside a trigger.

import sys
from P4 import P4, P4Exception
 
p4 = P4()
 
try:
	#use whatever your admin l/p was
	#this isn't the safest, but it works at this beginner level
	p4.user = "admin"
	p4.password = "admin"
	p4.port = "1666"
	p4.connect()
	info = p4.run("info")
	print info
	sys.exit(1)
 
#this will return any errors
except P4Exception:
	for e in p4.errors: print e
	sys.exit(1)

So now when you try to submit a file to the depot you will get this:

Passing Info to the Trigger

Now we are running triggers, accepting or denying check-ins, but we really don’t know much about them. Let’s try to get enough info to where we could make a decision about whether or not we want the file to pass validation. Let’s make another Python trigger, test_trigger.py, and query something from the Perforce server in the submit trigger. To do this we need to edit our trigger file like so:

Triggers:
	test change-submit //depot/... "python X:/projects/2010/p4/test_trigger.py %user% %changelist%"

This will pass the user and changelist number into the Python script as args, the same way dragging/dropping passed args to Python in my previous example. So let’s set that up: save the script from before as ‘test_trigger.py’ as shown above, and add the following:

import sys
from P4 import P4, P4Exception
 
p4 = P4()
describe = []
 
try:
	p4.user = "admin"
	p4.password = "admin"
	p4.port = "1666"
	p4.connect()
 
except P4Exception:
	for e in p4.errors: print e
	sys.exit(1)
 
print str(sys.argv)
describe = p4.run('describe',sys.argv[2])
print str(describe)
 
p4.disconnect()
sys.exit(1)

So, as you can see, it has returned the user and changelist number:

However, for this changelist to be useful, we query p4, asking the server to describe the changelist. This returns a lot of information about the changelist.

Where to Go From here

The few simple things shown here really give you the tools to do many more things. Here are some examples of triggers that can be created with the know-how above (a rough sketch of the first one follows the list):

  • Deny check-ins of a certain filetype (like deny compiled source files/assets)
  • Deny check-ins whose hash digest matches an existing file on the server
  • Deny/allow a certain type of file check-in from a user in a certain group
  • Email a lead any time a file in a certain folder is updated
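
Here is what that first example might look like, building on test_trigger.py above. The blocked extensions, and the same beginner-level admin credentials, are placeholders:

# deny_filetypes.py -- reject changelists that contain blocked filetypes
import sys
from P4 import P4, P4Exception

BLOCKED = ('.exe', '.dll', '.pdb')   # whatever you don't want in the depot

p4 = P4()

try:
	p4.user = "admin"
	p4.password = "admin"
	p4.port = "1666"
	p4.connect()

	# assumes the same trigger line as above, passing %user% %changelist%
	describe = p4.run('describe', sys.argv[2])[0]
	for depot_file in describe.get('depotFile', []):
		if depot_file.lower().endswith(BLOCKED):
			print 'Check-in denied: %s is a blocked filetype.' % depot_file
			p4.disconnect()
			sys.exit(1)
	p4.disconnect()
	sys.exit(0)

except P4Exception:
	for e in p4.errors: print e
	sys.exit(1)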

Did you find this helpful? What creative triggers have you written?

posted by admin at 12:33 AM  

Sunday, August 8, 2010

Sigma 8mm vs 4.5mm Comparison on Nikon APS-C

(image: 8mm vs 4.5mm image circle comparison on an APS-C sensor)

I have been researching the best options available for the D300 when it comes to quickly generating some lightprobes/panoramas. This of course means fisheye lenses. Currently, Sigma is the only company that makes a 180 degree circular fisheye. They come in two flavors: 8mm and 4.5mm. The 8mm projects a full circle onto a full 35mm sensor (full frame), but on an APS-C sensor it is cropped. The 4.5mm, however, throws a perfect circular image onto an APS-C sized sensor; I believe it is the only lens that does this.

The Pixels

You would think that the 4.5mm would be the way to go; I did, until I took a look at both. It really comes down to the pixels. The image circle thrown by the 4.5mm lens is roughly 2285px in diameter. So while you can shoot less, an entire panorama taking about 3 shots, it will come out as a <4k equirectangular. Using the 8mm, however, you need 4 shots plus one zenith (5 shots total), and it generates an 8k image. While the 4.5mm does generate a 180 degree image across, as you can see it is very wasteful.

So why doesn’t the lens have full coverage in at least the short dimension? I think it’s because it’s a lens designed to fit Canon and Sigma cameras, not just Nikon. Canon sensors have a 1.6 crop factor, and Sigma’s Foveon X3 has a 1.7 crop factor (13.8mm sensor height)! The coverage is so small because the Nikon DX format has a 1.5 crop factor; the APS-C sensor is larger than Canon’s or Sigma’s. The actual circle measures 12.3mm, small even for the Sigma, which makes me believe they future-proofed it for Four Thirds.
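
A rough sanity check of those numbers, assuming the D300’s sensor is about 23.6mm wide at 4288px (the 12.3mm image circle is from the measurement above):

# back-of-the-envelope: how many pixels does a 12.3mm image circle cover on a D300?
sensor_width_mm = 23.6
sensor_width_px = 4288
circle_mm = 12.3

px_per_mm = sensor_width_px / sensor_width_mm                  # ~182 px per mm
print('image circle: ~%dpx across' % (circle_mm * px_per_mm))  # ~2235px, close to the ~2285px above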

For an APS-C sensor like the D300, I would recommend the 8mm, unless you really need a full uncropped image. The 4.5mm, while being more expensive, also has an aperture of f/2.8, compared to the 8mm’s f/3.5.

I am not super constrained on time, but if you are on set shooting bracketed probes between takes or something, the 4.5mm will save you two shots (18 pictures), and this might be preferable. That said, it will only generate a 4k image in the end (which might be enough).

posted by admin at 2:56 PM  

Wednesday, April 7, 2010

RigPorn: Uncharted 2

My friends Judd and Rich gave a talk on some of the Character Tech behind Uncharted 2. Here are the slides.

posted by admin at 8:32 PM  

Thursday, December 31, 2009

Avatar: Aspect Ratio Note

(image: Avatar aspect ratio comparison)

Size Matters.

Theaters presenting Avatar in 2D and RealD 3D show a cropped 2.35:1 version, while IMAX 3D shows the original work at 1.85:1. You might not think that this matters, but you are losing a lot of the image in the crop. If you want to see it as the artists/director intended, it looks like IMAX 3D is your only option.

posted by admin at 7:41 PM  

Wednesday, July 8, 2009

Buggy Camera Issues In Maya on x64

Many, many people are having weird, buggy camera issues where you rotate a view and it snaps back to the pre-tumbled state (the view does not update properly). There are posts all over, and Autodesk’s official response is “Consumer gaming videocards are not supported”. Really? That’s basically saying: all consumer video cards, gaming or not, are unsupported. I have had this issue on my laptop, which is surely not a ‘gaming’ machine. Autodesk says the ‘fix’ is to upgrade to an expensive pro-level video card. But what they might tell you if they weren’t partnered with Nvidia is: it’s an easy fix!

Find your Maya ENV file:

C:\Documents and Settings\Administrator\My Documents\maya\2009-x64\Maya.env

And add this environment variable to it:

MAYA_GEFORCE_SKIP_OVERLAY=1

Autodesk buried this information in their Maya 2009 Late Breaking Release Notes, and it fixes the issue completely! However, even on their official forum, Autodesk employees and moderators reply to these draw errors as follows:

Maya 2009 was tested with a finite number of graphics cards from ATI and Nvidia, with drivers from each vendor that provided the best performance, with the least amount of issues. (at the time of product launch).  A list of officially qualified hardware can be found here: http://www.autodesk.com/maya-hardware. Maya is not qualified/supported on consumer gaming cards.  Geforce card users can expect to have issues.  This is clearly stated in the official qualification charts mentioned above.

posted by admin at 10:43 AM  

Tuesday, June 30, 2009

Critical Analysis

One of the Year’s Worst Films

Transformers 2 was rated by critics at around 18%, as shown on RottenTomatoes.com. This is possibly one of the lowest ratings for a hugely expensive summer blockbuster that I can remember. It makes the movie less well reviewed than Species III, Rambo IV, or even Rush Hour III.

But it has now had the second largest opening weekend of all time, raking in over 200 million dollars domestically and 390 million worldwide in its first 5 days. This is within 1% of the current reigning champion, The Dark Knight. Paramount’s national exit polling revealed that more than 90% of those surveyed said the new movie was as good as or better than the first film. About 67% of moviegoers polled said the film was “excellent,” an even better score than that generated by Paramount’s “Star Trek,” one of the year’s best-reviewed movies.

The critics unanimously told their readers this film was trash, and word of mouth brought the film to within one percent of The Dark Knight. Hell, Transformers 2 was shown on fewer screens and even grossed more dollars per screen than The Dark Knight.

So how did a movie that so many flocked to see, nearly toppling the current reigning all-time champ, get reviewed so viciously?

As reviews started to roll in, I saw an interesting thing happen. Some reviews were posted before people had seen the film, trashing Michael Bay and not really referencing anything from the film itself. (These were not logged as ‘top critics’ on the site.) But it initiated a torrent of others jumping on the hatewagon, beating their chests and scampering in competition to come up with better, more scurrilous, insulting, and defamatory witticisms trashing the director and his film. It became what I termed a giant ‘snoodBall’. Each critic seemed to feel that in order to stand out above the rest, he had to give an even worse, more scathing review. This led to professional critics actually printing things I just find ridiculous:

“I hated every one of the 149 minutes. This is so bad it’s immoral. Michael Bay is a time-sucking vampire who will feast off your lost time.”
– Victoria Alexander

“Michael Bay has once again transformed garbage into something resembling a film..”
– Jeffrey M. Anderson

“Transformers: The Revenge of The Fallen is beyond bad, it carves out its own category of godawfulness.”
– Peter Travers (Rolling Stone)

Who can say they actively *hated* every minute of a movie? I was so surprised. I had seen an advance screening of the film here at ILM, and I knew it was no Citizen Kane, but it surely isn’t an 18%! It seems the reviewers are disconnected from the public they serve. Apparently there comes a certain time when you simply cannot write a decent review for a movie that all your peers said was garbage, and that is when you are just adding to this gigantic hate machine and not really reviewing anything.

If the film had been reviewed even a little more realistically (I mean come on, Terminator IV even has a 33%!), it would have easily had the 1% more needed to topple The Dark Knight, possibly becoming the worst-reviewed #1 box office hit of all time.

posted by Chris at 5:00 AM  

Wednesday, March 4, 2009

Common Character Issues: Attachments

I love this picture. It illustrates a few large problems with video games, one of which I have wanted to talk about for a while: attachments, of course. I am talking about the sword (yes, there is a sword; follow her arm to the left side of the image).

Attaching props to a character that has to be seen dynamically from every angle through thousands of animations can be difficult. So difficult that people often give up. This was a promotional image for an upcoming Soul Calibur title, which goes to show you how difficult the issue is. Or maybe no one even noticed she was holding a sword. So let’s look at a promotional image from another game:

Why does it happen?

Well, props are often interchangeable. Many different props are supposed to attach to the same location throughout the game. This is generally done by marking up the prop and the skeleton with two attachment points that snap to one another.

In this case you often have one guy modeling the prop, one guy placing the skeleton, and one guy creating the animation. All these people have to work together.

How can we avoid these problems?

This problem is most noticeable at the end of the line: you would really only see it in the game. But this is one of the few times you will hear me say that checking it ‘in the engine’ is a bad idea. It’s hard enough to get animators to check their animation, much less test all the props in a ‘prop test level’ of sorts.

I feel problems like this mainly show up in magazines and final games because you are leaving it up to QA and other people who don’t know what to look for. There was a saying I developed while at Crytek when trying to impart some things to new tech art hires: “Does QA know what your alien should deform like? And should they?” The answer is no, and it also goes for the things above. Who knows how Robotnik grips his bow? You do, the guy rigging the character.

So in this case I am all for systems that allow animators to instantly load any common weapons and props from the game directly onto the character in the DCC app. You need a system that allows animators to attach any commonly used prop at any time during any animation (especially movement anims).

Order of operations

Generally I would say (a rough sketch of the attach step follows the list):

  1. The animator picks a pivot point on the character. They will be animating/pivoting around this.
  2. The tech artist ‘marks’ up the skeleton with the appropriate offset transform.
  3. The modeler ‘marks’ his prop and tests it (iteratively) on one character.
  4. The tech artist adds the marked-up prop (or a low-res version) to a special file that is streamlined for automagically merging in items, then adds a UI element that allows the animator to select the prop from a drop-down and see it imported and attached to the character.
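
The attach itself is the simple part; here is a minimal Maya sketch of that last step. The names are placeholder conventions, and it assumes the prop’s root/pivot was placed at its attach point by the modeler (the ‘markup’ described above):

# snap a prop onto a character attachment locator and keep it following the skeleton
import maya.cmds as cmds

def attach_prop(prop_root, char_marker):
    return cmds.parentConstraint(char_marker, prop_root, maintainOffset=False)

# usage: attach_prop('sword_GRP', 'R_weapon_attach_LOC')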

Complications

I can remember many heated discussions about problems like this. The more people that really care about the final product, and the more detailed or realistic games and characters get, the more things like this will be scrutinized.

This is more of a simple problem that just takes care and diligence, whereas things like multiple hand positions and hand poses are a little more difficult. Or attachments that attach via a physics constraint in the engine. There are also other, much more difficult issues in this realm, like exact positioning of AI characters for interacting with each other and the environment, which is another tough ‘snap me into the right place’ problem dealing with marking up a character and an item in the world to interact with.

posted by Chris at 11:25 AM  

Monday, March 2, 2009

Make a 3D Game for the Right Reasons! (My SF4 post)

I ran out and got Street Fighter 4 (SF4) just like everyone else. Street Fighter was ‘the game’ you had to beat all the kids in the neighborhood at for an entire generation (sadly replaced by PES), and I have very fond memories of playing it.

SF4 is the first 3D game in the series created by Capcom itself; in the past, Street Fighter EX was developed by Arika, a company formed by one of the creators of the original game as well as many other former Capcom employees. Even though porting the franchise to 3D was largely considered a complete and utter failure, they decided to give it another go, this time ‘officially’.

Strengths and Weaknesses

As artistic mediums, 2D and 3D are very different. 3D art is perspective-correct; it is clean, sterile, and perfect. It is much simpler to do rotations and transformations of rigid objects in 3D, which is why Disney started doing vehicles as cel-shaded 3D objects in their later films. However, it is very difficult to add character to 3D geometry. As an example, think of Cruella de Vil’s car from 101 Dalmatians; it has character (when it’s not overly rotoscoped from a real-life model).

2D lends itself to organic shapes, which can be gestural and are ‘rendered’ by a human being, so they’re never perfect. 3D is great for vehicles and spaceships, anything that generally lacks character. 3D is also the only way you are going to get a photo-real gaming experience. For instance, when we were making Crysis, we knew this was the only way; there was never a question of which medium to use.

When I go on my ‘2D/3D rant’, I usually hearken back to something I love, so let’s take a look at the transition of an older game from 2D to 3D: the Monkey Island Series.

Many years ago developers felt that in order to compete, they had to ship games with the latest 3D technology. This is really unfortunate, and it led them to sometimes choose to develop an ‘ok’ 3D game over a ‘beautiful’ 2D game. I believe in The Curse of Monkey Island (the last 2D title in the series, so far), the options menu had an option to “Enable 3D acceleration”; upon clicking it, the words “We were only kidding” and other phrases popped up next to the radio button. The developers were already feeling the pressure to release a 3D game.

2D games are still profitable, just look at Xbox Live, where 2D games like Castle Crashers have been some of the top selling titles this year.

Lastly, let’s not forget that 3D games are actually cheaper, or have been, historically. However, maybe not with some current-gen titles, where garbage cans have 4k texture maps and take two weeks to sculpt in ZBrush. But animation is definitely easier than it ever was. Of course, the other side of that argument is that you can now have 6k animations in a game.

Street Fighter 4 Is A Three Dimensional ‘2D’ Game

Before going on, it’s important to note that in SF4, the characters still move on a 2D plane as they always have. It’s actually nearly identical to all the other games in the series as far as design.

As always, you are pitting your guy against someone else, and both of your characters are the focal point; they are the only interactive things in a game that centers around them. This is a series that has always been about character, and has always been 2D with great hand-drawn art. Remember: Capcom offered fans a 3D game before and they did not want it.

So, SF4 is a game that takes place in 2D space and focuses on only two characters at any given time. This is great news; it means you can really focus on the characters, more so than in almost any other game genre.

The Constraints of a 3D Character Art Style

3D characters are driven by ‘joints’ or ‘bones’. Each joint has some 3D points rigidly ‘glued’ to it; because of this, 3D characters, especially in games, look rigid, like action figures. In my opinion SF4 characters feel like lifeless marionettes. In a 2D game, you can quickly and easily draw any form you want. The more you want to alter the ‘form’ of a 3D character, the more joints it takes, and the more complex the ‘rig’ that drives the character. Also, on consoles, the number of joints you can use is limited. This is easy to see when comparing 2D and 3D art:

Notice how the 3D characters look lifeless? They don’t have to; it’s just more difficult. Before, adding a cool facial expression meant simply drawing it by hand; now it means sculpting a 3D shape, by hand. It’s tedious and difficult. Also, notice how in 3D Chun Li’s cloth is ‘clipping’ into her leg, or Cammy’s wrist guard is ‘clipping’ into her bicep. 3D is much more difficult to get right, because you are messing with sculptures, not drawings. You could also say the foreshortening on Chun Li’s arm in 2D looks weird; there are trade-offs, but a 2D pipeline also makes it much easier to alter character proportions and fix things.

There are entire web pages dedicated to the weird faces of SF4 characters. It seems one of the easiest ways to make a character look in ‘pain’ was to translate the eyeballs out of the head: it looks ridiculous when compared to the hand-drawn hit reactions:

Whereas before you had one guy drawing pictures of a character in motion (maybe with someone to color), now it takes a team to do the same job. You often have a modeler, technical artist, and animator, then hopefully a graphics engineer for rendering. That’s a lot of people to do something one person used to handle, and it introduces not only bureaucracy, but a complicated set of choreographed events that culminate in the final product.

This is a Capcom press image of Chun Li and it highlights my point exactly. It is harder and much more complicated to sculpt a form than draw it. Not to mention sculpt it over time, using complicated mathematical tools to manipulate geometry. However, it’s not an impossible task, and to think that this is ‘ok’ enough to release as a press image for an upcoming AAA game is crazy.

It’s not just deformation and posing, but animation in general. There is a lot of laziness associated with 3D animation. Let me be more precise: it is easier to ‘create’ animation because the computer interpolates between poses for you. As an animator, you have to work much harder not to fall into this ‘gap’ of letting the machine do the work for you. Playing SF4 you will see sweeps, hurricane kicks, and various other animations that just rotate the entire character as if on a pin. They also share and recycle the same animations on different characters; this was not possible in 2D.

One thing I find interesting is that, though the new game is 3D, it really has no motion blur. The 2D games had Chuck Jones-esque motion blur drawn into the frames to add a quickness and ‘snap’, but it also added an organic quality that is lacking in SF4.

EDIT: Having now logged a lot more time playing, there is indeed a weird kind of motion blur; it’s barely noticeable at all and looks almost hand-painted/added.

Another odd thing: I can spot mocap when I see it, and I think the technique was used on some of the background characters, like the children playing under the bridge. The motion is so stellar that it puts the main characters to shame. That’s kind of sad. Though all new characters introduced on the console seem to have much better animation, so maybe this is something Capcom have worked on more.

So Why Make A 3D Street Fighter?

If you aren’t going to make a game where characters can move through 3D space (no Z depth), why use a 3D art style, especially when it is harder to create expressive characters?

I will offer some reasons to ‘reboot’ the Street Fighter franchise as a 3D fighter:

  • Finally use collision detection to not have characters clip into one another as they always have
  • Use physics to blend ragdoll into hit reactions, also for hit detection and impulse generation; maybe allow a punch to actually connect with the opponent (gasp)
  • Use jiggly bones for something other than breasts/fat, things like muscles and flesh to add a sense of weight
  • Employ a cloth solver, c’mon this is a character showcase; if NBA games can solve cloth for all on court characters, you can surely do nice cloth for two.
  • Markup the skeletons to allow for ‘grab points’ so that throw hand positions vary on char size and are unique
  • Attach proxies to the feet and have them interact with trash/grass on the ground in levels
  • Use IK in a meaningful way to always look at your opponent, dynamically grab him mid-animation, always keep feet on slightly uneven ground, or hit different-sized opponents (or parameterize the anims to do these)
  • Play different animations on different body parts at different times; you are not locked into a single full-body pose per frame like in 2D
  • For instance: use ‘offset animations’ blended into the main animation set to dynamically convey the health of the character, or heck, change the facial animation to make them look more tired/hurt.
  • Shaders! In 3D you can use many complex shaders to render photorealistic or non-photorealistic images (like cartoons)
  • You can also write shaders to do things like calculate/add motion blur!

Unfortunately, Capcom did none of these. Sure, a few of the above would have been somewhat revolutionary for the franchise, but as it stands, 3D characters add nothing to SF4; I believe they actually degrade the quality of the visuals.

EDIT: After playing more, I have noticed that they are using IK (look IK) on just the head bone; shorter characters look up when a large character jumps in front of them.

posted by Chris at 12:15 PM  

Wednesday, December 17, 2008

Kavan et al Have Done It!

Ladislav Kavan is presenting a paper entitled ‘Automatic Linearization of Nonlinear Skinning’ at the 2009 Symposium on Interactive 3D Graphics and Games, on skinning arbitrary deformations! Run over to his site and check it out. In my opinion, this is a holy grail of sorts. You rig any way you want, have complex deformation that can only solve at one frame an hour? No problem: bake a range of motion down to pose-driven, procedurally placed, animated, and weighted joints. People, Kavan included, have presented papers in the past with systems somewhat like this, but nothing this polished and final. I have talked to him about this stuff in the past, and it’s great to see what he’s been working on, and that it really is all I had hoped for!

This will change things.

posted by Chris at 12:16 PM  

Saturday, December 13, 2008

Quantic Dreams

This is what it looks like on the other side of the uncanny valley.

No longer working for Crytek, maybe I can comment on some industry-related things without worrying that my opinions could be misconstrued as those of my former employer.

EuroGamer visited Quantic Dream this week, the studio working on the game 'Heavy Rain', whose co-founder, de Fondaumière, arrogantly proclaimed that there was 'no longer an uncanny valley', and that there are 'very, very few' real artists in the video game industry. (A real class act, no?)

So their article starts with "We can't tell you how Heavy Rain looks, sounds or plays…", which I find kind of ridiculous given that the studio's only real claim to fame right now is the hype of its co-founder, who casually claims they have accomplished one of the most amazing visual feats in the history of computer graphics (in real-time, no less!).

Across the world there are thousands of outstanding artists chasing this same dream; from Final Fantasy to Polar Express and Beowulf, people have tried to cross the 'uncanny valley' for years, and are getting closer every day. At Christmas you will be treated to what is probably one of the closest attempts yet (Digital Domain's work in Benjamin Button).

Not really having any videos to back up the hyperbole, they gave the EuroGamer staff a laundry list of statistics about their production.

I have yet to see anything stunning to back up the talk. Eight months after he made his statement about crossing the uncanny valley, they released this video, which was just not even close, to be frank.

It looks like they aren't using performance capture. Without markers on the face, this means they have to solve the facial animation from elsewhere, usually a seated actress who pretends to be saying lines that were said in the other, full-body capture session. There's a reason why studios like Imageworks don't do this: it's hard to sync the two performances together. If they have accomplished what others have not, with much less hardware/technology, it means they have some of the best artists/animators out there, and I say hats off to them.

But with every image they do release, and every arrogant statement, they are digging the hole deeper. The sad thing is they could release one of the greatest interactive experiences yet, but their main claim is the most realistic CG humans yet to be seen, and if they fail at this, it will overshadow everything.

At least they know how their fellow PS3 devs over at Guerrilla must have been feeling for a few years now.

posted by Chris at 6:53 AM  

Sunday, November 16, 2008

Change of Venue

I am now living in San Francisco! My last day at Crytek was October 31st, and it was pretty difficult for me as it is one of the best companies I have ever worked for. I have so much respect for all the guys that helped constantly push the envelope and make Crytek the renowned world player that it is today.

I started last week as a Creature TD at Industrial Light + Magic; about the only thing that could wrench me away from Frankfurt. I have always been so interested in creatures and anatomy, and, from a young age, considered ILM the best of the best when it came to these. I feel very lucky to be able to join another great team of people, and not only that, but learn so much from them on a daily basis.

I don't know what effect that will have on this blog. I can continue to comment on games stuff, but, being a large company, ILM is a lot more restrictive than Crytek in what I can do (even in my spare time!). Not to mention I will be very, very busy the next few months.

posted by Chris at 9:19 AM  

Sunday, October 19, 2008

Epic Pipeline Presentation

I saw this presentation about a year ago; it covers the pipeline Epic uses on their games. Maybe there's some interesting stuff for others here. The images are available at larger sizes; you can right click -> view image to see a larger version.

45 days or more to create a single character… wow.

They don't use PolyCruncher to generate LODs; they do this by hand. They just use it to import the mesh from Mudbox/ZBrush into Max in a usable form.

They don't care so much about intersecting meshes when making the high-res, as it's just used to derive the normal map, not to rapid-prototype a statue or anything.

They said they only use DeepUV for its 'relax' feature. They make extensive use of the 3DS Max 'render to texture' as well.

Their UT07 characters are highly customizable. Individual armor parts can be added or removed, or even modded. Their UV maps are broken down into set sections that can be generated on the fly. So there are still 2×2048 maps but all the maps can be very different. This is something I have also seen in WoW and other games.

They mentioned many times how they use COLLADA heavily to go between DCC apps.

They share a lot of common components across characters.

posted by Chris at 4:44 PM  

Wednesday, September 17, 2008

Making of the Image Metrics ‘Emily’ Tech Demo

I have seen some of the other material in the SIGGRAPH Image Metrics press kit posted online [Emily Tech Demo] ['How To' video], but not the video that shows the making of the Emily tech demo. So here's that as well:

At the end, there's a quote from Peter Plantec about how Image Metrics has finally 'crossed the uncanny valley', but seriously, am I the only one who thinks the shading is a bit off? And besides that, what's the point of laying a duplicate of a face directly on top of the original in a video? Shouldn't they have shown her talking in a different setting? Maybe shown how they can remap the animation to a different face? There is no reason not to just use the original plate in this example.

posted by Chris at 4:44 PM  

Friday, September 5, 2008

Talking about Light Transport

EDIT: I would like this to be a 'living document' of sorts; please send me terms and definitions, and feel free to correct mine!

Whether you're a technical artist in games or film, when trying to create realistic scenes and characters, the more you know about how light works and interacts with surfaces in the world, and the more reference you have, the better you can explain why you think an image looks 'off'.

You are a technical artist. You need to be able to communicate with technical people using terminology they understand. We often act as bridges between artists and programmers, so it is very important for us to be able to communicate with both appropriately.

Light transport is basically the big nerd word for how light gets from one place to another, and scattering is usually how surfaces interact with light.

You can see something in a rendered image and know it looks ‘wrong’, but it’s important to understand why it looks wrong, and be able to accurately explain to the programming team how it can be improved upon. To do this you should be able to:

1) present examples of photographic reference

2) communicate with general terms that others can understand

General Terminology

The following terms come from optics, photography and art; you should not only understand these, but use them when explaining why something does not look 'right'. I will give both the technical term and my shortest approximation:

Specular Reflection – sharp reflection of light from a surface that somewhat retains an image (e.g. glossy)
Diffuse Reflection – uneven reflection of light from a surface that does not retain the image (e.g. matte)
Diffuse Interreflection – light reflected off other diffuse objects
Diffraction – what happens to a wave when it hits an obstacle, this could be an ocean wave hitting a jetty, or a light wave hitting a grate.
Depth of Field – the area in an image that is in focus
Bokeh – the aesthetic quality of the blurry, out-of-focus areas of a photo
Chromatic Aberration – the colored fringes around an object or light refracted through an object; it happens because certain wavelengths of light get bent 'out-of-sync'. I usually think of it as an old projector or monitor that is misaligned; that's what this effect looks like.
Caustics – light rays focused through a refractive object onto another surface
Angle of Incidence – this is actually the angle something is off from 'straight on', but we mainly use this when talking about shaders or things that are view-dependent. If you were to draw a line from your eyes to a surface, the angle between this and its 'normal' is the 'angle of incidence'. Car paint whose color changes as you walk around it is a good example: it changes based on the angle you see it from. Just remember, your head doesn't have to move; the object can move, changing the angle between your sightline and the surface.
Refractive Index (Refraction) – how light's direction changes when moving through an object. The refractive index of water is 1.3; glass has a higher refractive index at 1.4 to 1.6.
Reflection – the change in direction of light when it bounces off a surface, usually casting light onto something like the camera or our eyes
Glossiness – the ability of a surface to reflect specular light sharply; the smaller and tighter the area of reflected specular light, the 'glossier' something usually looks
Ray – think of a ray as a single beam of light; a single particle. This particle moves in a 'ray', and when we talk about 'ray tracing' we mean tracing the path of a ray through a scene.
Fresnel – pronounced 'fre-nel', it is the amount of view-dependent reflectance on a surface. A great example is rim lighting, but Fresnel effects are also used to fake a fuzzy look, x-ray effects, light reflected off the ocean, etc. (see the sketch after this list)
Aerial Perspective – this is how things get lighter as they recede into the distance; the more air, or 'atmosphere', between you and the object (mountain, building, etc.) the lighter it is visually. I grew up in Florida; we don't have much of this effect at all due to the flat elevation and clear skies.
High Dynamic Range Imaging (HDR) – this just means you are dealing with more light data than a normal image. An HDR image has a larger range of light information stored in it. With today's prosumer DSLRs it is possible to capture 14-bit images that theoretically contain '13-14 stops' of linear data. A digital example would be the sky in the game Crysis: it was a dynamic HDR skydome, which meant the game engine was computing more light than could be displayed on the monitor. In these situations, the data is tone mapped to create visually interesting lighting.
Tone Mapping – this is how you 'map' one set of colors onto another; in games it generally means 'mapping' high dynamic range data into a limited dynamic range, like a TV set or monitor. This can be done by 'blooming' areas that are overbright, among various other techniques.
Bloom – ‘bloom’ is the gradient fringe you see around really brightly lit areas in an image, like a window to a bright sky seen from inside a dark room.
Albedo – the extent to which a surface diffusely reflects light from the sun.
Afterimage Effect – this belongs to a group of effects I call 'accumulation-buffer effects'. The after-image effect visually 'burns in' the brightest parts of a previous image, simulating the effect our eyes have when adjusting to bright light.
Deferred Rendering – this is a type of rendering where you render parts of the image to framebuffer storage instead of rendering directly to the pixel output. Deferred rendering generally allows you to use many more light sources in real-time rendering. One problem deferred rendering has is that it cannot properly deal with transparent items.
Scanline Rendering – a very old technique where you render one line of pixels after another. Pixar's RenderMan is a scanline renderer, and the Nintendo DS also uses scanline rendering.
Skylight (or Diffuse Sky Radiation) – this is the fancy term for light that comes not directly from the sun, but is scattered by the sky. It is what tints daylight on earth blue, or orange at sunset.
Scattering (including Sub-Surface Scattering) – this just means how particles are 'scattered', or deviate from an original path. In sub-surface scattering, light enters an object and bounces around inside (sub-surface). This leads to things like the orange/red color of your ear when there is a light behind it.
Participating Media – the way a group of particles can affect light transport through their volume: not only reflecting or refracting light, but scattering it. Things like glass, water, fog and smoke are all participating media.
Ambient Occlusion – this is a shading effect where occluded areas are darkened, much like the access maps of the old days; cracks and areas where light would have a hard time 'getting into' are shaded.
Screen Space Ambient Occlusion – a rendering technique that fakes ambient occlusion with some z-buffer trickery. By taking the distances between objects in a scene, the algorithm generates approximated occlusion data in real time. (first used on Crysis!)
Global Illumination – a way of rendering where you simulate light bounces; as the light bounces around a scene, it generates indirect lighting. An example of this would be how a red ball next to a white wall will cast red light onto the wall.
Z-Buffer – where 3D depth information is stored in a 2D image. A 16-bit z-buffer has 65536 levels of depth, while an 8-bit one has 256. Items on the same level cause flickering, or 'z-fighting'.
Z-Fighting – this occurs when polygons have similar z-buffer values; it is a term you should know when dealing with virtual cameras, not real ones. You can see this flickering when you create two co-planar planes on top of each other in a 3D app. To eliminate z-fighting you can use 24- or 32-bit z-buffers.
Frustum – everything in the camera's field of view; generally the entire volume that the camera can see.
Environment Reflection – a way of faking a reflection by applying an image to a surface; this can be a spherical map, cube map, etc. Some environmental reflections (cubemaps) can be generated at runtime as you move an object around (most notably in racing games).
Cubic Environment Mapping – a way of generating an environmental reflection map with six sides that are mapped onto a cube, recreating the reflection of the environment around an object.
SkyBox – creating a ‘sky’ in a virtual scene by enclosing the entire scene in a large box with images on 5 sides.
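
To ground a couple of the terms above in something concrete, here is a tiny Python sketch of the math behind two of them: the Fresnel entry using Schlick's well-known approximation, and the Tone Mapping entry using the simple Reinhard operator. This isn't any particular engine's shader code, just the underlying formulas.

    # Schlick's Fresnel approximation and Reinhard tone mapping, as plain math.
    def fresnel_schlick(cos_theta, f0=0.04):
        """Reflectance rises toward 1.0 at grazing angles.
        cos_theta: dot product of the view vector and the surface normal.
        f0: reflectance at normal incidence (~0.04 for most dielectrics)."""
        return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

    def reinhard_tonemap(hdr_luminance):
        """Map unbounded HDR luminance into a displayable [0, 1) range."""
        return hdr_luminance / (1.0 + hdr_luminance)

    # Grazing angles reflect far more than head-on ones (why rim lighting works):
    print(fresnel_schlick(1.0), fresnel_schlick(0.1))   # ~0.04 vs ~0.61
    # Even a very bright HDR value still lands below 1.0 on the monitor:
    print(reinhard_tonemap(0.5), reinhard_tonemap(16.0))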

Here are some example sentences:

Artist: This place here where the light shines on the surface is too small, it makes my object look too wet.
Technical Artist: The surface is too glossy; as a result, the area of specular reflection where you see the light is very small.

Artist: Like in the photos we took, things in the distance should be lighter, in the engine can we make things lighter as they get farther away?
Technical Artist: As things recede into the distance, aerial perspective causes them to become lighter; to achieve this we should increase the environment fog slightly.

Taking Photographic Reference

I feel every technical artist who assesses visual output should own a proper Digital Single Lens Reflex camera (DSLR), no matter the quality or how old. This will force you to understand and work with many of the terms above. The artist in you will want to take good pictures, and this is much more than good composition: you are essentially recording light. You will need to learn a lot to be able to properly meter and record light in different situations. Because it's digital, you will be able to iterate and learn fast, recognizing cause-and-effect relationships the same way we do with the real-time feedback of scripting languages in 3D apps.

posted by Chris at 8:17 AM  

Thursday, August 7, 2008

Three Headed Monkey Magics!

woah!


I am currently in the US, home for the first time in eight months. I had some packages here, one of which my now ex-girlfriend had said was too important to mail to Germany, despite the sketch of a three-headed monkey on the shipping box. Behold: the original Secret of Monkey Island PC game, signed by Tim Schafer, Ron Gilbert and Dave Grossman! Tim was nice enough to arrange this; we met and he showed us around his studio, Double Fine, at GDC this year. I had to fight hard to hold back the fanboy-ness!

posted by Chris at 9:10 AM  
