Stumbling Toward 'Awesomeness'

A Technical Art Blog

Tuesday, August 5, 2008

The Price of Tech: Lost in Tran$lation

I grew up in the US, though I now live in Europe. This is just a short post about something I find really unfair and frustrating: the international pricing of high-end tech items. Let’s check out the new Nikon D700:

Nikon D700 Germany: 2,599 EUR

Nikon D700 United States: 2,999 USD (1,825 EUR)

Nikon D700 Britain: 1,892 GBP (2,383 EUR)

It certainly would seem that Germany is getting the short end of the stick. In many cases, people in Europe could fly to the US, buy their electronics there, and fly back for less than the cost of buying them here. And many people do.

Not to mention that many companies offer better warranties in the US, where the market is more competitive. (Example: many Nikon cameras and lenses carry a 5-year warranty in the US but only a 2-year warranty here in Germany.)

When the Wii came out in the US, it was 250 USD; when it came out here, it was 250 EUR. The euro was riding high at the time: in the US it was impossible to get a Wii, yet they were readily available in every store here, leading many to speculate that it was because Nintendo was making 400 USD per Wii (250 EUR) in Europe. This isn’t just a currency effect, either. Some items are priced 1:1 or a little over (3dsMax, below), but others are more ridiculous; Photoshop, for example.

3dsMax 2009

3dsMax 2009 Germany: 3,900 EUR (4,641 EUR with mandatory VAT)

3dsMax 2009 United States: 3,495 USD (2,257 EUR)

Photoshop CS3

Photoshop CS3 Germany: 1,027 EUR

Photoshop CS3 United States: 607 USD (390 EUR)

Photoshop CS3 Britain: 500 GBP (629 EUR)

The above is just completely inane. Some companies will tell you they have to charge a premium on products in Europe because it costs extra to localize them. But come on… pricing like the above is ridiculous.

When you start looking at really high-end tech, items that have only one distributor in Europe but many in the US, like motion capture systems, the difference in pricing becomes prohibitive: we’re talking tens of thousands of dollars. It comes down to 1) exchange rates, 2) companies simply charging more in Europe, and 3) single distributors in a region facing no competition. It would be cheaper to set up a company in the US just to make these purchases, and I am sure people do.

But seriously, Adobe, you should be ashamed of yourselves.

posted by Chris at 3:33 AM  

Thursday, July 31, 2008

MGS4 Character Pipeline

Character Creation Pipeline

Thanks to my brother, Mike, for translating this from the original Japanese [here]

The hero, Snake, and nearly all of the other characters we animate on the PS3 that make an appearance in the game are constrained to a range of about 5,000 to 10,000 polygons (including the face). Also, the same-resolution polygon characters are used in both gameplay and “cutscenes.” This allows for seamless transitions between gameplay and cutscenes and makes it easier for the player to get emotionally involved in the reality of the game.

Furthermore, for all other characters except crowds, the same resolution of polygon model is used in-game as well as in cutscenes. Separate from the in-game resolution models used on the PS3, high-rez data is modeled at the same time to generate normal maps. Wrinkles in clothing and other details are expressed through these normal maps, created from the high-rez models.

Of all the bones within a character’s body, the number that contain and are driven by animation data is roughly 21. In reality, though, a number of helper (auxiliary) bones are used to supplement motions like twisting in the knees, elbows, arms, and legs. These, however, are not driven by animation data; instead, they reference the values of the basic animation-driven joints and move in like manner.
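As a minimal sketch of how such a reference-driven helper might be evaluated (the swing/twist decomposition, the names, and the 50% weight are my illustrative assumptions, not the actual MGS4 implementation):

```python
import math

def extract_twist(quat, axis):
    """Return the signed twist angle (radians) of a unit quaternion
    (w, x, y, z) about a unit axis, via swing/twist decomposition."""
    w, x, y, z = quat
    ax, ay, az = axis
    dot = x * ax + y * ay + z * az   # vector part projected onto the axis
    return 2.0 * math.atan2(dot, w)

def update_twist_helper(wrist_rotation, forearm_axis=(1.0, 0.0, 0.0), weight=0.5):
    """Evaluate a forearm twist helper at runtime: it carries no keys of
    its own, it just copies a fraction of the wrist's twist."""
    return weight * extract_twist(wrist_rotation, forearm_axis)

# A 90-degree wrist twist about X: the helper takes half of it (45 degrees).
quat_90_x = (math.cos(math.pi / 4), math.sin(math.pi / 4), 0.0, 0.0)
print(math.degrees(update_twist_helper(quat_90_x)))  # ~45.0
```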


The same method is employed on the PS3, not just in XSI; all you have to do is extract the helper bones’ definition files from the XSI data and you can achieve the same kind of control on the PS3 as well. (Awesome! Rig-syncing constraints and driven bones between the DCC app and the game engine.)
Since there is no actual motion data stored inside the driven bones, you not only limit the data volume, but also, in the event that you need to add or delete helper bones, there’s no need to reconvert the motion data: you can just adjust the model data instead.
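As a thought experiment, a helper-bone definition file might look something like this; the JSON schema, names, and numbers are all my assumptions, just to show how exporting rules instead of baked curves could work:

```python
# Hypothetical helper-bone definition file: export *rules* (source joint,
# twist axis, weight) instead of baked curves, so the engine can rebuild
# the same driven behavior. This schema is invented, not Konami's format.

import json

helper_bone_defs = [
    {"name": "forearm_twist_L", "source": "wrist_L",
     "axis": [1.0, 0.0, 0.0], "weight": 0.5},
    {"name": "knee_helper_L", "source": "knee_L",
     "axis": [0.0, 0.0, 1.0], "weight": 0.35},
]

with open("helper_bones.json", "w") as f:
    json.dump(helper_bone_defs, f, indent=2)

# Engine side: load the rules and evaluate them against the animated
# joints each frame. Adding or deleting a helper only touches this file;
# the converted motion data never changes.
with open("helper_bones.json") as f:
    for rule in json.load(f):
        print(f"{rule['name']} follows {rule['source']} at weight {rule['weight']}")
```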

posted by Chris at 8:27 AM  

Thursday, July 31, 2008

MGS4 Facial Animation

Shockingly Realistic Facial Animation

Thanks to my brother, Mike, for translating this from the original Japanese [here]

One of the most notable things about MGS4 is its world-leading, cutting-edge facial animation. Exactly how were these true-to-life facial expressions created?

Since the Metal Gear Solid series is lip-synced for each localization, voice analysis software is employed to keep the workload manageable.

In MGS4, for example, lip-syncing for Japanese and English was done separately with different voice analysis software. Emotions and expressions other than lip-sync were animated by hand. In nearly all cases, the expression and phoneme elements were worked on simultaneously, reducing interference and allowing MGS4 to achieve its simultaneous worldwide release.
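To picture what the analysis hands off to the rig, here is a toy sketch of a per-language phoneme track; the format and numbers are my assumptions, not the actual software’s output:

```python
# Toy sketch of voice-analysis output feeding the face rig: a per-language
# track of (time, phoneme, weight) samples drives the mouth automatically,
# leaving the emotion channels free for hand animation.

jp_track = [  # (seconds, phoneme, weight)
    (0.00, "a", 0.8),
    (0.12, "i", 0.6),
    (0.25, "u", 0.7),
]

def sample_phoneme(track, t):
    """Return the most recent (phoneme, weight) at time t. Step sampling
    for brevity; a real system would blend between neighboring samples."""
    current = (track[0][1], 0.0)
    for time, phoneme, weight in track:
        if time <= t:
            current = (phoneme, weight)
    return current

print(sample_phoneme(jp_track, 0.2))  # ('i', 0.6)
```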

When doing voice analysis, it’s necessary to set parameters separately for both expression components (anger, smile, etc.) and phoneme components (for every language). After setting this up, we need to see how it behaves as a rig. It’s possible to use parameters to drive the rotation and movement of bones; however, the rig can become more complicated, and it can become more difficult to predict how the bones will transform once enveloped. In other words, when facial animation is done by controlling only the bones, the designer’s job becomes less intuitive, and he runs into two problems: 1) expressing the behavior of bones, and 2) setting parameters for phonemes.

However, with shape animation (even though it has the drawback of linear interpolation), it’s extremely easy to set up parameters for all of your phonemes and expressions. Most of all, it’s advantageous in that the designer can intuitively predict the result.
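Here is a tiny sketch of what shape evaluation boils down to, and why the interpolation is linear: every vertex just lerps along its delta. All names and numbers are invented for illustration:

```python
# Toy sketch of shape (blendshape) evaluation: the result is a linear
# combination of the neutral pose and per-target vertex deltas. Setting a
# "smile" or "phoneme_A" parameter is trivial, but vertices travel in
# straight lines between targets, which is the drawback mentioned above.

def evaluate_shapes(neutral, targets, weights):
    """neutral: list of (x, y, z); targets: {name: list of (dx, dy, dz)};
    weights: {name: weight}. Returns blended vertex positions."""
    result = [list(v) for v in neutral]
    for name, weight in weights.items():
        for i, (dx, dy, dz) in enumerate(targets[name]):
            result[i][0] += weight * dx
            result[i][1] += weight * dy
            result[i][2] += weight * dz
    return result

# One mouth-corner vertex blending a smile with an "A" phoneme:
neutral = [(0.0, 0.0, 0.0)]
targets = {"smile": [(0.5, 0.3, 0.0)], "phoneme_A": [(0.0, -0.4, 0.1)]}
print(evaluate_shapes(neutral, targets, {"smile": 0.7, "phoneme_A": 0.3}))
# -> [[0.35, 0.09..., 0.03...]]
```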

For these reasons, this time our rig used bone-driven animation based on the results of the various parameter shapes.
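As a rough sketch of that idea (my simplification, not the actual MGS4 rig): the parameter shapes deform a lo-poly proxy mesh, bones are constrained to points on that proxy, and the render mesh is enveloped to the bones, as the pipeline list below also shows.

```python
# Sketch of the hybrid: shapes deform a lo-poly proxy, then each facial
# bone is constrained to a proxy vertex, and the final mesh is enveloped
# to the bones. The one-vertex-per-bone binding is my simplification.

def drive_bones_from_shapes(deformed_proxy, bone_bindings):
    """deformed_proxy: blended vertex positions (e.g. the output of
    evaluate_shapes above); bone_bindings: {bone_name: proxy_vertex_index}.
    Returns the position each bone snaps to this frame."""
    return {bone: deformed_proxy[idx] for bone, idx in bone_bindings.items()}

# Intuitive shape parameters in, clean bone/envelope deformation out:
proxy = [(0.35, 0.09, 0.03)]  # proxy vertex after shape blending
print(drive_bones_from_shapes(proxy, {"lip_corner_L": 0}))
```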

With this setup, combining voice-analysis-automated animation (not just the mouth, but automatic animation of the tongue and throat phonemes as well) with hand animation for emotions, we are able to achieve an abundance of realistic expressions.

In the following Flash movie you can see how smooth, muscular expressions are achieved through superb rig setup.

Flash Movie:

Facial rig setup pipeline

————————————————
1. Lo-poly model driven by shape animation
2. Above that, the constrained bones
3. Polygonal mesh enveloped to the bones
4. Tangent color
5. OpenGL display (wrinkles also expressed with a normal map)

————————————————

Expressions, phonemes, eyes (eyebrows), and shader-driven wrinkle animation are all tab-selectable.
Through the combination of various parameters we can create lifelike expressions like those shown above.

The most surprising thing is that we developed a tool that automatically sets up this facial rig that allows such sophisticated control. In other words, if you feed in the facial model data and run the tool, it will automatically identify the optimal positions for bones; the tool also creates controls that include the preset parameters for emotions (a smiley face, an angry face, etc.). To perform the automated facial rigging, the facial data’s topology information needs to be standardized ahead of time. If you adhere to this one rule, your setup can be done automatically; all that’s left is for the designer to fine-tune the controls, and you have an environment where you can get right into your facial animation.
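The standardized topology is what makes this possible. Here is a toy sketch of the idea, with invented vertex indices and bone names: if every head shares vertex order, bone placement is a simple lookup.

```python
# Toy sketch: with standardized topology, every conforming head mesh
# shares vertex order, so bone placement is a table lookup. The vertex
# indices and bone names below are invented for illustration.

FACIAL_LANDMARKS = {  # vertex index -> bone to create there
    1203: "jaw",
    877: "lip_corner_L",
    912: "lip_corner_R",
    445: "brow_inner_L",
}

def auto_place_facial_bones(vertices):
    """vertices: list of (x, y, z) for a mesh that conforms to the
    standard topology. Returns {bone_name: position} for rig building."""
    return {bone: vertices[idx] for idx, bone in FACIAL_LANDMARKS.items()}
```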

Next, a rig that controls the movement of the eyeball and surrounding muscles can also be generated automatically using this tool. Since the area around the eye, like the area around the mouth, is controlled by the simultaneous use of shapes and bones, when you move the eyeball locator you get smooth muscular movement. What’s more, even if you edit the shapes or redefine the configuration of the outline of the eye, it doesn’t disrupt the expression of brow wrinkles or the blinking of the eye in any way.

This setup and animation system is implemented behind every character that appears in the game and appeals to the player’s emotions, and through it we are able to raise and maintain the quality of the user experience.

posted by Chris at 1:14 AM  

Monday, June 23, 2008

Under the Hood: The Inner Workings of Animation on Assassin’s Creed

Sylvain Bernard, Animation Director, Ubisoft

Animation:

  • All animation was done in 3dsMax with Biped
    • ‘Our animators do not like MotionBuilder for creating animation’
    • Would have meant porting all their tools to MotionBuilder
  • MotionBuilder was only used to clean mocap
  • They decided to ignore foot sliding in order to concentrate on a better performance and gameplay experience
  • They stressed the importance of Technical Animators
  • Up to 15 animators worked on Assassin’s Creed
  • 40% of all animation was hand keyed
  • There is no procedural animation (not counting blending)
  • They showed the entire move tree (a rough sketch follows this list)
    • sprint, run, walk, jog, slow walk, banking, strafe, 4 idles
    • 168 ground animations in the Altaïr locomotion group
    • 122 anims in the climbing group
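Here is a rough sketch of what a move tree like this could look like as data; the structure, speed thresholds, and clip names are my own guesses for illustration, not Ubisoft’s actual system:

```python
# Rough sketch of a locomotion move tree as data: a gait is chosen by
# speed, then a concrete clip comes from that gait's group. Structure,
# thresholds, and clip names are invented for illustration.

LOCOMOTION_TREE = {
    # gait: (minimum speed in m/s, clips in the group)
    "idle":      (0.0, ["idle_01", "idle_02", "idle_03", "idle_04"]),
    "slow_walk": (0.3, ["slow_walk_fwd", "slow_walk_bank_L", "slow_walk_bank_R"]),
    "walk":      (1.2, ["walk_fwd", "walk_strafe_L", "walk_strafe_R"]),
    "jog":       (2.5, ["jog_fwd", "jog_bank_L", "jog_bank_R"]),
    "run":       (4.0, ["run_fwd", "run_bank_L", "run_bank_R"]),
    "sprint":    (6.0, ["sprint_fwd"]),
}

def select_gait(speed):
    """Pick the fastest gait whose speed threshold is met."""
    best = "idle"
    for gait, (min_speed, _clips) in LOCOMOTION_TREE.items():
        if speed >= min_speed and min_speed >= LOCOMOTION_TREE[best][0]:
            best = gait
    return best

print(select_gait(3.1))  # 'jog'
```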

Production:

  • 90% of work was integrating animation into the environment
  • The key was pairing animators with programmers
    • Sit them together
  • Before they started, one main goal of the project was ‘to do as much animation as we could’
    • They saw Next Gen as an animation showcase
  • They prototyped gameplay in Max to show programmers how the game should look/feel
    • How AI should react
    • How a character should interact with the environment
  • ‘In the beginning designers were given free rein to make anything they wanted; in the end we had to make a 20-page document telling them how to create levels’
    • Too much freedom leads to chaos
  • Stressed the need to involve animators in animation system development

Pipeline/Rigging:

  • All characters share the same skeleton (male/female NPCs, Altaïr)
    • ‘The art director wanted characters of different heights; we said no.’
    • made mocking things up easy
  • They call their movement locator the ‘magic bug’
    • Locators ‘joined together’ when two characters interacted
  • NPCs use simple hinge constraints for ponytails and things
  • They had ‘no working AI for almost the first two years’ of the project
  • They do edge detection on the collision mesh
  • Auto nav mesh generation
  • Auto ‘animation object’ placement
posted by Chris at 12:34 PM  

Sunday, June 22, 2008

3D Models not Subject to Copyright

I saw this over at slashdot:

“The US Court of Appeals for the Tenth Circuit has affirmed (PDF) a ruling that a plain, unadorned wireframe model of a Toyota vehicle is not a creative expression protected under copyright law. The court analogized the wire-frame models to photographs: the owner of an object does not have a copyright in all images of the object, but a photographer may have a limited copyright over a particular image based on artistic choices such as costumery, lighting, posing, etc. Thus, the modelers could only copyright any ‘incremental contribution’ they made to Toyota’s vehicles; in the case of plain models, there was nothing new to protect. This could be a two-edged sword — companies that produce goods may not be able to stop modelers from imaging those products, but modelers may not be able to prevent others from copying their work.”

This will have some interesting ramifications. And I don’t just mean for the Limbo of the Lost guys. (j/k)

posted by Chris at 11:09 PM  