Stumbling Toward 'Awesomeness'

A Technical Art Blog

Wednesday, August 31, 2016

Calibrating the Alienware 13 R2 OLED Laptop

Last month Dell ran its Black Friday in July sale, and this beauty was 500 dollars off, plus an additional 10% if you ordered by phone. I decided it was time to replace my beloved Lenovo x220t.

The Alienware 13 might be ugly and lack a Wacom digitizer, but it does have an nVidia GTX 965M and an OLED screen with 211% sRGB coverage! As the Lenovo Yoga X1 only has integrated graphics, I think the Alienware is the machine for 3D artists.

If you’re a gamer who landed here wondering how to get the most out of your amazing display, or why everything is super-red-pink, it’s time to put your big boy pants on! Calibrating the monitor wasn’t so straightforward, but let’s jump into it.


We are going to use an open source color management toolkit called ArgyllCMS [Download it here]. It can use many different hardware calibration devices; I have used it with the X-Rite Huey and the Spyder5.

One thing that’s important to know is that all the sensors are the same; you only pay for software features. If you don’t own a calibrator, you can buy the cheapest Spyder, because it’s the same sensor and you’ll be using this software, not the OEM software.


Next we’re going to use a GUI front end built to make ArgyllCMS more user friendly. It’s called DisplayCAL, but it requires a lot of libs (NumPy, wxWidgets, etc.), so I recommend downloading the zero install that has everything.
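If you’d rather skip the GUI, ArgyllCMS itself is driven entirely from the command line. Here is a rough sketch of the equivalent workflow; the display name, patch count, and quality flags are illustrative choices, not the only valid ones, so check each tool’s built-in help for your setup:

```shell
# Calibrate the display (-qm = medium quality, -t 6500 = D65 white point target)
dispcal -v -qm -t 6500 alienware_oled

# Generate a set of test patches for profiling (-d3 = display, -f 128 = patch count)
targen -v -d3 -f 128 alienware_oled

# Measure the patches with your colorimeter, applying the calibration from dispcal
dispread -v -k alienware_oled.cal alienware_oled

# Build the ICC profile from the measurements, then install it
colprof -v -A "Dell" -M "Alienware 13 OLED" -qm alienware_oled
dispwin -I alienware_oled.icm
```

DisplayCAL is essentially orchestrating these same tools for you, which is why it needs ArgyllCMS installed underneath.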


Be sure to turn ‘White level drift compensation’ on. You will need to ignore the RGB adjustment it first asks you to fuss with, because there is no RGB adjustment on your monitor.

When you are through, you will see the following (or something like it!):


Note: DisplayCAL can also output a 3d LUT for madVR, which works with any number of video playback apps. Check here to force your browser to use your color management profile. If it’s useful, I can make another post detailing all the different settings and color-managed apps to make use of your monitor.

I hope you found this useful, it should apply to the Lenovo Yoga X1 and any other OLED laptops in the coming months.

posted by Chris at 9:23 PM  

Tuesday, August 30, 2016

The Jaw

All ‘virtual’ joints that we place are based on virtual anatomical surface landmarks. By ‘virtual’ I mean the polygonal rendermeshes that our ‘puppet’ will drive. As a rigger, you have to be able to look at the surface anatomy (polygonal mesh) and determine where a joint should be placed, but the solution is not always obvious.

“The Dental Distress Syndrome” (Dr. A.C. Fonder, 1988)

The jaw is one of these tricky situations! Many people think the jaw rotates from the ‘socket’ or ‘fossa’ that the jaw (or mandible — or mandibular condyle) fits into, but this is not the case.

As riggers, we sometimes need to ignore internal anatomy and focus on surface anatomy, which is our final deliverable. This means thinking about the center of rotation for the entire mass of flesh (or vertices) that we’re moving. For the jaw of a human(oid), this pivot is under the earlobe when viewed from the side.
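To make that concrete, here is a minimal 2D sketch (side view, x forward, y up) comparing a jaw-open rotation pivoted at the TMJ ‘socket’ versus a pivot under the earlobe. The landmark coordinates are made-up illustrative values, not measured anatomy:

```python
import math

def rotate_about(point, pivot, angle_deg):
    """Rotate a 2D point about a pivot by angle_deg (counterclockwise)."""
    a = math.radians(angle_deg)
    x, y = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + x * math.cos(a) - y * math.sin(a),
            pivot[1] + x * math.sin(a) + y * math.cos(a))

# Hypothetical side-view landmarks (cm), for illustration only:
chin = (8.0, -4.0)           # chin point on the render mesh
tmj_socket = (0.0, 0.0)      # mandibular fossa ('socket')
under_earlobe = (0.5, -1.5)  # pivot just under the earlobe

# Open the jaw ~20 degrees from each candidate pivot:
open_from_socket = rotate_about(chin, tmj_socket, -20.0)
open_from_earlobe = rotate_about(chin, under_earlobe, -20.0)

# The chin lands in a noticeably different place depending on the pivot,
# which is exactly why joint placement matters for the deformed surface.
print(open_from_socket, open_from_earlobe)
```

Run it and you’ll see the two pivots send the chin to different positions; the wrong pivot is what produces the ‘derp’ open shown above.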

Rotating directly from the ‘socket’ would result in the incredible ‘derp’ shown above, but the temporomandibular joint (TMJ) doesn’t work like this when we open our jaw/mouth.


Instead, the jaw/mandible slides forward as the mouth opens, like you see in this ‘live MRI’ slice above, resulting in this sexay jaw open below. You can see her mandible/jaw slide forward as she opens her mouth.

This post is of course ignoring the fact that you can rig the jaw in a way that, through a combination of rotation and translation, uses the TMJ as a pivot yet rotates around the true center of the jaw’s mass described above. You can do virtually anything, you can drive with your feet; that doesn’t mean it’s a good idea. This post is probably most important for riggers creating characters fast, using tools that generate skeletons from user-placed signposts or locators.
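That rotation-plus-translation equivalence can be checked numerically: two rigid transforms with the same rotation angle but different pivots differ only by a constant translation, so a rotation about the TMJ plus the right slide lands every vertex exactly where a rotation about the other pivot would. The coordinates below are again illustrative placeholders:

```python
import math

def rotate_about(point, pivot, angle_deg):
    """2D rotation of point around pivot (side view)."""
    a = math.radians(angle_deg)
    x, y = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + x * math.cos(a) - y * math.sin(a),
            pivot[1] + x * math.sin(a) + y * math.cos(a))

tmj = (0.0, 0.0)            # 'socket' pivot (illustrative)
mass_center = (0.5, -1.5)   # pivot under the earlobe (illustrative)
angle = -20.0

# The compensating slide is the same for every vertex, so it can be
# measured once at any reference point:
ref = (8.0, -4.0)
slide = tuple(m - t for m, t in zip(rotate_about(ref, mass_center, angle),
                                    rotate_about(ref, tmj, angle)))

# Any other vertex lands in the same place either way:
v = (5.0, -2.0)
direct = rotate_about(v, mass_center, angle)
via_tmj = tuple(r + s for r, s in zip(rotate_about(v, tmj, angle), slide))
```

`direct` and `via_tmj` agree to floating-point precision, which is why a TMJ-pivoted joint driven by both rotation and translation can reproduce the correct surface motion.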

Speaking of signposts and locators, another tool to help you with placement is to constrain a toroid or circle to your joint, it can help you visualize the jaw swing quickly:


For those interested in further investigation, the paper ‘Rotation and Translation of the Jaw During Speech’ by Jan Edwards and Katherine Harris (1990) can be downloaded [here].

Jaw and Teeth Placement (Modeling)


Quite a few people have told me they found this helpful. I would like to add that teeth placement is very important and something you should check before rigging.  For modelers, here are some radiograms showing teeth placement in reference to the facial surface anatomy (click to enlarge):

Setting the teeth the correct distance from the lips is important; I urge any facial modelers to take an interest in forensic facial reconstruction. Books like Forensic Art and Illustration have lots of good data, like the Rhine facial soft tissue depth charts. Keeping in line with my post above, we need to know the depth of the upper and lower teeth in relation to our surface anatomy.

As you see above, we’re primarily interested in 6, 7, 20, and 21. Rhine et al. created charts for caucasoid and negroid Americans of varying builds. (Click below to enlarge)


NOTE: Soft tissue thickness charts for the face are also a great place to gut-check your subsurface scattering maps and profiles!

Next, it’s also important to place the teeth so that they are large and wide enough. Below, notice the item marked ‘J’; this is the average upper lip line in relation to the upper teeth.



Above is a forensic facial reconstruction proportion guideline. I like to augment that a bit to help with teeth placement. I find it helpful to check the incisors against the nostrils, and the pupils against the lip corners/molars. Here are some examples of that (click to enlarge):


posted by Chris at 5:36 PM  
