Last month Dell had the Black Friday in July sale and this beauty was on sale for 500 dollars off, plus 10% if you ordered by phone. I decided it was time to replace my beloved Lenovo x220t.
The Alienware 13 might be ugly and lack a Wacom digitizer, but it has an NVIDIA GTX 965M and an OLED screen with 211% sRGB coverage! As the Lenovo Yoga X1 only has integrated graphics, I think the Alienware is the machine for 3D artists.
If you’re a gamer who landed here wondering how to get the most out of your amazing display, or why everything is super-red-pink, it’s time to put your big boy pants on! Calibrating the monitor wasn’t so straightforward, but let’s jump into it.
One important thing to know: within a vendor's calibrator line, the sensors are all the same; you only pay for software features. If you don’t own a calibrator, you can buy the cheapest Spyder, because it has the same sensor as the expensive models, and you will be using this software rather than the OEM software anyway.
Next we’re going to use a GUI front end built to make ArgyllCMS more user-friendly. It’s called DisplayCAL, but it requires a lot of libraries (NumPy, wxWidgets, etc.), so I recommend downloading this zero-install package that has everything.
Be sure to turn ‘White level drift compensation’ on. Ignore the RGB adjustment it first asks you to fuss with, because there is no RGB adjustment on your monitor.
When you are through, you will see the following (or something like it!):
Note: DisplayCAL can also output a 3d LUT for madVR, which works with any number of video playback apps. Check here to force your browser to use your color management profile. If it’s useful, I can make another post detailing all the different settings and color-managed apps to make use of your monitor.
I hope you found this useful, it should apply to the Lenovo Yoga X1 and any other OLED laptops in the coming months.
After Ryse wrapped, Colleen and I went diving in Asia for a month. I finally finished the epic After Effects project (srsly, not for the faint of heart; I think I had over 200 layers). So yeah, my love for creatures has completely enveloped my spare time as well.
Colleen shot roughly half of this; while I’m fiddling around with my aperture and strobes, she’s already gotten a video of the thing doing a backflip while waving to the camera. The little frogfish yawning, the tozeuma shrimp changing directions, and others are all hers.
The Nikon 400mm 2.8 has a lens hood that costs $400. For that price, you would think they’d use pretty solid parts, but the block that the thumbscrew goes into is actually hard plastic. If your lens is in a backpack with the hood attached, this retaining screw will strip.
I have read many forum posts where people begrudgingly REPLACED THE ENTIRE HOOD because of this. People said they contacted Nikon directly and were told there are no replacement parts.
After contacting Nikon, I would like anyone googling for a solution to know that you can order the parts from Nikon, the parts and numbers are pictured above. Contact the Nikon Parts Service at: 310-414-5121
GoPro makes small $250 no-frills video cameras that record 1080p and come in waterproof polycarbonate housings rated to 60m depth. They have a 170º angle of view, glass lens, fixed focus (2.5ft – ∞), f/2.8 aperture, and 2.5 hour battery life. These cameras are ‘bare bones’: there is no way to know if one is recording other than looking directly into the lens; there is no rear-facing LCD or even a blinking LED!
Not Usable Under Water! – I did some test dives as soon as I received the housing and was in for a rude awakening. The glass domed ports blur the image underwater. This is because domed ports create a secondary focal point or ‘virtual image’ underwater that must be focused on. It seems that GoPro did not take this into account; after contacting them directly I was told: “It is not possible at this time for the GoPro Hero to focus in an underwater environment.” One funny thing to point out: the cool underwater videos on GoPro’s own site are not shot with their own lens/housing!
SOLUTION: 3rd Party Lenses or Housing – That’s right, to use your GoPro Hero underwater you have to buy a replacement housing from a 3rd party with flat ports (crisp images underwater). At the time of my writing this, there were none available for the 3d Hero System, so I purchased replacement lenses from Pursuit Diving. These lenses are very soft polycarbonate, and you might want to carry some Novus 2 polish with you as they scratch easily [image]. Mine also had some small areas of blurriness: this is not an ideal solution. Eye of Mine has a complete 3d housing replacement in the works, and GoPro themselves say they are ‘working’ on a solution. Either way, be warned: These cameras are unable to produce decent images underwater!
Poor Dynamic Range / Image Quality – As you see below, bright highlights easily get blown out. They claim the 1/2.5″ CMOS sensor is great for low light (>1.4 V/lux-sec), this may be, but it is woefully bad at images that vary in bright and dark.
Highlights are easily blown out, and create bad image artifacts
The H264 (12mbit) really butchers the image at times (PNG)
Rolling Shutter Artifacts (Wobble or Jelly) – Like most CMOS video cameras, the GoPro has some rolling shutter issues; I would say more than other CMOS cameras I have used. For a camera that is meant to be strapped to moving objects, this is pretty bad! You have no control over the shutter speed, so unfortunately the less light, the more rolling shutter artifacts. Here’s an example looking out my window, but you can also see this in the Thistlegorm wreck footage below.
There is a great free solution for VirtualDub called DeShaker. For the HD Hero you should enter a rolling shutter amount of 82%.
Poor Battery Retention – On more than one occasion I left full batteries in the camera and did not turn it on for one to two days. I was often surprised to find the batteries low or half-drained. I have many other small Canon, Fuji, etc. cameras, and they have much better retention.
3D Hero System
Before, if you wanted to make a GoPro s3d rig, you had to put both cameras on a plate, then clap something in front of them (like a slate) to later sync the videos by audio waveform. Not only that, but the cameras dropped frames, so you had to time-warp the footage to account for drift: it was less than ideal.
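For anyone curious how the audio-waveform sync works in principle: you find the shift that best lines up the clap in both tracks. Here's a brute-force cross-correlation sketch (hypothetical, pure Python on raw sample lists; real tools work on decoded audio and use FFT-based correlation):

```python
def best_offset(a, b, max_shift):
    """Estimate how many samples track b lags track a by picking the
    shift that maximizes their cross-correlation (the 'clap' lines up)."""
    def corr(shift):
        return sum(a[i] * b[i + shift]
                   for i in range(len(a)) if 0 <= i + shift < len(b))
    return max(range(-max_shift, max_shift + 1), key=corr)

# a sharp clap at sample 10 in one track and sample 13 in the other
left = [0.0] * 32
right = [0.0] * 32
left[10] = 1.0
right[13] = 1.0
print(best_offset(left, right, 8))  # -> 3 (right lags left by 3 samples)
```

Once you know the offset, you simply trim that many samples (or frames, at the video frame rate) off the lagging clip.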
In March 2011 GoPro released the ‘3D Hero System’, which is a new housing and a sync cable for two existing cameras. They also purchased CineForm and skinned its Studio software to make the s3d workflow slightly less painful; though if you know what you’re doing, the CineForm app can be pretty obnoxious and unintuitive.
Somewhat Buggy / Unreliable
I was surprised by the bugs I encountered. It’s very frustrating to think you are recording something and later realize that the camera rig left you with unusable data. For instance, I thought the cameras were recording for an entire dive, but it turned out that they somehow entered a state where they made 400+ individual one second MP4 files. Other times one camera would turn off, or unsync and begin recording in a separate format (like one eye 1080p, the other 5mp stills). Many times one battery would run out well before the other, in which case you at least have 2d video, but still annoying.
Sync Cable does not ‘Sync’ Cameras
The ‘Sync Cable’ does not really ‘sync’ the cameras; treating it as a sync cable will only lead to complete frustration. Think of it instead as a ‘controller cable’, where one camera is the master and the other a slave, or you will end up with only one camera recording. Again, the functions of the cameras are not sync’d! The camera with the cable end marked ‘R’ must start/stop the recording for both cameras to record. It is easy to place the cameras in the housing so that the ‘slave’ camera’s shutter button is the one pressed; this does not work, so be careful!
Here is a schematic of the ‘sync cable’ for DIY people.
Sync’d Recording Not Perfect (Don’t think ‘Genlock’)
While better than clanging a metal bar and time-warping to the audio, the sync cable doesn’t really sync the sensors. The videos can be a full frame off; maybe the CineForm software compensates for this.
As you can see, the right (master) camera is a frame or so ahead of the left
Camera Settings Like White Balance and Exposure NOT Sync’d
Many times I found myself with stereo video where each eye is wildly different, whether it’s exposure, white balance, etc. It’s frustrating, and the included CineForm software doesn’t offer much of a solution.
CineForm Software is Slow, Can be Frustrating
An example of the frustration advanced users will have: “EXPORT TO MP4” is just a big button; nothing about datarate or other export options, just a button. The UI has been dumbed down to the point of ambiguity: not useful to advanced users, yet not easy enough for novices. They may as well have gone all the way and added an “UPLOAD TO YOUTUBE 3D” button.
The interocular of the housing is ~3.5cm, which is a bit too close for my liking. Reducing the interocular to something smaller than the distance between your eyes weakens the 3d effect and makes things appear larger than they really are. The interocular was decent for underwater, and I guess if you are filming yourself on a surfboard, but not great for driving through the Serengeti. The sync cable is of fixed length, so you cannot use it with other GoPro housings.
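As a rough rule of thumb (an approximation, not rigorous stereography math, and the 6.5cm human baseline is my assumption), shrinking the baseline scales up the apparent size of the world proportionally:

```python
HUMAN_IO_CM = 6.5   # assumed average adult interocular distance
rig_io_cm = 3.5     # the 3D Hero housing's baseline

# smaller-than-human baseline -> gigantism: the world reads larger than life
apparent_scale = HUMAN_IO_CM / rig_io_cm
print(round(apparent_scale, 2))  # -> 1.86, i.e. the scene reads almost twice life-size
```

This is also why the housing works better underwater, where subjects are close and a small baseline is actually desirable.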
Unable to Use Other Attachments
Because the sync cable uses the expansion port, and the housing doesn’t accommodate them, you cannot use the LCD BacPac or the larger battery with the 3D Hero System.
Final Thoughts and Some Footage!
Sure, I pointed out a lot of issues, but for the price, the GoPro system is pretty great. The housing, though cheap, never flooded (many 30m dives). This is the first footage I have posted, and I have not post-processed it much. I may make another post once I figure out the best ways to automatically post-process the footage to remove artifacts and distortion.
I have been researching the best options available for the D300 when it comes to quickly generating some lightprobes/panoramas. This of course means fisheye lenses. Currently, Sigma is the only company that makes a 180 degree circular fisheye. They come in two flavors, 8mm, and 4.5mm. The 8mm projects a full circle onto a full 35mm sensor (full frame), but on an APS-C sensor it is cropped. The 4.5mm however, throws a perfect circular image onto an APS-C sized sensor; I believe it is the only lens that does this.
You would think that the 4.5mm would be the way to go; I did, until I took a look at both. It really comes down to pixels. The image circle thrown by the 4.5mm lens is roughly 2285px in diameter. So while you can shoot fewer frames, an entire panorama taking about 3 shots, it comes out as a <4k equirectangular. With the 8mm you need 4 shots plus one zenith (5 shots total), and it generates an 8k image. While the 4.5mm does cover 180 degrees across, as you can see it is very wasteful.
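The back-of-the-envelope math (my numbers, approximate): the angular resolution of the fisheye's image circle caps the equirectangular output, no matter how many shots you stitch.

```python
circle_px = 2285                    # image-circle diameter of the 4.5mm on APS-C (approx.)
px_per_degree = circle_px / 180.0   # the circle's diameter spans a full 180 degrees

# a full equirectangular pano covers 360 x 180 degrees
equi_w = round(360 * px_per_degree)
equi_h = round(180 * px_per_degree)
print(equi_w, equi_h)  # -> 4570 2285: roughly 4k wide, before any resampling losses
```

The 8mm crops the circle but packs far more pixels per degree onto the sensor, which is why five 8mm shots stitch to 8k while three 4.5mm shots top out around 4k.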
So why doesn’t the lens have full coverage in at least the short dimension? Because it’s designed to fit Canon and Sigma bodies, not just Nikon. Canon sensors have a 1.6 crop factor, and Sigma’s Foveon X3 a 1.7 crop factor; Nikon’s DX format, at a 1.5 crop factor, is larger than either, so the circle covers even less of it. The actual circle measures 12.3mm, small even for the Sigma’s 13.8mm sensor height, which makes me believe they future-proofed it for Four Thirds.
For an APS-C sensor like the D300’s, I would recommend the 8mm, unless you really need a full uncropped circle. The 4.5mm, while more expensive, also has a faster f/2.8 aperture, compared to the 8mm’s f/3.5.
I am not super constrained on time, but if you are on set shooting bracketed probes between takes, the 4.5mm will save you two positions (18 pictures), and that might be preferable. That said, it will only generate a 4k image in the end (which might be enough).
This is what the final product will look like. Two D300s, mounted as close as possible, with sync’d metering, focus, flash, and shutter. Rig cost: less than 30 dollars! Of course you are going to need two D300s and paired lenses: primes, or zooms with a wide rubber band spanning them if you are really hardcore. Keep in mind, the interocular is 13.5cm, a tad more than double the normal human interocular, but it’s the best we can do with the D300 [horizontal].
Creating the Camera Bar
This mainly involves going to the local hardware store with your D300 and looking at the different L-brackets available. It’s really critical that you get the cameras as close as possible, so mounting one upside down is preferable. It may look weird, but heck, it already looks weird; might as well go all the way.
I usually get an extra part for the buttons, because they will need to be somewhere that you can easily reach
Creating the Cabling
Nikon cables with Nikon 10-pin connectors aren’t cheap! The MC-22, MC-23, MC-25, and MC-30 are all over 60 dollars! I bought remote shutter cables at DealExtreme.com. I wanted to make my own switch, use my GPS, and be able to change the interocular, so the below describes that setup. If you just want to sync two identical cams, the fastest way is to buy a knock-off MC-23: the JJ MA-23 or JJ MR-23. I bought two JueYing RS-N1 remote shutters and cut them up. [$6 each]
I only labeled the pins most people would be interested in, for a more in depth pin-out that covers more than AE/AF and shutter (GPS, etc), have a look here. I decided to use molex connectors from RC cars, they make some good ones that are sealed/water-resistant and not too expensive.
The cables have a pretty short lead. This is so that I can connect them as single or double, with an interocular as wide or as narrow as any extension cable I make. The next step is to wire these to a set of AF/AE and shutter buttons.
Black focuses/meters, and red is the shutter release. It’s not easy to find buttons with two press states (half press and full press). As you see above, shutter release is the combination of AF/AE, ground, and shutter. This is before the heat shrink is set in place.
So that should be it. Here’s my first photo with sync’d metering, focus, flash, and shutter. They can even do bursts at high speed. Next post I will try to look into the software side, and take a look at lens distortion, vignetting, and other issues.
I got some good feedback from the last post and updated the script to export JPEG Stereo (JPS) and PNG Stereo (PNS, really). This way you can convert your image pairs into a single lossless image that you can pop into Photoshop and adjust HSV/levels, etc.
This is a super simple python script with no error handling. Also keep in mind that, coming from most modern camera rigs, you are saving something like a 20-40 megapixel compressed PNG here; wait until it says it is done saving, it may take a few seconds.
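For reference, the JPS layout itself is trivial: both eyes in one frame, side by side, with the right-eye view on the left half (the cross-view convention). A sketch of just the packing step, on images represented as lists of pixel rows (hypothetical helper, not the actual script):

```python
def pack_jps_rows(left_rows, right_rows):
    """Pack two equally sized images, given as lists of pixel rows,
    into one side-by-side frame: right-eye image on the LEFT half."""
    assert len(left_rows) == len(right_rows)
    return [r + l for l, r in zip(left_rows, right_rows)]

# two tiny 2x2 'images'
left = [[1, 2], [3, 4]]
right = [[5, 6], [7, 8]]
print(pack_jps_rows(left, right))  # -> [[5, 6, 1, 2], [7, 8, 3, 4]]
```

Save the packed frame as .jps (JPEG) or .pns (PNG) and viewers like Stereo Photo Maker will pick up the convention from the extension.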
Many stereo cameras are using the new MPO format to store multiple images in one file. Unfortunately, almost nothing works with these files (other than Stereo Photo Maker). Here is a simple python wrapper around ExifTool that will extract the right and left images and return EXIF data as a dict. This is probably easier than explaining how to use ExifTool directly, as you can see from the simple wrapper code.
#Name of MPO file, name of output, whether or not you want all EXIF in a txt log
mpo.extractImagePair('DSCF9463.MPO', 'DSCF9463', True)
#>> Created DSCF9463_R.jpg
#>> Created DSCF9463_L.jpg
#>> Writing EXIF data
The above leaves you with two images and a text file that has all the EXIF data, even attributes that xnView and other apps do not read:
exif = getExif('DSCF9463.MPO')
print exif["Convergence Angle"]
#>> 0
print exif["Field Of View"]
#>> 53.7 deg
print exif["Focal Length"]
#>> 6.3 mm (35 mm equivalent: 35.6 mm)
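If you'd rather not shell out to ExifTool at all, an MPO is essentially concatenated JPEGs, so a naive splitter can just scan for the JPEG start-of-image marker. A sketch (hypothetical; note that embedded EXIF thumbnails also begin with this marker, which is exactly the kind of edge case ExifTool handles for you):

```python
def split_mpo(data):
    """Split MPO bytes on the JPEG SOI marker (FF D8 FF)."""
    marker = b'\xff\xd8\xff'
    starts = []
    i = data.find(marker)
    while i != -1:
        starts.append(i)
        i = data.find(marker, i + 3)
    # each image runs from its marker to the start of the next one
    return [data[s:e] for s, e in zip(starts, starts[1:] + [len(data)])]

# two fake minimal 'JPEGs' glued together, as in an MPO
fake = b'\xff\xd8\xff\xe1AAAA' + b'\xff\xd8\xff\xe1BBBB'
print([len(p) for p in split_mpo(fake)])  # -> [8, 8]
```

For real files, the ExifTool wrapper above is the robust route; this is just to demystify the container.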
The market is flooded with cheap ‘TN’ TFT panels. TN (twisted nematic) panels are terrible when it comes to reproducing color and have a very limited viewing angle. I used to have one and if I just slouched in my chair (or sat up too straight) the black level would change drastically. These panels are much cheaper to manufacture, so vendors have been flocking to them for years.
As artists, we need at least _decent_ color, even on our home machines. Because it can sometimes be difficult to determine the actual panel used in a display, and because I care, I have compiled a list of >24″ monitors and their panel types. I really would have liked to have had this last week.
EDIT: I would like this to be a ‘living document’ of sorts; please send me terms and definitions, and feel free to correct mine!
Whether you’re a technical artist in games or film, when trying to create realistic scenes and characters, the more you know about how light works and interacts with surfaces in the world, and the more reference of this you have, the better you can explain why you think an image looks ‘off’.
As a technical artist, you need to be able to communicate with technical people using terminology they understand. We often act as bridges between artists and programmers, so it is very important for us to be able to communicate with both appropriately.
Light transport is basically the big nerd word for how light gets from one place to another, and scattering is usually how surfaces interact with light.
You can see something in a rendered image and know it looks ‘wrong’, but it’s important to understand why it looks wrong, and be able to accurately explain to the programming team how it can be improved upon. To do this you should be able to:
1) present examples of photographic reference
2) communicate with general terms that others can understand
The following terms come from optics, photography, and art. You should not only understand these, but use them when explaining why something does not look ‘right’. I will give both the technical term and my shortest approximation:
Specular Reflection – sharp reflection of light from a surface that somewhat retains an image (e.g. glossy)
Diffuse Reflection – uneven reflection of light from a surface that does not retain the image (e.g. matte)
Diffuse Interreflection – light reflected off other diffuse objects
Diffraction – what happens to a wave when it hits an obstacle; this could be an ocean wave hitting a jetty, or a light wave hitting a grate
Depth of Field – the area in an image that is in focus
Bokeh – the blurry background in a photo that is not in focus
Chromatic Aberration – the colored fringes around an object or light refracted through an object, caused by certain wavelengths of light getting bent ‘out-of-sync’. I usually think of an old projector or monitor that is misaligned; that’s what this effect looks like.
Caustics – light rays shined through a refractive object onto another surface
Angle of Incidence – the angle something is off from ‘straight on’; we mainly use this when talking about shaders or things that are view-dependent. If you were to draw a line from your eyes to a surface, the angle between this and the surface’s ‘normal’ is the ‘angle of incidence’. Car paint whose color changes as you walk around it is a good example: it changes based on the angle you see it from. Just remember, your head doesn’t have to move; the object can move, changing the angle between your sightline and the surface.
Refractive Index (Refraction) – how light’s direction changes when moving through an object. The refractive index of water is 1.3; glass is higher, at 1.4 to 1.6.
Reflection – the changing of direction of light, usually casting light onto something, like the camera or our eyes
Glossiness – the ability of a surface to reflect specular light; the smaller the area of specular light reflected, the ‘glossier’ something usually looks
Ray – think of a ray as a single beam of light, a single particle. This particle moves in a ‘ray’; when we talk about ‘ray tracing’ we mean tracing the path of a ray from a light source through a scene.
Fresnel – pronounced ‘fre-nel’, it is the amount of view-dependent reflectance on a surface. A great example is rim lighting, but fresnel effects are also used to fake a fuzzy look, x-ray effects, light reflected off the ocean, etc.
Aerial Perspective – how things get lighter as they recede into the distance; the more air, or ‘atmosphere’, between you and the object (mountain, building, etc.) the lighter it is visually. I grew up in Florida; we don’t have much of this effect at all due to elevation and clear skies.
High Dynamic Range Imaging (HDR) – this just means you are dealing with more light data than a normal image; an HDR image has a larger range of light information stored in it. With today’s prosumer DSLRs it is possible to capture 14bit images that theoretically contain ’13-14 stops’ of linear data. A digital example is the sky in the game Crysis: it was a dynamic HDR skydome, meaning the game engine was computing more light than could be displayed on the monitor. In these situations, the data is tonemapped to create visually interesting lighting.
Tone Mapping – how you ‘map’ one set of colors onto another; in games it generally means ‘mapping’ high dynamic range data into a limited dynamic range, like a TV set or monitor. This can be done by ‘blooming’ areas that are overbright, among other techniques.
Bloom – ‘bloom’ is the gradient fringe you see around really brightly lit areas in an image, like a window to a bright sky seen from inside a dark room.
Albedo – the extent to which a surface diffusely reflects light from the sun
Afterimage Effect – this belongs to a group of effects I call ‘accumulation-buffer effects’. The afterimage effect visually ‘burns in’ the brightest parts of a previous image, simulating the effect our eyes have when adjusting to bright light.
Deferred Rendering – a type of rendering where you render parts of the image to framebuffer storage instead of rendering directly to the pixel output. Deferred rendering generally allows you to use many more light sources in real-time rendering. One problem deferred rendering has is that it cannot properly deal with transparent items.
Scanline Rendering – a very old technique where you render one line of pixels after another. Pixar’s RenderMan is a scanline renderer, but the Nintendo DS also uses scanline rendering.
Skylight (or Diffuse Sky Radiation) – the fancy term for light that comes not from the sun, but is reflected from the sky. It is what makes sunlight on earth inherently blue, or orange.
Scattering (including Sub-Surface Scattering) – how particles are ‘scattered’, or deviate from an original path. In sub-surface scattering, light enters an object and bounces around inside (sub-surface). This leads to things like the orange/red color of your ear when there is a light behind it.
Participating Media – the way a group of particles can affect light transport through their volume: not only reflecting or refracting light, but scattering it. Glass, water, fog, and smoke are all participating media.
Ambient Occlusion – a shading effect where occluded areas are darkened, much like the access maps of the old days; cracks and areas where light would have a hard time ‘getting into’ are shaded.
Screen Space Ambient Occlusion – a rendering technique that fakes ambient occlusion with some z-buffer trickery. By taking the distances between objects in a scene, the algorithm generates approximated occlusion data in real time. (First used in Crysis!)
Global Illumination – a way of rendering where you measure light bounces; as the light bounces around a scene, it generates indirect lighting. An example of this is how a red ball next to a white wall will cast red light onto the wall.
Z-Buffer – where 3d depth information is stored in a 2d image. A 16bit z-buffer has 65536 levels of depth, while an 8bit one has 256. Items on the same level cause flickering, or ‘z-fighting’.
Z-Fighting – this occurs when polygons have similar z-buffer values; it is a term you should know when dealing with virtual cameras, not real ones. You can see this flickering when you create two co-planar planes on top of each other in a 3d app. To eliminate z-fighting you can use 24 or 32bit z-buffers.
Frustum – everything in the camera’s field of view; generally the entire volume that the camera can see.
Environment Reflection – faking a reflection by applying an image to a surface; this can be a spherical map, cube map, etc. Some environment reflections (cubemaps) can be generated at runtime as you move an object around (most notably in racing games).
Cubic Environment Mapping – a way of generating an environment reflection map with six sides that are mapped onto a cube, recreating the reflection of the environment around an object.
SkyBox – creating a ‘sky’ in a virtual scene by enclosing the entire scene in a large box with images on 5 sides.
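Some of these terms boil down to one-liners you can play with. Fresnel, for instance, is commonly approximated in shaders with Schlick's formula (a standard industry approximation; the function below is my illustrative sketch, not from any particular engine):

```python
def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation of Fresnel reflectance: ramps from the base
    reflectance f0 (looking straight at the surface) up to 1.0 at grazing angles."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel(1.0, 0.04))  # head-on: ~0.04, just the base reflectance
print(schlick_fresnel(0.0, 0.04))  # grazing angle: ~1.0, the fully reflective rim
```

That view-dependent ramp is exactly the rim-lighting behavior described above: the reflectance depends on the angle of incidence, not on the surface color.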
Here are some example sentences:
Artist: This place here where the light shines on the surface is too small; it makes my object look too wet.
Technical Artist: The surface is too glossy; as a result, the area of specular reflection where you see the light is very small.
Artist: Like in the photos we took, things in the distance should be lighter; in the engine, can we make things lighter as they get farther away?
Technical Artist: As things recede into the distance, aerial perspective causes them to become lighter; to achieve this we should increase the environment fog slightly.
Taking Photographic Reference
I feel every technical artist who assesses visual output should own a proper digital single-lens reflex camera (DSLR), no matter what quality or how old. It will force you to understand and work with many of the terms above. The artist in you will want to take good pictures, and this is much more than good composition: you are essentially recording light. You will need to learn a lot to be able to properly meter and record light in different situations. Because it’s digital, you will be able to iterate and learn fast, recognizing cause-and-effect relationships the same way we do with the realtime feedback of scripting languages in 3d apps.
I grew up in the US though I now live in Europe. This is just a short post about something that I find really unfair and frustrating: International Pricing of High End Tech Items. Let’s check out the new Nikon D700:
It certainly would seem that Germany is getting the short end of that stick. In many cases, people in Europe could fly to the US, buy electronics, and come back for the price of getting them here. And many people do.
Not to mention, many companies have better warranties in the US, where the market is more competitive. (Example: many Nikon cameras and lenses have a 5-year warranty in the US and a 2-year warranty here in Germany.)
When the Wii came out in the US, it was 250usd; when it came out here, it was 250eur. The euro was riding high: in the US it was impossible to get a Wii, however, they were readily available in all stores here, leading many to speculate that it was because Nintendo was making 400usd per Wii (250eur) in Europe. This isn’t just about inflation; some items are priced 1:1 or a little over (3dsMax below), but others are more ridiculous, Photoshop for example.
The above is just completely inane. Some companies will tell you they have to charge a premium on products in Europe because it costs extra to localize them. But come on.. Stuff like the above is ridiculous.
When you start looking at really high-end tech, items that have only one distributor in Europe but many in the US (like motion capture systems), the difference in pricing, due to 1) inflation, 2) companies simply charging more in Europe, and 3) single distributors in a region having no competition, makes it prohibitively expensive (we’re talking tens of thousands of dollars of price difference). It would be cheaper to set up a company in the US just to make these purchases, and I am sure people do.
But seriously, Adobe, you should be ashamed of yourselves.
Facial motion capture stabilization is basically where you isolate the movement of the face from the movement of the head. This sounds pretty simple, but it is actually a really difficult problem. In this post I will talk about the general process and give you an example facial stabilization python script.
Disclaimer: The script I have written here is loosely adapted from a MEL script in the book Mocap for Artists, and not something proprietary to Crytek. This is a great book for people of all experience levels, and has a chapter dedicated to facial mocap. Lastly, this script is not padded out or optimized.
To follow this you will need some facial mocap data; some is freely downloadable at www.mocap.lt. Grab the FBX file.
Get at least 3 markers on the actor that do not move when they move their face. These are called ‘stabilization markers’ (STAB markers). You will use these markers to create a coordinate space for the head, so it is important that they not move. STAB markers are commonly placed on the left and right temples and the nose bridge. Using a headband and creating virtual markers from multiple solid left/right markers works even better; headbands move, and it’s good to keep this in mind. Above you see a special head rig used on Kong to create stable markers.
It is a good idea to write some tools to help you out here. At work I have written tools to parse a performance and tell me the most stable markers at any given time, if you have this data, you can also blend between them.
Load up the facial mocap file you have downloaded, it should look something like this:
In the data we have, you can delete the root and the headband markers; 1-RTMPL, 1-LTMPL, and 1-MNOSE can all be considered STAB markers.
As you can see, mocap data is just a bunch of translating points. So what we want to do is create a new coordinate system that has the motion of the head, and then use this to isolate the facial movement.
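The idea in miniature (a pure-Python sketch with hypothetical helper names, separate from the MotionBuilder code that follows): given the head's orthonormal axes and origin, a marker is stabilized by expressing it in that frame. Because the inverse of an orthonormal basis is just its transpose, three dot products do the job.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_head_space(p, x_axis, y_axis, z_axis, origin):
    """Express world-space point p in the head's local frame,
    removing the head's motion and leaving only facial movement."""
    d = tuple(pi - oi for pi, oi in zip(p, origin))
    # project onto each basis axis (transpose of the orthonormal frame)
    return (dot(d, x_axis), dot(d, y_axis), dot(d, z_axis))

# a marker 1 unit in front of a head sitting at (5, 0, 0), axes aligned to world
marker = (5.0, 0.0, 1.0)
local = to_head_space(marker, (1, 0, 0), (0, 1, 0), (0, 0, 1), (5.0, 0.0, 0.0))
print(local)  # -> (0.0, 0.0, 1.0): head position removed, only the face offset remains
```

Do this every frame, with the frame built from the STAB markers, and the markers stop swimming around with the head.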
We create a library, ‘myLib’, that we load into MotionBuilder’s python environment. This is what does the heavy lifting; I say this because you don’t want to do things like send the position of every marker, every frame, to your external app via telnet. I also load pyEuclid, a great vector library, because I didn’t feel like writing my own vector class. (MBuilder has no vector class.)
So we will now create our own library that sits inside MBuilder; this will essentially be a ‘toolkit’ that we communicate with from the outside. Your ‘myLib’ can be called anything, but it should be the place you store the functions that do the real processing jobs; you will feed them from the external UI later. The first thing you will need inside the MB python environment is something to cast FBVector3d types into pyEuclid. This is fairly simple:
#casts point3 strings to pyEuclid vectors
def vec3(point3):
    return Vector3(point3[0], point3[1], point3[2])

#casts a pyEuclid vector to FBVector3d
def fbv(point3):
    return FBVector3d(point3.x, point3.y, point3.z)
Next is something that will return an FBModelList of models from an array of names, this is important later when we want to feed in model lists from our external app:
#returns an array of models when given an array of model names
#useful with external apps/telnetlib ui
def modelsFromStrings(modelNames):
    output = []
    for name in modelNames:
        output.append(FBFindModelByName(name))
    return output
Now, if you were to take these snippets and save them as a file called myLib.py in your MBuilder directory tree (MotionBuilder75 Ext2\bin\x64\python\lib), you can load them into the MBuilder environment. (You should have also placed pyEuclid here)
It’s always good to mock up code in telnet because, unlike the python console in MBuilder, it supports copy/paste, etc.
In the image above, I get the position of a model in MBuilder; it returns an FBVector3d. I then import myLib and pyEuclid and use our function above to ‘cast’ the FBVector3d to a pyEuclid vector. It can now be added, subtracted, multiplied, and more: all things that are not possible with the default MBuilder Python tools. Our other function, ‘fbv()‘, casts pyEuclid vectors back to FBVector3d so that MBuilder can read them.
So we can now do vector math in MotionBuilder! Next we will add some code to our ‘myLib’ that stabilizes the face.
Adding Stabilization-Specific Code to ‘myLib’
One thing we will need to do a lot is generate ‘virtual markers’ from the existing markers. To do this, we need a function that returns the average position of however many vectors (marker positions) it is fed.
```python
#returns average position of an FBModelList as FBVector3d
def avgPos(models):
    mLen = len(models)
    if mLen == 1:
        return models[0].Translation
    total = vec3(models[0].Translation)
    for i in range(1, mLen):
        total += vec3(models[i].Translation)
    avgTranslation = total/mLen
    return fbv(avgTranslation)
```
Here is an example of avgPos() in use:
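If you just want to try the averaging logic outside MotionBuilder, here is a plain-Python equivalent (tuples instead of FBVector3d; not the MBuilder API):

```python
# component-wise mean of a list of (x, y, z) positions
def avg_pos(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / float(n) for i in range(3))

# a virtual marker at the centroid of three stabilization markers
center = avg_pos([(0, 0, 0), (2, 0, 0), (1, 3, 0)])
```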
Now onto the stabilization code:
```python
#stabilizes face markers, input 4 FBModelList arrays, leaveOrig for leaving original markers
def stab(right,left,center,markers,leaveOrig):

    pMatrix = FBMatrix()
    lSystem = FBSystem()
    lScene = lSystem.Scene
    newMarkers = []

    def faceOrient():
        Rpos = vec3(avgPos(right))
        Lpos = vec3(avgPos(left))
        Cpos = vec3(avgPos(center))

        #build the coordinate system of the head
        xVec = (Cpos - Rpos)
        xVec = xVec.normalize()
        zVec = ((Cpos - vec3(faceAttach.Translation)).normalize()).cross(xVec)
        zVec = zVec.normalize()
        yVec = xVec.cross(zVec)
        yVec = yVec.normalize()
        facePos = (Rpos + Lpos)/2

        pMatrix[0] = xVec.x
        pMatrix[1] = xVec.y
        pMatrix[2] = xVec.z

        pMatrix[4] = yVec.x
        pMatrix[5] = yVec.y
        pMatrix[6] = yVec.z

        pMatrix[8] = zVec.x
        pMatrix[9] = zVec.y
        pMatrix[10] = zVec.z

        pMatrix[12] = facePos.x
        pMatrix[13] = facePos.y
        pMatrix[14] = facePos.z

        faceAttach.SetMatrix(pMatrix, FBModelTransformationMatrix.kModelTransformation, True)
        lScene.Evaluate()

    #keys the translation and rotation of an animNodeList
    def keyTransRot(animNodeList):
        for lNode in animNodeList:
            if (lNode.Name == 'Lcl Translation'):
                lNode.KeyCandidate()
            if (lNode.Name == 'Lcl Rotation'):
                lNode.KeyCandidate()

    Rpos = vec3(avgPos(right))
    Lpos = vec3(avgPos(left))
    Cpos = vec3(avgPos(center))

    #create a null that will visualize the head coordsys, then position and orient it
    faceAttach = FBModelNull("faceAttach")
    faceAttach.Show = True
    faceAttach.Translation = fbv((Rpos + Lpos)/2)
    faceOrient()

    #create new set of stabilized nulls, non-destructive, this should be tied to 'leaveOrig' later
    for obj in markers:
        new = FBModelNull(obj.Name + '_stab')
        newTran = vec3(obj.Translation)
        new.Translation = fbv(newTran)
        new.Show = True
        new.Size = 20
        new.Parent = faceAttach
        newMarkers.append(new)

    lPlayerControl = FBPlayerControl()
    lPlayerControl.GotoStart()
    FStart = int(lPlayerControl.ZoomWindowStart.GetFrame(True))
    FStop = int(lPlayerControl.ZoomWindowStop.GetFrame(True))

    animNodes = faceAttach.AnimationNode.Nodes

    for frame in range(FStart, FStop):
        #build proper head coordsys
        faceOrient()

        #update stabilized markers and key them
        for m in range(0, len(newMarkers)):
            markerAnimNodes = newMarkers[m].AnimationNode.Nodes
            #snap the stab null to the original marker's global position;
            #its keyed local translation is then the stabilized motion
            newMarkers[m].SetVector(fbv(vec3(markers[m].Translation)))
            keyTransRot(markerAnimNodes)
        keyTransRot(animNodes)
        lPlayerControl.StepForward()
```
We feed our ‘stab‘ function FBModelLists of right, left, and center stabilization markers, and it creates virtual markers from these groups. ‘markers’ is the list of all markers to be stabilized. ‘leaveOrig‘ is an option I usually add to allow for non-destructive use; in this example I have simply made the function always leave the originals, as I favor that, so the option does nothing, but you could wire it up. With the original markers left, you can immediately see if there was an error in your script. (new motion should match orig)
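The heart of the function is the orthonormal basis that faceOrient() builds. As a sanity check you can run outside MotionBuilder, here is the same normalize/cross construction in plain Python (pyEuclid does this work for us inside MBuilder; the helper names here are mine):

```python
import math

def norm(v):
    l = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0]/l, v[1]/l, v[2]/l)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

# mirrors faceOrient(): x from right->center, z perpendicular to x and the
# attach->center direction, y completing the right-handed frame
def head_basis(r_pos, l_pos, c_pos, attach_pos):
    x = norm(sub(c_pos, r_pos))
    z = norm(cross(norm(sub(c_pos, attach_pos)), x))
    y = norm(cross(x, z))
    return x, y, z

x, y, z = head_basis((-1, 0, 0), (1, 0, 0), (0, 0, 1), (0, 0, 0))
dot = lambda a, b: sum(a[i]*b[i] for i in range(3))
# the three axes come out unit length and mutually perpendicular
```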
Creating an External UI that Uses ‘myLib’
Earlier I mentioned Creating Interactive MotionBuilder User Interface Tools, where I explain how to screen-scrape/use the telnet Python Remote Server to create an interactive external UI that floats as a window in MotionBuilder itself. I also use the libraries mentioned in the above article.
The code for the facial stabilization UI I have created is here: [stab_ui.py]
I will now step through code snippets pertaining to our facial STAB tool:
This returns a list of strings that are the currently selected models in MBuilder, which is the main thing our external UI does: the user interactively chooses the right, left, and center markers, then all the markers that will be stabilized.
At the left here you see what the UI looks like. To add some feedback to the buttons, you can make them change to reflect that the user has selected markers. We do so by changing the button text.
This also stores all the markers the user has chosen in the variable ‘rStabMarkers‘. Once we have every group the user has chosen, we need to send them to ‘myLib‘ in MBuilder so that it can run our ‘stab‘ function on them. This happens when the user clicks ‘Stabilize Markerset‘.
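As a sketch of what the button handler might pipe over (hypothetical; the exact command your UI builds depends on your mbPipe setup), the command string can be assembled from the four name lists like this:

```python
# builds the Python statement that would be sent over telnet to MBuilder;
# the list arguments are embedded via repr() so they parse on the far side
def build_stab_command(r, l, c, markers, leave_orig=True):
    args = ", ".join("myLib.modelsFromStrings({!r})".format(group)
                     for group in (r, l, c, markers))
    return "myLib.stab({}, {})".format(args, leave_orig)

cmd = build_stab_command(['1-RTMPL'], ['1-LTMPL'], ['1-MNOSE'], ['1-RBRI'])
# cmd is the one-line call you would hand to mbPipe()
```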
Above, we use ‘modelsFromStrings‘ to feed ‘myLib’ the names of the selected models. When you run this on thousands of frames, it will hang for up to a minute or two while it does all the processing; I discuss optimizations below. Here is a video of what you should have when stabilization is complete:
Kill the keyframes on the root (faceAttach) to remove head motion
Remember: Your stabilization will only be as good as your STAB markers. It really pays off to create tools to check marker stability.
Sometimes the terminal/screen scraping runs into issues. The mbPipe function can be padded out a lot and made more robust; what is here was just an example. If you look at the external Python console, you can see exactly what mbPipe sends to MBuilder and what it receives back through the terminal:
Sending>>> selectedModels = FBModelList()
Sending>>> FBGetSelectedModels(selectedModels)
Sending>>> for item in selectedModels: print item.Name
['Subject 1-RH1', 'Subject 1-RTMPL']
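One cheap way to pad mbPipe out, for example, is to filter the echoed commands and prompts out of the raw buffer before using the replies. A small sketch (the 'Sending>>>' prefix matches the console output above; your terminal's echo format may differ):

```python
# strips echoed 'Sending>>>' lines and bare prompts from a raw telnet buffer
# so only MotionBuilder's actual replies are returned
def parse_replies(raw):
    replies = []
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith('Sending>>>') or line.startswith('>>>'):
            continue
        replies.append(line)
    return replies

raw = ("Sending>>> for item in selectedModels: print item.Name\r\n"
       "Subject 1-RH1\r\nSubject 1-RTMPL\r\n>>> ")
names = parse_replies(raw)  # just the two model names survive
```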
All of the above can be padded out and optimized. For instance, you could try to do everything without a single lPlayerControl.StepForward() or lScene.Evaluate(), but this takes a lot of MotionBuilder/programming know-how; it involves using only the keyframe data to generate your matrices, positions, etc., and never querying a model.
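To illustrate that idea (a toy sketch, not MotionBuilder code: a curve is just a list of (frame, value) keys), sampling the keyframe data yourself looks like this; it is the essence of skipping per-frame scene evaluation:

```python
# linearly interpolates a sorted list of (frame, value) keys at any frame,
# clamping outside the keyed range: no scene stepping or evaluation needed
def sample_curve(keys, frame):
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / float(f1 - f0)
            return v0 + t * (v1 - v0)

keys = [(0, 0.0), (10, 5.0), (20, 0.0)]
# sample_curve(keys, 5) -> 2.5
```

Doing this for every marker channel lets you build your matrices and positions straight from the keys, which is why it can be dramatically faster than stepping the player control.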
This is a video from a company called Immersive Media. It’s a 360 degree streaming video you can pan around, and even zoom in. Awesome stuff, their hardware even does realtime stitching, and they have an underwater housing. Check out the site for more vids, they have been to some great locations.
At work we got the Casio EX-F1 for animation reference. It's a really great, cheap solution for those looking to record high-speed reference (300/600/1200 fps) or HD (1080) video. Here are some videos I took a few weeks ago: