Stumbling Toward 'Awesomeness'

A Technical Art Blog

Wednesday, September 17, 2008

Making of the Image Metrics ‘Emily’ Tech Demo

I have seen some of the other material in the SIGGRAPH Image Metrics presskit posted online [Emily Tech Demo] [‘How To’ video], but not the video that shows the making of the Emily tech demo. So here’s that as well:

At the end, there’s a quote from Peter Plantec about how Image Metrics has finally ‘crossed the uncanny valley’, but seriously, am I the only one who thinks the shading is a bit off? And besides that, what’s the point of laying a duplicate of a face directly on top of the original in a video? Shouldn’t they have shown her talking in a different setting? Maybe shown how they can remap the animation to a different face? There is no reason not to just use the original plate in this example.

posted by Chris at 4:44 PM  

Wednesday, September 17, 2008

Visualizing MRI Data in 3dsMax

Many of you might remember the fluoroscopic shoulder carriage videos I posted on my site about 4 years ago. I always wanted to do a sequence of MRIs of the arm moving around. Thanks to Helena, an MRI tech that I met through someone, I did just that. I was able to get ~30 mins of idle time on the machine while on vacation.

The data I got back was basically image data: slices along an axis. I wanted to visualize this data in 3D, but the hospital did not have software to do that. I really wanted to see the muscles and bones posed in three-dimensional space as the arm went through different positions, so I decided to write some visualization tools myself in maxscript.

At left is a 512×512 MRI of my shoulder with the arm raised (image downsampled to 256, animation on 5’s). The MRI data has some ‘wrap around’ artifacts: it was a somewhat small MRI (3 tesla) and I am a big guy, so when things are close to the ‘wall’ of the machine they pick up these artifacts, and we wanted to see my arm. I am uploading the raw data for you to play with; you can download it from here: [data01] [data02]

Volumetric Pixels

Above is an example of a 128×128, 10-slice reconstruction with greyscale cubes.

I wrote a simple tool called ‘mriView’. I will explain how I created it below and you can download it and follow along if you want. [mriView]

The first thing I wanted to do was create ‘volumetric pixels’ or ‘voxels’ from the data. I decided to do this by going through all the images, culling what I didn’t want, and creating greyscale cubes out of the rest. There is a great example in the maxscript docs called ‘How To … Access the Z-Depth channel’ which I picked some pieces from; it basically shows you how to efficiently read an image and generate 3d data from it.

But first we need to get the data into 3dsMax. I needed to load sequential images, and I decided the easiest way to do this was to load AVI files. Here is an example of loading an AVI file and treating it like a multi-part image (with comments):

on loadVideoBTN pressed do
     (
          --ask the user for an avi
          f = getOpenFileName caption:"Open An MRI Slice File:" filename:"c:/" types:"AVI(*.avi)|*.avi|MOV(*.mov)|*.mov|All|*.*|"
          mapLoc = f
          if f == undefined then (return undefined)
          else
          (
               map = openBitMap f
               --get the width and height of the video
               heightEDT2.text = map.height as string
               widthEDT2.text = map.width as string
               --get how many frames the video has
               vidLBL.text = (map.numFrames as string + " slices loaded.")
               loadVideoBTN.text = getfilenamefile f
               imageLBL.text = ("Full Image Yield: " + (map.height*map.width) as string + " voxels")
               slicesEDT2.text = map.numFrames as string
               threshEDT.text = "90"
          )
          updateImgProps()
     )

We now have the height in pixels, the width in pixels, and the number of slices. This is enough data to begin a simple reconstruction.

We will do so by visualizing the data with cubes, one cube per pixel that we want to display. However, be careful: a simple 256×256 video is already potentially 65,536 cubes per slice! In the tool, you can see that I fill in the original image values, but allow the user to crop out a specific area.
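Just to get a feel for the numbers before building anything, a quick sanity check like the sketch below (using the ‘map’ value opened by the loader above) prints the worst-case cube count. This is only an illustration, not part of the posted tool:

--worst case if every pixel of every slice became a cube
--(map is the bitmap opened by the loader above)
format "Worst case: % voxels\n" (map.height * map.width * map.numFrames)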

Below we go through each slice, then row by row, checking pixel by pixel for ones that have a grey value above a threshold (what we want to see); when we find one, we make a box in 3d space:

height = 0.0
updateImgProps()
 
--this loop iterates through all slices (frames of video)
for frame = (slicesEDT1.text as integer) to (slicesEDT2.text as integer) do
(
     --seek to the frame of video that corresponds to the current slice
     map.frame = frame
     --loop that traverses y, which corresponds to the image height
     for y = mapHeight1 to mapHeight2 do
     (
          voxels = #()
          currentSlicePROG.value = (100.0 * y / totalHeight)
          --read a line of pixels
          pixels = getPixels map [0,y-1] totalWidth
          --loop that traverses x, the line of pixels across the width
          for x = 1 to totalWidth do
          (
               if (greyscale pixels[x]) < threshold then
               (
                    --if you are not a color we want to store: do nothing
               )
               --if you are a color we want, we will make a cube with your color in 3d space
               else
               (
                    b = box width:1 length:1 height:1 name:(uniqueName "voxel_")
                    b.pos = [x,-y,height]
                    b.wirecolor = color (greyscale pixels[x]) (greyscale pixels[x]) (greyscale pixels[x])
                    append voxels b
               )
          )
          --garbage collection is important on large datasets
          gc()
     )
     --increment the height to bump your cubes to the next slice
     height+=1
     progLBL.text = ("Slice " + (height as integer) as string + "/" + (totalSlices as integer) as string + " completed")
     slicePROG.value = (100.0 * (height/totalSlices))
)

Things really start to choke when you use cubes, mainly because you are generating so many objects in the scene. I added the option to merge all the cubes row by row, which sped things up and helped memory, but this was still not really the visual fidelity I was hoping for…
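For reference, the row merge can be done by attaching each row’s boxes into a single Editable Poly once the row is finished. Here is a rough sketch of the idea (not the exact code from mriView; it assumes the ‘voxels’ array built in the loop above):

--attach every box in this row to the first one, leaving a single object per row
if voxels.count > 1 then
(
     rowMesh = convertToPoly (voxels[1])
     for i = 2 to voxels.count do polyop.attach rowMesh voxels[i]
     rowMesh.name = uniqueName "voxelRow_"
)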

Point Clouds and ‘MetaBalls’

I primarily wanted to generate meshes from the data, so the next thing I tried was making a point cloud and then using that to generate a ‘BlobMesh’ (metaball) compound geometry type. In the example above, you see the head of my humerus and the tissue connected to it. Below is the code; it is almost simpler than the boxes, it just takes some finessing of Editable Poly. I have only commented the changes:

I make a plane and then delete all the verts to give me a ‘clean canvas’ of sorts; if anyone knows a better way of doing this, let me know:

p = convertToPoly(Plane lengthsegs:1 widthsegs:1)
p.name = "VoxelPoint_dataSet"
polyop.deleteVerts $VoxelPoint_dataSet #(1,2,3,4)

That, and where we created a box before, we now create a point:

polyop.createVert $VoxelPoint_dataSet [x,-y,height]
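Putting the snippets together, the inner pixel loop from the cube version only needs that one change. Here is a rough sketch of how it ends up (this just combines the pieces above; it is not verbatim from mriView):

for x = 1 to totalWidth do
(
     if (greyscale pixels[x]) >= threshold then
     (
          --instead of building a box, drop a vert into the point cloud
          polyop.createVert $VoxelPoint_dataSet [x,-y,height]
     )
)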

This can get really time and resource intensive, so I would let some of these runs go overnight. This was pretty frustrating because it slowed iteration time down a lot, and the blobMesh modifier was very slow as well.

Faking Volume with Transparent Planes


I was talking to Marco at work (Technical Director) and showing him some of my results, and he asked me why I didn’t just try using transparent slices. I told him I had thought about it, but I really know nothing about the material system in 3dsMax, much less its maxscript exposure. He said that was a good reason to try it, and I agreed.

I started by making one material per slice. This worked well until I realized that the 3dsMax material editor has a limit of 24 material slots. Instead of fixing this, they added ‘multi-materials’, which can have n sub-materials, so I adjusted my script to use sub-materials:

--here we set the number of sub-materials to the number of slices
meditMaterials[matNum].materialList.count = totalSlices
--you also have to properly set the materialIDList
for m=1 to meditMaterials[matNum].materialList.count do
(
     meditMaterials[matNum].materialIDList[m] = m
)
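For completeness, here is roughly how that multi-material could be created and dropped into a material editor slot in the first place (a sketch, not verbatim from mriView; ‘MRI_slices’ is just an illustrative name):

--create a multi-material and assign it to the chosen material editor slot
meditMaterials[matNum] = multiMaterial name:"MRI_slices"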

Now we iterate through, generating the planes, assigning sub-materials to them with the correct frame of video for the corresponding slice:

--create a plane for this slice, stacked one unit apart along z
p = plane name:("slice_" + frame as string) pos:[0,0,frame] width:totalWidth length:totalHeight
p.lengthsegs = 1
p.widthsegs = 1
--assign the sub-material that corresponds to this slice
p.material = meditMaterials[matNum][frame]
p.castShadows = off
p.receiveshadows = off
--two-sided so the slice is visible from behind, self-illuminated so lighting does not darken it
meditMaterials[matNum].materialList[frame].twoSided = on
meditMaterials[matNum].materialList[frame].selfIllumAmount = 100
meditMaterials[matNum].materialList[frame].diffuseMapEnable = on
--use the MRI video as the diffuse map, starting on this slice's frame
newMap = meditMaterials[matNum].materialList[frame].diffuseMap = Bitmaptexture filename:mapLoc
newmap.starttime = frame
newmap.playBackRate = 1
--use the same frame as the opacity map so dark pixels become transparent
newmap = meditMaterials[matNum].materialList[frame].opacityMap = Bitmaptexture fileName:mapLoc
newmap.starttime = frame
newmap.playBackRate = 1
--show the texture in the viewport
showTextureMap p.material on
mat += 1

This was very surprising: it not only runs fast, it looks great. Of course you are generating no geometry, but it is a great way to visualize the data. The example below is a 512×512 MRI of my shoulder (arm raised) rendered in realtime. The only problem I had was an alpha-test render error when the slices are viewed directly from the bottom, but this looks to be a 3dsMax issue.


I rendered the slices cycling from bottom to top. In one MRI the arm is raised; in the other, the arm is lowered. The results are surprisingly decent. You can check that video out here: [shoulder_carriage_mri_xvid.avi]


You can also layer multiple slices together. Above, I have isolated the muscles and soft tissue from the skin, cartilage, and bones; I did this by looking for pixels in certain luminance ranges. In the image above I am ‘slicing’ away the white layer halfway down the torso, and below you can see a video of this in realtime as I search for the humerus; this is a really fun and interesting way to view it:
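The luminance-range idea is just a small change to the threshold test shown earlier: instead of a single cutoff you keep pixels inside a band. A rough sketch (lowThresh and highThresh are hypothetical values, not fields in the posted tool):

--keep only pixels whose grey value falls inside a luminance band
g = greyscale pixels[x]
if g >= lowThresh and g <= highThresh then
(
     polyop.createVert $VoxelPoint_dataSet [x,-y,height]
)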

Where to Go From Here

I can now easily load up any of the MRI data I have and view it in 3d, though I would like to be able to better create meshes from specific parts of the data, in order to isolate individual muscles or bones. To do this I need to allow the user to ‘pick’ a color from part of the image, and then use that to isolate just those pixels and remesh just that part. I would also like to add something that allows you to slice through the planes from any axis. That shouldn’t be difficult; it will just take more time.

posted by Chris at 3:48 PM  

Friday, September 5, 2008

Talking about Light Transport

EDIT: I would like this to be a ‘living document’ of sorts, please send me terms and definitions and feel free to correct mine!

Whether you’re a technical artist in games or film, when trying to create realistic scenes and characters, the more you know about how light works and interacts with surfaces in the world, and the more reference you have of it, the better you can explain why you think an image looks ‘off’.

You are a technical artist. You need to be able to communicate with technical people using terminology they understand. We often act as bridges between artists and programmers, so it is very important for us to be able to communicate with both appropriately.

Light transport is basically the big nerd word for how light gets from one place to another, and scattering is usually how surfaces interact with light.

You can see something in a rendered image and know it looks ‘wrong’, but it’s important to understand why it looks wrong, and be able to accurately explain to the programming team how it can be improved upon. To do this you should be able to:

1) present examples of photographic reference

2) communicate with general terms that others can understand

General Terminology

The following terms come from optics, photography, and art; you should not only understand them, but use them when explaining why something does not look ‘right’. I will give both the technical term and my shortest approximation:

Specular Reflection – sharp reflection of light from a surface that somewhat retains an image (eg. glossy)
Diffuse Reflection – uneven reflection of light from a surface that does not retain the image (eg. matte)
Diffuse Interreflection – light reflected off other diffuse objects
Diffraction – what happens to a wave when it hits an obstacle; this could be an ocean wave hitting a jetty, or a light wave hitting a grate.
Depth of Field – the area in an image that is in focus
Bokeh – the quality of the blurry, out-of-focus areas in a photo (e.g. the background behind a subject)
Chromatic Aberration – the colored fringes around an object or light refracted through an object; it happens because certain wavelengths of light get bent ‘out-of-sync’. I usually think of it as an old projector or monitor that is misaligned; that’s what this effect looks like.
Caustics – light rays shined through a refractive object onto another surface
Angle of Incidence – this is actually the angle something is off from ‘straight on’, but we mainly use this when talking about shaders or things that are view-dependent. If you were to draw a line from your eyes to a surface, the angle between this line and the surface’s ‘normal’ is the ‘angle of incidence’. Car paint whose color changes as you walk around it is a good example: it changes based on the angle you see it from. Just remember, your head doesn’t have to move; the object can move, changing the angle between your sightline and the surface.
Refractive Index (Refraction) – how light’s direction changes when moving through an object. The refractive index of water is 1.3, and glass has a higher refractive index at 1.4 to 1.6.
Reflection – the changing of direction of light, usually casting light onto something, like the camera or our eyes
Glossiness – the ability of a surface to reflect specular light; the smaller and tighter the area of specular reflection, the ‘glossier’ something usually looks
Ray – think of a ray as a single beam of light; a single particle. This particle moves in a ‘ray’, when we talk about ‘ray tracing’ we mean tracing the path of a ray from a light source through a scene.
Fresnel – pronounced ‘fre-nel’, it is the amount of view-dependent reflectance on a surface. A great example is rim lighting, but fresnel effects are also used to fake a fuzzy look, x-ray effects, light reflected off the ocean, etc. (there is a small sketch of this after the list)
Aerial Perspective – this is how things get lighter as they recede into the distance; the more air, or ‘atmosphere’, between you and the object (mountain, building, etc.) the lighter it is visually. I grew up in Florida; we don’t have much of this effect at all due to the elevation and clear skies.
High Dynamic Range Imaging (HDR) – this just means you are dealing with more light data than a normal image. An HDR image has a larger range of light information stored in it. With today’s prosumer DSLRs it is possible to capture 14bit images that theoretically contain ’13-14 stops’ of linear data. A digital example could be the sky in the game Crysis: it was a dynamic HDR skydome, which meant the game engine was computing more light than could be displayed on the monitor. In these situations, this data is tonemapped to create visually interesting lighting situations.
Tone Mapping – this is how you can ‘map’ one set of colors onto another; in games it generally means ‘mapping’ high dynamic range data into a limited dynamic range, like a TV set or monitor. This can be done by ‘blooming’ areas that are overbright, among other techniques.
Bloom – ‘bloom’ is the gradient fringe you see around really brightly lit areas in an image, like a window to a bright sky seen from inside a dark room.
Albedo – the extent to which a surface diffusely reflects light from the sun.
Afterimage Effect – this belongs to a group of effects I call ‘accumulation-buffer effects’. The after-image effect visually ‘burns in’ the brightest parts of a previous image, simulating the effect our eyes have when adjusting to bright light.
Deferred Rendering – this is a type of rendering where you render parts of the image to framebuffer storage instead of rendering directly to the pixel output. Deferred rendering generally allows you to use many more light sources in real-time rendering. One problem deferred rendering has is that it cannot properly deal with transparent items.
Scanline Rendering – Scanline rendering is a very old technique where you render one line of pixels after another. Pixar’s RenderMan is a scanline renderer, and the Nintendo DS also uses scanline rendering.
Skylight (or Diffuse Sky Radiation) – this is the fancy term for light that comes not from the sun directly, but is reflected from the sky. It is what gives outdoor light on earth its inherently blue (or, at sunset, orange) tint.
Scattering (including Sub-Surface Scattering) – this just means how particles are ‘scattered’, or deviate from an original path. In sub-surface scattering, light enters an object and bounces around inside (sub-surface). This leads to things like the orange/red color of your ear when there is a light behind it.
Participating Media – the way a group of particles can affect light transport through their volume: not only reflecting or refracting light, but scattering it. Things like glass, water, fog and smoke are all participating media.
Ambient Occlusion – this is a shading effect where occluded areas are shaded, much like access maps of the old days, cracks and areas where light would have a hard time ‘getting into’ are shaded.
Screen Space Ambient Occlusion – a rendering technique that fakes ambient occlusion with some z-buffer trickery. By taking the distances between objects in a scene, the algorithm generates approximated occlusion data in real time. (first used on Crysis!)
Global Illumination – a way of rendering where you measure light bounces; as light bounces around a scene it generates indirect lighting. An example of this would be how a red ball next to a white wall will cast red light onto the wall.
Z-Buffer – where 3d depth information is stored in a 2d image. A 16bit z-buffer has 65,536 levels of depth, while an 8bit one has 256. Items on the same level cause flickering, or ‘z-fighting’.
Z-Fighting – this occurs when polygons have similar z-buffer values; it is a term you should know when dealing with virtual cameras, not real ones. You can see this flickering when you create two co-planar planes on top of each other in a 3d app. To eliminate z-fighting you can use 24 or 32bit z-buffers.
Frustum – everything in the camera’s field of view; generally the entire volume that the camera can see.
Environment Reflection – a way of faking a reflection by applying an image to a surface; this can be a spherical map, cube map, etc. Some environment reflections (cubemaps) can be generated at runtime as you move an object around (most notably in racing games).
Cubic Environment Mapping – a way of generating an environmental reflection map with six sides that are mapped onto a cube, recreating the reflection of the environment around an object.
SkyBox – creating a ‘sky’ in a virtual scene by enclosing the entire scene in a large box with images on 5 sides.
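Since fresnel comes up constantly in shader discussions, here is a tiny sketch of Schlick’s approximation, one common way to compute view-dependent reflectance (this is purely illustrative and not from any particular engine):

--Schlick's approximation: r0 is the reflectance looking straight at the surface
fn schlickFresnel r0 cosTheta =
(
     r0 + (1.0 - r0) * (1.0 - cosTheta)^5
)
schlickFresnel 0.04 1.0  --head-on view: 0.04
schlickFresnel 0.04 0.1  --near-grazing view: ~0.61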

Here are some example sentences:

Artist: This place here where the light shines on the surface is too small, it makes my object look too wet.
Technical Artist: The surface is too glossy, as a result, the area of specular reflection where you see the light is very small.

Artist: Like in the photos we took, things in the distance should be lighter, in the engine can we make things lighter as they get farther away?
Technical Artist: As things recede into the distance, aerial perspective causes them to become lighter; to achieve this we should increase the environment fog slightly.

Taking Photographic Reference

I feel every technical artist who assesses visual output should own a proper Digital Single Lens Reflex camera (DSLR), no matter what quality or how old. This will force you to understand and work with many of the terms above. The artist in you will want to take good pictures, and this is about much more than good composition: you are essentially recording light. You will need to learn a lot to be able to properly meter and record light in different situations. Because it’s digital, you will be able to iterate and learn fast, recognizing cause-and-effect relationships the same way we do with the realtime feedback of scripting languages in 3d apps.

posted by Chris at 8:17 AM  
