Stumbling Toward 'Awesomeness'

A Technical Art Blog

Wednesday, September 17, 2008

Visualizing MRI Data in 3dsMax

Many of you might remember the fluoroscopic shoulder carriage videos I posted on my site about four years ago. I always wanted to do a sequence of MRIs of the arm moving around. Thanks to Helena, an MRI tech I met through someone, I did just that: I was able to get ~30 minutes of idle time on the machine while on vacation.

The data I got was basically image data: slices along an axis. I wanted to visualize this data in 3D, but the hospital did not have software to do that. I really wanted to see the muscles and bones posed in three-dimensional space as the arm went through different positions, so I decided to write some visualization tools myself in maxscript.

At left is a 512×512 MRI of my shoulder, arm raised (image downsampled to 256, animation on 5’s). The MRI data has some ‘wrap around’ artifacts because it was a somewhat small MRI (3 tesla) and I am a big guy; when things are close to the ‘wall’ they get these artifacts, and we wanted to see my arm. I am uploading the raw data for you to play with; you can download it from here: [data01] [data02]

Volumetric Pixels

Above is an example of a 128×128, 10-slice reconstruction with greyscale cubes.

I wrote a simple tool called ‘mriView’. I will explain how I created it below, and you can download it and follow along if you want. [mriView]

The first thing I wanted to do was create ‘volumetric pixels’, or ‘voxels’, from the data. I decided to do this by going through all the images, culling what I didn’t want, and creating grayscale cubes out of the rest. There is a great example in the maxscript docs called ‘How To … Access the Z-Depth channel’ that I picked some pieces from; it basically shows you how to efficiently read an image and generate 3d data from it.

But first we need to get the data into 3dsMax. I needed to load sequential images, and I decided the easiest way to do this was to load AVI files. Here is an example of loading an AVI file and treating it like a multi-part image (with comments):

on loadVideoBTN pressed do
     (
          --ask the user for an avi
          f = getOpenFileName caption:"Open An MRI Slice File:" filename:"c:/" types:"AVI(*.avi)|*.avi|MOV(*.mov)|*.mov|All|*.*|"
          mapLoc = f
          if f == undefined then (return undefined)
          else
          (
               map = openBitMap f
               --get the width and height of the video
               heightEDT2.text = map.height as string
               widthEDT2.text = map.width as string
               --get how many frames the video has
               vidLBL.text = (map.numFrames as string + " slices loaded.")
               loadVideoBTN.text = getfilenamefile f
               imageLBL.text = ("Full Image Yield: " + (map.height*map.width) as string + " voxels")
               slicesEDT2.text = map.numFrames as string
               threshEDT.text = "90"
          )
          updateImgProps()
     )

We now have the height in pixels, the width in pixels, and the number of slices. This is enough data to begin a simple reconstruction.

We will do so by visualizing the data with cubes, one cube per pixel that we want to display. Be careful, however: even a simple 256×256 video is potentially 65,536 cubes per slice! In the tool, you can see that I put in the original image values, but allow the user to crop out a specific area.
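
Before kicking off a build, it can help to estimate the worst-case cube count for the chosen crop. Here is a minimal sketch of that arithmetic, assuming hypothetical mapWidth1/mapWidth2 crop values alongside the slice range from the UI:

--rough worst-case estimate of how many cubes a crop region could generate
cropW = mapWidth2 - mapWidth1 + 1
cropH = mapHeight2 - mapHeight1 + 1
numSlices = (slicesEDT2.text as integer) - (slicesEDT1.text as integer) + 1
format "Worst case: % voxels\n" (cropW * cropH * numSlices)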

Below we go through each slice, then go row by row, looking pixel by pixel for ones with a gray value above a threshold (what we want to see); when we find one, we make a box in 3d space:

height = 0.0
updateImgProps()
 
--this loop iterates through all slices (frames of video)
for frame = (slicesEDT1.text as integer) to (slicesEDT2.text as integer) do
(
     --seek to the frame of video that corresponds to the current slice
     map.frame = frame
     --loop that traverses y, which corresponds to the image height
     for y = mapHeight1 to mapHeight2 do
     (
          voxels = #()
          currentSlicePROG.value = (100.0 * y / totalHeight)
          --read a line of pixels
          pixels = getPixels map [0,y-1] totalWidth
          --loop that traverses x, the line of pixels across the width
          for x = 1 to totalWidth do
          (
               if (greyscale pixels[x]) < threshold then
               (
                    --if you are not a color we want to store: do nothing
               )
               --if you are a color we want, we will make a cube with your color in 3d space
               else
               (
                    b = box width:1 length:1 height:1 name:(uniqueName "voxel_")
                    b.pos = [x,-y,height]
                    b.wirecolor = color (greyscale pixels[x]) (greyscale pixels[x]) (greyscale pixels[x])
                    append voxels b
               )
          )
          --garbage collection is important on large datasets
          gc()
     )
     --increment the height to bump your cubes to the next slice
     height+=1
     progLBL.text = ("Slice " + (height as integer) as string + "/" + (totalSlices as integer) as string + " completed")
     slicePROG.value = (100.0 * (height/totalSlices))
)

Things really start to choke when you are using cubes, mainly because you are generating so many objects in the scene. I added the option to merge all the cubes row by row, which sped things up and helped memory, but this was still not really the visual fidelity I was hoping for…
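
For reference, that row merge is essentially an attach pass over the boxes collected for each row. Something along these lines (a sketch only, reusing the voxels array from the loop above, not the exact code from mriView):

--collapse one row of boxes into a single editable mesh
if voxels.count > 1 then
(
     rowMesh = convertToMesh voxels[1]
     for i = 2 to voxels.count do meshop.attach rowMesh voxels[i]
)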

Point Clouds and ‘MetaBalls’

I primarily wanted to generate meshes from the data, so the next thing I tried was making a point cloud and then using that to generate a ‘BlobMesh’ (metaball) compound geometry type. In the example above, you see the head of my humerus and the tissue connected to it. Below is the code; it is almost simpler than the boxes, it just takes some finessing of Edit Poly. I have only commented the changes:

I make a plane and then delete all the verts to give me a ‘clean canvas’ of sorts. If anyone knows a better way of doing this, let me know:

p = convertToPoly(Plane lengthsegs:1 widthsegs:1)
p.name = "VoxelPoint_dataSet"
polyop.deleteVerts $VoxelPoint_dataSet #(1,2,3,4)

Then, where we created a box before, we now create a point:

polyop.createVert $VoxelPoint_dataSet [x,-y,height]
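
In context, the inner pixel loop ends up looking something like this (a minimal sketch that reuses the same threshold test as the cube version):

--same pixel loop as before, but adding verts to the point cloud instead of boxes
for x = 1 to totalWidth do
(
     if (greyscale pixels[x]) >= threshold then
          polyop.createVert $VoxelPoint_dataSet [x,-y,height]
)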

This can get really time and resource intensive, so I would let some of these go overnight. That was pretty frustrating, because it slowed iteration time down a lot, and the BlobMesh was very slow as well.

Faking Volume with Transparent Planes


I was talking to Marco at work (a Technical Director) and showing him some of my results, and he asked me why I didn’t just try using transparent slices. I told him I had thought about it, but I really know nothing about the material system in 3dsMax, much less its maxscript exposure. He said that was a good reason to try it, and I agreed.

I started by making one material per slice. This worked well until I ran into the fact that 3dsMax only gives you 24 material editor slots. Instead of fixing this, they added ‘multi-materials’, which can have n sub-materials. So I adjusted my script to use sub-materials:

--here we set the number of sub-materials to the number of slices
meditMaterials[matNum].materialList.count = totalSlices
--you also have to properly set the materialIDList
for m=1 to meditMaterials[matNum].materialList.count do
(
     meditMaterials[matNum].materialIDList[m] = m
)
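
One note: the editor slot has to contain a multi-material before its sub-material count can be set; a one-line sketch of that step, assuming matNum and totalSlices from the tool:

--make sure the editor slot actually holds a multi-material first
meditMaterials[matNum] = Multimaterial numsubs:totalSlices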

Now we iterate through, generating the planes and assigning sub-materials to them with the correct frame of video for the corresponding slice:

--create a plane for this slice, positioned at its height in the stack
p = plane name:("slice_" + frame as string) pos:[0,0,frame] width:totalWidth length:totalHeight
p.lengthsegs = 1
p.widthsegs = 1
--assign the corresponding sub-material and turn shadows off
p.material = meditMaterials[matNum][frame]
p.castShadows = off
p.receiveshadows = off
meditMaterials[matNum].materialList[frame].twoSided = on
meditMaterials[matNum].materialList[frame].selfIllumAmount = 100
meditMaterials[matNum].materialList[frame].diffuseMapEnable = on
--use the source video as both the diffuse and opacity map, offset to this slice's frame
newMap = meditMaterials[matNum].materialList[frame].diffuseMap = Bitmaptexture filename:mapLoc
newMap.starttime = frame
newMap.playBackRate = 1
newMap = meditMaterials[matNum].materialList[frame].opacityMap = Bitmaptexture fileName:mapLoc
newMap.starttime = frame
newMap.playBackRate = 1
--display the texture in the viewport
showTextureMap p.material on
mat += 1

This was very surprising: not only does it run fast, it looks great. Of course you are generating no geometry, but it is a great way to visualize the data. The example below is a 512×512 MRI of my shoulder (arm raised) rendered in realtime. The only problem I had was an alpha-test render error when viewed directly from the bottom, but this looks to be a 3dsMax issue.


I rendered the slices cycling from bottom to top. In one MRI the arm is raised; in the other, the arm is lowered. The results are surprisingly decent. You can check that video out here. [shoulder_carriage_mri_xvid.avi]


You can also layer multiple slices together. Above, I have isolated the muscles and soft tissue from the skin, cartilage and bones by looking for pixels in certain luminance ranges. In the image above I am ‘slicing’ away the white layer halfway down the torso, and below you can see a video of this in realtime as I search for the humerus; it is a really fun and interesting way to view the data.
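
The luminance-range isolation is just a band test instead of a single threshold. A minimal sketch, with hypothetical lowT/highT bounds, dropped into the same pixel loop as before:

--keep only pixels inside a luminance band (lowT/highT are hypothetical bounds)
lum = greyscale pixels[x]
if lum >= lowT and lum <= highT then
     polyop.createVert $VoxelPoint_dataSet [x,-y,height]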

Where to Go From Here

I can now easily load up any of the MRI data I have and view it in 3d, though I would like to be able to better create meshes from specific parts of the data, in order to isolate muscles or bones. To do this I need to allow the user to ‘pick’ a color from part of the image, and then use that to isolate just those pixels and remesh only that part. I would also like to add something that allows you to slice through the planes from any axis; that shouldn’t be difficult, it will just take more time.
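
Slicing along another axis mostly comes down to resampling the stack: building a new image where each column is the same pixel column taken from successive frames. A rough sketch of the idea (sliceX is a hypothetical column index; map, totalSlices and totalHeight are the values already used in the tool):

--build a side-view slice: column 'frame' of the new image is pixel column sliceX of that frame
sliceX = 256
reslice = bitmap totalSlices totalHeight
for frame = 1 to totalSlices do
(
     map.frame = frame
     for y = 1 to totalHeight do
     (
          px = getPixels map [sliceX, y-1] 1
          setPixels reslice [frame-1, y-1] px
     )
)
display reslice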

posted by Chris at 3:48 PM  

9 Comments »

  1. That’s extraordinarily well done Chris. The animations are fantastic! A very clean and simple solution with impressive results.

    Comment by Martin — 2008/09/17 @ 5:12 PM

  2. Cool data (what about part2?), thanks for posting. Have you tried using software dedicated for processing medical images, such as MeVisLab (http://www.mevislab.de/) to extract geometry? I tried quickly but without the Dicom files it’s a little complicated to rebuild the stack of slices.

    Comment by MartinB — 2008/09/18 @ 2:06 PM

  3. As you can see I just updated the post a bit. I will upload more data; I would need to take the hospital name out of the DICOM files. I did try many free viewers, but to make good geometry they needed slices from different axes, and doing it myself taught me some stuff and only took a few hours… Plus it was fun! 🙂

    Comment by admin — 2008/09/18 @ 2:39 PM

  4. I actually grabbed your data and created it in After Effects… it worked surprisingly well! I’ll email you the clip if you’d like to see it.

    Comment by Martin — 2008/09/20 @ 7:02 AM

  5. Hi Chris,

    I was searching for some other maxScript stuff, and stumbled across your project. Nicely done!
    Thought you might want to know that Max is not limited to the # of materials; at least not in a practical sense (you can go into the hundreds with no problem). It’s the material EDITOR that is limited to 24 slots at a time. Clear the editor, and you can load in 24 other materials. FWIW, I usually use only 5 or 6 active slots, but believe me, there are plenty more materials in any given scene!

    Just diving into the material exposure via maxScript myself, or I could offer some more practical tips.

    cheers,
    Jeff

    Comment by Jeff Brown — 2008/10/10 @ 9:07 PM

  6. Amazing work. I did notice that you created a “empty” editPoly object using 3 lines. That’s what I did before, but now I use:

    myPolyObj = convertToPoly (mesh mesh:(triMesh()))

    One line. Nice and clean. You asked if there was a better way. 🙂

    Comment by Dave Black — 2008/10/27 @ 11:34 PM

  7. Hey, amazing maxscript you’ve done here. I was looking for point rendering with maxscript. There is 3ds documentation to do that but nothing about shading. My wish is to create a basic version of the Krakatoa renderer (http://www.youtube.com/watch?v=E_uu6K30-Ik&feature=related). Just render particles as shaded points, nothing more. Any ideas or links to help?

    Comment by loran — 2009/10/12 @ 10:27 PM

  8. Pretty cool stuff… I’m actually trying to do a similar thing in Maya, and I just found this great tool: http://www.mevislab.de/ Thought you might be interested… Have fun

    Cheers

    Comment by Stophe — 2011/12/12 @ 4:45 PM

  9. Amazing script!
    Is there any way to keep the colors of the .avi frame when creating cubes?
    Thank you.

    Comment by Remi — 2014/01/21 @ 4:34 AM
