Stumbling Toward 'Awesomeness'

A Technical Art Blog

Friday, January 18, 2013

Moving to ‘Physically-Based’ Shading

damo_engine

At the SIGGRAPH Autodesk User Group we spoke a lot about our character technology and our switch to Maya. One area we haven't spoken about as much is the next-gen update to our shading and material pipeline; however, Nicolas and I have an interview out in Making Games where we talk about it in detail publicly for the first time, so I can mention it here. One of the reasons we have focused so heavily on character technology is that it touches so many departments and is a very difficult problem to crack; at Crytek we also have a strong history in lighting and rendering.

What is ‘Physically-Based’ Shading?

The first time I ever encountered a physically-based pipeline was while working at ILM. The guys had gotten tired of having to create different light setups and materials per shot or per sequence. Moving to a more physically-based shading model meant we would waste less time re-lighting and tweaking materials, and also get a more natural, better initial result, quicker. [Ben Snow's 2010 PBR SIGGRAPH Course Slides]

WHAT IS MEANT BY ‘PHYSICAL’


image credit: http://myphysicswebschool.blogspot.de/

A physically-based shading model behaves much more like a real-world light simulation. One of the biggest differences is that the amount of reflected light can never exceed the amount of incoming light that hit the surface; older lighting models tended to have overly bright and overly broad specular highlights. With the Lambert/Blinn-Phong model it was possible to have many situations where a material emitted more light than it received. An interesting caveat of physically-based shading is that the user no longer has direct control over the specular response (more under 'Difficult Transition' below). Because the way light behaves is much more realistic and natural, materials authored for this shading model work equally well in all lighting environments.

Geek Stuff: 'Energy conservation' is a term you will often hear used in conjunction with physically-based lighting. Here's a quote from the SIGGRAPH '96 course notes that I always thought was a perfect explanation of reflected diffuse and specular energy:

“When light hits an object, the energy is reflected as one of two components; the specular component (the shiny highlight) and the diffuse (the color of the object). The relationship of these two components is what defines what kind of material the object is. These two kinds of energy make up the 100% of light reflected off an object. If 95% of it is diffuse energy, then the remaining 5% is specular energy. When the specularity increases, the diffuse component drops, and vice versa. A ping pong ball is considered to be a very diffuse object, with very little specularity and lots of diffuse, and a mirror is thought of as having a very high specularity, and almost no diffuse.”
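To make that quote concrete, here is a minimal numeric illustration of the idea; this is just an illustration of energy conservation, not any engine's actual BRDF:

# Minimal illustration: whatever fraction of incoming light a material reflects,
# the diffuse and specular parts can never sum to more than the incoming amount.
def reflected_energy(incoming, specular_fraction):
    """Split incoming light into specular and diffuse parts that always sum to <= incoming."""
    specular = incoming * specular_fraction
    diffuse = incoming * (1.0 - specular_fraction)
    return diffuse, specular

d, s = reflected_energy(incoming=1.0, specular_fraction=0.05)   # a ping-pong ball
print d, s   # 0.95 0.05 -- 95% diffuse, 5% specular, never more than the input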

PHYSICALLY-PLAUSIBLE

It's important to understand that everything is a hack; whether it's V-Ray or a game engine, we are just talking about different levels of hackery. Game engines often take the cake for approximations and hacks. One of my guys once said, 'Some people just remove spec maps from their pipeline and all of a sudden they're "physically-based".' It's not just the way our renderers simulate light that is an approximation; it's important to remember that we need to feed the shading model physically plausible data as well. When you make an asset, you are making a material that tries to mimic certain physical characteristics.

DIFFICULT TRANSITION

Once physics get involved, you can cheat much less, and in film we cheeeeeaaat. Big time. Ben Snow, the VFX Supe who ushered in the change to a physically-based pipeline at ILM, was quoted in VFXPro as saying: "The move to the new [pipeline] did spark somewhat of a holy war at ILM." I mentioned before that the artist loses control of the specular response; in general, artists don't like losing control or adopting new ways of doing things.

WHY IT IS IMPORTANT FOR GAMES AND REAL-TIME RENDERING

Aside from more natural lighting and rendering, in an environment where the player determines the camera, and often the lighting, it's important that materials work under all possible lighting scenarios. As the Product Manager of CINEBOX, I constantly had our renderer compared to Mental Ray, PRMan, and others; the team added BRDF support and paved the way for physically-based rendering, which we hope to ship in 2013 with Ryse.

microcompare05

General Overview for Artists

At Crytek, we have always added great rendering features, but never really took a hard focus on consistency in shading and lighting. Like ILM in my example above, we often tweaked materials for the lighting environment they were to be placed in.

GENERAL RULES / MATERIAL TYPES

Before we start talking about the different maps and material properties, you should know that in a physically-based pipeline you will have two slightly different workflows, one for metals, and one for non-metals. This is more about creating materials that have physically plausible values.

Metals:

  • The specular color for metal should always be above sRGB 180
  • Metal can have colored specular highlights (for gold and copper for example)
  • Metal has a black or very dark diffuse color; because metals absorb all light that enters below the surface, they have no ‘diffuse reflection’

Non-Metals:

  • Non-metal has monochrome/gray specular color. Never use colored specular for anything except certain metals
  • The sRGB color range for most non-metal materials is usually between 40 and 60. It should never be higher than 80/80/80
  • A good clean diffuse map is required

GLOSS

gloss_chart

At Crytek, we call the map that determines roughness the ‘gloss map’; it's actually the inverse of roughness, but we found this easier to author. This is by far one of the most important maps, as it determines the size and intensity of specular highlights, but also the contrast of the cube map reflection, as you see above. A good detail normal map can make a surface feel like it has a certain ‘roughness’, but you should start thinking of the gloss map as adding a ‘microscale roughness’. Look above at how, as the roughness increases, so does the breadth of the specular highlight. Here is an example from our CryENGINE documentation that was written for Ryse:

click to enlarge

click to enlarge

click to enlarge

click to enlarge
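Purely as an illustration of why the gloss map matters so much, here is a small sketch relating a gloss value to a Blinn-Phong specular exponent and the peak intensity of its highlight. The gloss-to-exponent mapping and the (n+2)/(2*pi) normalization are common textbook choices, not necessarily what CryENGINE actually does:

import math

# Hypothetical mapping from a 0-1 gloss value to a Blinn-Phong specular exponent,
# plus the common (n+2)/(2*pi) energy-conserving normalization.
def gloss_to_exponent(gloss):
    return 2.0 ** (gloss * 11.0)           # gloss 0 -> exponent 1, gloss 1 -> 2048

def highlight_peak(exponent):
    return (exponent + 2.0) / (2.0 * math.pi)

for g in (0.1, 0.5, 0.9):
    n = gloss_to_exponent(g)
    print 'gloss %.1f  exponent %6.0f  peak intensity %8.1f' % (g, n, highlight_peak(n))
# Higher gloss -> a tighter, far brighter highlight; lower gloss -> a broad, dim one.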

DIFFUSE COLOR

Your diffuse map should be a texture with no lighting information at all. Think of a light with a value of ‘100’ shining directly onto a polygon with your texture. There should be no shadow or AO information in your diffuse map. As stated above, a metal should have a completely black diffuse color.

Geek Stuff: Diffuse can also be referred to as ‘albedo’; albedo is the measure of diffuse reflectivity. This term is primarily used to scare artists.

SPECULAR COLOR

As previously discussed, non-metals should only have monochrome/gray-scale specular color maps. Specular color is a real-world physical value, and your map should be basically flat color; you should use existing measured values and not introduce noise or variation. The spec color map is not a place to be artistic, it stores real-world values. You can find many tables online with plausible specular color values; here is an example:

Material                sRGB Color      Linear (Blend Layer)
Water                   38 38 38        0.02
Skin                    51 51 51        0.03
Hair                    65 65 65        0.05
Plastic / Glass (Low)   53 53 53        0.03
Plastic (High)          61 61 61        0.05
Glass (High) / Ruby     79 79 79        0.08
Diamond                 115 115 115     0.17
Iron                    196 199 199     0.57
Copper                  250 209 194     N/A
Gold                    255 219 145     N/A
Aluminum                245 245 247     0.91
Silver                  250 247 242     N/A
If a non-metal material is not in the list, use a value between 45 and 65.
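The ‘Linear’ column above is just the sRGB value decoded to linear space. A quick sanity check (standard sRGB decode, rounded, so a value may differ by a point or two from the table):

def srgb_to_linear(value_8bit):
    """Decode one 8-bit sRGB channel (0-255) to a linear 0-1 value."""
    c = value_8bit / 255.0
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

for name, srgb in [('Water', 38), ('Skin', 51), ('Hair', 65), ('Diamond', 115)]:
    print '%-8s sRGB %3d  ->  linear %.2f' % (name, srgb, srgb_to_linear(srgb))
# Water sRGB 38 -> ~0.02, Skin 51 -> ~0.03, matching the table above.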

Geek Stuff: SPECULAR IS EVERYWHERE: In 2010, John Hable did a great post showing the specular characteristics of a cotton t-shirt and other materials that you wouldn’t usually consider having specular.

EXAMPLE ASSET:

Here you can see the maps that generate this worn, oxidized lion sculpture.

rust

click to enlarge

rust2

EXAMPLES IN AN ENVIRONMENT

640x

See above how there are no variations in the specular color maps? Notice how the copper items on the left have a black diffuse texture, and how the specular color maps are solid, flat colors.

SETTING UP PHOTOSHOP

color_settings

In order to create assets properly, we need to set up our content creation software properly, in this case: Photoshop. Go to Edit>Color Settings… and set the dialog like the above. It's important that you author textures in sRGB.

Geek Stuff: We author in sRGB because it gives us more precision in darker colors and reduces banding artifacts. The eye has 4.5 million cones that perceive color, but 90 million rods that perceive luminance changes. Humans are much more sensitive to contrast changes than to color changes!

Taking the Leap: Tips for Leads and Directors

New technologies that require paradigm shifts in how people work, or how they think about reaching an end artistic result, can be difficult to integrate into a pipeline. At Crytek I am the Lead/Director in charge of the team making the initial shift to physically-based lighting; I also led the reference trip and managed the hardware requests to get key artists on calibrated wide-gamut display devices. I am just saying this to put the next items in some kind of context.

QUICK FEEDBACK AND ITERATION

It’s very important that your team be able to test their assets in multiple lighting conditions. The easiest route is to make a test level where you can cycle lighting conditions from many different game levels, or sampled lighting from multiple points in the game. The default light in this level should be broad daylight IMO, as it’s the hardest to get right.

USE EXAMPLE ASSETS

I created one of the first example assets for the physically-based pipeline. It was a glass inlay table that I had at home, which had wood, concrete (grout), metal, and multi-colored glass inlay. This asset served as a reference asset for the art team. Try to find an asset that can properly show the guys how to use gloss maps; IMO, understanding how roughness affects your asset's surface characteristics is maybe the biggest challenge when moving to a physically-based pipeline.

TRAIN KEY PERSONNEL

As with rolling out any new feature, you should train a few technically-inclined artists to help their peers along. It’s also good to have the artists give feedback to the graphics team as they begin really cutting their teeth on the system. On Ryse, we are doing the above, but also dedicating a single technical artist to helping with environment art-related technology and profiling.

CHEAT SHEET

It's very important to have a ‘cheat sheet’. This is a sheet we created on the Ryse team that lets an artist use the color picker to sample common ‘plausible’ values.

SPEC_Range_new.bmp

click to enlarge

HELP PEOPLE HELP THEMSELVES

We have created a debug view that highlights assets whose specular color is not in a physically-plausible range. We are very much in favor of making tools that help people be responsible, and that validate or highlight work that is not. We also allow people to set a solid specular value in the shader to limit memory consumption on simple assets.
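The debug view itself lives in the engine, but the same idea can be sketched offline. A hypothetical validator, with thresholds taken from the rules earlier in this post (this is not our actual tool):

def check_specular(rgb, is_metal=False):
    """Flag sRGB specular colors outside the plausible ranges described above."""
    lo, hi = (180, 255) if is_metal else (40, 80)
    problems = []
    if not is_metal and len(set(rgb)) > 1:
        problems.append('non-metal specular should be monochrome')
    if any(c < lo or c > hi for c in rgb):
        problems.append('values outside the %d-%d range' % (lo, hi))
    return problems

print check_specular((51, 51, 51))       # [] -- a plausible skin value
print check_specular((120, 95, 60))      # flagged: colored and out of range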

CALIBRATION AND REFERENCE ACQUISITION

calibrate

Above are two things that I actually carry with me everywhere I go. The X-Rite ColorChecker Passport, and the Pantone Huey Pro monitor calibration toolset. Both are very small, and can be carried in a laptop bag. I will go into reference data acquisition in another post. On Ryse we significantly upgraded our reference acquisition pipeline and scanned a lot of objects/surfaces in the field.

 

TECHNICAL IMPROVEMENTS BASED ON PRODUCTION USE

Nicolas Shulz presented many improvements made based on production use at GDC 2014. His slides are here. He details things like the importance of specular filtering to preserve highlights as objects recede into the distance, and why we decided to couple normals and roughness.
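For the curious, coupling normals and roughness generally means dropping the stored gloss wherever the mipped normal map starts to average out (often called Toksvig filtering). A rough numpy sketch of the idea follows; this is not Nicolas' actual implementation, and the gloss-to-specular-power mapping here is made up:

import numpy as np

def filtered_gloss(normals, gloss, spec_power_max=2048.0):
    """Toksvig-style idea: where averaged (mipped) normals shorten, the surface
    is effectively rougher, so the stored gloss should drop. Sketch only.
    normals: (N, 3) array of unit normals covered by one lower-mip texel."""
    avg = normals.mean(axis=0)
    na = np.linalg.norm(avg)                       # < 1.0 when the normals disagree
    power = gloss * spec_power_max                 # pretend gloss maps to a spec power
    toksvig = na / (na + power * (1.0 - na))
    return (toksvig * power) / spec_power_max      # back to a 0-1 gloss value

bumpy = np.array([[0.0, 0.0, 1.0], [0.3, 0.0, 0.954], [-0.3, 0.0, 0.954]])
print filtered_gloss(bumpy, gloss=0.9)             # noticeably lower than the authored 0.9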

UPDATE: We've now shipped Ryse, and I have tried to update the post a little. I was an invited speaker at HPG 2014, where I touched on this topic a bit, and can now update this post with some details and images (Tips for Leads and Directors). Nicolas also spoke at GDC 2014, and I have linked to his slides above. Though this post focuses on environments, in the end, with the amount of armor on characters, the PBR pipeline was really showcased everywhere. Here's an image of multiple passes of Marius' final armor:

marus_breackUp

click to enlarge

posted by Chris at 7:26 PM  

Wednesday, January 9, 2013

Raucous Ball of Noise

email_overload

I can’t remember the last time I had a new year’s resolution. But this year I decided to go for it.

A friend and I were joking that we increasingly feel like Producers: how we spend a large chunk of our time just making sure that things are moving. That a meeting has action items, or minutes. That tasks are scoped, their dependencies tracked, have resources assigned, or have dates on a calendar. That a process has proper gates to allow for course correction, etc. I now spend a majority of my time writing emails, attending meetings, or talking at desks.

Death by Mail

But what is crippling is the email. I feel I have made a career out of always trying to be helpful, but I was surprised how easily I reply to anything someone sends me, and how willing people are to just ‘go hunting with a shotgun’ and mail 15 others instead of trying to have a discussion with the right person. Many of the mails I saw myself spending time on were threads involving many people and important topics; I felt the need to be involved, but we rarely seemed to come to solid decisions, just running commentary. These mails often had more than 10 people added in CC ‘for awareness’, but then those people feel the need to contribute their opinion in some way.

It turned the simplest discussion into a raucous ball of noise, which often then required a meeting to decide how to progress.

The meetings were more successful, I think in part due to the fact that only the people who needed to be involved in the decision were invited. Unfortunately, I had often spent time on the mail thread to avoid the need for a meeting, only to find myself reiterating my sentiments in a meeting the next day.

I looked for the day where I wrote the fewest mails; the number was ~35, and it was a recent sick day when I had stayed home.

Small Adjustment, Big Victory

So I decided to pull myself out of this; after all, it is somewhat self-induced. Of all the options, the best seemed to be limiting myself to 10 work emails a day. All other communication would be in person, in meetings, or on the phone.

I didn’t think this would have the impact it did.

From this, other things started to fall into place. I really disliked how standard operating procedure increasingly felt like constantly looking for dropped balls. I need to let dependencies and other departments drop their balls, and hope that they will learn from it, or that someone else is watching. In essence: trust people more, and as a by-product, spend more time being a Director and less a Producer.

Ten emails a day forced me to choose carefully which email discussions I want to be involved in. I was not respecting my own time, and this arbitrary rule forced me to do so. As a result, it allows me to spend more time on Art Technology initiatives, looking at the project, talking with my team, and giving proper direction.

I can’t reject meeting invites, or ignore mails, but this little adjustment has really helped me more than I thought it would.

posted by Chris at 2:42 AM  

Monday, January 7, 2013

Abusing ‘Blind Data’ in Maya

‘Blind data’ is custom data that you can store on any object or its components (vertex, edge, polygon, etc). The documentation says ‘Blind data is information stored with polygons which is not used by Maya in any way..’ I believe it is used when importing meshes from other apps that have properties that do not map to Maya, so that when you take them back to those apps, those properties remain.

Anyway, the important point here is that blind data is metadata (int, float, double, boolean, string, binary) that you can attach to any component. It doesn't matter what happens to that component: you can extract a polygon from a mesh, its index will have changed, its object will have changed, but its blind data will remain with it. The only drawback is that it can be painfully slow to write this data, but we will get to that later.

Simple Example

First let's create a blind data template; this is required to store the data later. We use the command ‘blindDataType’ to create a template for a string type called ‘skinningInfo’, or ‘skin’ for short, giving it the ID 12344. Then we query the ID and it returns the blind data attribs we have created.

cmds.blindDataType(id=12344, dt='string', ldn='skinningInfo', sdn='skin')
print cmds.blindDataType(id=12344, tn=1, q=1)
>>['skinningInfo', 'skin', 'string']

So now we have our template, let’s try using it, this is more focused on getting the idea across than speed:

#query the vertex count of the mesh
v = cmds.polyEvaluate(node, v=1)
#loop through vertices
for vtx in range(0, v):
    #get influences
    infs = cmds.skinCluster(sc, inf=1, q=1)
    #get weights
    objVtx = node + ".vtx[" + str(vtx) + "]"
    vals = cmds.skinPercent(sc, objVtx, q=1, v=1)
    #build a dict of influence:weight for this vertex
    weightDict = {}
    for i in range(0, len(infs)):
        weightDict[infs[i]] = vals[i]
    #write the dict to blind data on this vertex
    cmds.polyBlindData(objVtx, id=12344, at='vertex', ldn='skinningInfo', sd=str(weightDict))

So here you have saved a dictionary per vertex that has key/value pairs of influence/weight. You can query like so:

#I have a vertex selected in component mode
print cmds.polyQueryBlindData(cmds.ls(sl=1), id=12344, showComp=1)
['polySurface2.vtx[64].skin', "{u'joint2': 0.49755714634259796, 'joint3': 0.49755714634259784, 'joint1': 0.0048857073148042395}"]

Now On To Something More Useful

So let's create a function to store skinning data per-vertex; as you may have seen with the above, that was painfully slow. If you have written any skinning tools, you know that the solution to this (other than learning C++) is to apply your change to all vertices at once. Below we build two lists, one of vertices and one of weights, then we write the blind data for all of them in a single call:

def storeBlindSkinning(mesh, sc):
	'''
	mesh is a skinned mesh, and sc is the skinCluster affecting the mesh
	'''
	vtxList = []
	vtxWeights = []
	v = cmds.polyEvaluate(mesh, v=1)
	infs = cmds.skinCluster(sc, inf=1, q=1)
	for vtx in range(0, v):
		objVtx = mesh + ".vtx[" + str(vtx) + "]"
		vtxList.append(objVtx)
		vals = cmds.skinPercent(sc, objVtx, q=1, v=1)
		#build one influence:weight dict per vertex
		weightDict = {}
		for i in range(0, len(infs)):
			weightDict[infs[i]] = vals[i]
		vtxWeights.append(str(weightDict))
	#write the blind data for every vertex in a single call
	cmds.polyBlindData(vtxList, id=12344, at='vertex', ldn='skinningInfo', sd=vtxWeights)

Setting all the data at once is about a third faster. However, setting this data still takes quite some time, and you may want to take a hit for a progress bar (break it up into groups). On ~60,000 vertices this took 10 minutes (15 minutes doing it inside the loop). I don't mind that hit if it means that I can now detach/alter/slice my mesh without losing skinning data. You can even extract faces, and the new vertices created will get the same blind data as their originals (one becomes two).

As always, the C++ API is much faster; my colleague Bogdan speed-tested the above function, and 50,000 vertices took only a few milliseconds, compared to 10 minutes in pythonland.
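For completeness, the read-back direction is just the reverse: query the blind data per vertex and hand the weights back to skinPercent. A rough, unprofiled sketch (the helper name is made up):

import ast
import maya.cmds as cmds

def restoreBlindSkinning(mesh, sc):
	'''Hypothetical counterpart to storeBlindSkinning: read the stored
	influence:weight dicts back off each vertex and re-apply them.'''
	v = cmds.polyEvaluate(mesh, v=1)
	for vtx in range(0, v):
		objVtx = mesh + ".vtx[" + str(vtx) + "]"
		data = cmds.polyQueryBlindData(objVtx, id=12344, showComp=1)
		if not data:
			continue
		#data[1] is the stringified dict we stored, e.g. "{u'joint2': 0.49, ...}"
		weightDict = ast.literal_eval(data[1])
		cmds.skinPercent(sc, objVtx,
			transformValue=[(jnt, val) for jnt, val in weightDict.items()])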

Remember, there are other ways to store skinning data, using UVs, position, vertex color channels, etc. I just wanted to introduce people to blind data in Maya and show a potential use.

posted by admin at 4:04 AM  

Sunday, January 6, 2013

My 2012 in Review

2012 blew by incredibly fast. If I had to sum the year up into three categories it would be:

  • RYSE / CINEBOX: At Crytek, for the first time ever I broke away from the Crysis franchise and have been working on Ryse with my old friend Hanno Hagedorn, who returned to Crytek this year. In my ‘20% time’ I am still on CINEBOX, which saw some of its first production use this year on some high-profile film and game projects, but I can't say much more than that.
  • MAYA: New project, new team, and new software/pipeline! I mentioned this in my SIGGRAPH class, which cleared PR, so no issue mentioning it here: Ryse is the first Maya project at Crytek. In the past year, with the help of Crytek UK, we have been building up a Maya pipeline from scratch. The 3dsMax pipeline was ~10 years old and had a lot of legacy stuff.  Any Maya studio I have worked at always had a legacy pipeline, and I had a mental checklist of things we all would have done differently ‘if we could rewrite everything’. It has been really fun working with the Ryse TechArt team to build this pipeline, we have some really great guys (and gal!), but I won’t out them here. (to the dismay of recruiters everywhere)
  • DIVING: This year I spent a lot more time in the water! Not only diving, but I stepped up my photography; nothing raises your pulse like changing lenses out over a 200m dropoff! Colleen and I were lucky enough to get to Indonesia, Malaysia, and Egypt. (video, photos) We stayed on an old oil rig off Sipadan where we met a group of great photographers, one of whom was Sin Hwa, who took that photo of me above.
posted by admin at 2:27 AM  

Monday, July 16, 2012

CINEBOX SIGGRAPH Talk and Studio Workshops

CRYENGINE CINEBOX

I am giving a talk at SIGGRAPH 2012 entitled ‘Film/Game Convergence: What’s Taking So Long?‘ where I discuss the inherent differences between games and film and go over a few case studies of projects that attempted to use a game engine for film previs. I also talk a bit about the development of our CINEBOX application, the decisions we had to make, and how we dealt with many of the issues previous attempts have run into.

STUDIO WORKSHOPS

I will be giving two more Studio Workshops this year, the first is a followup to last year’s Introduction to Python, entitled ‘Python Scripting in Maya‘. The other workshop is ‘Building a Game Level‘, which is the same basic workshop I gave last year where I show people how to make a playable game level in CryEngine in an hour. Studio Workshops are hands-on sessions where each attendee has a computer and follows along with the instructor. It’s a great chance for people of all ages to learn new things.

posted by admin at 8:06 PM  

Thursday, July 12, 2012

Not Dead Yet

Click to Enlarge

I have been really busy on Ryse, but this past weekend I found some time to wrap the XNA import methods I had written in a UI. I will post it soon in an un-padded form for the people asking for it.

For those who don’t know what I am referring to, a while back I wrote some python to import XNA character files (from retail discs) into Maya as textured characters with original joint names, skinning, etc. I hit some snags on the UV, texturing, and then viewport 2.0 stuff. It’s really great to see topology, bind pose, weighting, joint layout, etc.. of your favorite characters. Great reference!

I would also like to make a post about viewport 2.0 in the next week or so, that whole system is such a complete piece of frustrating garbage, hopefully you can benefit from my aimless bumping into walls in the darkness.  Anyway, gotta start ramping up for SIGGRAPH, so that might have to wait.

posted by admin at 1:36 AM  

Wednesday, April 25, 2012

RigPorn: Halo4 Skeleton and Loco Debug

Found a screen of 343's in-game locomotion debug for anyone interested (click to enlarge)

posted by admin at 12:47 AM  

Saturday, April 21, 2012

Crytek Cinema Sandbox, FMX Talk

I can finally talk about something I have been working on for the past two years. One of the reasons I returned to Crytek was to push the use of game engines in linear content creation like film and television. On Avatar I saw how much time and effort went into layout, blocking, virtual sets, etc. The tools were archaic, and the feedback loop was abysmal at times. In games we have to lay out massive levels that people can roam through for 8-15 hours or more, and CryEngine's tools are some of the best for that.

I have been working as Product Manager with a small team of great guys, where I basically define the goals and backlog. It's thrilling to finally get to see things like Catmull-Clark subdivision at runtime, multi-channel EXR output, and Alembic support. It's been really fun to define what the product is and prioritize features largely without external dependencies or politics; I thank Crytek for trusting me to helm such a project.

We had a live demo kiosk at GDC; check out the Cinema Sandbox Website for more info.

I will be speaking at FMX about CineBox and the whole idea of using game engines for previs and virtual production: The Long Road to Film / Game Convergence

posted by admin at 12:35 PM  

Saturday, April 21, 2012

Maya: Walking the Line

I am still finding my feet in Maya; on my project, some files have grown to 800MB in size. Things get corrupt, hand-editing MAs is common, and I am really learning some of the internals.

In the past week I have had to do a lot of timeline walking to switch coord spaces and get baked animations into and out of hierarchies. In 3dsMax you can do a loop and evaluate a node ‘at time i’, and there is no redraw or anything. I didn’t know how to do this in Maya.

I previously did this by looping cmds.currentTime(i) and ‘walking the timeline’; however, you can set the time node directly like so: cmds.setAttr("time1.outTime", int(i))

Unparenting a child with keyed compensation (1200 frames):
10.03 sec – currentTime
2.02 sec – setAttr

There are some caveats: whereas in a currentTime loop you can just call cmds.setKeyframe(node), I now have to call cmds.setKeyframe(node, time=i). But when grabbing a matrix, I don't need to pass the time and it still works (I don't think you can pass it anyway); I guess it gets the time from the time node.

Here’s a sample loop that makes a locator and copies a nodes animation to world space:

#the function is fed start, end, and the node to copy ('copyWorldAnim' is just a placeholder name)
def copyWorldAnim(node, start=None, end=None):
	if not start: start = cmds.playbackOptions(minTime=1, q=1)
	if not end: end = cmds.playbackOptions(maxTime=1, q=1)
	loc = cmds.spaceLocator(name='parentAlignHelper')
	for i in range(int(start), int(end) + 1):
		cmds.setAttr("time1.outTime", int(i))
		matrix = cmds.xform(node, q=1, ws=1, m=1)
		cmds.xform(loc, ws=1, m=matrix)
		cmds.setKeyframe(loc, time=i)
posted by admin at 11:44 AM  

Tuesday, October 25, 2011

Quick Note About Range(), Modulus, and Step

Maybe it's me, but I often find myself parsing weird ASCII text files from others. Sometimes the authors knew what the data was, so there's no real markup. Take this joint list for example:

143 # bones
root ground
-1
0 0 0
root hips
0
0 0.9512207 6E-08
spine 1
1
4E-08 0.9522207 1.4E-07
spine 2
2
3E-07 1.0324 8.3E-07
spine 3
3
5.6E-07 1.11357 1.53E-06
spine 4
4
8.2E-07 1.194749 2.22E-06
head neck lower

So the first line is the number of joints, then it continues in three-line intervals, starting from the root outwards: joint name, parent index, position. I used to parse this with a pretty obtuse loop using the modulus operator. Basically, modulus is the remainder left over after division, so X%Y gives you the remainder of X divided by Y; here's an example:

for i in range(0, 20+1):
	if i%2 == 0: print i
#>> 0
#>> 2
#>> 4
#>> 6
#>> 8
#>> 10
#>> ...and so on, up to 20

The smart guys out there see where this is going... I never knew range had a ‘step’ argument. (Or I believe I did; I think I actually had this epiphany maybe two years ago, but my memory is that bad.) So parsing the above is as simple as this:

#assumes 'lines' holds the file contents, e.g. lines = open('joints.txt').readlines()
#and that the first line is the joint count: '143 # bones'
numJnts = int(lines[0].split()[0])
jointScale = 1.0
jnts = []
for i in range(1, numJnts*3+1, 3):
	jnt = lines[i].strip()
	parent = int(lines[i+1].strip())
	posSplit = lines[i+2].strip().split(' ')
	pos = (float(posSplit[0])*jointScale, \
	float(posSplit[1])*jointScale, float(posSplit[2])*jointScale)
	jnts.append([jnt, parent, pos])

Thanks to phuuchai on #python (efnet) for nudging me to RTFM!

posted by admin at 1:42 AM  

Wednesday, October 12, 2011

SIGGRAPH 2011: Intro To Python Course

I gave a workshop/talk at SIGGRAPH geared toward introducing people to Python. There were ~25 people on PCs following along, and awkwardly enough, many more than that standing and watching. I prefaced my talk with the fact that I am self-taught and by no means an expert. That said, I have created many python tools people use every day at industry-leading companies.

Starting from zero, in the next hour I aimed to not only introduce them to Python, but get them doing cool, usable things like:

  • Iterating through batches/lists
  • Reading / writing data to excel files
  • Wrangling data from one format to another in order to create a ‘tag cloud’

Many people have asked for the notes, and I only had rough notes. I love Python, and I work with this stuff every day, so I have had to go back and really flesh out some of what I talked about. This write-up has a lot less of the general chit-chat and background information; I apologize for that.

Installation / Environment Check


Let’s check to see that you have the tools properly installed. If you open the command prompt and type ‘python’ you should see this:

So Python is correctly installed. For the following, you can either follow along in the cmd window (more difficult) or in IDLE, the IDE that Python ships with (easier). IDLE can be found by typing ‘IDLE’ into the Start menu:

Variables


Variables are pieces of information you store in memory. I will talk a bit about the different types of variables.

Strings

Strings are pieces of text. I assume you know that, so let’s just go over some quick things:

string = 'this is a string'
print string
#>>this is a string
num = '3.1415'
print num
#>>3.1415

One thing to keep in mind, the above is a string, not a number. You can see this by:

print num + 2
#>>Traceback (most recent call last):
#>>  File "basics_variables.py", line 5, in
#>>    print num + 2
#>>TypeError: cannot concatenate 'str' and 'int' objects

Python is telling you that you cannot add a number to a string of text; it does not know that ‘3.1415’ is a number. So let's convert it to a number. This is called ‘casting’; we will ‘cast’ the string to a float and back:

print float(num) + 2
#>>5.1415
print str(float(num) + 2) + ' addme'
#>>5.1415 addme

Lists

Lists are the simplest ways to store pieces of data. Let’s make one by breaking up a string:

txt = 'jan tony senta michael brendon phillip jonathon mark'
names = txt.split(' ')
print names
#>>['jan', 'tony', 'senta', 'michael', 'brendon', 'phillip', 'jonathon', 'mark']
for item in names: print item
#>>jan
#>>tony
#>>senta
#>>michael
...

Split breaks up a string into pieces. You tell it what to break on; above, I told it to break on spaces: txt.split(' '). So all the people are stored in a list, which is like an array or collection in some other languages.
You can call up an item by its number, starting with zero:

print names[0], names[5]
#>>jan phillip

TIP: The [-1] index will return the last item in a list; here's a quick way to get a file name from a path:

path = 'D:\\data\\dx11_PC_(110)_05_09\\Tools\\CryMaxInstaller.exe'
print path.split('\\')[-1]
#>>CryMaxInstaller.exe

Dictionaries

These store keys, and the keys reference different values. Let’s make one:

dict = {'sascha':'tech artist', 'harry': 142.1, 'sean':False}
print dict['sean']
#>>False

So this is good, but that only looks up a single value by its key. If we want to see everything stored, we can loop over the keys using .keys():

dict = {'sascha':'tech artist', 'harry': 142.1, 'sean':False}
for key in dict.keys(): print key, 'is', dict[key]
#>>sean is False
#>>sascha is tech artist
#>>harry is 142.1

So, dictionaries are a good way to store simple relationships of key and value pairs. In case you hadn't noticed, I used some ‘floats’ and ‘ints’ above: a float is a number with a decimal, like 3.1415, and an ‘int’ is a whole number, like 10.

Creating Methods (Functions)


A method or function is like a little tool that you make. These building blocks work together to make your program.

Let's say that you have to do something many times; you want to re-use this code and not copy/paste it all over. Let's use the names example from above and make a function that takes a big string of names and returns a sorted list:

def myFunc(input):
	people = input.split(' ')
	people = sorted(people)
	return people
txt = 'jan tony senta michael brendon phillip jonathon mark'
orderedList = myFunc(txt)
print orderedList
#>>['brendon', 'jan', 'jonathon', 'mark', 'michael', 'phillip', 'senta', 'tony']

Basic Example: Create A Tag Cloud From an Excel Document


So we have an excel sheet, and we want to turn it into a hip ‘tag cloud’ to get people’s attention.
If we go to http://www.wordle.net/ you will see that in order to create a tag cloud, we need to feed it the sentences multiple times, and we need to put a tilde in between the words of the sentence. We can automate this with Python!

First, download the spreadsheet from me here: [info.csv] The CSV filetype is a great way to read/write simple documents that you can give to others; they load in Excel easily.

file = 'C:\\Users\\chris\\Desktop\\intro_to_python\\info.csv'
f = open(file, 'r')
lines = f.readlines()
f.close()
print lines
#>> ['always late to work,13\n', 'does not respect others,1\n', 'does not check work properly,5\n', 'does not plan properly,4\n', 'ignores standards/conventions,3\n']

‘\n’ is a line-break character; it means ‘new line’. We want to get rid of that, and we also want to store just the items and how many times each was listed.

file = 'C:\\Users\\chris\\Desktop\\intro_to_python\\info.csv'
f = open(file, 'r')
lines = f.readlines()
f.close()
dict = {}
for line in lines:
	split = line.strip().replace(' ','~').split(',')
	dict[split[0]] = int(split[1])
print dict
#>>{'ignores~standards/conventions': 3, 'does~not~respect~others': 1, 'does~not~plan~properly': 4, 'does~not~check~work~properly': 5, 'always~late~to~work': 13}

Now we have the data in memory in an easily readable way; let's write it out to disk.

output = ''
for key in dict.keys():
	for i in range(0,dict[key]): output += (key + '\n')
f = open('C:\\Users\\chris\\Desktop\\intro_to_python\\test.txt', 'w')
f.write(output)
f.close()


There we go. In one hour you have learned to:

  • Read and write excel files
  • Iterate over data
  • Convert data sets into new formats
  • Write, read and alter ascii files

If you have any questions, or I left out any parts of the presentation you liked, reply here and I will get back to you.

posted by admin at 5:12 AM  

Tuesday, October 4, 2011

Question: Rigging with MetaData?

As many of you know, I feel the whole ‘autorigging’ schtick is a bit overrated. Bungie gave a great talk at GDC09 (Modular Procedural Rigging), and DICE was to give one this year at SIGGRAPH (Modular Rigging in Battlefield 3), but never showed up for the talk.

At Crytek we are switching our animation department from 3dsMax to Maya. This forces us to build a pipeline there from scratch; in 3dsMax we had 7 years of script development focused on animation and rigging tools. So I am looking at quite a bit of Maya work. I have spent the past two weeks focusing on a ‘rigging system’ that I guess could be thought of as ‘procedural’, but is not really an ‘autorigger’. My past experience was always regenerating rigs with MEL commands.

Things I would like to solve:

  • Use one set of animator tools for many rigs – common interfaces, rig block encapsulation (oh god i said ‘block’)
  • Abstract things away, thinking of rigging ‘units’ and character ‘parts’ instead of individual rig elements, break reliance on naming, version out different parts
  • Be fluid enough to regenerate the ‘rigging’ at any time

First Weekend: Skeleton ‘Tagging’

I created a wrapper around the common rigging tools that I used; this way, when I rigged, it would automagically mark up the skeleton/elements as I went. This looked like so:

The foundation of this was marking up the skeleton or controls with message nodes that pointed to things or held metadata. This was cool, and I still like how simple it was; however, it didn't really create the layer of abstraction I was after. There wasn't the idea of a limb that I could tell to switch from FK to IK.
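The markup itself was nothing exotic; a stripped-down sketch of the idea looks something like this (the attribute and function names here are made up for illustration, not our actual markup):

import maya.cmds as cmds

def tagAsRigElement(node, rigPartNode, metadata=''):
	'''Hypothetical markup: point a message attr at the rig part node and
	stash some string metadata alongside it.'''
	if not cmds.objExists(node + '.rigPart'):
		cmds.addAttr(node, longName='rigPart', attributeType='message')
	if not cmds.objExists(node + '.rigMeta'):
		cmds.addAttr(node, longName='rigMeta', dataType='string')
	cmds.connectAttr(rigPartNode + '.message', node + '.rigPart', force=True)
	cmds.setAttr(node + '.rigMeta', metadata, type='string')

def findRigPart(node):
	'''Walk the message connection back to whatever rig part tagged this node.'''
	cons = cmds.listConnections(node + '.rigPart', source=True, destination=False) or []
	return cons[0] if cons else None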

Second Weekend: Custom Nodes

That Bungie talk got a lot of us excited; Roman went and created a really cool custom node plugin that does way more than we spec'd it out to do. I rewrote the rigging tools to create ‘rigPart’ nodes, which could be something like an IK chain, a set of twist joints, an expression, or a constraint. Together these could form a ‘charPart’ like an arm or leg, and all these nodes sat under a main ‘character’ node. I realize that many companies abstract their characters into ‘blocks’ or ‘parts’, but I had never seen a system that had another layer underneath that. Roman also whipped up a way that, when an attr on a custom node changes, a script could be evaluated. So whether it's a human arm or an alien tentacle arm, the ‘charPart’ node can have one FK/IK enum. I am still not sure if this is a better idea, because of the sheer legwork involved.

Third Weekend: A Mix of Both?

So a class like ‘charParts.gruntLeg()’ not only knew how to build the leg rigParts, but could also rebuild just the leg ‘rigging’ if needed. This works pretty well, but the above was pretty hard to read. I took some of my favorite things about the tree-view-based system and created a ‘character’ outliner of sorts. This made it much easier to visualize the rigParts that make up individual ‘systems’ of the character, like the leg, spine, arm, etc. I did it as a test, but in a way that I can easily swap it out with the treeWidget in the rigging tools dialog.

So how do you guys solve some of these issues?

posted by Chris at 4:00 AM  

Sunday, October 2, 2011

Creatures and Anatomy: Reticulated Python

Pythons have probably my favorite skull of any animal. The reticulated python has a jaw that is in four movable parts, and the lower two can swing open over 120 degrees. Here are the main parts of a reticulated python skull:

Via SkullsUnlimited.com

Four, independently moving jaw bones, in case you thought Predator or the Covenant were original: Mother Nature has had them beat for a while!  –And look at the angle of those teeth: nothing that goes into that mouth is ever coming out! Here’s a video of a guy getting bitten and his friends have to push the skull forward very hard, then open the mouth, and then pull it away. With these four independent jaw parts, they have the ability to really get their mouths around prey that is much larger than their bodies, like this African antelope:

I marked the mouth in red above

As if four independently moving jaw parts weren't cool enough: they have a second row of teeth! These are situated on the roof of the mouth. Yes, you read that right: reticulated pythons have palatine teeth, circled below:

This is a central row of teeth behind the maxillary teeth on the upper jaw! Here are some better pics to make it harder for you to get to sleep at night:

Here’s a link to a 3d Burmese Python skull 360 render (roll) from the DigiMorph website.

posted by admin at 3:45 PM  

Sunday, September 25, 2011

Replacing Stripped Retaining Screws on a Nikon HK-27

The Nikon 400mm 2.8 has a lens hood that costs $400. For this price, you would think they use pretty solid parts, but there is a block that a thumbscrew goes into that’s actually hard plastic. If your lens is in a backpack with the hood attached, this retaining screw will strip.

I have read many forum posts where people begrudgingly REPLACED THE ENTIRE HOOD because of this. People said they contacted Nikon directly and were told there are no replacement parts.

After contacting Nikon, I would like anyone googling for a solution to know that you can order the parts from Nikon; the parts and part numbers are pictured above. Contact the Nikon Parts Service at 310-414-5121.

posted by admin at 10:08 PM  

Tuesday, July 19, 2011

SIGGRAPH 2011

I am volunteering again in the Studio, giving three small talks at SIGGRAPH; drop me a line if you will be in Vancouver.

Rigging Characters for CryENGINE

How to rig, skin, and export a character for CryENGINE 3. Topics include physics setup, building characters from many skinned meshes, and creating Character Definitions and Character Parameter files. These rigging basics are applicable to most run-time game engines.

Introduction to Python Scripting

In this introduction to Python, a powerful scripting language used by many 3D applications, attendees learn the basics and explore small example scenarios gleaned from actual game and film productions. The sessions are taught in a way that should empower attendees to immediately begin creating time-saving python scripts and applications.

World Creation in CryENGINE

Have you ever wanted to make a videogame? This session shows how to build a small level in the freely available CryENGINE 3 SDK. Topics include: world building and tools (FlowGraph, CryENGINE’s visual scripting language, and Trackview, the camera sequencing and directing tools). In less than an hour, attendees create their own playable video games.

posted by admin at 9:32 AM  

Sunday, June 26, 2011

REVIEW: GoPro ‘3D Hero System’

GoPro Cameras

GoPro makes small $250 no-frills video cameras that record 1080p and come in waterproof polycarbonate housings rated to 60m depth. They have a 170º angle of view, a glass lens, fixed focus (2.5ft – ∞), an f/2.8 aperture, and 2.5-hour battery life. These cameras are ‘bare bones’; there is no way to even know if one is recording other than to look directly into the lens, no back-facing LCD or even a blinking LED!

Not Usable Under Water! – I did some test dives as soon as I received the housing and was in for a rude awakening: the glass domed ports blur the image underwater. This is because domed ports create a secondary focal point, or ‘virtual image’, underwater that must be focused on. It seems that GoPro did not take this into account; after contacting them directly I was told: "It is not possible at this time for the GoPro Hero to focus in an underwater environment." One funny thing to point out: the cool underwater videos on GoPro's own site are not shot with their own lens/housing!

SOLUTION: 3rd Party Lenses or Housing – That’s right, to use your GoPro Hero underwater you have to buy a replacement housing from a 3rd party with flat ports (crisp images underwater). At the time of my writing this, there were none available for the 3d Hero System, so I purchased replacement lenses from Pursuit Diving. These lenses are very soft polycarbonate, and you might want to carry some Novus 2 polish with you as they scratch easily [image]. Mine also had some small areas of blurriness: this is not an ideal solution. Eye of Mine has a complete 3d housing replacement in the works, and GoPro themselves say they are ‘working’ on a solution. Either way, be warned: These cameras are unable to produce decent images underwater!

Poor Dynamic Range / Image Quality – As you see below, bright highlights easily get blown out. They claim the 1/2.5″ CMOS sensor is great for low light (>1.4 V/lux-sec); that may be, but it is woefully bad at scenes that mix bright and dark.

Highlights are easily blown out, and create bad image artifacts

The H264 (12mbit) really butchers the image at times (PNG)

Rolling Shutter Artifacts (Wobble or Jelly) – Like most CMOS video cameras, the GoPro has some rolling shutter issues; I would say more than other CMOS cameras I have used. Unfortunately, for a camera that is meant to be strapped to moving objects –this is pretty bad! You have no control over the shutter speed, so unfortunately the less light, the more rolling shutter artifacts. Here’s an example looking out my window, but you can also see this in the Thistlegorm wreck footage below.

There is a great free solution for VirtualDub called DeShaker. For the HD Hero you should enter a rolling shutter amount of 82%.

Poor Battery Retention – On more than one occasion I left full batteries in the camera and did not turn it on for one or two days. I was often surprised to find the batteries low or half-drained. I have many other small Canon, Fuji, etc. cameras, and they have much better battery retention.

3D Hero System

Before, if you wanted to make a GoPro s3d rig, you had to put both cameras on a plate, then clank it to later sync the videos by audio waveform. Not only that, but the cameras dropped frames, so you had to time warp the footage to take into account drift: It was less than ideal.

In March (2011) GoPro released the ‘3D Hero System’, which is a new housing and a sync cable for two existing cameras. They also purchased CineForm and skinned its Studio software to make for a slightly less painful s3d workflow; though if you know what you're doing, their CineForm app can be pretty obnoxious/unintuitive.

Somewhat Buggy / Unreliable
I was surprised by the bugs I encountered. It's very frustrating to think you are recording something and later realize that the camera rig left you with unusable data. For instance, I thought the cameras were recording for an entire dive, but it turned out that they had somehow entered a state where they made 400+ individual one-second MP4 files. Other times one camera would turn off, or unsync and begin recording in a separate format (like one eye recording 1080p video, the other 5MP stills). Many times one battery would run out well before the other, in which case you at least have 2D video, but it's still annoying.

Sync Cable does not ‘Sync’ Cameras
The ‘Sync Cable’ does not really ‘sync’ the cameras; treating it as a sync cable will only lead to complete frustration. This can really cause some issues: you need to think of the cable as a ‘controller cable’, where one camera is the master and the other a slave, or you will end up with only one camera recording. Again, the functions of the cameras are not synced! The camera with the sync cable marked ‘R’ must start/stop the recording for both cameras to record. It is easy to place the cameras in the housing so that the ‘slave’ camera's shutter button is pressed; this does not work, so be careful!

Here is a schematic of the ‘sync cable’ for DIY people.

Sync’d Recording Not Perfect (Don’t think ‘Genlock’)
While better than clanging a metal bar and timewarping to the audio, the sync cable doesn’t really sync the sensors. The videos seem a full frame off, so maybe the CineForm software compensates for this.

As you can see, the right (master) camera is a frame or so ahead of the left

Camera Settings Like White Balance and Exposure NOT Sync’d
Many times I find myself with stereo video where each eye is widely different. Whether it’s exposure, white balance, etc.. it’s frustrating, and the included CineForm software doesn’t offer much of a solution.

CineForm Software is Slow, Can be Frustrating
An example of the frustration advanced users may have: "EXPORT TO MP4" is just a big button, with nothing about data rate or other export options, just a button. Unfortunately the UI has been dumbed down to the point of ambiguity and frustration. They should have gone all the way and just made an "UPLOAD TO YOUTUBE 3D" button, because the software is dumbed down to the point of not being useful to advanced users, while still not being easy enough for novices.

Fixed Interocular
The interocular of the housing is ~3.5cm which is a bit too close for my liking. Reducing the interocular to something smaller than the distance between your eyes causes the 3d effect to be weakened and things to appear larger than they really are. The interocular was decent for underwater, and I guess if you are filming yourself on a surfboard, but not great for driving through the Serengeti. The sync cable is of fixed length, so you cannot use it with other GoPro housings.

Unable to Use Other Attachments
Because the sync cable uses the expansion port, and the housing doesn't accommodate them, you cannot use the LCD BacPac or the larger battery with the 3D Hero System.

Final Thoughts and Some Footage!

Sure I pointed out a lot of issues that I had, but for the price, the GoPro system is pretty great. The housing, though cheap, never flooded (many 30m dives). This is the first footage I have posted, and I have not post-processed it much. I will maybe make another post once I figure out the best ways to automatically post-process the footage to remove artifacts and distortion.









posted by admin at 6:21 PM  

Monday, October 4, 2010

Writing Custom Perforce Plugins in Python

I recently wrote a custom tool to diff CryEngine layer files in P4, and was surprised how simple it was. What follows is a quick tutorial on adding custom python tools to Perforce.

Start by heading over to Tools>Manage Custom Tools… Then click ‘New’:

You can pass a lot of information to an external tool; here is a detailed rundown. As you see above, we pass the client spec (local) file name (%f) to a Python script. Let's create a new script called ‘custom_tool.py’:

import sys
from PyQt4 import QtGui    
 
class custom_tool(QtGui.QMessageBox):
	def __init__(self, parent=None):
		QtGui.QMessageBox.__init__(self)
		self.setDetailedText(str(sys.argv))
		self.show()
 
if __name__ == "__main__":
	app = QtGui.QApplication(sys.argv)
	theTool = custom_tool()
	theTool.show()
	sys.exit(app.exec_())

What this does is simply spit out sys.argv in a way you can see. So now you can feed any file you right-click in Perforce into a Python script:

If you would like to actually do something with a file or revision on the server and are passing the %F flag to get the depot file path, you then need to use p4 print to redirect the file contents (non-binary) to a local file:

p4.run_print('-q', '-o', localFile, depotFile)
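Putting that together, here is a rough sketch of what a print-then-diff flow could look like. The function, the temp-file naming, and the ‘p4merge’ diff tool are all hypothetical stand-ins, and the two revisions are hard-coded just for illustration:

import os
import subprocess
import sys
import tempfile
from P4 import P4

def diff_depot_revisions(depotFile, revA, revB, diffTool='p4merge'):
	'''Hypothetical example: p4 print two revisions of a depot file to temp
	files, then hand them to an external diff tool.'''
	p4 = P4()
	p4.connect()
	localFiles = []
	for rev in (revA, revB):
		local = os.path.join(tempfile.gettempdir(),
			'%s#%s' % (os.path.basename(depotFile), rev))
		p4.run_print('-q', '-o', local, '%s#%s' % (depotFile, rev))
		localFiles.append(local)
	p4.disconnect()
	subprocess.call([diffTool] + localFiles)

if __name__ == '__main__':
	#the %F flag in the custom tool definition passes the depot path in as an argument
	diff_depot_revisions(sys.argv[1], 1, 2)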
posted by admin at 1:09 AM  

Saturday, September 25, 2010

Perforce Triggers in Python (Pt 2)

Last time I introduced you to the idea of triggers; here's a more complex example. This worked on my db, but if you have branching you would need to check each returned file against the branch you are submitting to.

Check if The File is Already in the Depot

This is a trigger that checks the hash digest of the incoming file against the digests of files already on the server. This way you can see if the user is checking in a file that already exists.

import sys
from P4 import P4, P4Exception
 
p4 = P4()
describe = []
try:
	p4.user = "admin"
	p4.password = "admin"
	p4.port = "1666"
	p4.connect()
	lst = sys.argv[2]
	stat =  p4.run('fstat', ('-Ol','//depot/...@'+ str(lst)))
	hash = stat[0]['digest']
	fname = stat[0]['depotFile']
	m =  p4.run('fstat', ('-Ol','-F', ('digest = ' + str(hash)),'//depot/...'))
	existing = []
	for file in m:
		if file['depotFile'] != fname: existing.append(file)
 
	if existing != []:
		print '\n\nFILE EXISTS IN DEPOT!!'
		print  'YOUR FILE:  ' + (fname.split('/')[-1])
		for  file in existing: print 'EXACTLY MATCHES:  ' + file['depotFile']
		print 'P4 DIGEST:  ' + hash
		print 'SOLUTION: Contact your lead if you believe this message was generated in error.'
		sys.exit(1)
 
except Exception, e:
	print "Error: %s" % (e)
	sys.exit(1)
 
p4.disconnect()
sys.exit(0)

Then your trigger line looks like this:

Triggers:
	dupeCheck change-submit //depot/... "python X:/projects/2010/p4/dupe_trigger.py %user% %changelist%"

This is what the user will see when they try to check in:

posted by admin at 7:39 PM  

Thursday, August 26, 2010

Perforce Triggers in Python (Pt 1)

Perforce is a wily beast. A lot of companies use it, but I feel few outside of the IT department really have to deal with it much. As I work myself deeper and deeper into the damp hole that is asset validation, I have really been writing a lot of python to deal with certain issues; but always scripts that work from the outside.

Perforce has a system that allows you to write scripts that are run, server side, when any number of events are triggered. You can use many scripting languages, but I will only touch on Python.

Test Environment

To follow along here, you should set up a test environment. Perforce is freely downloadable, and free to use with 2 users. Of course you are going to need python, and p4python. So get your server running and add two users, a user and an administrator.

Your First Trigger

Let's create the simplest Python script. It will be a submit trigger that says ‘Hello World’ then passes or fails. If it passes, the item will be checked in to Perforce; if it fails, it will not. Exiting with a return code of ‘1’ is considered a fail, ‘0’ a pass.

import sys

print 'Hello World!'
print 'No checkin for you!'
sys.exit(1)

OK, so save this file as hello_trigger.py. Now go to a command line and enter ‘p4 triggers’. This will open a text document; edit that document to point to your trigger, like so (but point to the location of your script on disk):

Triggers:
	hello_trigger change-submit //depot/... "python X:/projects/2010/p4/hello_trigger.py"

Close/save the trigger TMP file; you should see ‘Triggers saved.’ echoed at the prompt. Now, when we try to submit a file to the depot, we will get this:

So: awesome, you just DENIED your first check-in!

Connecting to Perforce from Inside a Trigger

So we are now denying check-ins, but let’s try to do some other things, let’s connect to perforce from inside a trigger.

import sys
from P4 import P4, P4Exception
 
p4 = P4()
 
try:
	#use whatever your admin l/p was
	#this isn't the safest, but it works at this beginner level
	p4.user = "admin"
	p4.password = "admin"
	p4.port = "1666"
	p4.connect()
	info = p4.run("info")
	print info
	sys.exit(1)
 
#this will return any errors
except P4Exception:
	for e in p4.errors: print e
	sys.exit(1)

So now when you try to submit a file to depot you will get this:

Passing Info to the Trigger

Now we are running triggers and accepting or denying check-ins, but we really don't know much about them. Let's try to get enough info to make a decision about whether or not we want the file to pass validation. Let's make another Python trigger, test_trigger.py, and query something from the Perforce server in the submit trigger. To do this we need to edit our trigger file like so:

Triggers:
	test change-submit //depot/... "python X:/projects/2010/p4/test_trigger.py %user% %changelist%"

This will pass the user and changelist number into the Python script as args, the same way dragging/dropping passed args to Python in my previous example. So let's set that up: save the script from before as ‘test_trigger.py’, as shown above, and add the following:

import sys
from P4 import P4, P4Exception
 
p4 = P4()
describe = []
 
try:
	p4.user = "admin"
	p4.password = "admin"
	p4.port = "1666"
	p4.connect()
 
except P4Exception:
	for e in p4.errors: print e
	sys.exit(1)
 
print str(sys.argv)
describe = p4.run('describe',sys.argv[2])
print str(describe)
 
p4.disconnect()
sys.exit(1)

So, as you can see, it has returned the user and changelist number:

However, for this changelist to be useful, we query p4, asking the server to describe the changelist. This returns a lot of information about the changelist.

Where to Go From here

The few simple things shown here really give you the tools to do many more things. Here are some examples of triggers that can be created with the know-how above (the first one is sketched after the list):

  • Deny check-ins of a certain filetype (like deny compiled source files/assets)
  • Deny check-ins whose hash digest matches an existing file on the server
  • Deny/allow a certain type of file check-in from a user in a certain group
  • Email a lead any time a file in a certain folder is updated
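As a sketch of that first idea (denying check-ins of certain filetypes), reusing the describe call from above. This is hypothetical: the extension list is made up, and the exact fields in the tagged describe output may vary with your server version, so test it against your own depot:

import sys
from P4 import P4, P4Exception

DENIED_EXTENSIONS = ('.pyc', '.obj', '.pdb')   #made-up list of 'compiled' filetypes

p4 = P4()
try:
	p4.user = "admin"
	p4.password = "admin"
	p4.port = "1666"
	p4.connect()
	describe = p4.run('describe', sys.argv[2])
	#tagged describe output stores the changelist's files under 'depotFile'
	for depotFile in describe[0].get('depotFile', []):
		if depotFile.lower().endswith(DENIED_EXTENSIONS):
			print 'DENIED: %s -- please do not submit compiled files.' % depotFile
			p4.disconnect()
			sys.exit(1)
	p4.disconnect()
	sys.exit(0)
except P4Exception:
	for e in p4.errors: print e
	sys.exit(1)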

Did you find this helpful? What creative triggers have you written?

posted by admin at 12:33 AM  

Sunday, August 8, 2010

Sigma 8mm vs 4.5mm Comparison on Nikon APS-C

click to enlarge

I have been researching the best options available for the D300 when it comes to quickly generating some lightprobes/panoramas. This of course means fisheye lenses. Currently, Sigma is the only company that makes a 180 degree circular fisheye. They come in two flavors, 8mm, and 4.5mm. The 8mm projects a full circle onto a full 35mm sensor (full frame), but on an APS-C sensor it is cropped. The 4.5mm however, throws a perfect circular image onto an APS-C sized sensor; I believe it is the only lens that does this.

The Pixels

You would think that the 4.5mm would be the way to go; I did, until I took a look at both. It really comes down to the pixels. The image circle thrown by the 4.5mm lens is roughly 2285px in diameter. So while you can shoot fewer shots, an entire panorama taking about 3, it will come out as a <4k equirectangular. Using the 8mm, you need 4 shots plus one zenith (5 shots total), but it generates an 8k image. While the 4.5mm does generate a 180-degree image across, as you can see it is very wasteful.

So why doesn’t the lens have full coverage in at least the short dimension? I think it’s because it’s a lens designed to fit Canon and Sigma cameras, not just Nikon. Canon sensors have a 1.6 crop factor and Sigma’s Foveon X3 has a 1.7 crop factor (13.8mm)! The coverage looks so small because the Nikon DX format has a 1.5 crop factor; its APS-C sensor is larger than Canon’s or Sigma’s. The actual circle measures 12.3mm, which is small even for the Sigma, and makes me believe they future-proofed it for Four Thirds.

For an APS-C sensor like the D300’s, I would recommend the 8mm, unless you really need a full uncropped image. The 4.5mm, while being more expensive, also has a wider aperture of f/2.8, compared to the 8mm’s f/3.5.

I am not super constrained on time, but if you are on set shooting bracketed probes between takes, the 4.5mm will save you two shots (18 pictures) and this might be preferable. That said, it will only generate a 4k image in the end (which might be enough).

posted by admin at 2:56 PM  

Monday, June 28, 2010

Python: Simple Decorator Example

In Python, a decorator is a wrapper that allows you to inject or modify code in functions or classes without changing the code they decorate. I was turned onto this by my friend Matt Chapman at ILM, but never fully grasped the importance.

class myDecorator(object):
	def __init__(self, f):
		self.f = f
	def __call__(self):
		print "Entering", self.f.__name__
		self.f()
		print "Exited", self.f.__name__
 
@myDecorator
def aFunction():
	print "aFunction running"
 
aFunction()

When you run the code above you will see the following:

>>Entering aFunction
>>aFunction running
>>Exited aFunction

So when we call the decorated function, we get completely different behavior. You can wrap any existing function; here is an example of wrapping functions for error reporting:

class catchAll:
	def __init__(self, function):
		self.function = function
 
	def __call__(self, *args):
		try:
			return self.function(*args)
		except Exception, e:
			print "Error: %s" % (e)
 
@catchAll
def unsafe(x):
  return 1 / x
 
print "unsafe(1): ", unsafe(1)
print "unsafe(0): ", unsafe(0)

So when we run this and divide by zero we get:

unsafe(1):  1
unsafe(0):  Error: integer division or modulo by zero

Using decorators you can make sweeping changes to existing code with minimal effort. With something like the error-reporting wrapper above, you could go back and just sprinkle these into older code.
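
As a side note (my own addition, not from the original example), decorators can also be plain functions; using functools.wraps keeps the wrapped function’s name and docstring intact and lets arguments and return values pass through:

import functools
 
def logCalls(f):
	#functools.wraps preserves the original __name__ and docstring
	@functools.wraps(f)
	def wrapper(*args, **kwargs):
		print "Entering", f.__name__
		result = f(*args, **kwargs)
		print "Exited", f.__name__
		return result
	return wrapper
 
@logCalls
def add(a, b):
	return a + b
 
print add(1, 2)
#>>Entering add
#>>Exited add
#>>3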

posted by admin at 9:06 AM  

Saturday, June 26, 2010

Python: Special Class Methods

I have really been trying to learn some Python fundamentals lately, reading some books and taking an online class. So: wow. I can’t believe that I have written so many tools, some used by really competent people at large companies, without really understanding polymorphism and other basic Python concepts.

Here’s an example of my sequence function from before, reworked as a class using special class methods:

http://docs.python.org/reference/datamodel.html#specialnames
import os, re, glob
 
class imSequence:
	def __init__(self, file):
		dir = os.path.dirname(file)
		file = os.path.basename(file)
		segNum = re.findall(r'\d+', file)[-1]
		self.numPad = len(segNum)
		self.baseName = file.split(segNum)[0]
		self.fileType = file.split('.')[-1]
		globString = self.baseName
		for i in range(0,self.numPad): globString += '?'
		self.images = glob.glob(dir+'\\'+globString+file.split(segNum)[1])
 
	def __len__(self):
		return len(self.images)
 
	def __iter__(self):
		return iter(self.images)

Here’s an example of use:

seq = imSequence('seq\\test_00087.tga')
print len(seq)
>>94
print 'BaseName: %s  FileType: %s  Padding: %s' % (seq.baseName, seq.fileType, seq.numPad)
>>BaseName: test_  FileType: tga  Padding: 5
for image in seq: print image
>>seq\test_00000.tga
>>seq\test_00001.tga
>>seq\test_00002.tga
...
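
As a small extension of my own (not part of the original example), adding a couple more special methods to the class above would also let you index/slice the sequence and get a readable print-out:

	def __getitem__(self, index):
		#lets you grab frames by index or slice: seq[0], seq[-1], seq[10:20]
		return self.images[index]
 
	def __repr__(self):
		#nicer output when you print the object itself
		return '<imSequence: %s (%s frames)>' % (self.baseName, len(self))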

[More info and examples: Dive Into Python: Special Class Methods]

posted by admin at 10:16 PM  

Monday, May 3, 2010

Nikon D300 Stereo Rig [$30 DIY]

This is what the final product will look like: two D300s mounted as close as possible, with sync’d metering, focus, flash, and shutter. Rig cost: less than 30 dollars! Of course you are going to need two D300s and paired lenses, primes or zooms with a wide rubber band spanning them if you are really hardcore. Keep in mind the interocular is 13.5cm; this is a tad more than double the normal human eye spacing, but it’s the best we can do with the D300 [horizontal].

Creating the Camera Bar

This mainly involves going to the local hardware store with your D300 and looking at the different L-brackets available. It’s really critical that you get the cameras as close as possible, so mounting one upside down is preferable. It may look weird, but heck, it already looks weird; might as well go all the way.

I usually get an extra part for the buttons, because they will need to be somewhere that you can easily reach.

Creating the Cabling

Nikon cables with Nikon 10-pin connectors aren’t cheap! The MC-22, MC-23, MC-25, and MC-30 are all over 60 dollars! I bought remote shutter cables at DealExtreme.com. I wanted to make my own switch, be able to use my GPS, and change the interocular, so the below describes that setup. If you just want to sync two identical cams, the fastest way is to buy a knock-off MC-23, which is the JJ MA-23 or JJ MR-23. I bought two JueYing RS-N1 remote shutters and cut them up. [$6 each]

I only labeled the pins most people would be interested in; for a more in-depth pin-out that covers more than AE/AF and shutter (GPS, etc.), have a look here. I decided to use molex connectors from RC cars; they make some good ones that are sealed/water-resistant and not too expensive.

The cables have a pretty short lead. This is so that I can connect them singly or doubled, giving an interocular as wide or as narrow as any cable I make. The next thing is to wire these to a set of AF/AE and shutter buttons.

Black focuses/meters and red is the shutter release. It’s not easy to find buttons that have two press states (half press and full press). As you can see above, the shutter release is the combination of the AF/AE, ground, and shutter pins. This is before the heat shrink is set in place.

Altogether

So that should be it. Here’s my first photo with sync’d metering, focus, flash, and shutter. They can even do bursts at high speed. Next post I will try to look into the software side, and take a look at lens distortion, vignetting, and other issues.

posted by admin at 12:27 AM  

Monday, April 19, 2010

Dealing with File Sequences in Python

I have been parsing through other people’s files a lot lately, and finally took the time to make a little function to give me general information about a sequence of files. It uses a regex to yank the numeric part out of a filename and figure out the padding, and glob to tell you how many files are in the sequence. Here’s the code and an example usage:

import os, re, glob
 
#returns [base name, padding, filetype, number of files, first file, last file]
def getSeqInfo(file):
	dir = os.path.dirname(file)
	file = os.path.basename(file)
	segNum = re.findall(r'\d+', file)[-1]
	numPad = len(segNum)
	baseName = file.split(segNum)[0]
	fileType = file.split('.')[-1]
	globString = baseName
	for i in range(0,numPad): globString += '?'
	theGlob = glob.glob(dir+'\\'+globString+file.split(segNum)[1])
	numFrames = len(theGlob)
	firstFrame = theGlob[0]
	lastFrame = theGlob[-1]
	return [baseName, numPad, fileType, numFrames, firstFrame, lastFrame]

Here is an example of usage:

print getSeqInfo('E:\\data\\data\\Games\\Project\\CaptureOutput\\Frame000547.jpg')
>>['Frame', 6, 'jpg', 994, 'E:\\data\\data\\Games\\Project\\CaptureOutput\\Frame000000.jpg', 'E:\\data\\data\\Games\\Project\\CaptureOutput\\Frame000993.jpg']

I know this is pretty simple, but I looked around a bit online and didn’t see anything readily available showing how to deal with different numbered file sets. I have needed something like this for a while that will work with anything from OBJs sent from external contractors, to images from After Effects…
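
As a quick example of putting that return value to use (a sketch of mine, not from the post), you could flag sequences with missing frames by comparing the file count to the numeric range spanned by the first and last files:

import re
 
def hasMissingFrames(seqFile):
	#uses getSeqInfo() from above
	baseName, numPad, fileType, numFrames, firstFrame, lastFrame = getSeqInfo(seqFile)
	firstNum = int(re.findall(r'\d+', firstFrame)[-1])
	lastNum = int(re.findall(r'\d+', lastFrame)[-1])
	#if the numeric span is larger than the file count, frames are missing
	return (lastNum - firstNum + 1) != numFrames
 
print hasMissingFrames('E:\\data\\data\\Games\\Project\\CaptureOutput\\Frame000547.jpg')
>>False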

posted by admin at 6:49 PM  

Monday, April 12, 2010

Drop Files on a Python Script

So I have always wondered how you can create something almost like a ‘droplet’, to steal the Photoshop lingo, from a Python script. A while ago I came across some sites showing how to edit the shellex keys in regedit to allow files to be dropped on any Python script and fed to it as args (Windows only).

It’s really simple: you grab this reg file [py_drag_n_drop.reg] and install it.

Now when you drop files onto a Python script, their filenames will be passed as args. Here’s a simple script to test it:

import sys
 
f = open('c:\\tmp.txt', 'w')
for arg in sys.argv:
    f.write(arg + '\n')
f.close()

When you save this and drop files onto its icon, it will create tmp.txt, which will look like this:

X:\projects\2010\python\drag_and_drop\drag_n_drop.py
X:\photos\2010.04 - easter weekend\fuji\DSCF9048.MPO
X:\photos\2010.04 - easter weekend\fuji\DSCF9049.MPO
X:\photos\2010.04 - easter weekend\fuji\DSCF9050.MPO
X:\photos\2010.04 - easter weekend\fuji\DSCF9051.MPO
X:\photos\2010.04 - easter weekend\fuji\DSCF9052.MPO

The script itself is the first arg, followed by all the dropped files. This way you can easily create scripts that accept drops to do things like convert files, upload files, etc.
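
For example, here’s a hypothetical droplet of my own (not from the post) that copies whatever you drop onto it into a backup folder; the 'c:\backup' path is just a placeholder:

import os
import shutil
import sys
 
backupDir = 'c:\\backup'
if not os.path.exists(backupDir):
	os.makedirs(backupDir)
 
#sys.argv[0] is the script itself, the dropped files start at index 1
for f in sys.argv[1:]:
	shutil.copy(f, backupDir)
	print 'copied', f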

posted by admin at 12:33 AM  