Render Time


Summer is coming to a close, and with the arrival of fall, the museum will be filled with school children as much as world travelers. We settle in for winter, and how we think of time and audiences shifts significantly. We stack our project wish-lists and wonder: what can we achieve before November? What can we achieve before the new year? The task list that comprises any one project is extensive and often daunting. The UAMN Exhibits and Digital Media Production team is currently working on two major film pieces and half a dozen minor ones, four exhibits, and many smaller remediation and improvement projects in the museum’s galleries, public spaces, and behind the scenes.

Arctic Currents is scheduled for release in late spring 2014. While we continue to model, animate, and program shots, edit vocalized drafts of the script, and refine ideas for shots still on the concept list, we are running completed shots out to the rendering machines.

There are two kinds of Time in animation: personnel time, which is a well-described, sometimes civil animal often handled best with a chair in one hand and key-lime pie in the other, and render time, which is both calculated and fickle.

An animated shot is ready for rendering once the models are molded and positioned, the cameras and lights are placed and tuned, the effects such as fog and depth of field and motion blur are set, simple objects are replaced with complex ones, instances are ramped up, the pre-drawn imagery is imported and mapped, the scripts are written and loaded, and the quality levels are compromised. Then a computer stuffed with RAM, more than a little bit of power, and a lot of time to spare is told to draw pictures as fast as it can.


We’ve put a lot of work into making computers very fast, and so the drawing happens at a frightening speed. UAMN Production runs i7 machines sporting 16 or 32GB of RAM, 8 cores, and a few terabytes of hard-drive space each. Not too shabby.

This image of our twin otter in flight at a finished pixel resolution of 2160×1080, better than so-called “Full HD”, takes one machine 3 seconds to draw from scratch, not including the clouds and the airplane detailing, which, while also drawn from scratch, were pre-rendered and imported into the shot to save time.


Not bad, but note the image is a little rough around the edges with aliasing, also known as the jaggies. The single rendering pass takes the model literally and paints the pixels crudely. This also results in the body speckling. The computer isn’t paying enough attention to quality. No matter. This is easy to fix. We’ll have the machine run 5 passes instead of one. Because the computer is programmed to do this efficiently, it doesn’t take 5 times as long, but understands which parts of the image require refining. Now the draw time is 10 seconds, and all the edges are smooth and polished to an acceptable degree.
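The idea behind those extra passes can be sketched in a few lines. This is a toy illustration of supersampling, not our render engine’s actual adaptive algorithm: instead of asking one sample point whether it falls inside a shape, we average a small grid of samples and get a fractional coverage value, which is what smooths the jaggies.

```python
# Toy supersampling sketch (illustrative only, not the production renderer):
# estimate how much of a pixel an ideal hard edge covers by averaging
# several sample points instead of trusting a single one.

def pixel_value(x_edge, px, grid=1):
    """Average a grid x grid pattern of samples inside the unit pixel
    starting at x = px. A sample is 'inside' the shape if its x < x_edge."""
    hits = 0
    for i in range(grid):
        for j in range(grid):
            sx = px + (i + 0.5) / grid
            if sx < x_edge:
                hits += 1
    return hits / (grid * grid)

# An edge at x = 10.4 cuts through the pixel spanning x in [10, 11).
one_pass = pixel_value(10.4, 10, grid=1)    # 0.0 or 1.0 only: the jaggies
five_pass = pixel_value(10.4, 10, grid=5)   # fractional coverage: smooth edge
```

With one sample the pixel is all-or-nothing; with a 5×5 grid it lands at a believable in-between shade, at the cost of more samples per pixel.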


Funny thing about surfaces in the real world. Every single one of them is in some part reflective, and this reflectivity increases as the viewing angle becomes more grazing. It is greater for glossy objects and less for matte surfaces. Our twin otter has several coats of gloss paint, and it would be nice to see it reflective, so we’re compelled to add this to the render engine; not to mention, for the first time we might see the ocean over which the plane is flying, reflected in parts of the surface. This scene chews up 4.5GB of RAM while rendering, and much of that is owed to the ocean below. The updated render takes 24 seconds to draw.
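That grow-with-grazing-angle behavior is the Fresnel effect, and a common shorthand for it in render engines is Schlick’s approximation. The post doesn’t say which model our engine uses, so treat this as a general sketch; the `0.04` base reflectance is a typical value for gloss paint, not a measured one.

```python
import math

def schlick(cos_theta, r0):
    """Schlick's approximation of Fresnel reflectance.
    cos_theta: cosine of the angle between the view ray and surface normal.
    r0: reflectance when looking at the surface head-on."""
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# Gloss paint reflects only a few percent head-on...
head_on = schlick(1.0, 0.04)
# ...but most of the incoming light at a near-grazing angle.
grazing = schlick(math.cos(math.radians(85)), 0.04)
```

This is why the ocean shows up mostly on surfaces we see edge-on, like the far wing: those are the angles where the paint turns into a mirror.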


The ocean is barely visible in the reflection in this image (visible mostly in the far wing), but it becomes more apparent throughout the shot. The reflection of the detailing is clearly visible on the rear stabilizer.

Another less apparent effect in this shot, but important through much of the film, is limited depth of field (DOF). We all know the difficulties with getting photographs properly in focus, what with low light and camera shake. Computers have the opposite problem. Every pixel is in perfect focus unless we do something about it, and we want to do something about it. Depth of field gives our minds subtle cues about the size of objects and the distances between objects in 3D space. Without DOF in cinematography, we could not have lived without 3D glasses for so long. So that we do not reintroduce problems with aliasing when we start computing depth of field, we have to increase the number of anti-aliasing passes again, now to 9 passes or so (or so, because we actually specify a range of passes which the computer adaptively selects based on circumstances). The draw time is now a whopping 35 seconds for the image.
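One standard way to quantify how out-of-focus something should look is the thin-lens circle of confusion: the diameter of the blurry disc a point of light smears into. The lens numbers below (50 mm, f/2.8, focus at 10 m) are made up for illustration; nothing in the shot implies a particular virtual lens.

```python
# Thin-lens circle-of-confusion sketch (hypothetical lens values, not the
# actual virtual camera in the shot).

def coc_diameter(f, n, focus_dist, subject_dist):
    """Blur-disc diameter (same units as f) for a thin lens.
    f: focal length, n: f-number; distances are measured from the lens
    and must exceed f."""
    aperture = f / n
    return aperture * abs(subject_dist - focus_dist) / subject_dist \
                    * f / (focus_dist - f)

# Focused at 10 m with a 50 mm f/2.8 lens (all lengths in mm):
in_focus = coc_diameter(50, 2.8, 10_000, 10_000)     # zero: perfectly sharp
background = coc_diameter(50, 2.8, 10_000, 30_000)   # a visible blur disc
```

A subject sitting exactly at the focus distance renders as a point; anything nearer or farther smears across neighboring pixels, which is exactly why the anti-aliasing passes have to climb alongside the DOF.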


But now things begin to get serious. A photographer with a high speed camera might very well capture this image of the twin otter cruising along at 100 knots with its props spinning at some 2000 rpm. That’s a fast shutter to capture the props so clearly, freezing time so perfectly. Too perfectly. A video camera would never give us such a crisp image; we need to add motion blur to the shot. This will give a better feel that the plane and the camera are in motion and that the props are indeed spinning. Most of the shots in AC can get away with 3-5 motion blur passes, but for the props, we need something upwards of 11 to get the render to properly blur them in all their high-speed glory. The draw time is now 3 minutes, but these props are worth it. Hannah put that detailing into the props. Thanks Hannah.
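Some quick arithmetic shows why the props are so much hungrier than the rest of the film. Assume a conventional 180-degree shutter at 24 fps, i.e. a 1/48-second exposure; the post doesn’t state the actual virtual shutter settings, so this is an estimate of scale, not the production numbers.

```python
# Why the props need so many motion-blur passes: how far does a blade
# travel while the virtual shutter is open? (Assumes a 180-degree shutter
# at 24 fps; the real shutter settings are not stated in the post.)

rpm = 2000
rev_per_sec = rpm / 60.0        # about 33.3 revolutions per second
exposure = 1.0 / 48.0           # seconds the shutter stays open
sweep_deg = rev_per_sec * exposure * 360.0
```

Each blade sweeps roughly 250 degrees, most of a full revolution, during a single exposure. Average only 3-5 snapshots across that arc and you get distinct ghost blades; it takes something like 11 samples before they merge into the smooth disc a video camera would record.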


And we’re not done yet. Basic animated scenes in the computer employ cameras and lights, but in the real world, what is a light really? The sun? A bulb? An LED? The great blue dome that we call the sky? If you don’t believe the sky is a light, look at your shadow on a clear summer’s day. What color is it? But the real world is even more complicated than that. In the real world, EVERYTHING is a light. The best example is perhaps right at your desk. Find a colored Post-it. Hold it close to a sheet of white paper. You just made a light, and it’s illuminating the paper, not very strongly and very locally. Again, the computer doesn’t think that parts of the twin otter are light reflectors unless we tell it to do so, and it’s a very complicated process, requiring all sorts of programming cheats to make the effect remotely doable before the turn of the next decade. It is usually referred to as global illumination (GI), and it is basically the principle that any object in the scene acts as a weak reflector or light source that bounces and recolors incoming light rays. In some shots it is critical. In this shot, it is merely highly valuable, brightening the twin otter against the background clouds. It makes sure no shadows are completely black and gives objects better definition in the creases where less light penetrates.
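The Post-it trick reduces to a very small computation. Real GI integrates huge numbers of rays over many bounces, so this is only the single-bounce kernel of the idea; the RGB albedo values are hypothetical.

```python
# One-bounce "everything is a light" sketch. Real global illumination
# integrates many rays over many bounces; this shows only the core step:
# light reflected off a surface is the incoming light filtered by that
# surface's color (albedo).

def bounce(light_color, surface_albedo):
    """Return the light re-emitted by a surface, channel by channel."""
    return tuple(l * a for l, a in zip(light_color, surface_albedo))

white_light = (1.0, 1.0, 1.0)
pink_postit = (1.0, 0.4, 0.6)   # hypothetical RGB albedo of the Post-it

# The Post-it becomes a weak pink light source for the paper beside it:
postit_glow = bounce(white_light, pink_postit)
```

Do this for every surface, toward every other surface, bounce after bounce, and you see why GI is the step that makes the CPU sweat.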

Hello there. Now the image takes 7 minutes to render.


And that’s all we’re going to force the rendering computer to do on this shot. There are ways to optimize time on the render, but that’s a trade-off between render time and personnel time. We could ramp up the quality of the anti-aliasing or the motion blur passes to very high values, ray-trace more accurate shadows, and certainly give the CPU a headache calculating a more perfect global illumination solution, but this will do for us, because this is 7 minutes for a single image/frame. We’re making a movie. The movie plays back at 24 frames per second, so a 5 second shot of the twin otter (120 frames) now takes our rendering machine almost 14 hours.

Not bad. These computers are fast, even considering everything we’re throwing at them. A job like this could be given to any of our machines to do overnight. Arctic Currents is more than 20 minutes long though, so we had better have started a month ago. If all the shots were this complicated to draw, and we only had the one machine, it would need 140 days (4.6 months) to draw the film.
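The arithmetic behind those figures, using only the numbers from the paragraphs above:

```python
# Render-budget arithmetic, straight from the numbers in the text.

FPS = 24
minutes_per_frame = 7

# A 5-second shot of the twin otter:
shot_frames = FPS * 5                                 # 120 frames
shot_hours = shot_frames * minutes_per_frame / 60     # 14 hours

# A 20-minute film at the same cost per frame, on one machine:
film_frames = 20 * 60 * FPS                           # 28,800 frames
film_days = film_frames * minutes_per_frame / 60 / 24 # 140 days
```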

Many shots are a lot more complicated. Exhibit this example…

We have a basic shot of Baelin, our senior whale, cruising under the ice. There are bits in the water. There is a near-field “fog” caused by the severity with which water filters out light from the sun and sky, and there is water and ice and a cool looking whale with the custom scarring of 100 years of life spent under the ice.

Render time: 2:20


We add the anti-aliasing passes.

Render time: 3:47


We add the ray-traced reflections and refractions crucial to rendering the caustic effects of ice and water.

Render time: 8:34


We add the limited depth of field necessary to establish depth and take the close “bits in the water” out of focus so they add to the atmosphere without distracting from the whale.

Render time: 8:34 (Unchanged because our AA is already high enough to compensate)


The whale and the camera are moving at a brisk pace compared to the ice and the “bits in the water.” We add motion blur, but only 3 passes this time as the camera movement is much slower than the twin otter propellers.

Render time: 24:34


And we’re good… except for the little issue of our water object (pre-simulated over about 6 hours) not knowing that it’s supposed to stop where the ice begins and ends. All the ice appears as if it is floating above the water. To solve this without needing to simulate the water and ice together (witness that expensive beast known as personnel time), we render the scene three times: once as we have done, once without the water, and once as a series of matte images that will tell our compositor where to draw the water in the final composited (combined) image.
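The matte trick itself is simple once the layers exist. The post doesn’t name the compositing package or its channel conventions, so this is a bare per-pixel sketch with hypothetical color values: the matte says, for each pixel, how much of the water layer should show through.

```python
# Matte-based compositing sketch (the actual compositing software and
# channel handling are not specified in the post).

def composite(water_px, waterless_px, matte):
    """Blend one RGB pixel: matte = 1.0 means pure water layer,
    matte = 0.0 means the waterless (ice/whale) layer."""
    return tuple(w * matte + d * (1.0 - matte)
                 for w, d in zip(water_px, waterless_px))

water = (0.1, 0.3, 0.5)       # hypothetical deep-blue water pixel
waterless = (0.8, 0.9, 1.0)   # hypothetical ice pixel

# Where the matte is 0, the ice wins; where it's 1, the water does, so
# the ice no longer floats above a water surface that ignores it.
edge_pixel = composite(water, waterless, 0.5)
```

Rendering the matte as its own cheap layer is what lets us skip re-simulating six hours of water against the ice.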

Render Time: 24:34 (for full image)

Render Time: 17:51 (for waterless image)

Render Time: 1:49    (for matte layer)

Total: approximately 44 minutes.


Mix and combine. Add salt and pepper to taste.


A five second sequence would take 89 hours to render. Sounds like a job for the Labor Day weekend, but since we want at least 15 seconds of this shot, better budget the better part of a week (not all of the shot reveals the ice and needs the extra layers). Regardless, this is why we have 3 machines that can render frames 24/7 and up to 3 more staff desktops that can be leveraged for nights, weekends, and holidays. Spring will be here before they know it.
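Checking the tally, using the mm:ss figures listed above:

```python
# Totalling the layered whale render (times from the lists above, mm:ss).

def to_min(mm, ss):
    """Convert a mm:ss render time to minutes."""
    return mm + ss / 60.0

full = to_min(24, 34)        # full image
waterless = to_min(17, 51)   # waterless image
matte = to_min(1, 49)        # matte layer

per_frame = full + waterless + matte          # about 44 minutes per frame
sequence_hours = 24 * 5 * per_frame / 60      # a 5-second shot: ~89 hours
```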

– Roger Topp (UAMN Head of Exhibits and Digital Media Production)
