Sketch: Effects Omelette
I've been to the equivalent of this sketch for several years; it's a round-up
of how several prominent productions created their effects. The most interesting
sketch was the Weta Digital one, so that's where most of the notes are
concentrated, with only a few comments on the work by ILM on T3 and Digital
Domain on Star Trek: Nemesis.
Weta Digital: Foliage for Two Towers
In order to do the tremendous amount of modeled and animated foliage (especially
on the Ents) that they anticipated for the second movie, they developed Grove,
a procedural modeling system implemented in Maya.
They considered L-systems but ultimately rejected them for control reasons. Although
L-systems had the right properties in terms of definition and ability to procedurally
expand, they were too hard to direct and control to meet the director's and
art director's needs.
Instead, they had a procedurally defined system with morphological controls for
the length between branches, the number of branches, leaf clustering, and so on.
Each of those was driven by random variables; the seed used to generate the random
variables could be set as a property. Thus, for a particular instance of Grove,
by setting all the control parameters for the procedural generation plus setting
the seed, they got a repeatable generation of the specific foliage, even though
they were only storing a handful of numbers.
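To make that repeatability idea concrete, here's a minimal Python sketch of the
scheme as I understood it -- the parameter names and structure are my own
invention, not Weta's actual Maya implementation:

```python
import random
from dataclasses import dataclass

@dataclass
class GroveParams:
    # Hypothetical morphological controls; the real system had many more.
    seed: int = 42
    branch_length: float = 1.0    # mean distance between branch points
    branches_per_node: int = 3    # child branches spawned at each branch point
    depth: int = 4                # levels of recursive branching
    leaf_cluster_size: int = 5    # leaves clustered at each terminal branch

def grow(params: GroveParams):
    """Deterministically expand a branch hierarchy from a few stored controls.

    All randomness flows from the stored seed, so calling grow() again with the
    same handful of numbers reproduces the identical foliage.
    """
    rng = random.Random(params.seed)

    def grow_branch(level):
        node = {"length": params.branch_length * rng.uniform(0.8, 1.2),
                "children": [], "leaves": 0}
        if level >= params.depth:
            node["leaves"] = params.leaf_cluster_size
            return node
        for _ in range(params.branches_per_node):
            node["children"].append(grow_branch(level + 1))
        return node

    return grow_branch(0)

# Same handful of numbers in, same tree out.
p = GroveParams(seed=7)
assert grow(p) == grow(p)
```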
The Ents themselves were hand-modeled subdivision surfaces. Grove then grew
foliage on the ends of the hand-modeled branches.
Various LODs were supported through varying the amount of branching detail generated
by the system.
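Building on the sketch above, the LOD idea might look something like this (again
my own toy version, not theirs): cap the branching depth based on a requested
detail level, so a distant tree expands to far fewer primitives.

```python
from dataclasses import replace

def lod_params(base: GroveParams, detail: float) -> GroveParams:
    """Scale the branching depth by a 0..1 detail level (a stand-in for
    screen size or camera distance)."""
    return replace(base, depth=max(1, round(base.depth * detail)))

far_tree = grow(lod_params(p, 0.25))   # shallow hierarchy for distant views
hero_tree = grow(lod_params(p, 1.0))   # full branching detail up close
```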
Grove supported export of proxy geometry in .obj format. This was really useful
in creating crowd scenes for the battle at Isengard -- the animators had proxy
geometry to work with when animating the crowds of Ents heading for Isengard.
Modeling went smoothly using Grove. The modelers first manipulated ranges of
the control parameters to define "Grovelets", subsets of foliage that expressed
a particular species of tree (Oak, Birch, Pine, etc.). Defining the set of
grovelets for an Ent took about half the scheduled time. The key to keeping the
time down was reusing the artists' existing Maya skills.
Grovelets were a hierarchy of orientable nodes. This was key to the procedural
generation, and it was also the key to introducing animation and dynamics. The
Grovelets were rigged with hand-created animation skeletons in Maya, and they
inherited motion from the skeleton via weighting. The skeletons were typically
sparse and didn't have any particular structural relationship to the Grove
hierarchy.
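As a rough sketch of what "inheriting motion via weighting" means (my own
simplification -- translations only, not Weta's actual rig): each Grove node
carries weights to a few sparse joints, and its animated position is the
weight-blended result of the joint motion, much like ordinary skinning.

```python
import numpy as np

def deform_nodes(rest_positions, joint_rest, joint_anim, weights):
    """Blend Grove node positions by sparse joint motion (skinning-style).

    rest_positions: (N, 3) node positions at rest
    joint_rest:     (J, 3) joint positions at rest
    joint_anim:     (J, 3) joint positions on the current frame
    weights:        (N, J) per-node weights, each row summing to 1
    """
    joint_offsets = joint_anim - joint_rest           # translation per joint
    return rest_positions + weights @ joint_offsets   # weighted offset per node

# Two sparse joints driving three foliage nodes; the top joint swings sideways.
nodes  = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 2.0, 0.0]])
j_rest = np.array([[0.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
j_anim = np.array([[0.0, 0.0, 0.0], [0.5, 2.0, 0.0]])
w      = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
print(deform_nodes(nodes, j_rest, j_anim, w))
```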
The skeletons were simulated by either Maya soft bodies driving the joints or
a cloth simulation driving the joints (the willow in particular used the cloth
simulator). Remember that Grove was only being used for the foliage elements on
the Ents -- essentially for secondary animation. The primary animation on the
Ents was done by the animation team.
Grove is implemented as a RenderMan DSO procedural, meaning that only the handful
of control parameters is recorded in the output RIB file; the hierarchy of
primitives is built only when the bounding box of the procedural is actually
rendered. Grove generated LOD elements all the way down
to RiCurves for distant trees; this meant it was very efficient to render. Although
they didn't have specific numbers, they said that Grove elements were an insignificant
part of their rendering time.
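The deferred-expansion idea, sketched in Python below rather than the actual C
DSO interface that RenderMan procedurals use (so the class and function names
here are my own): the scene carries only the control parameters and a bounding
box, and the expensive expansion (the grow() from the earlier sketch) only runs
if the renderer actually hits that bound.

```python
class LazyGrove:
    """Stand-in for the full foliage until its bounding box is actually hit."""

    def __init__(self, params, bound):
        self.params = params   # the handful of numbers stored in the scene file
        self.bound = bound     # conservative (min_xyz, max_xyz) bounding box
        self._expanded = None

    def subdivide(self):
        # Called only when the renderer decides the bound is visible; this is
        # where the heavy geometry generation actually happens.
        if self._expanded is None:
            self._expanded = grow(self.params)
        return self._expanded

def expand_visible(procedurals, bound_is_visible):
    """Off-screen groves cost almost nothing; visible ones expand on demand."""
    return [p.subdivide() for p in procedurals if bound_is_visible(p.bound)]
```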
The actual leaf geometry is not generated by Grove; it's assigned from pre-built
libraries of NURBS leaves brought in as RiInstances. Two LODs of leaf geometry
were used to help control rendering times.
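A toy version of that instancing step (the library and names are hypothetical):
each terminal branch point just references a shared, pre-built leaf mesh, with a
fine or coarse entry picked per LOD, rather than carrying unique leaf geometry.

```python
# Hypothetical pre-built leaf library: a fine and a coarse mesh per species.
LEAF_LIBRARY = {
    ("oak", "fine"):   "oak_leaf_hi.obj",
    ("oak", "coarse"): "oak_leaf_lo.obj",
}

def leaf_instances(leaf_sites, species, detail):
    """Attach a shared leaf mesh reference to each leaf site."""
    mesh = LEAF_LIBRARY[(species, "fine" if detail > 0.5 else "coarse")]
    # Every site shares the same mesh; only the placement differs.
    return [{"mesh": mesh, "position": pos} for pos in leaf_sites]
```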
ILM: T3 Face Matching
If you've seen Terminator 3, you've seen the scenes at the end of the movie
where half of Arnold's face has been burned away, revealing the endoskeleton.
They talked about the production of those scenes. Not too imaginatively, the
basic point was to take a CyberScan of Arnold's face and then do ridiculously
accurate match-animation of that model.
The match animation group just rolled the footage back and forth as a background
plate to their animation, adjusting the facial controls until they had a CG
model of Arnold's face that was pretty much pixel-accurate to the actual live
action plate.
Once they had that, it was pretty straightforward to peel away the part of the
CG face they didn't want, and draw a mask to matte that out of the live action
plate before compositing in the CG endoskeleton parts. It's an unbelievable
amount of tedious work.
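A minimal numpy sketch of that final step (my own simplification; the real comps
involved hand-drawn mattes and many more layers): the matte punches a hole in the
live action plate, and the CG endoskeleton render shows through it.

```python
import numpy as np

def comp_endoskeleton(plate, cg_endo, matte):
    """Where the matte is 1, reveal the CG endoskeleton; elsewhere keep the plate.

    plate, cg_endo: (H, W, 3) float images
    matte:          (H, W) float in [0, 1], 1 where the face is "burned away"
    """
    m = matte[..., None]    # broadcast the matte across the color channels
    return plate * (1.0 - m) + cg_endo * m
```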
They talked briefly about the explosion simulation here, but it was covered
in much more detail in the paper session (see Paper: T3 Nukes).
ILM: TX Melting Sequence
They talked about the TX Melting Sequence in the particle accelerator as well.
This was reasonably interesting but hopelessly slow -- they used a full-on fluid
simulation that drove a Particle Level Set to extract the liquid's implicit surface.
The most interesting thing they did, which they said they've used quite a bit
at ILM, was how they got a texture to bind to the implicit surface. Since the
underlying fluid flow simulation provides a flow field throughout the liquid,
they can use particles to carry UV information. They basically inject a lot
of particles at the surface of the original shape, and assign to each particle
the UV coordinates of the original texture at the place where the particle is
created. Then they let the flow field that comes out of the fluid simulation
drive the particles through space. When it's time to render the implicit surface
for the deformed liquid, they reconstruct a UV mapping from the displaced particles.
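Here's a rough Python sketch of that UV-carrying-particle trick as I understood
it (the function names and the nearest-particle reconstruction are my guesses,
not ILM's implementation): seed particles on the undeformed surface with its UVs,
advect them through the simulation's velocity field, then look up UVs for points
on the deformed surface from the nearest advected particle.

```python
import numpy as np

def seed_particles(surface_points, surface_uvs):
    """Each particle remembers the texture UV of the spot where it was born."""
    return surface_points.copy(), surface_uvs.copy()

def advect(positions, velocity_at, dt, steps):
    """Carry the particles through the fluid's flow field (simple Euler steps)."""
    for _ in range(steps):
        positions = positions + dt * velocity_at(positions)
    return positions

def uv_lookup(query_points, particle_positions, particle_uvs):
    """For each point on the deformed implicit surface, take the UV of the
    nearest advected particle (a crude stand-in for a smoother weighted blend)."""
    uvs = np.empty((len(query_points), 2))
    for i, q in enumerate(query_points):
        nearest = np.argmin(np.sum((particle_positions - q) ** 2, axis=1))
        uvs[i] = particle_uvs[nearest]
    return uvs
```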
ILM: Accurate Depth-of-Field in The Hulk
This talk wasn't actually in the Effects Omelette; it was in a separate sketch
session called "Video." The idea of the talk was that they improved the standard
image-based depth-of-field techniques by sliding the more distant image layers in
2D before applying the blur. Sliding the planes produces holes in the image, which
you fill from the nearest pixels.
The standard depth-of-field hack for 2D images is to get a depth image along
with your render. Then you blur the image by a variable filter size at each
pixel, where the filter size used depends on the difference between your in-focus
depth and the actual depth as recorded in the depth map. This standard technique
is very fast and, importantly, can be done completely as a post-process. The reason
they did the above variation was to reduce color fringing and a few other subtler
artifacts. Frankly, I wasn't sure it was worth it after watching the talk.
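For reference, a bare-bones version of that standard post-process trick (the box
filter and parameter names are my own; production filters are considerably
fancier): the blur radius at each pixel grows with the distance between that
pixel's depth and the focal depth.

```python
import numpy as np

def depth_of_field(image, depth, focus_depth, blur_scale=4.0, max_radius=8):
    """Blur each pixel with a box filter sized by |depth - focus_depth|.

    image: (H, W, 3) float image; depth: (H, W) depth map from the renderer.
    """
    h, w, _ = image.shape
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            r = int(min(max_radius, blur_scale * abs(depth[y, x] - focus_depth)))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    return out
```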
Digital Domain: Star Trek Nemesis
This was a fairly uninteresting talk about the ash decay effect from Nemesis
(the funniest part of the talk was that the speaker kind of made fun of how
badly the movie did at the box office). Basically, they used the same accurate
match animation technique as the T3 faces, plus a very complex RenderMan shader.
They talked some about the pipeline at DD, which sounds fairly imposing: the
modelers and animators work in Maya, and the TDs and compositors work in Houdini,
with some Flames on the back end. A lot of time and trouble goes into converting
data between those packages as it moves through the pipeline.
Digital Domain: Production Workflow
I also went to another sketch session called "Production Workflow Sketches"
where there was a Digital Domain talk. That one was even less interesting; it
was basically one of their software engineers complaining about what a pain
it was to convert between Maya and Houdini. The other talks in the Production
Workflow sketches weren't even about production workflow, so I left.