Paper: Rendering Aerodynamic Sound
Though there has been an undercurrent of papers and interest in rendering sound
at SIGGRAPH previously, this paper takes the term even more literally, since sounds
are rendered from (aptly named) sound textures. The specific sounds they were
interested in rendering are aerodynamic sounds, and the two types of examples
they worked on were swinging objects through the air, such as a sword or club,
and wind blowing over environmental objects, such as fences or window cracks
(the researchers are mostly from Hokkaido University, and wind blowing
across a snowy landscape is a major concern there ;-).
The essence of their method is to take the sound-causing object, break it up
into small (~6 inch) linear segments, perform an idealized constant-flow fluid-dynamics
simulation offline for each segment, then store the results as a sound
texture, which gives the sound pressure at a receiver as a function of the direction
to the receiver (using an aeroacoustic theory known as Curle's method). The texture
is thus either 1D or 2D, depending on whether the sound-causing object is
radially symmetric or not; in the cases they showed, the data for a sound
texture ranged from 40 KB to 5 MB. Each linear segment is then treated
as a point sound source during reconstruction. Constructing sound textures for
the various objects in their demo took several hours on fast machines.
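As a rough mental model of the data structure (the class name, array layout, reference-speed field, and nearest-neighbor lookup here are my own guesses for illustration, not taken from the paper), a sound texture can be sketched as a table of precomputed pressure signals indexed by receiver direction:

```python
import numpy as np

class SoundTexture:
    """Hypothetical sketch of a precomputed sound texture.

    For a radially symmetric segment (e.g., a cylindrical sword
    section) one angle suffices (a 1D texture); otherwise two angles
    are needed (a 2D texture). Each entry stores the sound-pressure
    signal computed offline by the CFD simulation at some reference
    flow speed, via Curle's method.
    """

    def __init__(self, pressure_signals, reference_speed):
        # pressure_signals: shape (n_theta, n_samples) for a 1D
        # texture, or (n_theta, n_phi, n_samples) for a 2D texture.
        self.signals = np.asarray(pressure_signals)
        self.reference_speed = reference_speed  # flow speed used offline

    def lookup(self, theta, phi=None):
        """Return the stored pressure signal nearest to the given
        receiver direction (nearest-neighbor here; a real system
        would presumably interpolate between entries)."""
        n_theta = self.signals.shape[0]
        i = int(round(theta / np.pi * (n_theta - 1)))
        i = min(max(i, 0), n_theta - 1)
        if self.signals.ndim == 2:   # 1D texture (radially symmetric)
            return self.signals[i]
        n_phi = self.signals.shape[1]
        j = int(round((phi % (2 * np.pi)) / (2 * np.pi) * (n_phi - 1)))
        return self.signals[i, j]
```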
At runtime, the flow direction and strength are calculated for each segment of the
object, the sound pressure is looked up for the receiver's direction, a
frequency correction for flow velocity and an amplitude correction for both
flow velocity and receiver distance are applied, and the results are summed across
all the sound sources. Sound was rendered at 60 Hz on laptops that
were also drawing the scene; they don't present the specific sound-rendering
times in milliseconds, but it was clearly very lightweight.
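A minimal sketch of what that per-frame mixdown might look like, reusing the SoundTexture sketch above. The resampling-by-speed-ratio trick for the frequency correction, the cubic amplitude exponent, and the 1/r distance falloff are my assumptions about plausible corrections, not the paper's exact formulas:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Segment:
    position: np.ndarray        # segment center in world space
    flow_direction: np.ndarray  # relative air-flow direction
    flow_speed: float           # relative air-flow speed

def angle_between(a, b):
    """Angle in [0, pi] between two 3D vectors."""
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def synthesize_frame(segments, receiver_pos, texture, out_len):
    """Hypothetical mixdown: one point source per segment.

    For each segment, look up the stored pressure signal for the
    receiver's direction (assuming a 1D texture for simplicity),
    shift its pitch in proportion to the current flow speed, scale
    its amplitude by flow speed and distance, and sum the results.
    """
    mix = np.zeros(out_len)
    for seg in segments:
        to_receiver = receiver_pos - seg.position
        r = np.linalg.norm(to_receiver)
        theta = angle_between(seg.flow_direction, to_receiver)
        base = texture.lookup(theta)

        ratio = seg.flow_speed / texture.reference_speed

        # Frequency correction: vortex-shedding pitch rises roughly
        # linearly with flow speed (constant Strouhal number), so
        # resample the stored signal by the speed ratio.
        n = len(base)
        src_idx = np.minimum(np.arange(out_len) * ratio, n - 1)
        shifted = np.interp(src_idx, np.arange(n), base)

        # Amplitude correction: louder with speed (cubic assumed, as
        # for a dipole source) and 1/r falloff with distance.
        mix += shifted * (ratio ** 3) / max(r, 1e-6)
    return mix
```

Note that shifting pitch by simple resampling also changes the signal's effective duration, so a real implementation would need to loop or window the stored signal to produce continuous output.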
Their reconstructions of outdoor sounds (wind blowing through a fence or across
a window crack) were impressively accurate, enough so to feel a chill in the
warmth of San Diego! I wasn't as impressed with the swinging sounds, but they
were a much more compelling demo of why you would synthesize sound: the sound
corresponded exactly to the actions of the character onscreen
and varied meaningfully with your location relative to the character. While
at this time we can get better performance and more control with pre-recorded
sounds, as our games use more and more programmatic combinations of animations,
it may become worthwhile to start using these kinds of lightweight sound-synthesis
techniques.
The full title of this paper was "Real-time Rendering of Aerodynamic Sound using
Sound Textures based on Computational Fluid Dynamics" by Yoshinori Dobashi, Tsuyoshi
Yamamoto, and Tomoyuki Nishita.