SIGGRAPH 2002 Trip Report

Leo Hourvitz, Maxis/EA

Summary

Another great, exhausting year of SIGGRAPH! As with last year, the most important stuff ultimately had to do with the new programmable shading hardware and especially nVidia's Cg shading language. That will greatly impact the flexibility and creativity we can bring to surfaces as the hardware it's really targeted at (NV3x and the next generation of consoles) takes hold in the market. Overall, this was a slightly smaller show, but in a really nice city, and with a lot of excitement still around.

Stuff I Wrote Up:

Real-time Shading Course
nVidia Cg Shader Workshop
nVidia Briefing
Electronic Theater / Animation Theater
    Vermeer: Master of Light
Fast Forward and the Symposium on Computer Animation
    Multi-weight Enveloping
Maya 4.5 Demo
Hardware Rendering Sketches
    Fins and Shells Hair
Electronic Arts Art/Tech Meeting
    Bent Normals
Web Graphics -- Games and Community
Gaming as the Dominant Medium of the Future Panel


Real-time Shading Course

I only sat in on the first half of the afternoon at this course. I actually tried to attend the character rigging course after my morning meetings, but the room had filled up and people were being turned away at the door!

Marc Olano from ATI (formerly of SGI)

Marc spoke about general shading architectures for real-time. He presented the following set of blocks, and talked about how you can configure these blocks in different orders to achieve different passes:

Different passes for different effects will use different blocks and flows within this pipeline model. For instance, the standard fixed-function shading pipeline looks like this:

...but this is only one example. Other examples are copying pixels from the application to the screen, rendering to texture, etc., etc.

ISL was a language implemented at SGI as a first attempt at providing a high-level shading language for real-time. In all real-time languages, some of the hardware restrictions end up exposed to the programmer; in ISL, they exposed these things:

ISL was implemented on top of OpenGL. In particular, they developed a system that compiled RenderMan Shading Language shaders to OpenGL. However, complex shaders became hundreds of passes, limiting its usefulness.


Eric Chan, Stanford University

Eric talked about the Stanford RTSL shading language and the recent work they've done to optimize it on the latest nVidia and ATI hardware.

One of the early decisions in RTSL was to make the frequency of computation (constant, per-vertex, per-light, per-fragment) explicit as types in the language. So the user writes only one shader which encompasses all processing, but the type system makes visible how often each calculation runs.
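
Here's a tiny Python sketch (mine, not Stanford's) of the key property of such a type system: combining two values promotes the result to the costlier computation frequency. The exact ordering of frequencies shown here is my assumption:

```python
# Hypothetical sketch of RTSL-style frequency inference, cheapest first.
# (The precise frequency lattice RTSL uses is an assumption here.)
FREQS = ["constant", "per-light", "per-vertex", "per-fragment"]

def combine(freq_a, freq_b):
    """An operation runs at the more frequent (costlier) of its operands."""
    return FREQS[max(FREQS.index(freq_a), FREQS.index(freq_b))]

assert combine("constant", "per-vertex") == "per-vertex"
assert combine("per-vertex", "per-fragment") == "per-fragment"
```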

They broke their compiler very explicitly into two phases: a front-end compiler generates a virtual program in intermediate form for each processing frequency. Then, compiler back-ends targeted at particular architectures translate the intermediate form into object code for each target. Most of his talk was about the specific back-end tree-matching technology they've recently implemented to create more optimized back-ends for their system.

They've implemented procedural shaders with Perlin noise in their newest compilers, but instruction counts for those are still in the hundreds. In general, across all modern hardware, they've found that passes are expensive compared to almost anything else. As part of their new backend they needed to come up with "cost models" for various decompositions of the shader programs into passes. These cost models vary per hardware architecture; for example, the cost model for ATI's current-generation hardware is 15 times the number of passes plus 2 times the number of textures referenced (with no term for the number of operations).
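
As a worked example, that quoted ATI cost model is trivial to write down. The model's coefficients are from the talk; the decompositions in the usage comments are made up for illustration:

```python
def ati_pass_cost(num_passes, num_textures):
    """Quoted cost model for ATI's current-generation hardware:
    15 per pass plus 2 per texture referenced, no per-instruction term."""
    return 15 * num_passes + 2 * num_textures

# A hypothetical 3-pass decomposition with 4 texture references...
print(ati_pass_cost(3, 4))   # 53
# ...beats a 4-pass decomposition with only 1 texture reference:
print(ati_pass_cost(4, 1))   # 62
```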

Since Cg was the big news at the show, they compared RTSL to Cg. As nVidia would probably fully agree, RTSL is by design a higher-level language than Cg:

  1. RTSL separates surface and light shaders
  2. In RTSL, you write a single program for all phases of computation which is split by the compiler.
  3. RTSL hides multipass whereas Cg makes it explicit.

After noting that, they came to the obvious conclusion that an excellent way to implement their compiler backend was to call the Cg compiler. They kept the part of their compiler backend that knows how to split the shader into multiple passes; then, for each computation frequency and each pass, they generate a Cg shader which they compile with the nVidia compiler. The result is that they don't have to generate the assembly code themselves anymore, which is a much cleaner model as hardware advances (they do still need to know how to do the pass splitting, however).

Sampling Procedural Shaders, Wolfgang Heidrich

No [current] hardware is complicated enough to handle complex noise-based shaders. So, automatically sample & reconstruct them!

That's the straightforward idea behind his approach. However, it's greatly complicated because rather than just precomputing and sampling the complex part of the shader (say, the noise functions at several octaves), he precomputes and stores the entire shader including lighting. Therefore, rather than storing a single sampled value, he has to store the entire light field described by the procedural shader, making the result a rather large set of precomputed textures. In the end, reconstructing the procedural shader requires interpolating among the samples from 8 of the textures.
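
Here's a minimal sketch of what that reconstruction step might look like if the precomputed textures sit on a 3-D sampling lattice, so that 8 of them surround any query point. The parameterization and the `samples(i, j, k)` lookup are assumptions on my part, not Heidrich's actual scheme:

```python
def reconstruct(samples, u, v, w):
    """Trilinear reconstruction from the 8 precomputed samples that
    surround the query point (u, v, w) in a 3-D sampling lattice.
    samples(i, j, k) is a hypothetical lookup into the stored textures."""
    i, j, k = int(u), int(v), int(w)
    fu, fv, fw = u - i, v - j, w - k
    out = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                weight = ((fu if di else 1.0 - fu) *
                          (fv if dj else 1.0 - fv) *
                          (fw if dk else 1.0 - fw))
                out += weight * samples(i + di, j + dj, k + dk)
    return out
```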

Conclusion

Overall, this wasn't a great course. Mostly, it made me wish I had been to the first half of the course ("State of the Art in Hardware Shading") the previous day -- unfortunately, I had meeting commitments that prevented me from attending.

Electronic Arts Recruiting Booth

Patrick Kenney from Worldwide Recruiting organized a great booth and recruiting presence at the show. We had a new booth which looked great; out front it had three game stations running Medal of Honor, Freekstyle, and Madden, which were busy essentially the whole show. We had a steady stream of people coming by -- at least when I was there -- and were definitely getting a number of good people. I spent quite a while talking to an ILM CG Effects Supervisor and a couple of PDI TDs. They were all very aware that a number of their peers had recently come to EA, and were checking out what was going on and some of the reasons folks were coming over. We had over 200 reels accepted at the booth, every one of which Patrick reviewed himself! We interviewed over 60 people -- mostly for art -- at the show, and quality was pretty good. We also collected quite a number of programming resumes, although apparently we didn't do as much interviewing for programmers. At least two people I knew -- plus a couple of friends of Stephen Kody, our art intern -- came by for interviews. All in all, a big success for ferreting out candidates, although since EALA has a huge number of openings at the moment, many of the candidates may end up getting funneled down their way. Almost everyone from EA who came to the show ended up working at the booth for some part of the night.


nVidia Cg Shader Workshop

This workshop (taught by an old friend from Pixar who has since been the shading lead for Final Fantasy) covered the Cg language itself and the CgFX wrapper, and then took everyone through a hands-on programming exercise in Cg. It was repeated many, many times during the week, and lasted about 90 minutes.


The Cg Language

Cg is nVidia's low-level shading language -- the name suggests the analogy they intend, which is that like the original C language, it's merely "high-level assembly language" for shaders. It looks very much like C, with a complex type system encapsulating both hardware requirements and limitations; but the kinds of functions you write are specifically targeted at hardware shading (you implement a vertex shader and a pixel shader). Like the RenderMan Shading Language, Cg includes a large number of possible implicit parameters to these functions. However, Cg actually does a better job (IMHO) of being a good general-purpose programming language like C than RMSL does.

In a Cg shader, the input parameters and output parameters are both explicitly grouped into structs which are passed to and returned from the shader. This is again an improvement on the RenderMan practice of having a large body of implicit parameters which are not declared in the code. It also makes sense in the Cg environment because there is no separation between light and surface shaders, meaning that the positions of all lights are essentially parameters to each surface shader (in practice, it's unlikely you would bother to use a Cg shader unless you were doing something different from the fixed-function pipeline).

The Cg compiler has back-ends that compile to either OpenGL 1.2 or DX8.1 today. Cg has been made a part of DX9 and they'll release the DX9 compiler as soon as possible. They're also supporting the emerging OpenGL 2.0 standard, which unifies the various OpenGL extensions that have been proliferating in the last several years. More interestingly, they open-sourced the Cg compiler itself, so that hopefully back ends for other hardware will become available. I was suspicious that there are some assumptions in Cg that might prevent you from taking full advantage of ATI hardware, but I don't know enough about the hardware yet to be sure.

On nVidia hardware, they report that across a range of shaders, most were 5-15% slower than the hand-coded version, which is pretty good for a compiler technology this early in its evolution.


The CgFX Wrapper

The CgFX wrapper technology is designed to allow bundling up the vertex and pixel shaders for all passes of a given material into a package which not only holds the resources but also describes the properties and parameters of the overall surface effect. CgFX allows users to mix and match between shaders created via the Cg compiler and hand-coded shaders, and is being integrated in some form into DX9.

One of the most important things I saw at the show is that CgFX has been integrated into the new versions of Maya, XSI, and Max. In all three, you can create a CgFX-based material in the program and have the tool's windows use the CgFX shader to draw the update. This offers a much tighter connection between shading and display. XSI goes even further in that Cg is used as the way they build their shader networks. All of these suggest that in a year or two (as NV30-based hardware becomes standard for content creators), high-end shader writers will use Cg as their development environment, because the feedback loop between shader changes and visual updates drops to a fraction of a second.

Cg Development Kit

I brought it home on CD; it's also available online at developer.nvidia.com.

Conclusions

I think getting shaders back into the hands of technical artists, so that we don't have to bug the programmers every time we want a variant surface, is a tremendous boon. Programmers will still be critical to shading in our environment for the foreseeable future, but this kind of technology (especially in the next-gen console round) will make shading a much more responsive and widely-used part of achieving our in-game looks. The GF4 generation of hardware might have high enough resource limits to make this useful, or it might not (in particular, the Cg model can get very squirrelly if you don't have some form of extended-range pixel format). However, the NV30 architecture (see below) is definitely where the Cg-like form of shader hacking will really come into its own.

nVidia Briefing

We had a three-hour briefing for Electronic Arts from nVidia on Wednesday morning. Technical Artists, Art Directors, and rendering programmers from quite a few EA studios were present, definitely including EAC, EARS, Maxis, EALA, Westwood, and EAUK (sorry if I forgot others who were there). This information is all confidential to nVidia, so please respect their sharing it with us.

Product Road Map

Interfaces: They'll begin supporting AGP 8x in the 2nd half of 2002 and cut the product line over to it through the first half of 2003. Then, in the second half of 2003, they'll begin shifting over to 3GIO/LH (aka "PCI Express"). This is a 4GB/sec bus that should be the new high-end interconnect standard.

NV30 Rollout

NV30 ships in a ~$399 product in Jan 2003 -- high end. It will then roll out in the performance and mainstream categories throughout 2003. By Xmas 2003 they expect there will be 20-30 million NV3x products installed, and by then they will no longer be selling any pre-NV30 products. NV30 has 120M transistors, a 1.5GPixel anti-aliased polygon fill rate, and a 200M tris/sec setup rate.

Product Family history:

NV1x, aka GeForce 2: hardware transform & lighting; hardware cube environment mapping. Produced something like Sega Virtua Fighter: 50K polys/sec, 1M pix ops fill rate.

NV2x, aka GeForce 3, GeForce 2 Ti, or Xbox: corresponds to DX8. hardware shadows, full-scene anti-aliasing, vertex processors. Canonical result is something like Dead or Alive 3: 100M polygons/sec, 1 gigapixel/sec.

NV30 Product Architecture

Emphasis for NV30 was not on massively expanding "quotable" statistics; instead, key features were:
- Advanced Programmability
- High-precision color
- Cg shader language
- Efficient architecture (especially for AA -- make it so you'll never have AA off)
- High bandwidth to system memory

Programmability Features
Vertex shaders expanded from 128 to 256 instructions
Pixel shaders can be 1024 instructions
Added loops, branches, call & return (subroutines)
Sin & Cos, Exp and Log (all instructions are 1 clock cycle)
Support for arbitrary user clip planes

Performance
8 pixels/clock
Setup rate of 200M tris/sec
Performance vs. pixel shader length: pixel shaders of 8 instructions or fewer run at 1.6GPixels/sec. Performance for longer shaders falls off inversely linearly, i.e., a 32-instruction shader runs at 400MPixels/sec, a 128-instruction shader at 100MPixels/sec, etc. (see the sketch below).
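
Those numbers fit a simple inverse-linear model; here's a quick sanity check (my arithmetic, not an official nVidia spec):

```python
def nv30_fill_rate(num_instructions):
    """1.6 GPix/s up to 8 instructions, then inversely linear in length."""
    return 1.6e9 * min(1.0, 8.0 / num_instructions)

assert nv30_fill_rate(8) == 1.6e9     # 1.6 GPixels/sec
assert nv30_fill_rate(32) == 400e6    # 400 MPixels/sec, as quoted
assert nv30_fill_rate(128) == 100e6   # 100 MPixels/sec, as quoted
```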

Pixel Shading
texture instructions
derivative/LOD instructions
up to a single-precision float per component
pixel kill (don't remember exactly what this was)
Can use the new 128-bit formats (4 floats) as multichannel as well/instead
16 active textures in a pixel shader -- you can implement your own filter kernels as well

Variants:

NV30GL -- authoring-oriented board, lots of anti-aliased line features
NV31 -- 1st mainstream DX9 GPU, out in late 2003. Supports identical instruction counts, features, data types, etc. -- just lower overall performance.

Presentation from Kevin Bjorke, head of shading at nVidia

"Programmable shaders are the gaffer's tape of game development"

Kevin went through how programmable shaders can aid the visual development of a project. His presentation actually spent a lot of time analyzing a Vermeer painting, which was fascinating -- coincidentally, I've given a talk very close to that part of his, analyzing a different Vermeer painting. Here are some other parts of his talk, where he analyzed how to use various well-known non-real-time programmable shading tricks to get certain looks in a scene (all of which he's implemented at least once in Cg).

Complementary Lighting a la PDI: Lights that are a different color as they fall off than they are at the highlight area. For instance, a light might be warm yellow at the highlight but fall off towards blue as the angle between L and N approaches 90 degrees. This can save you placing a lot of fill lights in your scene. It doesn't necessarily use complementary colors in practice; it's just two colors picked by the lighter.
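
A hedged sketch of the idea in Python; the linear blend is my guess at the simplest version, since the actual PDI falloff curve wasn't specified:

```python
import numpy as np

def complementary_light(N, L, highlight_color, falloff_color):
    """Blend the light's color from highlight_color at full N.L toward
    falloff_color as the angle approaches 90 degrees, then apply the
    usual diffuse term. Colors are (r, g, b) arrays picked by the lighter."""
    ndotl = max(float(np.dot(N, L)), 0.0)
    color = ndotl * highlight_color + (1.0 - ndotl) * falloff_color
    return ndotl * color
```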

Wrap Lighting: Make L move a little bit towards the surface normal. This will make back lights "wrap" a little bit around the silhouette edge of the object. That's generally a good thing when you're trying to do rim lighting -- you no longer have to get the rimlight position fussily correct. This can in turn cause problems if you're doing self-shadowing with this light; however, to get around that, you can shrink the object a little in the vertex shader when rendering the shadow pass.
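
The talk described this as moving L toward N; an equivalent and common way to write it is to remap N.L directly, as in this sketch (the wrap amount and the exact remapping are my assumptions, not what was presented):

```python
import numpy as np

def wrap_diffuse(N, L, wrap=0.25):
    """Wrap lighting: bias the diffuse term so light reaches a little
    past the geometric terminator; wrap=0 reduces to standard Lambert."""
    return max((float(np.dot(N, L)) + wrap) / (1.0 + wrap), 0.0)
```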

Eye highlights: to simulate area lights, do a single-bounce reflection of a texture. Especially important for eyeballs; we're very used to seeing an area specular highlight in eyeballs.

Fake subsurface scattering: Kind of like the wrap lighting trick, light a subsurface point, and add that as a tint back into the shading of your point. They used this for skin and cloth on Final Fantasy. I assume in this case you move the normal from I? For cloth, they also added a 'lintiness' pass to make the cloth edges look good.
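
My loose reading of that trick as a sketch; the back-lighting term and blend amount here are assumptions on my part, not what was actually shipped on Final Fantasy:

```python
import numpy as np

def fake_subsurface(N, L, surface_color, sss_tint, amount=0.3):
    """Add a tint driven by light hitting the 'back' of the surface
    (negated normal) to approximate light bleeding through thin areas."""
    front = max(float(np.dot(N, L)), 0.0)
    back = max(float(np.dot(-N, L)), 0.0)
    return front * surface_color + amount * back * sss_tint
```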

Random post-presentation discussions

What NV30 features aren't supported in DX9? Right now the only thing missing is the condition codes and predicates. Since branches are very expensive on a GPU, the NV30 architecture has a lot of support for setting condition codes, which allow optional instructions to occur without messing up the pipeline. They said Microsoft hasn't really grasped how important that is yet and hasn't signed up for it in DX9.

Occlusion queries: They said these are notably fast, and one non-obvious fact is that although they don't work on NV2x hardware under DX8, they do work on NV2x hardware under both OpenGL and DX9.

Electronic Theater

This year the Electronic Theater was notable for having a lot of strong pieces -- but really no single standout pieces like some past years. I brought back the DVD from the ET and the animation theaters, which were relatively complete this year, and we'll be showing those at the next artist's lunch. The ET opened with a well-edited tribute to computer graphics pioneer Robert Abel who passed away in 2001.

My favorite piece of the show was a nonfiction piece from the National Gallery of Art and Interface Media Group called "Vermeer: Master of Light." It's a detailed dissection of the composition, lighting, and technique involved in Vermeer's masterwork "The Piano Teacher," and a beautiful glimpse into the working methods of a master painter. I've done a talk at the University of Washington that's exactly this kind of dissection of the dense composition in a Vermeer painting (I used "The Allegory of Painting"), but this piece is so nicely done it's a joy to watch.

My other favorite from the show was "Polygon Family: Episode 2" from Polygon Pictures in Japan. This darkly humorous piece about a late-night war between husband and wife riffs on all sorts of Japanese cultural references to great effect.

Overall, it was a French/European year. The Jury Award went to "Le déserteur" for obvious reasons -- a beautiful, emotional anti-war piece -- and the Best Animated Short award went to another heavily art-directed piece, the Polish short "The Cathedral." A company called Duran Duboi had two really good pieces -- a music video for Super Furry Animals called "It's Not the End of the World" and a humorous dancing robot piece called "Number One." BUF Compagnie had a PSA about development pressure called "EDF La Vallée" which used a SimCity theme.

There was a strongly art-directed student piece from Ringling ("Passing Moments," a '20s fantasy) and some other humorous stuff, including Vinton's reel of the Carl & Ray animations for Blockbuster -- again, a strong show but no one mind-boggling piece.

Animation Theater

I actually got to the animation theater for some time this year. The good stuff in there was mostly some very funny commercials, including Vizzavi's "Tennis," which was almost an homage to "For the Birds," and Flora "Jack Spratt" from England, both animated by Passion Pictures. Games were represented by the Tekken 4 and World of Warcraft openings (no game-related pieces made it into the Electronic Theater this year, in contrast to recent years).


Fast Forward and the Symposium on Computer Animation

This year they had a new event: The Papers Fast Forward. This was an evening event where every paper from the conference was previewed in a three minute presentation. I wasn't able to attend the SIGGRAPH fast forward, but apparently it was a great event -- Ken Perlin did his paper as a rap, and another author delivered their paper in auction speak.

However, there was another fast forward I did get to. On Sunday and Monday at the beginning of SIGGRAPH week, there was a co-located (but separate-registration) small symposium focused specifically on research in computer animation. They had a "fast forward" of that symposium afterwards as a panel, and it sounded very interesting (so much so that I immediately went and bought the proceedings of the symposium). Here are a few highlights from the symposium; I have the CD-ROM with PDFs of all the papers on it if anyone wants to see them...

Alpha Wolf -- Bill Tomlinson from the MIT Media Lab presented their character simulation of wolf cubs in a pack. This was an emerging-technology demonstration at SIGGRAPH last year, and it has a really nice soft-stroke rendering style. Bill is out of MIT with his PhD now and looking for a job creating synthetic characters with personality...

Motion-capture-driven animations that hit and react -- Victor Zordan from Georgia Tech did a paper about how to integrate reactions to external events with motion-capture animation. His particular example was boxing, and he compared his method against real boxing footage (it showed he wasn't quite there yet, but it was clearly the right way to evaluate the technique). It also showed what I call the new-model academic: this was pretty much a paper on "Could you make a better Knockout Kings this way?"; no more the ivory-tower interests of academia past!

IK-based Foot Skate Cleanup -- Some guys from Wisconsin did a very complete job of cleaning up foot sliding in mocap. The key to their technique is to let the bone length vary by a tiny percent, which produces much better numerical stability. It made me wonder if you couldn't use the same idea (tiny adjustments to bone length) to reduce IK pop?

Controlled Animation of Video Sprites -- One of the coolest things I saw at the conference was this technique by Arno Schodl from Georgia Tech. They process *lots* of video (e.g., 30,000 frames) of a creature to get images that they then use to re-animate the object. Dancing hamsters! Choreographed flies! This was originally developed for the video textures paper in SIGGRAPH 2000, but I think the video sprite use of the technology is much cooler. See www.videotexture.com.

Multi-weight Enveloping: Least-squares Approximation Techniques for Skin Animation: This was a paper by Corina Wang from ILM (a friend of David Benson's) which seems like a potentially huge thing for envelope weighting. Instead of determining a weight per bone as in conventional enveloping, this technique determines a separate weight per component of the transform matrix (unfortunately meaning 16 times as many lerps). Because that would be too painful to specify by hand, they instead derive the weights from a set of 6-10 "training poses" for the character. In other words, you sculpt the character directly in 6-10 poses; then specify the range of effect for each bone (i.e., "only these verts are candidates for weighting to the upper arm bones"); then let it derive the detailed weights for each component of each matrix. The good news is that it appears from their results that doing so allows you to have volume-preserving transformations in areas such as the shoulder, and to get around the 'collapsing elbow' problem endemic to skeleton-space deformation (aka bone weighting). You don't need to do the kind of work we do now where we, e.g., create a separate bicep bone so we can weight the twist axis of the upper arm separately; instead, the differing weighting coefficients for different elements of the matrix create a similar effect. Reading this paper made me go back and read John Lewis' SIGGRAPH 2000 paper about Pose Space Deformation, which seems like a more complex (albeit possibly more robust) implementation of a similar procedure. I got the impression from David Benson that the Lewis technique is the more favored one within ILM?
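
Here's a minimal numpy sketch of the least-squares fit as I understand it, for a single vertex. The real paper restricts the candidate bones per vertex and uses a modified least-squares solve to keep the weights well-behaved, so treat this as the bare idea only:

```python
import numpy as np

def fit_multiweights(rest_pos, pose_mats, pose_targets):
    """rest_pos:     (4,) homogeneous rest position of one vertex
    pose_mats:    (P, B, 4, 4) bone transforms for P training poses
    pose_targets: (P, 3) sculpted positions of this vertex in each pose
    Returns w of shape (B, 3, 4) so the deformed position is
    deformed[i] = sum over b, j of w[b, i, j] * M[b, i, j] * rest_pos[j]."""
    P, B = pose_mats.shape[:2]
    w = np.zeros((B, 3, 4))
    for i in range(3):  # solve each output coordinate independently
        A = (pose_mats[:, :, i, :] * rest_pos).reshape(P, B * 4)
        x, *_ = np.linalg.lstsq(A, pose_targets[:, i], rcond=None)
        w[:, i, :] = x.reshape(B, 4)
    return w
```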

Interactive Animation of Ocean Waves was a paper by the folks at INRIA in France. It's a little too complex to use in The Sims, but it certainly is better-looking than most wave hacks done in real-time. The basic idea is to push a screen-space grid out onto the ocean plane, resulting in geometric sampling appropriate to the view. Then you apply a fairly standard multi-octave wave synthesis to the grid. It was just barely interactive using the whole CPU on current processors.
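
A toy version of the synthesis step, once a grid point has been projected onto the ocean plane. The spectrum, constants, and wave direction here are my stand-ins, not the paper's:

```python
import numpy as np

def ocean_height(x, y, t, octaves=4):
    """Sum sinusoids at doubling frequency and halving amplitude; the
    sqrt(g*k) phase term is the standard deep-water dispersion relation."""
    h, freq, amp = 0.0, 0.08, 1.0
    for _ in range(octaves):
        h += amp * np.sin(freq * (x + 0.7 * y) + t * np.sqrt(9.8 * freq))
        freq *= 2.0
        amp *= 0.5
    return h
```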

Synthesizing Sound from Rigid-Body Simulation by James O'Brien from Berkeley was a paper I'd seen before (at GDC), but it's still totally cool. Using the math that's generally used for computing deforming objects (like Jell-O cubes), he actually derives the sound that wind chimes make from the physics (i.e., not from a priori knowledge about wind chimes) in real-time. So, metallic objects make a clang when dropped not because the sound department recorded a clang but because the physics for making a clang-y sound when dropped can now be reasonably simulated in real-time. It's pretty amazing stuff, even if I don't think we're going to take skilled sound artists out of the picture anytime soon.

Stylized Video Cubes -- Allison Klein from Princeton did a cool paper about a way of producing many kinds of stylized video by regarding video as defining a three-dimensional space, tracing three-dimensional 'render solids' through that space, and rendering each solid independently on reconstruction. It sounds quite odd, and many of the images it produces are, but it's still pretty cool.

It looked like this symposium was really fun; hopefully they'll keep doing it in the future!

Maya 4.5 Demo

We got a fairly lengthy demo of Maya 4.5 from an Alias product specialist. It made me quite a bit more excited about getting upgraded to 4.5. Although there really isn't a tremendous high-tech jump in any area, there are quite a few things -- especially in basic poly editing -- that are a good deal better for daily use. Here were some of the things we saw.

Again, none of these are mind-bogglingly great, but they definitely seem like they'd improve daily work flow. Unfortunately, the new Notes feature does create an attribute named "notes" on the objects you tag with it, which creates a potential conflict with our noteTrackWin. In the Unlimited product, they also have a new fluids simulation tool.

Hardware Rendering Sketches

There was a sketch I didn't think much of -- "Spatial Bidirectional Reflectance Distribution Functions," which reproduced the BRDF of a single object using a tremendous amount of texture memory. Next was a sketch by Johannes Hirche from the University of Tübingen called "Curvature-driven Sampling of Displacement Maps," about a proposed hardware architecture for tessellating displaced surfaces in hardware -- however, it's just a proposed architecture at this point.

However, the last sketch in this session was great. John Isidoro and Jason Mitchell from ATI presented their real-time implementation of Jerome Lengyel's "fins and shells" approach to real-time hair on the ATI Radeon. It was pretty cool-looking for short hair (no long-hair examples were shown; it seems like the technique probably doesn't work as well there)... Essentially, the idea is that at modeling time you extrude a new polygon out from every edge of the surface and draw each hair strand onto it to get its texture; then, you do roughly the same for several 'shells' expanded out around the surface. The shells are drawn in a later layer than the fins. For crew cuts and other short, bushy hairstyles it seemed to look very good and was definitely viable in hardware. The original Lengyel papers were in the Eurographics Rendering Workshop 2000 and the ACM SIGGRAPH Symposium on Interactive 3D Graphics 2001 -- I hadn't read those, but I'll download them soon.
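
The shell half of the technique is easy to sketch; this is my own minimal version, not ATI's code:

```python
import numpy as np

def shell_layers(positions, normals, num_shells=8, hair_length=0.1):
    """Generate the extruded copies of the mesh for fins-and-shells hair:
    each shell pushes every vertex further out along its normal, and gets
    textured with the hair cross-section at that height above the scalp.
    positions, normals: (N, 3) arrays; returns a list of (N, 3) arrays."""
    return [positions + normals * (hair_length * i / num_shells)
            for i in range(1, num_shells + 1)]
```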

Electronic Arts Art/Tech Meeting

Henry LaBounta from EAC organized a roughly half-hour meeting among art directors, tech art directors, and a few rendering programmers on Thursday to talk about lighting issues across the company. We also touched on a lot of the process and management issues that all teams at Electronic Arts face; those concerns were fairly common across the company.

Unfortunately for The Sims 2, most of the discussions about environments were about the kinds of pre-baked schemes that aren't very applicable to our user-built world. However, the discussions about character lighting are directly applicable, and at the moment the trend is strongly towards using irradiance or lightmap lighting for characters in the console titles. Irradiance lighting has just been implemented for one of the PS2 titles in Vancouver; they reported that the performance is the same as 3 directional lights plus a spec pass.

Bent Normals

Something that could be very relevant to The Sims 2, however, was a new idea for how to pre-bake objects in order to produce more surface detail -- and it's independent of what light source scheme you eventually use. At each vertex (at each vertex/face if unshared), you shoot out a number of rays from the vertex and intersect them with the object. There are two results of the computation that are stored back into the *normal* at the vertex (a sketch follows the list):

  1. The normal is *shortened* by the percentage of the rays which intersected the object. This creates self-shadowing within the object because the L dot N calculation that lighting typically does will now have a smaller amplitude.
  2. The normal's direction is warped towards the average of the surviving rays. That's because the surface really will only receive light from the surviving directions, and we want to prejudice the lighting calculation towards such normals (obviously, you now have to be careful about back-face culling).
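
Here's a sketch of that precompute for one vertex. `occluded(origin, direction)` is a hypothetical ray query against the object, and the random hemisphere sampling is my assumption:

```python
import numpy as np

def bent_normal(vertex, normal, occluded, n_rays=64, seed=0):
    """Shoot rays over the hemisphere around the normal; shorten the
    normal by the fraction of rays that hit the object, and bend it
    toward the average of the surviving (unoccluded) directions."""
    rng = np.random.default_rng(seed)
    survivors, hits = [], 0
    for _ in range(n_rays):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, normal) < 0.0:   # flip into the hemisphere around N
            d = -d
        if occluded(vertex + 1e-4 * normal, d):
            hits += 1
        else:
            survivors.append(d)
    if not survivors:
        return np.zeros(3)            # fully occluded vertex gets no light
    bent = np.mean(survivors, axis=0)
    bent /= np.linalg.norm(bent)
    return bent * (1.0 - hits / n_rays)   # shortened by occluded fraction
```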

Sometimes this technique does require that you introduce some extra polygons so there are vertices in the right places to be affected by it. However, it seems that right now spending a moderate number of extra polygons to get better shading would be a good adaptation to the hardware we'll be running on. This technique was introduced by ILM in their course on RenderMan, but seems to be relatively well-suited to real-time. In particular, the Bond racing game has already implemented this idea for pre-calculating lighting effects on their cars.

Web Graphics -- Games and Community

This year SIGGRAPH had a separate technical track for web-related work; this was the community-focused session.

The first paper was by teamchman.com or mutafukas.com (I'm just the reporter here), a French company that built a pretty cute online animated game called Banja in Flash. The game has been released in France and just came online in Germany; it hasn't been picked up for (translation and) distribution in the US yet. They were a little unclear on exactly how big their user base is; also, their presentation was greatly diluted because at the end they showed a five-minute trailer for the project they're clearly *really* interested in: an Akira-meets-Fifth-Element movie done entirely in Flash. The look was cool, but I wasn't convinced they could actually sustain a story for 1.5 hours.

The second paper was on a friendship-network-building system from Japan called TTT (which abbreviates the Japanese for "my friend's friend is my friend" -- "tomodachi no tomodachi wa tomodachi desu"). It uses little penguin-like cartoon avatars to mediate building a friendship network that carries an SMS-style conversation. This is actually being used live by a Tokyo-area ISP for their users, although only very recently. It's also built in Flash.

The last paper was a Web3D implementation of Clue! (also from Keio University in Japan). They have five toys in a child's house. The toys can move around from area to area in the child's room, and one of the five toys is actually the 'criminal' who's trying to 'break' the other toys before being accused by one of the other four (literally, the accusation is "The marionette broke the clown in the study with the scissors" -- holy Colonel Mustard!). The game was implemented in Shockwave 8.5 with the back-end programmed in PHP. One of the best parts of their design was that the conversations were structured to be language-free, so that children of different nationalities can play the game without the need for translating chat or SMS interaction.

Gaming as the Dominant Medium of the Future Panel

Way back last year at the December WWGC meeting at Maxis, we strategized about getting more EA content into SIGGRAPH, and one of the things we decided to do was put together a panel about the gaming industry. This panel was the result, and ended up having Bob Nicoll from EARS, J.C. Herz from the NY Times, Ken Perlin from NYU, Patrick Gilmore from DreamWorks, Will Wright, and Glenn Entis from EAC on it debating the truth or falsity of the title.

It used a cool panel format: Ken Perlin moderated and immediately began accepting questions from the audience written on 3x5 cards. He arranged and selected from these cards, and peppered the panelists for opinions. He was very good at it, and it kept things lively.

One thing that became evident is that although few of the panelists attacked the proposition as directly as we'd hoped -- they mostly debated what it meant for a medium to be dominant -- the audience writing the questions really did accept the proposition as true. This can only be good for EA's stock price.

I took extensive notes from this panel which I can send around under separate cover; here are a few good pull quotes.

"One of the ways to define dominance is, where are the best and the brightest going? There are a lot of shift we see in the cg industry, and we're losing jobs in some areas, we're gaining them in others, and we're seeing a lot of talent going into the games business, and that's one of the things you need to look at" -- J.C. Herz

"One programmer from a leading game company said to me, "We can market to 15-year-old boys really well, because we know how to make these fascinating environments where you meet new creatures under new circumstances, move around them, and kill them. What we don't know how to do is make these fascinating environments where you meet new creatures under new circumstances and make them feel really bad. When we can figure out how to do that, we'll get the other half of the market [women]." -- Ken Perlin

"We did a couple focus groups for SimCity and asked, "How would you feel if we had real businesses in the game, like McDonald's or Burger King?" and they said it would suck. In the next group, without mentioning this, we asked them what new features would be cool, and they said, "Oh, if we had businesses that were really in our city, like McDonald's and Burger King." That's why I don't trust focus groups." -- Will Wright

"Suppose Moore's law really goes on for another fifteen years, what's it all going to be like when we have ridiculously large computational resources and infinite bandwidth to create these interactive entertainment experiences with?" -- moderator
"To run evolution, to have this world that's a blank slate and I get to determine it's fate, and then get to go in and interact with it. To create a world with alien creatures in it, and go in and interact with it." -- Patrick Gilmore's answer to the question. Will showed no reaction at all...