I suppose the big game industry news of the day is the cancellation of the yearly E3 tradeshow (who gives a crap, it was just a big marketing fest), but more interesting is the announcement of a new technology for digitally capturing super-high-resolution models and motion of actors, called Contour. See articles in the NYTimes and Wall Street Journal. It's developed by entrepreneur and inventor Steve Perlman (veteran Apple guy, General Magic, WebTV) and will be demoed at this week's Siggraph in Boston. See and read more at his website, Mova.com.
Instead of placing a mesh of glowing dots all over the actor's face and filming her from various angles to create a moderately hi-res model and motion capture, Contour mixes fluorescent powder into the actor's makeup and captures monochromatic shaded images of the actor's face while she performs under seemingly normal lighting conditions — made possible with modified strobe-like fluorescent lights. The result is an extremely high-resolution digital model, photographed textures, and motion capture of the actor's face. (Animators have to manually add detail in places makeup can't go, like the eyeballs and the inside of the mouth.) Effectively, each grain of makeup acts like a motion-capture dot, allowing for very hi-res, low-cost capture — "volumetric cinematography". Brilliant! (literally)
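To make the strobing idea concrete, here's a toy sketch (not Mova's actual pipeline — just an illustration of the principle described above): with strobe-synced lighting, the camera can alternate between normally lit frames, usable as photographed texture, and dark frames in which only the fluorescent makeup glows, usable for dense surface capture. De-interleaving the frame stream recovers both. The frame ordering and function name here are assumptions for illustration.

```python
def deinterleave_frames(frames):
    """Split an interleaved capture into (lit_texture, glow_capture) streams.

    Assumes even-indexed frames were shot with the room lights on
    (photographed texture) and odd-indexed frames during the dark
    phase of the strobe cycle (only the fluorescent makeup visible).
    """
    lit = frames[0::2]   # texture frames: normal appearance
    glow = frames[1::2]  # capture frames: glowing makeup pattern only
    return lit, glow

# Example: six frames from a brief performance, alternating lit/dark.
frames = ["lit0", "glow0", "lit1", "glow1", "lit2", "glow2"]
texture_stream, capture_stream = deinterleave_frames(frames)
print(texture_stream)  # ['lit0', 'lit1', 'lit2']
print(capture_stream)  # ['glow0', 'glow1', 'glow2']
```

Because both streams come from the same camera within milliseconds of each other, the glow pattern and the photographed texture stay registered — which is what lets every grain of makeup serve as a tracking point on an otherwise normally filmed face.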
This has immediate applications to filmmaking, as the articles describe, as well as to motion-capture-oriented videogames. In purely visual terms, Contour does seem to make major progress toward crossing the uncanny valley, at least for linear (non-interactive) playback of an actor's performance.
But it does nothing to cross what one might call the uncanny valley of AI — how to generate believable interactive behavior. Canned motion-capture sequences are of little help when implementing highly dynamic, procedurally animated interactive characters.