CS547 Human-Computer Interaction Seminar  (Seminar on People, Computers, and Design)

Fridays 12:30-1:50 · Gates B01 · Open to the public
Elaine Chew and Alexandre François
Radcliffe Institute for Advanced Study, University of Southern California
Analytical Listening through Interactive Visualization
February 29, 2008

This talk introduces the project of our research cluster at the Radcliffe Institute for Advanced Study. Our goal is to make discerning listening to music accessible by offering interactive visualizations of musical structures, captured and analyzed from music streams in real time. The project has two components: the mathematical model and algorithms for tonal analysis, and the underlying software architecture that enables real-time interaction.

Our tonal analysis and visualization system, MuSA.RT, is based on Chew's Spiral Array model, a geometric model with algorithms to identify and track evolving tonal contexts. The system displays the pitches played, and the closest triad and key, as the piece unfolds in performance. The pitch spelling, chord, and key are computed by a nearest-neighbor search in the Spiral Array, using two centers of effect (CEs), which summarize the current short-term and long-term contexts. The three-dimensional model dances to the rhythm of the music, spinning smoothly so that the current triad forms the background for the CE trails.
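The description above gives enough of the Spiral Array's geometry to sketch the core computation: pitches sit on a helix indexed along the line of fifths, a CE is a weighted average of sounded pitch positions, and chords and keys are identified by nearest-neighbor search among representative points. The Python sketch below illustrates this for major triads only; the radius, rise, and triad weights are illustrative assumptions, not Chew's calibrated parameters.

```python
import numpy as np

# Illustrative parameters; in Chew's model the radius R and the rise H
# per quarter turn are calibrated so that perceptually close intervals
# map to nearby points. These exact values are assumptions.
R, H = 1.0, np.sqrt(2.0 / 15.0)

def pitch_pos(k):
    """Position of pitch index k on the line of fifths (..., F=-1, C=0, G=1, ...)."""
    return np.array([R * np.sin(k * np.pi / 2),
                     R * np.cos(k * np.pi / 2),
                     k * H])

def center_of_effect(indices, weights):
    """A CE is a weighted average of sounded pitch positions."""
    pts = np.array([pitch_pos(k) for k in indices])
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * pts).sum(axis=0) / w.sum()

def major_triad(root_k, w=(0.6, 0.3, 0.1)):
    """Triad representation: weighted combination of root, fifth, and third.
    The weights here are placeholders, not Chew's calibrated values."""
    return center_of_effect([root_k, root_k + 1, root_k + 4], w)

# Nearest-neighbor search: find the major triad whose representation
# lies closest to the CE of the currently sounding pitches.
names = ['F', 'C', 'G', 'D', 'A', 'E', 'B']
triads = {n: major_triad(k - 1) for k, n in enumerate(names)}

ce = center_of_effect([0, 4, 1], [1.0, 1.0, 1.0])  # sounded C, E, G
nearest = min(triads, key=lambda n: np.linalg.norm(triads[n] - ce))
print(nearest)  # -> C
```

In the full model, key representations are built analogously as weighted combinations of triad representations, so the same nearest-neighbor search identifies the key as well as the chord.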

A challenge of building a system like MuSA.RT is that a human performer can never play a piece the same way twice. Apart from natural perturbations in timing from one performance to the next, expert performers can deliberately use expressive devices, such as pedaling or tempo variations, to highlight different structures so as to produce different interpretations of the same piece. A system for identifying and tracking evolving tonal structures must be robust to, yet flexible enough to capture, such performance variations.
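The abstract does not specify how MuSA.RT windows its two CEs, but one way to see why such a system can tolerate timing variation is to weight each note by its sounded duration and let older material decay smoothly. In the sketch below, the DecayingCE class, the duration weighting, and the half-life constants are all assumptions for illustration, not MuSA.RT's documented scheme.

```python
import numpy as np

class DecayingCE:
    """Running center of effect with exponential forgetting.

    A short half-life yields a chord-scale (short-term) context; a long
    half-life yields a key-scale (long-term) one. Constants are illustrative.
    """
    def __init__(self, half_life_s):
        self.rate = np.log(2.0) / half_life_s
        self.num = np.zeros(3)   # decayed, duration-weighted sum of pitch positions
        self.den = 0.0           # decayed sum of durations
        self.t = 0.0             # time of the last update

    def add(self, pos, duration, t):
        decay = np.exp(-self.rate * (t - self.t))  # forget older notes smoothly
        self.num = self.num * decay + duration * np.asarray(pos)
        self.den = self.den * decay + duration
        self.t = t

    def value(self):
        return self.num / self.den if self.den > 0 else None

short_ce = DecayingCE(half_life_s=0.5)  # tracks the current chord
long_ce = DecayingCE(half_life_s=8.0)   # tracks the prevailing key
```

Because each note's contribution scales with its duration, a moderate tempo change stretches numerator and denominator together, leaving the weighted average, and hence the detected triad and key, largely unchanged.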

MuSA.RT was designed using François' Software Architecture for Immersipresence (SAI), a general formalism for the design, analysis, and implementation of complex software systems. Based on a concurrent asynchronous processing model, SAI defines primitives and organizing principles that bridge the gap between mathematical models and natural interaction. From its underlying principles to its graphical notation and derived tools, SAI embraces a human-centered approach to the design of computing artifacts.
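SAI has its own primitives and graphical notation, which the talk does not spell out; the sketch below only conveys the flavor of the concurrent asynchronous model it describes, with independent processing cells passing data pulses along streams at their own rates. The Cell class and the pipeline are hypothetical, not SAI's actual API.

```python
import queue, threading, time

# A minimal sketch of a concurrent asynchronous pipeline in the spirit
# of SAI: independent "cells" connected by streams, each consuming and
# emitting "pulses" at its own pace. Not SAI's real interface.
class Cell(threading.Thread):
    def __init__(self, process, inbox, outbox=None):
        super().__init__(daemon=True)
        self.process, self.inbox, self.outbox = process, inbox, outbox

    def run(self):
        while True:
            pulse = self.inbox.get()        # block until a pulse arrives
            result = self.process(pulse)    # cell-local processing
            if self.outbox is not None:
                self.outbox.put(result)     # pass the pulse downstream

# Hypothetical MuSA.RT-like pipeline: MIDI events flow through a tonal
# analysis cell into a rendering cell, each running concurrently.
midi_in, analysis_out = queue.Queue(), queue.Queue()
Cell(lambda e: {'event': e, 'ce': '...'}, midi_in, analysis_out).start()
Cell(lambda r: print('render', r), analysis_out).start()

midi_in.put({'pitch': 60, 'onset': 0.0})
time.sleep(0.1)  # let the daemon threads drain the queues before exit
```

Decoupling analysis from rendering in this way is what lets a visualization keep pace with a live performance: a slow frame does not block incoming events.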

Elaine Chew is an Associate Professor of Industrial and Systems Engineering and of Electrical Engineering at the University of Southern California (USC) Viterbi School of Engineering. She was the first honoree of the Viterbi Early Career Chair. She earned PhD and SM degrees in Operations Research from MIT, and a BAS in Mathematical and Computational Sciences and Music Performance from Stanford University. Professor Chew also holds diplomas and degrees in piano performance from Trinity College, London, and Stanford University.

Her research interests center on the computational modeling of music and its performance. She founded and heads the Music Computation and Cognition Laboratory at USC, where she conducts and directs research on music and computing. She received the US National Science Foundation CAREER Award and the Presidential Early Career Award for Scientists and Engineers for her research and education activities at the intersection of music and engineering.

Professor Chew is on the founding editorial boards of the Journal of Mathematics and Music, the Journal of Music and Meaning, and ACM Computers in Entertainment. She has served on numerous program committees for conferences in music and computing; this year, she is Program Co-Chair for the International Conference on Music Information Retrieval.

Professor Chew is on sabbatical in 2007-2008, during which she is the Edward, Frances, and Shirley B. Daniels Fellow at the Radcliffe Institute for Advanced Study. At Radcliffe, she and her collaborator Alexandre François form a research cluster on Analytical Listening through Interactive Visualization.