CS547 Human-Computer Interaction Seminar (Seminar on People, Computers, and Design)
Fridays 12:30-1:50 · Gates B01 · Open to the public
Elaine Chew · Radcliffe Institute for Advanced Study; University of Southern California
Alexandre François · Radcliffe Institute for Advanced Study; University of Southern California

Analytical Listening through Interactive Visualization
February 29, 2008

This talk introduces the project of our research cluster at the Radcliffe Institute for Advanced Study. Our goal is to make discerning listening to music accessible by offering interactive visualizations of musical structures, captured and analyzed from music streams in real time. The project has two components: the mathematical model and algorithms for tonal analysis, and the underlying software architecture that enables real-time interaction.

Our tonal analysis and visualization system, MuSA.RT, is based on Chew's Spiral Array model, a geometric model with algorithms to identify and track evolving tonal contexts. The system displays the pitches played, and the closest triad and key, as the piece unfolds in performance. The pitch spelling, chord, and key are computed by a nearest-neighbor search in the Spiral Array, using two centers of effect (CEs) that summarize the current short-term and long-term contexts. The three-dimensional model dances to the rhythm of the music, spinning smoothly so that the current triad forms the background for the CE trails.

A challenge in building a system like MuSA.RT is that a human performer never plays a piece the same way twice. Apart from natural perturbations in timing from one performance to the next, expert performers can deliberately use expressive devices, such as pedaling or tempo variations, to highlight different structures and so produce different interpretations of the same piece. A system for identifying and tracking evolving tonal structures must be robust to, yet flexible enough to capture, such performance variations.
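The center-of-effect idea can be sketched in a few lines: pitches sit on a helix (one quarter turn per perfect fifth), the CE is a duration-weighted centroid of the sounded pitches, and the reported triad is the candidate nearest the CE. This is a minimal illustration, not the MuSA.RT implementation: the radius and height values, the equal-weight triad representations, and all function names are assumptions for this sketch (the published model uses calibrated weights).

```python
import numpy as np

def pitch_position(k, r=1.0, h=np.sqrt(2 / 15)):
    """Position of pitch index k on the line of fifths
    (..., F=-1, C=0, G=1, D=2, ...) on a helix: one quarter
    turn per fifth. r and h are assumed parameter values."""
    return np.array([r * np.sin(k * np.pi / 2),
                     r * np.cos(k * np.pi / 2),
                     k * h])

def center_of_effect(pitch_indices, durations):
    """Duration-weighted centroid of the sounded pitches:
    a CE summarizing the current tonal context."""
    w = np.asarray(durations, dtype=float)
    pts = np.array([pitch_position(k) for k in pitch_indices])
    return (w[:, None] * pts).sum(axis=0) / w.sum()

def nearest(ce, candidates):
    """Nearest-neighbor search: the candidate whose point
    lies closest to the CE."""
    return min(candidates, key=lambda name: np.linalg.norm(ce - candidates[name]))

# Toy candidate triads, each represented (simplistically) by the
# centroid of its own pitches on the helix.
triads = {
    "C major": center_of_effect([0, 4, 1], [1, 1, 1]),   # C, E, G
    "G major": center_of_effect([1, 5, 2], [1, 1, 1]),   # G, B, D
    "F major": center_of_effect([-1, 3, 0], [1, 1, 1]),  # F, A, C
}

# A C major chord with the root held longer pulls the CE toward C.
ce = center_of_effect([0, 4, 1], [2.0, 1.0, 1.0])
print(nearest(ce, triads))  # C major
```

As the short-term and long-term CEs are just centroids over different time windows, the same search reports both the local chord and the prevailing key.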
MuSA.RT was designed using François' Software Architecture for Immersipresence (SAI), a general formalism for the design, analysis, and implementation of complex software systems. Based on a concurrent asynchronous processing model, SAI defines primitives and organizing principles that bridge the disconnect between mathematical models and natural interaction. From its underlying principles to its graphical notation and derived tools, SAI embraces a human-centered approach to the design of computing artifacts.
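The concurrent asynchronous style SAI formalizes can be illustrated generically: processing cells run in their own threads and pull data pulses from queues, so analysis never blocks the live source. This is a hedged, generic sketch, not SAI's actual API, notation, or primitives; the cell, queue names, and the stand-in analysis step are all invented for illustration.

```python
import threading
import queue

def cell(process, inbox, outbox):
    """A processing cell: apply `process` to each incoming pulse and
    forward the result. A None sentinel shuts the cell down and
    propagates the shutdown downstream."""
    while True:
        pulse = inbox.get()
        if pulse is None:
            outbox.put(None)
            return
        outbox.put(process(pulse))

notes = queue.Queue()    # pulses from a (simulated) music source
labels = queue.Queue()   # results bound for the visualization

# Hypothetical analysis step standing in for real-time tonal analysis.
analyzer = threading.Thread(
    target=cell,
    args=(lambda note: "analyzed:" + note, notes, labels),
    daemon=True)
analyzer.start()

for n in ["C4", "E4", "G4"]:   # the source thread feeds pulses in
    notes.put(n)
notes.put(None)

results = []
while (r := labels.get()) is not None:
    results.append(r)
print(results)  # ['analyzed:C4', 'analyzed:E4', 'analyzed:G4']
```

Because each queue decouples producer from consumer, adding another cell (say, a renderer consuming `labels`) changes the graph, not the existing code, which is the kind of compositionality the formalism is after.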