Similarity and Feature Learning for EEG Recordings of Music Perception and Imagination
Sebastian Stober
Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
Abstract: Aiming to learn spatio-temporal features from the OpenMIIR dataset of EEG recordings taken during music perception and imagination, two deep learning techniques based on convolutional autoencoding are proposed. Conventionally, autoencoders consist of an encoder that transforms the input into an internal feature representation and a decoder that reconstructs a signal from these features; representative features are learned by minimizing the reconstruction error on the inputs. To learn features that are stable across trials of the same stimulus, cross-trial autoencoding instead pairs trials belonging to the same stimulus and reconstructs one trial from the features of another. The second technique, similarity-constraint encoding, extends these paired trials into triplets by adding a trial from a different stimulus that has to be recognized as less similar than the pair based on the feature representation. This facilitates learning features that distinguish trials from different stimuli and at the same time embeds the trials into a similarity space. Combining both techniques and using only three trials per stimulus for training, the paired trials within triplets of unseen trials are correctly recognized as most similar with 70–75% accuracy depending on the subject, significantly above the 50% chance baseline.
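To make the two training objectives concrete, here is a minimal sketch in PyTorch, written purely for illustration; the abstract does not specify an implementation, and the architecture, channel count, feature dimensionality, margin, and all names below (ConvAutoencoder, cross_trial_loss, similarity_constraint_loss) are assumptions, not details from the work itself.

```python
# Illustrative sketch only -- not the author's original implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    """1-D convolutional autoencoder over multi-channel EEG trials.
    Input shape: (batch, n_channels, time). Sizes are assumptions."""
    def __init__(self, n_channels=64, n_features=16, kernel_size=7):
        super().__init__()
        # Encoder: spatio-temporal convolution -> feature representation.
        self.encoder = nn.Conv1d(n_channels, n_features, kernel_size,
                                 padding=kernel_size // 2)
        # Decoder: map the features back to the EEG channel space.
        self.decoder = nn.Conv1d(n_features, n_channels, kernel_size,
                                 padding=kernel_size // 2)

    def forward(self, x):
        features = torch.tanh(self.encoder(x))
        return features, self.decoder(features)

def cross_trial_loss(model, trial_a, trial_b):
    """Cross-trial autoencoding: encode one trial of a stimulus and
    reconstruct a *different* trial of the same stimulus."""
    _, reconstruction = model(trial_a)
    return F.mse_loss(reconstruction, trial_b)

def similarity_constraint_loss(model, anchor, paired, other, margin=1.0):
    """Similarity-constraint encoding on a triplet: the paired trial
    (same stimulus as the anchor) must be more similar to the anchor
    than the trial from a different stimulus, by at least `margin`."""
    f_a, _ = model(anchor)
    f_p, _ = model(paired)
    f_o, _ = model(other)
    # Flatten the time axis and compare features by cosine similarity.
    sim_pair = F.cosine_similarity(f_a.flatten(1), f_p.flatten(1))
    sim_other = F.cosine_similarity(f_a.flatten(1), f_o.flatten(1))
    # Hinge loss: penalize triplets where the pair is not clearly closer.
    return F.relu(margin - (sim_pair - sim_other)).mean()
```

Under this sketch, combining the techniques amounts to summing the two losses per triplet, and a held-out triplet counts as correctly resolved when sim_pair exceeds sim_other, which corresponds to the accuracy figure reported in the abstract.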
Bio: Sebastian Stober is a post-doctoral fellow at the Brain and Mind Institute of the University of Western Ontario, where he investigates ways to identify perceived and imagined music pieces from electroencephalography (EEG) recordings. He studied computer science with a focus on intelligent systems and music information retrieval at the Otto-von-Guericke University Magdeburg, where he received his diploma degree in 2005 and his Ph.D. in 2011 for his thesis on adaptive methods for user-centered organization of music collections. He has also been a co-organizer of the International Workshops on Learning Semantics of Audio Signals (LSAS) and Adaptive Multimedia Retrieval (AMR). With his current research on music imagery information retrieval, he combines music information retrieval with cognitive neuroscience.