Master 2012-2013
Internships in the SAR specialization
Multimodal Interaction of musical gesture using EMG and physical sensing


Site: http://www.gold.ac.uk/computing/
Location: Department of Computing, Goldsmiths, University of London
Supervisors: Prof Atau Tanaka, Dr Baptiste Caramiaux
Dates: from 15/02/2013 to 15/08/2013
Remuneration: Paid according to standard UK university rates, approximately €1,000/month
Keywords: ATIAM track: Music informatics; ATIAM track: Signal processing; ATIAM track: Acoustics

Description

INTRODUCTION

This internship will take place in the Embodied Audiovisual Interaction group in the Department of Computing, Goldsmiths, University of London. It is funded by the European Research Council project MetaGesture Music (MGM) (http://www.2020-horizon.com/METAGES...)-s1879.html) and is supervised by Prof Atau Tanaka and MGM researcher Dr Baptiste Caramiaux. The research project will be defined in the context of the larger MGM project, and the internship's outputs, in publications and knowledge generated, must contribute to MGM.

CONTEXT

Real-time gestural interaction with computer music systems has become increasingly widespread in musical production and performance. Physical sensing systems are found today in consumer electronics, notably in the gaming industry, with accelerometer-based systems in the Nintendo Wii and computer vision systems in the Microsoft Kinect. Physiological biosensing such as the electromyogram (EMG) has become more practical with dry-electrode systems and high-precision, low-cost electronics, enabling applications outside the medical domain in areas such as rehabilitation and sign language recognition.

Despite the potential demonstrated by these developments, capturing reliable, meaningful gestural information poses significant technical challenges. There is high inter-user variability, as well as intra-user variability. The differences observed might be errors to be suppressed, or "expressive" variation to be highlighted. Gesture execution in real-world settings is not isolated as in laboratory conditions: similar gestures might be performed in different postures, or in non-stationary contexts, and one gesture may lead directly into another, producing complex, compound gestures. There is a need not only to temporally segment gestures, but also to "demix" gesture primitives from a compound multimodal gesture.

Different sensing technologies detect different aspects of a gesture. Computer vision systems detect limb position and must infer motion. Accelerometers detect dynamics and must infer rotation. Physiological sensors report on the body's preparation to execute a gesture rather than the physical result of the gesture. By exploiting complementarities and couplings between the input modalities, it becomes possible to identify the elements within sequential or superimposed gestures.
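
To make this complementarity concrete, the following sketch (a hypothetical illustration in Python/NumPy rather than the project's Matlab/C++ toolchain; the window size and feature choices are assumptions, not project specifications) extracts a muscle-activation envelope from raw EMG and dynamics/orientation cues from a 3-axis accelerometer:

    import numpy as np

    def emg_envelope(emg, win=64):
        """RMS envelope of one raw EMG channel: reflects muscle activation,
        which typically precedes the observable limb movement."""
        emg = emg - np.mean(emg)                          # remove DC offset
        power = np.convolve(emg ** 2, np.ones(win) / win, mode="same")
        return np.sqrt(power)

    def accel_features(acc):
        """Per-sample features from a 3-axis accelerometer (N x 3 array):
        jerk magnitude captures dynamics; a crude tilt angle is inferred
        from the gravity component, since position is not observed directly."""
        jerk = np.linalg.norm(np.diff(acc, axis=0, prepend=acc[:1]), axis=1)
        tilt = np.arccos(np.clip(acc[:, 2] / np.linalg.norm(acc, axis=1), -1.0, 1.0))
        return np.column_stack([jerk, tilt])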

TOPIC

The internship will focus on the research task of using multimodal data (from EMG and accelerometer sensors) to recognize and segment free-space arm gestures. The system will consist of a supervised learning phase for the segmentation and a real-time run-time phase. Multimodal integration will first be based on feature fusion.
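
As a minimal sketch of feature-level fusion: per-frame feature vectors from the two modalities are aligned and concatenated into a single observation per frame. The scikit-learn frame classifier below is purely an illustrative supervised baseline, an assumption of this sketch rather than the proposed model:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def fuse(emg_feats, acc_feats):
        """Feature fusion: align the two modalities frame by frame and
        concatenate their feature vectors into one observation per frame."""
        n = min(len(emg_feats), len(acc_feats))
        return np.column_stack([emg_feats[:n], acc_feats[:n]])

    # X: fused (n_frames x n_features) matrix; y: per-frame labels from the
    # annotated database (0 = rest, 1..K = gesture class).
    clf = RandomForestClassifier(n_estimators=100)
    # clf.fit(fuse(emg_train, acc_train), y_train)
    # segmentation = clf.predict(fuse(emg_test, acc_test))  # frame-wise labels

Frame-wise labels obtained this way already constitute a crude segmentation; the dynamical-system model discussed next aims to do this continuously and in real time.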

We propose the use of a non-linear dynamical system (see REF), which the intern will investigate extending with features that serve the aforementioned goal.
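
For orientation only: one common realization of such a non-linear dynamical system is a particle filter that tracks the phase (position along a recorded template) and speed of the live input, in the spirit of the particle-filtering work cited in the bibliography (Deutscher et al.). The sketch below makes no claim to be the referenced model, and all parameter values are placeholders:

    import numpy as np

    def follow(template, stream, n_particles=200, sigma_obs=0.5):
        """template: (T x d) array, one pre-recorded reference gesture;
        stream: iterable of d-dimensional fused feature frames.
        Yields an estimated phase in [0, 1] for each incoming frame."""
        T = len(template)
        phase = np.random.uniform(0.0, 0.1, n_particles)   # position along template
        speed = np.random.normal(1.0, 0.2, n_particles)    # relative execution speed
        w = np.full(n_particles, 1.0 / n_particles)
        for obs in stream:
            # propagate: each particle advances along the template, with noise
            speed += np.random.normal(0.0, 0.05, n_particles)
            phase = np.clip(phase + speed / T, 0.0, 1.0)
            # weight: Gaussian likelihood of the live frame given each particle
            idx = np.minimum((phase * (T - 1)).astype(int), T - 1)
            dist = np.linalg.norm(template[idx] - obs, axis=1)
            w = w * np.exp(-0.5 * (dist / sigma_obs) ** 2)
            if w.sum() <= 0.0:             # all particles lost track: reset
                w = np.full(n_particles, 1.0 / n_particles)
            else:
                w = w / w.sum()
            # resample when the effective sample size collapses
            if 1.0 / np.sum(w ** 2) < n_particles / 2:
                keep = np.random.choice(n_particles, n_particles, p=w)
                phase, speed = phase[keep], speed[keep]
                w = np.full(n_particles, 1.0 / n_particles)
            yield float(np.sum(w * phase))

Thresholding the estimated phase (or the spread of the particle cloud) then gives a simple online criterion for gesture onset and completion, i.e. a real-time segmentation.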

TASKS

- To use an existing database of EMG + accelerometer data previously recorded from arm gestures of a pre-defined gesture vocabulary (a pilot study from our group has recently been completed)
- To extend the model to segment gestures in real time (the interested candidate can read the early work by Black et al.)
- To extend the model to take multimodal data input into account; several schemes could be envisaged: data fusion, feature fusion
- To evaluate its accuracy on the database against state-of-the-art methods (a minimal metric sketch follows this list)
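
A minimal sketch of the evaluation step mentioned in the last task, assuming per-frame ground-truth labels in the annotated database (the metric choice here is ours, for illustration only):

    import numpy as np

    def frame_accuracy(pred, truth):
        """Fraction of frames whose predicted segment label matches the
        ground-truth annotation (both integer arrays of equal length)."""
        pred, truth = np.asarray(pred), np.asarray(truth)
        return float(np.mean(pred == truth))

    def confusion(pred, truth, n_classes):
        """Confusion matrix: rows are true classes, columns predicted ones;
        off-diagonal mass shows which gestures the model confuses."""
        m = np.zeros((n_classes, n_classes), dtype=int)
        for t, p in zip(np.asarray(truth), np.asarray(pred)):
            m[t, p] += 1
        return m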

The results from the internship should lead to a conference and/or journal publication.

REQUIREMENTS

- Theory: signal processing and notions of machine learning
- Software: Matlab, C++, Max/MSP and/or openFrameworks
- All work will take place in English

Bibliography

Caramiaux, B., Bevilacqua, F., and Schnell, N. "Towards a Gesture-Sound Cross-Modal Analysis." Lecture Notes in Computer Science. Springer-Verlag, 2010.

Deutscher, J., Blake, A., and Reid, I. "Articulated Body Motion Capture by Annealed Particle Filtering." In IEEE Conference on Computer Vision and Pattern Recognition, 2000, vol. 2, pp. II-126–II-133.

Donnarumma, M., Caramiaux, B., and Tanaka, A. "Body and Space: Combining Modalities for Musical Expression." Submitted to Tangible, Embedded and Embodied Interaction 2013.

Françoise, J. "Realtime Segmentation and Recognition of Gestures using Hierarchical Markov Models." ATIAM Master's Thesis, 2011.

Tanaka, A. "The Use of Electromyogram Signals (EMG) in Musical Performance." CEC eContact! 14.2, 2012.