Master 2017-2018
Internships of the SAR specialty
Live-Coding and Augmented Dance: Interactive Machine Learning for Improvising Movement-Sound Interactions


Website: Personal page
Location: LIMSI-CNRS, Plateau du Moulon, Bât. 508, 91403 Orsay.
Supervisor: Jules FRANÇOISE, CNRS Researcher (Chargé de Recherche).
Dates: from 01/03/2018 to 31/07/2018
Stipend: legal internship gratification (approximately €570)
Keywords: ATIAM track: Computer Music; ATIAM track: Signal Processing

Description

# Context

The plethora of new technologies for sensing and capturing human movement offers exciting possibilities for interaction with digital media. Yet, beyond purely functional gesture-based control, designing expressive interactions that take into account the complexity of human movement remains a major challenge. This project addresses movement modeling in contemporary dance. In particular, we consider the design of interactive systems for augmented dance, where the dancer’s movements are mapped to continuous sound or music feedback. Such an application requires both computational features that describe some of the expressive aspects of movement and tools that let users personalize the motion-sound mapping by example, using machine learning. In previous work, we have developed techniques for movement analysis [1] and for real-time movement recognition and mapping [2].
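
As a purely illustrative sketch of what “mapping by example” means here, the JavaScript snippet below interpolates demonstrated pairs of motion features and sound parameters with inverse-distance weighting. The feature vectors and parameter names are hypothetical, and this naive scheme is only a stand-in for the probabilistic models used in [2].

```javascript
// Illustrative sketch only: a toy "mapping by example" where demonstrated
// (motion feature, sound parameter) pairs are interpolated at runtime with
// inverse-distance weighting. Feature and parameter names are hypothetical.

const examples = []; // each entry: { features: [...], params: { pitch, gain } }

// Record one demonstration: a motion feature vector paired with the sound
// parameters the designer wants to hear for that movement.
function addExample(features, params) {
  examples.push({ features, params });
}

// Euclidean distance between two feature vectors of equal length.
function distance(a, b) {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

// Map a new feature vector to sound parameters by inverse-distance
// weighting of the demonstrated examples.
function mapFeatures(features) {
  const weights = examples.map(e => 1 / (distance(features, e.features) + 1e-6));
  const total = weights.reduce((sum, w) => sum + w, 0);
  const out = { pitch: 0, gain: 0 };
  examples.forEach((e, i) => {
    out.pitch += (weights[i] / total) * e.params.pitch;
    out.gain += (weights[i] / total) * e.params.gain;
  });
  return out;
}

// Demonstrations: a slow, low-energy gesture maps to a low, quiet sound;
// a fast, high-energy gesture maps to a high, loud one.
addExample([0.1, 0.2], { pitch: 110, gain: 0.2 });
addExample([0.9, 0.8], { pitch: 440, gain: 0.8 });
console.log(mapFeatures([0.5, 0.5])); // interpolated parameters for an unseen gesture
```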

In this project, we consider an extreme case of this scenario: an augmented dance performance where all interactions are created in real time through live-coding [3]. The goal is to enable real-time improvisation between a coder and a dancer. This situation involves capturing, annotating, analyzing, and mapping movement to sound on the fly. This internship will focus on developing new interfaces for live-coding in augmented dance that make use of interactive machine learning techniques. This work will be conducted in collaboration with dancers. A collaboration with cognitive scientists on the topic of movement observation and anticipation may also be considered.

# Goals

The main goal of this internship is to design, implement, and evaluate new interfaces for live-coding movement interaction. While some methods and technologies exist for rapid prototyping with movement analysis and machine learning, current tools remain limited for real-time improvisation, which can be seen as “extreme prototyping”: all interactions are created in real time between the coder/designer and the performer.

The student will first review related work on live-coding in music performance, expressive movement analysis, and interactive machine learning for the design of interactive systems. Through a collaboration with contemporary dancers, the student will then develop several prototypes of interfaces for real-time movement interaction, addressing a number of complementary aspects of the live-coding context:

  • How to anticipate and efficiently capture particular gestures of the performer?
  • How to interactively segment and annotate the performer’s movements?
  • How to build feature extractors on the fly to analyze expressive movement? (A minimal sketch of incremental feature computation follows this list.)
  • How to automatically discover features and patterns in the performer’s movement (unsupervised learning)?
  • How to learn models of the movement-sound relationship on the fly and refine them during the performance (online learning)?
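
As an illustration of the “on the fly” requirements above, the sketch below computes two simplified expressive features (“energy” and “jerkiness”, hypothetical stand-ins for the descriptors studied in [1]) incrementally from a stream of acceleration samples, so that the extractor can be redefined mid-performance without reprocessing past data.

```javascript
// Illustrative sketch only: two simplified expressive features ("energy" and
// "jerkiness", hypothetical stand-ins for proper movement descriptors) computed
// incrementally from a stream of 3-D acceleration samples.

class OnlineMovementFeatures {
  constructor(smoothing = 0.9) {
    this.smoothing = smoothing; // exponential-moving-average factor in [0, 1)
    this.energy = 0;            // smoothed magnitude of acceleration
    this.jerkiness = 0;         // smoothed magnitude of acceleration change
    this.prev = null;           // previous acceleration sample
  }

  // Update both features with one new acceleration sample [ax, ay, az]
  // and return their current values.
  update(sample) {
    const magnitude = Math.hypot(...sample);
    this.energy = this.smoothing * this.energy + (1 - this.smoothing) * magnitude;
    if (this.prev !== null) {
      const change = Math.hypot(...sample.map((v, i) => v - this.prev[i]));
      this.jerkiness = this.smoothing * this.jerkiness + (1 - this.smoothing) * change;
    }
    this.prev = sample;
    return { energy: this.energy, jerkiness: this.jerkiness };
  }
}

// In a live-coding session, the extractor can be re-instantiated with new
// parameters while the sensor stream keeps running, e.g.:
// sensor.on('accelerometer', s => console.log(features.update(s)));
const features = new OnlineMovementFeatures(0.95);
console.log(features.update([0.1, 0.3, 9.8]));
```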

These interfaces will be developed within an open-source live-coding platform built on web technologies for movement analysis, gesture recognition, and mapping (XMM*) and for sound synthesis (Web Audio API). Finally, the student will conduct a user study to evaluate one or several of the proposed interfaces, comparing them with existing tools in the context of live-coding (for example, through a workshop with dancers). The student will be in charge of specifying the protocol, conducting the study, and analyzing the results.
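
As an illustration of how such a platform can wire model output to sound synthesis, the sketch below drives standard Web Audio API parameters from a stand-in mapping function. `mapGestureToSound` is hypothetical and only represents a trained model; the Web Audio calls themselves (AudioContext, OscillatorNode, GainNode, setTargetAtTime) are standard browser API.

```javascript
// Illustrative sketch only: driving Web Audio API synthesis parameters from the
// output of a gesture-to-sound model. `mapGestureToSound` is a hypothetical
// stand-in for a trained model; the Web Audio calls themselves are standard.
// (Browsers typically require a user gesture before the AudioContext can start.)

const ctx = new AudioContext();
const osc = ctx.createOscillator();
const gain = ctx.createGain();
osc.connect(gain).connect(ctx.destination);
gain.gain.value = 0;
osc.start();

// Hypothetical stand-in for a learned mapping: motion features in,
// synthesis parameters out.
function mapGestureToSound(features) {
  return { frequency: 110 + 660 * features.energy, amplitude: features.energy };
}

// Called for every new frame of motion features (e.g. at sensor rate).
// setTargetAtTime smooths parameter changes for continuous sonic feedback.
function onMotionFrame(features) {
  const { frequency, amplitude } = mapGestureToSound(features);
  osc.frequency.setTargetAtTime(frequency, ctx.currentTime, 0.05);
  gain.gain.setTargetAtTime(amplitude, ctx.currentTime, 0.05);
}

onMotionFrame({ energy: 0.4 });
```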

This project belongs to the emerging research field of Human-Centered Machine Learning**. The internship could be continued as a doctorate with a broader scope on machine learning for extreme prototyping and live-coding of movement interaction. Depending on the results obtained, the internship may lead to a publication at an international conference (ACM CHI, ACM TEI, NIME, MOCO).

# Overview of the work

The work will include the following steps:

  • State of the art on live-coding, movement modeling, and interactive machine learning.
  • Design, implementation, and pilot evaluations of algorithms and interfaces for movement capture, annotation, and modeling in a live-coding context.
  • User study evaluating one or several of the proposed interfaces, from the specification of the protocol to the analysis of the results.

# Student’s Profile

We are looking for a Master’s student with strong skills in human-computer interaction, signal processing, and/or machine learning, as well as an interest in dance and the performing arts. Programming proficiency is required (preferably in JavaScript).

# How to apply

Send a CV, a cover letter, and recent grade transcripts to: jules.francoise@limsi.fr

* XMM library: https://github.com/Ircam-RnD/xmm

** See recent workshops at major HCI conferences, for example: http://hcml2016.goldsmithsdigital.com/

Bibliographie

[1] Fdili Alaoui, S., Françoise, J., Schiphorst, T., Studd, K., & Bevilacqua, F. (2017). Seeing, Sensing and Recognizing Laban Movement Qualities. In Proceedings of the ACM Conference on Human Factors in Computing Systems.

[2] Françoise, J. (2015). Motion-Sound Mapping by Demonstration. PhD Dissertation, Université Pierre et Marie Curie.

[3] Collins, N., McLean, A., Rohrhuber, J., & Ward, A. (2003). Live Coding in Laptop Performance. Organised Sound, 8(3), 321-330.