Intro
Voice-Gesture Sketching Tool
Short Term Scientific Mission project by Karmen Franinovic
Funded by European Science Foundation, European Cooperation in Science and Technology Action "Sonic Interaction Design"
Collaboration:
Michal Rinott, Holon Institute of Technology, Tel Aviv, Israel
Frédéric Bevilacqua, Ircam, Paris, France
For most designers and artists, sketching interactive sound concepts is a hard task due to the limits of their technical knowledge. Even sound synthesis experts and researchers in musical gesture have not yet developed a quick way to communicate their sonic ideas. The voice is one way to generate, brainstorm and communicate sonic concepts, and it has been employed by sound designers in the past. However, several issues arise when using this method, particularly in relation to self-produced sound embedded in interactive objects. The human voice is heard differently by the person producing it than by a listener: because the source of the sound is situated in the body, the vocal cords create vibrations and resonances that modify our perception. This phenomenon, named ergo-audition by Michel Chion, may be an obstacle to communicating the sound we intend. Everyone has experienced surprise when hearing their own voice played back. Another problem is the recording and further refinement of vocal sketches. When one sketches an object or a building, the lines can be redrawn, corrected and changed until the right form is found. But how can this be done with sound?
The goal of the VOGST project is to develop a tool for sketching and improvising sonic interaction through voice and gesture. As the core material of sonic interaction design is the relation between sound, artefact and gesture, the major effort of this project is directed towards facilitating the design of relationships between gesture and sound. VOGST is a simple abstract object with embedded computing and sound technology that facilitates voice-gesture sketching. It can record the voice through a microphone while simultaneously capturing the gesture performed. The gesture-sounds can be recorded, replayed and manipulated through the VOGST object itself, and also via an interface developed in Max/MSP. The tool will be tested with interaction designers in a workshop setting in order to evaluate design problems and specify the next prototype iteration. This project builds on findings from previous research within the Sonic Interaction Design community (Ekman and Rinott, 2010; Franinovic, Gaye and Behrendt, 2008; Bencina, Wilde and Langley, 2008; Franinovic, Hug and Visell, 2007).
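The core idea of capturing voice and gesture as a single, time-aligned stream, so that replaying one replays the other in sync, can be illustrated with a minimal sketch. All names and the data layout below are illustrative assumptions, not the actual VOGST or Max/MSP implementation, which works with live microphone and sensor input.

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    t: float          # seconds since recording started
    audio: bytes      # one audio frame from the microphone (assumed format)
    gesture: tuple    # e.g. a hypothetical (x, y, z) accelerometer reading

@dataclass
class GestureSoundSketch:
    """Toy model of a synchronized voice-gesture recording."""
    samples: list = field(default_factory=list)

    def record(self, t, audio_frame, gesture_reading):
        # Voice and gesture arrive together, so one timestamp covers both.
        self.samples.append(Sample(t, audio_frame, gesture_reading))

    def replay(self, speed=1.0):
        # Yield samples with their timing rescaled; speed > 1 plays faster.
        if not self.samples:
            return
        start = self.samples[0].t
        for s in self.samples:
            yield Sample((s.t - start) / speed, s.audio, s.gesture)

# Record four synchronized samples at 0.5 s intervals, then replay at 2x:
sketch = GestureSoundSketch()
for i in range(4):
    sketch.record(t=i * 0.5, audio_frame=b"\x00", gesture_reading=(i, 0, 0))
times = [s.t for s in sketch.replay(speed=2.0)]
# times == [0.0, 0.25, 0.5, 0.75] — the sketch replays in half its span
```

Keeping a single timestamped stream, rather than separate audio and gesture recordings, is one way to ensure that manipulations such as speed changes affect voice and gesture identically.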
References:
Bencina, R., Wilde, D. and Langley, S. (2008) Gesture≈Sound Experiments: Process and Mappings. In Proceedings of the 2008 International Conference on New Interfaces for Musical Expression, Genova, Italy.
Ekman, I. and Rinott, M. (2010) Using Vocal Sketching for Designing Sonic Interactions. In Proceedings of the 2010 Designing Interactive Systems, Aarhus, Denmark.
Franinovic, K., Gaye, L. and Behrendt, F. (2008) Exploring Sonic Interaction with Artifacts in Everyday Contexts. In Proceedings of the 14th International Conference on Auditory Display, Paris, France.
Franinovic, K., Hug, D. and Visell, Y. (2007) Sound Embodied: Explorations of Sonic Interaction Design for Everyday Objects in a Workshop Setting. In Proceedings of the 13th International Conference on Auditory Display.