My name is Jason Levine; I am a computational artist and musician. I have been making interactive Kinect music experiences since 2011, both as an independent artist and as a consultant. I am very excited about the new Azure Kinect because of the increase in sensor resolution, the physical design, and, most of all, the new features.
I chose to explore Azure Kinect Gestures because they present a robust new modality for interacting with the Kinect. Full-body gestures are an intuitive physical interface, perfect for creating and interacting with music. While I have created many gesture-controlled music systems with previous iterations of the Kinect, it was difficult to make gestures that were robust, and especially difficult to make gestures that would work for everyone.
The gestures recognized by the Azure Kinect can act as momentary switches, latching switches, and continuous controllers. In other words: buttons, switches, and knobs, the mainstays of any piece of audio hardware or software.
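To make that mapping concrete, here is a minimal sketch of the three control types as they might sit between a gesture recognizer and an audio engine. All of the names here are mine, not part of any Azure Kinect SDK, and the demo's actual implementation may differ:

```python
class MomentaryControl:
    """Button: active only while the gesture is held (e.g. T pose)."""
    def __init__(self, on_change):
        self.active = False
        self.on_change = on_change  # callback into the audio engine

    def update(self, gesture_detected):
        if gesture_detected != self.active:
            self.active = gesture_detected
            self.on_change(self.active)


class LatchingControl:
    """Switch: each discrete gesture event advances a state (e.g. jump)."""
    def __init__(self, num_states, on_change):
        self.state = 0
        self.num_states = num_states
        self.on_change = on_change

    def trigger(self):
        self.state = (self.state + 1) % self.num_states
        self.on_change(self.state)


class ContinuousControl:
    """Knob: tracks a smoothed continuous value (e.g. lean angle)."""
    def __init__(self, on_change, smoothing=0.8):
        self.value = 0.0
        self.smoothing = smoothing
        self.on_change = on_change

    def update(self, raw_value):
        # One-pole smoothing so the audio parameter doesn't jitter frame to frame.
        self.value = self.smoothing * self.value + (1 - self.smoothing) * raw_value
        self.on_change(self.value)
```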
Gestures such as the Psi pose, T pose, and squat act as momentary switches. When the user assumes one of those positions, an audio effect or a note is turned on; when the user leaves the position, the effect or note is turned off.
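As an illustration only (this is not necessarily how the demo recognizes poses), a T pose can be approximated from skeleton joint positions: both wrists near shoulder height, with the arms stretched wide. The joint names, units, and thresholds below are assumptions; in practice the positions would come from a body tracker such as the Azure Kinect Body Tracking SDK:

```python
def is_t_pose(joints, height_tol=0.15, min_span=1.0):
    """Rough T-pose test on a dict of joint name -> (x, y, z) in meters, y up."""
    lw, rw = joints["wrist_left"], joints["wrist_right"]
    ls, rs = joints["shoulder_left"], joints["shoulder_right"]
    # Both wrists roughly level with their shoulders...
    level = (abs(lw[1] - ls[1]) < height_tol and
             abs(rw[1] - rs[1]) < height_tol)
    # ...and the arms stretched out sideways.
    span = abs(lw[0] - rw[0]) > min_span
    return level and span

# Wired to a momentary control from the sketch above, polled every frame:
#   fx = MomentaryControl(lambda on: print("effect", "on" if on else "off"))
#   fx.update(is_t_pose(current_joints))
```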
Gestures such as jumps and swipes act as latching switches. In this demo, the user jumps to turn on the drum sequencers. The user can then swipe left and right to select which of the four drum patterns they would like to hear. Jumping again turns on a synthesizer playing a melody. The user can then swipe up to generate a new melody. One more jump and everything turns off.
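That jump behavior is essentially a small state machine advanced by a discrete gesture event. Here is one way it could be sketched, with the jump detected as a rising edge in ankle height (the threshold and the y-up coordinate convention are my assumptions, not values from the demo); the swipes would fire similar one-shot events that simply step a pattern index, e.g. pattern = (pattern + 1) % 4:

```python
class JumpLatch:
    """Cycles the demo's latched states: off -> drums -> drums + synth -> off."""
    STATES = ("off", "drums", "drums_and_synth")

    def __init__(self, on_change, liftoff=0.12):
        self.state = 0
        self.on_change = on_change   # callback into the audio engine
        self.liftoff = liftoff       # meters both feet must rise to count as a jump
        self.baseline = None         # standing ankle height, sampled on the first frame
        self.airborne = False

    def update(self, left_ankle_y, right_ankle_y):
        lowest = min(left_ankle_y, right_ankle_y)
        if self.baseline is None:
            self.baseline = lowest   # assumes the user starts out standing
        in_air = lowest > self.baseline + self.liftoff
        if in_air and not self.airborne:   # rising edge: count one jump
            self.state = (self.state + 1) % len(self.STATES)
            self.on_change(self.STATES[self.state])
        self.airborne = in_air
```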
Gestures such as zoom, rotate, and lean are particularly interesting because they can act as continuous controllers. In this demo, the user can pan the drums by leaning left and right, and control the resonance and cutoff of the synth using the zoom and rotate gestures.
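A continuous controller is just a per-frame mapping from a body measurement to a parameter range. As one plausible example (the function name, conventions, and maximum lean angle are mine), leaning could be turned into a stereo pan like this; zoom and rotate could be mapped the same way, e.g. hand-to-hand distance to cutoff and the angle of the line between the hands to resonance:

```python
import math

def lean_to_pan(pelvis, neck, max_lean_deg=20.0):
    """Map sideways lean to a stereo pan position in [-1.0, 1.0].

    pelvis and neck are (x, y, z) joint positions in meters, y up (an
    assumed convention). The tilt of the pelvis-to-neck vector away from
    vertical, projected onto the x/y plane, becomes the pan position.
    """
    dx = neck[0] - pelvis[0]
    dy = neck[1] - pelvis[1]
    angle_deg = math.degrees(math.atan2(dx, dy))   # 0 when standing upright
    return max(-1.0, min(1.0, angle_deg / max_lean_deg))

# Fed through the smoothing knob from the earlier sketch:
#   pan = ContinuousControl(lambda v: print("drum pan", v))
#   pan.update(lean_to_pan(joints["pelvis"], joints["neck"]))
```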
Altogether, this creates a powerful and fun UI for interacting with a generative music system. Best of all, as the user plays with the system and begins to gesture to the rhythm, the user is essentially dancing! ;)
This project, fully realized, would take on two forms:
- As an interactive installation open to all participants
- As a performance, where the system is tailored to the performer and the performer is an expert with the system
The scale of this project could vary in the number of users/performers, the number of Kinects, and the variety of interactive systems. Some possible configurations include:
- Multiple people in front of one Kinect interacting with the same system
- Multiple people in front of multiple Kinects interacting with the same system (networked; one way to wire this up is sketched below)
- Multiple people in front of multiple Kinects interacting with different systems
- A single person in front of multiple Kinects interacting with different systems
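For the networked configurations, one minimal approach (a sketch with made-up addresses; in a real show, OSC over UDP would be a natural choice for this kind of control traffic) is for each Kinect's host machine to broadcast its gesture events as small UDP datagrams that every instance of the music system consumes:

```python
import json
import socket

# Hypothetical peer machines running instances of the music system.
PEERS = [("192.168.0.11", 9000), ("192.168.0.12", 9000)]
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def broadcast_gesture(kinect_id, gesture, value):
    """Send one gesture event from this Kinect's machine to every peer."""
    msg = json.dumps({"kinect": kinect_id,
                      "gesture": gesture,
                      "value": value}).encode("utf-8")
    for peer in PEERS:
        sock.sendto(msg, peer)

# e.g. a lean event from Kinect 0:
#   broadcast_gesture(0, "lean", -0.4)
```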
The visuals in the current demo serve only to inform the user of the changes they are making to the interactive system. In the fully realized version, the visuals would take on a more important role, featuring lush abstract visualizations of the generated audio and a sci-fi, psychedelic representation of the user/performer.