The audio is built by using the Kinect to play and loop synths and other
sound-generating VSTs through Chris' software "Kinectar". Alongside the
Ableton Live instruments there are backing drum loops and buildup sounds,
triggered via foot switches (an FCB1010), which provide an extra layer to
the tracks that can't be controlled live with the Kinect alone. Scripted
sounds are kept to an absolute minimum, with the flow of the pieces and
the layering completely controlled by Chris.
The motion-control data, along with audio analysis, feeds into the visual system, which renders audio-reactive abstract visuals in real time. The visuals consist mostly of generative abstract geometry, glitchy shaders and procedural animation produced in Unity.
The audio backing, looped and live content is represented on screen using FFT data, analyzed on the audio computer and sent via OSC to the visual computer.
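As a rough illustration of that audio-to-visuals link, the sketch below analyses one frame of audio into FFT magnitude bands, packs them into a raw OSC message and sends it over UDP to the visual computer. This is a minimal stdlib-only sketch under assumptions, not the actual Kinectar/Unity code: the address "/fft", the band count and the port 9000 are all hypothetical.

```python
import math
import socket
import struct

def osc_message(address: str, floats: list) -> bytes:
    """Build a raw OSC message: padded address, type-tag string, big-endian floats."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode())
    msg += pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)  # 32-bit big-endian float
    return msg

def fft_magnitudes(frame: list, bands: int = 8) -> list:
    """Naive DFT magnitude per band; fine for a sketch, too slow for real time."""
    n = len(frame)
    mags = []
    for k in range(bands):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im) / n)
    return mags

if __name__ == "__main__":
    # 256-sample test frame: a pure tone sitting in bin 4, so band 4 dominates
    frame = [math.sin(2 * math.pi * 4 * t / 256) for t in range(256)]
    mags = fft_magnitudes(frame)
    packet = osc_message("/fft", mags)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, ("127.0.0.1", 9000))  # visual computer's OSC port (assumed)
    print(max(range(len(mags)), key=mags.__getitem__))  # -> 4 (the tone's band)
```

In practice a setup like this would use a real-time FFT (e.g. NumPy's) and an OSC library, but the packet layout above is what actually travels between the two machines.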
There is NO post-production on this video other than editing between the cameras, and between the audio captured by the cameras and the direct feed. This is literally the performance from start to finish.