This thesis concerns the analysis of motion for active music applications, in which motion controls music in real time.
Motion data are derived from accelerations measured in (Euclidean) 3D space by a single accelerometer. To capture motions on different time scales, the sensor data streams must first be calibrated and segmented as a preprocessing step for analysis.
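The thesis does not give the segmentation procedure at this point; as a minimal sketch, segmentation of a calibrated 3-axis acceleration stream into fixed-length windows could look as follows (the function name, the hop parameter, and the array layout are assumptions for illustration):

```python
import numpy as np

def segment_stream(samples, segment_length, hop=None):
    """Split a calibrated acceleration stream (N x 3 array, one row per
    sample, one column per axis) into fixed-length segments.
    hop defaults to segment_length, i.e. non-overlapping windows."""
    hop = hop or segment_length
    segments = [samples[i:i + segment_length]
                for i in range(0, len(samples) - segment_length + 1, hop)]
    if not segments:
        return np.empty((0, segment_length, 3))
    return np.stack(segments)
```

Varying `segment_length` here corresponds to analyzing motions on different time scales, as described above.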
For sensor data analysis, a real-time, configurable motion classifier has been implemented. The datasets for the experiments with this classifier consist of two equally sized categories of pre-captured acceleration data. Classification performance has been evaluated over a range of segment lengths (i.e. motion time scales), with each length corresponding to a unique dataset.
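The classifier's internals are not specified here; purely as a hedged sketch of two-category segment classification, a nearest-centroid scheme over flattened acceleration segments might look like this (all names are hypothetical, and the actual classifier in the thesis may differ entirely):

```python
import numpy as np

def train_centroids(segments_a, segments_b):
    """Compute one mean feature vector (centroid) per category from
    two equally sized sets of acceleration segments."""
    centroid_a = np.mean([s.ravel() for s in segments_a], axis=0)
    centroid_b = np.mean([s.ravel() for s in segments_b], axis=0)
    return centroid_a, centroid_b

def classify(segment, centroid_a, centroid_b):
    """Assign a segment to the nearer centroid: 0 for category A,
    1 for category B."""
    x = segment.ravel()
    dist_a = np.linalg.norm(x - centroid_a)
    dist_b = np.linalg.norm(x - centroid_b)
    return int(dist_b < dist_a)
```

Because classification operates on whole segments, the segment length directly sets the time scale of motion the classifier can distinguish.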
For postprocessing of the classifications into sound control, two quite different mapping systems have been developed, to different degrees of completeness. The two systems control different musical aspects and operate at different time intervals. The first system is trigger-based and inspired by the concept of hypermusic (Machover, 2004); however, for reasons that will become apparent, its further development has been put on hold. The second (and latest) system provides multi-channel continuous normalized parameter control.
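The details of the second mapping system follow later; as a minimal sketch of multi-channel continuous normalized parameter control, raw per-channel control values could be rescaled and clamped to the unit interval (the function name and the per-channel range representation are assumptions for illustration):

```python
def normalize_channels(values, ranges):
    """Map raw control values to normalized parameters in [0, 1],
    one per channel, clamped at the range boundaries.
    ranges: a (lo, hi) pair per channel."""
    out = []
    for v, (lo, hi) in zip(values, ranges):
        x = (v - lo) / (hi - lo) if hi > lo else 0.0
        out.append(min(1.0, max(0.0, x)))
    return out
```

Normalized parameters like these can then be routed to arbitrary musical controls (e.g. filter cutoff or amplitude) without the mapping layer needing to know each destination's native range.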