Gesture recognition
October 17, 2014 — November 12, 2018
I want to recognise gestures made with generic interface devices for artistic purposes, in real time. Is that so much to ask?
Related: synestizer, time warping, functional data analysis, controller mapping.
1 To Use
Reverse engineer Face the Music.
Gesture Variation Following has algorithms purpose-built for real-time music and video control, based (AFAICT) on a particle filter. This is a different approach from the other tools here, which press off-the-shelf algorithms into service and hit some difficulties as a result. (Source is C++; PureData and MaxMSP interfaces are available.)
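To make the particle-filter idea concrete, here is a toy numpy sketch (my own illustration, not Caramiaux's actual GVF algorithm; all names and parameters are made up): each particle hypothesises how far through a single recorded template the live gesture has got, and the filter reports the posterior mean phase.

```python
import numpy as np

def follow_gesture(template, stream, n_particles=500,
                   phase_noise=0.01, speed_noise=0.002, obs_sigma=0.1):
    """Track progress ("phase", 0..1) through a recorded template gesture.

    template: (T, d) array, one recorded example of the gesture.
    stream:   iterable of d-dimensional observation vectors.
    Yields the estimated phase after each observation.
    """
    rng = np.random.default_rng(0)
    T = len(template)
    phase = rng.uniform(0.0, 0.1, n_particles)   # where in the gesture we are
    speed = np.full(n_particles, 1.0 / T)        # how fast it is being played
    weights = np.full(n_particles, 1.0 / n_particles)

    for obs in stream:
        # Predict: advance each particle along the template, with jitter.
        speed += rng.normal(0.0, speed_noise, n_particles)
        phase = np.clip(phase + speed + rng.normal(0.0, phase_noise, n_particles),
                        0.0, 1.0)
        # Update: weight particles by how well the template matches there.
        idx = (phase * (T - 1)).astype(int)
        err = np.linalg.norm(template[idx] - np.asarray(obs), axis=1)
        weights *= np.exp(-0.5 * (err / obs_sigma) ** 2)
        total = weights.sum()
        weights = (weights / total if total > 0
                   else np.full(n_particles, 1.0 / n_particles))
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            keep = rng.choice(n_particles, n_particles, p=weights)
            phase, speed = phase[keep], speed[keep]
            weights = np.full(n_particles, 1.0 / n_particles)
        yield float(np.sum(weights * phase))     # posterior mean phase
```

GVF proper also tracks variations such as scale and speed across multiple templates; this only shows the skeleton of the idea.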
GRT: The Gesture Recognition Toolkit, more software for gesture recognition; lower-level than Wekinator (the default API is raw C++) and with more powerful algorithms, although a less beguiling demo video. It now also includes a GUI and PureData and OpenSoundControl interfaces in addition to the original C++ API.
- Interesting application: generic MYO control
EyesWeb: An inscrutably under-explained GUI(?) for integrating UI stuff somehow or other.
Wekinator: Software for using machine learning to build real-time interactive systems. (Which is to say, a workflow optimised for ad-hoc, slippery, artsy applications of cold, hard, calculating machine learning techniques.)
- See author Rebecca Fiebrink’s supporting resources and instructions.
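Wekinator talks plain OpenSoundControl, so feeding it features from anywhere is easy. A sketch using the python-osc package, assuming Wekinator's default listening port (6448) and input address (/wek/inputs):

```python
from pythonosc.udp_client import SimpleUDPClient

# Wekinator's defaults: it listens for input feature vectors
# on localhost:6448 at the OSC address /wek/inputs.
client = SimpleUDPClient("127.0.0.1", 6448)

# Send one frame of (say) three input features; Wekinator expects floats.
client.send_message("/wek/inputs", [0.2, 0.7, 0.1])
```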
Beautifully simple “graffiti” letter recogniser (nearest-neighbour search on normalised characters; a neat hack, and an argument for always starting from the simplest thing) (via Chr15m).
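Presumably the trick looks something like this (my own reconstruction, not the linked code): resample each stroke to a fixed number of points, normalise away position and scale, and take the label of the nearest training example.

```python
import numpy as np

def normalise(stroke, n=32):
    """Resample a 2D stroke to n points by arc length, centre it, and scale
    it to unit size, so distance ignores position, scale and drawing speed."""
    stroke = np.asarray(stroke, dtype=float)
    seg = np.linalg.norm(np.diff(stroke, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1] if t[-1] > 0 else 1.0
    even = np.linspace(0.0, 1.0, n)
    pts = np.column_stack([np.interp(even, t, stroke[:, k]) for k in range(2)])
    pts -= pts.mean(axis=0)                  # translation invariance
    scale = np.abs(pts).max()
    return (pts / scale if scale > 0 else pts).ravel()

def recognise(stroke, templates):
    """templates: list of (label, stroke) pairs. Return the nearest label."""
    q = normalise(stroke)
    return min((np.linalg.norm(q - normalise(s)), label)
               for label, s in templates)[1]
```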
how the Kinect recognises (spoiler: random forests)
BTW, you can also roll your own with any machine learning library; it’s not clear how much you need all the fancy time-warping tricks.
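For example, a hypothetical scikit-learn sketch with uniform resampling doing duty for the time-warping: stretch every recorded gesture to a fixed length and hand the flattened result to any off-the-shelf classifier, here a random forest, the same family of model the Kinect uses.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fixed_length(gesture, n=32):
    """Uniformly resample a (T, d) gesture to n frames and flatten it, so
    every example has the same feature length and no warping is needed."""
    gesture = np.asarray(gesture, dtype=float)
    t = np.linspace(0.0, 1.0, len(gesture))
    even = np.linspace(0.0, 1.0, n)
    return np.column_stack(
        [np.interp(even, t, gesture[:, k]) for k in range(gesture.shape[1])]
    ).ravel()

# `gestures` (a list of (T_i, d) arrays) and `labels` are hypothetical
# stand-ins for whatever training set you have managed to record.
def train_classifier(gestures, labels):
    X = np.stack([fixed_length(g) for g in gestures])
    return RandomForestClassifier(n_estimators=200).fit(X, labels)

# At run time, classify a sliding window of recent sensor frames:
#   clf.predict([fixed_length(window)])
```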
Likely bottlenecks are constructing a training data set and getting the cursed thing to work in real time. I should make some notes on that theme.
Apropos that, Museplayer can record OpenSoundControl data.
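If Museplayer doesn’t suit, a bare-bones OSC recorder is a few lines with python-osc (the port and filename here are made up for illustration):

```python
import csv
import time
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Hypothetical port and output file; point the controller's OSC output here.
PORT, OUTFILE = 9000, "gestures.csv"

with open(OUTFILE, "w", newline="") as f:
    writer = csv.writer(f)

    def log(address, *args):
        # Timestamp each incoming message so the recording can be replayed.
        writer.writerow([time.time(), address, *args])

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(log)   # catch every OSC address
    server = BlockingOSCUDPServer(("127.0.0.1", PORT), dispatcher)
    server.serve_forever()                # Ctrl-C to stop recording
```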