Note about this entry: I actually completed this blog post on Thursday evening, but I accidentally hit the "save draft" button instead of "publish." I hope that won't count against me too harshly with regard to the due date.
I've now had two peer reviews of my instrument design concept, and the feedback has been very helpful. The general design idea is solid, and I've started implementing the backend code.
The code will need to perform the following functions:
Accept input from a device. In this case, that device will be a Leap Motion controller, but I'd like to script it in such a way that a different device could be used. The input must include two continuous values and two boolean states.
Map the continuous values to pitch and volume. This will require converting the raw numbers to a scalar value from 0 to 1. The volume can then be applied directly, and the pitch value can be mapped to a frequency in Hz within a given range of frequencies.
Normalize the frequency mapping. Frequency scales exponentially with pitch: each octave doubles the frequency of the octave below it. In my interface, I want a musical half step to always cover the same spatial distance regardless of the frequency, so the mapping from position to frequency has to be exponential rather than linear.
Generate a tone of a given frequency and volume.
Enable half-step snapping. When one of the booleans is "true," pitches should be limited to half-steps on a musical scale.
Enable "holds." When the other boolean is activated, the current pitch should be held, regardless of changing frequency input, until the boolean is deactivated.
Luckily, Web Audio fits my needs very well. It's stable and functional, and I don't mind that it lacks Pure Data's drag-and-drop patching interface. The most recent developer notes for WebPd say that the next step is to completely rewrite the library on top of Web Audio, which I take to mean they plan to turn it into a graphical front end for Web Audio development. Maybe that will come someday, but in the meantime I don't mind building my audio graph by hand.
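Generating a tone of a given frequency and volume in Web Audio comes down to an oscillator feeding a gain node. Here's a rough, browser-only sketch of the shape I have in mind; `createVoice` and `update` are names I've invented for illustration, and the 10 ms smoothing constant is a guess to avoid clicks, not a tuned value.

```javascript
// Browser-only sketch: build one continuously-running voice and return a
// function that retargets its frequency and volume on every input frame.
function createVoice(ctx) {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.type = 'sine';
  osc.connect(gain);
  gain.connect(ctx.destination);
  gain.gain.value = 0; // start silent
  osc.start();

  // freqHz comes from the pitch mapping, vol01 is the 0..1 volume scalar.
  return function update(freqHz, vol01) {
    // setTargetAtTime ramps toward the new value instead of jumping,
    // which avoids audible clicks on each controller frame.
    osc.frequency.setTargetAtTime(freqHz, ctx.currentTime, 0.01);
    gain.gain.setTargetAtTime(vol01, ctx.currentTime, 0.01);
  };
}
```

Usage would look something like `const update = createVoice(new AudioContext()); update(440, 0.8);` inside the controller's frame callback.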