K-Bow at Maker Faire on CDM
Yann Seznec caught the KMI booth at Maker Faire and shot some video of me explaining the K-Bow. Peter Kirn over at Create Digital Music did a nice writeup with the video, discussing several other precursors to our project.
One point I’d like to clarify is the use of the bow in these situations. Peter opined regarding the video of Jon Rose at STEIM:
I know what you’re thinking – you could also just hook your violin into a pickup and some distortion pedals. I think it’s really the experience of playing it that changes, though I’m just guessing, since I’m not a string player.
This really misses the entire point of what the bow is, and what it makes possible. The first thing to note is that you do need a pickup on your instrument if you want to process its sound with K-Bow control. In the STEIM video Jon is controlling EQ that sculpts the feedback created by the loop between the speakers and the pickup on his violin. He controls those EQ settings via the accelerometers on the bow. As he tilts the bow, the accelerometers change their readings relative to Earth's gravity, and each value is mapped to the gain of a three-band equalizer, one band per accelerometer axis.
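To make the tilt-to-EQ idea concrete, here is a minimal sketch of that kind of mapping: each accelerometer axis reads roughly -1 to +1 g depending on bow orientation, and that reading is scaled to a gain for one EQ band. The function names and dB range are illustrative assumptions, not the K-Bow software's actual API.

```python
def tilt_to_gain(axis_g, min_db=-24.0, max_db=12.0):
    """Map one accelerometer axis reading in g (-1..+1) to a band gain in dB."""
    g = max(-1.0, min(1.0, axis_g))      # clamp to the sensor range
    norm = (g + 1.0) / 2.0               # normalize to 0..1
    return min_db + norm * (max_db - min_db)

def three_band_gains(accel_xyz):
    """One EQ band per accelerometer axis, as in the STEIM example."""
    return [tilt_to_gain(a) for a in accel_xyz]

# A level axis (~0 g) sits mid-range; full tilt pins a band high or low.
print(three_band_gains((0.0, 1.0, -1.0)))    # [-6.0, 12.0, -24.0]
```

As the player tilts the bow through a phrase, the three gains sweep continuously, which is how the feedback gets sculpted in real time.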
The K-Bow lets you control parameters of your synthesis or effects processing live, in conjunction with your playing, all seamlessly from the instrument. While our software gives you a number of effects (most of what you'd find in a standard guitar processing setup), the real power of this interface is that you can modify them musically in performance. For example, if you wanted a pitch shift governed by where your bow sits on the strings, it is easy to set that up. That would let you make your instrument sound like a bass when the frog is near the strings, or like a…erm…really tiny violin when you are near the tip of the bow.
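A hedged sketch of that bow-position-to-pitch-shift mapping: assume the software reports where the hair contacts the string as 0.0 (frog) to 1.0 (tip), and that we map it linearly to semitones of shift. The range and names are illustrative, not the shipping software's parameters.

```python
def position_to_semitones(pos, frog_shift=-12.0, tip_shift=24.0):
    """Linearly map bow position (0 = frog, 1 = tip) to a pitch shift in semitones."""
    pos = max(0.0, min(1.0, pos))            # clamp to the sensed range
    return frog_shift + pos * (tip_shift - frog_shift)

print(position_to_semitones(0.0))   # -12.0: an octave down, the "bass"
print(position_to_semitones(1.0))   # 24.0: two octaves up, the tiny violin
```

Because the mapping tracks a gesture the player is already making, the effect changes with the music rather than requiring a free hand on a pedal or knob.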
What our software, coupled with the bow and your instrument, allows you to do is set up a network of interactions for traversing a set of pre-curated sonic possibilities. Using conditional statements, a hierarchy of presets, and physical gesture, a composition can progress along a variety of linear and non-linear paths, controlled directly by the performer live during performance. This frees the music from more standard timelines and puts more compositional control back in the hands of the performing musician.
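The conditional preset traversal above can be sketched as a tiny state machine: a gesture advances, jumps, or holds among presets. The preset names, gesture fields, and thresholds here are invented for illustration; the actual software's preset logic is richer.

```python
PRESETS = ["intro", "drone", "solo", "coda"]

def next_preset(current, bow_speed, tilt):
    """Conditionally pick the next preset from live gesture data."""
    i = PRESETS.index(current)
    if bow_speed > 0.8:                          # fast stroke: step forward
        return PRESETS[min(i + 1, len(PRESETS) - 1)]
    if tilt < -0.5:                              # strong downward tilt: jump to the ending
        return "coda"
    return current                               # otherwise hold the current preset

print(next_preset("intro", bow_speed=0.9, tilt=0.0))   # "drone"
print(next_preset("drone", bow_speed=0.2, tilt=-0.7))  # "coda"
```

The performer can play through the list in order, leap to another section, or stay put, all from gestures that are part of the playing itself.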
And this is only for one player. When you combine multiple musicians, and allow the meta-information generated by their performance interfaces to cross-modulate across the entire ensemble, you create an entirely new kind of electronic ensemble performance, one where the musicians are linked in far more intimate ways than simply listening and responding to each other's musical vernacular.
Imagine the pressure the violinist applies to her strings controlling the filter frequency of the viola player's sound. Or the notes the cello plays transposing the entire ensemble into that key. These are just two very simple examples, but they illustrate some of what is possible with this kind of integrated synthesis control from traditional instruments.
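Those two ensemble routings might look like this in sketch form. Every name, range, and routing here is an assumption made for illustration; the point is only that one player's gesture data becomes another player's control signal.

```python
def violin_pressure_to_viola_cutoff(pressure, lo_hz=200.0, hi_hz=8000.0):
    """Map the violinist's bow pressure (0..1) to the violist's filter cutoff in Hz."""
    p = max(0.0, min(1.0, pressure))
    return lo_hz + p * (hi_hz - lo_hz)

def cello_note_to_transpose(midi_note, reference=60):
    """Derive an ensemble transposition (in semitones) from the cello's note."""
    return (midi_note - reference) % 12      # pitch class offset from the reference key

print(violin_pressure_to_viola_cutoff(0.5))  # 4100.0
print(cello_note_to_transpose(67))           # 7: everyone shifts toward G
```

In a real setup these mappings would run continuously over the ensemble's shared data stream, so the modulation happens as a side effect of ordinary playing.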
And we have even started talking about video control…