This next project was inspired by a fellow I met in the Atlanta airport. He was running an application that took webcam visuals and turned them into rather manic classical music. After about five minutes of watching his screen and hearing the faint tones emanate from his MacBook, I asked him what he was doing. That sparked an hour-long conversation about sonification, or more generally, the conversion of one type of stimulus into another. From that conversation I came up with the idea of letting the user draw on a computer, but instead of using a normal, boring interface like a mouse or a tablet, they would use their voice and all the noises they could muster.
Shown above is a block diagram of the system. It’ll take in live audio and parse it to find the current loudest frequency and its amplitude. Those two data points will then be scaled and mapped to a set of variables controlling the “brush” the user draws with. Lastly, the mapped values will be used to draw the simulated brush, and the artwork will be shown live to the user, closing the feedback loop.
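To make the mapping stage a little more concrete, here’s a minimal sketch in Python. The pitch range, the normalized amplitude range, and the brush variables themselves (vertical position and width) are all my assumptions here, not settled choices:

```python
# Rough sketch of the mapping stage. VOICE_MIN/VOICE_MAX are guesses
# at a usable human vocal range, and the brush variables are just the
# ones I expect to need.
VOICE_MIN, VOICE_MAX = 80.0, 1000.0  # assumed pitch range in Hz

def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] to [out_lo, out_hi], clamped."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return out_lo + t * (out_hi - out_lo)

def map_to_brush(freq_hz, amplitude):
    """Turn the loudest frequency and its amplitude into brush controls."""
    return {
        # Higher pitch -> higher on a 480px-tall canvas (placeholder size).
        "y": scale(freq_hz, VOICE_MIN, VOICE_MAX, 480, 0),
        # Louder sound -> fatter brush, assuming amplitude is normalized 0..1.
        "size": scale(amplitude, 0.0, 1.0, 1, 40),
    }
```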
To pull this off, I’m going to need to figure out:
- how to parse live audio from a microphone [a first sketch of this is below, after the list]
- how to map the live sound data and clean it up [noise is always an issue]
- how to draw the “brush”
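As a starting point on the first two bullets, here’s a rough sketch assuming Python with the numpy and sounddevice libraries (any audio library that hands you raw sample blocks would work the same way). The block size, smoothing factor, and print loop are placeholder choices:

```python
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100  # samples per second
BLOCK_SIZE = 2048    # samples per analysis window (placeholder)

def loudest_frequency(block: np.ndarray, rate: int) -> tuple[float, float]:
    """Return (frequency_hz, amplitude) of the strongest bin in the block."""
    window = block * np.hanning(len(block))        # taper to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(window))         # magnitude spectrum
    freqs = np.fft.rfftfreq(len(block), 1 / rate)  # bin index -> Hz
    peak = int(np.argmax(spectrum[1:])) + 1        # skip bin 0 (DC offset)
    return freqs[peak], spectrum[peak]

def main():
    with sd.InputStream(samplerate=SAMPLE_RATE, channels=1,
                        blocksize=BLOCK_SIZE) as stream:
        smoothed_amp = 0.0
        while True:  # Ctrl-C to stop
            block, _ = stream.read(BLOCK_SIZE)
            freq, amp = loudest_frequency(block[:, 0], SAMPLE_RATE)
            # Exponential moving average to tame frame-to-frame noise.
            smoothed_amp = 0.8 * smoothed_amp + 0.2 * amp
            print(f"{freq:7.1f} Hz  amp {smoothed_amp:8.1f}")

if __name__ == "__main__":
    main()
```

The exponential moving average at the end is one cheap way to attack the noise problem from the second bullet: it smooths out single-frame spikes at the cost of a little lag, and the 0.8/0.2 split is just a starting point to tune by ear.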