Jesse McFadden
7130 Independent Project - Spring 2022
Interactive Sound Design Using Arduino, Max MSP, and Modular Synth for Hengameh Fallah's Thesis Project
Project Goals:
To create a sonically dynamic environment whose sounds and textures change according to how a distance sensor affixed to a performer reads other bodies around them and the architecture of the performance space.
For the dynamics of the sound and interaction system to tie in directly with the theme of the piece.
Technical Process:
The first step was getting Max MSP to read data input via Arduino hardware and software. I had already learned how to convert the data to MIDI range and send it out the MIDI port to my modular synth in my final project for last semester's Max class. So for this project, I was excited to get meaningful data from the environment into Max with the use of sensors, knowing I could then use it to control my modular synth system.
Initial test control of modular synth w/ Arduino
Screen recording of a Max MSP patch in action, receiving Arduino accelerometer data and sending it out as MIDI CC to control the sound on the modular synth
The next step was to implement the distance sensor on the Arduino Uno board in place of the accelerometer I had been working with, since Hengameh and Arty decided distance sensing would be a good interaction factor for this project.
Distance sensor attached to Arduino Uno
Max MSP patch for receiving distance sensor data, interpreting it, and sending it out the MIDI port
Various sensor test videos:
Sound and Concept:
We decided to use the sound of Arty's voice, vocals, and body as the main audio source material. We thought the sound of the body and voice would tie in appropriately with the theme of the project regarding power structures, such as religion or government, controlling women's bodies. Thematically speaking, the tension, noise, and stress on the system increase as anyone comes close to the performer. With the way it is all patched and programmed, this results in noisier, more manic, bouncing, feedback-creating, and shrill-sounding audio material being created as the personal space of the performer is encroached upon. The sound becomes more intense and, to me, less "tolerable." This serves as a metaphor for autonomy of the body and personal space.
To start creating the palette, set the tone, and offer sounds for Hengameh to use in a teaser video for the performance, I used this system to create this audio.
Interaction System Description:
I have the distance sensor going to the Arduino Uno board. I’m converting that data into inches in the Arduino programming environment. From there it is sent to Max MSP.
In Max MSP, the inches are scaled to 0-127, the numerical range of MIDI data. The Max patch also features a smoother to make the data less jumpy. From this point, increasing and decreasing (inverted) values are created from proximity to the distance sensor and sent out the MIDI port. This gives you the option to have certain MIDI data increase as you get closer to the sensor, or decrease as you get closer. These are available concurrently (sent out on two separate MIDI channels), can be used at the same time, and can be sent to multiple destinations. In this project, we decided to use only the data in which the values increase toward 127 as you get closer to the sensor.
In the trigger area, there are three different interfaces for creating triggers with different behaviors that react to the manner in which you move closer to and further from the distance sensor. These are all available concurrently as well, sent out on three separate MIDI channels. We ultimately decided to have a metronome in Max create the triggers at a base tempo: as the distance from the sensor decreases, the tempo increases. So the Max patch creates triggers that speed up as an object or person comes into closer proximity to the sensor.
These MIDI values are all converted to control voltages by a Mutant Brain MIDI-to-CV eurorack module in the synth system and case.
On the modular end, a sampler is always randomly scrolling through our short sound files of Arty's voice and vocals, playing whichever one it is landing on at the moment the sampler module receives a trigger, with the triggers now changing speed according to distance.
Additionally, the modular is patched such that as proximity closes and the values rise, the pitch of the samples gets higher as they play, the cutoff of a low-pass filter opens, and feedback increases on a delay. I have also enjoyed using closing proximity to speed up an LFO sent to panning and to the cutoff of a low-pass filter. During the performance, I was manually mixing in oscillators (also with pitch rising with proximity) and white noise, both of which were affected by the speeding-up and slowing-down LFO controlling another filter cutoff, its speed determined by distance. The LFO also controls the panning of the sound across the speakers, so that at close proximity the sound bounces back and forth between the speakers at a dizzying speed, creating more of an audio effect than something that sounds like "panning." At further distances, with the LFO slower, the sound meanders back and forth, but it speeds up quickly as proximity is closed.
The Max patch is the centerpiece of a grander setup, playing the role of interpreter, translator, and communicator of the data flowing from the distance sensor, via the Arduino, to my modular synth system.
Software Interface:
Arduino IDE code converting sensor data into inches
Max MSP patch interpreting distance into MIDI, including the trigger section, which creates triggers from changing distances
System Diagram:
More test videos, with samples chosen: