MAX I

FINAL MAX/MSP PROJECT - Interaction Interface

Written Description:

 

My final project is an interaction interface for taking data in from various sources, scaling it to MIDI range (0-127), and sending it out a MIDI port as MIDI CC values. After that, for my purposes, I’ll be sending that data to a MIDI-to-CV converter, where the values are turned into control voltage for use with a modular synthesizer.
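To illustrate the core idea outside of Max, here is a minimal Python sketch using the mido library (not part of the actual patch); the port name, input range, and CC number are assumptions:

import mido

def scale_to_midi(value, in_min, in_max):
    # Clamp the incoming value and map it linearly onto the MIDI CC range 0-127.
    value = max(in_min, min(in_max, value))
    return int((value - in_min) / (in_max - in_min) * 127)

out = mido.open_output('IAC Driver Bus 1')  # hypothetical port feeding the MIDI-to-CV converter

# Example: a sensor reading between 0.0 and 1.0 sent out as CC 20 on MIDI channel 1.
out.send(mido.Message('control_change', channel=0, control=20, value=scale_to_midi(0.42, 0.0, 1.0)))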

The patcher has two interaction modes, selectable by a button on each side:

The first mode uses a video camera for face and motion tracking. Four MIDI CC values and two triggers/gates are generated from this system. When a face is detected, or when it stops being detected, the system sends out a signal that is interpreted as a gate. When the system suddenly detects motion, or when continuous motion stops, a trigger is generated. While continuous motion is detected, it keeps sending triggers in natural-feeling bursts. Two CC values are generated from the face’s position along an x-y axis, one from how many faces are detected, and one from an interpretation of the intensity/velocity of the motion. The velocity detection component also has a sensitivity slider for dialing it in at different distances, or for the lighting and amount of movement in different environments. In this system, where you are, where your face is, how fast you move, how many people are present, and what direction you’re all facing all generate data that could be applied to different performances or environments.
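As a rough sketch of the gate/trigger logic (again in Python rather than Max), assuming the tracking side hands us a face-detected flag and a motion amount every frame, and using short notes to stand in for the gate/trigger outputs on the MIDI-to-CV converter:

import mido

out = mido.open_output('IAC Driver Bus 1')  # hypothetical port name
prev_face = False
prev_moving = False

def on_frame(face_detected, motion_amount, sensitivity, threshold=0.1):
    global prev_face, prev_moving
    # Gate: note on when a face appears, note off when it disappears.
    if face_detected and not prev_face:
        out.send(mido.Message('note_on', note=60, velocity=127))
    elif prev_face and not face_detected:
        out.send(mido.Message('note_off', note=60))
    prev_face = face_detected
    # Trigger: a short note whenever motion starts or stops, with the
    # sensitivity slider scaling the motion amount before the threshold test.
    moving = (motion_amount * sensitivity) > threshold
    if moving != prev_moving:
        out.send(mido.Message('note_on', note=61, velocity=127))
        out.send(mido.Message('note_off', note=61))
    prev_moving = moving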

In the second interaction mode, there is an x-y-z value slider scaled to MIDI for reading interactions from the TouchOSC accelerometer on a smartphone. In this system, holding the smartphone in different positions, performing different gestures with it, or putting it in your pocket and walking around all create different types of movement, speed, and position interactions.
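For reference, here is a minimal Python sketch of that receive-and-scale step using python-osc and mido (the patch itself does this inside Max); /accxyz is the address TouchOSC uses for its accelerometer, while the port number, value range, and CC numbers here are assumptions:

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
import mido

out = mido.open_output('IAC Driver Bus 1')  # hypothetical port name

def scale(v, lo=-1.0, hi=1.0):
    # Clamp the accelerometer reading (assumed roughly -1 to 1) and map it onto 0-127.
    v = max(lo, min(hi, v))
    return int((v - lo) / (hi - lo) * 127)

def accel_handler(address, x, y, z):
    # One CC per axis; CC numbers 24-26 are arbitrary placeholders.
    out.send(mido.Message('control_change', control=24, value=scale(x)))
    out.send(mido.Message('control_change', control=25, value=scale(y)))
    out.send(mido.Message('control_change', control=26, value=scale(z)))

dispatcher = Dispatcher()
dispatcher.map('/accxyz', accel_handler)
BlockingOSCUDPServer(('0.0.0.0', 8000), dispatcher).serve_forever()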

In my case, I’m using the triggers to start different samples of sounds I like (and that I use in my music) at random, and I’m using the other values from my position and movement to affect the pitch of those samples, the cutoff of the filters they run through, the feedback of a delay they pass through, and more.

 
 

Presentation Mode

 

BACK END FULL PATCHER

 
 

System Diagram:

 

Video Examples:

 


References:

 

Face Tracking Patch: youtube.com/watch?v=o7xs-aqRwzk

Future Plans:

 

The possibilities are endless, and while one could get carried away, in just starting to experiment with this, I think more precise and clear interactions may be more powerful, in a "less is more" manner. When too many values from this interaction are sent to too many different parameters controlling the sound, it’s easy for it to become chaos with movement, and then for the sound to fade out and distill into stillness. This is powerful in its own way, but now I’m curious to see what the most effective use of these systems may be.

The content will end up mattering most, as this is just the interaction interface. With knowledge of what interactions are possible and readily available in this interface, and with custom routing available in presentation mode, it will be easy to use this system for different project prototypes and experiments. Now that this tool has been built, I can see what I’d like to use it for and build more interactions into it as I go.

I’m thinking of creating different interactive sonic environments with carefully selected samples and premeditated ideas about which modular parameters are most interesting, dynamic, and conceptually appropriate to receive the data. Another idea is making disjointed poems in which recorded words are triggered at random, with the data also controlling panning, filters, FX, and more.

On the tech side, maybe a color tracking interface is next to be built in? : )

 
 
 
 

Final Project Log 3

 

Final Project Log 2

 

4 Channel MIDI CC out Patch. Will go to MIDI-to-CV Eurorack Module.

 
 

Hexinverter Mutant Brain MIDI-to-CV Eurorack Module

 
 

Max/MSP MIDI ctlout to Mutant Brain MIDI-to-CV Converter

Using the Max/MSP ctlout object to send MIDI to the Hexinverter Électronique Mutant Brain (MIDI-to-CV) Eurorack module.

Currently this is just four sliders of MIDI CC controlled manually via mouse. In this instance they are being sent to bass filter cutoff, lead filter cutoff, delay feedback, and the speed of an LFO on the lead’s filter resonance.

Lots of potential with multing out CV and all those open trigger/gate outputs.

 

Final Project Log 1

Inspiration, resources, links etc:


Final Project Description:

 

Below is a sketch-up image and description of my final project. Though made in Max, the image is to be read as a flow chart in its current iteration, so it will be easy to build the actual components into it and to replace the pieces with the actual patches, objects, messages, etc.

My final project is a performance system patch with two components that can feed into each other as a sort of audio/video interaction system. Ideally I’ll build the whole system for the final project, but it’s possible I’ll end up focusing on one component. It is all based on a system that could be developed over time as I learn more.

1st component: I’d like to use what we learn in the upcoming classes about video tracking to create a system that turns video tracking data into MIDI. I would like to send this data as MIDI CC out of Max to a MIDI-to-CV module I’ve recently acquired, to design a system in which the video tracking data controls a modular synthesizer. The module I have has 4 MIDI CC outs and up to 12 triggers and gates, so it would be great to use all 4 CC outs and a bunch of the gates/triggers too. The video from the webcam would be tracked and the data used so that it affects the pitches and filters of oscillators as well as modulation, with gates and triggers setting off events and opening envelopes. These would all be informed by and controlled by the video tracking data. I’ve long been interested in using this kind of technology with hardware synthesizers. This is shown on the left side of the flow chart below.

2nd component: The other component is the “VJ” system. It is basically a reactive video system that takes inputs from an audio interface to affect video FX, while also having either a manual or automated video switching or fading mechanism that would switch between pre-recorded video, video synthesis, and even the same video that is being used for tracking. It would be great for it to be able to send up to 8 audio channels to different video FX in VIZZIE or potentially Vsynth. This is shown on the middle and right side of the flow chart below.

Interaction between components: The video data turned into MIDI affects the modular synth sounds. The modular synth will be fed into the audio interface, and the amplitudes of the different inputs would affect different video FX. The same video that is being used for tracking could then be fed into the VJ system and included among other sources such as pre-recorded material and/or video synthesis (tbd).
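A minimal sketch of that amplitude-to-FX idea, in Python with the sounddevice library rather than in Max, assuming an 8-channel interface; set_fx_parameter is a hypothetical stand-in for whatever ends up driving the video FX:

import numpy as np
import sounddevice as sd

def set_fx_parameter(channel, value):
    # Placeholder: in practice this would be a message to the VJ side of the patch.
    print(f'fx[{channel}] = {value:.2f}')

def audio_callback(indata, frames, time, status):
    # indata has shape (frames, channels); take one RMS level per input channel.
    levels = np.sqrt(np.mean(indata.astype(np.float64) ** 2, axis=0))
    # Scale each channel's level to roughly 0.0-1.0 and route it to its own FX parameter.
    for channel, level in enumerate(levels):
        set_fx_parameter(channel, min(1.0, level * 4.0))  # gain factor is arbitrary

with sd.InputStream(channels=8, callback=audio_callback):
    sd.sleep(10_000)  # run for ten seconds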

Depending on the scope of this project, I could end up focusing on only one component. The reactive video FX and the video tracking could continue to be fine-tuned into a well-crafted system as I learn more.

 
 
 

Max Class 5 Homework:

WARNING: SOUND GETS LOUD, DO NOT LISTEN AT A LOUD VOLUME


 

Max Class 4 Homework:

Yes, I see now that much of this could be consolidated 😅

I’ll do a revision soon.


Max Class 3 Homework:

Coming Soon 😬


Max Class 2 Homework:


Max Class 1 Homework: