Tutorial
On February 17, 2014, Professor Michael Casey and I will give a tutorial on Processing. My portion of the tutorial focuses on building a virtual implementation of the Four Step Sequencer Synthesizer depicted here.
What is a “Four Step Sequencer Synthesizer”?
- A synthesizer generates musical tones. In this example we use square wave oscillators, which sound bright compared to smooth-sounding pure sine tones.
- A sequencer plays a series of samples or sounds in sequence.
- The Four Step Sequencer Synthesizer plays four square wave oscillators in sequence. The frequencies of the oscillators are independent of each other. At any given time, only one of the four oscillators is turned on (a minimal sketch of this stepping logic appears after this list).
- The bottom four knobs of the interface control the frequencies of the four oscillators.
- The top row has respective controls for sequencer rate, on/off switch, low pass filter cut-off frequency, and gain.
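To make the sequencing idea concrete, here is a minimal Processing sketch of the stepping logic alone, with no sound: an index cycles through four frequencies at a fixed rate, and only the active step is highlighted. The frequencies, rate, and variable names are illustrative, not taken from the tutorial code.

```processing
// Conceptual sketch of the sequencing logic only (no sound yet).
// The frequencies, step rate, and names below are placeholders.
float[] stepFreqs = { 220, 330, 440, 550 };  // one frequency per knob
int currentStep = 0;      // which of the four oscillators is "on"
int stepMillis = 250;     // sequencer rate: advance every 250 ms
int lastStepTime = 0;

void setup() {
  size(400, 100);
}

void draw() {
  // advance to the next step when enough time has passed
  if (millis() - lastStepTime > stepMillis) {
    currentStep = (currentStep + 1) % stepFreqs.length;
    lastStepTime = millis();
  }
  // highlight the active step; a real oscillator would play stepFreqs[currentStep]
  background(0);
  for (int i = 0; i < stepFreqs.length; i++) {
    fill(i == currentStep ? 255 : 80);
    rect(40 + i * 90, 30, 60, 40);
  }
}
```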
Instructions
The tutorial is broken up into 5 steps. For each step, there is a link to a page having an embedded Java applet and all source code. Please read the notes below regarding Java applets and browsers.
Notes on Java applets and browsers
The pages linked below have embedded Java applets. You must have Java 7 installed to run them. Getting these applets to run is not always simple. First, set your Java security level to “Medium”. (On a Mac, this setting is under System Preferences | Java | Security.) There may also be browser-dependent problems:
- Chrome 32.0.1700.107: Google Chrome is a 32-bit browser and Java 7 is 64-bit, so Java applets will not run in Google Chrome.
- Firefox 27.0.1: The applets should run without issue.
- Safari 7.0.1: Navigate to a page with one of the applets embedded. The applet will probably not run. Go to Preferences | Security | Manage Website Settings…, select “Java” on the left, and under “Currently Open Websites” you should see “www.cs.dartmouth.edu”. Select “Allow Always” from the drop-down menu.
Step-by-Step
There are 5 steps to this tutorial. I want to emphasize that the source code does not illustrate best coding practice; in particular, there is virtually no error checking. I encourage anyone working with this code to improve it by adding error handling!
1. Visual Interface
The basics of coding for Processing are illustrated here. We build a static visual interface that loads a set of images and places them on the screen. The interface has no user interaction. Not too exciting, but it looks pretty cool!
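As a rough sketch of what this step involves (the actual tutorial code and image assets differ), a static Processing interface boils down to loading images in setup() and placing them in draw():

```processing
// Minimal static-interface sketch in the spirit of Step 1.
// The file names "panel.png" and "knob.png" are placeholders; the tutorial
// uses its own image assets (placed in the sketch's data folder).
PImage panel, knob;

void setup() {
  size(800, 400);
  panel = loadImage("panel.png");   // background image of the synth panel
  knob  = loadImage("knob.png");    // a single knob graphic, reused four times
}

void draw() {
  image(panel, 0, 0);                // draw the panel every frame
  for (int i = 0; i < 4; i++) {
    image(knob, 100 + i * 160, 280); // place the four frequency knobs in a row
  }
}
```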
2. Classes and User Interaction
We take all the knobs and controls and generalize them as classes. We add user interaction so that knobs can be turned, switches can be flipped, and LED lights can turn on and off. This step illustrates some important concepts concerning arrays, Object Oriented coding, and setting up a grid of interaction “zones” for monitoring mouse location.
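A stripped-down version of the idea, with hypothetical field names and a simplified drag-to-turn behavior rather than the tutorial's actual Knob class, might look like this:

```processing
// A bare-bones Knob class showing the "interaction zone" idea:
// each knob knows its own region and reacts only when the mouse is inside it.
class Knob {
  float x, y, r;       // center and radius of the knob's zone
  float value = 0.5f;  // normalized position, 0..1

  Knob(float x, float y, float r) {
    this.x = x; this.y = y; this.r = r;
  }

  boolean over(float mx, float my) {
    return dist(mx, my, x, y) < r;             // is the mouse inside this zone?
  }

  void drag(float dy) {
    value = constrain(value - dy * 0.01f, 0, 1); // vertical drag turns the knob
  }

  void display() {
    ellipse(x, y, r * 2, r * 2);
    float a = map(value, 0, 1, -PI, 0);          // angle of the pointer
    line(x, y, x + cos(a) * r, y + sin(a) * r);
  }
}

Knob[] knobs = new Knob[4];

void setup() {
  size(600, 200);
  for (int i = 0; i < 4; i++) knobs[i] = new Knob(100 + i * 130, 100, 40);
}

void draw() {
  background(30);
  for (Knob k : knobs) k.display();
}

void mouseDragged() {
  for (Knob k : knobs) {
    if (k.over(mouseX, mouseY)) k.drag(mouseY - pmouseY);
  }
}
```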
3. Let There be Sound!
In some ways, this step of the tutorial is the most advanced. We import an external Java sound synthesis library called Beads and modify the classes so that the knobs control parameters of the Beads class instances. Beads provides a master clock, which we use to trigger the oscillators. This step touches on several advanced concepts, such as external libraries, class extension, method overriding, and multithreading. At this point, the applet looks and behaves like the hardware synthesizer we are modeling.
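For a sense of how the pieces fit together, here is a condensed sketch along the same lines, not the tutorial's code: four square-wave WavePlayer instances, each behind its own Gain, with a Beads Clock gating which one is audible. The frequencies and clock interval are placeholders for the values the knobs would supply.

```processing
// Four square-wave oscillators stepped by a Beads Clock; only one sounds at a time.
import beads.*;

AudioContext ac;
WavePlayer[] oscs = new WavePlayer[4];
Gain[] gains = new Gain[4];
float[] freqs = { 220, 330, 440, 550 };   // placeholder frequencies

void setup() {
  size(400, 100);
  ac = new AudioContext();

  for (int i = 0; i < 4; i++) {
    oscs[i] = new WavePlayer(ac, freqs[i], Buffer.SQUARE); // square wave oscillator
    gains[i] = new Gain(ac, 1, 0.0f);                      // start silent
    gains[i].addInput(oscs[i]);
    ac.out.addInput(gains[i]);
  }

  Clock clock = new Clock(ac, 500);   // master clock, 500 ms per beat (placeholder rate)
  ac.out.addDependent(clock);         // keep the clock running with the audio
  clock.addMessageListener(new Bead() {
    public void messageReceived(Bead message) {
      Clock c = (Clock) message;
      if (c.isBeat()) {
        int step = (int) (c.getBeatCount() % 4);
        for (int i = 0; i < 4; i++) {
          gains[i].setGain(i == step ? 0.2f : 0.0f);  // only one oscillator audible
        }
      }
    }
  });

  ac.start();
}

void draw() {
  background(0);
}
```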
4. Let’s Draw Some Stuff and Make it Look Cooler
Now that we are finished modeling the hardware, we can think about adding features. In this step, we add a window that draws the waveform of the sound we are hearing. There aren’t many new concepts introduced here, except perhaps reading from buffers.
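A self-contained sketch of the waveform display idea (the oscillator here is only a stand-in signal, and the drawing code is simplified relative to the tutorial's): each frame we read the most recent block of output samples from the Beads AudioContext and draw it as a line.

```processing
// Draw the current output buffer of a Beads AudioContext as a waveform.
import beads.*;

AudioContext ac;

void setup() {
  size(512, 200);
  ac = new AudioContext();
  WavePlayer wp = new WavePlayer(ac, 220, Buffer.SQUARE);  // stand-in sound source
  Gain g = new Gain(ac, 1, 0.1f);
  g.addInput(wp);
  ac.out.addInput(g);
  ac.start();
}

void draw() {
  background(0);
  stroke(0, 255, 0);
  // connect successive samples of the most recent output block with line segments
  for (int i = 0; i < ac.getBufferSize() - 1; i++) {
    float x1 = map(i, 0, ac.getBufferSize(), 0, width);
    float x2 = map(i + 1, 0, ac.getBufferSize(), 0, width);
    float y1 = map(ac.out.getValue(0, i), -1, 1, height, 0);
    float y2 = map(ac.out.getValue(0, i + 1), -1, 1, height, 0);
    line(x1, y1, x2, y2);
  }
}
```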
5. More (Musical) Interaction
It’s all fine and dandy to use a mouse to control this musical instrument, but to actually play it we might want a more appropriate input device. This step of the tutorial illustrates how to detect a MIDI input device and use it to control the user interface (see the sketch after the note below). We also show how to wait for user interaction via keyboard input. There are surprisingly few modifications necessary to accomplish these additional features.
Note: This step assumes that we are using the Korg nanoKontrol2 as an input device. We have hard-coded the mapping between MIDI controller and graphical user interface. This is very bad coding practice!
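For reference, here is one way to wire a MIDI controller and the keyboard into a sketch, using The MidiBus library for Processing. The tutorial's own MIDI code may differ, and the controller number used below is a hypothetical nanoKONTROL2 knob mapping, hard-coded exactly the way the note above warns against.

```processing
// Hypothetical MIDI + keyboard input sketch using The MidiBus library.
import themidibus.*;

MidiBus midi;
float knobValue = 0.5f;   // would drive one of the on-screen knobs

void setup() {
  size(400, 100);
  MidiBus.list();                    // print available MIDI devices to the console
  midi = new MidiBus(this, 0, -1);   // open the first input device, no output
}

void draw() {
  background(0);
  fill(255);
  rect(20, 40, knobValue * (width - 40), 20);  // crude visual of the knob value
}

// Called by The MidiBus whenever a control-change message arrives.
void controllerChange(int channel, int number, int value) {
  if (number == 16) {                 // hypothetical CC number for one knob
    knobValue = value / 127.0f;       // MIDI CC range 0..127 -> 0..1
  }
}

void keyPressed() {
  if (key == ' ') knobValue = 0.5f;   // keyboard interaction: reset the knob
}
```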