This is Part II in a series of articles describing my C# Synth Toolkit. Part I gave us an overview of the toolkit. In this part, we'll take a more hands-on approach and create a very simple synthesizer. This synthesizer will have a single oscillator per voice. The oscillator will be capable of synthesizing one of three waveforms: sawtooth, square or triangle. It will have the ability to be panned left or right. In addition, the synthesizer will use the toolkit's chorus effect. Even though this example synthesizer will have a simple architecture, it will help us understand how to use the toolkit to create our own synthesizers.
The first step in creating our synthesizer is to write the oscillator component. Oscillators are responsible for creating a synthesizer's waveforms. We'll call it SimpleOscillator. It will derive from the StereoSynthComponent class, since its output will be in stereo, and implement the IProgramable and IBendable interfaces.
It's helpful, but not required, to create a public enumeration representing the component's parameters. This lets clients of the component know what parameters it has. Our SimpleOscillator class has only two parameters: pan position and waveform type. In addition to the enumeration representing the parameters, we will also add one representing the waveform types:
public class SimpleOscillator : StereoSynthComponent, IProgramable, IBendable
{
#region Enumerations
public enum ParameterId
{
Panning,
WaveformType
}
public enum WaveformType
{
Sawtooth,
Square,
Triangle
}
#endregion
}
Next come SimpleOscillator's fields. Each field is accompanied by a comment that explains what it represents:
public class SimpleOscillator : StereoSynthComponent, IProgramable, IBendable
{
#region Fields
#region Constants
public const int WaveformTypeCount = (int)WaveformType.Triangle + 1;
#endregion
// Pan position in the range [0, 1]; 0.5 is dead center.
private double panning = 0.5;
// The type of waveform currently being synthesized.
private WaveformType waveType = WaveformType.Sawtooth;
// The MIDI note number of the note currently being played.
private int currentNote;
// Phase accumulator used to generate the waveform.
private double accumulator = 0;
// Modulation value received from the pitch bend wheel.
private double pitchBendModulation = 0;
// Indicates whether the oscillator is currently playing.
private bool playing = false;
// Indicates whether Synthesize overwrites the buffer or adds to it.
private bool synthesizeReplaceEnabled = true;
#endregion
}
Let's look at SimpleOscillator's constructors. There are two:
public class SimpleOscillator : StereoSynthComponent, IProgramable, IBendable
{
#region Construction
public SimpleOscillator(SampleRate sampleRate, StereoBuffer buffer)
: base(sampleRate, buffer)
{
Initialize();
}
public SimpleOscillator(SampleRate sampleRate, StereoBuffer buffer,
string name)
: base(sampleRate, buffer, name)
{
Initialize();
}
private void Initialize()
{
currentNote = A440NoteNumber;
}
#endregion
}
The only difference between the two constructors is that the second one has a name parameter. It can sometimes be helpful to give a synthesizer component a name, e.g. "Amplitude Envelope," but the name is optional.
Notice the SampleRate object. I found it helpful in designing the toolkit to create a SampleRate class dedicated to representing the sample rate. All components within a synthesizer share the same sample rate. Rather than giving each component a SampleRate property that would have to be updated any time a synthesizer's sample rate changes, all components share the same SampleRate object.
When the sample rate changes, the synthesizer updates its SampleRate object's SamplesPerSecond property with the new sample rate value. This object in turn raises the SampleRateChanged event. An individual component that needs to be notified when the sample rate changes registers with this event and makes whatever updates to its internal values are necessary when the event is raised. Our SimpleOscillator class doesn't need this notification, so it simply passes the SampleRate object to its base class without registering with the event.
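For a component that did need the notification, the registration would look roughly like the following sketch. This is only an illustration; the handler signature (a plain EventHandler) and the idea of recomputing coefficients are my assumptions, not code from the toolkit:
public MyFilter(SampleRate sampleRate, StereoBuffer buffer)
: base(sampleRate, buffer)
{
// Assumed delegate type; the toolkit may define its own.
sampleRate.SampleRateChanged += new EventHandler(OnSampleRateChanged);
}
private void OnSampleRateChanged(object sender, EventArgs e)
{
// Recompute anything derived from SampleRate.SamplesPerSecond here,
// for example filter coefficients or envelope increments.
}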
The SimpleOscillator has a stereo output, so it uses the StereoBuffer object passed to its constructors. When it synthesizes its output, it writes the results to its StereoBuffer. The StereoBuffer is a lightweight wrapper around a multidimensional array (two dimensions, one each for the left and right stereo channels). The MonoBuffer is used by synth components with a monophonic output; it is a lightweight wrapper around a single-dimensional array. Both the StereoBuffer and MonoBuffer classes make it easier to manage buffers. Like the sample rate value, the buffer size value is shared by all components within a synthesizer. The buffer classes make it easier to update the buffer size because each Voice keeps track of all buffers used by its components. When the buffer size changes, a Voice can resize all buffers in use without the components having to worry about it.
Both constructors call the Initialize method. This method contains the initialization code factored out of the constructors; it's just a helper method. It initializes the currentNote field to the MIDI note number that represents A 440Hz (69). At this point, SimpleOscillator is in a valid state and is ready to be used.
The SimpleOscillator class is derived from the abstract StereoSynthComponent class, which is in turn derived from the SynthComponent class. The SynthComponent class has a number of methods and properties that we must override:
- Methods: Synthesize, Trigger, Release
- Properties: SynthesizeReplaceEnabled, Ordinal
The Synthesize method is the heart of any synth component. It is where the component synthesizes its output. The Synthesize method takes two integer parameters: offset and count. The offset parameter is a zero-based offset into the buffer; it indicates where in the buffer the component should begin synthesizing its output. The count value indicates how many samples should be synthesized. The SimpleOscillator's Synthesize method is a little involved and, quite frankly, not very pretty, so for the sake of brevity I won't reproduce it in full here. Please see the SimpleSynthDemo project included in the download for the details.
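To give a feel for the method's shape, though, here is a heavily simplified sketch of a naive (non-band-limited) version. The Buffer member and its Left and Right arrays of doubles are assumed names, and treating pitchBendModulation as an offset in semitones is also an assumption; the real method in the download differs:
public override void Synthesize(int offset, int count)
{
// Convert the MIDI note number to a frequency in Hz, applying the
// pitch bend modulation (treated here as a semitone offset).
double frequency = 440.0 *
Math.Pow(2.0, (currentNote - A440NoteNumber + pitchBendModulation) / 12.0);
double increment = frequency / SampleRate.SamplesPerSecond;
for(int i = offset; i < offset + count; i++)
{
// The phase accumulator runs from 0 to 1 and then wraps.
double sample;
switch(waveType)
{
case WaveformType.Sawtooth:
sample = 2.0 * accumulator - 1.0;
break;
case WaveformType.Square:
sample = accumulator < 0.5 ? 1.0 : -1.0;
break;
default: // Triangle
sample = 4.0 * Math.Abs(accumulator - 0.5) - 1.0;
break;
}
// Simple linear panning: 0 is hard left, 1 is hard right.
double left = sample * (1.0 - panning);
double right = sample * panning;
if(synthesizeReplaceEnabled)
{
Buffer.Left[i] = left;
Buffer.Right[i] = right;
}
else
{
Buffer.Left[i] += left;
Buffer.Right[i] += right;
}
accumulator += increment;
if(accumulator >= 1.0)
{
accumulator -= 1.0;
}
}
}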
The SimpleOscillator's Trigger and Release methods look like this:
public override void Trigger(int previousNote, int note, int velocity)
{
currentNote = note;
playing = true;
}
public override void Release(int velocity)
{
playing = false;
}
The Trigger method is called when the Synthesizer triggers a Voice in response to a MIDI note-on event. The Voice in turn triggers its components. Here, SimpleOscillator keeps track of the note that is being played. In addition, it sets a flag indicating that it is currently "playing." It ignores both the previousNote and velocity arguments.
The previousNote argument indicates which note was previously playing. This can be useful to some components. For example, a component that is responsible for portamento needs to know what note was previously playing so that it can sweep the pitch from the previous note to the current one. The velocity argument indicates the velocity with which the current note was played. A component that is capable of responding to velocity would use this value to adjust its output.
Here are SimpleOscillator's overridden properties:
public override bool SynthesizeReplaceEnabled
{
get
{
return synthesizeReplaceEnabled;
}
set
{
synthesizeReplaceEnabled = value;
}
}
public override int Ordinal
{
get
{
return 1;
}
}
The SynthesizeReplaceEnabled property indicates whether a component will overwrite the values in its buffer each time it synthesizes its output. For some components, it's useful for the buffer NOT to be overwritten. For example, several components can share the same buffer; if they ADD their output to the buffer rather than overwriting it, each component's output is mixed in place with the others', which can improve efficiency.
The Ordinal value indicates the order in which the component should be sorted. As described in Part I, each Voice keeps track of its components and sorts them according to their Ordinal values. Essentially, components are organized in a directed acyclic graph. The Ordinal value of a component is the sum of the Ordinal values of all of its inputs, plus 1. Since the SimpleOscillator component does not have any inputs, its Ordinal value is 1.
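To make the rule concrete, a hypothetical component with inputs, say a filter fed by this oscillator and an LFO, might report its Ordinal like this (the field names are invented for the example):
// Hypothetical filter with two inputs. If oscillator.Ordinal and lfo.Ordinal
// are both 1, this component's Ordinal is 1 + 1 + 1 = 3, so it is sorted
// after both of its inputs.
public override int Ordinal
{
get
{
return oscillator.Ordinal + lfo.Ordinal + 1;
}
}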
The IProgramable interface represents functionality for getting and setting parameter values. Typically, a synthesizer component implements this interface to allow its parameters to be manipulated. The interface looks like this:
public interface IProgramable
{
string GetParameterName(int index);
string GetParameterLabel(int index);
string GetParameterDisplay(int index);
double GetParameterValue(int index);
void SetParameterValue(int index, double value);
int ParameterCount
{
get;
}
}
The index argument for each method is a zero-based index into an object's parameters. The GetParameterName method, not surprisingly, gets a parameter's name. GetParameterLabel gets a string representing how the parameter should be labeled. Think of the knob or switch labels on a hardware synthesizer.
For example, the label for the SimpleOscillator's panning parameter is "Left/Right." The GetParameterDisplay method gets a text representation of a parameter's value. All parameters have a value in the range of [0, 1]. However, this value may not be the most intuitive to display to a user, so it may be useful to give a parameter a more understandable text representation. For example, the SimpleOscillator's panning parameter is displayed as having the range [-1, 1], with 0 being center, -1 being hard left and 1 being hard right.
The GetParameterValue and SetParameterValue methods get and set a parameter's value, respectively. The important thing to remember here is that, as mentioned above, all parameters have the range [0, 1]. The ParameterCount property gets the number of parameters provided by the implementing class. All of this probably looks familiar to those who have programmed using VST; the IProgramable interface was based on the VST approach to handling parameters.
Let's take a look at SimpleOscillator's implementation of the GetParameterName method:
public string GetParameterName(int index)
{
#region Require
if(index < 0 || index >= ParameterCount)
{
throw new ArgumentOutOfRangeException("index");
}
#endregion
string result = string.Empty;
string name = Name;
if(!string.IsNullOrEmpty(name))
{
name = name + " ";
}
switch((ParameterId)index)
{
case ParameterId.Panning:
result = name + "Panning";
break;
case ParameterId.WaveformType:
result = name + "Waveform";
break;
default:
Debug.Fail("Unhandled parameter.");
break;
}
return result;
}
First, the method makes sure that index is in range. Second, it retrieves the SimpleOscillator's name. It's possible that the name is empty if no name was given to the component. If the name isn't empty (or null), a space is appended to it. Finally, the method casts index to the ParameterId enumeration type we created to represent the parameters and switches on that value.
The synth component's name is prepended to the name of each parameter. This helps distinguish parameters belonging to different instances of the same synth component. For example, say that you have two ADSR envelope objects, one for modulating the synth's amplitude and another for modulating its filter. The envelopes are given the names "Amplitude Envelope" and "Filter Envelope" respectively. Both envelopes will have an attack time parameter. However, the amplitude envelope's attack time parameter will have the name "Amplitude Envelope Attack Time" and the filter envelope's attack time parameter will have the name "Filter Envelope Attack Time."
Next is the GetParameterLabel method:
public string GetParameterLabel(int index)
{
#region Require
if(index < 0 || index >= ParameterCount)
{
throw new ArgumentOutOfRangeException("index");
}
#endregion
string result = string.Empty;
switch((ParameterId)index)
{
case ParameterId.Panning:
result = "Left/Right";
break;
case ParameterId.WaveformType:
result = "Type";
break;
default:
Debug.Fail("Unhandled parameter.");
break;
}
return result;
}
This method simply returns a text representation of how a parameter should be labeled. Again, it's helpful to think of the knob and switch labels typically found on hardware synthesizers.
Next is the GetParameterDisplay method:
public string GetParameterDisplay(int index)
{
#region Require
if(index < 0 || index >= ParameterCount)
{
throw new ArgumentOutOfRangeException("index");
}
#endregion
string result = string.Empty;
switch((ParameterId)index)
{
case ParameterId.Panning:
{
double position = panning * 2 - 1;
result = position.ToString("F");
}
break;
case ParameterId.WaveformType:
result = waveType.ToString();
break;
default:
Debug.Fail("Unhandled parameter.");
break;
}
return result;
}
Note the panning parameter and how it is converted to a string. All parameters are in the range [0, 1], but displaying the panning parameter using its actual value isn't very useful. So it's converted here to an intermediate value in the range [-1, 1], and that value is then converted into a string.
Finally, we have the GetParameterValue and SetParameterValue methods for getting and setting a parameter's value, respectively:
public double GetParameterValue(int index)
{
#region Require
if(index < 0 || index >= ParameterCount)
{
throw new ArgumentOutOfRangeException("index");
}
#endregion
double result = 0;
switch((ParameterId)index)
{
case ParameterId.Panning:
result = panning;
break;
case ParameterId.WaveformType:
result = (double)(int)waveType / (WaveformTypeCount - 1);
break;
default:
Debug.Fail("Unhandled parameter.");
break;
}
return result;
}
public void SetParameterValue(int index, double value)
{
#region Require
if(index < 0 || index >= ParameterCount)
{
throw new ArgumentOutOfRangeException("index");
}
else if(value < 0 || value > 1)
{
throw new ArgumentOutOfRangeException("value");
}
#endregion
switch((ParameterId)index)
{
case ParameterId.Panning:
panning = value;
break;
case ParameterId.WaveformType:
waveType = (WaveformType)(int)Math.Round(value *
(WaveformTypeCount - 1));
break;
default:
Debug.Fail("Unhandled parameter.");
break;
}
}
Notice the conversions taking place for the waveform type parameter. In the GetParameterValue method, it is converted to a value in the range [0, 1] (as required by the toolkit); for example, with three waveform types, Square (index 1) maps to 1 / (3 - 1) = 0.5. And in the SetParameterValue method, the incoming value is converted back to the enumeration representing the waveform type.
The IBendable interface represents functionality for responding to modulation generated by the pitch bend wheel. The Synthesizer receives pitch bend messages and converts them to modulation values based on its pitch bend range setting. These values are passed on to all objects that implement the IBendable interface.
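The conversion itself lives in the Synthesizer class. As a rough illustration only, and assuming the pitch bend range is expressed in semitones, mapping a 14-bit MIDI pitch bend value to a modulation value could look something like this (this is not the toolkit's actual code):
// A 14-bit pitch bend value (0-16383, with 8192 at center) is mapped to
// roughly [-pitchBendRange, +pitchBendRange] semitones.
private double ToPitchBendModulation(int pitchBendValue, double pitchBendRange)
{
double normalized = (pitchBendValue - 8192) / 8192.0;
return normalized * pitchBendRange;
}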
Here is the IBendable interface; it's very simple:
public interface IBendable
{
double PitchBendModulation
{
get;
set;
}
}
The SimpleOscillator implements this interface by simply keeping track of the pitch bend modulation. It later uses this value in its Synthesize method to modulate its pitch.
#region IBendable Members
public double PitchBendModulation
{
get
{
return pitchBendModulation;
}
set
{
pitchBendModulation = value;
}
}
#endregion
Before leaving the SimpleOscillator class, we'll give it an additional property that indicates whether it is currently playing:
public bool IsPlaying
{
get
{
return playing;
}
}
As you can see, quite a bit of work went into writing this class. Fortunately, the hardest part is over. The majority of work you'll do when using the toolkit will be in writing the synthesizer's components.
The next step is to create a class that is derived from the Voice class. It will represent a single voice within our synthesizer. Voice-derived classes orchestrate a set of synthesizer components that work together to synthesize the voice's output. Fortunately, for this example our voice class is much easier to implement than our oscillator class.
An important note about the Voice base class: it is itself a synthesizer component. It is derived from the StereoSynthComponent class, which is derived from the SynthComponent class. The Voice class overrides some of the methods and properties in the SynthComponent class while leaving others to be overridden in derived classes.
Here is the implementation of the SimpleVoice class:
public class SimpleVoice : Voice
{
#region SimpleVoice Members
#region Fields
private SimpleOscillator oscillator;
#endregion
#region Construction
public SimpleVoice(SampleRate sampleRate, StereoBuffer buffer)
: base(sampleRate, buffer)
{
Initialize(buffer);
}
public SimpleVoice
(SampleRate sampleRate, StereoBuffer buffer, string name)
: base(sampleRate, buffer, name)
{
Initialize(buffer);
}
#endregion
#region Methods
private void Initialize(StereoBuffer buffer)
{
oscillator = new SimpleOscillator(SampleRate, buffer);
AddComponent(oscillator);
AddParameters(oscillator);
AddBendable(oscillator);
}
public override void ProcessControllerMessage
(ControllerType controllerType, double value)
{
}
#endregion
#region Properties
protected override bool IsPlaying
{
get
{
return oscillator.IsPlaying;
}
}
public override bool SynthesizeReplaceEnabled
{
get
{
return oscillator.SynthesizeReplaceEnabled;
}
set
{
oscillator.SynthesizeReplaceEnabled = value;
}
}
#endregion
#endregion
}
Take a look at the Initialize helper method. It creates an instance of the SimpleOscillator class, passing it the voice's StereoBuffer. The SimpleOscillator will write its output to this buffer. It then calls the AddComponent method. This method belongs to the Voice base class. Derived classes call this method to add components to the Voice's component collection. This collection is sorted according to the Ordinal value of each component. When the Voice's Synthesize method is called, it iterates through its collection of components, calling Synthesize on each one. This ensures that components synthesize their output in the right order.
For example, say that an LFO component is modulating the frequency of an oscillator. The oscillator component will have a higher Ordinal value than the LFO, so it will come after the LFO in the Voice's collection of components. In other words, the LFO's Synthesize method will be called before the oscillator's, ensuring that the LFO's output is ready for the oscillator to use.
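The Voice base class already implements this loop for you; conceptually, it amounts to something like the following sketch (the components collection name is assumed):
// Conceptual sketch of what the Voice base class does, not its actual code.
// Because the collection is sorted by Ordinal, an LFO is always synthesized
// before the oscillator that consumes its output.
public override void Synthesize(int offset, int count)
{
foreach(SynthComponent component in components)
{
component.Synthesize(offset, count);
}
}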
The AddParameters method is passed the SimpleOscillator object. This enables the Voice base class to automate parameter handling. Rather than having to orchestrate this yourself, you simply add all objects in your voice that implement the IProgramable interface; the Voice base class as well as the Synthesizer class take care of the rest. Also, the SimpleOscillator object is passed to the AddBendable method. Whenever the pitch bend wheel is moved, its position value is converted and passed along to all IBendable objects.
We are almost finished with our synthesizer. The only thing that remains is to derive a class from the SynthHostForm class. When the application creates an instance of this class, it provides an environment in which to run our Synthesizer. All we'll do in this class is override a couple of methods and a property:
public partial class Form1 : SynthHostForm
{
public Form1()
{
InitializeComponent();
}
protected override Synthesizer CreateSynthesizer
(int deviceId, int bufferSize, int sampleRate)
{
VoiceFactory voiceFactory =
delegate(SampleRate sr, StereoBuffer buffer, string name)
{
return new SimpleVoice(sr, buffer, name);
};
EffectFactory effectFactory =
delegate(SampleRate sr, StereoBuffer buffer)
{
return new EffectComponent[] { new Chorus(sr, buffer) };
};
return new Synthesizer(
"Simple Synth",
deviceId,
bufferSize,
sampleRate,
voiceFactory,
8,
effectFactory);
}
protected override Form CreateEditor(Synthesizer synth)
{
throw new NotSupportedException();
}
protected override bool HasEditor
{
get
{
return false;
}
}
}
The Synthesizer object will be created to use our SimpleVoice class for its voices. This is done by passing the Synthesizer a delegate that returns Voice objects. In other words, all we have to do to create our own Synthesizer is pass it a delegate that creates the custom voices we have written. We also pass a second delegate that creates a collection of effects; this creates an effect chain within the synthesizer.
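Judging purely from how they are used in CreateSynthesizer above, the two factory delegate types presumably have signatures along these lines (inferred from the example rather than copied from the toolkit):
// Inferred declarations; the toolkit's actual definitions may differ.
public delegate Voice VoiceFactory(SampleRate sampleRate, StereoBuffer buffer, string name);
public delegate EffectComponent[] EffectFactory(SampleRate sampleRate, StereoBuffer buffer);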
We won't be creating an editor for this synthesizer, so the HasEditor property returns false and the CreateEditor method throws a NotSupportedException if invoked.
In this part, we've created a simple synthesizer and have become more familiar with the toolkit in the process. In Part III, we'll create a much more sophisticated synthesizer, one that has many of the bells and whistles we associate with traditional subtractive synthesizers. But for now I hope you've enjoyed the ride so far. I look forward to hearing your comments and suggestions. Thanks for your time.
- July 16, 2007 - First Version
- August 16, 2007 - Second Version