A scalable, polyphonic 8-bit AVR synthesizer employing wavetable synthesis - a summer project that exceeded my expectations.
Features:
- PPG Wave 2.2 wavetables
- 2 wavetable oscillators - independent voices or a single richer one
- 2 ASR envelope generators per voice - gain, pitch and waveform modulation
- 1 LFO per voice - triangle or square, with fade, provides pitch and waveform modulation
- Cluster operation to provide greater polyphony (this hasn't been tested yet, but is already implemented in the source code and should work™)
- 1-pole lowpass filter
- All parameters can be adjusted via MIDI controllers
- Presets stored in flash memory
- MIDI pitch bend support
- MIDI velocity sensitivity
MIDI controller mappings can be found here. If you're on Linux, you can load that file directly into midictl and use it to control µsynth right away.
If you want to build your own µsynth, the schematic is available in the hw directory.
The microcontroller runs at 20 MHz (the maximum allowed frequency at 5 V). By default it's programmed to work with the MCP4921, a 12-bit SPI DAC, and outputs samples at 28 kHz. MIDI commands are received via UART0 at 31250 baud. Pins PD2, PD3 and PD4 drive status LEDs. I won't go into much detail here, because I think the source code is documented quite well.
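For illustration, here's a rough sketch of what such a peripheral setup could look like on the ATmega328P. The register values are derived from the numbers above, not copied from the actual source:

```c
#include <avr/io.h>
#include <avr/interrupt.h>

/* Peripheral setup sketch for a 20 MHz ATmega328P - illustrative only */
static void io_init(void)
{
	/* UART0 - MIDI input at 31250 baud: UBRR = 20e6 / (16 * 31250) - 1 = 39 */
	UBRR0 = 39;
	UCSR0B = (1 << RXEN0);                  /* enable the receiver */
	UCSR0C = (1 << UCSZ01) | (1 << UCSZ00); /* 8N1 frame format */

	/* Timer1 in CTC mode paces the sample rate: 20 MHz / 714 ≈ 28 kHz */
	OCR1A = 713;
	TCCR1B = (1 << WGM12) | (1 << CS10);    /* CTC, no prescaler */
	TIMSK1 = (1 << OCIE1A);                 /* compare match interrupt */

	/* Status LEDs on PD2, PD3 and PD4 */
	DDRD |= (1 << PD2) | (1 << PD3) | (1 << PD4);

	sei();
}
```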
The code is written mostly in C. There are a few bits of AVR assembly responsible for the more sophisticated multiplication routines, but that's it. The chip is utilized pretty well, at ~99% flash and ~77% RAM usage.
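To give an idea of what that assembly is for, here's a minimal sketch of a MUL-based fixed-point multiply - the kind of operation where hand-written assembly beats a full 32-bit product in C. It's illustrative only, not the routine from the actual source:

```c
#include <stdint.h>

/* Compute (a * b) >> 8 for a 16-bit value and an 8-bit scaler,
   using the hardware MUL instruction directly. Illustrative only. */
static inline uint16_t mul_u16_u8_shr8(uint16_t a, uint8_t b)
{
	uint16_t result;
	asm volatile(
		"mul  %A1, %2  \n\t"  /* low(a) * b  -> r1:r0       */
		"mov  %A0, r1  \n\t"  /* keep only the high byte    */
		"clr  %B0      \n\t"
		"mul  %B1, %2  \n\t"  /* high(a) * b -> r1:r0       */
		"add  %A0, r0  \n\t"
		"adc  %B0, r1  \n\t"
		"clr  r1       \n\t"  /* restore the zero register  */
		: "=&r" (result)
		: "r" (a), "r" (b)
		: "r0"
	);
	return result;
}
```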
The main loop is synced with a timer interrupt and generates one sample per iteration. Since there are too many parameters to update all of them every sample, this work is distributed evenly across 21 samples. Splitting all the 'slow' code into equal pieces can be quite tricky to get right, but it's definitely worth it, since it allows maximal utilization of processor time.
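In simplified form, with made-up function names standing in for the real update routines, the idea looks roughly like this:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the real routines */
extern void update_envelopes(uint8_t voice);
extern void update_lfo(uint8_t voice);
extern void update_filter(void);
extern int16_t generate_sample(void);
extern void dac_write(int16_t sample);
extern volatile uint8_t sample_requested;  /* set by the timer ISR */

#define SLOW_STEPS 21

void synth_loop(void)
{
	uint8_t slow_step = 0;

	while (1)
	{
		/* Run one slice of the slower control-rate work per sample */
		switch (slow_step)
		{
			case 0: update_envelopes(0); break;
			case 1: update_envelopes(1); break;
			case 2: update_lfo(0);       break;
			case 3: update_lfo(1);       break;
			case 4: update_filter();     break;
			/* ...the remaining slices are spread over the other steps... */
		}
		if (++slow_step == SLOW_STEPS)
			slow_step = 0;

		/* Audio-rate work happens every iteration */
		int16_t sample = generate_sample();

		/* Stay in sync with the ~28 kHz timer interrupt */
		while (!sample_requested);
		sample_requested = 0;
		dac_write(sample);
	}
}
```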
Perhaps the most interesting bit is the synthesis process itself. The µC has waveform and wavetable data stored in flash in the same way as it was in the PPG Wave (see here). Since the ATmega328 doesn't have enough RAM to hold all the interpolated waveforms at once, everything has to be computed on the fly. In fact, I've already described this process on my blog, so I'll just refer you there.
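The gist of it, as a rough sketch with assumed names, wave length and data layout (see the blog post for the real thing): each output sample is linearly interpolated between the two stored key waveforms that bracket the current wavetable position, read straight from flash.

```c
#include <stdint.h>
#include <avr/pgmspace.h>

#define WAVE_LEN 64  /* assumed wave length, for illustration */

extern const uint8_t waveforms[][WAVE_LEN] PROGMEM;  /* stored key waveforms */

/* Interpolate one sample on the fly between waves 'wave_a' and 'wave_b':
   blend = 0 -> wave_a, blend = 255 -> (almost) wave_b; phase < WAVE_LEN. */
static inline uint8_t wave_sample(uint8_t wave_a, uint8_t wave_b,
                                  uint8_t blend, uint8_t phase)
{
	uint8_t a = pgm_read_byte(&waveforms[wave_a][phase]);
	uint8_t b = pgm_read_byte(&waveforms[wave_b][phase]);
	int16_t diff = (int16_t)b - (int16_t)a;
	return (uint8_t)(a + (((int32_t)diff * blend) >> 8));
}
```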
The block diagram below shows the simplified structure of a single voice. The two voices can be used independently or combined into a single, richer one.
If you like µsynth and want to support my future projects, you can buy me a cup of coffee below. It will be much appreciated :)
Thanks! ❤️