LLMidi is a JUCE-based VST3 plugin that generates musical MIDI patterns using large language models (LLMs).
It can operate in two modes:
- Offline mode (Windows only) – Use a local GGUF model through an embedded llama.cpp backend.
- Online mode (Windows and macOS) – Use any web-based chatbot (ChatGPT, Claude, Gemini, etc.) to generate musical patterns without running a local model.
LLMidi translates natural-language prompts like “Piano melody in E minor” into playable MIDI patterns, either through a local LLM (offline) or a chatbot interface (online).
Internally, it parses structured event JSON and schedules MIDI playback in real time.
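The exact event schema is defined by the plugin’s parser (LlmSequenceParser) and may differ from what is shown here; the JSON below is only an illustrative sketch, and the field names (bpm, events, note, start_beat, length_beats, velocity) are assumptions:

```json
{
  "bpm": 120,
  "events": [
    { "note": 64, "start_beat": 0.0, "length_beats": 0.5, "velocity": 100 },
    { "note": 67, "start_beat": 0.5, "length_beats": 0.5, "velocity": 90 }
  ]
}
```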
- Download the compiled VST3 plugin from the releases page.
- Copy LLMidi.vst3 into: C:\Program Files\Common Files\VST3\
- Launch your DAW and rescan plugins.
LLMidi’s online mode works fully on macOS.
You can build the plugin from source (see developer section) or install a .vst3 binary if provided.
Copy LLMidi.vst3 into either:
- /Library/Audio/Plug-Ins/VST3/ (system-wide)
- ~/Library/Audio/Plug-Ins/VST3/ (per-user)
Then rescan your plugins in your DAW.
Note: Logic Pro does not support VST3. Use Ableton Live, Reaper, Cubase, Bitwig, or Studio One on macOS.
Offline mode runs a local LLM via the embedded llama.cpp backend.
- Download a compatible .gguf model from Hugging Face. Recommended example: Mistral-7B-Instruct-v0.3-Q4_K_M.gguf
- Create a folder to store models, for example: C:\Users\<you>\Documents\LLMidi\Models\
- Launch LLMidi and open the Offline tab.
- Click Load Model... and choose your .gguf model.
- Wait for the status dot to turn green (model loaded).
- Type your musical prompt and click Generate Pattern.
You’ll see a progress bar while the model generates the pattern.
When done, the plugin outputs a live MIDI pattern inside your DAW.
- The first generation using a new model will take significantly longer; this is normal. LLMidi builds a persistent cache for that specific model to speed up future generations.
- Each model has its own cache file (filename ending in .session) stored in: C:\Users\<you>\AppData\Roaming\LLMidi\
- Changing models or modifying low-level parameters (e.g., context size or static prompt) will trigger a cache rebuild. The next generation will again take longer while the cache is created.
- Once cached, later generations will be much faster.
The online mode uses your favorite chatbot instead of a local model.
- Open the Online tab.
- Type your description, e.g. “Fast jazz drum groove in 7/8”.
- Click Copy Prompt.
- Paste it into your chatbot (ChatGPT, Claude, Gemini, etc.).
- Copy the chatbot’s pure JSON output (no code blocks or markdown).
- Paste it into the Response box in LLMidi.
- Click To MIDI to import the pattern into your DAW.
- Online mode is platform-independent, requires no local model, and can leverage much larger LLMs (tens of billions of parameters).
- Larger online models usually generate more coherent and musically aware patterns, especially for complex harmonic progressions.
- However, note that cloud LLMs consume significant energy resources. If you’re experimenting heavily, consider using offline mode to reduce environmental impact.
LLMidi is a MIDI-generating plugin. After a sequence is ready, you can record or render its MIDI output.
- Generate the sequence in LLMidi.
- Set the MIDI Output Port (e.g. Port 1).
- Place a dummy note in the piano roll covering the duration of the pattern.
- In the Channel Rack, right-click the plugin and choose Burn MIDI to new pattern.
- The generated notes will appear as editable MIDI in a new pattern.
You can also preview the generated notes before burning them:
- Load any synth plugin.
- Set its MIDI Input Port to the same value as LLMidi’s Output Port.
- Press play; the synth will perform the generated pattern in real time.
- Ableton Live (Windows/macOS): Create a MIDI track using LLMidi as the source, then arm and record to capture the generated notes.
- Reaper / Cubase / Bitwig / Studio One: Route the MIDI output from LLMidi to another track and record or freeze it.
- Logic Pro (macOS): Not supported, since Logic uses the AU format only (no VST3 support).
Each LLM model used in offline mode creates its own cache file in:
C:\Users\<you>\AppData\Roaming\LLMidi\
These files are reused automatically to speed up generation.
You may want to clear the cache in these cases:
- After changing model parameters (context size, prompt template, etc.)
- When a model update introduces compatibility issues
- When freeing disk space
You can clear the cache using the Settings tab in the plugin or manually delete the .session files.
To fully remove LLMidi:
- Delete LLMidi.vst3 from your VST3 plugin directory.
- Delete the cache folder:
C:\Users\<you>\AppData\Roaming\LLMidi\
LLMidi/
│
├─ Source/ → Core plugin source (processor, UI, pages)
├─ JuceLibraryCode/ → Auto-generated JUCE project code
├─ third_party/ → Local build of llama.cpp (Windows)
├─ Models/ → Optional local GGUF model directory
├─ Builds/ → Platform-specific build folders
├─ LLMidi.jucer → Projucer project definition
└─ .gitignore, .gitmodules, etc.
- JUCE framework
- Visual Studio 2022 on Windows
- Xcode on macOS (for online-only build)
- C++17 toolchain
- Clone the repo:
git clone https://github.com/DirtyBeastAfterTheToad/LLMidi.git
cd LLMidi
- Open LLMidi.jucer in Projucer.
- Add an exporter:
- Visual Studio 2022 for Windows
- Click Save and Open in IDE.
- Build the project (F6 or Build → Build Solution).
- The compiled plugin appears under:
Builds\VisualStudio2022\x64\Release\VST3\LLMidi.vst3\Contents\x86_64-win
- Copy the plugin to your system’s VST3 directory.
For macOS builds, you can exclude or disable these source files (used only for offline mode):
- LlamaRunner.*
- LlmGenAdapter.*
- BackgroundGenerator.*
- Any references to them in PluginProcessor.*
This should produce a clean online-only build that runs fully on macOS. (Untested)
For developers customizing the plugin:
- Model parameters: Edit defaults in PluginProcessor.cpp → requestLoadModelFromFile() (change n_ctx, n_batch, or the default seed). Note: changing these parameters or the built-in static prompt will invalidate the existing cache and trigger a rebuild; the next generation will take longer while the cache is recreated.
- UI layout: Adjust layout and controls in OfflinePage.cpp, OnlinePage.cpp, or SettingsPage.cpp.
- Sequence parsing & scheduling: The pipeline is implemented in LlmSequenceParser.*, MidiScheduler.*, and SequenceModel.*.
- Cache directory: Defined in PluginEditor.cpp → cacheDirPath().
The integration of AI into music creation raises legitimate ethical questions.
It’s important to acknowledge that this technology is already here and its potential use in creative work is inevitable.
The philosophy behind LLMidi is to empower musicians, not replace them.
This plugin is designed as a creative assistant, not a fully autonomous composer.
It helps you experiment, discover new rhythmic or harmonic ideas, or overcome creative blocks while leaving artistic direction, taste, and emotion firmly in human hands.
LLMidi does not attempt to produce finished songs or copyrighted imitations, and it encourages users to remain intentional and expressive in their craft.
The goal is to keep the artist at the center of creation, using AI as a flexible, transparent tool for inspiration , not as a substitute for creativity itself.
LLMidi is open-source.
You are free to modify or integrate it in your own projects, but please credit the original author if you do so.
This project includes the JUCE framework and a local build of llama.cpp, each under its own license.
© 2025 LLMidi Project
Author: DirtyBeastAfterTheToad
Free to use and modify with attribution.