Prompt-driven ElevenLabs examples for text-to-speech and speech-to-text. Each project includes:
- `PROMPT.md` — instructions for agent-driven generation
- `setup.sh` — scaffolds the `example/` directory from a shared template
- `example/` — the generated, runnable example with its own `README.md`
Shared base templates live in `templates/` (Next.js, Python, TypeScript). UI styling rules are in `DESIGN.md`.
The legacy `examples/` folder is deprecated and can be ignored for new work.
## Current examples
- Text-to-Speech Quickstart (TypeScript) — Generate an MP3 from text with the ElevenLabs JS SDK.
- Text-to-Speech Quickstart (Python) — Generate an MP3 from text with the ElevenLabs Python SDK.
- Speech-to-Text Quickstart (TypeScript) — Transcribe local audio files with Scribe v2.
- Speech-to-Text Quickstart (Python) — Transcribe local audio files with Scribe v2.
- Real-Time Speech-to-Text (Next.js) — Live microphone transcription with VAD in a Next.js app.
The general prompt-runner workflow lives in `scripts/generate-examples.sh` and is exposed as `pnpm run generate`. It requires `pnpm` and the `claude` CLI to be installed.
Install root dependencies first:

```bash
pnpm install
```

Run all example prompts:

```bash
pnpm run generate
```

Run only one example:

```bash
pnpm run generate speech-to-text/nextjs/realtime
```

Optional flags:

```bash
pnpm run generate -t 1200             # timeout per prompt in seconds (default: 600)
pnpm run generate -m opus             # model selection (default: sonnet)
pnpm run generate -v                  # verbose output
pnpm run generate -m opus -t 1200 -v  # combine flags
```

Each example has an `example/` folder with a README containing setup and run instructions. See the links in Current examples above.
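As a rough sketch of how those flag semantics could be handled, here is a hypothetical `getopts`-based parser with the documented defaults (600-second timeout, `sonnet` model, quiet output). The actual logic inside `scripts/generate-examples.sh` may differ; the function name and output format below are illustrative only.

```shell
# Hypothetical sketch of flag parsing with the documented defaults;
# the real generate-examples.sh may be implemented differently.
parse_flags() {
  local timeout=600 model=sonnet verbose=0 opt OPTIND=1
  while getopts "t:m:v" opt; do
    case "$opt" in
      t) timeout=$OPTARG ;;  # per-prompt timeout in seconds
      m) model=$OPTARG ;;    # model selection
      v) verbose=1 ;;        # verbose output
    esac
  done
  shift $((OPTIND - 1))  # drop parsed flags, leaving the example path
  echo "timeout=$timeout model=$model verbose=$verbose example=${1:-all}"
}
```

For example, `parse_flags -m opus -t 1200 -v speech-to-text/nextjs/realtime` resolves to a 1200-second timeout, the `opus` model, and verbose output for that one example, while `parse_flags` alone keeps the defaults and runs everything.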
We welcome contributions from the community. Install the pre-commit hook before submitting:
```bash
pip install pre-commit
pre-commit install
```

This project is licensed under the MIT License. See LICENSE for details.
