- go into `generate` (`cd generate`) and run `python3 -m venv venv`, then `source venv/bin/activate`, and then `pip install -r requirements.txt`
- now we can start the flask server. to do this, run `python3 transcribe.py`
- set all the environment variables listed in `.env.example` in your own `.env`, with your own values (a quick sanity-check sketch follows this list)
- we should now install the js deps with `npm i`
- inside `generate/public/background`, we must have the background video assets. At the bottom of this readme is the list of all the assets from an s3 bucket to download. Put all the videos in `generate/public/background`
- now, run `node localBuild.mjs` and boom, a video locally generated in no time (actually in some time, typically 5-7 minutes)!
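Before kicking off a 5-7 minute render, it can help to confirm your `.env` is actually filled in. The sketch below is not part of the repo; it assumes the `dotenv` package is available, and the variable names are placeholders, so swap in whatever names `.env.example` actually lists.

```ts
// checkEnv.mts — hypothetical helper, not part of the repo.
// Loads .env (assumes `dotenv` is installed) and fails fast if any
// expected variable is missing. REQUIRED uses placeholder names:
// replace them with the real keys from .env.example.
import "dotenv/config";

const REQUIRED = ["OPENAI_API_KEY", "ELEVENLABS_API_KEY", "GOOGLE_API_KEY"];

const missing = REQUIRED.filter((name) => !process.env[name]);
if (missing.length > 0) {
  console.error(`Missing env vars: ${missing.join(", ")}`);
  process.exit(1);
}
console.log("All expected env vars are set.");
```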
note: you need to create your own ElevenLabs voices and copy their voice IDs. If you want to use Joe Rogan, Jordan Peterson, Barack Obama, or Ben Shapiro's voice, you can go into `generate/voice_training_audio` to find the mp3 files to train your ElevenLabs voices with.
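If you're not sure where to copy the voice IDs from, the ElevenLabs API can list the voices on your account. A rough sketch against the public `GET /v1/voices` endpoint, assuming your key is exposed as an `ELEVENLABS_API_KEY` environment variable (adjust to whatever your `.env` uses):

```ts
// listVoices.mts — sketch for finding your ElevenLabs voice IDs.
const res = await fetch("https://api.elevenlabs.io/v1/voices", {
  headers: { "xi-api-key": process.env.ELEVENLABS_API_KEY ?? "" },
});
if (!res.ok) throw new Error(`ElevenLabs API error: ${res.status}`);

const { voices } = (await res.json()) as {
  voices: { voice_id: string; name: string }[];
};
for (const v of voices) console.log(`${v.name}: ${v.voice_id}`);
```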
this is probably the most complex API to get set up, so if you only want to generate videos with AI images instead of Google-fetched images, you only need OpenAI API credentials and not Google credentials (a minimal request sketch follows the links below).
- https://developers.google.com/custom-search/v1/introduction/
- https://programmablesearchengine.google.com/controlpanel/all
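For reference, once you have a Custom Search API key and a Programmable Search Engine ID (the `cx` value from the control panel linked above), an image search is a single GET request. A sketch, assuming the key and ID live in `GOOGLE_API_KEY` and `GOOGLE_SEARCH_ENGINE_ID` environment variables:

```ts
// imageSearch.mts — sketch of a Google Custom Search image query.
const params = new URLSearchParams({
  key: process.env.GOOGLE_API_KEY ?? "",
  cx: process.env.GOOGLE_SEARCH_ENGINE_ID ?? "",
  q: "jordan peterson podcast", // example query
  searchType: "image", // restrict results to images
  num: "5",
});

const res = await fetch(`https://www.googleapis.com/customsearch/v1?${params}`);
if (!res.ok) throw new Error(`Custom Search error: ${res.status}`);

const data = (await res.json()) as { items?: { link: string }[] };
for (const item of data.items ?? []) console.log(item.link);
```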
I have removed the assets for download. If you want your own GTA / Minecraft / etc. bottom-half video, just find some on YouTube.
- FFMPEG is not installed.
- You don't have the flask python server running (or it's not running on port 5000)
- DALL-E 3 API rate limit exceeded: each dialogue transition has an image, and the script is prompted to produce 7 dialogue transitions, but typical tier 1 OpenAI accounts can only generate 5 images per minute (a throttling sketch follows this list)
- You don't have the folders `public/srt`, `public/voice`, and `src/tmp` (a folder-creation sketch follows this list)
- You have concurrency set too high for your computer (check `remotion.config.ts`; an example follows this list)
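If you hit the DALL-E 3 rate limit above, one workaround is to space out image requests so a tier 1 account (5 images/minute) never gets ahead of the limit. A rough sketch using the `openai` Node SDK; the prompts array is a placeholder, and the repo's own image code may look quite different:

```ts
// throttledImages.mts — sketch: generate images ~13s apart to stay
// under a 5-images-per-minute rate limit on tier 1 OpenAI accounts.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const prompts = ["placeholder prompt 1", "placeholder prompt 2"]; // hypothetical

for (const [i, prompt] of prompts.entries()) {
  const result = await openai.images.generate({
    model: "dall-e-3",
    prompt,
    n: 1,
    size: "1024x1024",
  });
  console.log(`image ${i}: ${result.data?.[0]?.url}`);
  // wait ~13 seconds between requests (60s / 5 images, with a little slack)
  if (i < prompts.length - 1) await new Promise((r) => setTimeout(r, 13_000));
}
```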
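The missing-folders issue is easy to fix by hand, but a small script can create them idempotently (paths taken from the bullet above):

```ts
// makeDirs.mts — sketch: create the working folders the pipeline expects.
import { mkdirSync } from "node:fs";

for (const dir of ["public/srt", "public/voice", "src/tmp"]) {
  mkdirSync(dir, { recursive: true }); // no-op if the folder already exists
  console.log(`ensured ${dir}`);
}
```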
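For the concurrency issue, `remotion.config.ts` is where the render concurrency is set. A sketch of what a lower setting might look like; the exact import path depends on your Remotion version (newer versions use `@remotion/cli/config`), so adjust your existing config rather than copying this verbatim:

```ts
// remotion.config.ts — sketch: lower the render concurrency if your
// machine struggles. 2 is an illustrative value, not a recommendation.
import { Config } from "@remotion/cli/config";

Config.setConcurrency(2);
```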