This is my dirty little stunt to fully automate the flow of creating presentation videos. It's raw, it's not safe (shell injection?!) and it's slow. But it's good enough for now 😄.
The user of this tool provides two inputs:
slides.pdf
: A slide deck, in PDF format

script.txt
: A script, to be spoken during the slides
Given these inputs, roughly the following happens:
1. The script is split into text snippets
2. The PDF is rendered to PNG images
3. Each snippet is (sym-)linked to its corresponding page in the slides
4. Each snippet is spoken by TTS into an audio file
5. All audio files are concatenated into one audio file
6. The PNGs are concatenated into one video
7. The final video is assembled from the outputs of steps 5 and 6
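For concreteness, here is a minimal Python sketch of steps 1-4. Everything in it is an assumption for illustration: the tool may not be written in Python at all, and pdftoppm, espeak-ng, and the `parse_script` helper (sketched after the EBNF further down) merely stand in for whatever renderer, TTS engine, and parser are actually used.

```python
import subprocess
from pathlib import Path

def prepare(slides_pdf: str, script_txt: str, work: Path) -> list[tuple[int, Path]]:
    """Render the slides and speak every snippet; return (slide page, audio file) pairs."""
    work.mkdir(exist_ok=True)

    # Step 2: render the PDF to work/slide-<page>.png via poppler's pdftoppm.
    subprocess.run(["pdftoppm", "-png", slides_pdf, str(work / "slide")], check=True)

    pairs = []
    # Steps 1 and 3: parse_script (see the sketch after the EBNF) splits the script
    # and tags each snippet with the slide page it belongs to.
    for i, (slide, text) in enumerate(parse_script(Path(script_txt).read_text())):
        wav = work / f"snippet-{i:03}.wav"
        # Step 4: speak the snippet into its own audio file (espeak-ng as a stand-in TTS).
        subprocess.run(["espeak-ng", "-w", str(wav), text], check=True)
        pairs.append((slide, wav))
    return pairs
```

Steps 5-7, the ffmpeg side of things, are sketched further down.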
The script is expected to be in the following format:
--- slide=1 ---
I got something to say about this slide, uhm. It's great! Yeah, uhm. That's it.
--- slide++ ---
More greatness! Ah no, forgot.
There was something important on the first slide, let's quickly jump back.
--- slide=1 ---
There we go. The end.
Worry not, here is an example from the demo slides:
output.mp4
The snippet separator must be a line conforming to the <snippet_separator>
non-terminal of the following EBNF:
<snippet_separator> ::= <fix> " slide" <op> " " <fix>
<fix> ::= "---"
<int> ::= "-"? [0-9]+
<infix_op> ::= ("=" | "+=" | "-=")
<unary_op> ::= "++" | "--"
<op> ::= <unary_op> | (<infix_op> <int>)
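Translated into a regular expression plus a little counter arithmetic, a parser for this grammar might look like the following sketch; the `parse_script` name and the choice of Python are illustrative only, not this tool's actual code:

```python
import re

# One regex per <snippet_separator>: "---", " slide", an <op>, " ", "---".
SEPARATOR = re.compile(r"^--- slide(?:(\+\+|--)|(=|\+=|-=)(-?\d+)) ---$")

def parse_script(script: str, start: int = 1) -> list[tuple[int, str]]:
    """Split the script at separator lines into (slide page, snippet text) pairs."""
    slide, buf, snippets = start, [], []

    def flush():
        if buf:
            snippets.append((slide, " ".join(buf)))
            buf.clear()

    for line in script.splitlines():
        m = SEPARATOR.match(line)
        if not m:
            if line.strip():
                buf.append(line.strip())
            continue
        flush()
        unary, infix, n = m.groups()
        if unary == "++":
            slide += 1
        elif unary == "--":
            slide -= 1
        elif infix == "=":
            slide = int(n)
        elif infix == "+=":
            slide += int(n)
        else:  # "-="
            slide -= int(n)
    flush()
    return snippets
```

Applied to the example script above, this would yield three snippets, shown on slide pages 1, 2, and 1 respectively.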
While a snippet is being spoken, the slide page given by the current value of `slide` is shown.
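One way to get that behaviour (again just a sketch, not necessarily what this tool does) is ffmpeg's concat demuxer: list each snippet's slide image with a duration equal to that snippet's audio length, then mux the image sequence with the voice track from step 5. The hypothetical `assemble` helper below consumes the (slide page, audio file) pairs from the earlier sketch.

```python
import subprocess
from pathlib import Path

def assemble(pairs: list[tuple[int, Path]], voice: Path, work: Path, out: str = "output.mp4") -> None:
    """Show each snippet's slide for as long as its audio lasts, then add the voice track."""
    lines = ["ffconcat version 1.0"]
    for slide, wav in pairs:
        # ffprobe reports the snippet's duration in seconds.
        probe = subprocess.run(
            ["ffprobe", "-v", "error", "-show_entries", "format=duration",
             "-of", "default=noprint_wrappers=1:nokey=1", str(wav)],
            capture_output=True, text=True, check=True)
        # Note: pdftoppm may zero-pad page numbers for larger decks; glossed over here.
        lines += [f"file 'slide-{slide}.png'", f"duration {float(probe.stdout):.3f}"]
    lines.append(lines[-2])  # the concat demuxer repeats the last file without a duration
    (work / "video.ffconcat").write_text("\n".join(lines) + "\n")

    # Steps 6 and 7: turn the timed image list into a video and mux in the voice track.
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-i", str(work / "video.ffconcat"),
         "-i", str(voice), "-c:v", "libx264", "-pix_fmt", "yuv420p", out],
        check=True)
```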
The demo slide deck (demo/slides.pdf) can be rebuilt from demo/slides.md with pandoc's beamer writer:

nix shell nixpkgs#pandoc nixpkgs#texlive.combined.scheme-full --command pandoc -t beamer demo/slides.md -o demo/slides.pdf