Uses speech-to-text, natural language processing and Google Images to generate visuals for non-visual media (podcasts, music, etc.). See a video of the working system (v0.1) here.
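A minimal sketch of that pipeline, assuming the `speech_recognition` package for the speech-to-text step; `extract_keywords` and `fetch_image` are simplified, hypothetical stand-ins for the NLP and Google Images steps.

```python
# Rough sketch of the loop: microphone -> speech-to-text -> keywords -> image.
# Assumes the speech_recognition package; extract_keywords() and fetch_image()
# are simplified placeholders for the NLP and Google Images steps.
import speech_recognition as sr

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that", "on"}

def extract_keywords(text, limit=3):
    """Very naive keyword extraction: drop stopwords, keep the first few words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return [w for w in words if w and w not in STOPWORDS][:limit]

def fetch_image(query):
    """Placeholder for the Google Images lookup (the slow ~5 second step)."""
    return None  # would return an image URL or raw bytes for `query`

def main():
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        while True:
            audio = recognizer.listen(source, phrase_time_limit=10)  # grab a chunk of speech
            try:
                text = recognizer.recognize_google(audio)  # speech-to-text
            except sr.UnknownValueError:
                continue  # nothing intelligible, keep listening
            query = " ".join(extract_keywords(text))
            if query:
                image = fetch_image(query)
                # ...display `image` in the UI...

if __name__ == "__main__":
    main()
```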
The current version can only listen to audio via a microphone input. Planned improvements:

- Add the ability to listen directly to desktop audio output (i.e. capture what is being played through the speakers/headphones) instead of relying on the microphone.
- Add UI buttons for actions like 'refresh', 'pause', etc.
- Run the listener in a separate thread so we can keep listening while searching Google Images (fetching an image currently takes ~5 seconds); see the sketch below.
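A minimal sketch of that listener/fetcher split, assuming a standard producer/consumer setup with `threading` and `queue`; `transcribe_once` and `fetch_image` are hypothetical stand-ins for the existing listening and Google Images code.

```python
# Sketch of the proposed threading split: one thread keeps listening while
# another performs the slow (~5 s) Google Images fetch.
import queue
import threading

transcripts = queue.Queue()

def transcribe_once():
    """Placeholder: block on the microphone and return one transcript (or None)."""
    return None

def fetch_image(query):
    """Placeholder: the slow Google Images lookup."""
    return None

def listener():
    # Producer: never blocked by image fetches, so no audio is missed.
    while True:
        text = transcribe_once()
        if text:
            transcripts.put(text)

def fetcher():
    # Consumer: pulls transcripts and runs the slow image search off the listening path.
    while True:
        text = transcripts.get()
        image = fetch_image(text)
        # ...hand `image` to the UI here...

# Started from the main app/UI loop:
threading.Thread(target=listener, daemon=True).start()
threading.Thread(target=fetcher, daemon=True).start()
```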