- Increased accuracy: timing and overall facial movement are more natural (brows, squint, cheeks and mouth shapes)
- Smoother playback (flappy mouth be gone in most cases, even when speaking quickly)
- Works better with more voices and styles of speaking.
- This preview of the new model is a modest increase in capability; to use it, replace both model.pth and model.py with the new versions.
Download the model from Hugging Face
The NeuroSync Player allows real-time streaming of facial blendshapes into Unreal Engine 5 using LiveLink, enabling facial animation from audio input (see the sketch after the feature list below).
- Real-time facial animation
- Integration with Unreal Engine 5 via LiveLink
- Supports blendshapes generated from audio inputs
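At a high level, the flow is: load or capture audio, send it to the blendshape model (via the local or alpha API), then stream the returned frames into Unreal Engine 5 over LiveLink at the animation frame rate. The sketch below only illustrates that loop; `audio_to_blendshapes`, `stream_frame_to_livelink`, and the 60 fps pacing are hypothetical stand-ins, not the player's actual helpers.

```python
# Hypothetical end-to-end loop: audio -> blendshape frames -> LiveLink stream.
# Function names and the 60 fps pacing are illustrative assumptions.
import time


def audio_to_blendshapes(audio_bytes: bytes) -> list:
    """Stand-in for the API call that turns audio into per-frame blendshape values."""
    raise NotImplementedError("POST the audio to the local or alpha API here")


def stream_frame_to_livelink(frame: list) -> None:
    """Stand-in for the player's LiveLink sender (UDP packet to Unreal Engine 5)."""
    print(f"sending {len(frame)} blendshape values to LiveLink")


def play(audio_path: str, fps: float = 60.0) -> None:
    with open(audio_path, "rb") as f:
        frames = audio_to_blendshapes(f.read())
    for frame in frames:  # pace the frames so the face stays in sync with audio playback
        stream_frame_to_livelink(frame)
        time.sleep(1.0 / fps)
```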
To generate facial blendshapes from audio, you'll need the NeuroSync audio-to-face blendshape transformer model. You can:
- Use the hosted alpha API, or
- Host the model locally by setting up the NeuroSync Local API.
The player can connect to either the local API or the alpha API depending on your needs. To switch between the two, change the boolean value in the utils/neurosync/neurosync_api_connect.py file.
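A minimal sketch of what that switch might look like, assuming a module-level boolean plus two endpoint URLs; the names, routes, and payload format below are illustrative, not the exact contents of neurosync_api_connect.py:

```python
# Hypothetical sketch of the API switch; flag name, URLs, and payload format are assumptions.
import requests

USE_LOCAL_API = True  # flip to False to send audio to the hosted alpha API instead

LOCAL_URL = "http://127.0.0.1:5000/audio_to_blendshapes"              # assumed Local API route
ALPHA_URL = "https://example-alpha-api.invalid/audio_to_blendshapes"  # placeholder URL


def send_audio_to_neurosync(audio_bytes: bytes) -> list:
    """POST raw audio and return the generated blendshape frames."""
    url = LOCAL_URL if USE_LOCAL_API else ALPHA_URL
    response = requests.post(
        url,
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # assumed: list of frames, each a list of blendshape floats
```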
A realtime AI endpoint server that combines TTS and NeuroSync generation is also available.
It includes code for various helpful AI endpoints (STT, TTS, embedding, vision) to use with the player or in your own projects. Be mindful of licences for your use case.
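As an illustration of the combined flow (text in, speech audio plus matching blendshape frames out), the sketch below chains a TTS call into the blendshape call. Every route, port, and payload shape here is a hypothetical placeholder, not the endpoint server's documented API:

```python
# Hypothetical chaining of a TTS endpoint with the blendshape endpoint.
# All routes, ports, and payload shapes below are placeholders.
import requests

TTS_URL = "http://127.0.0.1:8000/tts"                         # assumed TTS route
NEUROSYNC_URL = "http://127.0.0.1:5000/audio_to_blendshapes"  # assumed Local API route


def speak(text: str):
    """Turn text into speech audio plus matching blendshape frames."""
    audio = requests.post(TTS_URL, json={"text": text}, timeout=60).content
    frames = requests.post(
        NEUROSYNC_URL,
        data=audio,
        headers={"Content-Type": "application/octet-stream"},
        timeout=60,
    ).json()
    return audio, frames
```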
Demo Build: Download the demo build to test NeuroSync with an Unreal project (aka, a free realistic AI companion when used with llm_to_face.py, wink).
Talk to a NeuroSync prototype live on Twitch: Visit Mai