NeuroSync Player

29/03/2025 Update to model.pth and model.py in the APIs

  • Increased accuracy: tighter timing, and the face shows more natural movement overall (brows, squint, cheeks and mouth shapes).
  • Smoother playback: the "flappy mouth" is gone in most cases, even during fast speech.
  • Works better across a wider range of voices and speaking styles.
  • This preview of the new model is a modest increase in capability and requires replacing both model.pth and model.py with the new versions.

Download the model from Hugging Face

Overview

The NeuroSync Player allows for real-time streaming of facial blendshapes into Unreal Engine 5 using LiveLink - enabling facial animation from audio input.

Features:

  • Real-time facial animation
  • Integration with Unreal Engine 5 via LiveLink
  • Supports blendshapes generated from audio inputs

NeuroSync Model

To generate facial blendshapes from audio, you'll need the NeuroSync audio-to-face blendshape transformer model. To host the model locally, set up the NeuroSync Local API.
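
As a rough sketch of what calling a locally hosted API looks like, the snippet below posts audio bytes and reads back blendshape frames. The endpoint path, port, and response fields here are assumptions for illustration, not the documented interface of the NeuroSync Local API:

```python
# Hypothetical sketch of calling a locally hosted NeuroSync API.
# The URL, endpoint path, and response format are assumptions for
# illustration only; check the NeuroSync Local API for the real ones.
import requests

LOCAL_API_URL = "http://127.0.0.1:5000/audio_to_blendshapes"  # assumed address


def audio_to_blendshapes(audio_bytes: bytes) -> list:
    """Send raw audio to the local API and return blendshape frames."""
    response = requests.post(LOCAL_API_URL, data=audio_bytes, timeout=30)
    response.raise_for_status()
    # Assumed response shape: a list of frames, each a list of blendshape values.
    return response.json().get("blendshapes", [])


if __name__ == "__main__":
    with open("example.wav", "rb") as f:
        frames = audio_to_blendshapes(f.read())
    print(f"Received {len(frames)} blendshape frames")
```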

Switching Between Local and Non-Local API

The player can connect to either the local API or the alpha API depending on your needs. To switch between the two, simply change the boolean value in the utils/neurosync/neurosync_api_connect.py file:
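
Something along these lines, where the flag name and URLs are placeholders rather than the file's actual identifiers:

```python
# utils/neurosync/neurosync_api_connect.py (illustrative excerpt)
# The flag name and URLs below are placeholders; use the names that
# already exist in the file.

USE_LOCAL_API = True  # True = local NeuroSync API, False = hosted alpha API

LOCAL_API_URL = "http://127.0.0.1:5000/audio_to_blendshapes"  # assumed local address
ALPHA_API_URL = "https://example-alpha-api.invalid/audio_to_blendshapes"  # placeholder


def get_api_url() -> str:
    """Return the endpoint the player should send audio to."""
    return LOCAL_API_URL if USE_LOCAL_API else ALPHA_API_URL
```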

12/03/2025 Local Real-Time API Toy

A real-time AI endpoint server that combines TTS and NeuroSync generation is available.

It includes code for various helpful AI endpoints (STT, TTS, embedding, vision) to use with the player or in your own projects. Be mindful of licences for your use case.
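
For example, a client for such a server might post text to a combined endpoint and get back both the synthesized audio and the matching blendshapes. The route and payload fields below are assumptions for illustration, not the server's documented API:

```python
# Hypothetical client for a combined TTS + NeuroSync endpoint.
# The route name and response fields are assumptions, not the
# endpoint server's documented API.
import base64
import requests

SERVER_URL = "http://127.0.0.1:8000"  # assumed local endpoint server


def speak(text: str):
    """Request speech audio plus matching blendshapes for a line of text."""
    resp = requests.post(
        f"{SERVER_URL}/tts_with_blendshapes",  # assumed route
        json={"text": text},
        timeout=60,
    )
    resp.raise_for_status()
    payload = resp.json()
    audio = base64.b64decode(payload["audio_base64"])  # assumed field
    frames = payload["blendshapes"]                    # assumed field
    return audio, frames
```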

Demo Build: Download the demo build to test NeuroSync with an Unreal project (aka a free, realistic AI companion when used with llm_to_face.py).

Talk to a NeuroSync prototype live on Twitch: Visit Mai
