
Commit

Update README.md
Added description of Ollama server for local LLMs
PromtEngineer authored Jun 4, 2024
1 parent 4c025fa commit 88c433a
Showing 1 changed file: README.md (2 additions, 0 deletions).
@@ -100,6 +100,8 @@ Edit config.py to select the models you want to use:
LOCAL_MODEL_PATH = os.getenv("LOCAL_MODEL_PATH")
```

If you are running an LLM locally via [Ollama](https://ollama.com/), make sure the Ollama server is running before starting Verbi.
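
   As a quick sanity check, a sketch like the following (not part of the Verbi codebase; it assumes Ollama's default listen address of `http://localhost:11434`, which you may have overridden via `OLLAMA_HOST`) can confirm the server is reachable before launching:

   ```python
   # Hypothetical helper: verify a local Ollama server is up before starting Verbi.
   import urllib.request
   import urllib.error

   OLLAMA_URL = "http://localhost:11434"  # Ollama's default address (assumption)

   def ollama_is_running(url: str = OLLAMA_URL, timeout: float = 2.0) -> bool:
       """Return True if an Ollama server responds at `url`."""
       try:
           with urllib.request.urlopen(url, timeout=timeout) as resp:
               return resp.status == 200
       except (urllib.error.URLError, OSError):
           return False

   if __name__ == "__main__":
       if ollama_is_running():
           print("Ollama server is up; safe to start Verbi.")
       else:
           print("Ollama not reachable; start it first (e.g. run `ollama serve`).")
   ```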

6. 🔊 **Configure ElevenLabs Jarvis' Voice**
- Voice samples [here](https://github.com/PromtEngineer/Verbi/tree/main/voice_samples).
- Follow this [link](https://elevenlabs.io/app/voice-lab/share/de3746fa51a09e771604d74b5d1ff6797b6b96a5958f9de95cef544dde31dad9/WArWzu0z4mbSyy5BfRKM) to add the Jarvis voice to your ElevenLabs account.
