This project showcases how to call functions in a sample implementation of Hume's Empathic Voice Interface (EVI) using Hume's TypeScript SDK. Here, we have a simple EVI that calls a function to get the current weather for a given location.
To run this project locally, ensure your development environment meets the following requirements:
To check the versions of `pnpm` and Node.js installed on a Mac via the terminal, you can use the following commands:

- For Node.js, enter `node -v` and press Enter. This command will display the version of Node.js currently installed on your system, for example `v21.6.1`.
- For pnpm, enter `pnpm -v` and press Enter. This command will show the installed version of pnpm, for example `8.10.0`.

If you haven't installed these tools yet, running these commands will result in a "command not found" message. In that case, you will need to install them first: Node.js can be installed from its official website or via a package manager like Homebrew, and pnpm can be installed via npm (which comes with Node.js) by running `npm install -g pnpm` in the terminal.
Before running this project, you'll need to set up EVI with the ability to use tools and call functions. Follow the steps below to authenticate, create a Tool, and add it to a configuration.
- Create a `.env` file in the root folder of the repo and add your API Key and Secret Key. There is an example file called `.env.example` with placeholder values, which you can simply rename to `.env`.

Note the `VITE_` prefix on the environment variables. This prefix is required for Vite to expose the environment variables to the client. For more information, see the Vite documentation on environment variables and modes.
```sh
VITE_HUME_API_KEY=<YOUR API KEY>
VITE_HUME_SECRET_KEY=<YOUR SECRET KEY>
```
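For reference, here is a minimal sketch of how these keys are typically used on the client. Hume's TypeScript SDK provides a `fetchAccessToken` helper that exchanges the API Key and Secret Key for a short-lived access token used to authenticate the EVI WebSocket; the exact wiring in this repo may differ slightly.

```ts
import { fetchAccessToken } from "hume";

// Exchange the API Key and Secret Key for a short-lived access token.
// The VITE_-prefixed variables are exposed to client code by Vite.
const accessToken = await fetchAccessToken({
  apiKey: String(import.meta.env.VITE_HUME_API_KEY),
  secretKey: String(import.meta.env.VITE_HUME_SECRET_KEY),
});
```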
See our documentation on Setup for Tool Use for no-code and full-code guides on creating a tool and adding it to a configuration.
- Create a tool with the following payload:
```sh
curl -X POST https://api.hume.ai/v0/evi/tools \
  -H "X-Hume-Api-Key: <YOUR_HUME_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "get_current_weather",
    "parameters": "{ \"type\": \"object\", \"properties\": { \"location\": { \"type\": \"string\", \"description\": \"The city and state, e.g. San Francisco, CA\" }, \"format\": { \"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"], \"description\": \"The temperature unit to use. Infer this from the users location.\" } }, \"required\": [\"location\", \"format\"] }",
    "version_description": "Fetches current weather and uses celsius or fahrenheit based on location of user.",
    "description": "This tool is for getting the current weather.",
    "fallback_content": "Unable to fetch current weather."
  }'
```
This will yield a Tool ID, which you can assign to a new EVI configuration.
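If you prefer to stay in TypeScript, the same request can be made with Hume's SDK. The sketch below assumes the SDK's `empathicVoice.tools.createTool` method and camelCase field names; verify against the current SDK reference.

```ts
import { HumeClient } from "hume";

const client = new HumeClient({ apiKey: process.env.HUME_API_KEY! });

// Mirrors the curl payload above; `parameters` is a stringified JSON schema.
const tool = await client.empathicVoice.tools.createTool({
  name: "get_current_weather",
  parameters:
    '{ "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" }, "format": { "type": "string", "enum": ["celsius", "fahrenheit"], "description": "The temperature unit to use. Infer this from the users location." } }, "required": ["location", "format"] }',
  versionDescription:
    "Fetches current weather and uses celsius or fahrenheit based on location of user.",
  description: "This tool is for getting the current weather.",
  fallbackContent: "Unable to fetch current weather.",
});

console.log("Tool ID:", tool?.id);
```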
- Create a configuration equipped with that tool:
```sh
curl -X POST https://api.hume.ai/v0/evi/configs \
  -H "X-Hume-Api-Key: <YOUR_HUME_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "evi_version": "2",
    "name": "Weather Assistant Config",
    "voice": {
      "provider": "HUME_AI",
      "name": "ITO"
    },
    "language_model": {
      "model_provider": "ANTHROPIC",
      "model_resource": "claude-3-5-sonnet-20240620",
      "temperature": 1
    },
    "tools": [
      {
        "id": "<YOUR_TOOL_ID>"
      }
    ]
  }'
```
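This step too has an SDK equivalent. As above, the method and field names are assumed from the SDK's camelCase convention and should be checked against the reference; the snippet continues the `client` sketch from the tool-creation example.

```ts
// Sketch only: mirrors the curl payload above, attaching the new tool.
const config = await client.empathicVoice.configs.createConfig({
  eviVersion: "2",
  name: "Weather Assistant Config",
  voice: { provider: "HUME_AI", name: "ITO" },
  languageModel: {
    modelProvider: "ANTHROPIC",
    modelResource: "claude-3-5-sonnet-20240620",
    temperature: 1,
  },
  tools: [{ id: tool?.id ?? "<YOUR_TOOL_ID>" }],
});

console.log("Config ID:", config?.id);
```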
- Add the Config ID to your environment variables in your `.env` file:

```sh
VITE_HUME_WEATHER_ASSISTANT_CONFIG_ID=<YOUR CONFIG ID>
```
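With the Config ID in place, the client can reference it when opening the EVI chat socket. A minimal sketch, assuming the SDK's `empathicVoice.chat.connect` method:

```ts
import { HumeClient } from "hume";

const client = new HumeClient({
  apiKey: String(import.meta.env.VITE_HUME_API_KEY),
  secretKey: String(import.meta.env.VITE_HUME_SECRET_KEY),
});

// Open the EVI WebSocket using the weather assistant configuration.
const socket = await client.empathicVoice.chat.connect({
  configId: String(import.meta.env.VITE_HUME_WEATHER_ASSISTANT_CONFIG_ID),
});
```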
- Add your Geocoding API key to your environment variables (free to use from geocode.maps.co):

```sh
VITE_GEOCODING_API_KEY=<YOUR GEOCODING API KEY>
```
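As a rough sketch of how the weather function might use this key, geocode.maps.co can resolve the spoken location into coordinates. The endpoint shape and response fields below are assumptions based on that service's public API; check its documentation and this repo's source for the exact usage.

```ts
// Hypothetical helper: resolve a location string like "San Francisco, CA"
// into latitude/longitude via geocode.maps.co.
async function geocodeLocation(
  location: string
): Promise<{ lat: string; lon: string }> {
  const url =
    `https://geocode.maps.co/search?q=${encodeURIComponent(location)}` +
    `&api_key=${import.meta.env.VITE_GEOCODING_API_KEY}`;
  const results: Array<{ lat: string; lon: string }> = await (
    await fetch(url)
  ).json();
  if (results.length === 0) throw new Error(`No match for "${location}"`);
  return { lat: results[0].lat, lon: results[0].lon };
}
```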
Below are the steps to run the project locally:

- Run `pnpm i` to install the required dependencies.
- Run `pnpm build` to build the project.
- Run `pnpm dev` to serve the project at `localhost:5173`.
This implementation of Hume's Empathic Voice Interface (EVI) is minimal, using default configurations for the interface and a basic UI to authenticate, connect to, and disconnect from the interface.
- Click the `Start` button to establish an authenticated connection and begin capturing audio.
- Upon clicking `Start`, you will be prompted for permission to use your microphone. Grant the application this permission to continue.
- Once permission is granted, you can begin speaking with the interface. The transcript of the conversation will be displayed on the webpage in real time.
- Click `Stop` when finished speaking with the interface to stop audio capture and disconnect the WebSocket.
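Under the hood, function calling works by listening for `tool_call` messages on the EVI socket and replying with a `tool_response` (or `tool_error`) message. The simplified sketch below uses message and method names from Hume's TypeScript SDK; `getCurrentWeather` is a hypothetical app-side helper, so consult this repo's source for the actual handler.

```ts
socket.on("message", async (message) => {
  // EVI emits a tool_call message when the LLM decides to invoke our tool.
  if (message.type === "tool_call" && message.name === "get_current_weather") {
    try {
      const { location, format } = JSON.parse(message.parameters);
      // Hypothetical helper: geocode the location and fetch the weather.
      const weather = await getCurrentWeather(location, format);
      // Report the result back so EVI can speak it to the user.
      socket.sendToolResponseMessage({
        toolCallId: message.toolCallId,
        content: weather,
      });
    } catch {
      // Fall back gracefully if the lookup fails.
      socket.sendToolErrorMessage({
        toolCallId: message.toolCallId,
        error: "Weather tool error",
        content: "Unable to fetch current weather.",
      });
    }
  }
});
```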