# VoiceRAG: An Application Pattern for RAG + Voice Using Azure AI Search and the GPT-4o Realtime API for Audio
This repo contains an example of how to implement RAG support in applications that use voice as their user interface, powered by the GPT-4o Realtime API for audio. We describe the pattern in more detail in this blog post, and you can see this sample app in action in this short video.
- Voice interface: The app uses the browser's microphone to capture voice input, and sends it to the backend where it is processed by the Azure OpenAI GPT-4o Realtime API.
- RAG (Retrieval Augmented Generation): The app uses the Azure AI Search service to answer questions about a knowledge base, and sends the retrieved documents to the GPT-4o Realtime API to generate a response.
- Audio output: The app plays the response from the GPT-4o Realtime API as audio, using the browser's audio capabilities.
- Citations: The app shows the search results that were used to generate the response.
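The retrieval, response, and citation features above combine into a single voice turn: transcribed audio comes in, relevant documents are retrieved, and a grounded answer goes back out along with the document ids shown as citations. The sketch below is purely illustrative (not the repo's code) and stubs both the search index and the model with plain Python:

```python
# Illustrative shape of one voice-RAG turn: question in -> retrieval ->
# grounded answer + citations out. The in-memory "kb" list stands in for
# Azure AI Search, and the string reply stands in for the GPT-4o Realtime
# response; both are assumptions for the sketch, not the repo's API.

def retrieve(question: str) -> list[dict]:
    # Stand-in for the Azure AI Search query the backend makes.
    kb = [{"id": "doc1",
           "text": "Contoso's whistleblower policy protects employees who report misconduct."}]
    words = question.lower().split()
    return [d for d in kb if any(w in d["text"].lower() for w in words)]

def answer(question: str) -> tuple[str, list[str]]:
    docs = retrieve(question)
    citations = [d["id"] for d in docs]  # ids the UI can display as citations
    # Stand-in for the model generating a response grounded on `docs`.
    reply = f"Based on {len(docs)} document(s): " + " ".join(d["text"] for d in docs)
    return reply, citations

reply, cites = answer("What is the whistleblower policy?")
print(cites)  # → ['doc1']
```

In the real app the audio never becomes text on the client; the browser streams audio to the backend and the Realtime API handles transcription and speech internally.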
The `RTClient` in the frontend receives the audio input and sends it to the Python backend, which uses an `RTMiddleTier` object to interface with the Azure OpenAI Realtime API and includes a tool for searching Azure AI Search.
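A tool like the search tool mentioned above is typically described to the model as a JSON-schema function definition plus a handler the middle tier invokes when the model calls it. The sketch below is hypothetical (the field names and the stubbed handler are assumptions, not the repo's `RTMiddleTier` API):

```python
# Hypothetical sketch of registering a "search" tool with a realtime middle
# tier: a JSON-schema description the model sees, plus a handler run when the
# model calls the tool. The handler body is a stub; real code would query
# Azure AI Search and return the retrieved chunks as grounding text.
import json

def search_handler(args_json: str) -> str:
    """Invoked when the model calls the tool; returns grounding text."""
    args = json.loads(args_json)
    # Stub: a real implementation would call Azure AI Search here.
    return f"Results for {args['query']!r}: ..."

search_tool = {
    "name": "search",
    "description": "Search the knowledge base for grounding documents.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
    "handler": search_handler,
}

print(search_tool["handler"](json.dumps({"query": "whistleblower policy"})))
```

The model decides when to call the tool; the middle tier runs the handler and feeds the results back into the session before the spoken response is generated.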
This repository includes infrastructure as code and a `Dockerfile` to deploy the app to Azure Container Apps, but it can also be run locally as long as Azure AI Search and Azure OpenAI services are configured.
You have a few options for getting started with this template. The quickest way is GitHub Codespaces, which sets up all the tools for you, but you can also set it up locally or use a VS Code dev container.
You can run this repo virtually by using GitHub Codespaces, which will open a web-based VS Code in your browser:
Once the codespace opens (this may take several minutes), open a new terminal and proceed to deploy the app.
You can run the project in your local VS Code Dev Container using the Dev Containers extension:
- Start Docker Desktop (install it if not already installed)
- Open the project
- In the VS Code window that opens, once the project files show up (this may take several minutes), open a new terminal and proceed to deploying the app.
- Install the required tools:
  - Azure Developer CLI
  - Node.js
  - Python >=3.11
    - Important: Python and the pip package manager must be in the path on Windows for the setup scripts to work.
    - Important: Ensure you can run `python --version` from a console. On Ubuntu, you might need to run `sudo apt install python-is-python3` to link `python` to `python3`.
  - Git
  - PowerShell (Windows users only)
- Clone the repo (`git clone https://github.com/Azure-Samples/aisearch-openai-rag-audio`)
- Proceed to the next section to deploy the app.
The steps below will provision Azure resources and deploy the application code to Azure Container Apps.
- Login to your Azure account:

  ```shell
  azd auth login
  ```

  For GitHub Codespaces users, if the previous command fails, try:

  ```shell
  azd auth login --use-device-code
  ```
- Create a new azd environment:

  ```shell
  azd env new
  ```

  Enter a name that will be used for the resource group. This will create a new folder in the `.azure` folder and set it as the active environment for any calls to `azd` going forward.
- (Optional) This is the point where you can customize the deployment by setting azd environment variables, in order to use existing services or customize the voice choice.
- Run this single command to provision the resources, deploy the code, and set up integrated vectorization for the sample data:

  ```shell
  azd up
  ```

  - Important: Beware that the resources created by this command will incur immediate costs, primarily from the AI Search resource. These resources may accrue costs even if you interrupt the command before it is fully executed. You can run `azd down` or delete the resources manually to avoid unnecessary spending.
  - You will be prompted to select two locations: one for the majority of resources and one for the OpenAI resource, which currently has a short list of supported regions. That location list is based on the OpenAI model availability table and may become outdated as availability changes.
- After the application has been successfully deployed, you will see a URL printed to the console. Navigate to that URL to interact with the app in your browser. To try out the app, click the "Start conversation" button, say "Hello", and then ask a question about your data like "What is the whistleblower policy for Contoso electronics?" You can also now run the app locally by following the instructions in the next section.
You can run this app locally using either the Azure services you provisioned by following the deployment instructions, or by pointing the local app at already existing services.
- If you deployed with `azd up`, you should see an `app/backend/.env` file with the necessary environment variables.
- If you did not use `azd up`, you will need to create an `app/backend/.env` file with the following environment variables:

  ```shell
  AZURE_OPENAI_ENDPOINT=wss://<your instance name>.openai.azure.com
  AZURE_OPENAI_REALTIME_DEPLOYMENT=gpt-4o-realtime-preview
  AZURE_OPENAI_REALTIME_VOICE_CHOICE=<choose one: echo, alloy, shimmer>
  AZURE_OPENAI_API_KEY=<your api key>
  AZURE_SEARCH_ENDPOINT=https://<your service name>.search.windows.net
  AZURE_SEARCH_INDEX=<your index name>
  AZURE_SEARCH_API_KEY=<your api key>
  ```

  To use Entra ID (your user identity when running locally, managed identity when deployed), simply don't set the keys.
- Run this command to start the app:

  Windows:

  ```shell
  pwsh .\scripts\start.ps1
  ```

  Linux/Mac:

  ```shell
  ./scripts/start.sh
  ```

- The app is available on http://localhost:8765.
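The Entra ID fallback mentioned earlier (leaving the API keys unset in `app/backend/.env`) comes down to a simple check: use the key when one is provided, otherwise authenticate with Entra ID (your user identity locally via a credential chain, managed identity when deployed). A minimal sketch of that assumed logic, not the repo's exact code:

```python
# Assumed credential-selection logic: prefer an explicit API key if set,
# otherwise fall back to Entra ID. Environment is passed as a dict so the
# sketch stays self-contained and testable.

def choose_auth(service_prefix: str, env: dict) -> str:
    """Return 'api-key' when <PREFIX>_API_KEY is set and non-empty,
    otherwise 'entra-id'."""
    key = env.get(f"{service_prefix}_API_KEY", "").strip()
    return "api-key" if key else "entra-id"

# Keys left unset -> Entra ID is used for both services.
env = {"AZURE_OPENAI_ENDPOINT": "wss://example.openai.azure.com"}
print(choose_auth("AZURE_OPENAI", env))  # → entra-id
print(choose_auth("AZURE_SEARCH", env))  # → entra-id
```

In a real deployment this keyless path keeps secrets out of configuration files entirely, which is the point of the Managed Identity guidance later in this README.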
Once the app is running, when you navigate to the URL above you should see the start screen of the app:
To try out the app, click the "Start conversation" button, say "Hello", and then ask a question about your data like "What is the whistleblower policy for Contoso electronics?"
Pricing varies per region and usage, so it isn't possible to predict exact costs for your usage. However, you can try the Azure pricing calculator for the resources below.
- Azure Container Apps: Consumption plan with 1 CPU core, 2.0 GB RAM. Pricing with Pay-as-You-Go. Pricing
- Azure OpenAI: Standard tier, gpt-4o-realtime and text-embedding-3-large models. Pricing per 1K tokens used. Pricing
- Azure AI Search: Standard tier, 1 replica, free level of semantic search. Pricing per hour. Pricing
- Azure Blob Storage: Standard tier with ZRS (Zone-redundant storage). Pricing per storage and read operations. Pricing
- Azure Monitor: Pay-as-you-go tier. Costs based on data ingested. Pricing
To reduce costs, you can switch to free SKUs for various services, but those SKUs have limitations. To avoid unnecessary costs once you're done with the app, you can remove the deployed resources by running `azd down`.
This template uses Managed Identity to eliminate the need for developers to manage these credentials. Applications can use managed identities to obtain Microsoft Entra tokens without having to manage any credentials. To ensure best practices in your repo, we recommend that anyone creating solutions based on our templates ensure that the GitHub secret scanning setting is enabled in your repos.
Sample data: The PDF documents used in this demo contain information generated using a language model (Azure OpenAI Service). The information contained in these documents is only for demonstration purposes and does not reflect the opinions or beliefs of Microsoft. Microsoft makes no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability or availability with respect to the information contained in this document. All rights reserved to Microsoft.