![](/Azure-Samples/deepseek-azure-javascript/raw/main/docs/images/icon.png)
⭐ If you like this repo, star it on GitHub — it helps a lot!
Overview • Get started • Run the samples • Next steps • Related samples
The DeepSeek-R1 model has been announced on GitHub Models as well as on Azure AI Foundry. The goal of this collection of samples is to demonstrate how to use it with JavaScript/TypeScript, using either the OpenAI Node.js SDK, LangChain.js, LlamaIndex.TS, or the Azure AI Inference SDK.
Tip
You can run any of these demos right in your browser for free using GitHub Codespaces and GitHub Models! ✨
Note
The DeepSeek-R1 model focuses on complex reasoning tasks, and it is not designed for general conversation. It is best suited for tasks that require a deep understanding of the context and a complex reasoning process to provide an answer, like the `samples/08-reasoning.ts` example.
This also means that you may experience longer response times compared to other models, because the model simulates a thought process (enclosed in a `<think>` tag) before providing the actual answer.
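If you want to show only the final answer to users, you can post-process the raw completion to separate the reasoning trace from the reply. A minimal sketch (the helper name and regex below are illustrative, not part of the samples):

```typescript
// Illustrative helper (not from the samples): split DeepSeek-R1 output into
// the reasoning trace (inside <think>...</think>) and the final answer.
function splitReasoning(content: string): { reasoning: string; answer: string } {
  const match = content.match(/<think>([\s\S]*?)<\/think>/);
  if (!match || match.index === undefined) {
    return { reasoning: "", answer: content.trim() };
  }
  return {
    reasoning: match[1].trim(),
    answer: content.slice(match.index + match[0].length).trim(),
  };
}

// Example:
const output = "<think>2 + 2 is basic arithmetic.</think>The answer is 4.";
console.log(splitReasoning(output).answer); // "The answer is 4."
```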
There are multiple ways to get started with this project.
The quickest way is to use GitHub Codespaces that provides a preconfigured environment for you, directly from your browser. Alternatively, you can set up your local environment following the instructions below.
You can run this project directly in your browser by using GitHub Codespaces, which will open a web-based VS Code:
A similar option to Codespaces is VS Code Dev Containers, which will open the project in your local VS Code instance using the Dev Containers extension.
You will also need to have Docker installed on your machine to run the container.
You need to install the following tools to work on your local machine:
- Node.js LTS
- Git
- Ollama (optional) - For using the models locally
Then you can get the project code:
1. Fork the project to create your own copy of this repository.

2. On your forked repository, select the Code button, then the Local tab, and copy the URL of your forked repository.

3. Open a terminal and run this command to clone the repo:

   ```bash
   git clone <your-repo-url>
   ```

4. Open the cloned project in your favorite IDE, then run this command in a terminal:

   ```bash
   npm install
   ```
In the `samples` folder of this repository, you'll find examples of how to use the DeepSeek-R1 models with different use cases and SDKs. You can run them by executing the following command in the terminal:

```bash
npx tsx samples/<filename>
```
Alternatively, you can open a sample file in the editor and run it directly by clicking the "Run" button.
The samples are configured by default to run using GitHub Models, which should work without any additional configuration if you're using GitHub Codespaces. There are multiple ways to run the samples, using either GitHub Models, Azure AI Foundry, or even locally using Ollama. Open the `samples/config.ts` file and change the default export to the desired configuration.
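As a rough sketch of how such a configuration file can be organized (the type and field names below are assumptions for illustration; check the actual `samples/config.ts` in the repo for the real shape):

```typescript
// Hypothetical shape of samples/config.ts — names and fields here are
// assumptions, not the repo's actual code.
type ModelConfig = {
  baseUrl: string;
  apiKey?: string;
  model: string;
};

const GITHUB_MODELS_CONFIG: ModelConfig = {
  baseUrl: "https://models.inference.ai.azure.com",
  apiKey: process.env.GITHUB_TOKEN,
  model: "DeepSeek-R1",
};

const OLLAMA_CONFIG: ModelConfig = {
  baseUrl: "http://localhost:11434/v1",
  model: "deepseek-r1:14b",
};

// Change this export to switch providers:
export default GITHUB_MODELS_CONFIG;
```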
To use GitHub Models, you need to have a GitHub account and a personal access token (PAT).
Once you have created your PAT, create a `.env` file in the root of the project and add the following content:

```
GITHUB_TOKEN=<your-github-token>
```
Tip
If you're using GitHub Codespaces, you can run the samples using GitHub Models without any additional configuration. Codespaces already sets up the environment variables for you, and you don't need to create a PAT.
Open the `samples/config.ts` file and update the default export:

```ts
export default GITHUB_MODELS_CONFIG;
```
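To see what a request against GitHub Models looks like under the hood, here is a hedged sketch using plain `fetch` (the endpoint URL and model name are assumptions based on the GitHub Models documentation; the actual samples use the SDKs listed above):

```typescript
// Hedged sketch (not one of the repo's samples): call DeepSeek-R1 on
// GitHub Models via its OpenAI-compatible chat completions endpoint.
async function askDeepSeek(question: string): Promise<string> {
  const response = await fetch("https://models.inference.ai.azure.com/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
    },
    body: JSON.stringify({
      model: "DeepSeek-R1",
      messages: [{ role: "user", content: question }],
    }),
  });
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const data = await response.json();
  return data.choices[0].message.content;
}

// Only runs when a token is configured:
if (process.env.GITHUB_TOKEN) {
  askDeepSeek("Why is the sky blue?").then(console.log);
}
```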
To use Azure AI Foundry, you need to have an Azure account. Then follow this quickstart guide to deploy a serverless endpoint with the model. When it's time to choose the model, select the `DeepSeek-R1` model in the catalog.
Once your endpoint is deployed, you should be able to see your endpoint details and retrieve the URL and API key:
Then create a `.env` file in the root of the project and add the following content:

```
AZURE_AI_BASE_URL="https://<your-deployment-name>.<region>.models.ai.azure.com/v1"
AZURE_AI_API_KEY="<your-api-key>"
```
Tip
If you're copying the endpoint from the Azure AI Foundry portal, make sure to add the `/v1` suffix at the end of the URL.
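If you build your own tooling around this, a tiny helper can normalize the URL for you (the function name is made up for illustration):

```typescript
// Illustrative helper: ensure an endpoint copied from the portal
// always ends with the /v1 suffix expected by the samples.
function ensureV1(endpoint: string): string {
  const trimmed = endpoint.replace(/\/+$/, "");
  return trimmed.endsWith("/v1") ? trimmed : `${trimmed}/v1`;
}

console.log(ensureV1("https://my-deployment.eastus.models.ai.azure.com"));
// → https://my-deployment.eastus.models.ai.azure.com/v1
```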
Open the `samples/config.ts` file and update the default export:

```ts
export default AZURE_AI_CONFIG;
```
To use Ollama, you first need a local dev environment with Ollama installed. Then, open a terminal and use the Ollama CLI to download the DeepSeek-R1 model:

```bash
ollama pull deepseek-r1:14b
```
Tip
Different model sizes are available; you can pick the one that fits your needs. Larger models provide better results but require more resources to run. You can change the model used by the samples by editing the `samples/config.ts` file.
Once the model is downloaded, open the `samples/config.ts` file and update the default export:

```ts
export default OLLAMA_CONFIG;
```
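To sanity-check your local setup outside the samples, here is a hedged sketch that queries the local Ollama server through its OpenAI-compatible endpoint directly (assumes Ollama's default port 11434; no API key is needed locally):

```typescript
// Hedged sketch (not from the samples): query the local Ollama server via
// its OpenAI-compatible chat completions endpoint on the default port.
async function askLocalDeepSeek(question: string): Promise<string> {
  const response = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-r1:14b",
      messages: [{ role: "user", content: question }],
    }),
  });
  if (!response.ok) {
    throw new Error(`Ollama request failed: ${response.status}`);
  }
  const data = await response.json();
  return data.choices[0].message.content;
}
```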
Here are some additional resources to help you learn more and experiment with generative AI on Azure:
- How to use DeepSeek-R1 reasoning model (Microsoft Learn): a tutorial to learn how to use the DeepSeek-R1 reasoning model.
- Azure AI Foundry (Azure): a web portal to create, train, deploy and experiment with AI models.
- Generative AI with JavaScript (GitHub): code samples and resources to learn Generative AI with JavaScript.
- Fundamentals of Responsible Generative AI (Microsoft Learn): a training module to learn about the responsible use of generative AI.
- Build a serverless AI chat with RAG using LangChain.js (GitHub): a next step code example to build an AI chatbot using Retrieval-Augmented Generation and LangChain.js.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.