
Demonstrates how to use DeepSeek-R1 models with GitHub Models, Azure, or Ollama


Azure-Samples/deepseek-azure-javascript


DeepSeek on Azure - JavaScript demos


⭐ If you like this repo, star it on GitHub — it helps a lot!

Overview • Get started • Run the samples • Next steps • Related samples

Overview

The DeepSeek-R1 model has been announced on GitHub Models as well as on Azure AI Foundry, and the goal of this collection of samples is to demonstrate how to use it with JavaScript/TypeScript, using either the OpenAI Node.js SDK, LangChain.js, LlamaIndex.TS, or the Azure AI Inference SDK.

Tip

You can run any of these demos right in your browser for free using GitHub Codespaces and GitHub Models! ✨

Note

The DeepSeek-R1 model's focus is on complex reasoning tasks, and it is not designed for general conversation. It is best suited for tasks that require a deep understanding of the context and a complex reasoning process to provide an answer, like the samples/08-reasoning.ts example. This also means that you may experience longer response times compared to other models, because the model performs a thought process (enclosed in <think> tags) before providing an actual answer.
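As a rough illustration (a sketch, not part of the samples), the reasoning block can be separated from the final answer with a small helper like this:

```typescript
// Sketch: split a DeepSeek-R1 completion into its reasoning and final answer.
// Assumes the model wraps its chain of thought in a single <think>...</think> block.
export function splitReasoning(content: string): { thoughts: string; answer: string } {
  const match = content.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) return { thoughts: '', answer: content.trim() };
  const thoughts = match[1].trim();
  const answer = content.slice(match.index! + match[0].length).trim();
  return { thoughts, answer };
}
```

This lets you log or hide the chain of thought separately from the answer you show to users.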

Get started

There are multiple ways to get started with this project.

The quickest way is to use GitHub Codespaces, which provides a preconfigured environment directly in your browser. Alternatively, you can set up your local environment by following the instructions below.

Use GitHub Codespaces

You can run this project directly in your browser by using GitHub Codespaces, which will open a web-based VS Code:

Open in GitHub Codespaces

Use a VSCode dev container

A similar option to Codespaces is VS Code Dev Containers, which opens the project in your local VS Code instance using the Dev Containers extension.

You will also need to have Docker installed on your machine to run the container.

Open in Dev Containers

Use your local environment

You need to install the following tools to work on your local machine:

Then you can get the project code:

  1. Fork the project to create your own copy of this repository.

  2. On your forked repository, select the Code button, then the Local tab, and copy the URL of your forked repository.

    Screenshot showing how to copy the repository URL

  3. Open a terminal and run this command to clone the repo: git clone <your-repo-url>

  4. Open the cloned project in your favorite IDE, then run this command in a terminal: npm install

Run the samples

In the samples folder of this repository, you'll find examples of how to use the DeepSeek-R1 models with different use cases and SDKs. You can run them by executing the following command in the terminal:

npx tsx samples/<filename>

Alternatively, you can open a sample file in the editor and run it directly by clicking the "Run" (▶️) button in the top right corner of the editor.

The samples are configured by default to run using GitHub Models, which works without any additional configuration if you're using GitHub Codespaces. You can also run the samples using Azure AI Foundry or locally using Ollama: open samples/config.ts and change the default export to the desired configuration.

Using GitHub Models

To use GitHub Models, you need to have a GitHub account and a personal access token (PAT).

Once you have created your PAT, create a .env file in the root of the project and add the following content:

GITHUB_TOKEN=<your-github-token>

Tip

If you're using GitHub Codespaces, you can run the samples using GitHub Models without any additional configuration. Codespaces already sets up the environment variables for you, and you don't need to create a PAT.

Open the samples/config.ts file and update the default export:

export default GITHUB_MODELS_CONFIG;
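For illustration only, here is a minimal sketch of calling the GitHub Models endpoint directly with the built-in fetch (Node.js 18+); the endpoint URL and model id below are assumptions, so check the GitHub Models documentation for current values — the actual samples use the SDKs mentioned above:

```typescript
// Minimal sketch (not from the samples): call GitHub Models directly with fetch.
// The endpoint URL and model id are assumptions — verify them against the
// GitHub Models documentation before use.
const GITHUB_MODELS_ENDPOINT = 'https://models.inference.ai.azure.com/chat/completions';

export function buildRequest(prompt: string) {
  return {
    model: 'DeepSeek-R1',
    messages: [{ role: 'user', content: prompt }],
  };
}

export async function ask(prompt: string): Promise<string> {
  const res = await fetch(GITHUB_MODELS_ENDPOINT, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
    },
    body: JSON.stringify(buildRequest(prompt)),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

With a valid GITHUB_TOKEN in the environment, you would call it as `await ask('Why is the sky blue?')`.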

Using Azure AI Foundry

To use Azure AI Foundry, you need to have an Azure account. Then follow this quickstart guide to deploy a serverless endpoint with the model. When it's time to choose the model, select the DeepSeek-R1 model in the catalog.

Once your endpoint is deployed, you should be able to see your endpoint details and retrieve the URL and API key:

Screenshot showing the endpoint details in Azure AI Foundry

Then create a .env file in the root of the project and add the following content:

AZURE_AI_BASE_URL="https://<your-deployment-name>.<region>.models.ai.azure.com/v1"
AZURE_AI_API_KEY="<your-api-key>"

Tip

If you're copying the endpoint from the Azure AI Foundry portal, make sure to add the /v1 at the end of the URL.
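If you want to guard against that mistake in your own code, a small helper (hypothetical, not part of the repo) can append the suffix when it's missing:

```typescript
// Sketch: ensure an Azure AI serverless endpoint URL ends with /v1.
// This helper is illustrative and not part of the repository.
export function normalizeAzureBaseUrl(url: string): string {
  const trimmed = url.replace(/\/+$/, '');
  return trimmed.endsWith('/v1') ? trimmed : `${trimmed}/v1`;
}
```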

Open the samples/config.ts file and update the default export:

export default AZURE_AI_CONFIG;

Using Ollama

To use Ollama, you first need to use a local dev environment and install Ollama. Then, open a terminal and use the Ollama CLI to download the DeepSeek-R1 model:

ollama pull deepseek-r1:14b

Tip

Different model sizes are available; pick the one that fits your needs. Larger models provide better results but require more resources to run. You can switch the model used by the samples by editing the samples/config.ts file.

Once the model is downloaded, open the samples/config.ts file and update the default export:

export default OLLAMA_CONFIG;
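For illustration only, a minimal sketch of querying the local Ollama server through its OpenAI-compatible API (assuming Ollama is running on its default port 11434 and the model tag pulled above):

```typescript
// Sketch (not from the samples): query a local Ollama server through its
// OpenAI-compatible /v1 API. Assumes the default port and the model tag below.
export function buildOllamaRequest(prompt: string) {
  return {
    model: 'deepseek-r1:14b',
    messages: [{ role: 'user', content: prompt }],
    stream: false,
  };
}

export async function askOllama(prompt: string): Promise<string> {
  const res = await fetch('http://localhost:11434/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildOllamaRequest(prompt)),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Note that the response will include the model's `<think>` reasoning block before the answer, so expect longer outputs than with conversational models.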

Next steps

Here are some additional resources to help you learn more and experiment with generative AI on Azure:

Related samples

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.