Python: Add Deepseek service to concept samples (#10306)
### Motivation and Context

Models from [DeepSeek](https://www.deepseek.com/) are quickly emerging as some of the most capable and cost-effective open-source models, and the community will be eager to try them out.

### Description

Supporting these new models is straightforward because the DeepSeek API is compatible with the OpenAI API. This PR simply adds an option that lets people try the DeepSeek models in the chat concept samples.
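Because the samples route DeepSeek through the OpenAI connector, configuration is just a matter of reusing the OpenAI environment variables, as the new setup code documents. A minimal sketch (the values below are placeholders, not real credentials):

```python
import os

# The OpenAI connector's environment variables are reused for DeepSeek.
# Placeholder values -- substitute your own DeepSeek API key.
os.environ["OPENAI_API_KEY"] = "<your-deepseek-api-key>"
os.environ["OPENAI_CHAT_MODEL_ID"] = "deepseek-chat"  # or "deepseek-reasoner"
```

With these set, selecting `Services.DEEPSEEK` in any of the samples below picks up the credentials automatically.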

### Contribution Checklist


- [x] The code builds clean without any errors or warnings
- [x] The PR follows the [SK Contribution
Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md)
and the [pre-submission formatting
script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts)
raises no violations
- [x] All unit tests pass, and I have added new tests where possible
- [x] I didn't break anyone 😄
TaoChenOSU authored Jan 28, 2025
1 parent 62417d8 commit f005058
Showing 7 changed files with 64 additions and 21 deletions.
```diff
@@ -61,6 +61,7 @@
 # - Services.OLLAMA
 # - Services.ONNX
 # - Services.VERTEX_AI
+# - Services.DEEPSEEK
 # Please make sure you have configured your environment correctly for the selected chat completion service.
 chat_completion_service, request_settings = get_chat_completion_service_and_request_settings(Services.AZURE_OPENAI)
```
```diff
@@ -59,6 +59,7 @@
 # - Services.OLLAMA
 # - Services.ONNX
 # - Services.VERTEX_AI
+# - Services.DEEPSEEK
 # Please make sure you have configured your environment correctly for the selected chat completion service.
 chat_completion_service, request_settings = get_chat_completion_service_and_request_settings(Services.AZURE_OPENAI)
```
```diff
@@ -64,6 +64,7 @@
 # - Services.OLLAMA
 # - Services.ONNX
 # - Services.VERTEX_AI
+# - Services.DEEPSEEK
 # Please make sure you have configured your environment correctly for the selected chat completion service.
 chat_completion_service, request_settings = get_chat_completion_service_and_request_settings(Services.AZURE_OPENAI)
```
python/samples/concepts/chat_completion/simple_chatbot.py (2 additions, 4 deletions)

```diff
@@ -2,10 +2,7 @@

 import asyncio

-from samples.concepts.setup.chat_completion_services import (
-    Services,
-    get_chat_completion_service_and_request_settings,
-)
+from samples.concepts.setup.chat_completion_services import Services, get_chat_completion_service_and_request_settings
 from semantic_kernel.contents import ChatHistory

 # This sample shows how to create a chatbot. This sample uses the following two main components:
@@ -25,6 +22,7 @@
 # - Services.OLLAMA
 # - Services.ONNX
 # - Services.VERTEX_AI
+# - Services.DEEPSEEK
 # Please make sure you have configured your environment correctly for the selected chat completion service.
 chat_completion_service, request_settings = get_chat_completion_service_and_request_settings(Services.OPENAI)
```
```diff
@@ -2,10 +2,7 @@

 import asyncio

-from samples.concepts.setup.chat_completion_services import (
-    Services,
-    get_chat_completion_service_and_request_settings,
-)
+from samples.concepts.setup.chat_completion_services import Services, get_chat_completion_service_and_request_settings
 from semantic_kernel import Kernel
 from semantic_kernel.contents import ChatHistory
 from semantic_kernel.functions import KernelArguments
@@ -33,6 +30,7 @@
 # - Services.OLLAMA
 # - Services.ONNX
 # - Services.VERTEX_AI
+# - Services.DEEPSEEK
 # Please make sure you have configured your environment correctly for the selected chat completion service.
 chat_completion_service, request_settings = get_chat_completion_service_and_request_settings(Services.AZURE_OPENAI)
```
```diff
@@ -2,10 +2,7 @@

 import asyncio

-from samples.concepts.setup.chat_completion_services import (
-    Services,
-    get_chat_completion_service_and_request_settings,
-)
+from samples.concepts.setup.chat_completion_services import Services, get_chat_completion_service_and_request_settings
 from semantic_kernel.contents import ChatHistory, StreamingChatMessageContent

 # This sample shows how to create a chatbot that streams responses.
@@ -26,6 +23,7 @@
 # - Services.OLLAMA
 # - Services.ONNX
 # - Services.VERTEX_AI
+# - Services.DEEPSEEK
 # Please make sure you have configured your environment correctly for the selected chat completion service.
 # Please note that not all models support streaming responses. Make sure to select a model that supports streaming.
 chat_completion_service, request_settings = get_chat_completion_service_and_request_settings(Services.AZURE_OPENAI)
```
python/samples/concepts/setup/chat_completion_services.py (55 additions, 9 deletions)

```diff
@@ -3,6 +3,8 @@
 from enum import Enum
 from typing import TYPE_CHECKING

+from semantic_kernel.exceptions.service_exceptions import ServiceInitializationError
+
 if TYPE_CHECKING:
     from semantic_kernel.connectors.ai.chat_completion_client_base import ChatCompletionClientBase
     from semantic_kernel.connectors.ai.prompt_execution_settings import PromptExecutionSettings
@@ -25,6 +27,7 @@ class Services(str, Enum):
     OLLAMA = "ollama"
     ONNX = "onnx"
     VERTEX_AI = "vertex_ai"
+    DEEPSEEK = "deepseek"


 service_id = "default"
@@ -39,7 +42,8 @@ def get_chat_completion_service_and_request_settings(
     Args:
         service_name (Services): The service name.
         instruction_role (str | None): The role to use for 'instruction' messages, for example,
-            'system' or 'developer'. Defaults to 'system'. Currently only supported for OpenAI reasoning models.
+            'system' or 'developer'. Defaults to 'system'. Currently only OpenAI reasoning models
+            support 'developer' role.
     """
     # Use lambdas or functions to delay instantiation
     chat_services = {
@@ -59,6 +63,7 @@ def get_chat_completion_service_and_request_settings(
         Services.OLLAMA: lambda: get_ollama_chat_completion_service_and_request_settings(),
         Services.ONNX: lambda: get_onnx_chat_completion_service_and_request_settings(),
         Services.VERTEX_AI: lambda: get_vertex_ai_chat_completion_service_and_request_settings(),
+        Services.DEEPSEEK: lambda: get_deepseek_chat_completion_service_and_request_settings(),
     }

     # Call the appropriate lambda or function based on the service name
@@ -87,10 +92,7 @@ def get_openai_chat_completion_service_and_request_settings(
     Please refer to the Semantic Kernel Python documentation for more information:
     https://learn.microsoft.com/en-us/python/api/semantic-kernel/semantic_kernel?view=semantic-kernel-python
     """
-    from semantic_kernel.connectors.ai.open_ai import (
-        OpenAIChatCompletion,
-        OpenAIChatPromptExecutionSettings,
-    )
+    from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion, OpenAIChatPromptExecutionSettings

     chat_service = OpenAIChatCompletion(service_id=service_id, instruction_role=instruction_role)
     request_settings = OpenAIChatPromptExecutionSettings(
@@ -120,10 +122,7 @@ def get_azure_openai_chat_completion_service_and_request_settings(
     Please refer to the Semantic Kernel Python documentation for more information:
     https://learn.microsoft.com/en-us/python/api/semantic-kernel/semantic_kernel?view=semantic-kernel
     """
-    from semantic_kernel.connectors.ai.open_ai import (
-        AzureChatCompletion,
-        AzureChatPromptExecutionSettings,
-    )
+    from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion, AzureChatPromptExecutionSettings

     chat_service = AzureChatCompletion(service_id=service_id, instruction_role=instruction_role)
     request_settings = AzureChatPromptExecutionSettings(service_id=service_id)
@@ -355,3 +354,50 @@ def get_vertex_ai_chat_completion_service_and_request_settings() -> tuple[
     request_settings = VertexAIChatPromptExecutionSettings(service_id=service_id)

     return chat_service, request_settings
+
+
+def get_deepseek_chat_completion_service_and_request_settings() -> tuple[
+    "ChatCompletionClientBase", "PromptExecutionSettings"
+]:
+    """Return DeepSeek chat completion service and request settings.
+
+    The service credentials can be read in 3 ways:
+    1. Via the constructor
+    2. Via the environment variables
+    3. Via an environment file
+
+    The DeepSeek endpoint can be accessed via the OpenAI connector as the DeepSeek API is compatible
+    with the OpenAI API.
+    Set the `OPENAI_API_KEY` environment variable to the DeepSeek API key.
+    Set the `OPENAI_CHAT_MODEL_ID` environment variable to the DeepSeek model ID (deepseek-chat or deepseek-reasoner).
+
+    The request settings control the behavior of the service. The default settings are sufficient to get started.
+    However, you can adjust the settings to suit your needs.
+    Note: Some of the settings are NOT meant to be set by the user.
+
+    Please refer to the Semantic Kernel Python documentation for more information:
+    https://learn.microsoft.com/en-us/python/api/semantic-kernel/semantic_kernel?view=semantic-kernel-python
+    """
+    from openai import AsyncOpenAI
+
+    from semantic_kernel.connectors.ai.open_ai import (
+        OpenAIChatCompletion,
+        OpenAIChatPromptExecutionSettings,
+        OpenAISettings,
+    )
+
+    openai_settings = OpenAISettings.create()
+    if not openai_settings.api_key:
+        raise ServiceInitializationError("The DeepSeek API key is required.")
+    if not openai_settings.chat_model_id:
+        raise ServiceInitializationError("The DeepSeek model ID is required.")
+
+    chat_service = OpenAIChatCompletion(
+        ai_model_id=openai_settings.chat_model_id,
+        service_id=service_id,
+        async_client=AsyncOpenAI(
+            api_key=openai_settings.api_key.get_secret_value(),
+            base_url="https://api.deepseek.com",
+        ),
+    )
+    request_settings = OpenAIChatPromptExecutionSettings(service_id=service_id)
+
+    return chat_service, request_settings
```
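The essence of the DeepSeek setup above can be sketched without any dependencies. The helper below is hypothetical (not part of semantic-kernel or this PR): it only illustrates that, thanks to OpenAI API compatibility, the DeepSeek-specific pieces are the base URL and the model IDs, while everything else follows the standard OpenAI path.

```python
# Hypothetical sketch of the DeepSeek routing logic: validate the settings,
# then produce the kwargs an OpenAI-style async client would be built with.
DEEPSEEK_BASE_URL = "https://api.deepseek.com"
DEEPSEEK_MODELS = {"deepseek-chat", "deepseek-reasoner"}


def build_deepseek_client_config(api_key: str, model_id: str) -> dict:
    """Return the client configuration for a DeepSeek-backed OpenAI connector."""
    if not api_key:
        raise ValueError("The DeepSeek API key is required.")
    if model_id not in DEEPSEEK_MODELS:
        raise ValueError(f"Unknown DeepSeek model ID: {model_id!r}")
    # Only base_url differs from a vanilla OpenAI client configuration.
    return {"api_key": api_key, "base_url": DEEPSEEK_BASE_URL, "model": model_id}
```

In the real PR this role is played by `OpenAISettings.create()` plus an `AsyncOpenAI` client constructed with `base_url="https://api.deepseek.com"`.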
