Refactor Azure OpenAI Initialization and Model Handling #510


Open · wants to merge 4 commits into main

Conversation

romanlum
This PR refactors the Azure OpenAI integration and allows models other than gpt-4o-mini (e.g. gpt-4.1-nano) to be used.

Key improvements include:

  • Simplified client initialization: Removed redundant provider checks and streamlined logic to initialize the Azure OpenAI client only when config.aiProvider === 'azure'.

  • Corrected model usage: Replaced hardcoded model names with config.azure.deploymentName in all chat.completions.create calls, as Azure uses deployment names instead of model names at runtime.

  • Code cleanup: Removed unused or outdated initialization blocks for other providers (ollama, custom, etc.) within the Azure service context.
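The model-selection rule the bullets above describe can be sketched as a small helper (a minimal sketch; the config shape and the `createChatParams` name are illustrative, not the project's actual code):

```javascript
// Sketch: Azure OpenAI expects the *deployment name* in the `model`
// field of chat.completions.create, while other providers expect a
// real model name. The config shape here is assumed for illustration.
function createChatParams(config, messages) {
  const model =
    config.aiProvider === 'azure'
      ? config.azure.deploymentName // e.g. "gpt-4.1-nano" deployment
      : config.model;               // ordinary model name otherwise
  return { model, messages };
}

// Example: an Azure config with a non-default deployment
const params = createChatParams(
  { aiProvider: 'azure', azure: { deploymentName: 'gpt-4.1-nano' } },
  [{ role: 'user', content: 'Hello' }]
);
console.log(params.model); // -> 'gpt-4.1-nano'
```

With the hardcoded `"gpt-4o-mini"` removed, switching deployments becomes a pure configuration change.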

@romanlum romanlum changed the title Azure fixes Refactor Azure OpenAI Initialization and Model Handling May 15, 2025
@clusterzx
Owner

This pull request has been marked as stale due to inactivity. Please update it to keep it open.

@romanlum
Author

It would be great if you could merge this PR; otherwise Azure users cannot use any other model (e.g. gpt-4.1-nano) because of the hardcoded gpt-4o-mini.

@@ -135,7 +135,7 @@ class SetupService {
       deploymentName: deploymentName,
       apiVersion: apiVersion });
     const response = await openai.chat.completions.create({
-      model: "gpt-4o-mini",
+      model: deploymentName, // azure openai uses deployment name as model parameter
Owner

At this point your suggestion is correct.

@@ -228,7 +228,7 @@ class AzureOpenAIService {

     // Make API request
     const response = await this.client.chat.completions.create({
-      model: process.env.AZURE_DEPLOYMENT_NAME,
+      model: config.azure.deploymentName, // azure openai uses deployment name as model parameter
Owner

It already uses the deploymentName here, just loaded from the environment.

Author

Yes, but to keep it consistent I used the configuration value here.

@@ -129,7 +129,7 @@ class AzureOpenAIService {
     await writePromptToFile(systemPrompt, truncatedContent);

     const response = await this.client.chat.completions.create({
-      model: model,
+      model: config.azure.deploymentName, // azure openai uses deployment name as model parameter
Owner

Model = deploymentName here. See line 65

Author

Yes, but to keep it consistent I used the configuration value here.

Owner

No worries. That was not meant in a bad way. You are absolutely right with your improvements.

They were only review comments, for documentation purposes.

@clusterzx clusterzx removed the stale label May 24, 2025
@clusterzx
Owner

This pull request has been marked as stale due to inactivity. Please update it to keep it open.
