Refactor Azure OpenAI Initialization and Model Handling #510
Conversation
This pull request has been marked as stale due to inactivity. Please update it to keep it open.
It would be great if you could merge this PR, because otherwise Azure users are not able to use another model (e.g. gpt-4.1-nano) due to the hardcoded gpt-4o-mini.
@@ -135,7 +135,7 @@ class SetupService {
        deploymentName: deploymentName,
        apiVersion: apiVersion });
      const response = await openai.chat.completions.create({
-       model: "gpt-4o-mini",
+       model: deploymentName, // azure openai uses deployment name as model parameter
At this point your suggestion is correct.
@@ -228,7 +228,7 @@ class AzureOpenAIService {
      // Make API request
      const response = await this.client.chat.completions.create({
-       model: process.env.AZURE_DEPLOYMENT_NAME,
+       model: config.azure.deploymentName, // azure openai uses deployment name as model parameter
It already uses the deploymentName here; it is just loaded from the environment.
Yes, but to make it consistent I used the configuration value here.
@@ -129,7 +129,7 @@ class AzureOpenAIService {
      await writePromptToFile(systemPrompt, truncatedContent);

      const response = await this.client.chat.completions.create({
-       model: model,
+       model: config.azure.deploymentName, // azure openai uses deployment name as model parameter
Model = deploymentName here; see line 65.
Yes, but to make it consistent I used the configuration value here.
No worries, that was not meant in a bad way. You are absolutely right with your improvements; those were only review comments for documentation purposes.
This pull request has been marked as stale due to inactivity. Please update it to keep it open.
This PR refactors the Azure OpenAI integration and allows models other than gpt-4o-mini (e.g. gpt-4.1-nano) to be used.
Key improvements include:
- Simplified client initialization: removed redundant provider checks and streamlined the logic to initialize the Azure OpenAI client only when config.aiProvider === 'azure'.
- Corrected model usage: replaced hardcoded model names with config.azure.deploymentName in all chat.completions.create calls, as Azure uses deployment names instead of model names at runtime.
- Code cleanup: removed unused or outdated initialization blocks for other providers (ollama, custom, etc.) within the Azure service context.
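The selection rule behind these changes can be sketched in isolation. This is a minimal illustration, not the PR's actual code: the `resolveModel` helper and the config shape (`aiProvider`, `azure.deploymentName`) are assumptions mirroring the description above. The point is that when the provider is Azure, the `model` parameter sent to `chat.completions.create` must be the Azure deployment name, while other providers keep using a plain model id.

```javascript
// Hypothetical helper mirroring the PR's model-selection rule.
function resolveModel(config, defaultModel) {
  if (config.aiProvider === "azure") {
    // Azure OpenAI routes requests by deployment, so the deployment
    // name replaces any hardcoded model id such as "gpt-4o-mini".
    return config.azure.deploymentName;
  }
  return defaultModel;
}

// Example: an Azure user who created a deployment named "gpt-4.1-nano"
const azureConfig = {
  aiProvider: "azure",
  azure: { deploymentName: "gpt-4.1-nano" },
};

console.log(resolveModel(azureConfig, "gpt-4o-mini")); // -> "gpt-4.1-nano"
```

With a hardcoded `model: "gpt-4o-mini"`, the Azure path would fail for users whose deployment has any other name, which is exactly the issue raised in the discussion above.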