4 changes: 4 additions & 0 deletions .gitignore
@@ -28,6 +28,10 @@ yarn-error.log*
# local env files
.env*.local

# log files
dev_server.log
server.log

Comment on lines +31 to +34

⚠️ Potential issue | 🟡 Minor

Consider anchoring or generalizing log ignore patterns.

dev_server.log / server.log will be ignored anywhere in the repo; if you intended only repo-root logs, use /dev_server.log and /server.log (or adopt *.log if that’s the goal).

🤖 Prompt for AI Agents
In @.gitignore around lines 31-34: the .gitignore currently lists
dev_server.log and server.log unanchored, so they match files anywhere in the
repo. Update the patterns to match the intended scope: if you only want to
ignore root-level logs, change them to /dev_server.log and /server.log; if you
intend to ignore all log files, use a broader pattern like *.log (or logs/**
if logs live in a directory).

# vercel
.vercel

4 changes: 3 additions & 1 deletion config/model.json
@@ -1 +1,3 @@
{ "selectedModel": "Grok 4.2" }
{
"selectedModel": null
}
Comment on lines +1 to +3

⚠️ Potential issue | 🟠 Major

Verify null-selection is handled end-to-end (parse + branching).

Ensure the code that reads config/model.json treats null as “no selection” (and doesn’t stringify/compare it as a model name).

🤖 Prompt for AI Agents
In @config/model.json around lines 1-3: the config property selectedModel may
be null to indicate "no selection". Update code that reads/parses
config/model.json (e.g., loadConfig, getSelectedModel, resolveModelSelection,
or initializeModelSelection) to treat selectedModel == null as an explicit
"no selection" branch, avoid comparing it to the string "null", and ensure any
downstream branching null-checks before treating the value as a model name.
When writing the config back, serialize actual null (not the string "null")
for selectedModel.
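A minimal sketch of a null-safe read, mirroring the getSelectedModel shape in lib/actions/users.ts; readSelectedModel is an illustrative name, not the PR's code:

import fs from 'fs/promises';
import path from 'path';

const modelConfigPath = path.resolve(process.cwd(), 'config', 'model.json');

// Treat a null (or missing) selectedModel as "no selection"; never
// compare against the string "null".
async function readSelectedModel(): Promise<string | null> {
  const raw = await fs.readFile(modelConfigPath, 'utf8');
  const config = JSON.parse(raw) as { selectedModel?: string | null };
  return config.selectedModel ?? null;
}

Note the nullish coalescing (??) rather than the || used in the current code, so an empty-string model name is not silently coerced to null.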

15 changes: 9 additions & 6 deletions dev_server.log
@@ -1,11 +1,14 @@
$ next dev --turbo
⚠ Port 3000 is in use, using available port 3001 instead.
▲ Next.js 15.3.6 (Turbopack)
- Local: http://localhost:3001
- Network: http://192.168.0.2:3001
- Local: http://localhost:3000
- Network: http://192.168.0.2:3000
- Environments: .env

✓ Starting...
○ Compiling middleware ...
✓ Compiled middleware in 528ms
✓ Ready in 2.7s
Attention: Next.js now collects completely anonymous telemetry regarding usage.
This information is used to shape Next.js' roadmap and prioritize features.
You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
https://nextjs.org/telemetry

✓ Compiled middleware in 388ms
✓ Ready in 1859ms
Comment on lines 1 to +14


dev_server.log is a runtime artifact and shouldn’t be committed. It also conflicts with this PR’s .gitignore intent (ignoring dev_server.log going forward), which implies it should not live in the repo at all.

Suggestion

Remove dev_server.log from the repository (e.g., git rm --cached dev_server.log) and rely on .gitignore to keep it untracked going forward. If you need to document typical dev output, add a short snippet to the README instead.

Reply with "@CharlieHelps yes please" if you’d like me to add a commit that deletes dev_server.log from git history for this branch (i.e., removes it from the PR) and keeps the .gitignore change.

10 changes: 9 additions & 1 deletion lib/actions/users.ts
@@ -1,7 +1,7 @@
// File: lib/actions/users.ts
'use server';

import { revalidatePath } from 'next/cache';
import { revalidatePath, unstable_noStore as noStore } from 'next/cache';
import fs from 'fs/promises';
import path from 'path';

@@ -124,24 +124,32 @@ export async function updateSettingsAndUsers(
const modelConfigPath = path.resolve(process.cwd(), 'config', 'model.json');

export async function getSelectedModel(): Promise<string | null> {
noStore();
console.log(`[DEBUG] getSelectedModel - Reading from path: "${modelConfigPath}"`);
try {
const data = await fs.readFile(modelConfigPath, 'utf8');
console.log(`[DEBUG] getSelectedModel - Raw file content: "${data}"`);
const config = JSON.parse(data);
return config.selectedModel || null;
} catch (error) {
console.error('Error reading model config:', error);
console.log(`[DEBUG] getSelectedModel - Error reading file:`, error);
return null;
}
}

export async function saveSelectedModel(model: string): Promise<{ success: boolean; error?: string }> {
console.log(`[DEBUG] saveSelectedModel - Received model selection: "${model}"`);
console.log(`[DEBUG] saveSelectedModel - Writing to path: "${modelConfigPath}"`);
try {
const data = JSON.stringify({ selectedModel: model }, null, 2);
await fs.writeFile(modelConfigPath, data, 'utf8');
console.log(`[DEBUG] saveSelectedModel - Successfully wrote to file.`);
revalidatePath('/settings');
return { success: true };
} catch (error) {
console.error('Error saving model config:', error);
console.log(`[DEBUG] saveSelectedModel - Error writing to file:`, error);
return { success: false, error: 'Failed to save selected model.' };
}
}
2 changes: 1 addition & 1 deletion lib/agents/inquire.tsx
@@ -23,7 +23,7 @@ export async function inquire(

let finalInquiry: PartialInquiry = {};
const result = await streamObject({
model: getModel() as LanguageModel,
model: (await getModel()) as LanguageModel,
system: `...`, // Your system prompt remains unchanged
messages,
schema: inquirySchema,
2 changes: 1 addition & 1 deletion lib/agents/query-suggestor.tsx
@@ -18,7 +18,7 @@ export async function querySuggestor(

let finalRelatedQueries: PartialRelated = {}
const result = await streamObject({
model: getModel() as LanguageModel,
model: (await getModel()) as LanguageModel,
system: `As a professional web researcher, your task is to generate a set of three queries that explore the subject matter more deeply, building upon the initial query and the information uncovered in its search results.

For instance, if the original query was "Starship's third test flight key milestones", your output should follow this format:
2 changes: 1 addition & 1 deletion lib/agents/researcher.tsx
@@ -105,7 +105,7 @@ export async function researcher(
)

const result = await nonexperimental_streamText({
model: getModel(hasImage) as LanguageModel,
model: (await getModel(hasImage)) as LanguageModel,
maxTokens: 4096,
system: systemPromptToUse,
messages,
2 changes: 1 addition & 1 deletion lib/agents/resolution-search.tsx
@@ -47,7 +47,7 @@ Analyze the user's prompt and the image to provide a holistic understanding of t

// Use generateObject to get the full object at once.
const { object } = await generateObject({
model: getModel(hasImage),
model: await getModel(hasImage),
system: systemPrompt,
messages: filteredMessages,
schema: resolutionSearchSchema,
2 changes: 1 addition & 1 deletion lib/agents/task-manager.tsx
@@ -16,7 +16,7 @@ export async function taskManager(messages: CoreMessage[]) {
}

const result = await generateObject({
model: getModel() as LanguageModel,
model: (await getModel()) as LanguageModel,
system: `As a planet computer, your primary objective is to act as an efficient **Task Manager** for the user's query. Your goal is to minimize unnecessary steps and maximize the efficiency of the subsequent exploration phase (researcher agent).

You must first analyze the user's input and determine the optimal course of action. You have two options at your disposal:
2 changes: 1 addition & 1 deletion lib/agents/writer.tsx
@@ -32,7 +32,7 @@ export async function writer(
const systemToUse = dynamicSystemPrompt && dynamicSystemPrompt.trim() !== '' ? dynamicSystemPrompt : default_system_prompt;

const result = await nonexperimental_streamText({
model: getModel() as LanguageModel,
model: (await getModel()) as LanguageModel,
maxTokens: 2500,
system: systemToUse, // Use the dynamic or default system prompt
messages
97 changes: 71 additions & 26 deletions lib/utils/index.ts
@@ -1,5 +1,6 @@
import { type ClassValue, clsx } from 'clsx'
import { twMerge } from 'tailwind-merge'
import { getSelectedModel } from '@/lib/actions/users'
import { openai } from '@ai-sdk/openai'

🧹 Nitpick | 🔵 Trivial

Unused import.

The openai import on line 4 appears to be unused, since line 5 imports createOpenAI, which is what the code actually uses.

🧹 Remove unused import
 import { type ClassValue, clsx } from 'clsx'
 import { twMerge } from 'tailwind-merge'
 import { getSelectedModel } from '@/lib/actions/users'
-import { openai } from '@ai-sdk/openai'
 import { createOpenAI } from '@ai-sdk/openai'
🤖 Prompt for AI Agents
In @lib/utils/index.ts at line 4: remove the unused named import `openai` so
only the used `createOpenAI` is imported from '@ai-sdk/openai'; no other code
changes are required.

import { createOpenAI } from '@ai-sdk/openai'
import { createGoogleGenerativeAI } from '@ai-sdk/google'
@@ -16,42 +17,87 @@ export function generateUUID(): string {
return uuidv4();
}

export function getModel(requireVision: boolean = false) {
const xaiApiKey = process.env.XAI_API_KEY
const gemini3ProApiKey = process.env.GEMINI_3_PRO_API_KEY
const awsAccessKeyId = process.env.AWS_ACCESS_KEY_ID
const awsSecretAccessKey = process.env.AWS_SECRET_ACCESS_KEY
const awsRegion = process.env.AWS_REGION
const bedrockModelId = process.env.BEDROCK_MODEL_ID || 'anthropic.claude-3-5-sonnet-20241022-v2:0'
export async function getModel(requireVision: boolean = false) {
const selectedModel = await getSelectedModel();
Comment on lines +20 to +21

⚠️ Potential issue | 🟡 Minor

Unused requireVision parameter.

The requireVision parameter is declared but never referenced in the function body. This parameter was likely intended to influence model selection for vision-capable models (e.g., when hasImage is true in resolution-search.tsx), but currently has no effect.

🛠️ Either remove the parameter or implement the vision logic

If vision support is needed:

 export async function getModel(requireVision: boolean = false) {
   const selectedModel = await getSelectedModel();
+  
+  // If vision is required, ensure selected model supports it
+  if (requireVision) {
+    // Add vision-capable model selection logic
+  }

Or remove if not needed:

-export async function getModel(requireVision: boolean = false) {
+export async function getModel() {
🤖 Prompt for AI Agents
In @lib/utils/index.ts around lines 20-21: the requireVision parameter of
getModel is unused. Either remove it from the signature and from all callers
(e.g., resolution-search.tsx, where hasImage may pass it), or implement
vision-aware selection: when requireVision is true, query or filter the
available models for vision-capable ones (via the same model-list helper or a
new helper like isVisionCapable) and return a vision-capable selection,
falling back to getSelectedModel if none is found; if you implement the vision
path, update callers so resolution-search.tsx passes true when hasImage is set.
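A sketch of one possible vision-aware branch; the VISION_CAPABLE set and the getDefaultVisionModel helper are assumptions for illustration, not taken from this PR:

import { createOpenAI } from '@ai-sdk/openai';
import { getSelectedModel } from '@/lib/actions/users';

// Assumed vision-capable labels; align with the real catalog before use.
const VISION_CAPABLE = new Set(['Gemini 3', 'GPT-5.1']);

// Hypothetical fallback: gpt-4o supports vision per the existing
// default-fallback comment in this file.
function getDefaultVisionModel() {
  const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
  return openai('gpt-4o');
}

export async function getModel(requireVision: boolean = false) {
  const selectedModel = await getSelectedModel();
  if (requireVision && selectedModel && !VISION_CAPABLE.has(selectedModel)) {
    // Route image requests to a vision-capable model instead of
    // silently ignoring the flag.
    return getDefaultVisionModel();
  }
  // ...existing provider selection continues here...
}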


// If vision is required, skip models that don't support it
if (!requireVision && xaiApiKey) {
const xaiApiKey = process.env.XAI_API_KEY;
const gemini3ProApiKey = process.env.GEMINI_3_PRO_API_KEY;
const awsAccessKeyId = process.env.AWS_ACCESS_KEY_ID;
const awsSecretAccessKey = process.env.AWS_SECRET_ACCESS_KEY;
const awsRegion = process.env.AWS_REGION;
const bedrockModelId = process.env.BEDROCK_MODEL_ID || 'anthropic.claude-3-5-sonnet-20241022-v2:0';
const openaiApiKey = process.env.OPENAI_API_KEY;
Comment on lines +20 to +29


getModel() is called frequently and now performs an async read (getSelectedModel()), likely hitting filesystem/DB on every inference call. That can become a hot-path latency and throughput bottleneck.

This is especially relevant for streaming (nonexperimental_streamText / streamObject) where model selection happens per request. Caching the selected model value (or the parsed config) with a short TTL would keep “UI selection respected” while avoiding repeated I/O.

Suggestion

Add a lightweight cache for getSelectedModel() results (e.g., in-memory module cache with TTL like 1–5s, or memoize per request if you have a request context). This preserves correctness but reduces repeated I/O.

Reply with "@CharlieHelps yes please" if you’d like me to add a commit implementing a minimal TTL cache around getSelectedModel() (no behavioral change beyond performance).


if (selectedModel) {
switch (selectedModel) {
case 'Grok 4.2':
if (xaiApiKey) {
const xai = createXai({
apiKey: xaiApiKey,
baseURL: 'https://api.x.ai/v1',
});
try {
return xai('grok-4-fast-non-reasoning');
} catch (error) {
console.error('Selected model "Grok 4.2" is configured but failed to initialize.', error);
throw new Error('Failed to initialize selected model.');
}
Comment on lines +39 to +44

🧹 Nitpick | 🔵 Trivial

Try/catch block won't catch runtime API errors.

The xai('grok-4-fast-non-reasoning') call is synchronous: it returns a model configuration object, not a Promise. The actual API call happens later, when the model is used (e.g., in generateText). This try/catch will only catch synchronous initialization errors, not network or API failures.

The same pattern appears in the Gemini case (lines 54-59) and the fallback blocks (lines 83-87, 94-98).

💡 Clarify error handling scope

Consider updating comments to reflect what errors are actually caught, or remove try/catch if only synchronous errors are expected:

         const xai = createXai({
           apiKey: xaiApiKey,
           baseURL: 'https://api.x.ai/v1',
         });
-        try {
-          return xai('grok-4-fast-non-reasoning');
-        } catch (error) {
-          console.error('Selected model "Grok 4.2" is configured but failed to initialize.', error);
-          throw new Error('Failed to initialize selected model.');
-        }
+        return xai('grok-4-fast-non-reasoning');

} else {
console.error('User selected "Grok 4.2" but XAI_API_KEY is not set.');
throw new Error('Selected model is not configured.');
}
case 'Gemini 3':
if (gemini3ProApiKey) {
const google = createGoogleGenerativeAI({
apiKey: gemini3ProApiKey,
});
try {
return google('gemini-3-pro-preview');
} catch (error) {
console.error('Selected model "Gemini 3" is configured but failed to initialize.', error);
throw new Error('Failed to initialize selected model.');
}
} else {
console.error('User selected "Gemini 3" but GEMINI_3_PRO_API_KEY is not set.');
throw new Error('Selected model is not configured.');
}
case 'GPT-5.1':
if (openaiApiKey) {
const openai = createOpenAI({
apiKey: openaiApiKey,
});
return openai('gpt-4o');
} else {
console.error('User selected "GPT-5.1" but OPENAI_API_KEY is not set.');
throw new Error('Selected model is not configured.');
}
Comment on lines +64 to +73

🧹 Nitpick | 🔵 Trivial

Inconsistent error handling: GPT-5.1 case lacks try/catch.

The Grok 4.2 and Gemini 3 cases wrap the model creation in try/catch, but GPT-5.1 does not. While the try/catch may be unnecessary (per previous comment), the inconsistency could confuse maintainers.

🤖 Prompt for AI Agents
In @lib/utils/index.ts around lines 64-73: the 'GPT-5.1' switch arm lacks a
try/catch around the createOpenAI/openai('gpt-4o') call, making its error
handling inconsistent with the Grok 4.2 and Gemini 3 cases. Wrap the block
that constructs the client and calls openai('gpt-4o') in a try/catch, log the
caught error with console.error (including the error object and a clear
message mentioning "GPT-5.1"), and rethrow or throw a new Error with a
descriptive message so failure behavior matches the other model cases.

}
Comment on lines +31 to +74


In the selected-model path, Grok/Gemini initialization is wrapped in try/catch, but OpenAI is not. Also, createXai(...) / createGoogleGenerativeAI(...) generally won’t throw at construction time; failures tend to happen on request. The current try/catch blocks likely don’t add real safety, but they do hide the original error by rethrowing a generic one.

This reduces debuggability when a selected model fails at runtime.

Suggestion

Standardize error handling and preserve root cause:

  • Remove construction-time try/catch unless the SDK is known to throw there, or
  • Keep it but rethrow with cause to preserve context.

Example:

throw new Error('Failed to initialize selected model: Grok 4.2', { cause: error as Error });

Also align behavior across providers (either all wrap similarly or none).

If you’d like, I can add a commit to simplify these blocks and preserve cause; reply with "@CharlieHelps yes please".
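A sketch of the cause-preserving pattern applied uniformly; initModel is an illustrative helper name (the Error cause option requires an ES2022 target):

// Wrap provider construction once so every branch fails the same way
// and the root cause survives the rethrow.
function initModel<T>(label: string, create: () => T): T {
  try {
    return create();
  } catch (error) {
    throw new Error(`Failed to initialize selected model: ${label}`, {
      cause: error as Error,
    });
  }
}

// Usage sketch:
// return initModel('Grok 4.2', () => xai('grok-4-fast-non-reasoning'));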

Comment on lines +31 to +74

⚠️ Potential issue | 🟡 Minor

Switch statement missing default case for unknown model selections.

If selectedModel contains an unexpected value (e.g., a typo in config/model.json or a future model name), the switch silently falls through to the default fallback behavior without any indication that the user's selection was ignored.

🛠️ Add a default case to handle unknown selections
       case 'GPT-5.1':
         if (openaiApiKey) {
           const openai = createOpenAI({
             apiKey: openaiApiKey,
           });
           return openai('gpt-4o');
         } else {
             console.error('User selected "GPT-5.1" but OPENAI_API_KEY is not set.');
             throw new Error('Selected model is not configured.');
         }
+      default:
+        console.warn(`Unknown model selection "${selectedModel}", falling back to default provider.`);
     }
   }
🤖 Prompt for AI Agents
In @lib/utils/index.ts around lines 31-74: the switch over selectedModel
(handling 'Grok 4.2', 'Gemini 3', and 'GPT-5.1') lacks a default branch. Add a
default case that logs the unknown selection (including the selectedModel
value) via console.error or processLogger and then throws a clear Error (e.g.,
"Unknown selected model: <selectedModel>") or returns a safe fallback, so
unexpected model names don't silently fall through.

}
Comment on lines +31 to +75


The selected-model mapping is internally inconsistent: selecting 'GPT-5.1' returns openai('gpt-4o'). This violates the PR’s stated goal (“user’s choice is respected”) and will be very confusing in the UI. Either the UI label is wrong, or the backend model id is wrong.

Also, the switch handles only three hard-coded strings; if getSelectedModel() can return anything else, selection will be silently ignored and fall back to defaults.

Suggestion

Align UI names ↔ backend model IDs with a single source of truth.

  • If the intended OpenAI model is gpt-4o, rename the selectable option to something like GPT-4o.
  • If the intended model is actually “GPT-5.1”, return that actual model ID.
  • Add a default case in the switch that throws (or logs + throws) Unknown selected model: ${selectedModel} to avoid silently ignoring unexpected values.

Reply with "@CharlieHelps yes please" if you’d like me to add a commit introducing a MODEL_CATALOG map (UI label → provider + model id) and a safe default branch for unknown selections.


// Default behavior: Grok -> Gemini -> Bedrock -> OpenAI
if (xaiApiKey) {
const xai = createXai({
apiKey: xaiApiKey,
baseURL: 'https://api.x.ai/v1',
})
// Optionally, add a check for credit status or skip xAI if credits are exhausted
});
try {
return xai('grok-4-fast-non-reasoning')
return xai('grok-4-fast-non-reasoning');
} catch (error) {
console.warn('xAI API unavailable, falling back to OpenAI:')
console.warn('xAI API unavailable, falling back to next provider:');
}
}

// Gemini 3 Pro
if (gemini3ProApiKey) {
const google = createGoogleGenerativeAI({
apiKey: gemini3ProApiKey,
})
});
try {
return google('gemini-3-pro-preview')
return google('gemini-3-pro-preview');
} catch (error) {
console.warn('Gemini 3 Pro API unavailable, falling back to next provider:', error)
console.warn('Gemini 3 Pro API unavailable, falling back to next provider:', error);
}
}

// AWS Bedrock

if (awsAccessKeyId && awsSecretAccessKey) {
const bedrock = createAmazonBedrock({
bedrockOptions: {
Expand All @@ -61,16 +107,15 @@ export function getModel(requireVision: boolean = false) {
secretAccessKey: awsSecretAccessKey,
},
},
})
});
const model = bedrock(bedrockModelId, {
additionalModelRequestFields: { top_k: 350 },
})
return model
});
return model;
}

// Default fallback (OpenAI gpt-4o supports vision)
const openai = createOpenAI({
apiKey: process.env.OPENAI_API_KEY,
})
return openai('gpt-4o')
apiKey: openaiApiKey,
});
return openai('gpt-4o');
Comment on lines 117 to +120

⚠️ Potential issue | 🟠 Major

Final OpenAI fallback doesn't validate API key presence.

Unlike the explicit GPT-5.1 selection case (lines 64-73) which throws if OPENAI_API_KEY is missing, the final fallback creates an OpenAI client regardless of whether openaiApiKey is defined. If all other providers fail and OPENAI_API_KEY is unset, this will result in a cryptic API error later rather than a clear configuration error upfront.

🛠️ Add validation before final fallback
+  if (!openaiApiKey) {
+    console.error('No model providers configured. Please set at least one API key.');
+    throw new Error('No model providers configured.');
+  }
+
   const openai = createOpenAI({
     apiKey: openaiApiKey,
   });
   return openai('gpt-4o');
🤖 Prompt for AI Agents
In @lib/utils/index.ts around lines 117-120: the final fallback builds an
OpenAI client via createOpenAI using openaiApiKey without checking that the
key is set, leading to confusing downstream errors. Before calling
createOpenAI(...) and returning openai('gpt-4o'), add the same API-key
validation used in the GPT-5.1 branch (throw a clear error if openaiApiKey /
OPENAI_API_KEY is undefined) so the function fails fast with a descriptive
message instead of producing cryptic API errors later.

}
2 changes: 1 addition & 1 deletion mapbox_mcp/hooks.ts
@@ -128,7 +128,7 @@ export const useMCPMapClient = () => {
setError(null);
try {
const response = await generateText({
model: getModel(),
model: await getModel(),
Comment on lines 130 to +131

🧹 Nitpick | 🔵 Trivial

🧩 Analysis chain

🏁 Scripts executed (repository: QueueLab/QCX):

#!/bin/bash
# First, let's examine the hooks.ts file structure
wc -l mapbox_mcp/hooks.ts

# Read the relevant sections of hooks.ts
cat -n mapbox_mcp/hooks.ts | sed -n '40,55p'

# Read the processLocationQuery and generateText sections
cat -n mapbox_mcp/hooks.ts | sed -n '120,190p'

# Find getModel and getSelectedModel definitions
rg -n "getModel|getSelectedModel" mapbox_mcp/hooks.ts -A 5

# Search for getModel/getSelectedModel in other files
rg -n "export.*getModel|export.*getSelectedModel" --type=ts --type=tsx

# Check for fs.readFile usage in the codebase
rg -n "fs\.readFile|readFileSync" --type=ts --type=tsx -B 2 -A 2

# Verify useMCPMapClient usage and context
rg -n "useMCPMapClient" --type=ts --type=tsx -B 2 -A 5

# Find the lib/utils file
fd "utils" --type f | grep -E "(lib|src)" | head -20

# Search for getModel function definition more broadly
rg -n "function getModel|const getModel|export.*getModel" -B 2 -A 10

# Search for getSelectedModel definition
rg -n "getSelectedModel" -B 2 -A 10

# Search for useMCPMapClient usage in client or page components
rg -n "useMCPMapClient" -B 3 -A 3

# Search for 'use client' files that might import useMCPMapClient
rg -l "use client" --type ts | head -20

# Check if any 'use client' component actually imports/uses useMCPMapClient
rg -l "use client" --type ts -A 100 | xargs grep -l "useMCPMapClient" 2>/dev/null | head -10

# Check if getSelectedModel is marked as a server action
head -20 lib/actions/users.ts


This pattern is already mitigated by existing safeguards.

While getModel() does call getSelectedModel() which uses fs.readFile(), the server-side execution is already guaranteed because getSelectedModel() is explicitly marked with 'use server' (in lib/actions/users.ts). This directive ensures the function executes only on the server, regardless of where it's called from.

Additionally, the hook includes an explicit warning comment (lines 45-47) advising against client-side usage, and the codebase shows the hook is currently only used server-side (notably, map-query-handler.tsx has removed the hook in favor of props-based data passing). No instances of direct client-side usage were found.

The code is structurally sound for its intended server-side usage. If further clarity is desired, consider updating the JSDoc to explicitly document that this hook is designed for Next.js server components or API routes only.
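A possible JSDoc along those lines; the wording is a suggestion, not existing code:

/**
 * Resolves the model used for MCP map queries.
 *
 * Server-only: getModel() calls getSelectedModel(), a 'use server'
 * action that reads config/model.json from disk. Intended for Next.js
 * server components or API routes; do not call from client components.
 */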

🤖 Prompt for AI Agents
In @mapbox_mcp/hooks.ts around lines 130-131: the concern about getModel()
calling getSelectedModel() (which uses fs.readFile) is already mitigated by
server-only execution, but make this explicit by updating the hook's JSDoc in
mapbox_mcp/hooks.ts to state that it is intended for Next.js server components
or API routes only (server-side only), referencing getModel() and
getSelectedModel() for clarity, and keep or expand the existing inline warning
so future readers know not to use the hook on the client.

tools: toolsRef.current,
system: `You are an expert location data processing engine. Your role is to accurately use the available tools to answer location-based queries and provide structured data.
