
feat: Add AI SDK v6 support#307

Merged
robert-j-y merged 10 commits into OpenRouterTeam:main from pablof7z:feature/ai-sdk-v6-support
Jan 19, 2026

Conversation

@pablof7z
Contributor

Summary

This PR adds support for AI SDK v6 (currently in beta).

Changes

  • Migrate from ProviderV2 to ProviderV3
  • Migrate from LanguageModelV2 to LanguageModelV3
  • Migrate from EmbeddingModelV2 to EmbeddingModelV3
  • Update specificationVersion to 'v3'
  • Update usage structure to nested v6 format:
    {
      inputTokens: { total, noCache, cacheRead, cacheWrite },
      outputTokens: { total, text, reasoning }
    }
  • Update all type imports (SharedV2* -> SharedV3*, etc.)
  • Update dependencies:
    • @ai-sdk/provider: 2.0.0 -> 3.0.0-beta.27
    • @ai-sdk/provider-utils: 3.0.1 -> 4.0.0-beta.53
    • ai peer dep: ^6.0.0
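The flat-to-nested usage conversion described above can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code: the helper name `toV3Usage` is invented, the OpenRouter field names are assumed to mirror the OpenAI-compatible response format, and treating `cacheWrite` as zero is an assumption.

```typescript
// Hypothetical sketch: convert OpenRouter's flat, OpenAI-style usage payload
// into the nested usage shape that AI SDK v6 (LanguageModelV3) expects.

interface OpenRouterUsage {
  prompt_tokens: number;
  completion_tokens: number;
  prompt_tokens_details?: { cached_tokens?: number };
  completion_tokens_details?: { reasoning_tokens?: number };
}

interface V3Usage {
  inputTokens: { total: number; noCache: number; cacheRead: number; cacheWrite: number };
  outputTokens: { total: number; text: number; reasoning: number };
}

function toV3Usage(usage: OpenRouterUsage): V3Usage {
  const cacheRead = usage.prompt_tokens_details?.cached_tokens ?? 0;
  const reasoning = usage.completion_tokens_details?.reasoning_tokens ?? 0;
  return {
    inputTokens: {
      total: usage.prompt_tokens,
      // Tokens not served from cache = total prompt tokens minus cached ones.
      noCache: usage.prompt_tokens - cacheRead,
      cacheRead,
      cacheWrite: 0, // assumption: cache writes are not reported here
    },
    outputTokens: {
      total: usage.completion_tokens,
      // Plain text tokens = total completion tokens minus reasoning tokens.
      text: usage.completion_tokens - reasoning,
      reasoning,
    },
  };
}
```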

Testing

Tested with free models on OpenRouter:

  • ✅ Basic text generation
  • ✅ Streaming
  • ✅ Tool calling
  • ✅ Reasoning tokens (with DeepSeek R1 chimera)
  • ✅ Multi-turn conversations

Notes

AI SDK v6 is currently in beta. This PR tracks the beta versions. Once v6 is stable, dependency versions can be updated to stable releases.

- Update package dependencies to AI SDK v6 beta versions
- Migrate ProviderV2 → ProviderV3
- Migrate LanguageModelV2 → LanguageModelV3
- Migrate EmbeddingModelV2 → EmbeddingModelV3
- Update all type imports to V3 equivalents
- Add comprehensive implementation documentation

Known issues:
- Usage structures need migration to V3 nested format
- Test files need V3 type updates
- Build currently fails due to usage type mismatches

See AI-SDK-V6-IMPLEMENTATION.md for full details
@socket-security

socket-security bot commented Dec 17, 2025

Review the following changes in direct dependencies.

Diff of direct dependencies (Socket score columns omitted; garbled in export):

  • Updated @ai-sdk/provider: 2.0.0 → 3.0.0-beta.27
  • Updated @ai-sdk/provider-utils: 3.0.18 → 4.0.0-beta.53
  • Added msw@2.12.4
  • Updated ai: 5.0.104 → 6.0.0-beta.159

View full report

- Remove documentation files from PR
- Fix all V2→V3 type migrations in source files
- Add MSW-based test server utility to replace removed createTestServer
- Migrate all test files to V3 types
- Fix JSONSchema7 type issues in tests
- Fix requestBody unknown type issues
- Typecheck now passes with 0 errors
- Build succeeds
- Live tests pass (basic, streaming, tool calling, reasoning, multi-turn)
@AviVahl
Contributor

AviVahl commented Dec 23, 2025

final v6 is released. would be great if this got updated, reviewed, merged, and released.

@seannetlife

would this constitute a breaking change, or are the types backwards compatible?

@elijah629

https://ai-sdk.dev/docs/migration-guides/migration-guide-6-0 check here, I think they are compatible.

@saeta-eth

would be great to see this PR move forward so we can start using AI SDK v6

@ravisojitra

Any update on this? We rely entirely on OpenRouter right now, and we're on v6 at the same time.

@ZenAlexa

Just migrated our app to AI SDK v6 using this PR👍

- Update @ai-sdk/provider from 3.0.0-beta.27 to 3.0.0
- Update @ai-sdk/provider-utils from 4.0.0-beta.53 to 4.0.1
- Update ai from 6.0.0-beta.159 to 6.0.3
- Update tests for v6 usage format (nested inputTokens/outputTokens)
- Update tests for v3 embedding specification version
- Fix header tests for extended user-agent format
@pablof7z
Contributor Author

Updated to stable AI SDK v6 releases

I've updated the dependencies to use stable versions since v6 is now released:

  • @ai-sdk/provider: 3.0.0-beta.27 → 3.0.0
  • @ai-sdk/provider-utils: 4.0.0-beta.53 → 4.0.1
  • ai (dev): 6.0.0-beta.159 → 6.0.3

All tests pass with the stable versions.


@The-Best-Codes The-Best-Codes left a comment


Really hoping this can get finished up & merged soon! Would love to use OpenRouter with AI SDK in my applications! 🙌

@wong2

wong2 commented Dec 29, 2025

Yeah this is what's blocking our AI SDK v6 migration.

@ldriss

ldriss commented Dec 29, 2025

+1

@fwang2002

fwang2002 commented Dec 30, 2025

There are asProviderV3 and asLanguageModelV3 helpers in AI SDK v6, but they are not exported. Copying those two files may help until this provider is ready.

@vzt7

vzt7 commented Dec 31, 2025

Any update? 🙏

@ldriss

ldriss commented Dec 31, 2025

No updates? As a user on the enterprise plan, I really need v6 for agents, and here we are. I know the team is putting in a lot of effort, but two weeks with no single commit or update here is crazy.

@mattapperson
Contributor

mattapperson commented Jan 1, 2026

Tested locally with AI SDK v6. Found a critical issue:

LanguageModelV3FinishReason type mismatch

In AI SDK v6, finishReason is now an object { unified, raw } instead of a string. The PR is still returning strings like 'stop' and 'tool-calls'.

Once this is fixed, we can merge and work on a release. Thanks for this!

@ldriss

ldriss commented Jan 1, 2026

Thank you @mattapperson

In AI SDK v6, the LanguageModelV3FinishReason type changed from a string
to an object with `unified` and `raw` properties. This change ensures
the provider returns the correct format:

- `unified`: The normalized finish reason (stop, length, tool-calls, etc.)
- `raw`: The original finish reason string from the API

Changes:
- Update mapOpenRouterFinishReason to return { unified, raw } object
- Add createFinishReason helper for creating finish reasons
- Update chat and completion streaming to use new format
- Update all tests to expect object format
@pablof7z
Contributor Author

pablof7z commented Jan 1, 2026

Fixed: LanguageModelV3FinishReason type

Thanks @mattapperson for catching this issue!

The finishReason is now correctly returned as an object with { unified, raw } format as required by AI SDK v6:

{
  unified: 'stop' | 'length' | 'content-filter' | 'tool-calls' | 'error' | 'other',
  raw: string | undefined  // Original finish reason from OpenRouter
}

All tests pass with the stable AI SDK v6 releases.
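The mapping behind this fix can be sketched as follows. This is a hedged sketch, not the PR's exact code: the switch cases assume OpenRouter's raw values follow the OpenAI-compatible API ("stop", "length", "tool_calls", "content_filter"), and the exact set handled in the PR may differ.

```typescript
// Hypothetical sketch of mapOpenRouterFinishReason returning the v6
// { unified, raw } object instead of a bare string.

type UnifiedFinishReason =
  | "stop" | "length" | "content-filter" | "tool-calls" | "error" | "other";

interface FinishReason {
  unified: UnifiedFinishReason; // normalized reason understood by AI SDK v6
  raw: string | undefined;      // original finish reason from the API
}

function mapOpenRouterFinishReason(raw: string | null | undefined): FinishReason {
  switch (raw) {
    case "stop":
      return { unified: "stop", raw };
    case "length":
      return { unified: "length", raw };
    case "tool_calls":
      return { unified: "tool-calls", raw };
    case "content_filter":
      return { unified: "content-filter", raw };
    default:
      // Unknown or missing reasons fall through to "other", preserving raw.
      return { unified: "other", raw: raw ?? undefined };
  }
}
```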

@ldriss

ldriss commented Jan 1, 2026

Thanks @pablof7z

@nvti

nvti commented Jan 4, 2026

Any update? 🙏

@wlib

wlib commented Jan 4, 2026

@nvti Maintainers likely are on vacation mode until tomorrow, so I wouldn't expect to see anything until at least then.

@ldriss

ldriss commented Jan 5, 2026

@subtleGradient Thank you so much

@The-Best-Codes

Thank youuuu!!

@haowang1013

Any update?

@ldriss

ldriss commented Jan 7, 2026

waiting

@subtleGradient
Contributor

great minds think alike
sorry I've been incommunicado about the progress on v6
will be working on the whole communication skills thing

but I have a gift for you:

https://github.com/OpenRouterTeam/ai-sdk-provider/releases/tag/v6.0.0-alpha.1

@ldriss

ldriss commented Jan 7, 2026

@subtleGradient LOVE YOUU LOL

@ZenAlexa

ZenAlexa commented Jan 7, 2026

@subtleGradient Thank you so much! Happy 2026❤️

@cpakken

cpakken commented Jan 7, 2026

> great minds think alike sorry I've been incommunicado about the progress on v6 will be working on the whole communication skills thing
>
> but I have a gift for you:
>
> https://github.com/OpenRouterTeam/ai-sdk-provider/releases/tag/v6.0.0-alpha.1

Thanks, but I'm having trouble enabling reasoning options. google/gemini-3-flash-preview no longer returns reasoning even when I include:

```
model: openrouter('google/gemini-3-flash-preview', {
  usage: {
    include: true,
  },
  reasoning: {
    enabled: true,
    effort: 'medium',
  },
})
```

@The-Best-Codes

@subtleGradient Thank you so much!!!

@robert-j-y robert-j-y reopened this Jan 19, 2026
@robert-j-y robert-j-y self-requested a review January 19, 2026 18:37
- Bump version to 2.0.0
- Fix zod peer dependency (^3.25.0 || ^4.0.0)
- Make file content part fields optional and add file_id
- Handle LanguageModelV3ToolApprovalResponsePart in tool responses
- Apply biome formatting fixes
@robert-j-y robert-j-y merged commit 6fd68db into OpenRouterTeam:main Jan 19, 2026
1 check passed
@github-actions github-actions bot mentioned this pull request Jan 19, 2026
@robert-j-y
Contributor

Thank you @pablof7z and everyone else for your help on this!

kesavan-byte pushed a commit to osm-API/ai-sdk-provider that referenced this pull request Feb 13, 2026
* feat: Initial AI SDK v6 support implementation

* feat: Complete AI SDK v6 support

* fix: Complete AI SDK v6 type migration

* chore: update to stable AI SDK v6 releases

* fix: Return LanguageModelV3FinishReason as object { unified, raw }

* fix: address review feedback and merge main branch fixes

* chore: pin devDependency versions

* chore: pin ai peer dep

* docs: update README for AI SDK v6 release

---------

Co-authored-by: Robert Yeakel <212159665+robert-j-y@users.noreply.github.com>
konard added a commit to link-assistant/agent that referenced this pull request Feb 15, 2026
This commit addresses issue #186 where `bun install` shows
"warn: incorrect peer dependency ai@6.0.86" warning.

Root cause: @openrouter/ai-sdk-provider@1.5.4 requires ai@^5.0.0 as
peer dependency, but @link-assistant/agent uses ai@^6.0.1.

Solution:
- Update @openrouter/ai-sdk-provider from ^1.5.4 to ^2.2.3
  (version 2.0.0+ supports AI SDK v6)
- Update @opentui/core from ^0.1.46 to ^0.1.79
- Update @opentui/solid from ^0.1.46 to ^0.1.79

Note: The solid-js peer dependency warning remains due to upstream
issue in @opentui/solid which uses exact version pinning (1.9.9).
Reported at: anomalyco/opentui#689

Added case study documentation in docs/case-studies/issue-186/

References:
- OpenRouterTeam/ai-sdk-provider#307
- https://github.com/OpenRouterTeam/ai-sdk-provider/releases/tag/2.0.0

Fixes #186

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
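The dependency fix in this commit amounts to bumping the provider in the consuming project's package.json. The version ranges below come from the commit message; the surrounding structure is an illustrative fragment, not the full file:

```json
{
  "dependencies": {
    "@openrouter/ai-sdk-provider": "^2.2.3",
    "@opentui/core": "^0.1.79",
    "@opentui/solid": "^0.1.79",
    "ai": "^6.0.1"
  }
}
```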