feat(provider): add Responses API support for custom OpenAI-compatible providers #7304
Conversation
The following comment was made by an LLM and may be inaccurate: No duplicate PRs found.
feat(provider): add Responses API support for custom OpenAI-compatible providers
- Register `@opencode-ai/openai-compatible` as a bundled provider with Responses API support
- Add `omitMaxOutputTokens` option for providers that don't support the `max_output_tokens` parameter
- Default to the `responses()` API, with an option to fall back to `chat()` via `useResponsesApi: false`

This enables custom providers (like OpenRouter, privnode, etc.) that support OpenAI's `/responses` endpoint to work with OpenCode.
Usage in `opencode.json`:

```json
{
  "provider": {
    "my-provider": {
      "npm": "@opencode-ai/openai-compatible",
      "api": "https://my-api.com/v1",
      "options": {
        "omitMaxOutputTokens": true
      }
    }
  }
}
```
Pull request overview
This PR adds Responses API support for custom OpenAI-compatible providers by registering `@opencode-ai/openai-compatible` as a bundled provider. It introduces two new configuration options: `omitMaxOutputTokens`, which excludes the `max_output_tokens` parameter for incompatible proxies, and `useResponsesApi`, which controls whether the Responses API or the Chat Completions API is used.
Key changes:
- Register `@opencode-ai/openai-compatible` provider with support for both Chat and Responses APIs
- Add `omitMaxOutputTokens` option to conditionally omit the `max_output_tokens` parameter
- Default to Responses API with fallback to Chat API via `useResponsesApi: false`
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| `packages/opencode/src/provider/sdk/openai-compatible/src/responses/openai-responses-language-model.ts` | Modified to conditionally include the `max_output_tokens` parameter based on the `omitMaxOutputTokens` config option (a sketch follows this table) |
| `packages/opencode/src/provider/sdk/openai-compatible/src/responses/openai-config.ts` | Added `omitMaxOutputTokens` field to the `OpenAIConfig` type; removed documentation for `fileIdPrefixes` |
| `packages/opencode/src/provider/sdk/openai-compatible/src/openai-compatible-provider.ts` | Added `omitMaxOutputTokens` option to provider settings and passed it through to the responses model; removed documentation for existing fields |
| `packages/opencode/src/provider/provider.ts` | Registered `@opencode-ai/openai-compatible` as a bundled provider and added API selection logic based on the `useResponsesApi` option |
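The conditional omission in the language model file is not shown in this thread. A minimal sketch of what such a guard could look like, assuming a request-body builder; `buildRequestBody` and `ResponsesConfig` are hypothetical names, and only `omitMaxOutputTokens` and `max_output_tokens` come from the PR itself:

```ts
// Hypothetical sketch of the conditional described above; the real diff is
// not shown here, so everything except omitMaxOutputTokens / max_output_tokens
// is an assumption.
interface ResponsesConfig {
  omitMaxOutputTokens?: boolean
}

function buildRequestBody(
  config: ResponsesConfig,
  modelId: string,
  input: unknown,
  maxOutputTokens?: number,
): Record<string, unknown> {
  const body: Record<string, unknown> = { model: modelId, input }
  // Only attach max_output_tokens when the target backend supports it.
  if (!config.omitMaxOutputTokens && maxOutputTokens != null) {
    body.max_output_tokens = maxOutputTokens
  }
  return body
}
```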
In `packages/opencode/src/provider/provider.ts`:

```ts
} else if (model.api.npm === "@opencode-ai/openai-compatible") {
  language =
    provider.options?.useResponsesApi === false
      ? (sdk as any).chat(model.api.id)
      : (sdk as any).responses(model.api.id)
} else {
  language = sdk.languageModel(model.api.id)
}
```
Copilot AI · Jan 8, 2026:
The new API selection logic for `@opencode-ai/openai-compatible` providers lacks test coverage. The conditional logic that determines whether to use `sdk.chat()` or `sdk.responses()` based on the `useResponsesApi` option should be tested. Consider adding tests that verify: 1) the default behavior uses `responses()`, 2) setting `useResponsesApi` to `false` uses `chat()`, and 3) the `omitMaxOutputTokens` option is properly passed through to the underlying model.
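A hedged sketch of what such tests could look like, assuming the Bun test runner; `selectLanguageModel` and `makeSdk` are illustrative stand-ins that mirror the branch shown above, not actual exports of `provider.ts`:

```ts
import { describe, expect, test } from "bun:test"

type FakeModel = { id: string; api: "chat" | "responses" | "languageModel" }

// Minimal stand-in for the SDK object the selection logic receives.
function makeSdk() {
  return {
    chat: (id: string): FakeModel => ({ id, api: "chat" }),
    responses: (id: string): FakeModel => ({ id, api: "responses" }),
    languageModel: (id: string): FakeModel => ({ id, api: "languageModel" }),
  }
}

// Mirrors the conditional in provider.ts quoted above.
function selectLanguageModel(
  sdk: ReturnType<typeof makeSdk>,
  npm: string,
  id: string,
  options?: { useResponsesApi?: boolean },
): FakeModel {
  if (npm === "@opencode-ai/openai-compatible") {
    return options?.useResponsesApi === false ? sdk.chat(id) : sdk.responses(id)
  }
  return sdk.languageModel(id)
}

describe("openai-compatible API selection", () => {
  test("defaults to responses()", () => {
    const model = selectLanguageModel(makeSdk(), "@opencode-ai/openai-compatible", "gpt-x")
    expect(model.api).toBe("responses")
  })

  test("useResponsesApi: false falls back to chat()", () => {
    const model = selectLanguageModel(makeSdk(), "@opencode-ai/openai-compatible", "gpt-x", {
      useResponsesApi: false,
    })
    expect(model.api).toBe("chat")
  })

  test("other providers use languageModel()", () => {
    const model = selectLanguageModel(makeSdk(), "@ai-sdk/openai", "gpt-x")
    expect(model.api).toBe("languageModel")
  })
})
```

Extracting the selection into a small pure function like this is one way to make the branch testable without instantiating a full SDK.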
In `packages/opencode/src/provider/sdk/openai-compatible/src/responses/openai-config.ts`:

```ts
 * - OpenAI: ['file-'] for IDs like 'file-abc123'
 * - Azure OpenAI: ['assistant-'] for IDs like 'assistant-abc123'
 */
fileIdPrefixes?: readonly string[]
```
Copilot AI · Jan 8, 2026:
The newly added `omitMaxOutputTokens` field lacks documentation explaining its purpose and usage. Consider adding a JSDoc comment to describe when and why this option should be used, similar to the documentation that was removed for the `fileIdPrefixes` field.
Suggested change:

```ts
fileIdPrefixes?: readonly string[]

/**
 * If true, do not send a `max_output_tokens` (or equivalent) parameter to the
 * OpenAI-compatible API, even when a max output value is available.
 *
 * Use this when targeting backends that either do not support a
 * `max_output_tokens` setting or have their own fixed token limits, to avoid
 * request validation errors or conflicting limits.
 */
```
Force-pushed from b68ccc1 to acfc895.
Summary
- Register `@opencode-ai/openai-compatible` as a bundled provider with Responses API support
- Add `omitMaxOutputTokens` option for providers that don't support the `max_output_tokens` parameter
- Default to the `responses()` API, with an option to fall back to `chat()` via `useResponsesApi: false`

Motivation

Many OpenAI-compatible API proxies (OpenRouter, NewAPI, etc.) now support OpenAI's `/responses` endpoint. However, `@ai-sdk/openai-compatible` only supports the Chat Completions API. This PR exposes the existing internal Responses API implementation (currently only used for GitHub Copilot) to custom providers.
Related: vercel/ai#9723 (stale, 795 commits behind)
Usage

```json
{
  "provider": {
    "my-provider": {
      "npm": "@opencode-ai/openai-compatible",
      "api": "https://my-api.com/v1",
      "env": ["MY_API_KEY"],
      "options": {
        "omitMaxOutputTokens": true
      },
      "models": {
        "gpt-5.2": {}
      }
    }
  }
}
```
Options

| Option | Description |
|---|---|
| `omitMaxOutputTokens` | Omit the `max_output_tokens` parameter (some proxies don't support it) |
| `useResponsesApi` | Set to `false` to use the Chat Completions API |
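To force the Chat Completions fallback, a configuration along these lines should work (a sketch reusing the schema from the Usage example; `my-provider` and the URL remain placeholders):

```json
{
  "provider": {
    "my-provider": {
      "npm": "@opencode-ai/openai-compatible",
      "api": "https://my-api.com/v1",
      "options": {
        "useResponsesApi": false
      }
    }
  }
}
```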