Conversation

GreyElaina commented Jan 8, 2026

Summary

  • Register @opencode-ai/openai-compatible as a bundled provider with Responses API support
  • Add an omitMaxOutputTokens option for providers that don't support the max_output_tokens parameter
  • Default to the responses() API, with an option to fall back to chat() via useResponsesApi: false

Motivation

Many OpenAI-compatible API proxies (OpenRouter, NewAPI, etc.) now support OpenAI's /responses endpoint. However, @ai-sdk/openai-compatible only supports the Chat Completions API.

This PR exposes the existing internal Responses API implementation (currently only used for GitHub Copilot) to custom providers.

Related: vercel/ai#9723 (stale, 795 commits behind)

Usage

{
  "provider": {
    "my-provider": {
      "npm": "@opencode-ai/openai-compatible",
      "api": "https://my-api.com/v1",
      "env": ["MY_API_KEY"],
      "options": {
        "omitMaxOutputTokens": true
      },
      "models": {
        "gpt-5.2": {}
      }
    }
  }
}

Options

Option               Type     Default  Description
omitMaxOutputTokens  boolean  false    Don't send the max_output_tokens parameter (some proxies don't support it)
useResponsesApi      boolean  true     Use the Responses API; set to false to use the Chat Completions API
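
For example, a config that targets a proxy that rejects max_output_tokens and only implements Chat Completions (the provider name, URL, and model ID are placeholders):

{
  "provider": {
    "my-provider": {
      "npm": "@opencode-ai/openai-compatible",
      "api": "https://my-api.com/v1",
      "env": ["MY_API_KEY"],
      "options": {
        "omitMaxOutputTokens": true,
        "useResponsesApi": false
      },
      "models": {
        "gpt-5.2": {}
      }
    }
  }
}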

Copilot AI review requested due to automatic review settings January 8, 2026 09:20

github-actions bot commented Jan 8, 2026

The following comment was made by an LLM; it may be inaccurate:

No duplicate PRs found

…e providers

- Register @opencode-ai/openai-compatible as a bundled provider with Responses API support
- Add an omitMaxOutputTokens option for providers that don't support the max_output_tokens parameter
- Default to the responses() API, with an option to fall back to chat() via useResponsesApi: false

This enables custom providers (like OpenRouter, privnode, etc.) that support
OpenAI's /responses endpoint to work with OpenCode.

Usage in opencode.json:
{
  "provider": {
    "my-provider": {
      "npm": "@opencode-ai/openai-compatible",
      "api": "https://my-api.com/v1",
      "options": {
        "omitMaxOutputTokens": true
      }
    }
  }
}

Copilot AI left a comment


Pull request overview

This PR adds Responses API support for custom OpenAI-compatible providers by registering @opencode-ai/openai-compatible as a bundled provider. It introduces two new configuration options: omitMaxOutputTokens to exclude the max_output_tokens parameter for incompatible proxies, and useResponsesApi to control whether the Responses API or Chat Completions API is used.

Key changes:

  • Register the @opencode-ai/openai-compatible provider with support for both the Chat and Responses APIs
  • Add an omitMaxOutputTokens option to conditionally omit the max_output_tokens parameter
  • Default to the Responses API, with a fallback to the Chat API via useResponsesApi: false

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.

  • packages/opencode/src/provider/sdk/openai-compatible/src/responses/openai-responses-language-model.ts: modified to conditionally include the max_output_tokens parameter based on the omitMaxOutputTokens config option
  • packages/opencode/src/provider/sdk/openai-compatible/src/responses/openai-config.ts: added the omitMaxOutputTokens field to the OpenAIConfig type; removed documentation for fileIdPrefixes
  • packages/opencode/src/provider/sdk/openai-compatible/src/openai-compatible-provider.ts: added the omitMaxOutputTokens option to provider settings and passed it through to the responses model; removed documentation for existing fields
  • packages/opencode/src/provider/provider.ts: registered @opencode-ai/openai-compatible as a bundled provider and added API selection logic based on the useResponsesApi option


Comment on lines +982 to +989
} else if (model.api.npm === "@opencode-ai/openai-compatible") {
  language =
    provider.options?.useResponsesApi === false
      ? (sdk as any).chat(model.api.id)
      : (sdk as any).responses(model.api.id)
} else {
  language = sdk.languageModel(model.api.id)
}

Copilot AI Jan 8, 2026


The new API selection logic for @opencode-ai/openai-compatible providers lacks test coverage. The conditional logic that determines whether to use sdk.chat() or sdk.responses() based on the useResponsesApi option should be tested. Consider adding tests that verify: 1) the default behavior uses responses(), 2) setting useResponsesApi to false uses chat(), and 3) the omitMaxOutputTokens option is properly passed through to the underlying model.
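
A minimal sketch of what such tests could look like, assuming bun:test and a hypothetical selectLanguageModel helper that extracts the conditional from provider.ts purely for illustration (in the PR the logic runs inline, not through a helper):

import { describe, expect, test } from "bun:test"

// Hypothetical stand-in for the provider SDK surface; mirrors the
// chat()/responses() pair used in the conditional under review.
type FakeSdk = {
  chat: (id: string) => string
  responses: (id: string) => string
}

// Extracted version of the selection logic: default to responses(),
// fall back to chat() only when useResponsesApi is explicitly false.
function selectLanguageModel(sdk: FakeSdk, options: { useResponsesApi?: boolean } | undefined, id: string) {
  return options?.useResponsesApi === false ? sdk.chat(id) : sdk.responses(id)
}

const sdk: FakeSdk = {
  chat: (id) => `chat:${id}`,
  responses: (id) => `responses:${id}`,
}

describe("openai-compatible API selection", () => {
  test("defaults to responses()", () => {
    expect(selectLanguageModel(sdk, undefined, "gpt-5.2")).toBe("responses:gpt-5.2")
  })

  test("useResponsesApi: false falls back to chat()", () => {
    expect(selectLanguageModel(sdk, { useResponsesApi: false }, "gpt-5.2")).toBe("chat:gpt-5.2")
  })
})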

* - OpenAI: ['file-'] for IDs like 'file-abc123'
* - Azure OpenAI: ['assistant-'] for IDs like 'assistant-abc123'
*/
fileIdPrefixes?: readonly string[]

Copilot AI Jan 8, 2026


The newly added omitMaxOutputTokens field lacks documentation explaining its purpose and usage. Consider adding a JSDoc comment to describe when and why this option should be used, similar to the documentation that was removed for the fileIdPrefixes field.

Suggested change
fileIdPrefixes?: readonly string[]
fileIdPrefixes?: readonly string[]
/**
 * If true, do not send a `max_output_tokens` (or equivalent) parameter to the
 * OpenAI-compatible API, even when a max output value is available.
 *
 * Use this when targeting backends that either do not support a
 * `max_output_tokens` setting or have their own fixed token limits, to avoid
 * request validation errors or conflicting limits.
 */

@GreyElaina GreyElaina force-pushed the feat/openai-compatible-responses-api branch from b68ccc1 to acfc895 Compare January 8, 2026 09:25
@GreyElaina GreyElaina closed this Jan 9, 2026