feat(providers): add Kimi provider via OpenRouter #738
base: main
Conversation
Add new AI provider for Moonshot AI's Kimi models via OpenRouter API.

Backend:
- Create KimiProvider class using OpenAI SDK
- Support kimi-k2.5, kimi-k1.5, kimi-k1.5-long models
- Register provider in factory with priority 4

Frontend:
- Add Kimi settings tab with API key management
- Add to provider tabs with OpenRouter icon

Types:
- Add 'kimi' to ModelProvider and Credentials types
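For orientation, here is a minimal sketch of the core idea described above: an OpenAI-SDK client pointed at OpenRouter's OpenAI-compatible endpoint. The factory registration call and credential field names are assumptions for illustration only; the OpenRouter base URL is public API, and the priority value comes from the description.

```typescript
import OpenAI from 'openai';

// Kimi models are served through OpenRouter's OpenAI-compatible API,
// so the provider can reuse the stock OpenAI SDK with a different baseURL.
class KimiProvider {
  private client: OpenAI;

  constructor(apiKey: string) {
    this.client = new OpenAI({
      apiKey,
      baseURL: 'https://openrouter.ai/api/v1',
    });
  }

  // Stream assistant text for a single-turn prompt.
  async *stream(model: string, prompt: string): AsyncGenerator<string> {
    const stream = await this.client.chat.completions.create({
      model, // e.g. an OpenRouter Moonshot slug such as 'moonshotai/kimi-k2'
      messages: [{ role: 'user', content: prompt }],
      stream: true,
    });
    for await (const chunk of stream) {
      const text = chunk.choices[0]?.delta?.content;
      if (text) yield text;
    }
  }
}

// Hypothetical factory registration; the real registry API in this repo may differ.
// providerFactory.register('kimi', {
//   priority: 4,
//   create: (creds) => new KimiProvider(creds.openrouter.apiKey),
// });
```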
📝 Walkthrough

The changes add Kimi model provider support via OpenRouter integration, introducing a new KimiProvider backend implementation, OpenRouter API credential handling, a UI configuration tab, and model detection utilities across the type system.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant UI as KimiSettingsTab
    participant Store as AppStore
    participant Service as SettingsService
    participant Provider as KimiProvider
    participant API as OpenRouter API
    User->>UI: Enter API key & select model
    UI->>UI: Validate key format
    User->>UI: Test connection
    UI->>Service: Test API with credentials
    Service->>Provider: Create provider with API key
    Provider->>API: POST /chat/completions (test)
    API-->>Provider: Response
    Provider-->>Service: Success/Error
    Service-->>UI: Connection status
    UI->>User: Display test result
    User->>UI: Save configuration
    UI->>Store: Update apiKeys.openrouter
    Store->>Service: Save credentials
    Service->>Service: Merge openrouter into apiKeys
    Service-->>Store: Persisted
    User->>UI: Execute query with Kimi
    UI->>Provider: Execute with selected model
    Provider->>API: Stream chat completions
    API-->>Provider: Streamed content & tool calls
    Provider->>Provider: Convert to ContentBlock format
    Provider-->>UI: Yield incremental updates
    UI->>User: Display streaming response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Summary of Changes

Hello @mviner000, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a new AI provider for Moonshot AI's Kimi models, accessible via the OpenRouter API. The changes encompass both backend and frontend components, enabling seamless integration and user management of Kimi models. Users can now configure their OpenRouter API key and select preferred Kimi models directly within the application settings.
Code Review
This pull request introduces a new AI provider for Moonshot AI's Kimi models, accessed via the OpenRouter API. The changes are comprehensive, spanning the backend, frontend, and shared types. A new KimiProvider is created on the backend using the openai SDK, and it's registered in the provider factory. The frontend is updated with a new settings tab for Kimi, including API key management and model selection. Necessary type definitions for credentials and model providers have been updated accordingly.
The implementation is solid, but I've identified a few areas for improvement in the new KimiProvider, mainly concerning robustness in handling tool calls and some opportunities for code deduplication to enhance maintainability.
```typescript
function convertToolCalls(
  toolCalls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[]
): ContentBlock[] {
  return toolCalls
    .filter((tc) => tc.type === 'function')
    .map((tc) => {
      // Type guard: only function tool calls have the function property
      const funcCall = tc as OpenAI.Chat.Completions.ChatCompletionMessageToolCall & {
        type: 'function';
        function: { name: string; arguments: string };
      };
      return {
        type: 'tool_use' as const,
        tool_use_id: funcCall.id,
        name: funcCall.function.name,
        input: JSON.parse(funcCall.function.arguments || '{}'),
      };
    });
}
```
The convertToolCalls function can be improved for robustness and clarity.
- The type assertion on line 94 is unnecessary. The preceding `.filter((tc) => tc.type === 'function')` should already provide sufficient type narrowing for TypeScript to understand that `tc` is a function tool call. You can directly access `tc.id` and `tc.function`.
- The `JSON.parse` on line 102 is unsafe and can crash the process if the model returns invalid JSON in the `arguments` field. It's recommended to wrap it in a `try...catch` block to handle potential parsing errors gracefully.
Here's a suggested implementation that addresses both points:
```typescript
function convertToolCalls(
  toolCalls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[]
): ContentBlock[] {
  return toolCalls
    .filter((tc) => tc.type === 'function')
    .map((tc) => {
      let input = {};
      try {
        input = JSON.parse(tc.function.arguments || '{}');
      } catch (e) {
        logger.warn('Failed to parse tool call arguments as JSON', {
          tool_call_id: tc.id,
          arguments: tc.function.arguments,
          error: e,
        });
      }
      return {
        type: 'tool_use' as const,
        tool_use_id: tc.id,
        name: tc.function.name,
        input,
      };
    });
}
```

```typescript
const textContent = msg.content
  .filter((block) => block.type === 'text' && block.text)
  .map((block) => block.text)
  .join('\n');
```
This logic to extract text content from an array of content blocks is duplicated on lines 214-217. To improve maintainability and reduce code duplication, consider extracting this into a small helper function.
For example, you could create a function like this at the file level:
```typescript
function getTextFromContentBlocks(blocks: readonly ContentBlock[]): string {
  return blocks
    .filter((block): block is { type: 'text'; text: string } => block.type === 'text' && !!block.text)
    .map((block) => block.text)
    .join('\n');
}
```

You could then use this helper function here and on line 214 to process both `msg.content` and the prompt.
```typescript
          );
        }
        currentToolCall = {
          id: tc.id || `call_${Date.now()}_${tc.index}`,
```
Using Date.now() to generate a fallback ID for tool calls is not guaranteed to be unique, especially if multiple tool calls are processed in the same millisecond. This could lead to issues with tracking tool calls correctly.
Since you're on Node.js >= 22, you can use the built-in crypto.randomUUID() for a more robust and truly unique identifier.
First, add this import at the top of the file:
```typescript
import { randomUUID } from 'crypto';
```

Then, update this line to use it. The `call_` prefix and index are not necessary with a UUID.
```typescript
id: tc.id || randomUUID(),
```
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
libs/types/src/settings.ts (1)
1052-1065: ⚠️ Potential issue | 🟠 Major

Bump the credentials schema version after adding `openrouter`.

Without a version bump, migrations/backfills may not run and older credentials may miss the new key.

🔧 Suggested update

```diff
-export const CREDENTIALS_VERSION = 1;
+export const CREDENTIALS_VERSION = 2;
```
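To make the risk concrete, here is a hypothetical version-gated migration; the migration mechanism and field names are illustrative, not code from this repo. If `CREDENTIALS_VERSION` stays at 1, a check like this never fires, so stored credentials are never backfilled with the new `openrouter` key.

```typescript
// Hypothetical migration runner; shapes are illustrative.
const CREDENTIALS_VERSION = 2;

interface StoredCredentials {
  version: number;
  openrouter?: { apiKey: string };
}

function migrateCredentials(stored: StoredCredentials): StoredCredentials {
  // Already current: nothing to do.
  if (stored.version >= CREDENTIALS_VERSION) return stored;
  return {
    ...stored,
    openrouter: stored.openrouter ?? { apiKey: '' }, // backfill the new key
    version: CREDENTIALS_VERSION,
  };
}
```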
apps/server/src/services/settings-service.ts (1)

713-738: ⚠️ Potential issue | 🟡 Minor

Add OpenRouter to masked credentials return type and payload.

`getMaskedCredentials()` omits OpenRouter despite it being a fully supported credentials provider (used by Kimi). Frontend API consumers won't see the OpenRouter configuration status. Extend the return type and add the `openrouter` field to the return payload.
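A sketch of what that extension might look like; the return shape and masking helper are assumptions about the existing service, and only the `openrouter` field itself comes from the review:

```typescript
// Hypothetical shapes; the real settings-service types may differ.
interface MaskedCredential {
  configured: boolean;
  maskedKey: string;
}

function maskKey(key: string): string {
  // Reveal only the last four characters, e.g. '****abcd'.
  return key.length > 4 ? `****${key.slice(-4)}` : '****';
}

function getMaskedCredentials(creds: {
  openrouter?: { apiKey: string };
}): { openrouter?: MaskedCredential } {
  return {
    openrouter: creds.openrouter
      ? { configured: true, maskedKey: maskKey(creds.openrouter.apiKey) }
      : undefined,
  };
}
```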
🤖 Fix all issues with AI agents
In `@apps/server/src/providers/kimi-provider.ts`:
- Around line 87-104: In convertToolCalls, guard against malformed JSON in
funcCall.function.arguments by wrapping JSON.parse(funcCall.function.arguments
|| '{}') in a try-catch inside the map over toolCalls; on parse failure return a
safe fallback (e.g., the raw arguments string or an empty object) for the input
field so convertToolCalls (and the returned ContentBlock with type 'tool_use'
and tool_use_id) never throws and downstream processing continues.
- Around line 247-311: The current streaming logic uses a single currentToolCall
which causes interleaved tool calls to be lost; replace that with a Map<number,
Partial<...>> (keyed by each delta.tool_calls[].index) to accumulate tool call
fragments per index: in the loop over stream/delta.tool_calls, look up or create
the map entry for tc.index, append name/arguments into that entry, and when a
tool call signals completion (tc.id present or at stream end) move the finished
entry into accumulatedToolCalls (preserving the original id/type/function
shape); update references to currentToolCall to use the map lookup and ensure
the finalization block flushes any remaining map entries into
accumulatedToolCalls.
In `@apps/ui/src/components/views/settings-view/providers/kimi-settings-tab.tsx`:
- Around line 235-243: The visibility toggle Button (rendered with Eye/EyeOff
and using showApiKey and setShowApiKey) lacks an accessible name; update the
Button component to include an aria-label that reflects the action (e.g.,
aria-label={showApiKey ? "Hide API key" : "Show API key"}) and optionally add
aria-pressed={showApiKey} so screen readers receive state; ensure you place
these attributes on the same Button that calls setShowApiKey to keep semantics
and interaction consistent.
- Around line 50-52: selectedModel is only local state and isn’t persisted; wire
it into the app store by adding a kimiDefaultModel field and its setter, include
"kimiDefaultModel" in SETTINGS_FIELDS_TO_SYNC inside use-settings-sync.ts, and
replace/local-update selectedModel usage to call the store setter on change
(mirror how cursor-settings-tab.tsx uses cursorDefaultModel and
setCursorDefaultModel) so the chosen model is persisted across
navigation/reload; look for selectedModel/setSelectedModel in
kimi-settings-tab.tsx and add/dispatch the corresponding kimiDefaultModel and
setKimiDefaultModel store hooks instead of keeping local-only state.
In `@libs/types/src/provider-utils.ts`:
- Around line 295-299: The normalization for OpenRouter Moonshot AI models
currently prepends PROVIDER_PREFIXES.kimi unconditionally, causing double
'kimi-' (e.g., 'kimi-kimi-k2.5'); update the block that handles
model.startsWith('moonshotai/') to strip 'moonshotai/' into kimiModel and then
return kimiModel as-is if it already starts with PROVIDER_PREFIXES.kimi,
otherwise prepend PROVIDER_PREFIXES.kimi; reference the model variable and
PROVIDER_PREFIXES.kimi in your change.
🧹 Nitpick comments (1)
apps/server/src/providers/kimi-provider.ts (1)
141-176: Normalize model aliases before resolving OpenRouter IDs.

Use `resolveModelString()` before model resolution to comply with the alias-normalization requirement.

♻️ Suggested change

```diff
-import { validateBareModelId, type Credentials } from '@automaker/types';
+import { validateBareModelId, type Credentials } from '@automaker/types';
+import { resolveModelString } from '@automaker/model-resolver';
@@
-  const openRouterModel = getOpenRouterModelId(model);
+  const resolvedModel = resolveModelString(model);
+  const openRouterModel = getOpenRouterModelId(resolvedModel);
```

As per coding guidelines, use `resolveModelString()` from `@automaker/model-resolver` to convert model aliases (haiku, sonnet, opus) to full model names before making API calls.
```typescript
function convertToolCalls(
  toolCalls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[]
): ContentBlock[] {
  return toolCalls
    .filter((tc) => tc.type === 'function')
    .map((tc) => {
      // Type guard: only function tool calls have the function property
      const funcCall = tc as OpenAI.Chat.Completions.ChatCompletionMessageToolCall & {
        type: 'function';
        function: { name: string; arguments: string };
      };
      return {
        type: 'tool_use' as const,
        tool_use_id: funcCall.id,
        name: funcCall.function.name,
        input: JSON.parse(funcCall.function.arguments || '{}'),
      };
    });
}
```
🧩 Analysis chain
🌐 Web query:
OpenAI SDK v6 streaming tool_calls function arguments partial JSON streaming behavior
💡 Result:
In OpenAI SDK v6 streaming, tool_calls[].function.arguments is streamed as a string in incremental fragments (often not valid JSON until the end)—so you should accumulate deltas and only JSON.parse() once the “done/final” event arrives. Also, even the final string can be invalid or contain extra fields, so validate before executing the tool call. [1][2][3]
What you'll see while streaming

- Chat Completions streaming (SDK v6 event handlers):
  - `tool_calls.function.arguments.delta` → emits partial argument text
  - `tool_calls.function.arguments.done` → emits the complete arguments string [1]
- Responses API streaming (event stream):
  - `response.function_call_arguments.delta` → partial JSON-string fragment
  - `response.function_call_arguments.done` → final JSON string for arguments [2]

Correct handling pattern

- Key by tool call index / item_id.
- Append each `delta` fragment in order.
- Parse only when you receive the corresponding `*.done`.

Why parsing early breaks

The model may stream something like:

- delta #1: `{ "city": "San`
- delta #2: `Francisco", "unit": "c" }`

Only after concatenation is it parseable JSON. [2][3]
Sources: SDK v6 streaming event names for tool-call argument deltas/done [1]; Responses streaming response.function_call_arguments.delta/done semantics [2]; arguments is a JSON-formatted string that may be invalid and must be validated [3].
🏁 Script executed:

```bash
# Search for how kimi-provider is used in the codebase
rg "kimi-provider\|convertToolCalls" --type ts --type tsx -B 2 -A 5
```

Repository: AutoMaker-Org/automaker

Length of output: 94

🏁 Script executed:

```bash
# Check if there are existing error handling patterns for JSON parsing in the codebase
rg "JSON\.parse" apps/server/src --type ts --type tsx -B 2 -A 2 | head -50
```

Repository: AutoMaker-Org/automaker

Length of output: 94

🏁 Script executed:

```bash
# Look at the full kimi-provider file to understand context
wc -l apps/server/src/providers/kimi-provider.ts
```

Repository: AutoMaker-Org/automaker

Length of output: 113

🏁 Script executed:

```bash
cat -n apps/server/src/providers/kimi-provider.ts | head -120
```

Repository: AutoMaker-Org/automaker

Length of output: 4443

🏁 Script executed:

```bash
cat -n apps/server/src/providers/kimi-provider.ts | tail -350
```

Repository: AutoMaker-Org/automaker

Length of output: 12478

🏁 Script executed:

```bash
# Find where convertToolCalls is used
rg "convertToolCalls" --type ts -B 3 -A 3
```

Repository: AutoMaker-Org/automaker

Length of output: 1039
Add defensive JSON parsing for tool-call arguments.
The accumulated tool arguments string may be invalid or malformed even after streaming completes, causing JSON.parse() to throw and abort processing. Wrap the parsing in a try-catch with a safe fallback:
Suggested fix
```diff
+function safeJsonParse(value: string | undefined): Record<string, unknown> {
+  if (!value) return {};
+  try {
+    return JSON.parse(value) as Record<string, unknown>;
+  } catch {
+    return { _raw: value };
+  }
+}
+
 function convertToolCalls(
   toolCalls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[]
 ): ContentBlock[] {
@@
       return {
         type: 'tool_use' as const,
         tool_use_id: funcCall.id,
         name: funcCall.function.name,
-        input: JSON.parse(funcCall.function.arguments || '{}'),
+        input: safeJsonParse(funcCall.function.arguments),
       };
     });
 }
```

🤖 Prompt for AI Agents
In `@apps/server/src/providers/kimi-provider.ts` around lines 87 - 104, In
convertToolCalls, guard against malformed JSON in funcCall.function.arguments by
wrapping JSON.parse(funcCall.function.arguments || '{}') in a try-catch inside
the map over toolCalls; on parse failure return a safe fallback (e.g., the raw
arguments string or an empty object) for the input field so convertToolCalls
(and the returned ContentBlock with type 'tool_use' and tool_use_id) never
throws and downstream processing continues.
```typescript
// Accumulate the response
let accumulatedContent = '';
let accumulatedToolCalls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[] = [];
let currentToolCall: Partial<OpenAI.Chat.Completions.ChatCompletionMessageToolCall> | null =
  null;

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta;

  if (!delta) continue;

  // Handle text content
  if (delta.content) {
    accumulatedContent += delta.content;

    // Yield incremental text updates
    yield {
      type: 'assistant',
      message: {
        role: 'assistant',
        content: [{ type: 'text', text: accumulatedContent }],
      },
    };
  }

  // Handle tool calls
  if (delta.tool_calls) {
    for (const tc of delta.tool_calls) {
      if (tc.index !== undefined) {
        // New tool call or continuation
        if (!currentToolCall || tc.id) {
          // Start new tool call
          if (currentToolCall && currentToolCall.id) {
            accumulatedToolCalls.push(
              currentToolCall as OpenAI.Chat.Completions.ChatCompletionMessageToolCall
            );
          }
          currentToolCall = {
            id: tc.id || `call_${Date.now()}_${tc.index}`,
            type: 'function',
            function: {
              name: tc.function?.name || '',
              arguments: tc.function?.arguments || '',
            },
          };
        } else if (currentToolCall.function) {
          // Append to existing tool call
          if (tc.function?.name) {
            currentToolCall.function.name += tc.function.name;
          }
          if (tc.function?.arguments) {
            currentToolCall.function.arguments += tc.function.arguments;
          }
        }
      }
    }
  }
}

// Finalize any pending tool call
if (currentToolCall && currentToolCall.id && currentToolCall.function) {
  accumulatedToolCalls.push(
    currentToolCall as OpenAI.Chat.Completions.ChatCompletionMessageToolCall
  );
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
find apps/server/src/providers -name "kimi-provider.ts" -type f
```

Repository: AutoMaker-Org/automaker

Length of output: 109

🏁 Script executed:

```bash
cat -n apps/server/src/providers/kimi-provider.ts | head -350 | tail -110
```

Repository: AutoMaker-Org/automaker

Length of output: 4117
🌐 Web query:
OpenAI SDK v6 chat completions streaming tool_calls interleaved multiple indices behavior
💡 Result:
In the Chat Completions streaming response, tool calls arrive on choices[i].delta.tool_calls as an array of partial (“delta”) tool-call objects, and each tool call is identified by tool_calls[j].index (plus id, type, and function{name,arguments} when present). (platform.openai.com)
What "multiple indices" / "interleaved" means

- When the model decides to call multiple tools in one assistant turn, you'll see multiple distinct `tool_calls[*].index` values (typically `0, 1, 2, ...`). (platform.openai.com)
- In streaming, the model can emit argument text in many chunks, and those chunks can be interleaved across indices (e.g., you might receive some arguments for index `0`, then some for index `1`, then back to index `0`). The SDK's streaming helpers are designed around this reality and treat `index` as the stable key for routing/accumulating deltas. (deepwiki.com)
- Practically: you must accumulate per tool-call index, concatenating `function.arguments` deltas until complete. (deepwiki.com)

What fields repeat (and what often doesn't)

- The API schema defines `tool_calls[*]` entries with `index`, `id`, `type`, and `function{name, arguments}`. (platform.openai.com)
- In real streams, some fields may only appear in the first delta for a given index, while `function.arguments` tends to be delivered incrementally, so your accumulator should "fill once, then append arguments". (This is the motivation behind SDK helper eventing like incremental tool-argument delta events keyed by `index`.) (deepwiki.com)

Why the index is important (and required by SDK helpers)

If an OpenAI-compatible provider omits `choices[].delta.tool_calls[].index`, it breaks compatibility with the OpenAI JS helper `client.beta.chat.completions.stream`, because the helper relies on `index` to merge tool-call deltas correctly, especially when multiple tool calls are present and/or interleaved. (github.com)

Minimal accumulator pattern (what you should implement)

Maintain a map keyed by `tool_call.index`, and for each chunk:

- for each `deltaToolCall` in `chunk.choices[0].delta.tool_calls || []`:
  - initialize the stored tool call object for that `index` if needed (copy `id`, `type`, `function.name` when present)
  - append `deltaToolCall.function.arguments` (string-concat) onto the stored `function.arguments` buffer

This matches the API contract (deltas + index) and the SDK's intended merging strategy for streamed tool calls. (platform.openai.com)
Citations:
- 1: https://platform.openai.com/docs/api-reference/chat-streaming/streaming
- 2: https://platform.openai.com/docs/api-reference/chat-streaming/streaming
- 3: https://deepwiki.com/openai/openai-node/5.1-streaming?utm_source=openai
- 4: https://deepwiki.com/openai/openai-node/5.1-streaming?utm_source=openai
- 5: https://platform.openai.com/docs/api-reference/chat-streaming/streaming
- 6: https://deepwiki.com/openai/openai-node/5.3-function-calling-and-tools?utm_source=openai
- 7: OpenAI-compatible API tool calls have no index ollama/ollama#7881
- 8: https://platform.openai.com/docs/api-reference/chat-streaming/streaming
Handle concurrent streaming tool calls by index.
When OpenAI streams multiple tool calls, they can be interleaved by index (e.g., arguments for index 0, then index 1, then back to index 0). The single currentToolCall variable will lose or merge calls when indices arrive out of sequence.
Use a Map<number, ToolCall> keyed by tool_calls[].index to accumulate each tool call independently, then convert to an array when finalized:
🔧 Fix: accumulation by index
```diff
-      let accumulatedToolCalls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[] = [];
-      let currentToolCall: Partial<OpenAI.Chat.Completions.ChatCompletionMessageToolCall> | null =
-        null;
+      const toolCallMap = new Map<
+        number,
+        Partial<OpenAI.Chat.Completions.ChatCompletionMessageToolCall>
+      >();
@@
-      if (delta.tool_calls) {
-        for (const tc of delta.tool_calls) {
-          if (tc.index !== undefined) {
-            // New tool call or continuation
-            if (!currentToolCall || tc.id) {
-              // Start new tool call
-              if (currentToolCall && currentToolCall.id) {
-                accumulatedToolCalls.push(
-                  currentToolCall as OpenAI.Chat.Completions.ChatCompletionMessageToolCall
-                );
-              }
-              currentToolCall = {
-                id: tc.id || `call_${Date.now()}_${tc.index}`,
-                type: 'function',
-                function: {
-                  name: tc.function?.name || '',
-                  arguments: tc.function?.arguments || '',
-                },
-              };
-            } else if (currentToolCall.function) {
-              // Append to existing tool call
-              if (tc.function?.name) {
-                currentToolCall.function.name += tc.function.name;
-              }
-              if (tc.function?.arguments) {
-                currentToolCall.function.arguments += tc.function.arguments;
-              }
-            }
-          }
-        }
-      }
+      if (delta.tool_calls) {
+        for (const tc of delta.tool_calls) {
+          if (tc.index === undefined) continue;
+          const existing =
+            toolCallMap.get(tc.index) ?? {
+              id: tc.id || `call_${Date.now()}_${tc.index}`,
+              type: 'function',
+              function: { name: '', arguments: '' },
+            };
+          if (tc.function?.name) existing.function!.name += tc.function.name;
+          if (tc.function?.arguments) existing.function!.arguments += tc.function.arguments;
+          toolCallMap.set(tc.index, existing);
+        }
+      }
@@
-      if (currentToolCall && currentToolCall.id && currentToolCall.function) {
-        accumulatedToolCalls.push(
-          currentToolCall as OpenAI.Chat.Completions.ChatCompletionMessageToolCall
-        );
-      }
+      const accumulatedToolCalls =
+        Array.from(toolCallMap.values()) as OpenAI.Chat.Completions.ChatCompletionMessageToolCall[];
```

🤖 Prompt for AI Agents
In `@apps/server/src/providers/kimi-provider.ts` around lines 247 - 311, The
current streaming logic uses a single currentToolCall which causes interleaved
tool calls to be lost; replace that with a Map<number, Partial<...>> (keyed by
each delta.tool_calls[].index) to accumulate tool call fragments per index: in
the loop over stream/delta.tool_calls, look up or create the map entry for
tc.index, append name/arguments into that entry, and when a tool call signals
completion (tc.id present or at stream end) move the finished entry into
accumulatedToolCalls (preserving the original id/type/function shape); update
references to currentToolCall to use the map lookup and ensure the finalization
block flushes any remaining map entries into accumulatedToolCalls.
```typescript
// Model selection state
const [selectedModel, setSelectedModel] = useState<KimiModelId>('kimi-k2.5');
```
🧩 Analysis chain
🏁 Script executed:

```bash
# First, explore the settings provider structure
fd -e tsx 'settings-tab' apps/ui/src/components/views/settings-view/providers/ | head -20
```

Repository: AutoMaker-Org/automaker

Length of output: 450

🏁 Script executed:

```bash
# Get the content of the kimi-settings-tab.tsx file to see the full context
wc -l apps/ui/src/components/views/settings-view/providers/kimi-settings-tab.tsx
```

Repository: AutoMaker-Org/automaker

Length of output: 145

🏁 Script executed:

```bash
# Read the kimi-settings-tab.tsx file
cat -n apps/ui/src/components/views/settings-view/providers/kimi-settings-tab.tsx
```

Repository: AutoMaker-Org/automaker

Length of output: 17322

🏁 Script executed:

```bash
# Search for store patterns and model persistence in the store
rg -n "Model|default" apps/ui/src/store --type ts --type tsx | head -40
```

Repository: AutoMaker-Org/automaker

Length of output: 94

🏁 Script executed:

```bash
# Check other provider tabs to see how they handle model selection
fd -e tsx 'settings-tab' apps/ui/src/components/views/settings-view/providers/ -x basename {} \; | sort
```

Repository: AutoMaker-Org/automaker

Length of output: 185

🏁 Script executed:

```bash
# Check claude-settings-tab to see how it handles model persistence
cat -n apps/ui/src/components/views/settings-view/providers/claude-settings-tab.tsx | head -100
```

Repository: AutoMaker-Org/automaker

Length of output: 3113

🏁 Script executed:

```bash
# Search for model persistence patterns in other settings tabs
rg -n "selectedModel|Model.*useState|setApiKeys.*model" apps/ui/src/components/views/settings-view/providers/
```

Repository: AutoMaker-Org/automaker

Length of output: 575

🏁 Script executed:

```bash
# Check the app-store structure to see what provider settings are persisted
cat apps/ui/src/store/app-store.ts | head -150
```

Repository: AutoMaker-Org/automaker

Length of output: 3759

🏁 Script executed:

```bash
# Check how Cursor/Codex models are handled in app-store
rg -n "CursorModelId|CodexModelId|OpencodeModelId|PhaseModelConfig" apps/ui/src/store/app-store.ts | head -20
```

Repository: AutoMaker-Org/automaker

Length of output: 1189

🏁 Script executed:

```bash
# Check other provider tabs (cursor, codex, opencode) for model selection patterns
fd -e tsx 'settings-tab' apps/ui/src/components/views/settings-view/providers -x rg -l "useState.*Model|selectedModel"
```

Repository: AutoMaker-Org/automaker

Length of output: 141

🏁 Script executed:

```bash
# Check if Kimi model should be in PhaseModelConfig or DEFAULT_PHASE_MODELS
rg -n "kimi|Kimi" apps/ui/src/store/app-store.ts
```

Repository: AutoMaker-Org/automaker

Length of output: 49

🏁 Script executed:

```bash
# Examine cursor-settings-tab to see how it manages model selection
cat -n apps/ui/src/components/views/settings-view/providers/cursor-settings-tab.tsx | head -120
```

Repository: AutoMaker-Org/automaker

Length of output: 4014

🏁 Script executed:

```bash
# Verify there's no Kimi model persistence in the store or settings sync
rg -n "kimi|Kimi" apps/ui/src/store/
```

Repository: AutoMaker-Org/automaker

Length of output: 49

🏁 Script executed:

```bash
# Check use-settings-sync.ts to see if it handles Kimi models
cat -n apps/ui/src/hooks/use-settings-sync.ts | head -100
```

Repository: AutoMaker-Org/automaker

Length of output: 3989

🏁 Script executed:

```bash
# Check if there's any API integration or settings persistence for Kimi model
rg -n "kimiDefaultModel|kimi.*model|selectedModel.*save|selectedModel.*sync" apps/ui/src --type ts --type tsx
```

Repository: AutoMaker-Org/automaker

Length of output: 94
Model selection isn't persisted, so chosen defaults reset on navigation/reload.
selectedModel is local-only state (line 51) and never synced to the store. Compare with cursor-settings-tab.tsx, which uses cursorDefaultModel from the store and calls setCursorDefaultModel() to persist changes. Wire Kimi model selection into the same persistence mechanism: add kimiDefaultModel to app-store, include it in SETTINGS_FIELDS_TO_SYNC in use-settings-sync.ts, and call the corresponding setter action on model change.
🤖 Prompt for AI Agents
In `@apps/ui/src/components/views/settings-view/providers/kimi-settings-tab.tsx`
around lines 50 - 52, selectedModel is only local state and isn’t persisted;
wire it into the app store by adding a kimiDefaultModel field and its setter,
include "kimiDefaultModel" in SETTINGS_FIELDS_TO_SYNC inside
use-settings-sync.ts, and replace/local-update selectedModel usage to call the
store setter on change (mirror how cursor-settings-tab.tsx uses
cursorDefaultModel and setCursorDefaultModel) so the chosen model is persisted
across navigation/reload; look for selectedModel/setSelectedModel in
kimi-settings-tab.tsx and add/dispatch the corresponding kimiDefaultModel and
setKimiDefaultModel store hooks instead of keeping local-only state.
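A rough sketch of the wiring this comment describes, under the assumption that the app store is Zustand-based; the names `kimiDefaultModel`, `setKimiDefaultModel`, `SETTINGS_FIELDS_TO_SYNC`, and `KimiModelId` come from the review, while the store shape itself is illustrative:

```typescript
// Hypothetical store slice; the real app-store may differ in shape.
import { create } from 'zustand';

type KimiModelId = 'kimi-k2.5' | 'kimi-k1.5' | 'kimi-k1.5-long';

interface AppStoreSlice {
  kimiDefaultModel: KimiModelId;
  setKimiDefaultModel: (model: KimiModelId) => void;
}

export const useAppStore = create<AppStoreSlice>((set) => ({
  kimiDefaultModel: 'kimi-k2.5',
  setKimiDefaultModel: (model) => set({ kimiDefaultModel: model }),
}));

// use-settings-sync.ts: include the new field so it persists across reloads.
// (Assumes SETTINGS_FIELDS_TO_SYNC is a plain string array, as the review implies.)
export const SETTINGS_FIELDS_TO_SYNC = [
  // ...existing fields
  'kimiDefaultModel',
];
```

In the settings tab, the model `onChange` handler would then call `setKimiDefaultModel(model)` instead of only updating local component state.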
```tsx
<Button
  type="button"
  variant="ghost"
  size="icon"
  className="absolute right-0 top-0 h-full px-3 text-muted-foreground hover:text-foreground hover:bg-transparent"
  onClick={() => setShowApiKey(!showApiKey)}
>
  {showApiKey ? <EyeOff className="w-4 h-4" /> : <Eye className="w-4 h-4" />}
</Button>
```
Add an accessible label to the visibility toggle.
The icon-only button lacks an accessible name. Add an aria-label (and optionally aria-pressed).
🔧 Suggested fix
```diff
 <Button
   type="button"
   variant="ghost"
   size="icon"
   className="absolute right-0 top-0 h-full px-3 text-muted-foreground hover:text-foreground hover:bg-transparent"
   onClick={() => setShowApiKey(!showApiKey)}
+  aria-label={showApiKey ? 'Hide API key' : 'Show API key'}
+  aria-pressed={showApiKey}
 >
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
<Button
  type="button"
  variant="ghost"
  size="icon"
  className="absolute right-0 top-0 h-full px-3 text-muted-foreground hover:text-foreground hover:bg-transparent"
  onClick={() => setShowApiKey(!showApiKey)}
  aria-label={showApiKey ? 'Hide API key' : 'Show API key'}
  aria-pressed={showApiKey}
>
  {showApiKey ? <EyeOff className="w-4 h-4" /> : <Eye className="w-4 h-4" />}
</Button>
```
🤖 Prompt for AI Agents
In `@apps/ui/src/components/views/settings-view/providers/kimi-settings-tab.tsx`
around lines 235 - 243, The visibility toggle Button (rendered with Eye/EyeOff
and using showApiKey and setShowApiKey) lacks an accessible name; update the
Button component to include an aria-label that reflects the action (e.g.,
aria-label={showApiKey ? "Hide API key" : "Show API key"}) and optionally add
aria-pressed={showApiKey} so screen readers receive state; ensure you place
these attributes on the same Button that calls setShowApiKey to keep semantics
and interaction consistent.
```typescript
// OpenRouter Kimi format: moonshotai/model -> kimi-model
if (model.startsWith('moonshotai/')) {
  const kimiModel = model.replace('moonshotai/', '');
  return `${PROVIDER_PREFIXES.kimi}${kimiModel}`;
}
```
🧩 Analysis chain
🌐 Web query:
OpenRouter Moonshot AI Kimi model ID format moonshotai
💡 Result:
On OpenRouter, Moonshot AI’s Kimi models use the provider prefix moonshotai/ followed by the model slug:
- Format: `moonshotai/<model-id>`

Examples (valid OpenRouter IDs):

- `moonshotai/kimi-k2` [1]
- `moonshotai/kimi-k2-thinking` [2]
- `moonshotai/kimi-vl-a3b-instruct` [3]
- `moonshotai/kimi-dev-72b` [4]
- `moonshotai/kimi-linear-48b-a3b-instruct` [5]
Usage: set your OpenRouter request’s "model" field to one of the IDs above (or another moonshotai/... listing in OpenRouter’s directory). [3]
Sources: [1] [2] [3] [4] [5]
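As a usage illustration of the "model" field noted above, here is a minimal request through the OpenAI SDK pointed at OpenRouter's OpenAI-compatible endpoint; the endpoint and payload shape are standard OpenRouter/OpenAI API, and the model ID is one of those listed above:

```typescript
import OpenAI from 'openai';

async function main() {
  // OpenRouter exposes an OpenAI-compatible endpoint, so the stock SDK works.
  const client = new OpenAI({
    apiKey: process.env.OPENROUTER_API_KEY,
    baseURL: 'https://openrouter.ai/api/v1',
  });

  const completion = await client.chat.completions.create({
    model: 'moonshotai/kimi-k2', // a valid OpenRouter ID from the list above
    messages: [{ role: 'user', content: 'Hello, Kimi!' }],
  });

  console.log(completion.choices[0]?.message?.content);
}

main();
```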
🏁 Script executed:

```bash
# Search codebase for any references to moonshotai models or tests
rg "moonshotai" --type ts --type tsx --type js --type jsx -A 3 -B 3
```

Repository: AutoMaker-Org/automaker

Length of output: 94

🏁 Script executed:

```bash
# Look for test files or documentation about model normalization
fd -e test -e spec -e doc | head -20
```

Repository: AutoMaker-Org/automaker

Length of output: 49

🏁 Script executed:

```bash
# Check if there are any examples of how models are used in the codebase
rg "kimi-" --type ts --type tsx --type js --type jsx -A 2 -B 2
```

Repository: AutoMaker-Org/automaker

Length of output: 94

🏁 Script executed:

```bash
# Search for PROVIDER_PREFIXES definition and moonshotai references
rg "moonshotai|PROVIDER_PREFIXES" -A 3 -B 1
```

Repository: AutoMaker-Org/automaker

Length of output: 7869

🏁 Script executed:

```bash
# Look for tests related to normalizeModelString or provider utils
rg "normalizeModelString|provider-utils" -l
```

Repository: AutoMaker-Org/automaker

Length of output: 123

🏁 Script executed:

```bash
# Check the actual definition of PROVIDER_PREFIXES.kimi
rg "PROVIDER_PREFIXES\s*=" -A 20
```

Repository: AutoMaker-Org/automaker

Length of output: 1471
Fix double kimi- prefix when normalizing OpenRouter Moonshot AI model IDs.
OpenRouter's actual model IDs for Kimi are formatted as moonshotai/kimi-k2.5, moonshotai/kimi-k2-thinking, etc.—the model slug already includes the kimi- prefix. The current normalization strips the provider prefix and blindly prepends PROVIDER_PREFIXES.kimi, producing invalid IDs like kimi-kimi-k2.5.
Align with the guard logic used by other providers (cursor, codex, opencode) by checking whether the stripped model already carries the prefix:
🔧 Suggested fix
```diff
-  if (model.startsWith('moonshotai/')) {
-    const kimiModel = model.replace('moonshotai/', '');
-    return `${PROVIDER_PREFIXES.kimi}${kimiModel}`;
-  }
+  if (model.startsWith('moonshotai/')) {
+    const kimiModel = model.slice('moonshotai/'.length);
+    return kimiModel.startsWith(PROVIDER_PREFIXES.kimi)
+      ? kimiModel
+      : `${PROVIDER_PREFIXES.kimi}${kimiModel}`;
+  }
```

🤖 Prompt for AI Agents
In `@libs/types/src/provider-utils.ts` around lines 295 - 299, The normalization
for OpenRouter Moonshot AI models currently prepends PROVIDER_PREFIXES.kimi
unconditionally, causing double 'kimi-' (e.g., 'kimi-kimi-k2.5'); update the
block that handles model.startsWith('moonshotai/') to strip 'moonshotai/' into
kimiModel and then return kimiModel as-is if it already starts with
PROVIDER_PREFIXES.kimi, otherwise prepend PROVIDER_PREFIXES.kimi; reference the
model variable and PROVIDER_PREFIXES.kimi in your change.