
Conversation


@mviner000 mviner000 commented Jan 31, 2026

Add a new AI provider for Moonshot AI's Kimi models via the OpenRouter API.

Backend:

  • Create KimiProvider class using OpenAI SDK (see the sketch below)
  • Support kimi-k2.5, kimi-k1.5, kimi-k1.5-long models
  • Register provider in factory with priority 4

Frontend:

  • Add Kimi settings tab with API key management
  • Add to provider tabs with OpenRouter icon

Types:

  • Add 'kimi' to ModelProvider and Credentials types
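A minimal sketch of the backend wiring described above, assuming the OpenAI SDK is simply pointed at OpenRouter's OpenAI-compatible endpoint (the env-var name and prompt are illustrative, not the PR's actual code):

```ts
import OpenAI from 'openai';

// Sketch only: OpenRouter exposes an OpenAI-compatible API, so the provider
// can reuse the OpenAI SDK with a custom base URL.
const client = new OpenAI({
  apiKey: process.env.OPENROUTER_API_KEY, // assumption: the PR reads this from saved credentials
  baseURL: 'https://openrouter.ai/api/v1',
});

// kimi-k2.5 maps to OpenRouter's moonshotai/kimi-k2.5 model ID.
const stream = await client.chat.completions.create({
  model: 'moonshotai/kimi-k2.5',
  messages: [{ role: 'user', content: 'Hello, Kimi!' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
```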

Summary by CodeRabbit

Release Notes

  • New Features
    • Added support for Kimi AI models via OpenRouter integration
    • New Kimi settings tab for configuring and managing OpenRouter API credentials
    • API key validation with connection testing functionality
    • Model selection dropdown supporting Kimi K2.5, K1.5, and K1.5 Long variants
    • Secure API key storage and management with visibility toggle


coderabbitai bot commented Jan 31, 2026

📝 Walkthrough

The changes add Kimi model provider support via OpenRouter integration, introducing a new KimiProvider backend implementation, OpenRouter API credential handling, a UI configuration tab, and model-detection utilities across the type system.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Dependency Management**<br>`apps/server/package.json` | Added the `openai ^6.17.0` dependency to provide an OpenAI-compatible API client for the OpenRouter integration. |
| **Backend Provider Implementation**<br>`apps/server/src/providers/kimi-provider.ts` | New `KimiProvider` class implementing streaming chat completions against Kimi models via OpenRouter, including model resolution, API key retrieval, message construction, tool call handling, and error management. |
| **Provider Registration**<br>`apps/server/src/providers/provider-factory.ts` | Integrated `KimiProvider` into the provider factory with aliases (`moonshot`, `openrouter-kimi`) and priority 4; added Kimi to the disconnected-markers map. |
| **Credentials & Settings**<br>`apps/server/src/services/settings-service.ts`, `libs/types/src/settings.ts` | Extended the `Credentials` type and settings logic to support an `openrouter` API key field, including merge/migration paths for credentials storage. |
| **Type System & Model Detection**<br>`libs/types/src/provider-utils.ts`, `libs/types/src/index.ts` | Added an `isKimiModel` detection function and `kimi-` provider prefix support; extended model provider resolution and normalization to recognize Kimi and `moonshotai/` formats (see the sketch below this table). |
| **Frontend Type Definitions**<br>`apps/ui/src/components/views/settings-view/shared/types.ts`, `apps/ui/src/store/app-store.ts`, `libs/types/src/settings.ts` | Extended the `ApiKeys` interface and `ModelProvider` type union to include `'kimi'` and an `'openrouter'` field; initialized the `openrouter` key in app state. |
| **Frontend UI Components**<br>`apps/ui/src/components/views/settings-view/providers/kimi-settings-tab.tsx`, `apps/ui/src/components/views/settings-view/providers/provider-tabs.tsx` | Created a `KimiSettingsTab` component with API key input, format validation, connection testing, model selection, and persistence; integrated the new tab into `ProviderTabs` navigation. |
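Since the exact implementation isn't shown on this page, here is a hedged sketch of the detection helper summarized in the table (the `moonshotai/` check is an assumption based on the normalization row):

```ts
// Sketch of isKimiModel, assumed from the walkthrough summary; not the PR's actual code.
const KIMI_PREFIX = 'kimi-';

export function isKimiModel(model: string): boolean {
  // Recognize both the internal "kimi-" prefix and OpenRouter's "moonshotai/" IDs.
  return model.startsWith(KIMI_PREFIX) || model.startsWith('moonshotai/');
}
```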

Sequence Diagram

sequenceDiagram
    participant User
    participant UI as KimiSettingsTab
    participant Store as AppStore
    participant Service as SettingsService
    participant Provider as KimiProvider
    participant API as OpenRouter API

    User->>UI: Enter API key & select model
    UI->>UI: Validate key format
    User->>UI: Test connection
    UI->>Service: Test API with credentials
    Service->>Provider: Create provider with API key
    Provider->>API: POST /chat/completions (test)
    API-->>Provider: Response
    Provider-->>Service: Success/Error
    Service-->>UI: Connection status
    UI->>User: Display test result

    User->>UI: Save configuration
    UI->>Store: Update apiKeys.openrouter
    Store->>Service: Save credentials
    Service->>Service: Merge openrouter into apiKeys
    Service-->>Store: Persisted

    User->>UI: Execute query with Kimi
    UI->>Provider: Execute with selected model
    Provider->>API: Stream chat completions
    API-->>Provider: Streamed content & tool calls
    Provider->>Provider: Convert to ContentBlock format
    Provider-->>UI: Yield incremental updates
    UI->>User: Display streaming response

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested labels

Enhancement

Suggested reviewers

  • Shironex

Poem

🐰 A Kimi arrives via OpenRouter's door,
With streaming and tools, there's so much more!
From settings to types, the pieces align,
Provider and UI in harmony shine!
Hop-hop, review this integration so fine! 🌟

🚥 Pre-merge checks | ✅ 3 passed

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title Check | ✅ Passed | The title accurately describes the main change: adding a new Kimi provider integrated via the OpenRouter API, which is the central focus of all file modifications. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 83.33%, which meets the required threshold of 80.00%. |


@gemini-code-assist gemini-code-assist bot commented

Summary of Changes

Hello @mviner000, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new AI provider for Moonshot AI's Kimi models, accessible via the OpenRouter API. The changes encompass both backend and frontend components, enabling seamless integration and user management of Kimi models. Users can now configure their OpenRouter API key and select preferred Kimi models directly within the application settings.

Highlights

  • New Kimi AI Provider: Introduced support for Moonshot AI's Kimi models (kimi-k2.5, kimi-k1.5, kimi-k1.5-long) through the OpenRouter API.
  • Backend Integration: Implemented a KimiProvider class using the OpenAI SDK, handling model mapping, message processing, tool calls, and error management.
  • Frontend User Interface: Added a dedicated Kimi settings tab for OpenRouter API key configuration (save, test, delete) and default model selection.
  • System-wide Type and Configuration Updates: Extended the ModelProvider type to include 'kimi' and the Credentials type to include an 'openrouter' API key, and updated the provider factory for Kimi model recognition and registration (sketched below).
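A rough sketch of those type-level changes (union members other than 'kimi' and fields other than 'openrouter' are assumptions):

```ts
// Sketch only: the shape of the extensions described above, not the PR's exact code.
type ModelProvider = 'anthropic' | 'openai' | 'cursor' | 'kimi'; // 'kimi' newly added

interface Credentials {
  anthropic?: string;
  openrouter?: string; // newly added; consumed by the Kimi provider
}
```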



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new AI provider for Moonshot AI's Kimi models, accessed via the OpenRouter API. The changes are comprehensive, spanning the backend, frontend, and shared types. A new KimiProvider is created on the backend using the openai SDK, and it's registered in the provider factory. The frontend is updated with a new settings tab for Kimi, including API key management and model selection. Necessary type definitions for credentials and model providers have been updated accordingly.

The implementation is solid, but I've identified a few areas for improvement in the new KimiProvider, mainly concerning robustness in handling tool calls and some opportunities for code deduplication to enhance maintainability.

Comment on lines +87 to +105
function convertToolCalls(
  toolCalls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[]
): ContentBlock[] {
  return toolCalls
    .filter((tc) => tc.type === 'function')
    .map((tc) => {
      // Type guard: only function tool calls have the function property
      const funcCall = tc as OpenAI.Chat.Completions.ChatCompletionMessageToolCall & {
        type: 'function';
        function: { name: string; arguments: string };
      };
      return {
        type: 'tool_use' as const,
        tool_use_id: funcCall.id,
        name: funcCall.function.name,
        input: JSON.parse(funcCall.function.arguments || '{}'),
      };
    });
}

high

The convertToolCalls function can be improved for robustness and clarity.

  1. The type assertion on line 94 is unnecessary. The preceding .filter((tc) => tc.type === 'function') should already provide sufficient type narrowing for TypeScript to understand that tc is a function tool call. You can directly access tc.id and tc.function.
  2. The JSON.parse on line 102 is unsafe and can crash the process if the model returns invalid JSON in the arguments field. It's recommended to wrap it in a try...catch block to handle potential parsing errors gracefully.

Here's a suggested implementation that addresses both points:

function convertToolCalls(
  toolCalls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[]
): ContentBlock[] {
  return toolCalls
    .filter((tc) => tc.type === 'function')
    .map((tc) => {
      let input = {};
      try {
        input = JSON.parse(tc.function.arguments || '{}');
      } catch (e) {
        logger.warn('Failed to parse tool call arguments as JSON', {
          tool_call_id: tc.id,
          arguments: tc.function.arguments,
          error: e,
        });
      }

      return {
        type: 'tool_use' as const,
        tool_use_id: tc.id,
        name: tc.function.name,
        input,
      };
    });
}

Comment on lines +195 to +198
const textContent = msg.content
  .filter((block) => block.type === 'text' && block.text)
  .map((block) => block.text)
  .join('\n');

medium

This logic to extract text content from an array of content blocks is duplicated on lines 214-217. To improve maintainability and reduce code duplication, consider extracting this into a small helper function.

For example, you could create a function like this at the file level:

function getTextFromContentBlocks(blocks: readonly ContentBlock[]): string {
  return blocks
    .filter((block): block is { type: 'text'; text: string } => block.type === 'text' && !!block.text)
    .map((block) => block.text)
    .join('\n');
}

You could then use this helper function here and on line 214 to process both msg.content and the prompt.

                  );
                }
                currentToolCall = {
                  id: tc.id || `call_${Date.now()}_${tc.index}`,

medium

Using Date.now() to generate a fallback ID for tool calls is not guaranteed to be unique, especially if multiple tool calls are processed in the same millisecond. This could lead to issues with tracking tool calls correctly.

Since you're on Node.js >= 22, you can use the built-in crypto.randomUUID() for a more robust and truly unique identifier.

First, add this import at the top of the file:

import { randomUUID } from 'crypto';

Then, update this line to use it. The call_ prefix and index are not necessary with a UUID.

                  id: tc.id || randomUUID(),

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
libs/types/src/settings.ts (1)

1052-1065: ⚠️ Potential issue | 🟠 Major

Bump the credentials schema version after adding openrouter.
Without a version bump, migrations/backfills may not run and older credentials may miss the new key.

🔧 Suggested update
-export const CREDENTIALS_VERSION = 1;
+export const CREDENTIALS_VERSION = 2;
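For illustration, a version-gated backfill might look like this (a sketch only; the actual migration helpers in settings-service are not shown on this page):

```ts
// Sketch: backfill stored credentials that predate the new schema version.
function migrateCredentials(stored: { version: number; apiKeys: Record<string, string> }) {
  if (stored.version < CREDENTIALS_VERSION) {
    stored.apiKeys.openrouter ??= ''; // ensure the new key exists after upgrade
    stored.version = CREDENTIALS_VERSION;
  }
  return stored;
}
```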
apps/server/src/services/settings-service.ts (1)

713-738: ⚠️ Potential issue | 🟡 Minor

Add OpenRouter to masked credentials return type and payload.

getMaskedCredentials() omits OpenRouter despite it being a fully supported credentials provider (used by Kimi). Frontend API consumers won't see the OpenRouter configuration status. Extend the return type and add the openrouter field to the return payload.
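As a sketch of what that extension could look like (the real return shape of getMaskedCredentials() is not shown here, so the mask helper and field list are assumptions):

```ts
// Sketch only: report OpenRouter alongside the other credential providers.
function maskKey(key?: string): string | null {
  return key ? `${key.slice(0, 4)}...${key.slice(-4)}` : null;
}

function getMaskedCredentials(apiKeys: { anthropic?: string; openrouter?: string }) {
  return {
    anthropic: maskKey(apiKeys.anthropic),
    openrouter: maskKey(apiKeys.openrouter), // newly included for Kimi
  };
}
```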

🧹 Nitpick comments (1)
apps/server/src/providers/kimi-provider.ts (1)

141-176: Normalize model aliases before resolving OpenRouter IDs.
Use resolveModelString() before model resolution to comply with the alias-normalization requirement.

♻️ Suggested change
-import { validateBareModelId, type Credentials } from '@automaker/types';
+import { validateBareModelId, type Credentials } from '@automaker/types';
+import { resolveModelString } from '@automaker/model-resolver';
@@
-    const openRouterModel = getOpenRouterModelId(model);
+    const resolvedModel = resolveModelString(model);
+    const openRouterModel = getOpenRouterModelId(resolvedModel);

As per coding guidelines, use resolveModelString() from @automaker/model-resolver to convert model aliases (haiku, sonnet, opus) to full model names before making API calls.

Comment on lines +87 to +104
function convertToolCalls(
  toolCalls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[]
): ContentBlock[] {
  return toolCalls
    .filter((tc) => tc.type === 'function')
    .map((tc) => {
      // Type guard: only function tool calls have the function property
      const funcCall = tc as OpenAI.Chat.Completions.ChatCompletionMessageToolCall & {
        type: 'function';
        function: { name: string; arguments: string };
      };
      return {
        type: 'tool_use' as const,
        tool_use_id: funcCall.id,
        name: funcCall.function.name,
        input: JSON.parse(funcCall.function.arguments || '{}'),
      };
    });

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

OpenAI SDK v6 streaming tool_calls function arguments partial JSON streaming behavior

💡 Result:

In OpenAI SDK v6 streaming, tool_calls[].function.arguments is streamed as a string in incremental fragments (often not valid JSON until the end)—so you should accumulate deltas and only JSON.parse() once the “done/final” event arrives. Also, even the final string can be invalid or contain extra fields, so validate before executing the tool call. [1][2][3]

What you’ll see while streaming

  • Chat Completions streaming (SDK v6 event handlers):
    • tool_calls.function.arguments.delta → emits partial argument text
    • tool_calls.function.arguments.done → emits the complete arguments string [1]
  • Responses API streaming (event stream):
    • response.function_call_arguments.delta → partial JSON-string fragment
    • response.function_call_arguments.done → final JSON string for arguments [2]

Correct handling pattern

  1. Key by tool call index / item_id.
  2. Append each delta fragment in order.
  3. Parse only when you receive the corresponding *.done.

Why parsing early breaks

The model may stream something like:

  • delta #1: { "city": "San
  • delta #2: Francisco", "unit": "c" }
    Only after concatenation is it parseable JSON. [2][3]

Sources: SDK v6 streaming event names for tool-call argument deltas/done [1]; Responses streaming response.function_call_arguments.delta/done semantics [2]; arguments is a JSON-formatted string that may be invalid and must be validated [3].
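A self-contained sketch of that accumulate-then-parse pattern against the Chat Completions streaming API (the tool definition and model are placeholders):

```ts
import OpenAI from 'openai';

// Sketch: buffer tool-call deltas keyed by index; parse arguments only at stream end.
async function collectToolCalls(client: OpenAI, model: string) {
  const stream = await client.chat.completions.create({
    model,
    messages: [{ role: 'user', content: 'What is the weather in SF?' }],
    tools: [
      {
        type: 'function',
        function: {
          name: 'get_weather',
          parameters: { type: 'object', properties: { city: { type: 'string' } } },
        },
      },
    ],
    stream: true,
  });

  const buffers = new Map<number, { id?: string; name: string; args: string }>();
  for await (const chunk of stream) {
    for (const tc of chunk.choices[0]?.delta?.tool_calls ?? []) {
      const entry = buffers.get(tc.index) ?? { name: '', args: '' };
      if (tc.id) entry.id = tc.id;
      if (tc.function?.name) entry.name += tc.function.name;
      if (tc.function?.arguments) entry.args += tc.function.arguments;
      buffers.set(tc.index, entry);
    }
  }

  // Parse once per tool call, defensively: even the final string may be invalid.
  return [...buffers.values()].map((b) => {
    try {
      return { ...b, input: JSON.parse(b.args || '{}') as Record<string, unknown> };
    } catch {
      return { ...b, input: {} };
    }
  });
}
```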


🏁 Script executed:

# Search for how kimi-provider is used in the codebase
rg "kimi-provider\|convertToolCalls" --type ts --type tsx -B 2 -A 5



🏁 Script executed:

# Check if there are existing error handling patterns for JSON parsing in the codebase
rg "JSON\.parse" apps/server/src --type ts --type tsx -B 2 -A 2 | head -50



🏁 Script executed:

# Look at the full kimi-provider file to understand context
wc -l apps/server/src/providers/kimi-provider.ts



🏁 Script executed:

cat -n apps/server/src/providers/kimi-provider.ts | head -120



🏁 Script executed:

cat -n apps/server/src/providers/kimi-provider.ts | tail -350



🏁 Script executed:

# Find where convertToolCalls is used
rg "convertToolCalls" --type ts -B 3 -A 3



Add defensive JSON parsing for tool-call arguments.

The accumulated tool arguments string may be invalid or malformed even after streaming completes, causing JSON.parse() to throw and abort processing. Wrap the parsing in a try-catch with a safe fallback:

Suggested fix
+function safeJsonParse(value: string | undefined): Record<string, unknown> {
+  if (!value) return {};
+  try {
+    return JSON.parse(value) as Record<string, unknown>;
+  } catch {
+    return { _raw: value };
+  }
+}
+
 function convertToolCalls(
   toolCalls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[]
 ): ContentBlock[] {
@@
       return {
         type: 'tool_use' as const,
         tool_use_id: funcCall.id,
         name: funcCall.function.name,
-        input: JSON.parse(funcCall.function.arguments || '{}'),
+        input: safeJsonParse(funcCall.function.arguments),
       };
     });
 }
🤖 Prompt for AI Agents
In `@apps/server/src/providers/kimi-provider.ts` around lines 87 - 104, In
convertToolCalls, guard against malformed JSON in funcCall.function.arguments by
wrapping JSON.parse(funcCall.function.arguments || '{}') in a try-catch inside
the map over toolCalls; on parse failure return a safe fallback (e.g., the raw
arguments string or an empty object) for the input field so convertToolCalls
(and the returned ContentBlock with type 'tool_use' and tool_use_id) never
throws and downstream processing continues.

Comment on lines +247 to +311
      // Accumulate the response
      let accumulatedContent = '';
      let accumulatedToolCalls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[] = [];
      let currentToolCall: Partial<OpenAI.Chat.Completions.ChatCompletionMessageToolCall> | null =
        null;

      for await (const chunk of stream) {
        const delta = chunk.choices[0]?.delta;

        if (!delta) continue;

        // Handle text content
        if (delta.content) {
          accumulatedContent += delta.content;

          // Yield incremental text updates
          yield {
            type: 'assistant',
            message: {
              role: 'assistant',
              content: [{ type: 'text', text: accumulatedContent }],
            },
          };
        }

        // Handle tool calls
        if (delta.tool_calls) {
          for (const tc of delta.tool_calls) {
            if (tc.index !== undefined) {
              // New tool call or continuation
              if (!currentToolCall || tc.id) {
                // Start new tool call
                if (currentToolCall && currentToolCall.id) {
                  accumulatedToolCalls.push(
                    currentToolCall as OpenAI.Chat.Completions.ChatCompletionMessageToolCall
                  );
                }
                currentToolCall = {
                  id: tc.id || `call_${Date.now()}_${tc.index}`,
                  type: 'function',
                  function: {
                    name: tc.function?.name || '',
                    arguments: tc.function?.arguments || '',
                  },
                };
              } else if (currentToolCall.function) {
                // Append to existing tool call
                if (tc.function?.name) {
                  currentToolCall.function.name += tc.function.name;
                }
                if (tc.function?.arguments) {
                  currentToolCall.function.arguments += tc.function.arguments;
                }
              }
            }
          }
        }
      }

      // Finalize any pending tool call
      if (currentToolCall && currentToolCall.id && currentToolCall.function) {
        accumulatedToolCalls.push(
          currentToolCall as OpenAI.Chat.Completions.ChatCompletionMessageToolCall
        );
      }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find apps/server/src/providers -name "kimi-provider.ts" -type f



🏁 Script executed:

cat -n apps/server/src/providers/kimi-provider.ts | head -350 | tail -110



🌐 Web query:

OpenAI SDK v6 chat completions streaming tool_calls interleaved multiple indices behavior

💡 Result:

In the Chat Completions streaming response, tool calls arrive on choices[i].delta.tool_calls as an array of partial (“delta”) tool-call objects, and each tool call is identified by tool_calls[j].index (plus id, type, and function{name,arguments} when present). (platform.openai.com)

What “multiple indices” / “interleaved” means

  • When the model decides to call multiple tools in one assistant turn, you’ll see multiple distinct tool_calls[*].index values (typically 0, 1, 2, ...). (platform.openai.com)
  • In streaming, the model can emit argument text in many chunks, and those chunks can be interleaved across indices (e.g., you might receive some arguments for index 0, then some for index 1, then back to index 0). The SDK’s streaming helpers are designed around this reality and treat index as the stable key for routing/accumulating deltas. (deepwiki.com)
  • Practically: you must accumulate per tool-call index, concatenating function.arguments deltas until complete. (deepwiki.com)

What fields repeat (and what often doesn’t)

  • The API schema defines tool_calls[*] entries with index, id, type, and function{name, arguments}. (platform.openai.com)
  • In real streams, some fields may only appear in the first delta for a given index, while function.arguments tends to be delivered incrementally—so your accumulator should “fill once, then append arguments”. (This is the motivation behind SDK helper eventing like incremental tool-argument delta events keyed by index.) (deepwiki.com)

Why the index is important (and required by SDK helpers)

If an OpenAI-compatible provider omits choices[].delta.tool_calls[].index, it breaks compatibility with the OpenAI JS helper client.beta.chat.completions.stream, because the helper relies on index to merge tool-call deltas correctly—especially when multiple tool calls are present and/or interleaved. (github.com)

Minimal accumulator pattern (what you should implement)

Maintain a map keyed by tool_call.index, and for each chunk:

  • for each deltaToolCall in chunk.choices[0].delta.tool_calls || []:
    • initialize stored tool call object for that index if needed (copy id, type, function.name when present)
    • append deltaToolCall.function.arguments (string-concat) onto the stored function.arguments buffer

This matches the API contract (deltas + index) and the SDK’s intended merging strategy for streamed tool calls. (platform.openai.com)



Handle concurrent streaming tool calls by index.
When OpenAI streams multiple tool calls, they can be interleaved by index (e.g., arguments for index 0, then index 1, then back to index 0). The single currentToolCall variable will lose or merge calls when indices arrive out of sequence.

Use a Map<number, ToolCall> keyed by tool_calls[].index to accumulate each tool call independently, then convert to an array when finalized:

🔧 Fix: accumulation by index
-      let accumulatedToolCalls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[] = [];
-      let currentToolCall: Partial<OpenAI.Chat.Completions.ChatCompletionMessageToolCall> | null =
-        null;
+      const toolCallMap = new Map<
+        number,
+        Partial<OpenAI.Chat.Completions.ChatCompletionMessageToolCall>
+      >();
@@
-        if (delta.tool_calls) {
-          for (const tc of delta.tool_calls) {
-            if (tc.index !== undefined) {
-              // New tool call or continuation
-              if (!currentToolCall || tc.id) {
-                // Start new tool call
-                if (currentToolCall && currentToolCall.id) {
-                  accumulatedToolCalls.push(
-                    currentToolCall as OpenAI.Chat.Completions.ChatCompletionMessageToolCall
-                  );
-                }
-                currentToolCall = {
-                  id: tc.id || `call_${Date.now()}_${tc.index}`,
-                  type: 'function',
-                  function: {
-                    name: tc.function?.name || '',
-                    arguments: tc.function?.arguments || '',
-                  },
-                };
-              } else if (currentToolCall.function) {
-                // Append to existing tool call
-                if (tc.function?.name) {
-                  currentToolCall.function.name += tc.function.name;
-                }
-                if (tc.function?.arguments) {
-                  currentToolCall.function.arguments += tc.function.arguments;
-                }
-              }
-            }
-          }
-        }
+        if (delta.tool_calls) {
+          for (const tc of delta.tool_calls) {
+            if (tc.index === undefined) continue;
+            const existing =
+              toolCallMap.get(tc.index) ?? {
+                id: tc.id || `call_${Date.now()}_${tc.index}`,
+                type: 'function',
+                function: { name: '', arguments: '' },
+              };
+            if (tc.function?.name) existing.function!.name += tc.function.name;
+            if (tc.function?.arguments) existing.function!.arguments += tc.function.arguments;
+            toolCallMap.set(tc.index, existing);
+          }
+        }
@@
-      if (currentToolCall && currentToolCall.id && currentToolCall.function) {
-        accumulatedToolCalls.push(
-          currentToolCall as OpenAI.Chat.Completions.ChatCompletionMessageToolCall
-        );
-      }
+      const accumulatedToolCalls =
+        Array.from(toolCallMap.values()) as OpenAI.Chat.Completions.ChatCompletionMessageToolCall[];
🤖 Prompt for AI Agents
In `@apps/server/src/providers/kimi-provider.ts` around lines 247 - 311, The
current streaming logic uses a single currentToolCall which causes interleaved
tool calls to be lost; replace that with a Map<number, Partial<...>> (keyed by
each delta.tool_calls[].index) to accumulate tool call fragments per index: in
the loop over stream/delta.tool_calls, look up or create the map entry for
tc.index, append name/arguments into that entry, and when a tool call signals
completion (tc.id present or at stream end) move the finished entry into
accumulatedToolCalls (preserving the original id/type/function shape); update
references to currentToolCall to use the map lookup and ensure the finalization
block flushes any remaining map entries into accumulatedToolCalls.

Comment on lines +50 to +52
// Model selection state
const [selectedModel, setSelectedModel] = useState<KimiModelId>('kimi-k2.5');


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, explore the settings provider structure
fd -e tsx 'settings-tab' apps/ui/src/components/views/settings-view/providers/ | head -20



🏁 Script executed:

# Get the content of the kimi-settings-tab.tsx file to see the full context
wc -l apps/ui/src/components/views/settings-view/providers/kimi-settings-tab.tsx



🏁 Script executed:

# Read the kimi-settings-tab.tsx file
cat -n apps/ui/src/components/views/settings-view/providers/kimi-settings-tab.tsx



🏁 Script executed:

# Search for store patterns and model persistence in the store
rg -n "Model|default" apps/ui/src/store --type ts --type tsx | head -40



🏁 Script executed:

# Check other provider tabs to see how they handle model selection
fd -e tsx 'settings-tab' apps/ui/src/components/views/settings-view/providers/ -x basename {} \; | sort



🏁 Script executed:

# Check claude-settings-tab to see how it handles model persistence
cat -n apps/ui/src/components/views/settings-view/providers/claude-settings-tab.tsx | head -100



🏁 Script executed:

# Search for model persistence patterns in other settings tabs
rg -n "selectedModel|Model.*useState|setApiKeys.*model" apps/ui/src/components/views/settings-view/providers/



🏁 Script executed:

# Check the app-store structure to see what provider settings are persisted
cat apps/ui/src/store/app-store.ts | head -150



🏁 Script executed:

# Check how Cursor/Codex models are handled in app-store
rg -n "CursorModelId|CodexModelId|OpencodeModelId|PhaseModelConfig" apps/ui/src/store/app-store.ts | head -20



🏁 Script executed:

# Check other provider tabs (cursor, codex, opencode) for model selection patterns
fd -e tsx 'settings-tab' apps/ui/src/components/views/settings-view/providers -x rg -l "useState.*Model|selectedModel"



🏁 Script executed:

# Check if Kimi model should be in PhaseModelConfig or DEFAULT_PHASE_MODELS
rg -n "kimi|Kimi" apps/ui/src/store/app-store.ts



🏁 Script executed:

# Examine cursor-settings-tab to see how it manages model selection
cat -n apps/ui/src/components/views/settings-view/providers/cursor-settings-tab.tsx | head -120



🏁 Script executed:

# Verify there's no Kimi model persistence in the store or settings sync
rg -n "kimi|Kimi" apps/ui/src/store/



🏁 Script executed:

# Check use-settings-sync.ts to see if it handles Kimi models
cat -n apps/ui/src/hooks/use-settings-sync.ts | head -100



🏁 Script executed:

# Check if there's any API integration or settings persistence for Kimi model
rg -n "kimiDefaultModel|kimi.*model|selectedModel.*save|selectedModel.*sync" apps/ui/src --type ts --type tsx



Model selection isn't persisted, so chosen defaults reset on navigation/reload.

selectedModel is local-only state (line 51) and never synced to the store. Compare with cursor-settings-tab.tsx, which uses cursorDefaultModel from the store and calls setCursorDefaultModel() to persist changes. Wire Kimi model selection into the same persistence mechanism: add kimiDefaultModel to app-store, include it in SETTINGS_FIELDS_TO_SYNC in use-settings-sync.ts, and call the corresponding setter action on model change.
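A sketch of that wiring, assuming a Zustand-style app store (the store library and slice shape are assumptions; the names mirror the cursorDefaultModel pattern the comment points to):

```ts
import { create } from 'zustand';

type KimiModelId = 'kimi-k2.5' | 'kimi-k1.5' | 'kimi-k1.5-long';

// Sketch only: a persisted default-model field mirroring cursorDefaultModel.
interface KimiSettingsSlice {
  kimiDefaultModel: KimiModelId;
  setKimiDefaultModel: (model: KimiModelId) => void;
}

export const useAppStore = create<KimiSettingsSlice>((set) => ({
  kimiDefaultModel: 'kimi-k2.5',
  setKimiDefaultModel: (model) => set({ kimiDefaultModel: model }),
}));

// In the tab, read the persisted value instead of local useState:
// const kimiDefaultModel = useAppStore((s) => s.kimiDefaultModel);
// ...and remember to add 'kimiDefaultModel' to SETTINGS_FIELDS_TO_SYNC.
```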

🤖 Prompt for AI Agents
In `@apps/ui/src/components/views/settings-view/providers/kimi-settings-tab.tsx`
around lines 50 - 52, selectedModel is only local state and isn’t persisted;
wire it into the app store by adding a kimiDefaultModel field and its setter,
include "kimiDefaultModel" in SETTINGS_FIELDS_TO_SYNC inside
use-settings-sync.ts, and replace/local-update selectedModel usage to call the
store setter on change (mirror how cursor-settings-tab.tsx uses
cursorDefaultModel and setCursorDefaultModel) so the chosen model is persisted
across navigation/reload; look for selectedModel/setSelectedModel in
kimi-settings-tab.tsx and add/dispatch the corresponding kimiDefaultModel and
setKimiDefaultModel store hooks instead of keeping local-only state.

Comment on lines +235 to +243
<Button
  type="button"
  variant="ghost"
  size="icon"
  className="absolute right-0 top-0 h-full px-3 text-muted-foreground hover:text-foreground hover:bg-transparent"
  onClick={() => setShowApiKey(!showApiKey)}
>
  {showApiKey ? <EyeOff className="w-4 h-4" /> : <Eye className="w-4 h-4" />}
</Button>

⚠️ Potential issue | 🟡 Minor

Add an accessible label to the visibility toggle.

The icon-only button lacks an accessible name. Add an aria-label (and optionally aria-pressed).

🔧 Suggested fix
                 <Button
                   type="button"
                   variant="ghost"
                   size="icon"
                   className="absolute right-0 top-0 h-full px-3 text-muted-foreground hover:text-foreground hover:bg-transparent"
                   onClick={() => setShowApiKey(!showApiKey)}
+                  aria-label={showApiKey ? 'Hide API key' : 'Show API key'}
+                  aria-pressed={showApiKey}
                 >
🤖 Prompt for AI Agents
In `@apps/ui/src/components/views/settings-view/providers/kimi-settings-tab.tsx`
around lines 235 - 243, The visibility toggle Button (rendered with Eye/EyeOff
and using showApiKey and setShowApiKey) lacks an accessible name; update the
Button component to include an aria-label that reflects the action (e.g.,
aria-label={showApiKey ? "Hide API key" : "Show API key"}) and optionally add
aria-pressed={showApiKey} so screen readers receive state; ensure you place
these attributes on the same Button that calls setShowApiKey to keep semantics
and interaction consistent.

Comment on lines +295 to +299
  // OpenRouter Kimi format: moonshotai/model -> kimi-model
  if (model.startsWith('moonshotai/')) {
    const kimiModel = model.replace('moonshotai/', '');
    return `${PROVIDER_PREFIXES.kimi}${kimiModel}`;
  }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

OpenRouter Moonshot AI Kimi model ID format moonshotai

💡 Result:

On OpenRouter, Moonshot AI’s Kimi models use the provider prefix moonshotai/ followed by the model slug:

  • Format: moonshotai/<model-id>

Examples (valid OpenRouter IDs):

  • moonshotai/kimi-k2 [1]
  • moonshotai/kimi-k2-thinking [2]
  • moonshotai/kimi-vl-a3b-instruct [3]
  • moonshotai/kimi-dev-72b [4]
  • moonshotai/kimi-linear-48b-a3b-instruct [5]

Usage: set your OpenRouter request’s "model" field to one of the IDs above (or another moonshotai/... listing in OpenRouter’s directory). [3]

Sources: [1] [2] [3] [4] [5]


🏁 Script executed:

# Search codebase for any references to moonshotai models or tests
rg "moonshotai" --type ts --type tsx --type js --type jsx -A 3 -B 3



🏁 Script executed:

# Look for test files or documentation about model normalization
fd -e test -e spec -e doc | head -20



🏁 Script executed:

# Check if there are any examples of how models are used in the codebase
rg "kimi-" --type ts --type tsx --type js --type jsx -A 2 -B 2



🏁 Script executed:

# Search for PROVIDER_PREFIXES definition and moonshotai references
rg "moonshotai|PROVIDER_PREFIXES" -A 3 -B 1



🏁 Script executed:

# Look for tests related to normalizeModelString or provider utils
rg "normalizeModelString|provider-utils" -l



🏁 Script executed:

# Check the actual definition of PROVIDER_PREFIXES.kimi
rg "PROVIDER_PREFIXES\s*=" -A 20



Fix double kimi- prefix when normalizing OpenRouter Moonshot AI model IDs.

OpenRouter's actual model IDs for Kimi are formatted as moonshotai/kimi-k2.5, moonshotai/kimi-k2-thinking, etc.—the model slug already includes the kimi- prefix. The current normalization strips the provider prefix and blindly prepends PROVIDER_PREFIXES.kimi, producing invalid IDs like kimi-kimi-k2.5.

Align with the guard logic used by other providers (cursor, codex, opencode) by checking whether the stripped model already carries the prefix:

🔧 Suggested fix
-  if (model.startsWith('moonshotai/')) {
-    const kimiModel = model.replace('moonshotai/', '');
-    return `${PROVIDER_PREFIXES.kimi}${kimiModel}`;
-  }
+  if (model.startsWith('moonshotai/')) {
+    const kimiModel = model.slice('moonshotai/'.length);
+    return kimiModel.startsWith(PROVIDER_PREFIXES.kimi)
+      ? kimiModel
+      : `${PROVIDER_PREFIXES.kimi}${kimiModel}`;
+  }
🤖 Prompt for AI Agents
In `@libs/types/src/provider-utils.ts` around lines 295 - 299, The normalization
for OpenRouter Moonshot AI models currently prepends PROVIDER_PREFIXES.kimi
unconditionally, causing double 'kimi-' (e.g., 'kimi-kimi-k2.5'); update the
block that handles model.startsWith('moonshotai/') to strip 'moonshotai/' into
kimiModel and then return kimiModel as-is if it already starts with
PROVIDER_PREFIXES.kimi, otherwise prepend PROVIDER_PREFIXES.kimi; reference the
model variable and PROVIDER_PREFIXES.kimi in your change.
