Merged
6 changes: 4 additions & 2 deletions .github/workflows/ci.yml
```diff
@@ -36,8 +36,10 @@ jobs:
           issue-numbers: "3 6"
           prompt-path: examples/AutoTriage.prompt
           enabled: false
-          model-fast: ''
-          model-pro: gemini-flash-latest # use flash for pro model and skip fast pass due to rate limits
+          model-fast: gemini-2.5-flash
+          model-fast-temperature: 0.0
+          model-pro: gemini-3-flash-preview
+          model-pro-temperature: 1.0
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
```
9 changes: 5 additions & 4 deletions README.md
```diff
@@ -5,7 +5,8 @@ Keep issues and pull requests moving: reads the latest context, drafts the next
 ## How it works
 
 - The run starts with a fast AI pass to gather signals, summarize the thread, and draft the intended operations.
-- A reviewing AI pass (default: `gemini-2.5-pro`) replays the plan and confirms labels, comments, etc, before anything is written.
+- A reviewing AI pass (default: `gemini-3-flash-preview`) replays the plan and confirms labels, comments, etc, before anything is written.
+- Defaults use the free-tier models (`gemini-2.5-flash` + `gemini-3-flash-preview`) rather than `gemini-3-pro`.
 - The full thought process along with all actions can be inspected in the workflow artifacts.
 - It will keep going until it runs out of issues or tokens, or reaches the specified limit.
 
```
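The two-pass flow in this README hunk can be pictured with a short sketch. Everything here is illustrative: `Plan`, `triageAll`, and the pass callbacks are hypothetical names, not the action's real API.

```typescript
// Hypothetical shape of a drafted triage plan.
interface Plan {
  needsReview: boolean;   // fast pass flags uncertain plans for review
  operations: string[];   // labels, comments, etc. to apply
}

// Sketch of the fast pass + review pass loop with the documented caps.
function triageAll(
  items: string[],
  fastPass: (item: string) => Plan,   // e.g. gemini-2.5-flash
  reviewPass: (plan: Plan) => Plan,   // e.g. gemini-3-flash-preview
  maxFastRuns: number,                // cap on fast-pass analyses per run
  maxTriages: number                  // cap on escalations to review per run
): Plan[] {
  const confirmed: Plan[] = [];
  let reviews = 0;
  for (const item of items.slice(0, maxFastRuns)) {
    const plan = fastPass(item);
    if (plan.needsReview) {
      if (reviews < maxTriages) {
        reviews++;
        confirmed.push(reviewPass(plan)); // review pass replays the plan
      }
      // over the cap: leave the item for a future run
    } else {
      confirmed.push(plan);
    }
  }
  return confirmed;
}
```

Only plans that come back from the review pass (or never needed one) are acted on, which matches the "nothing is written until confirmed" behaviour described above.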
```diff
@@ -43,10 +44,10 @@ jobs:
 | `readme-path` | Extra Markdown context uploaded to the AI prompt. | `README.md` |
 | `enabled` | `"true"` applies changes, `"false"` logs the plan only. | `"true"` |
 | `db-path` | Persist per-item history between runs. | - |
-| `model-fast` | Fast analysis model for the first pass. Leave blank to skip. | bundled fast model |
-| `model-pro` | Review model that double-checks uncertain plans. | bundled review model |
+| `model-fast` | Fast analysis model for the first pass. Leave blank to skip. | `gemini-2.5-flash` |
+| `model-pro` | Review model that double-checks uncertain plans. | `gemini-3-flash-preview` |
 | `model-fast-temperature` | Sampling temperature for fast model (`0` deterministic -> `2` exploratory). | `0.0` |
-| `model-pro-temperature` | Sampling temperature for pro model (`0` deterministic -> `2` exploratory). Gemini 3.0 recommends `1.0`. | `1.0` |
+| `model-pro-temperature` | Sampling temperature for pro model (`0` deterministic -> `2` exploratory). Gemini 3 recommends `1.0`. | `1.0` |
 | `max-timeline-events` | Maximum recent timeline events included in the prompt. | `40` |
 | `max-triages` | Cap on items that escalate to the review pass per run. | `20` |
 | `max-fast-runs` | Cap on items analyzed with the fast model per run. | `100` |
```
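Wired into a workflow, the inputs from this table might look like the sketch below. The `uses:` reference is hypothetical (the action's real path isn't shown in this diff), and the values are just the documented defaults:

```yaml
# Hypothetical usage sketch; the action reference is illustrative.
jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - uses: your-org/auto-triage@v1   # hypothetical reference
        with:
          enabled: "false"              # log the plan only; "true" applies changes
          model-fast: gemini-2.5-flash
          model-pro: gemini-3-flash-preview
          model-pro-temperature: 1.0
          max-triages: 20
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
```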
8 changes: 4 additions & 4 deletions action.yml
```diff
@@ -30,19 +30,19 @@ inputs:
   model-fast:
     description: "Identifier of the fast analysis model used for the first pass (default shown below). Leave blank to skip the fast pass."
     required: false
-    default: "gemini-flash-latest"
+    default: "gemini-2.5-flash"
   model-pro:
     description: "Identifier of the deeper review model used when the fast pass asks for confirmation."
     required: false
-    default: "gemini-2.5-pro"
+    default: "gemini-3-flash-preview"
   model-fast-temperature:
     description: "Sampling temperature for the fast model (0-2); lower keeps responses deterministic while higher explores alternatives."
     required: false
     default: "0.0"
   model-pro-temperature:
-    description: "Sampling temperature for the pro model (0-2); Gemini 3.0 recommends 1.0 for optimal performance."
+    description: "Sampling temperature for the pro model (0-2); Gemini 3 recommends 1.0 for optimal performance."
     required: false
-    default: "0.0"
+    default: "1.0"
   max-timeline-events:
     description: "Maximum number of recent timeline events injected into the prompt context."
     required: false
```
2 changes: 1 addition & 1 deletion dist/index.js (large diff; not rendered)

2 changes: 1 addition & 1 deletion dist/index.js.map (large diff; not rendered)
8 changes: 4 additions & 4 deletions src/env.ts
```diff
@@ -42,17 +42,17 @@ export function getConfig(): Config {
   const readmePath = core.getInput('readme-path') || 'README.md';
   const dbPath = core.getInput('db-path');
   const modelFastInput = core.getInput('model-fast');
-  const modelFast = modelFastInput || 'gemini-flash-latest';
+  const modelFast = modelFastInput || 'gemini-2.5-flash';
   const skipFastPass = modelFastInput === '';
-  const modelPro = core.getInput('model-pro') || 'gemini-2.5-pro';
+  const modelPro = core.getInput('model-pro') || 'gemini-3-flash-preview';
   const fastTemperatureInput = core.getInput('model-fast-temperature');
   const parsedFastTemperature = Number(
-    fastTemperatureInput === undefined || fastTemperatureInput === '' ? '0' : fastTemperatureInput
+    fastTemperatureInput === undefined || fastTemperatureInput === '' ? '0.0' : fastTemperatureInput
   );
   const modelFastTemperature = Number.isFinite(parsedFastTemperature) ? parsedFastTemperature : 0;
   const proTemperatureInput = core.getInput('model-pro-temperature');
   const parsedProTemperature = Number(
-    proTemperatureInput === undefined || proTemperatureInput === '' ? '0' : proTemperatureInput
+    proTemperatureInput === undefined || proTemperatureInput === '' ? '1.0' : proTemperatureInput
   );
   const modelProTemperature = Number.isFinite(parsedProTemperature) ? parsedProTemperature : 0;
   const thinkingBudget = -1;
```
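The two near-identical parsing blocks in this hunk could be collapsed into one helper. A sketch under assumptions: `parseTemperature` is a hypothetical name, and it reuses the supplied default on non-numeric input, whereas the code above falls back to `0` in that case.

```typescript
// Hypothetical helper; not part of src/env.ts.
function parseTemperature(input: string | undefined, fallback: number): number {
  // Treat missing/blank input as the default, mirroring the ternaries above.
  const parsed = Number(input === undefined || input === '' ? fallback : input);
  // Unlike the original (which falls back to 0), reuse the default on NaN.
  return Number.isFinite(parsed) ? parsed : fallback;
}
```

With this shape, each temperature becomes a one-liner, e.g. `parseTemperature(core.getInput('model-pro-temperature'), 1.0)`.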
2 changes: 1 addition & 1 deletion src/gemini.ts
```diff
@@ -51,7 +51,7 @@ export class GeminiClient {
 
   private async parseJson<T>(response: GenerateContentResponse): Promise<{ data: T; thoughts: string; inputTokens: number; outputTokens: number }> {
     // Manually extract text from parts to avoid warnings about non-text parts (e.g., thoughtSignature)
-    // when using Gemini 3.0 models with thinking enabled
+    // when using Gemini 3 models with thinking enabled
     const thoughts: string[] = [];
     const textParts: string[] = [];
 
```
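The manual extraction that comment describes can be pictured with a standalone sketch. The `Part` shape here is a simplified assumption, not the SDK's real `GenerateContentResponse` type:

```typescript
// Simplified, assumed part shape; the real SDK type carries more fields.
interface Part {
  text?: string;
  thought?: boolean;          // true for model reasoning parts
  thoughtSignature?: string;  // non-text field that would trigger warnings
}

// Split reasoning ("thought") parts from answer text, skipping parts
// that carry no text at all (e.g. signature-only parts).
function splitParts(parts: Part[]): { thoughts: string; text: string } {
  const thoughts: string[] = [];
  const textParts: string[] = [];
  for (const part of parts) {
    if (part.text === undefined) continue;
    if (part.thought) thoughts.push(part.text);
    else textParts.push(part.text);
  }
  return { thoughts: thoughts.join('\n'), text: textParts.join('') };
}
```

Joining only the non-thought text parts yields the JSON payload to parse, while the thought parts can be logged to the workflow artifacts separately.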
2 changes: 1 addition & 1 deletion tests/geminiSdk.test.ts
```diff
@@ -3,7 +3,7 @@ import { GeminiClient, buildJsonPayload } from '../src/gemini';
 
 describe('Gemini (real API)', () => {
   const apiKey = process.env.GEMINI_API_KEY;
-  const model = 'gemini-flash-latest';
+  const model = 'gemini-2.5-flash';
 
   if (!apiKey) {
     console.warn('GEMINI_API_KEY environment variable must be set to run Gemini tests. Skipping Gemini tests.');
```