
Conversation


@zh30 zh30 commented Nov 5, 2025

No description provided.

zh30 added 14 commits October 16, 2025 19:19

- Fix the hardcoded 1024-token limit and allow selecting longer-context models via config
- Persist the selected model path in the user's cache directory for reuse
- Warn when no models are available instead of failing
- Improve error messages related to model selection
- Delay EOS token sampling until meaningful text is generated
- Skip the commit when the diff is empty instead of failing
- Handle the initial-commit case where no parent commit exists
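For illustration, the empty-diff and initial-commit guards described in the commits above might be sketched as follows. This is a minimal sketch under assumptions: the names `should_skip_commit` and `has_parent_commit` are hypothetical, not the PR's actual identifiers, and the parent check shells out to the `git` CLI rather than reflecting how the project detects it internally.

```rust
use std::process::Command;

/// Skip committing when the staged diff contains no content, so the
/// tool can warn and exit instead of failing (hypothetical helper).
fn should_skip_commit(staged_diff: &str) -> bool {
    staged_diff.trim().is_empty()
}

/// Detect the initial-commit case: `git rev-parse --verify HEAD`
/// exits non-zero when the repository has no commits yet, i.e. there
/// is no parent to diff against (hypothetical helper).
fn has_parent_commit() -> bool {
    Command::new("git")
        .args(["rev-parse", "--verify", "HEAD"])
        .output()
        .map(|out| out.status.success())
        .unwrap_or(false)
}
```

In a tool like this, both checks would run before any model inference is attempted, so an empty stage or an unborn branch never reaches the generation step.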
Introduce a GitHub Actions workflow that builds release binaries for
macOS (Intel & Apple Silicon), Linux (x86_64 & ARM64) and Windows.
Update the release workflow to download those artifacts, compute SHA256
checksums, patch `git-ca.rb` with the new version and bottle checksums,
and push the updated formula to the `homebrew-tap` repository.

Refresh documentation (`CLAUDE.md`, `DEPLOY.md`, `HOMEBREW.md`,
`README.md`) to describe the multi‑platform releases and Homebrew
installation steps.

Modify `git-ca.rb` to include a `bottle` block and install the pre‑built
binary instead of building from source.

changeset-bot bot commented Nov 5, 2025

⚠️ No Changeset found

Latest commit: ff14db5

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.


@gemini-code-assist

Summary of Changes

Hello @zh30, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant architectural shift for the Git Commit Analyzer, migrating from an external Ollama dependency to a fully local llama.cpp inference engine. This change aims to improve performance, reduce external dependencies, and enhance user privacy by enabling offline AI-powered commit message generation directly within the CLI. The refactor touches core AI interaction, dependency management, and updates all relevant documentation and installation scripts to reflect this new, self-contained approach.

Highlights

  • AI Backend Migration: The project has transitioned from using an external Ollama API for AI inference to directly integrating with llama.cpp for local GGUF model inference. This enhances privacy and enables offline functionality.
  • Core Logic Refactor: A new file, src/llama.rs, has been introduced to manage llama.cpp sessions. src/main.rs underwent a significant refactor to integrate the new inference engine, including robust model selection, download, and fallback mechanisms for commit message generation.
  • Dependency Overhaul: The project's dependencies have been updated to reflect the new AI backend. reqwest and serde_json have been replaced with llama-cpp-sys-2, hf-hub, and rand in Cargo.toml and Cargo.lock.
  • Documentation & Installation Update: All user-facing and internal documentation files (README*.md, AGENTS.md, CLAUDE.md, DEPLOY.md, HOMEBREW.md, INSTALL.md) have been thoroughly revised to reflect the new architecture, installation methods (especially Homebrew with pre-built binaries), and local model management.
  • Homebrew Formula Enhancement: The git-ca.rb Homebrew formula has been updated to support multi-platform pre-built binaries (bottles), simplifying the installation process for macOS and Linux users by eliminating the need for local compilation.
Ignored Files
  • Ignored by pattern: .github/workflows/** (2)
    • .github/workflows/build-binaries.yml
    • .github/workflows/release.yml
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This is a significant and impressive refactoring of the project. The move from an Ollama dependency to a self-contained binary using llama.cpp is a major architectural improvement that will greatly enhance the user experience. The accompanying updates to the documentation across all languages, the release process, and the Homebrew formula are thorough and well-executed. The new code is well-structured, with robust features like model auto-downloading, deterministic fallbacks, and a doctor command for diagnostics. I have a couple of minor suggestions to improve Conventional Commits compliance and for general cleanup, but overall this is excellent work.
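As a rough illustration of what the "deterministic fallbacks" mentioned above can look like when model inference is unavailable, consider the sketch below. It is an assumption-laden example: `fallback_commit_message` is a hypothetical name, and the PR's actual fallback logic may differ.

```rust
/// Build a predictable Conventional Commits subject from the staged
/// file list alone, with no model involved, so the tool always has a
/// valid message to offer when inference fails (hypothetical helper).
fn fallback_commit_message(staged_files: &[&str]) -> String {
    match staged_files {
        [] => "chore: update".to_string(),
        [only] => format!("chore: update {}", only),
        files => format!("chore: update {} files", files.len()),
    }
}
```

Because the output depends only on the input file list, the same stage always produces the same message, which is what makes the fallback deterministic.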

hf-hub = { version = "0.4.3", default-features = false, features = ["ureq", "native-tls"] }
rand = "0.9"

# Test comment


medium

This comment appears to be a leftover from development or testing. It should be removed to keep the Cargo.toml file clean.

Comment on lines +641 to +644
match lower.as_bytes().get(commit_type.len()) {
    Some(b'(') | Some(b':') => true,
    _ => false,
}


medium

The function is_commit_subject (and parse_commit_subject which has similar logic) doesn't handle the ! indicator for breaking changes as specified by the Conventional Commits standard (e.g., feat!: ...). To fully support the spec, this should be recognized as a valid format.

Suggested change:

  match lower.as_bytes().get(commit_type.len()) {
-     Some(b'(') | Some(b':') => true,
+     Some(b'(') | Some(b':') | Some(b'!') => true,
      _ => false,
  }
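Applying the reviewer's suggestion, a self-contained version of the check might read as follows. This is a sketch: `starts_with_commit_type` is a hypothetical helper name, not the `is_commit_subject` function from the PR.

```rust
/// Returns true when `subject` begins with `commit_type` followed by
/// one of the delimiters Conventional Commits allows after the type:
/// `(` opening a scope, `:` for the plain form, or `!` marking a
/// breaking change (e.g. `feat!: ...`).
fn starts_with_commit_type(subject: &str, commit_type: &str) -> bool {
    let lower = subject.to_ascii_lowercase();
    if !lower.starts_with(commit_type) {
        return false;
    }
    matches!(
        lower.as_bytes().get(commit_type.len()),
        Some(b'(') | Some(b':') | Some(b'!')
    )
}
```

Note that checking the byte immediately after the type also rejects near-misses such as `feature:` when the expected type is `feat`, since `u` is not one of the three delimiters.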

@zh30 zh30 merged commit 5b0ebac into main Nov 5, 2025
3 checks passed