# llm-tools

The missing standard library for Agentic Workflows. Native Go. Single Binary. 100x Faster than Python.


## ⚡ Why this exists

LLM Agents need to be fast. Waiting 400ms for a Python script or 100ms for Node.js to spin up just to read a file kills the flow of an autonomous loop.

llm-tools is a suite of high-performance, statically compiled tools designed to be the "hands" of your AI agent. It includes a native MCP Server for instant integration with Claude Desktop and Gemini.
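
For Claude Desktop, MCP servers are registered in its JSON config file. Below is a minimal sketch, assuming `llm-filesystem` serves MCP over stdio when run with no arguments; the exact command and flags are documented per tool, and you should merge into any existing config rather than overwriting it:

```bash
# Sketch: register llm-filesystem as an MCP server for Claude Desktop (macOS path).
# Assumes the binary serves MCP over stdio when run with no arguments.
# NOTE: this overwrites the file; merge by hand if you already have a config.
cat > "$HOME/Library/Application Support/Claude/claude_desktop_config.json" <<'EOF'
{
  "mcpServers": {
    "llm-filesystem": {
      "command": "/usr/local/bin/llm-filesystem",
      "args": []
    }
  }
}
EOF
```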

The "Rewrite it in Go" Effect

I benchmarked this against equivalent Python and Node.js implementations on a real-world codebase. The difference is massive.

### vs Python (llm-support)

*Python vs. Go Speed Comparison*

| Operation | Action | Go (llm-support) | Python | Speedup |
| --- | --- | --- | --- | --- |
| MCP Handshake | Server Initialization | 4ms | 408ms | 🚀 102x |
| Startup | CLI Help | 6ms | 113ms | 19x |
| Multigrep | Search 5 keywords (150k hits) | 1.47s | 20.7s | 14x |
| Hash | SHA256 Verification | 6ms | 65ms | 10.8x |

Benchmarks run on an M4 Pro (64 GB), macOS Darwin (arm64), 2025-12-26.

### vs Node.js (llm-filesystem)

I ported the popular fast-filesystem-mcp from TypeScript to Go to create llm-filesystem.

| Benchmark | Go (llm-filesystem) | TypeScript (Node) | Speedup |
| --- | --- | --- | --- |
| Cold Start | 5.2ms | 85.1ms | 🚀 16.5x |
| MCP Handshake | 40.8ms | 110.4ms | 2.7x |
| File Read | 49.5ms | 108.2ms | 2.2x |
| Directory Tree | 50.9ms | 113.7ms | 2.2x |

Benchmarks run on an M4 Pro (64 GB), macOS Darwin (arm64), 2025-12-31.

## 🚫 Zero Dependency Hell

Deploying agent tools in Python or Node is painful. You have to manage virtual environments, node_modules, pip dependencies, and version conflicts. llm-tools is a single static binary that works instantly on any machine, with no setup required.
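
Because the target machine needs no runtime, deployment can be a single copy. A sketch, where the host and paths are placeholders:

```bash
# Copy the static binary to a bare server and use it immediately:
# no interpreter, no venv, no node_modules required on the target.
scp llm-support user@server:/usr/local/bin/llm-support
ssh user@server 'llm-support tree --path /srv/app --depth 2'
```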

## 🤖 Standardized LLM Orchestration

llm-tools isn't just for reading files; it's a reliability layer for your agent's cognitive functions.

The `prompt` command acts as a Universal Adapter for almost any LLM CLI (`gemini`, `claude`, `ollama`, `openai`, `octo`). It wraps them with:

- **Retries & Backoff:** Automatically retries failed API calls.
- **Caching:** Caches expensive results to disk (`--cache-ttl 3600`).
- **Validation:** Ensures output meets criteria (`--min-length`, `--must-contain`) or fails fast.

```bash
# Reliable, cached, validated prompt execution
llm-support prompt \
  --prompt "Analyze this error log" \
  --llm gemini \
  --retries 3 \
  --cache \
  --min-length 50
```

## ⚡ Advanced Workflows

### Parallel Batch Processing (foreach)

Run prompts across thousands of files in parallel without writing a loop script. Perfect for migrations, code reviews, or documentation generation.

```bash
# Review all Go files in parallel (4 concurrent workers)
llm-support foreach \
  --glob "src/**/*.go" \
  --template templates/code-review.md \
  --llm claude \
  --parallel 4 \
  --output-dir ./reviews
```

## 🚀 Quick Start

### Pre-built Binaries

**Recommended:** Download the latest binary for your OS. No dependencies required.

| Platform | Download |
| --- | --- |
| macOS (Apple Silicon) | llm-tools-darwin-arm64.tar.gz |
| macOS (Intel) | llm-tools-darwin-amd64.tar.gz |
| Linux (AMD64) | llm-tools-linux-amd64.tar.gz |
| Windows | llm-tools-windows-amd64.zip |
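
Installing from a release archive on macOS (Apple Silicon) might look like the sketch below; the download URL pattern and the archive's contents are assumptions, so check the releases page for the actual asset layout:

```bash
# Hypothetical install sketch: download, extract, and place on PATH.
curl -LO https://github.com/samestrin/llm-tools/releases/latest/download/llm-tools-darwin-arm64.tar.gz
tar -xzf llm-tools-darwin-arm64.tar.gz
sudo mv llm-support /usr/local/bin/   # repeat for the other binaries in the archive
llm-support --help                    # sanity check: should print usage near-instantly
```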

### Installation (Go)

```bash
go install github.com/samestrin/llm-tools/cmd/llm-support@latest
go install github.com/samestrin/llm-tools/cmd/llm-clarification@latest
go install github.com/samestrin/llm-tools/cmd/llm-filesystem@latest
go install github.com/samestrin/llm-tools/cmd/llm-semantic@latest
```

## 💡 Common Recipes

See what's possible with a single command:

```bash
# Find all TODOs and FIXMEs (Fast grep)
llm-support grep "TODO|FIXME" . -i -n

# Show project structure (3 levels deep)
llm-support tree --path . --depth 3

# Search for multiple definitions in parallel (Token optimized)
llm-support multigrep --path src/ --keywords "handleSubmit,validateForm" -d

# Extract data from JSON without jq
llm-support json query response.json ".users[0]"

# Calculate values safely
llm-support math "round(42/100 * 75, 2)"

# Generate config from template
llm-support template config.tpl --var domain=example.com --var port=8080

# Hash all Go files (Integrity check)
llm-support hash internal/**/*.go -a sha256

# Count completed tasks in a sprint plan
llm-support count --mode checkboxes --path sprint/plan.md -r

# Detect project stack
llm-support detect --path .

# Extract only relevant context (AI-filtered) - works with files, dirs, and URLs
llm-support extract-relevant --path docs/ --context "Authentication Config"
llm-support extract-relevant --path https://docs.example.com --context "API keys"

# Extract and rank links from any webpage
llm-support extract-links --url https://example.com/docs --json

# Summarize directory content for context window (Token optimized)
llm-support summarize-dir src/ --format outline --max-tokens 2000

# Batch process files with a template (LLM-driven)
llm-support foreach --files "*.ts" --template refactor.md --parallel 4
```
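
These commands also compose. Here is a sketch that chains two of the recipes above, using only flags shown in this README; the prompt text is illustrative:

```bash
# Summarize a directory into a token-bounded outline, then hand it to an LLM.
llm-support summarize-dir src/ --format outline --max-tokens 2000 > /tmp/outline.md
llm-support prompt \
  --prompt "Suggest refactors for this module outline: $(cat /tmp/outline.md)" \
  --llm gemini \
  --retries 3 \
  --cache \
  --min-length 50
```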

## 📚 Documentation

Detailed references for all 40+ commands are available in the repository's documentation.

## 🧠 How It Works

The Loop:

1. Agent receives a task.
2. `llm-support` provides fast codebase context (files, structure, search results).
3. `llm-clarification` recalls past decisions ("Use Jest, not Mocha") to prevent regression.
4. Agent generates code with full context.

```mermaid
sequenceDiagram
    participant U as 👤 User
    participant A as 🤖 Agent
    participant S as ⚡ Support
    participant M as 🧠 Memory
    participant C as 📂 Codebase

    U->>A: /execute-sprint

    rect rgb(30, 30, 30)
        note right of A: Fast Context
        A->>S: multiexists, count, report
        S-->>A: ✓ Context Loaded (22ms)
    end

    rect rgb(50, 20, 20)
        note right of A: Long-Term Memory
        A->>M: match-clarification
        M-->>A: ⚠ RECALL: "Use Jest"
    end

    A->>C: TDD Implementation (using Jest)
```
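
In shell terms, steps 2 and 3 of the loop might look like the sketch below. The `llm-support` flags are the documented ones; the `llm-clarification` invocation is hypothetical, so check its docs for the real syntax:

```bash
# Step 2: fast codebase context using documented llm-support commands.
llm-support tree --path . --depth 3
llm-support multigrep --path src/ --keywords "describe,expect" -d

# Step 3: recall a past decision before generating code.
# Hypothetical invocation; real flags live in the llm-clarification docs.
llm-clarification match "test framework"
```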

## License

MIT License
