diff --git a/PRESENTATION-ambient-code-reference.md b/PRESENTATION-ambient-code-reference.md index 69af102..8fda29a 100644 --- a/PRESENTATION-ambient-code-reference.md +++ b/PRESENTATION-ambient-code-reference.md @@ -29,7 +29,7 @@ This reference repository provides useful patterns you can review / adopt increm A pre-configured AI agent definition that knows how to work with your codebase safely and consistently. The idea behind a CBA is to use an agent to eventually proxy 100% of the interaction with a codebase through this agent. -**cba = codebase system prompt** +(cba = codebase system prompt) ### The Problem It Solves @@ -107,7 +107,7 @@ What's important is that your team has collected and agrees upon the context sin ### Loading Context On-Demand -Instead of loading everything, you load what's relevant. +Instead of loading everything, you load what's relevant. "Load the security-standards context and help me review this authentication PR" @@ -131,7 +131,7 @@ For relation between elements, load these files into e.g. the local Anthropic Me ### Issue-to-PR Overview A pattern where well-defined GitHub issues can be automatically converted into pull requests by the CBA. -Example: https://github.com/ambient-code/agentready/pull/242 +Example: ### Routine Fix Overhead diff --git a/docs/patterns/codebase-agent.md b/docs/patterns/codebase-agent.md index a9d7019..2c15206 100644 --- a/docs/patterns/codebase-agent.md +++ b/docs/patterns/codebase-agent.md @@ -4,73 +4,83 @@ --- -## Overview - -!!! note "Section Summary" - What a CBA is: a markdown file that defines how AI works in your project. Problem it solves: inconsistent AI behavior across developers. Key benefit: every AI interaction follows the same process. - ---- - ## Quick Start -!!! note "Section Summary" - Copy-paste the CBA definition from `.claude/agents/codebase-agent.md`. Minimal customization: your linting commands, your test commands. Done in 15 minutes. +Create `.claude/agents/codebase-agent.md`: +```markdown +--- +name: codebase-agent +description: Autonomous codebase operations for [your-project] --- -## Agent Definition Structure - -### Capability Boundaries - -!!! note "Section Summary" - What the agent can do autonomously vs what requires human approval. Examples: formatting changes (auto), architecture changes (human approval). How to define your own boundaries. - -### Workflow Definitions +# Codebase Agent -!!! note "Section Summary" - Step-by-step processes for common tasks: issue-to-PR, code review, refactoring. Template workflows provided. How to customize for your process. +## Quality Gates (run before presenting code) +1. Lint: `npm run lint` +2. Test: `npm test` +3. Fix failures before showing code -### Quality Gates +## Safety Rules +- NEVER commit directly to main +- ALWAYS create feature branches +- ASK before breaking changes +``` -!!! note "Section Summary" - Linting, testing, and review requirements. Which tools to run, in what order. What constitutes a passing gate. Error handling. +--- -### Safety Guardrails +## Agent Structure -!!! note "Section Summary" - When to stop and ask for human input. Risk categories: low/medium/high. Examples of each. How to configure alert thresholds. +| Section | Purpose | Example | +|---------|---------|---------| +| **Capability Boundaries** | What agent can do autonomously | Formatting: auto. Architecture: human approval | +| **Workflow Definitions** | Step-by-step processes | Issue→PR, code review steps | +| **Quality Gates** | Tools to run, in order | `black . && isort . 
&& pytest` | +| **Safety Guardrails** | When to stop and ask | >10 files changed, security code, DB schema | --- ## Autonomy Levels -!!! note "Section Summary" - Level 1 (Conservative): PR creation only, wait for human approval. Level 2 (Moderate): Auto-merge for low-risk changes. Level 3 (Aggressive): Auto-deploy after tests pass. How to graduate between levels. +| Level | Behavior | Use When | +|-------|----------|----------| +| **1: Conservative** | Create PRs only, wait for human approval | Starting out, high-risk projects | +| **2: Moderate** | Auto-merge docs/deps/lint fixes after CI passes | Established trust, good test coverage | +| **3: Aggressive** | Auto-deploy after tests pass | Mature codebase, comprehensive CI | ---- +Start at Level 1. Graduate as you build trust. -## Memory System Integration +--- -!!! note "Section Summary" - How CBA uses context files from `.claude/context/`. Loading context on-demand. When to reference which context file. +## Memory System ---- +Context files in `.claude/context/` provide persistent knowledge: -## Real-World Examples +```text +.claude/ +├── agents/ +│ └── codebase-agent.md +└── context/ + ├── architecture.md # Code structure patterns + ├── security-standards.md + └── testing-patterns.md +``` -!!! note "Section Summary" - CBA configurations for different stacks: Python/FastAPI, TypeScript/Express, Go. What's different, what's the same. +Reference in your agent: "Load `.claude/context/architecture.md` for code placement decisions." --- ## Troubleshooting -!!! note "Section Summary" - Common issues: agent ignores boundaries, agent is too conservative, agent makes up conventions. Solutions for each. +| Problem | Fix | +|---------|-----| +| Agent ignores boundaries | Make rules explicit: "NEVER delete files without asking" | +| Agent too conservative | Define allowed autonomous actions explicitly | +| Agent invents conventions | Provide code examples in context files | --- ## Related Patterns -- [Self-Review Reflection](self-review-reflection.md) - Add quality gates to CBA output -- [Memory System](../getting-started/first-cba.md) - Persistent context across sessions +- [Self-Review Reflection](self-review-reflection.md) +- [Autonomous Quality Enforcement](autonomous-quality-enforcement.md) diff --git a/docs/patterns/dependabot-auto-merge.md b/docs/patterns/dependabot-auto-merge.md index d9c8345..7f5364d 100644 --- a/docs/patterns/dependabot-auto-merge.md +++ b/docs/patterns/dependabot-auto-merge.md @@ -4,17 +4,41 @@ --- -## Overview - -!!! note "Section Summary" - When Dependabot creates a PR for a patch version update, auto-merge after CI passes. Keep dependencies current without manual effort. Minor/major updates still require human review. - ---- - ## Quick Start -!!! note "Section Summary" - Copy the workflow YAML. No additional secrets needed. Enable Dependabot in your repo. Watch patch updates auto-merge. 
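+The workflow below assumes Dependabot is already enabled for the repository. If it is not, a minimal `.github/dependabot.yml` might look like this (the `npm` ecosystem and weekly cadence are placeholder assumptions; adjust for your stack):
+
+```yaml
+# .github/dependabot.yml - minimal sketch
+version: 2
+updates:
+  - package-ecosystem: "npm"   # e.g. "pip", "gomod", "github-actions"
+    directory: "/"             # location of the package manifest
+    schedule:
+      interval: "weekly"
+```
+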
+Create `.github/workflows/dependabot-auto-merge.yml`: + +```yaml +name: Dependabot Auto-Merge + +on: + pull_request: + types: [opened, synchronize, reopened] + +permissions: + contents: write + pull-requests: write + +jobs: + auto-merge: + if: github.actor == 'dependabot[bot]' + runs-on: ubuntu-latest + steps: + - name: Fetch metadata + id: metadata + uses: dependabot/fetch-metadata@v2 + with: + github-token: ${{ secrets.GITHUB_TOKEN }} + + - name: Auto-merge patch updates + if: steps.metadata.outputs.update-type == 'version-update:semver-patch' + run: gh pr merge --auto --squash "$PR_URL" + env: + PR_URL: ${{ github.event.pull_request.html_url }} + GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} +``` + +Enable Dependabot in `.github/dependabot.yml`. Set up branch protection requiring CI to pass. --- @@ -22,36 +46,48 @@ ```mermaid flowchart TD - A[Dependabot PR] --> B{Patch Version?} - B -->|No| C[Require Human Review] - B -->|Yes| D{CI Passes?} - D -->|No| C - D -->|Yes| E[Auto-Merge] + A[Dependabot PR] --> B{Author is dependabot?} + B -->|No| C[Normal Review] + B -->|Yes| D[Fetch Metadata] + D --> E{Patch Version?} + E -->|No| C + E -->|Yes| F{CI Passes?} + F -->|No| C + F -->|Yes| G[Auto-Merge] ``` --- -## Safety Conditions +## Update Types -!!! note "Section Summary" - Only auto-merge when ALL conditions met: author is dependabot[bot], update is patch version, all CI checks pass, no merge conflicts. Why each condition matters. +| Type | Value | Risk | Default Action | +|------|-------|------|----------------| +| Patch | `version-update:semver-patch` | Low | Auto-merge | +| Minor | `version-update:semver-minor` | Medium | Human review | +| Major | `version-update:semver-major` | High | Human review | --- -## Workflow YAML +## Options -!!! note "Section Summary" - Complete workflow file. Dependabot metadata action. Conditional auto-merge. Squash and delete branch. +| Option | Add to workflow | +|--------|-----------------| +| **Also merge minor** | Add step with `if: steps.metadata.outputs.update-type == 'version-update:semver-minor'` | +| **Only dev deps** | Add `&& steps.metadata.outputs.dependency-type == 'direct:development'` | +| **Exclude packages** | Check `steps.metadata.outputs.dependency-names` doesn't contain package | --- -## Customization +## Troubleshooting -!!! note "Section Summary" - Auto-merge minor versions too. Exclude specific packages. Custom merge strategy. Notifications. +| Problem | Fix | +|---------|-----| +| Auto-merge not triggering | Use `pull_request_target` event for fork PRs | +| Permission denied | Add `contents: write` permission | +| Doesn't wait for CI | Enable branch protection with required status checks | --- ## Related Patterns -- [Stale Issue Management](stale-issues.md) - Another proactive cleanup pattern +- [Stale Issue Management](stale-issues.md) diff --git a/docs/patterns/index.md b/docs/patterns/index.md index 53b6ec8..461627d 100644 --- a/docs/patterns/index.md +++ b/docs/patterns/index.md @@ -1,58 +1,50 @@ # Patterns Index -**All AI-assisted development patterns in one place.** +Each pattern is standalone—adopt what you need. --- -## Overview +## Agent Behavior -!!! note "Section Summary" - Patterns are organized into three categories: Agent Behavior (how AI works), GHA Automation (proactive CI/CD), and Foundation (enabling patterns). Each pattern is standalone - adopt what you need. 
+| Pattern | Effort | Impact | +|---------|--------|--------| +| [Codebase Agent](codebase-agent.md) | Medium | High | +| [Self-Review Reflection](self-review-reflection.md) | Low | High | +| [Autonomous Quality Enforcement](autonomous-quality-enforcement.md) | Medium | High | +| [Multi-Agent Code Review](multi-agent-code-review.md) | High | Very High | --- -## Agent Behavior Patterns +## GHA Automation -How AI agents behave during development. - -| Pattern | Effort | Impact | Description | -|---------|--------|--------|-------------| -| [Codebase Agent](codebase-agent.md) | Medium | High | Single source of truth for AI behavior | -| [Self-Review Reflection](self-review-reflection.md) | Low | High | Agent reviews own work before presenting | -| [Autonomous Quality Enforcement](autonomous-quality-enforcement.md) | Medium | High | Validate code before delivery | -| [Multi-Agent Code Review](multi-agent-code-review.md) | High | Very High | Parallel specialized reviews | - ---- - -## GHA Automation Patterns - -Proactive CI/CD workflows that reduce toil. - -| Pattern | Trigger | Effort | Impact | -|---------|---------|--------|--------| -| [Issue-to-PR](issue-to-pr.md) | `issues.opened` | High | Very High | -| [PR Auto-Review](pr-auto-review.md) | `pull_request` | Medium | High | -| [Dependabot Auto-Merge](dependabot-auto-merge.md) | `pull_request` | Low | Medium | -| [Stale Issue Management](stale-issues.md) | `schedule` | Low | Medium | +| Pattern | Trigger | Effort | +|---------|---------|--------| +| [Issue-to-PR](issue-to-pr.md) | `issues.opened` | High | +| [PR Auto-Review](pr-auto-review.md) | `pull_request` | Medium | +| [Dependabot Auto-Merge](dependabot-auto-merge.md) | `pull_request` | Low | +| [Stale Issue Management](stale-issues.md) | `schedule` | Low | --- -## Foundation Patterns +## Foundation -Patterns that make AI more effective. - -| Pattern | Purpose | Effort | Impact | -|---------|---------|--------|--------| -| [Layered Architecture](layered-architecture.md) | Code structure AI can reason about | Low | Medium | -| [Security Patterns](security-patterns.md) | Practical protection | Low | Medium | -| [Testing Patterns](testing-patterns.md) | Test pyramid approach | Medium | High | +| Pattern | Purpose | +|---------|---------| +| [Layered Architecture](layered-architecture.md) | Code structure AI can reason about | +| [Security Patterns](security-patterns.md) | Validate at boundaries | +| [Testing Patterns](testing-patterns.md) | Test pyramid approach | --- -## Adoption Matrix +## Start Here -!!! note "Section Summary" - Decision tree for which patterns to adopt based on your situation. Pain point → recommended pattern mapping. Effort/impact quadrant visualization. +| Pain Point | Pattern | +|------------|---------| +| AI gives inconsistent answers | [Codebase Agent](codebase-agent.md) | +| AI misses obvious bugs | [Self-Review Reflection](self-review-reflection.md) | +| PRs take forever to create | [Issue-to-PR](issue-to-pr.md) | +| Code reviews are bottleneck | [PR Auto-Review](pr-auto-review.md) | +| Dependency updates pile up | [Dependabot Auto-Merge](dependabot-auto-merge.md) | --- @@ -61,24 +53,29 @@ Patterns that make AI more effective. ```mermaid flowchart TD CBA[Codebase Agent] --> SR[Self-Review] - CBA --> MEM[Memory System] CBA --> AQE[Autonomous Quality] - AQE --> ITP[Issue-to-PR] SR --> ITP - ITP --> PAR[PR Auto-Review] - LA[Layered Architecture] -.-> CBA SEC[Security Patterns] -.-> CBA TEST[Testing Patterns] -.-> AQE ``` -Solid arrows: recommended order. 
Dashed arrows: optional dependencies. - --- ## Quick Reference -!!! note "Section Summary" - One-page cheat sheet of all patterns with key commands and file paths. Printable format. +| File | Location | +|------|----------| +| CBA definition | `.claude/agents/codebase-agent.md` | +| Context files | `.claude/context/*.md` | +| Issue-to-PR | `.github/workflows/issue-to-pr.yml` | +| PR Review | `.github/workflows/pr-review.yml` | +| Dependabot | `.github/workflows/dependabot-auto-merge.yml` | +| Stale | `.github/workflows/stale.yml` | + +| Secret | Used By | +|--------|---------| +| `ANTHROPIC_API_KEY` | Issue-to-PR, PR Review | +| `GITHUB_TOKEN` | All workflows (auto-provided) | diff --git a/docs/patterns/issue-to-pr.md b/docs/patterns/issue-to-pr.md index da2bfd7..bcece6c 100644 --- a/docs/patterns/issue-to-pr.md +++ b/docs/patterns/issue-to-pr.md @@ -4,17 +4,47 @@ --- -## Overview - -!!! note "Section Summary" - When a well-defined issue is created, AI analyzes it and creates a draft PR. Reduces the 20-minute PR ceremony for 2-minute fixes. Human reviews the draft, not the initial work. - ---- - ## Quick Start -!!! note "Section Summary" - Copy the workflow YAML. Configure secrets (ANTHROPIC_API_KEY). Create a test issue. See the draft PR appear. +Create `.github/workflows/issue-to-pr.yml`: + +```yaml +name: Issue to Draft PR + +on: + issues: + types: [opened, labeled] + +permissions: + contents: write + pull-requests: write + issues: write + +jobs: + create-pr: + if: contains(github.event.issue.labels.*.name, 'ready-for-pr') + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + + - uses: anthropics/claude-code-action@v1 + with: + anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }} + prompt: | + Analyze issue #${{ github.event.issue.number }}. + Title: ${{ github.event.issue.title }} + Body: ${{ github.event.issue.body }} + + If requirements are clear: + 1. Create branch: feat/issue-${{ github.event.issue.number }} + 2. Implement the fix + 3. Run tests + 4. Create a draft PR linking to the issue + + If unclear: comment asking for clarification. +``` + +Add `ANTHROPIC_API_KEY` to repository secrets. Label an issue `ready-for-pr` to trigger. --- @@ -25,50 +55,49 @@ flowchart TD A[Issue Opened] --> B{Well-defined?} B -->|No| C[Request Clarification] B -->|Yes| D[Analyze Issue] - D --> E[Self-Review Analysis] - E --> F[Create Branch] - F --> G[Create Draft PR] - G --> H[Link to Issue] + D --> E[Create Branch] + E --> F[Implement Changes] + F --> G[Run Tests] + G --> H{Tests Pass?} + H -->|No| I[Fix and Retry] + I --> G + H -->|Yes| J[Create Draft PR] + J --> K[Link to Issue] ``` --- ## Risk Categories -!!! note "Section Summary" - Low risk (auto-fix eligible): formatting, linting, unused imports. Medium risk (PR only): refactoring, test additions. High risk (report only): breaking changes, security. How to configure each. - ---- - -## Workflow YAML - -!!! note "Section Summary" - Complete workflow file with annotations. Trigger conditions. Permissions required. Environment variables. +| Risk | Examples | Action | +|------|----------|--------| +| **Low** | Typo fixes, doc updates, lint errors | Auto-fix, create PR | +| **Medium** | Bug fixes, small features | Draft PR, require review | +| **High** | Breaking changes, security, new APIs | Analyze only, report findings | --- ## Safety Gates -!!! note "Section Summary" - Draft PR only (requires human merge). AI analysis step with self-review. Clarification requests for unclear issues. How to add custom gates. 
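+Each gate below can be enforced in the workflow prompt or as an explicit step. As a minimal sketch of the size-limit gate (the step name and 500-word threshold are assumptions), a pre-check placed before the Claude step fails fast on oversized issues:
+
+```yaml
+      # Hypothetical pre-check step: refuse very large issues
+      - name: Enforce issue size limit
+        env:
+          ISSUE_BODY: ${{ github.event.issue.body }}
+        run: |
+          if [ "$(echo "$ISSUE_BODY" | wc -w)" -gt 500 ]; then
+            echo "Issue too large for automated PR creation; skipping."
+            exit 1
+          fi
+```
+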
- ---- - -## Customization - -!!! note "Section Summary" - Custom labels for different risk levels. Custom analysis prompts. Integration with project boards. Slack notifications. +| Gate | Implementation | +|------|----------------| +| **Draft PR only** | `gh pr create --draft` | +| **Clarification requests** | Comment on issue if requirements unclear | +| **Size limits** | Skip if issue body >500 words | --- ## Troubleshooting -!!! note "Section Summary" - Common issues: workflow doesn't trigger, AI creates wrong PR, permissions errors. Solutions for each. +| Problem | Fix | +|---------|-----| +| Workflow doesn't trigger | Check label is exactly `ready-for-pr`, workflow on default branch | +| AI creates wrong PR | Add more context in issue template, include acceptance criteria | +| Permission errors | Ensure `contents: write`, `pull-requests: write`, `issues: write` | --- ## Related Patterns -- [Self-Review Reflection](self-review-reflection.md) - AI reviews its own analysis -- [PR Auto-Review](pr-auto-review.md) - AI reviews the resulting PR +- [Self-Review Reflection](self-review-reflection.md) +- [PR Auto-Review](pr-auto-review.md) diff --git a/docs/patterns/layered-architecture.md b/docs/patterns/layered-architecture.md index bedc966..284c190 100644 --- a/docs/patterns/layered-architecture.md +++ b/docs/patterns/layered-architecture.md @@ -1,69 +1,93 @@ # Layered Architecture -**Code structure AI can reason about effectively.** +**Code structure AI can reason about.** --- -## Overview - -!!! note "Section Summary" - AI assistants struggle with spaghetti code. Clear layer boundaries help AI make better decisions. Four layers: API, Service, Model, Core. Dependency rule: higher depends on lower, never reverse. - ---- +## The Four Layers -## Quick Start +```text +┌─────────────────────────────────┐ +│ API Layer (FastAPI) │ Routes, HTTP status codes +├─────────────────────────────────┤ +│ Service Layer (Logic) │ Business rules +├─────────────────────────────────┤ +│ Model Layer (Pydantic) │ Validation, serialization +├─────────────────────────────────┤ +│ Core Layer (Utilities) │ Config, security +└─────────────────────────────────┘ +``` -!!! note "Section Summary" - Example directory structure. What goes in each layer. How to reference in CBA context files. +**Dependency rule**: Higher layers import lower, never reverse. --- -## The Four Layers - -### API Layer +## Directory Structure -!!! note "Section Summary" - Route handlers, request/response models, HTTP status codes, OpenAPI documentation. No business logic here. - -### Service Layer +```text +app/ +├── api/v1/items.py # Routes +├── services/item_service.py # Business logic +├── models/item.py # Pydantic models +└── core/ + ├── config.py # Settings + └── security.py # Utilities +``` -!!! note "Section Summary" - Business logic, CRUD operations, orchestration. No HTTP concerns, no database queries directly. +--- -### Model Layer +## Layer Responsibilities -!!! note "Section Summary" - Pydantic models, field validation, sanitization, serialization. Data structures and their rules. +| Layer | Does | Doesn't | +|-------|------|---------| +| **API** | Routes, HTTP errors, OpenAPI docs | Business logic | +| **Service** | Business rules, CRUD, orchestration | HTTP concerns | +| **Model** | Validation, sanitization | Business logic | +| **Core** | Config, security utils | Domain logic | -### Core Layer +--- -!!! note "Section Summary" - Configuration, security utilities, logging, shared utilities. Cross-cutting concerns. 
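+
+## Enforcing the Rule
+
+The dependency rule holds up best when a linter checks it. One option is a layers contract with the `import-linter` tool; a minimal sketch, assuming the `app.*` layout shown above:
+
+```ini
+# .importlinter - one possible contract for the layout above
+[importlinter]
+root_package = app
+
+[importlinter:contract:layers]
+name = Layered architecture
+type = layers
+layers =
+    app.api
+    app.services
+    app.models
+    app.core
+```
+
+Running `lint-imports` in CI then fails the build whenever a lower layer imports a higher one.
+
+---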
+## Example + +```python +# API Layer - handles HTTP +@router.post("/items", status_code=201) +def create_item(data: ItemCreate): + try: + return item_service.create_item(data) + except ValueError as e: + raise HTTPException(status_code=409, detail=str(e)) + +# Service Layer - business logic +class ItemService: + def create_item(self, data: ItemCreate) -> Item: + if self._slug_exists(data.slug): + raise ValueError("Duplicate slug") + return Item(id=self._next_id, **data.model_dump()) + +# Model Layer - validation +class ItemCreate(BaseModel): + name: str = Field(..., min_length=1, max_length=200) + slug: str = Field(..., pattern=r"^[a-z0-9-]+$") +``` --- -## Dependency Rule +## Dependency Diagram ```mermaid flowchart TD API[API Layer] --> Service[Service Layer] Service --> Model[Model Layer] Model --> Core[Core Layer] + API --> Model + API --> Core + Service --> Core ``` -!!! note "Section Summary" - Why the rule matters. How to enforce it. What to do when you need to break it. - ---- - -## AI Benefits - -!!! note "Section Summary" - Predictable AI outputs. Easier testing. Safer refactoring. How to describe layers in context files. - --- ## Related Patterns -- [Security Patterns](security-patterns.md) - Where validation happens in layers +- [Security Patterns](security-patterns.md) - Where validation happens - [Testing Patterns](testing-patterns.md) - How to test each layer diff --git a/docs/patterns/pr-auto-review.md b/docs/patterns/pr-auto-review.md index e4accfa..a76635b 100644 --- a/docs/patterns/pr-auto-review.md +++ b/docs/patterns/pr-auto-review.md @@ -4,17 +4,46 @@ --- -## Overview +## Quick Start -!!! note "Section Summary" - When any PR is opened or updated, AI reviews the code and posts structured feedback. Catches obvious issues before human time is spent. Severity levels: CRITICAL, WARNING, GOOD. +Create `.github/workflows/pr-review.yml`: ---- +```yaml +name: PR Auto-Review -## Quick Start +on: + pull_request: + types: [opened, synchronize, ready_for_review] + +permissions: + contents: read + pull-requests: write + +jobs: + review: + if: github.event.pull_request.draft == false + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 -!!! note "Section Summary" - Copy the workflow YAML. Configure secrets. Open a test PR. See the AI review comment appear. + - uses: anthropics/claude-code-action@v1 + with: + anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }} + prompt: | + Review PR #${{ github.event.pull_request.number }}. + + Focus on: + - Security (injection, validation, secrets) + - Bugs (edge cases, error handling) + - Code quality (clarity, maintainability) + + Format findings as: + 🔴 CRITICAL: [must fix] + 🟡 WARNING: [should consider] + ✅ GOOD: [positive observation] + + Be concise. Only flag high-confidence issues. +``` --- @@ -22,35 +51,49 @@ ```mermaid flowchart TD - A[PR Opened/Updated] --> B[AI Reviews Code] - B --> C[Self-Review Findings] - C --> D[Post Review Comment] + A[PR Opened/Updated] --> B{Draft?} + B -->|Yes| C[Skip Review] + B -->|No| D[Checkout Code] + D --> E[AI Analyzes Diff] + E --> F[Generate Findings] + F --> G[Post Review Comment] ``` --- -## Review Format +## Review Severity -!!! note "Section Summary" - Structured output format with emojis for quick scanning. CRITICAL (must fix), WARNING (should consider), GOOD (positive observations). Examples of each. 
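+The prompt above asks for findings in a fixed, scannable format. An illustrative posted comment (the findings are invented for illustration):
+
+```text
+🔴 CRITICAL: SQL query built by string concatenation in get_user(); use a parameterized query.
+🟡 WARNING: parse_date() swallows all exceptions; log or re-raise unexpected errors.
+✅ GOOD: Thorough input validation on the new endpoint.
+```
+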
+| Level | Icon | Meaning | +|-------|------|---------| +| Critical | 🔴 | Security risk, crash, data loss - must fix | +| Warning | 🟡 | Bug risk, maintainability - should address | +| Info | ℹ️ | Suggestion - optional | +| Good | ✅ | Positive observation | --- -## Workflow YAML +## Options -!!! note "Section Summary" - Complete workflow file. Trigger on opened and synchronize. Review prompt with focus areas. Comment posting. +| Option | Add to workflow | +|--------|-----------------| +| **Inline comments** | `track_progress: true` in action inputs | +| **Skip Dependabot** | `if: github.actor != 'dependabot[bot]'` | +| **Skip by label** | `if: !contains(github.event.pull_request.labels.*.name, 'skip-review')` | +| **Block on critical** | Check output, `exit 1` if CRITICAL found | --- -## Customization +## Troubleshooting -!!! note "Section Summary" - Custom review criteria. Different prompts for different file types. Blocking vs commenting only. Integration with required reviews. +| Problem | Fix | +|---------|-----| +| Review not appearing | Check PR not draft, `pull-requests: write` permission set | +| Too noisy | Add "only flag high-confidence issues" to prompt | +| Misses issues | Increase `--max-turns`, add project-specific review criteria | --- ## Related Patterns -- [Issue-to-PR](issue-to-pr.md) - Source of PRs to review -- [Multi-Agent Code Review](multi-agent-code-review.md) - Multiple specialized reviewers +- [Issue-to-PR](issue-to-pr.md) +- [Multi-Agent Code Review](multi-agent-code-review.md) diff --git a/docs/patterns/security-patterns.md b/docs/patterns/security-patterns.md index 45c2203..fe6157f 100644 --- a/docs/patterns/security-patterns.md +++ b/docs/patterns/security-patterns.md @@ -1,59 +1,100 @@ # Security Patterns -**Practical protection without over-engineering.** +**Validate at boundaries, trust internal code.** --- -## Overview +## Quick Start -!!! note "Section Summary" - Philosophy: "Validate at boundaries, trust internal code." Most security bugs come from unvalidated user input, hardcoded secrets, and injection attacks. Focus on actual attack vectors. +1. Use Pydantic for all API inputs +2. Move secrets to environment variables +3. Add `.env` to `.gitignore` --- -## Quick Start +## Input Validation -!!! note "Section Summary" - Copy security context file. Add Pydantic validation to API models. Move secrets to environment variables. Done. +Validate once at API boundary: ---- +```python +from pydantic import BaseModel, Field, field_validator -## Input Validation +class ItemCreate(BaseModel): + name: str = Field(..., min_length=1, max_length=200) + slug: str = Field(..., pattern=r"^[a-z0-9-]+$") + + @field_validator("name") + @classmethod + def sanitize_name(cls, v: str) -> str: + return sanitize_string(v, max_length=200) +``` -!!! note "Section Summary" - Validate all request payloads with Pydantic. Sanitization in model validators. Internal code trusts validated data. Examples of validation patterns. +Don't re-validate in service layer—data is already clean. --- -## Sanitization Functions +## Sanitization -!!! note "Section Summary" - sanitize_string() - remove control characters, trim whitespace. validate_slug() - ensure URL-safe identifiers. When to use each. 
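+These helpers live in the core layer and are called from model validators at the API boundary: `sanitize_string()` for free-text fields, `validate_slug()` for identifiers used in URLs or paths.
+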
+```python +# app/core/security.py +import re + +def sanitize_string(value: str, max_length: int = 1000) -> str: + """Remove control chars, trim, enforce length.""" + cleaned = re.sub(r"[\x00-\x1f\x7f-\x9f]", "", value) + return cleaned.strip()[:max_length] + +def validate_slug(value: str) -> str: + """Validate URL-safe slug.""" + if not re.match(r"^[a-z0-9]+(-[a-z0-9]+)*$", value): + raise ValueError("Invalid slug") + return value +``` --- ## Secrets Management -!!! note "Section Summary" - Environment variables only. .env files never committed. Pydantic Settings for config. Container secrets. +```python +# app/core/config.py +from pydantic_settings import BaseSettings + +class Settings(BaseSettings): + database_url: str + secret_key: str + + class Config: + env_file = ".env" +``` + +| Environment | How to pass secrets | +|-------------|---------------------| +| Local dev | `.env` file (never commit) | +| Container | `-e SECRET_KEY=xxx` | +| CI/CD | GitHub Actions secrets | --- -## What We Don't Do +## Common Vulnerabilities -!!! note "Section Summary" - No security theater. No excessive validation everywhere. No complex encryption for non-sensitive data. Why less is more. +| Risk | Prevention | +|------|------------| +| SQL Injection | Use ORM or parameterized queries | +| Command Injection | `subprocess.run(["cmd", arg])` not `os.system(f"cmd {arg}")` | +| XSS | FastAPI auto-escapes JSON | +| Secrets in code | Environment variables only | --- -## AI Integration +## What NOT to Do -!!! note "Section Summary" - How to describe security rules in context files. What the CBA should check. Automated security review in PRs. +- Don't validate same data in multiple layers +- Don't encrypt non-sensitive config +- Don't commit `.env` files "just for testing" --- ## Related Patterns -- [Layered Architecture](layered-architecture.md) - Where security boundaries live +- [Layered Architecture](layered-architecture.md) - Where validation happens - [PR Auto-Review](pr-auto-review.md) - Automated security checks diff --git a/docs/patterns/stale-issues.md b/docs/patterns/stale-issues.md index 82204ce..06579ef 100644 --- a/docs/patterns/stale-issues.md +++ b/docs/patterns/stale-issues.md @@ -4,17 +4,35 @@ --- -## Overview - -!!! note "Section Summary" - Issues inactive for 30+ days get labeled stale. After 7 more days of inactivity, they close automatically. Exempt labels protect important issues. Keeps backlog clean without manual triage. - ---- - ## Quick Start -!!! note "Section Summary" - Copy the workflow YAML. Configure exempt labels. Run manually to test. Watch stale issues get cleaned up. +Create `.github/workflows/stale.yml`: + +```yaml +name: Close Stale Issues + +on: + schedule: + - cron: '0 0 * * 0' # Weekly + workflow_dispatch: + +permissions: + issues: write + pull-requests: write + +jobs: + stale: + runs-on: ubuntu-latest + steps: + - uses: actions/stale@v9 + with: + days-before-stale: 30 + days-before-close: 7 + stale-issue-label: 'stale' + stale-issue-message: | + Inactive for 30 days. Will close in 7 days unless there's activity. 
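+          # Optional extras (values below are assumptions, not defaults):
+          close-issue-message: |
+            Closing due to inactivity. Comment to reopen if this is still relevant.
+          days-before-pr-stale: -1   # leave pull requests alone; only issues go stale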
+ exempt-issue-labels: 'pinned,security,bug' +``` --- @@ -22,44 +40,52 @@ ```mermaid flowchart TD - A[Weekly Schedule] --> B[Find Inactive Issues] - B --> C[Add Stale Label] - C --> D[Wait 7 Days] - D --> E{Activity?} - E -->|Yes| F[Remove Stale Label] - E -->|No| G[Close Issue] + A[Scheduled Run] --> B{Inactive > 30 days?} + B -->|No| C[Skip] + B -->|Yes| D{Has exempt label?} + D -->|Yes| C + D -->|No| E[Add Stale Label] + E --> F[Wait 7 days] + F --> G{Activity?} + G -->|Yes| H[Remove Stale] + G -->|No| I[Close Issue] ``` --- ## Configuration -!!! note "Section Summary" - Inactivity threshold (default: 30 days). Warning period (default: 7 days). Stale label name. Warning message customization. +| Option | Default | Description | +|--------|---------|-------------| +| `days-before-stale` | 60 | Days inactive before marking stale | +| `days-before-close` | 7 | Days after stale before closing | +| `stale-issue-label` | Stale | Label to apply | +| `exempt-issue-labels` | - | Labels that prevent stale (comma-separated) | +| `days-before-pr-stale` | 60 | Set to -1 to disable for PRs | --- ## Exempt Labels -!!! note "Section Summary" - Which labels prevent closure: pinned, security, bug. How to add custom exempt labels. When to use exempt vs just commenting. - ---- - -## Workflow YAML - -!!! note "Section Summary" - Complete workflow file using actions/stale. Schedule configuration. Issue and PR settings. Exempt labels. +| Label | Purpose | +|-------|---------| +| `pinned` | Long-term tracking | +| `security` | Security issues | +| `bug` | Confirmed bugs | +| `help-wanted` | Seeking contributions | --- -## Customization +## Troubleshooting -!!! note "Section Summary" - Different thresholds for issues vs PRs. Custom messages. Integration with project boards. Metrics tracking. +| Problem | Fix | +|---------|-----| +| Not running | Check cron syntax, workflow on default branch | +| Closing important issues | Add exempt labels | +| Too aggressive | Increase `days-before-stale` | --- ## Related Patterns -- [Dependabot Auto-Merge](dependabot-auto-merge.md) - Another proactive cleanup pattern +- [Dependabot Auto-Merge](dependabot-auto-merge.md) diff --git a/docs/patterns/testing-patterns.md b/docs/patterns/testing-patterns.md index a220b7d..061e1be 100644 --- a/docs/patterns/testing-patterns.md +++ b/docs/patterns/testing-patterns.md @@ -1,59 +1,134 @@ # Testing Patterns -**Test pyramid approach with clear responsibilities.** +**Test pyramid: many unit, some integration, few E2E.** --- -## Overview +## Quick Start + +```bash +mkdir -p tests/unit tests/integration tests/e2e +``` -!!! note "Section Summary" - Three levels: unit (many, fast), integration (some, medium), E2E (few, slow). Each level has clear responsibilities. 80%+ coverage target without chasing 100%. +```ini +# pytest.ini +[pytest] +testpaths = tests +addopts = --cov=app --cov-report=term-missing --cov-fail-under=80 +``` --- -## Quick Start +## Test Levels -!!! note "Section Summary" - Copy testing context file. Set up tests/ directory structure. Configure pytest. Run first tests. +| Level | Location | Purpose | Speed | +|-------|----------|---------|-------| +| **Unit** | `tests/unit/` | Service layer logic | Fast | +| **Integration** | `tests/integration/` | API endpoints | Medium | +| **E2E** | `tests/e2e/` | Critical user journeys | Slow | --- ## Unit Tests -!!! note "Section Summary" - Test service layer in isolation. Mock external dependencies. Arrange-Act-Assert pattern. Location: tests/unit/. What to test, what not to test. 
+Test business logic in isolation. No HTTP, no database. + +```python +# tests/unit/test_item_service.py +def test_create_item(): + service = ItemService() + item = service.create(ItemCreate(name="Test", slug="test")) + + assert item.name == "Test" + assert item.id is not None + +def test_duplicate_slug_raises(): + service = ItemService() + service.create(ItemCreate(name="A", slug="test")) + + with pytest.raises(ValueError, match="already exists"): + service.create(ItemCreate(name="B", slug="test")) +``` --- ## Integration Tests -!!! note "Section Summary" - Test API endpoints with TestClient. Real request/response cycle. Database fixtures if applicable. Location: tests/integration/. When integration tests are better than unit tests. +Test API endpoints with TestClient. + +```python +# tests/integration/test_api.py +from fastapi.testclient import TestClient +from app.main import app + +client = TestClient(app) + +def test_create_item_returns_201(): + response = client.post("/items", json={"name": "Test", "slug": "test"}) + assert response.status_code == 201 + +def test_not_found_returns_404(): + response = client.get("/items/99999") + assert response.status_code == 404 +``` --- ## E2E Tests -!!! note "Section Summary" - Test complete workflows. CBA automation scenarios. Location: tests/e2e/. Why to keep these minimal. +Test complete user journeys. Keep minimal. + +```python +# tests/e2e/test_item_lifecycle.py +@pytest.mark.e2e +def test_create_read_update_delete(client): + # Create + resp = client.post("/items", json={"name": "Widget", "slug": "widget"}) + item_id = resp.json()["id"] + + # Read + assert client.get(f"/items/{item_id}").status_code == 200 + + # Update + client.patch(f"/items/{item_id}", json={"name": "Updated"}) + + # Delete + assert client.delete(f"/items/{item_id}").status_code == 204 +``` --- -## Coverage Philosophy +## Coverage -!!! note "Section Summary" - Target 80%+ coverage. Focus on critical paths. Don't chase 100% (diminishing returns). How to identify critical paths. +| Priority | What | Target | +|----------|------|--------| +| High | Business logic, security | 100% | +| Medium | API endpoints | 90% | +| Low | Config, utilities | 60% | ---- +Target 80%+ overall. Don't chase 100%. -## AI Test Generation +--- -!!! note "Section Summary" - How CBA generates tests. Test patterns in context files. Review process for AI-generated tests. +## Parametrize + +```python +@pytest.mark.parametrize("slug,valid", [ + ("valid-slug", True), + ("UPPERCASE", False), + ("-starts-hyphen", False), +]) +def test_slug_validation(slug, valid): + if valid: + assert validate_slug(slug) == slug + else: + with pytest.raises(ValueError): + validate_slug(slug) +``` --- ## Related Patterns - [Layered Architecture](layered-architecture.md) - What each test level covers -- [Autonomous Quality Enforcement](autonomous-quality-enforcement.md) - Running tests in CBA +- [Autonomous Quality Enforcement](autonomous-quality-enforcement.md) - Running tests in CI