This repository contains a FastAPI backend (server.py + autoparts.py) and a Vite + React + Tailwind web UI (ui/). The goal is to take one or more .py files and transform them into a modular Python package, generate __main__.py, and produce a downloadable .zip. Optionally, it can use Ollama or OpenAI to suggest a package name and attempt small, safe edits.
- Planning: Analyze input file(s) and propose a module plan (`POST /plan`, `POST /plan_multi`).
- Packaging: Reorganize code into a Python package and return a ZIP (`POST /build`, `POST /build_multi`).
- AI name suggestion: Suggest a package name via Ollama or OpenAI (optional).
- Compaction controls: Merge tiny modules, set a target module count, tune line thresholds, and more.
- Web UI: Upload files, preview the plan, build, and download the ZIP — all in the browser.
- Input language: Python (`.py`). The packager currently accepts Python sources only and outputs a structured Python package (with `__init__.py` and `__main__.py`).
- Backend: Python 3.10+, FastAPI, Uvicorn, `python-multipart`; optional Black for formatting.
- Frontend: TypeScript + React + Vite + Tailwind CSS.
- HTTP/CLI: Examples use cURL, but any HTTP client works.
- AI providers (optional): Ollama (local) and OpenAI (cloud) for name suggestions and small, safe edits.
- Python ≥ 3.10
- Node.js ≥ 18 (compatible with Vite 5)
- (Optional) Ollama local server or an OpenAI API key
```bash
git clone <REPO_URL>.git
cd <REPO_NAME>
python -m venv .venv
# Windows
.\.venv\Scripts\activate
# macOS / Linux
source .venv/bin/activate
pip install -r requirements.txt
```

Create a `.env` file (you can copy from `.env.example`):
```env
# .env
# Default AI settings for the CLI
AI_PROVIDER=openai   # or 'ollama'
AI_MODEL=gpt-4o-mini
AI_BASE_URL=https://api.openai.com/v1

# OpenAI API key
OPENAI_API_KEY=sk-...

# Ollama defaults for server endpoints
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3.1
```
Both Ollama and OpenAI are optional. If AI naming fails or is disabled, a local naming fallback is used.
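To illustrate how these variables fit together, here is a minimal sketch (standard library only) of resolving them with defaults. The helper and its fallback defaults are illustrative assumptions; the actual resolution logic lives in `server.py` / `autoparts.py` and may differ:

```python
import os


def ai_settings():
    """Resolve AI provider settings from the environment.

    Hypothetical helper: variable names match the sample .env above,
    but the defaults chosen here are illustrative.
    """
    provider = os.getenv("AI_PROVIDER", "ollama")
    if provider == "openai":
        return {
            "provider": "openai",
            "model": os.getenv("AI_MODEL", "gpt-4o-mini"),
            "base_url": os.getenv("AI_BASE_URL", "https://api.openai.com/v1"),
            "api_key": os.getenv("OPENAI_API_KEY"),
        }
    # Anything else falls back to a local Ollama server
    return {
        "provider": "ollama",
        "model": os.getenv("OLLAMA_MODEL", "llama3.1"),
        "base_url": os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
    }
```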
```bash
uvicorn server:app --reload --port 8000
```

- Health check: `GET http://localhost:8000/health`
- Root: `GET http://localhost:8000/` (lists available endpoints)
```bash
cd ui
npm install
npm run dev
```

- The default API base is `http://localhost:8000` (see `API_BASE` in `ui/src/App.tsx`).
- Production: `npm run build` creates `ui/dist`, which you can serve behind any static server or reverse proxy (nginx, etc.).
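As an illustration only, an nginx site serving `ui/dist` and forwarding the API to the backend might look like the sketch below. The root path and the endpoint list are assumptions, not part of the repo, and the UI's `API_BASE` must point wherever the API is actually reachable:

```nginx
server {
    listen 80;

    # Serve the built UI (example path; adjust to your deployment)
    root /var/www/app/ui/dist;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # Forward API endpoints to the FastAPI backend on port 8000
    location ~ ^/(plan|plan_multi|build|build_multi|health)$ {
        proxy_pass http://127.0.0.1:8000;
    }
}
```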
### `POST /plan`

Analyze a single Python file and return a proposed module structure.

- Form field: `file` (a single `.py`)

cURL:

```bash
curl -X POST http://localhost:8000/plan -F "file=@example.py"
```

### `POST /plan_multi`

Create a plan for multiple files.
- Form field: `files` (multiple `.py`)

```bash
curl -X POST http://localhost:8000/plan_multi -F "files=@a.py" -F "files=@b.py"
```

### `POST /build`

Package a single file and return a ZIP.
- Form fields:
  - `file`: `.py` file (required)
  - `package_name`: custom name (optional; providing it disables AI name suggestion, as does `ai_name=false`)
  - `compact`: compaction level `0`..`3` (default `0`)
  - `pack_small_lines`, `max_modules`, `min_module_lines`, `target_modules`: advanced compaction knobs
  - `ai_name`: `true`/`false` (default `true`)
  - `ollama_base_url`, `ollama_model`: (optional) Ollama settings for AI name suggestion

cURL:

```bash
curl -X POST http://localhost:8000/build -F "file=@example.py" -F "compact=2" -o output.zip
```

### `POST /build_multi`

Package multiple files into one package (returns ZIP).
- Form fields: `files` (multiple), plus the same compaction/AI options

```bash
curl -X POST http://localhost:8000/build_multi -F "files=@a.py" -F "files=@b.py" -F "compact=1" -o output.zip
```

When requested, the build endpoints also attempt limited, safe edits to the source using AI (Ollama/OpenAI).
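The cURL calls above translate directly to any HTTP client. As a sketch, here is one way to call `POST /build` from Python using only the standard library; the `encode_multipart` helper is hypothetical (not part of this repo), and error handling is omitted:

```python
import io
import uuid
import urllib.request


def encode_multipart(fields, files):
    """Encode form fields and file uploads as a multipart/form-data body.

    fields: dict of name -> str value
    files:  dict of name -> (filename, bytes)
    Returns (body_bytes, content_type_header).
    """
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, value in fields.items():
        buf.write(f"--{boundary}\r\n".encode())
        buf.write(f'Content-Disposition: form-data; name="{name}"\r\n\r\n'.encode())
        buf.write(f"{value}\r\n".encode())
    for name, (filename, data) in files.items():
        buf.write(f"--{boundary}\r\n".encode())
        buf.write(
            f'Content-Disposition: form-data; name="{name}"; '
            f'filename="{filename}"\r\n'.encode()
        )
        buf.write(b"Content-Type: application/octet-stream\r\n\r\n")
        buf.write(data + b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return buf.getvalue(), f"multipart/form-data; boundary={boundary}"


def build_package(source_path, out_zip="output.zip", compact=2):
    """POST a .py file to /build and save the returned ZIP."""
    with open(source_path, "rb") as f:
        body, content_type = encode_multipart(
            {"compact": str(compact)},
            {"file": (source_path, f.read())},
        )
    req = urllib.request.Request(
        "http://localhost:8000/build",
        data=body,
        headers={"Content-Type": content_type},
    )
    with urllib.request.urlopen(req) as resp, open(out_zip, "wb") as out:
        out.write(resp.read())
```

With a running backend, `build_package("example.py")` should behave like the `curl` example above.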
```text
autoparts.py    # Packaging/analysis logic (incl. AI name suggestion)
server.py       # FastAPI app and endpoints
ui/             # React + Vite + Tailwind web UI
  package.json
  src/
  ...
```
Contributions are welcome! See CONTRIBUTING.md for guidelines. A list of people who have contributed is available in CONTRIBUTORS.md.

