This repository defines the local development workspace, orchestration layer, and deployment entry point for the OmniBioAI ecosystem.
OmniBioAI is an open, modular, AI-powered bioinformatics platform designed to run consistently across local machines, on-prem servers, HPC environments, and cloud infrastructure, with no mandatory cloud dependencies.
This repository does not embed core application logic. Instead, it acts as a control plane and workspace coordinator, bringing together multiple independently versioned OmniBioAI components into a single runnable ecosystem.
This workspace provides:
- A single-root project layout for OmniBioAI development
- Docker Compose–based orchestration for the full stack
- Shared configuration, data, and working directories
- Optional offline / air-gapped deployment support
- A foundation for HPC (Apptainer/Singularity) and Kubernetes deployments
Think of this repository as the “assembly and runtime layer” for OmniBioAI.
```
Desktop/machine/
├── omnibioai/                    # OmniBioAI Workbench (Django core platform)
├── omnibioai-tool-exec/          # Tool Execution Service (TES)
├── omnibioai-toolserver/         # FastAPI ToolServer (external tools, APIs)
├── omnibioai-lims/               # OmniBioAI LIMS (data & sample management)
├── omnibioai-rag/                # RAG & LLM-based intelligence services
├── omnibioai_sdk/                # Python SDK (thin client for APIs)
├── omnibioai-workflow-bundles/   # Workflow bundles (WDL / Nextflow / Snakemake)
│
├── deploy/                       # Deployment definitions & packaging
│   ├── compose/                  # Docker Compose (canonical runtime)
│   ├── scripts/                  # Bootstrap, bundling, install helpers
│   ├── bundle/                   # Offline / air-gapped release artifacts
│   ├── hpc/                      # Apptainer / Singularity assets
│   └── k8s/                      # Kubernetes / Helm (in progress)
│
├── data/                         # Persistent user & project data
├── work/                         # Workflow execution workspace
├── tmpdata/                      # Temporary / scratch data
├── out/                          # Generated outputs
│
├── db-init/                      # Database initialization dumps
│   ├── omnibioai.sql
│   └── limsdb.sql
│
├── utils/                        # Developer utilities
├── images/                       # Architecture & documentation images
├── aws-tools/                    # Optional cloud & infra experiments
├── backup/                       # Archived / experimental material
│
├── docker-compose.yml            # Full local OmniBioAI stack
├── .env.example                  # Environment variable template
└── README.md
```
The ecosystem follows a service-oriented, plugin-first architecture, with clear separation between:
- Control plane (UI, registries, APIs, metadata)
- Compute plane (workflow runners, tool execution, HPC adapters)
- Data plane (objects, artifacts, workflow outputs)
- AI plane (RAG, LLM-backed reasoning, agents)
Each OmniBioAI component is developed and versioned independently.
| Component | Repository |
|---|---|
| OmniBioAI Workbench | https://github.com/man4ish/omnibioai |
| Tool Execution Service (TES) | https://github.com/man4ish/omnibioai-tool-exec |
| ToolServer | https://github.com/man4ish/omnibioai-toolserver |
| OmniBioAI LIMS | https://github.com/man4ish/omnibioai-lims |
| RAG Service | https://github.com/man4ish/omnibioai-rag |
| Workflow Bundles | https://github.com/man4ish/omnibioai-workflow-bundles |
| OmniBioAI SDK | https://github.com/man4ish/omnibioai_sdk |
This repository orchestrates these projects; it does not vendor them.
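Since the component repositories are cloned alongside this one rather than vendored, assembling the workspace amounts to cloning each repo listed above into the workspace root. A hypothetical helper sketch (repo names taken from the table; the loop itself is illustrative, not a script shipped by the project):

```shell
# Clone every OmniBioAI component into the current workspace root.
for repo in omnibioai omnibioai-tool-exec omnibioai-toolserver \
            omnibioai-lims omnibioai-rag omnibioai-workflow-bundles omnibioai_sdk; do
    git clone "https://github.com/man4ish/${repo}.git"
done
```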
Prerequisites:
- Docker Engine / Docker Desktop
- Docker Compose v2+
```shell
cp .env.example .env
docker compose up -d
```

Optional (LLM backend):

```shell
docker compose exec ollama ollama pull llama3:8b
```

All services are configurable via `.env`.
No absolute host paths are required.
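As a sketch of what the environment template might contain (variable names here are illustrative, not the actual contents of `.env.example`), note that paths stay relative to the workspace root:

```env
# Hypothetical excerpt of .env.example — names are assumptions.
WORKBENCH_PORT=8000
TES_PORT=8080
MYSQL_ROOT_PASSWORD=change-me
# Relative paths only; no absolute host paths required.
DATA_DIR=./data
WORK_DIR=./work
```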
OmniBioAI supports fully offline deployment using prebuilt bundles.
An offline bundle can include:
- Docker images (`docker save`)
- Pre-seeded volumes (MySQL, object store, Ollama models)
- Compose configuration and installer scripts
This enables:
- Deployment on secure networks
- HPC head nodes
- Restricted enterprise environments
See deploy/ for details.
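Assuming images are exported with `docker save` and re-imported with `docker load` (image names and paths below are illustrative, not the project's actual bundling script), an offline bundle pass might look like:

```shell
# Build an offline bundle on a connected machine.
set -euo pipefail
BUNDLE=omnibioai-bundle
mkdir -p "$BUNDLE/images"

# 1. Export the service images referenced by the compose file.
docker save -o "$BUNDLE/images/omnibioai.tar" \
    omnibioai-workbench:latest omnibioai-tes:latest

# 2. Include compose configuration and the environment template.
cp docker-compose.yml .env.example "$BUNDLE/"
tar -czf "$BUNDLE.tar.gz" "$BUNDLE"

# On the air-gapped target:
#   tar -xzf omnibioai-bundle.tar.gz
#   docker load -i omnibioai-bundle/images/omnibioai.tar
#   docker compose up -d
```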
HPC environments typically prohibit Docker daemons. OmniBioAI supports HPC by running compute services via Apptainer/Singularity while keeping the control plane external.
Typical pattern:
- Control plane: local server, VM, or cloud
- Compute plane: HPC nodes (tool execution, workflow runners)
OCI images are converted to .sif images and run without root privileges.
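The OCI-to-`.sif` conversion can be done with Apptainer's `build` subcommand; the image reference and bind path below are assumptions for illustration:

```shell
# Convert an OCI image to a SIF file (no root required on typical HPC setups).
apptainer build omnibioai-tes.sif docker://ghcr.io/man4ish/omnibioai-tool-exec:latest

# Run the compute service unprivileged, binding the shared work directory.
apptainer run --bind "$PWD/work:/work" omnibioai-tes.sif
```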
- All services are OCI-compatible
- Helm charts and manifests are under development
- Focus areas:
  - Stateful services (DB, object store)
  - Workflow execution scaling
  - GPU-aware AI services
omnibioai_sdk/ provides a thin Python client for OmniBioAI APIs, intended for:
- Jupyter notebooks
- Analysis scripts
- Workflow tooling
- Programmatic access
The SDK does not embed backend logic and is published independently on PyPI.
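A thin client of this kind typically just builds URLs and attaches auth headers, delegating all logic to the backend APIs. The sketch below illustrates that shape; the class name, endpoint layout, and token scheme are assumptions, not the actual `omnibioai_sdk` API:

```python
import json
import urllib.request


class OmniBioAIClient:
    """Minimal thin-client sketch: URL building and HTTP only, no backend logic."""

    def __init__(self, base_url="http://localhost:8000", token=None):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _url(self, path):
        # Hypothetical endpoint layout: everything under /api/.
        return f"{self.base_url}/api/{path.lstrip('/')}"

    def _headers(self):
        headers = {"Accept": "application/json"}
        if self.token:
            headers["Authorization"] = f"Bearer {self.token}"
        return headers

    def get(self, path):
        # Delegate entirely to the server; the client holds no domain logic.
        req = urllib.request.Request(self._url(path), headers=self._headers())
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
```

Usage from a notebook would then be `OmniBioAIClient(token="...").get("jobs")`, keeping analysis code decoupled from backend internals.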
The workspace follows these design principles:
- Single workspace root
- Relative paths only
- No hardcoded absolute paths
- Clear service boundaries
- Restart-safe orchestration
- Docker ↔ non-Docker parity
- Portable across environments
These principles allow the same ecosystem to run on:
- Laptops
- Servers
- HPC clusters
- Cloud platforms
Default service ports:

| Service | Port | Description |
|---|---|---|
| OmniBioAI Workbench | 8000 | UI, plugins, agents |
| Tool Execution Service | 8080 | Workflow & tool execution |
| ToolServer | 9090 | External tool APIs |
| OmniBioAI LIMS | 7000 | LIMS integration |
| MySQL | 3306 | Metadata databases |
| Redis | 6379 | Celery, caching |
All ports are configurable via .env.
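Port configurability is typically achieved with environment-variable interpolation in the compose file, using the default values from the table when `.env` leaves them unset. A hypothetical excerpt (service and variable names are assumptions, not the actual `docker-compose.yml`):

```yaml
services:
  workbench:
    ports:
      - "${WORKBENCH_PORT:-8000}:8000"   # host port overridable via .env
  tes:
    ports:
      - "${TES_PORT:-8080}:8080"
```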
- ✅ Clean, modular workspace
- ✅ Multi-service Docker orchestration
- ✅ Offline-capable architecture
- ✅ HPC-friendly execution model
- ✅ Production-oriented structure
This repository represents the local control plane and deployment foundation of the OmniBioAI ecosystem.
