This project lets you deploy and visualize research demos in Docker containers, with a modern web interface.
- Docker & Docker Compose
- A webcam (V4L2 compatibility is a plus)
- NVIDIA GPU + drivers + nvidia-docker for CUDA acceleration (optional, but recommended for best performance)
- Clone the repo and go to the `vision-hub` directory.
- Copy `.env.example` to `.env` and adjust variables as needed.
- Launch the stack:

  ```
  docker compose up --build
  ```
- Access the interfaces:
  - Astro frontend: http://localhost:4321
  - Web API: http://localhost:8080
  - RTSP: rtsp://localhost:8554/cam
See .env.example for the full list. Key variables:
- `CAMERA_DEVICE`: Path to the local camera (e.g. `/dev/video0` or `/dev/v4l/by-id/...`)
- `RTSP_URL`: RTSP stream URL to use (default: `rtsp://mediamtx:8554/cam`)
- `MODEL`: YOLO model to use (e.g. `yolov8n.pt`)
- `IMG_SIZE`: Input resolution for inference (e.g. `640`)
- `JPEG_QUALITY`: JPEG quality for the MJPEG stream (e.g. `80`)
- `FORCE_CPU`: Set to `1` to force CPU usage even if a GPU is available
- `CONF`, `IOU`, `MAX_DET`: Detection thresholds
- Every variable has a sensible default; the values in `.env.example` are the ones I use on my own computer.
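A minimal sketch of how the app might read these variables with fallback defaults (the helper name and default values are illustrative, not the repo's actual code; real defaults live in `.env.example`):

```python
import os

def load_settings(env=os.environ):
    """Read the stack's environment variables, falling back to defaults."""
    return {
        "camera_device": env.get("CAMERA_DEVICE", "/dev/video0"),
        "rtsp_url": env.get("RTSP_URL", "rtsp://mediamtx:8554/cam"),
        "model": env.get("MODEL", "yolov8n.pt"),
        "img_size": int(env.get("IMG_SIZE", "640")),
        "jpeg_quality": int(env.get("JPEG_QUALITY", "80")),
        "force_cpu": env.get("FORCE_CPU", "0") == "1",
        "conf": float(env.get("CONF", "0.25")),      # threshold values assumed
        "iou": float(env.get("IOU", "0.45")),
        "max_det": int(env.get("MAX_DET", "300")),
    }

settings = load_settings({})  # empty env -> all defaults
print(settings["rtsp_url"])
```

Parsing everything in one place keeps type conversions (`int`, `float`, the `FORCE_CPU` flag) out of the inference loop.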
- For GPU acceleration, ensure Docker is configured with the NVIDIA runtime (`--gpus all`, or `devices: - nvidia.com/gpu=all` in Compose).
- Startup logs will indicate whether the GPU or CPU is used.
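The `FORCE_CPU` fallback logic could look like the following sketch (assumed logic, not the repo's exact code; it degrades to CPU when PyTorch or CUDA is unavailable):

```python
import os

def pick_device():
    """Choose the inference device, honoring FORCE_CPU."""
    if os.environ.get("FORCE_CPU", "0") == "1":
        return "cpu"
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda:0"
    except ImportError:
        pass
    return "cpu"

os.environ["FORCE_CPU"] = "1"
device = pick_device()
print(device)  # cpu
```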
Add your own models or demos by creating a dedicated folder and adapting the Dockerfile + app.py.
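As a starting point, a new demo folder could carry a Dockerfile along these lines (base image, file names, and layout are illustrative, not the repo's actual setup):

```dockerfile
# Illustrative sketch -- adapt to the demo's actual dependencies.
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .
# The stack's env vars (RTSP_URL, MODEL, ...) are injected via docker compose.
CMD ["python", "app.py"]
```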
For questions or contributions, ask me directly at tom.burellier@associated.ltu.se