A powerful AI-powered application for disaster mapping and image segmentation using Facebook's Segment Anything Model (SAM). This project provides both API endpoints and video processing capabilities for automated object detection and segmentation in disaster scenarios.
- Real-time Image Segmentation: Process images using the state-of-the-art SAM model
- Video Processing: Batch process video frames for disaster mapping
- REST API: FastAPI-based endpoints for easy integration
- Web Interface: User-friendly interface for image upload and visualization
- Multi-device Support: Automatic GPU/CPU detection and utilization
- Flexible Output: Returns both processed images and segmentation metadata
- Python 3.8+
- CUDA-compatible GPU (optional, but recommended)
- 8GB+ RAM
- 3GB+ free disk space for model weights (the ViT-H checkpoint alone is ~2.6GB)
Choose one of the following installation methods:
- Docker and Docker Compose installed
- NVIDIA Docker runtime (for GPU support)
```bash
# Clone the repository
git clone https://github.com/Shellsight/disaster-mapping-segmentation.git
cd disaster-mapping-segmentation

# Build and run with Docker Compose
docker-compose up --build
```

The API will be available at http://localhost:8080.
```bash
# Build the Docker image
docker build -t disaster-mapping-segmentation .

# Run the container (with GPU support)
docker run --gpus all -p 8080:8080 \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/app/static:/app/app/static \
  disaster-mapping-segmentation

# Run without GPU (CPU only)
docker run -p 8080:8080 \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/app/static:/app/app/static \
  disaster-mapping-segmentation
```

For a local installation, clone the repository and set up a virtual environment:

```bash
git clone https://github.com/Shellsight/disaster-mapping-segmentation.git
cd disaster-mapping-segmentation

python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

pip install -r requirements.txt
```

The application expects the SAM model weights to be placed in `app/models/`. Download the ViT-H SAM model:
```bash
mkdir -p app/models
cd app/models
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
cd ../..

mkdir -p app/static app/templates data
```

Create a `.env` file in the root directory for custom configuration:

```env
# .env
APP_NAME="AI Disaster Mapping API"
DEBUG=True
MODEL_PATH="app/models/sam_vit_h_4b8939.pth"
MODEL_TYPE="vit_h"
```

To run with Docker:

```bash
# Quick start with Docker Compose
docker-compose up --build

# Or build and run manually
docker build -t disaster-mapping-segmentation .
docker run --gpus all -p 8080:8080 disaster-mapping-segmentation
```

To run locally:

```bash
# Activate the virtual environment
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Start the API server
uvicorn app.main:app --host 0.0.0.0 --port 8080 --reload
```

The API will be available at http://localhost:8080.
Once the server is running, access the interactive API documentation:
- Swagger UI: http://localhost:8080/docs
- ReDoc: http://localhost:8080/redoc
Check that the service is up:

```bash
curl http://localhost:8080/health
```

Segment an image from the command line:

```bash
curl -X POST "http://localhost:8080/segment" \
  -H "accept: application/json" \
  -H "Content-Type: multipart/form-data" \
  -F "image=@path/to/your/image.jpg"
```

Or from Python:

```python
import requests

url = "http://localhost:8080/segment"
files = {"image": open("path/to/your/image.jpg", "rb")}
response = requests.post(url, files=files)

if response.status_code == 200:
    # Save the processed image
    with open("output.png", "wb") as f:
        f.write(response.content)

    # Read metadata from the response headers
    total_objects = response.headers.get("X-Total-Objects")
    processing_time = response.headers.get("X-Processing-Time")
    print(f"Found {total_objects} objects in {processing_time}s")
```

For batch processing of video files:
```bash
python run.py
```

This script will:
- Process frames from `demo.mp4`
- Send each frame to the segmentation API
- Generate an output video with segmented objects
- Save the result to `app/static/processed_video.mp4`
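If you want to adapt the pipeline, the core loop of such a script looks roughly like the sketch below. This is illustrative only, not the actual contents of `run.py`; it assumes the API is already running locally and that `opencv-python`, `numpy`, and `requests` are installed:

```python
import cv2
import numpy as np
import requests

API_URL = "http://localhost:8080/segment"  # assumes the API server is running

cap = cv2.VideoCapture("demo.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back to 30 fps if unknown
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Encode the frame as JPEG and send it to the segmentation endpoint
    _, buf = cv2.imencode(".jpg", frame)
    resp = requests.post(API_URL, files={"image": ("frame.jpg", buf.tobytes(), "image/jpeg")})
    resp.raise_for_status()

    # Decode the returned image (with masks drawn) back into a frame
    segmented = cv2.imdecode(np.frombuffer(resp.content, np.uint8), cv2.IMREAD_COLOR)

    # Create the writer lazily, once the output frame size is known
    if writer is None:
        h, w = segmented.shape[:2]
        writer = cv2.VideoWriter("app/static/processed_video.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    writer.write(segmented)

cap.release()
if writer is not None:
    writer.release()
```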
Navigate to http://localhost:8080 in your browser to access the web interface for uploading and processing images.
The main endpoint, `POST /segment`, segments objects in an uploaded image using SAM.
Request:
- Method: `POST`
- Content-Type: `multipart/form-data`
- Body: Image file

Response:
- Content: Processed image with colored segmentation masks
- Headers:
  - `X-Total-Objects`: Number of detected objects
  - `X-Processing-Time`: Processing time in seconds
  - `X-Segmentation-Results`: JSON string with detailed results
Example Response Headers:
```
X-Total-Objects: 15
X-Processing-Time: 2.34
X-Segmentation-Results: {"detected_objects": [], "processing_time": 2.34, "total_objects": 15}
```
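For orientation, here is a minimal sketch of how an endpoint with this contract can be implemented in FastAPI. The `run_sam` helper is a hypothetical stand-in for the project's actual segmentation service in `app/services/segment.py`:

```python
import json
import time

from fastapi import FastAPI, File, UploadFile
from fastapi.responses import Response

app = FastAPI()

def run_sam(image_bytes):
    """Hypothetical stand-in for app/services/segment.py.
    Assume it returns (png_bytes, detected_objects)."""
    raise NotImplementedError

@app.post("/segment")
async def segment(image: UploadFile = File(...)):
    start = time.time()
    image_bytes = await image.read()

    png_bytes, detected_objects = run_sam(image_bytes)

    elapsed = round(time.time() - start, 2)
    results = {
        "detected_objects": detected_objects,
        "processing_time": elapsed,
        "total_objects": len(detected_objects),
    }
    # The rendered image goes in the body; metadata travels in custom headers
    return Response(
        content=png_bytes,
        media_type="image/png",
        headers={
            "X-Total-Objects": str(len(detected_objects)),
            "X-Processing-Time": str(elapsed),
            "X-Segmentation-Results": json.dumps(results),
        },
    )
```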
`GET /health` is the health check endpoint.
Response:
```json
{
  "status": "ok",
  "message": "AI Disaster Mapping API is running"
}
```

Project structure:

```
disaster-mapping-segmentation/
├── app/
│   ├── api/
│   │   └── segment.py        # API endpoints
│   ├── core/
│   │   ├── config.py         # Configuration settings
│   │   └── logger.py         # Logging configuration
│   ├── models/               # SAM model weights
│   ├── services/
│   │   └── segment.py        # Core segmentation logic
│   ├── static/               # Static files
│   ├── templates/            # HTML templates
│   └── main.py               # FastAPI application
├── data/                     # Data directory
├── notebooks/                # Jupyter notebooks
├── requirements.txt          # Python dependencies
├── run.py                    # Video processing script
└── README.md                 # This file
```
The application uses Pydantic settings for configuration. Key settings include:
- `MODEL_PATH`: Path to SAM model weights
- `MODEL_TYPE`: SAM model variant (`vit_h`, `vit_l`, `vit_b`)
- `DEVICE`: Computing device (`cuda` or `cpu`)
- `DEBUG`: Enable debug mode
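As an illustration, a settings class along these lines (written in the pydantic-settings 2.x idiom; the actual `app/core/config.py` may differ) would read values from environment variables and the `.env` file:

```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    # Every field can be overridden via environment variables or .env
    model_config = SettingsConfigDict(env_file=".env")

    APP_NAME: str = "AI Disaster Mapping API"
    DEBUG: bool = False
    MODEL_PATH: str = "app/models/sam_vit_h_4b8939.pth"
    MODEL_TYPE: str = "vit_h"  # vit_h, vit_l, or vit_b
    DEVICE: str = "cuda"       # or "cpu"

settings = Settings()
```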
- GPU Acceleration: Ensure CUDA is available for faster processing
- Model Selection: Use `vit_h` for best quality, `vit_b` for faster processing
- Image Preprocessing: Resize large images before processing (see the sketch after this list)
- Batch Processing: Use the video processing script for multiple images
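For the image preprocessing tip, a minimal sketch using OpenCV (the function name and the 1024-pixel target are illustrative; SAM's image encoder itself works at a 1024-pixel longest side, so much larger inputs mostly waste time):

```python
import cv2

def resize_for_segmentation(path, max_side=1024):
    """Downscale an image so its longest side is at most max_side pixels."""
    image = cv2.imread(path)
    h, w = image.shape[:2]
    scale = max_side / max(h, w)
    if scale < 1.0:  # never upscale, only shrink large images
        image = cv2.resize(image, (int(w * scale), int(h * scale)),
                           interpolation=cv2.INTER_AREA)
    return image
```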
Common development commands:

```bash
# Run tests
pytest

# Format code
black .
isort .

# Type checking
mypy app/
```

The Docker setup provides:
- Multi-stage build: Optimized for production deployment
- CUDA support: GPU acceleration for faster inference
- Health checks: Built-in container health monitoring
- Volume mounts: Persistent data and output storage
- Environment configuration: Easy customization via environment variables
```bash
# Build the image
docker build -t disaster-mapping-segmentation .

# Run with GPU support (requires nvidia-docker)
docker run --gpus all -p 8080:8080 \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/app/static:/app/app/static \
  disaster-mapping-segmentation

# Run CPU-only
docker run -p 8080:8080 \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/app/static:/app/app/static \
  disaster-mapping-segmentation

# View logs
docker logs disaster-mapping-segmentation

# Access container shell
docker exec -it disaster-mapping-segmentation /bin/bash
```

The `docker-compose.yml` includes:
- API Service: Main application with GPU support
- Volume Mounts: For data persistence and output access
- Health Checks: Automatic service monitoring
- Restart Policy: Automatic container restart on failure
Configure the Docker container using environment variables:
```bash
# docker-compose.yml or docker run -e
APP_NAME="AI Disaster Mapping API"
DEBUG=true
MODEL_PATH="app/models/sam_vit_h_4b8939.pth"
MODEL_TYPE="vit_h"
```

For GPU acceleration:
- Install the NVIDIA Docker runtime
- Ensure a CUDA-compatible GPU is available
- Use the `--gpus all` flag or the Docker Compose GPU configuration
This project uses Facebook's Segment Anything Model (SAM):
- Model: ViT-H (Vision Transformer - Huge)
- Parameters: 630M
- Input: RGB images of any size
- Output: Instance segmentation masks
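Loading and running the model with the `segment-anything` package looks roughly like this. It is a sketch of the typical usage pattern; the project's wrapper in `app/services/segment.py` handles this internally, and the image path is a placeholder:

```python
import cv2
import torch
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the ViT-H checkpoint referenced in the configuration
sam = sam_model_registry["vit_h"](checkpoint="app/models/sam_vit_h_4b8939.pth")
sam.to(device=device)

mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects RGB input, while OpenCV loads images as BGR
image = cv2.cvtColor(cv2.imread("path/to/your/image.jpg"), cv2.COLOR_BGR2RGB)

# Each mask is a dict with "segmentation", "area", "bbox", "stability_score", ...
masks = mask_generator.generate(image)
print(f"Found {len(masks)} objects")
```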
To contribute:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Facebook Research for the Segment Anything Model
- FastAPI for the web framework
- OpenCV for image processing capabilities
For issues and questions:
- Check the Issues page
- Review the API documentation at `/docs`
- Ensure all dependencies are correctly installed
Note: Make sure to download the SAM model weights before running the application. The model file is approximately 2.6GB and requires a stable internet connection for download.