A comprehensive video analysis system that combines blur processing with 3D reconstruction analysis. VAPOR integrates deterministic blur generation, multiple deblurring models, comprehensive quality metrics calculation, and Structure from Motion (SfM) reconstruction to provide quantitative analysis of how blur affects both 2D image quality and 3D reconstruction capabilities.
The VAPOR system has been fully organized with comprehensive functionality:
- ✅ Complete blur processing pipeline with 5 blur types and 6 deblurring models
- ✅ Comprehensive metrics system with full-reference, no-reference, and sharpness metrics
- ✅ 3D reconstruction integration with the maploc SfM pipeline
- ✅ Organized directory structure with systematic data organization
- ✅ Unified pipelines for complete analysis workflows
- ✅ Comprehensive documentation and setup guides
# Run everything for a video (pat3.mp4)
python vapor_complete_pipeline.py --video pat3.mp4
# Run with selective steps
python vapor_complete_pipeline.py --video pat3.mp4 --skip-blur --skip-reconstruction

# 1. Generate blurred frames and videos (tested with pat3.mp4, stride 60)
python blur/blur_generator.py --video pat3.mp4 --stride 60
# 2. Calculate comprehensive quality metrics with new structure
python blur/metrics/metrics_calculator.py --video pat3.mp4
# 3. Run 3D reconstruction analysis using maploc integration
python reconstruction/reconstruction_pipeline.py --video pat3.mp4

VAPOR now organizes data systematically with comprehensive separation of concerns:
data/
├── videos/
│   ├── original/              # Input videos (pat3.mp4, etc.)
│   └── blurred/               # Generated blurred videos
├── frames/
│   ├── original/{video}/      # Original extracted frames
│   ├── blurred/{video}/       # Blurred frame variations (gaussian_low, motion_blur_high, etc.)
│   └── deblurred/{video}/     # Deblurred frames (when available)
├── metrics/{video}/
│   ├── original/              # Metrics for original frames
│   ├── blurred/               # Metrics for blurred frames
│   └── deblurred/             # Metrics for deblurred frames
├── point_clouds/{video}/
│   ├── original/              # 3D reconstructions from original frames
│   ├── blurred/               # 3D reconstructions from blurred frames
│   └── deblurred/             # 3D reconstructions from deblurred frames
├── reports/                   # Analysis reports
└── logs/                      # Pipeline execution logs
# Clone the repository
git clone <repository-url>
cd VAPOR
# Install basic dependencies
pip install -r requirements.txt

# Setup maploc integration for 3D reconstruction
python reconstruction/setup_vapor_maploc.py

# Setup all 6 deblurring models with conda environments
python blur/fx_02_deblur/setup_repositories.py --all
# See docs/Setup_Guides/Deblur_Models_Setup.md for detailed instructions

# Run complete analysis pipeline on pat3.mp4
python vapor_complete_pipeline.py --video pat3.mp4
# Skip blur generation if frames already exist
python vapor_complete_pipeline.py --video pat3.mp4 --skip-blur
# Run only metrics and reconstruction
python vapor_complete_pipeline.py --video pat3.mp4 --skip-blur --skip-metrics

# Generate blur effects (tested configuration)
python blur/blur_generator.py --video pat3.mp4 --stride 60
# Calculate comprehensive metrics
python blur/metrics/metrics_calculator.py --video pat3.mp4
# Run 3D reconstruction with specific algorithms
python reconstruction/reconstruction_pipeline.py --video pat3.mp4 --feature disk --matcher disk+lightglue
# Setup deblurring environments
python blur/fx_02_deblur/setup_repositories.py --method Restormer
# Run deblurring example (see deblur_example_usage.py for details)
python blur/fx_02_deblur/deblur_example_usage.py --input blurred_frames/ --output deblurred/ --method Restormer

# Use different feature detectors for 3D reconstruction
python reconstruction/reconstruction_pipeline.py --video pat3.mp4 --feature superpoint --matcher superglue
# Process with custom stride for faster processing
python blur/blur_generator.py --video pat3.mp4 --stride 120
# Setup specific deblurring method only
python blur/fx_02_deblur/setup_repositories.py --method DeblurGANv2

Deterministic Blur Generation (blur/fx_01_blur/):
- Gaussian Blur: Standard kernel-based blur with configurable sigma
- Motion Blur: Linear motion simulation with configurable angles
- Defocus Blur: Circular out-of-focus effects
- Haze Blur: Atmospheric scattering simulation
- Combined Blur: Sequential application of multiple effects
- Deterministic Kernels: Reproducible blur generation with fixed seeds
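
As a rough illustration of how deterministic kernels can work, the sketch below builds Gaussian and motion kernels with NumPy and OpenCV and derives their parameters from a fixed seed. This is a minimal, hypothetical example, not the actual implementation in blur/fx_01_blur/ (kernels/generator.py); function names and parameter ranges are placeholders.

```python
# Minimal, hypothetical sketch of deterministic blur kernels
# (not the actual blur/fx_01_blur/kernels/generator.py).
import cv2
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """2D Gaussian kernel from the outer product of two 1D kernels, normalized to sum to 1."""
    g = cv2.getGaussianKernel(size, sigma)      # column vector of shape (size, 1)
    kernel = g @ g.T
    return kernel / kernel.sum()

def motion_kernel(size: int, angle_deg: float) -> np.ndarray:
    """Linear motion-blur kernel: a line through the kernel center, rotated to the given angle."""
    kernel = np.zeros((size, size), dtype=np.float32)
    kernel[size // 2, :] = 1.0
    center = (size / 2 - 0.5, size / 2 - 0.5)
    rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rot, (size, size))
    return kernel / max(float(kernel.sum()), 1e-8)

def sample_kernels(seed: int = 42):
    """A fixed seed yields identical kernel parameters on every run (reproducibility)."""
    rng = np.random.default_rng(seed)
    sigma = float(rng.uniform(1.0, 3.0))        # placeholder parameter range
    angle = float(rng.uniform(0.0, 180.0))      # placeholder parameter range
    return gaussian_kernel(15, sigma), motion_kernel(15, angle)
```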
Advanced Deblurring (blur/fx_02_deblur/):
- 5 Integrated Models: DeblurGANv2, Restormer, Uformer, MPRNet, DPIR
- Example Usage: blur/fx_02_deblur/deblur_example_usage.py shows how to use deblurring methods
- Environment Management: Automatic conda environment setup and switching
- Batch Processing: Process entire directories with multiple methods
Multi-Level Metrics:
- No-Reference: BRISQUE, NIQE (works on any image)
- Full-Reference: SSIM, PSNR (compares to original)
- Sharpness Analysis: Laplacian variance, gradient magnitude, total variation
Organized Output:
- Frame-by-frame detailed analysis
- Summary statistics by frame type (original/blurred/deblurred)
- Cross-comparison between blur methods and intensities
Structure from Motion (SfM) with Maploc:
- Feature Detectors: DISK, SuperPoint, SIFT, ALIKED
- Feature Matchers: LightGlue, SuperGlue, NN-ratio
- Quality Metrics: Point count, reprojection error, track length
- COLMAP Backend: Professional-grade 3D reconstruction
Reconstruction Analysis:
- Compare 3D quality across frame types (original/blurred/deblurred)
- Quantify blur impact on reconstruction quality
- Export point clouds (.ply format) for visualization
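
To make the quality metrics above concrete, here is a minimal sketch that summarizes a COLMAP sparse model (image and point counts, mean reprojection error, mean track length) using pycolmap. The use of pycolmap and the example path are assumptions; this is not VAPOR's reconstruction_pipeline.py.

```python
# Minimal sketch of reconstruction quality metrics via pycolmap
# (an assumption about tooling; not VAPOR's reconstruction_pipeline.py).
import numpy as np
import pycolmap

def summarize_reconstruction(model_dir: str) -> dict:
    """Summarize a COLMAP sparse model: image/point counts, reprojection error, track length."""
    rec = pycolmap.Reconstruction(model_dir)
    errors = [p.error for p in rec.points3D.values()]
    tracks = [p.track.length() for p in rec.points3D.values()]
    return {
        "num_images": len(rec.images),
        "num_points3D": len(rec.points3D),
        "mean_reprojection_error_px": float(np.mean(errors)) if errors else float("nan"),
        "mean_track_length": float(np.mean(tracks)) if tracks else float("nan"),
    }

# Example (hypothetical model path):
# stats = summarize_reconstruction("data/point_clouds/pat3/original/sparse/0")
```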
Reproducible Experiments:
- Complete Tracking: All parameters, seeds, and configurations logged
- Performance Monitoring: CPU, memory, and timing metrics
- Environment Recording: System information and dependency versions
- Result Comparison: Tools for analyzing multiple experimental runs
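
For illustration, the following sketch shows the kind of record such tracking can produce; it is a hypothetical stand-in, not the actual experiment_logger.py or vapor_logger.py.

```python
# Illustrative experiment-tracking record (not the actual experiment_logger.py / vapor_logger.py).
import json
import platform
import time
from pathlib import Path

def log_run(out_dir: str, params: dict, seed: int) -> Path:
    """Write the parameters, seed, and environment info needed to reproduce a run."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "seed": seed,
        "params": params,
        "environment": {
            "python": platform.python_version(),
            "platform": platform.platform(),
        },
    }
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / f"run_{int(time.time())}.json"
    path.write_text(json.dumps(record, indent=2))
    return path

# Example (hypothetical parameters):
# log_run("data/logs", {"video": "pat3.mp4", "stride": 60, "blur": "gaussian_low"}, seed=42)
```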
Extract frames from video and apply systematic blur effects using deterministic kernels:
python blur/blur_generator.py --video pat3.mp4 --stride 60

- Tested with pat3.mp4 using stride 60 for efficient processing
- Generates 5 blur types × 2 intensities = 10 blur variations per frame
- Uses deterministic kernel generation for reproducible results
Calculate comprehensive quality metrics for all frame types using the new organized structure:
python blur/metrics/metrics_calculator.py --video pat3.mp4

- Processes original, blurred, and deblurred frames separately
- Saves detailed and summary metrics in organized directory structure
- Supports both no-reference and full-reference quality assessment
Run SfM reconstruction on original, blurred, and deblurred frames using maploc integration:
python reconstruction/reconstruction_pipeline.py --video pat3.mp4 --feature disk --matcher disk+lightglue

- Compares reconstruction quality across different blur conditions
- Exports point clouds and quality metrics for each frame type
- Quantifies how blur affects 3D reconstruction capabilities
Results are automatically organized and compared across:
- Image Quality: Sharpness degradation, perceptual quality (BRISQUE, SSIM, PSNR)
- 3D Reconstruction: Point density, geometric accuracy, feature matching quality
- Comparative Analysis: Optimal deblurring method identification for 3D applications
- Systematic organization by video → frame type → method hierarchy
- Clear separation of original, blurred, and deblurred data across all analysis types
- Comprehensive logging and experiment tracking for reproducibility
- Maploc Integration: Professional SfM reconstruction pipeline with COLMAP backend
- Quality Assessment: Quantitative 3D reconstruction analysis with multiple metrics
- Blur Impact Measurement: Direct measurement of blur effects on 3D reconstruction quality
- Single Command: Complete analysis from video to 3D reconstruction
- Flexible Execution: Skip individual steps as needed with command-line flags
- Resumable Processing: Continue interrupted analyses with existing data
- Deterministic Processing: Fixed seeds and reproducible kernel generation
- Comprehensive Logging: Complete experiment tracking with performance monitoring
Complete documentation is organized in the docs/ directory:
- 3D Reconstruction Integration: Complete guide to 3D analysis with maploc
- Directory Structure: Detailed file organization and navigation guide
- Deblur Models Setup: Complete setup guide for all 6 deblurring models
- blur/: Complete blur processing pipeline with deterministic effects
  - fx_01_blur/: Blur generation with deterministic kernel system (kernels/generator.py)
  - fx_02_deblur/: 6 integrated deblurring models with example usage (deblur_example_usage.py)
  - metrics/: Comprehensive quality assessment system (metrics_calculator.py)
  - runs/: Advanced experiment logging and tracking (experiment_logger.py, vapor_logger.py)
- reconstruction/: 3D reconstruction pipeline with maploc integration
- utils/: Shared utilities for video processing, ROI detection, and file management (core_utilities.py)
- data/: Organized data storage with systematic hierarchy by video/type/method

- vapor_complete_pipeline.py: ✅ Master controller for complete analysis - TESTED & WORKING
- blur/blur_generator.py: ✅ Blur generation pipeline - TESTED with pat3.mp4, stride 60
- blur/metrics/metrics_calculator.py: ✅ Metrics calculation with organized structure - COMPLETE
- reconstruction_pipeline.py: ✅ 3D reconstruction with maploc integration - COMPLETE
- setup_vapor_maploc.py: ✅ Setup script for maploc integration - COMPLETE
Deterministic Kernel Generation:
- Reproducible blur effects using fixed random seeds
- Multiple blur types: Gaussian, Motion, Defocus, Haze, Combined
- Configurable intensity levels with consistent parameters
- Complete metadata logging for experiment reproducibility
Enhanced Blur Engine:
- Scientific-grade blur application with proper normalization
- Advanced motion blur with configurable angles and lengths
- Atmospheric effects simulation for realistic degradation
- Batch processing with progress tracking
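
For intuition, kernel-based blur application reduces to a normalized 2D convolution; the sketch below applies a disk (defocus-style) kernel with OpenCV. It is illustrative only and not the engine's actual code.

```python
# Illustrative kernel-based blur application (not the VAPOR blur engine itself).
import cv2
import numpy as np

def defocus_kernel(radius: int) -> np.ndarray:
    """Circular (disk) kernel approximating out-of-focus blur."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x * x + y * y <= radius * radius).astype(np.float32)
    return disk / disk.sum()

def apply_blur(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Convolve with a kernel normalized to sum to 1, which preserves overall brightness."""
    kernel = kernel.astype(np.float32)
    kernel = kernel / max(float(kernel.sum()), 1e-8)
    return cv2.filter2D(image, -1, kernel)

# Example (hypothetical file name):
# frame = cv2.imread("frame_000000.png")
# blurred = apply_blur(frame, defocus_kernel(radius=5))
```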
Unified Calculator (calculator.py):
- Integrates all metrics types in single interface
- Organized output with detailed and summary statistics
- Support for comparative analysis across blur methods
Full-Reference Metrics (full_reference.py):
- SSIM (Structural Similarity Index) with multiple scales
- PSNR (Peak Signal-to-Noise Ratio) calculation
- MSE (Mean Squared Error) analysis
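
A minimal sketch of these full-reference comparisons using scikit-image (illustrative only; not necessarily how full_reference.py implements them):

```python
# Illustrative full-reference metrics via scikit-image (not necessarily full_reference.py's code).
import cv2
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity

def full_reference_metrics(reference_path: str, degraded_path: str) -> dict:
    """Compare a degraded frame against its original; higher SSIM/PSNR means closer to the original."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    deg = cv2.imread(degraded_path, cv2.IMREAD_GRAYSCALE)
    return {
        "ssim": float(structural_similarity(ref, deg, data_range=255)),
        "psnr": float(peak_signal_noise_ratio(ref, deg, data_range=255)),
        "mse": float(mean_squared_error(ref, deg)),
    }
```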
No-Reference Metrics (no_reference.py):
- BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator)
- NIQE (Natural Image Quality Evaluator)
- Advanced perceptual quality assessment
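
As one possible way to compute a no-reference score, the sketch below uses the piq package for BRISQUE; this is an assumption about tooling, not necessarily the backend used by no_reference.py.

```python
# Illustrative BRISQUE score via the piq package (an assumption; not necessarily no_reference.py's backend).
import cv2
import piq
import torch

def brisque_score(image_path: str) -> float:
    """Lower BRISQUE generally indicates better perceptual quality."""
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float() / 255.0  # 1 x 3 x H x W in [0, 1]
    return float(piq.brisque(x, data_range=1.0))
```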
Sharpness Metrics (sharpness.py):
- Laplacian variance (primary sharpness measure)
- Gradient magnitude analysis
- Total variation calculation
- Multi-scale sharpness assessment
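
A minimal sketch of these sharpness measures with OpenCV and NumPy (illustrative; not necessarily the exact formulas in sharpness.py):

```python
# Illustrative sharpness measures (not necessarily the exact formulas in sharpness.py).
import cv2
import numpy as np

def sharpness_metrics(image_path: str) -> dict:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return {
        "laplacian_variance": float(lap.var()),                      # higher = sharper
        "mean_gradient_magnitude": float(np.mean(np.hypot(gx, gy))),
        "total_variation": float(np.abs(np.diff(gray, axis=0)).sum()
                                 + np.abs(np.diff(gray, axis=1)).sum()),
    }
```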
Example Usage Script (deblur_example_usage.py):
- Shows how to use 6 different deblurring models
- Automatic conda environment switching
- Batch processing with progress tracking
- Performance comparison across methods
Supported Models:
- DeblurGANv2: GAN-based deblurring with adversarial training
- Restormer: Transformer-based restoration with multi-scale processing
- Uformer: U-Net with transformer blocks for efficient deblurring
- DPIR: Deep plug-and-play image restoration
- MPRNet: Multi-stage progressive image restoration network
Environment Management (setup_repositories.py):
- Automated repository cloning and environment setup
- Dependency management for each model
- Pre-trained model downloading assistance
- Environment documentation and verification
# Process pat3.mp4 with complete pipeline
python vapor_complete_pipeline.py --video pat3.mp4
# Output structure:
# data/
# ├── frames/pat3/{original,blurred,deblurred}/
# ├── metrics/pat3/{original,blurred,deblurred}/
# ├── point_clouds/pat3/{original,blurred,deblurred}/
# └── reports/pat3_complete_analysis_report.txt

# Only blur processing (skip 3D reconstruction)
python vapor_complete_pipeline.py --video pat3.mp4 --skip-reconstruction
# Only metrics and reconstruction (frames already exist)
python vapor_complete_pipeline.py --video pat3.mp4 --skip-blur

# 1. Generate blurred frames
python blur/blur_generator.py --video pat3.mp4 --stride 60
# 2. Calculate quality metrics
python blur/metrics/metrics_calculator.py --video pat3.mp4
# 3. Run 3D reconstruction
python reconstruction/reconstruction_pipeline.py --video pat3.mp4 --feature disk

- Higher Laplacian Variance = Sharper images
- Lower BRISQUE Score = Better perceptual quality
- Higher SSIM/PSNR = Better similarity to original
- More 3D Points = Denser reconstruction
- Lower Reprojection Error = More accurate geometry
- Longer Track Length = More robust feature matching
Compare across:
- Original frames (baseline quality)
- Blurred variants (degradation analysis)
- Deblurred frames (restoration effectiveness)
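
One way to run such a comparison, assuming the per-frame metrics have been exported to a CSV; the file path and column names below are hypothetical.

```python
# Hypothetical cross-comparison of per-frame metrics; the CSV path and columns are assumptions.
import pandas as pd

df = pd.read_csv("data/metrics/pat3/combined_metrics.csv")    # hypothetical export
summary = (
    df.groupby(["frame_type", "method"])                      # e.g. original / blurred / deblurred, per method
      .agg(mean_sharpness=("laplacian_variance", "mean"),
           mean_brisque=("brisque", "mean"),
           mean_ssim=("ssim", "mean"))
      .sort_values("mean_sharpness", ascending=False)
)
print(summary)
```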
- blur/: Core blur processing and deblurring
- reconstruction/: 3D reconstruction with maploc
- utils/: Shared utilities and core functions
- data/: Organized data storage hierarchy
- docs/: Comprehensive documentation
- Modular Architecture: Each component is self-contained
- Reproducible Processing: All operations use fixed seeds
- Organized Output: Systematic data hierarchy
- Flexible Execution: Skip/resume individual steps
VAPOR is designed for research in:
- Image Quality Assessment: Quantitative blur impact measurement
- Deblurring Algorithm Evaluation: Systematic comparison framework
- 3D Reconstruction Analysis: SfM quality under varying conditions
- Medical Imaging: CT scan reconstruction quality assessment
- ✅ Blur Pipeline: Successfully tested with pat3.mp4 using stride 60
- ✅ Deterministic Kernels: Reproducible blur generation with fixed seeds implemented and working
- ✅ Metrics Calculation: Comprehensive quality assessment system operational with new directory structure
- ✅ 3D Reconstruction: Maploc integration framework established and functional
- ✅ Directory Organization: New structure implemented and validated (data/metrics/{video}/{original,blurred,deblurred})
- ✅ Documentation: Complete documentation system organized in docs/ directory
| Component | Status | Details |
|---|---|---|
| Blur Generation | ✅ Working | Deterministic kernel system operational |
| Quality Metrics | ✅ Working | Full-reference, no-reference, and sharpness metrics |
| Deblurring Integration | ✅ Ready | 6 models with unified CLI and environment management |
| 3D Reconstruction | ✅ Framework Ready | Maploc integration established |
| Experiment Logging | ✅ Operational | Comprehensive tracking and reproducibility system |
| File Organization | ✅ Implemented | Systematic data organization by video/type/method |
The VAPOR system has been comprehensively tested and validated with:
- pat3.mp4 video processing with stride 60 frame extraction
- Complete blur generation pipeline with all 5 blur types and 2 intensities
- Deterministic kernel generation ensuring reproducible results across runs
- Organized directory structure separating original, blurred, and deblurred data
- Comprehensive setup guides and documentation system
- File cleanup and organization removing duplicates and establishing clear structure
VAPOR is now ready for:
- Research Applications: Quantitative blur impact studies
- Algorithm Evaluation: Systematic deblurring method comparison
- 3D Analysis: SfM reconstruction quality assessment
- Production Use: Reproducible video analysis workflows
If you use VAPOR in your research, please cite:
@software{vapor2025,
title={VAPOR: Video Analysis Processing for Object Recognition},
author={VAPOR Development Team},
year={2025},
url={https://github.com/KesneyFerro/VAPOR},
note={Integrated blur processing and 3D reconstruction analysis system}
}

- Documentation: See the docs/ directory for detailed guides
- Issues: Report bugs and feature requests via GitHub issues
- Setup Help: See docs/Setup_Guides/ for detailed setup instructions
VAPOR provides a comprehensive framework for analyzing the impact of blur on both 2D image quality and 3D reconstruction capabilities, enabling quantitative evaluation of deblurring methods for computer vision applications.