A super-reliable Python script for backing up ALL Zoom cloud recordings to local storage (SSD/HDD).
- Streaming Download - Downloads in 8 MB chunks directly to disk, no RAM buffering (see the sketch after this list)
- Resume Capability - Uses HTTP Range headers to continue interrupted downloads
- SQLite Progress Tracking - Persistent database tracks every file's status
- Integrity Verification - Verifies file size after download
- Disk Health Checks - Monitors disk availability, free space, and write access
- Graceful Shutdown - Ctrl+C saves progress, resume anytime
- Auto-Restart Wrapper - Bash script with exponential backoff for unattended operation
- Progress Bars - Visual progress for overall and per-file downloads
- All User Types - Fetches recordings from active, inactive, and pending users
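As a concrete illustration of the streaming, resume, and integrity-check features, here is a minimal sketch of a resumable chunked download. The real script adds retries, rate limiting, and SQLite bookkeeping; `expected_size` is assumed to come from Zoom's reported file size:

```python
import os
import requests

CHUNK = 8 * 1024 * 1024  # 8 MB, matching download.chunk_size_mb

def download_resumable(url: str, dest: str, expected_size: int, token: str) -> None:
    """Stream a recording to disk, resuming a partial file if one exists."""
    done = os.path.getsize(dest) if os.path.exists(dest) else 0
    if done >= expected_size:
        return  # already complete
    headers = {"Authorization": f"Bearer {token}"}
    if done:
        # Ask the server for only the bytes we are missing
        headers["Range"] = f"bytes={done}-"
    with requests.get(url, headers=headers, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "ab") as f:  # append mode keeps the partial bytes
            for chunk in resp.iter_content(chunk_size=CHUNK):
                f.write(chunk)
    # Integrity check: a size mismatch means the file must be re-downloaded
    if os.path.getsize(dest) != expected_size:
        os.remove(dest)
        raise IOError(f"size mismatch for {dest}")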
Requirements:

- Python 3.7+
- Zoom Server-to-Server OAuth App credentials
- External storage with sufficient space
```bash
# Clone the repository
git clone https://github.com/yourusername/zoom-cloud-backup.git
cd zoom-cloud-backup

# Install dependencies
pip install -r requirements.txt

# Copy and configure
cp config.example.json config.json
# Edit config.json with your Zoom credentials and storage path
```

To create the Server-to-Server OAuth credentials:

- Go to the Zoom App Marketplace
- Click Develop → Build App
- Choose Server-to-Server OAuth
- Fill in the app information
- Add scopes:
  - `cloud_recording:read:list_user_recordings:admin`
  - `cloud_recording:read:list_recording_files:admin`
  - `user:read:list_users:admin`
  - `user:read:user:admin`
- Copy the Account ID, Client ID, and Client Secret into `config.json`
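For context, a Server-to-Server OAuth app trades these credentials for a short-lived access token via Zoom's `account_credentials` grant. A minimal sketch (the helper name is illustrative):

```python
import base64
import requests

def get_access_token(account_id: str, client_id: str, client_secret: str) -> str:
    """Fetch a Server-to-Server OAuth token using the account_credentials grant."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    resp = requests.post(
        "https://zoom.us/oauth/token",
        headers={"Authorization": f"Basic {creds}"},
        params={"grant_type": "account_credentials", "account_id": account_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```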
Edit `config.json`:

```json
{
"zoom": {
"account_id": "YOUR_ACCOUNT_ID",
"client_id": "YOUR_CLIENT_ID",
"client_secret": "YOUR_CLIENT_SECRET"
},
"storage": {
"base_path": "/path/to/backup/folder",
"min_free_space_gb": 5
},
"download": {
"chunk_size_mb": 8,
"max_retries": 5,
"retry_delay_sec": 10,
"rate_limit_delay": 0.1,
"timeout_sec": 60
},
"scan": {
"start_date": "2020-01-01",
"end_date": null
}
}
```

| Option | Description | Default |
|---|---|---|
| `storage.base_path` | Where to save recordings | Required |
| `storage.min_free_space_gb` | Minimum free space to continue | 5 GB |
| `download.chunk_size_mb` | Download chunk size | 8 MB |
| `download.max_retries` | Max retries per file | 5 |
| `download.retry_delay_sec` | Delay between retries | 10 sec |
| `download.rate_limit_delay` | Delay between API calls | 0.1 sec |
| `download.timeout_sec` | Request timeout | 60 sec |
| `scan.start_date` | Earliest date to scan | 2020-01-01 |
| `scan.end_date` | Latest date (null = today) | null |
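To show how these defaults can be applied, here is an illustrative loader, not the script's actual code; the `DEFAULTS` layout simply mirrors the table above:

```python
import json

# Defaults mirror the table above; "Required" options have no default.
DEFAULTS = {
    "storage": {"min_free_space_gb": 5},
    "download": {"chunk_size_mb": 8, "max_retries": 5,
                 "retry_delay_sec": 10, "rate_limit_delay": 0.1,
                 "timeout_sec": 60},
    "scan": {"start_date": "2020-01-01", "end_date": None},
}

def load_config(path: str = "config.json") -> dict:
    with open(path) as f:
        user = json.load(f)
    if "base_path" not in user.get("storage", {}):
        raise ValueError("storage.base_path is required")
    # User values override defaults, section by section
    return {"zoom": user["zoom"],
            **{s: {**d, **user.get(s, {})} for s, d in DEFAULTS.items()}}
```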
```bash
# Full scan and download
python zoom_backup.py
# Resume interrupted download (skip scanning)
python zoom_backup.py --resume
# Scan only (no download)
python zoom_backup.py --scan-only
# Show statistics
python zoom_backup.py --status
```
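These flags map onto a small `argparse` setup; an illustrative sketch of the entry point (the script's actual wiring may differ):

```python
import argparse

parser = argparse.ArgumentParser(description="Back up Zoom cloud recordings")
parser.add_argument("--resume", action="store_true",
                    help="skip scanning and continue pending downloads")
parser.add_argument("--scan-only", action="store_true",
                    help="scan and record files without downloading")
parser.add_argument("--status", action="store_true",
                    help="print download statistics and exit")
args = parser.parse_args()
```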
Use the wrapper script for overnight/unattended backups:

```bash
# Make executable
chmod +x run_backup.sh
# Run with auto-restart
./run_backup.sh
# Watch logs in another terminal
tail -f zoom_backup.log
```

The wrapper script:
- Automatically restarts on failure (up to 100 times)
- Uses exponential backoff (10 sec → 5 min; sketched after this list)
- Checks disk availability before restart
- Resets retry counter after 1 hour of successful operation
- Exits when all files are downloaded
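The backoff schedule implied by the list above is simple to express; a sketch (constants match the wrapper's 10-second base and 5-minute cap):

```python
def backoff_delay(attempt: int, base: float = 10.0, cap: float = 300.0) -> float:
    """Exponential backoff: 10 s, 20 s, 40 s, ... capped at 5 minutes."""
    return min(base * (2 ** attempt), cap)
```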
Downloaded recordings are organized by user and date:
```
/backup/path/
├── user1@example.com/
│ ├── 2024-01-15/
│ │ ├── Meeting_Topic_10-30.mp4
│ │ ├── Meeting_Topic_10-30.m4a
│ │ ├── Meeting_Topic_10-30.vtt
│ │ └── Meeting_Topic_10-30.chat.txt
│ └── 2024-01-16/
│ └── ...
├── user2@example.com/
│ └── ...
├── progress.db # SQLite progress database
├── zoom_backup.log # Main log file
└── wrapper.log # Wrapper script log
```
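A sketch of how such paths can be built from a recording's metadata (the sanitizer and parameter names are illustrative; `start_time` is assumed to be ISO 8601, as Zoom's API returns it):

```python
import os
import re

def local_path(base: str, user_email: str, topic: str,
               start_time: str, extension: str) -> str:
    """Build /base/user@example.com/YYYY-MM-DD/Topic_HH-MM.ext from
    an ISO-8601 start time like '2024-01-15T10:30:00Z'."""
    date, clock = start_time[:10], start_time[11:16].replace(":", "-")
    safe_topic = re.sub(r"[^\w\-]+", "_", topic).strip("_")
    return os.path.join(base, user_email, date,
                        f"{safe_topic}_{clock}.{extension.lower()}")
```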
The following file types are downloaded:

- MP4 - Video recordings
- M4A - Audio only
- VTT - Subtitles/captions
- TRANSCRIPT - Meeting transcripts
- CHAT - Chat logs
- SHARED_SCREEN - Screen sharing recordings
The script uses SQLite to track the status of every file.
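A sketch of the kind of table this implies, with columns inferred from the queries below (the script's actual schema may differ):

```python
import sqlite3

conn = sqlite3.connect("progress.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS recordings (
        id            INTEGER PRIMARY KEY,
        local_path    TEXT UNIQUE,
        status        TEXT DEFAULT 'pending',   -- pending / completed / failed
        retry_count   INTEGER DEFAULT 0,
        error_message TEXT
    )
""")
conn.commit()
```

You can inspect or repair `progress.db` directly from the command line: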
```bash
# Check pending files
sqlite3 progress.db "SELECT COUNT(*) FROM recordings WHERE status='pending'"

# Check failed files
sqlite3 progress.db "SELECT local_path, error_message FROM recordings WHERE status='failed'"

# Reset failed files to retry
sqlite3 progress.db "UPDATE recordings SET status='pending', retry_count=0 WHERE status='failed'"
```

Common problems and fixes:

- Drive not found - Ensure your external drive is connected and mounted, and that the mount point in `config.json` matches your system
- Slow downloads - Increase `chunk_size_mb` for faster networks, and check your internet connection
- Files that keep failing - Some files may be corrupted on Zoom's servers; reset and retry with the `sqlite3` command above
- Missing recordings - Ensure `scan.start_date` covers your earliest recordings. The script fetches active, inactive, AND pending users, but deleted users are not recoverable via the API
| Issue | Solution |
|---|---|
| Disk disconnects | Checks `os.path.ismount()` before each file |
| Network drops | HTTP Range headers for resume + retry with backoff |
| Out of space | Checks `shutil.disk_usage()` continuously |
| Ctrl+C interrupt | Graceful shutdown saves progress to SQLite |
| Corrupted download | Verifies file size, re-downloads on mismatch |
| Zoom rate limits | Configurable delays + exponential backoff |
| Token expiration | Auto-refresh OAuth token |
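The disk checks are cheap enough to run before every file; a sketch using the standard-library calls named in the table (the write-probe filename is illustrative):

```python
import os
import shutil

def disk_healthy(mount_point: str, min_free_gb: float = 5.0) -> bool:
    """Return True if the backup volume is mounted, has space, and is writable."""
    if not os.path.ismount(mount_point):
        return False
    free_gb = shutil.disk_usage(mount_point).free / 1024**3
    if free_gb < min_free_gb:
        return False
    probe = os.path.join(mount_point, ".write_test")
    try:
        with open(probe, "w") as f:
            f.write("ok")
        os.remove(probe)
    except OSError:
        return False
    return True
```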
MIT License - see LICENSE file.
Contributions are welcome! Please feel free to submit a Pull Request.
This script was created to reliably back up ~300 GB of Zoom recordings overnight. It has been tested with:
- 8,800+ files
- 290+ GB of data
- 13 users (active, inactive, pending)
- 10+ hours of continuous operation