Expert-level Postgres monitoring tool designed for humans and AI systems
Built for senior DBAs, SREs, and AI systems who need rapid root cause analysis and deep performance insights. This isn't a tool for beginners; it's designed for Postgres experts who need to understand complex performance issues in minutes, not hours.
Part of Self-Driving Postgres - postgres_ai monitoring is a foundational component of PostgresAI's open-source Self-Driving Postgres (SDP) initiative, providing the advanced monitoring and intelligent root cause analysis capabilities essential for achieving higher levels of database automation.
- Top-down troubleshooting methodology: Follows the Four Golden Signals approach (Latency, Traffic, Errors, Saturation)
- Expert-focused design: Assumes deep Postgres knowledge and performance troubleshooting experience
- Dual-purpose architecture: Built for both human experts and AI systems requiring structured performance data
- Comprehensive query analysis: Complete pg_stat_statements metrics with historical trends and plan variations
- Active Session History: Postgres's answer to Oracle ASH and AWS RDS Performance Insights (see the sampling sketch right after this list)
- Hybrid storage: Victoria Metrics (Prometheus-compatible) for metrics, Postgres for query texts: best of both worlds
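For intuition, ASH-style analysis boils down to periodically sampling pg_stat_activity and keeping the history of wait events. A minimal sketch of a single sample, using a hypothetical $TARGET_DSN connection string (the dashboards collect and store this continuously; this only illustrates the underlying idea):
# Take one ASH-style sample of active sessions and their current wait events
psql "$TARGET_DSN" -c "
  SELECT now() AS sample_time, pid, usename, state,
         wait_event_type, wait_event, left(query, 60) AS query
  FROM pg_stat_activity
  WHERE state <> 'idle' AND pid <> pg_backend_pid();"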
Read more: postgres_ai monitoring v0.7 announcement - detailed technical overview and architecture decisions.
This tool is NOT for beginners. It requires extensive Postgres knowledge and assumes familiarity with:
- Advanced Postgres internals and performance concepts
- Query plan analysis and optimization techniques
- Wait event analysis and system-level troubleshooting
- Production database operations and incident response
If you're new to Postgres, consider starting with simpler monitoring solutions before using postgres_ai.
Experience the full monitoring solution: https://demo.postgres.ai (login: demo / password: demo)
- Troubleshooting dashboard - Four Golden Signals with immediate incident response insights
- Query performance analysis - Top-N query workload analysis with resource consumption breakdowns
- Single query analysis - Deep dive into individual query performance and plan variations
- Wait event analysis - Active Session History for session-level troubleshooting
- Backups and DR - WAL archiving monitoring with RPO measurements
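The Backups and DR dashboard is built on top of Postgres's archiver statistics; the raw signal can be inspected by hand roughly like this (a sketch assuming archive_mode is enabled and a hypothetical $TARGET_DSN; the time since the last successful archive is one common proxy for RPO):
# Inspect WAL archiving progress and failures directly
psql "$TARGET_DSN" -c "
  SELECT archived_count, failed_count,
         last_archived_wal, last_archived_time,
         now() - last_archived_time AS time_since_last_archive
  FROM pg_stat_archiver;"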
- Collection: pgwatch v3 (by Cybertec) for metrics gathering
- Storage: Victoria Metrics for time-series data + Postgres for query texts
- Visualization: Grafana with expert-designed dashboards
- Analysis: Structured data output for AI system integration
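Because Victoria Metrics exposes the standard Prometheus HTTP API, both humans and AI systems can pull structured metrics from it directly. A sketch, assuming the default port 59090 used by the quickstart (the exact metric names depend on the configured pgwatch metric sets):
# List the metric names currently stored in Victoria Metrics
curl -s 'http://localhost:59090/api/v1/label/__name__/values'
# Run an instant PromQL query against one of those metrics
curl -s 'http://localhost:59090/api/v1/query?query=<metric_name_from_the_list_above>'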
Infrastructure:
- Linux machine with Docker installed (separate from your database server)
- Docker access - the user running postgres_ai must have Docker permissions
- Access (network and pg_hba) to the Postgres database(s) you want to monitor
Database:
- Supports Postgres versions 14-18
- The pg_stat_statements extension must be created in the database used for the monitoring connection
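If the extension is missing, a superuser can create it; note that pg_stat_statements also has to be listed in shared_preload_libraries, which requires a restart. A sketch using a hypothetical $TARGET_DSN:
# Create the extension in the database you will use for the monitoring connection
psql "$TARGET_DSN" -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
# Verify it is installed and that the library is preloaded
psql "$TARGET_DSN" -c "SELECT extname, extversion FROM pg_extension WHERE extname = 'pg_stat_statements';"
psql "$TARGET_DSN" -c "SHOW shared_preload_libraries;"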
Create a database user for monitoring (skip this if you want to just check out postgres_ai monitoring with a synthetic demo database).
Use the CLI to create/update the monitoring role and grant all required permissions (idempotent):
# Connect as an admin/superuser and run the idempotent setup:
# - create/update the monitoring role
# - create required view(s)
# - apply required grants (and optional extensions where supported)
# Admin password comes from PGPASSWORD (libpq standard) unless you pass --admin-password.
#
# Monitoring password:
# - by default, postgresai generates a strong password automatically
# - it is printed only in interactive (TTY) mode, or if you opt in via --print-password
PGPASSWORD='...' npx postgresai init postgresql://admin@host:5432/dbname
Optional permissions (RDS/self-managed extras) are enabled by default. To skip them:
PGPASSWORD='...' npx postgresai init postgresql://admin@host:5432/dbname --skip-optional-permissions
Verify everything is in place (no changes):
PGPASSWORD='...' npx postgresai init postgresql://admin@host:5432/dbname --verify
If you want to reset the monitoring password only (no other changes), you can rely on auto-generation:
PGPASSWORD='...' npx postgresai init postgresql://admin@host:5432/dbname --reset-password
By default, postgresai init auto-generates a strong password (see above).
If you want to set a specific password instead:
PGPASSWORD='...' npx postgresai init postgresql://admin@host:5432/dbname --reset-password --password 'new_password'
If you want to see what will be executed first, use --print-sql (prints the SQL plan and exits; passwords redacted by default). This can be done without a DB connection:
npx postgresai init --print-sql
Optionally, to render the plan for a specific database:
# Pick database (default is PGDATABASE or "postgres"):
npx postgresai init --print-sql -d dbname
# Provide an explicit monitoring password (still redacted in output):
npx postgresai init --print-sql -d dbname --password '...'
Permission denied errors
If you see errors like permission denied / insufficient_privilege / code 42501, you are not connected with enough privileges to create roles, grant permissions, or create extensions/views.
How to fix:
- Connect as a superuser, or a role with CREATEROLE and sufficient GRANT/DDL privileges
- On RDS/Aurora: use a user with the rds_superuser role (typically postgres, the most highly privileged user on RDS for PostgreSQL)
- On Cloud SQL: use a user with the cloudsqlsuperuser role (often postgres)
- On Supabase: use the postgres user (default administrator with elevated privileges for role/permission management)
- On managed providers: use the provider's admin role/user
Review SQL before running (audit-friendly):
npx postgresai init --print-sql -d mydb
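To see at a glance whether the role you are connected as is likely to have enough privileges, a generic check such as the following can help (a sketch using a hypothetical $ADMIN_DSN; on managed platforms the relevant privilege is membership in the provider's admin role rather than rolsuper):
# Inspect the attributes of the role you are currently connected as
psql "$ADMIN_DSN" -c "
  SELECT rolname, rolsuper, rolcreaterole, rolcreatedb
  FROM pg_roles
  WHERE rolname = current_user;"
# On RDS/Aurora (where the rds_superuser role exists), check membership instead:
psql "$ADMIN_DSN" -c "SELECT pg_has_role(current_user, 'rds_superuser', 'member');"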
One command setup:
# Download the CLI
curl -o postgres_ai https://gitlab.com/postgres-ai/postgres_ai/-/raw/main/postgres_ai \
&& chmod +x postgres_ai
Now, start it and wait for a few minutes. To obtain a PostgresAI access token for your organization, visit https://console.postgres.ai (Your org name → Manage → Access tokens):
# Production setup with your Access token
./postgres_ai quickstart --api-key=your_access_token
Note: You can also add your database instance in the same command:
./postgres_ai quickstart --api-key=your_access_token --add-instance="postgresql://user:pass@host:port/DB"
Or if you want to just check out how it works:
# Complete setup with demo database
./postgres_ai quickstart --demo
That's it! Everything is installed, configured, and running.
WARNING: Security is your responsibility!
This monitoring solution exposes several ports that MUST be properly firewalled:
- Port 3000 (Grafana) - Contains sensitive database metrics and dashboards
- Port 58080 (PGWatch Postgres) - Database monitoring interface
- Port 58089 (PGWatch Prometheus) - Database monitoring interface
- Port 59090 (Victoria Metrics) - Metrics storage and queries
- Port 59091 (PGWatch Prometheus endpoint) - Metrics collection
- Port 55000 (Flask API) - Backend API service
- Port 55432 (Demo DB) - When using the --demo option
- Port 55433 (Metrics DB) - Postgres metrics storage
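To check which of these ports are actually listening on the monitoring host (assuming a Linux host with iproute2's ss available):
# List the monitoring stack's listening TCP ports
ss -tlnp | grep -E ':(3000|58080|58089|59090|59091|55000|55432|55433)\b'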
Configure your firewall to:
- Block public access to all monitoring ports
- Allow access only from trusted networks/IPs
- Use VPN or SSH tunnels for remote access
Failure to secure these ports may expose sensitive database information!
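As one illustration only, assuming Ubuntu with ufw, a hypothetical trusted admin subnet 203.0.113.0/24, and a default-deny inbound policy (adapt to your own firewall tooling and network layout):
# Deny inbound by default, then allow Grafana only from the trusted subnet
sudo ufw default deny incoming
sudo ufw allow from 203.0.113.0/24 to any port 3000 proto tcp
sudo ufw enable
# Or keep all ports closed and reach Grafana through an SSH tunnel instead:
ssh -L 3000:localhost:3000 user@monitoring-host
# then open http://localhost:3000 on your workstation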
- Grafana Dashboards - Visual monitoring at http://localhost:3000
- Postgres Monitoring - PGWatch with comprehensive metrics
- Automated Reports - Daily performance analysis
- API Integration - Automatic upload to PostgresAI
- Demo Database - Ready-to-use test environment
For developers:
./postgres_ai quickstart --demo
Get a complete monitoring setup with demo data in under 2 minutes.
For production:
./postgres_ai quickstart --api-key=your_key
# Then add your databases
./postgres_ai add-instance "postgresql://user:pass@host:port/DB"
# Instance management
./postgres_ai add-instance "postgresql://user:pass@host:port/DB"
./postgres_ai list-instances
./postgres_ai test-instance my-DB
# Service management
./postgres_ai status
./postgres_ai logs
./postgres_ai restart
# Health check
./postgres_ai health
postgres_ai monitoring generates automated health check reports based on postgres-checkup. Each report has a unique check ID and title:
| Check ID | Title |
|---|---|
| A001 | System information |
| A002 | Version information |
| A003 | Postgres settings |
| A004 | Cluster information |
| A005 | Extensions |
| A006 | Postgres setting deviations |
| A007 | Altered settings |
| A008 | Disk usage and file system type |
| Check ID | Title |
|---|---|
| D004 | pg_stat_statements and pg_stat_kcache settings |
| Check ID | Title |
|---|---|
| F001 | Autovacuum: current settings |
| F004 | Autovacuum: heap bloat (estimated) |
| F005 | Autovacuum: index bloat (estimated) |
| Check ID | Title |
|---|---|
| G001 | Memory-related settings |
| Check ID | Title |
|---|---|
| H001 | Invalid indexes |
| H002 | Unused indexes |
| H004 | Redundant indexes |
| Check ID | Title |
|---|---|
| K001 | Globally aggregated query metrics |
| K003 | Top-50 queries by total_time |
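For reference, the K003 ranking corresponds conceptually to ordering pg_stat_statements by total execution time, roughly as below (a sketch for Postgres 13 and newer, where the column is total_exec_time; the generated report adds snapshots and diffs on top of this):
# Roughly what "Top-50 queries by total_time" means in pg_stat_statements terms
psql "$TARGET_DSN" -c "
  SELECT queryid, calls,
         round(total_exec_time::numeric, 1) AS total_ms,
         round(mean_exec_time::numeric, 2) AS mean_ms,
         left(query, 60) AS query
  FROM pg_stat_statements
  ORDER BY total_exec_time DESC
  LIMIT 50;"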
After running quickstart:
- MAIN: Grafana Dashboard: http://localhost:3000 (login: monitoring; password is shown at the end of quickstart)
Technical URLs (for advanced users):
- Demo DB: postgresql://postgres:postgres@localhost:55432/target_database
- Monitoring: http://localhost:58080 (PGWatch)
- Metrics: http://localhost:59090 (Victoria Metrics)
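For example, the demo database can be queried directly with psql using the connection string above:
# Connect to the demo database and run a sanity check
psql "postgresql://postgres:postgres@localhost:55432/target_database" -c "SELECT version();"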
./postgres_ai help
# run without install
node ./cli/bin/postgres-ai.js --help
# local dev: install aliases into PATH
npm --prefix cli install --no-audit --no-fund
npm link ./cli
postgres-ai --help
postgresai --help
# or install globally after publish (planned)
# npm i -g @postgresai/cli
# postgres-ai --help
# postgresai --help
Get your access token at PostgresAI for automated report uploads and advanced analysis.
- Host stats for on-premise and managed Postgres setups
- pg_wait_sampling and pg_stat_kcache extension support
- Additional expert dashboards: autovacuum, checkpointer, lock analysis
- Query plan analysis and automated recommendations
- Enhanced AI integration capabilities
Python-based report generation lives under reporter/ and now ships with a pytest suite.
Install dev dependencies (includes pytest, pytest-postgresql, psycopg, etc.):
python3 -m pip install -r reporter/requirements-dev.txt
Run only unit tests with mocked Prometheus interactions:
pytest tests/reporter
This automatically skips integration tests. Or run specific test files:
pytest tests/reporter/test_generators_unit.py -v
pytest tests/reporter/test_formatters.py -v
Run the complete test suite (both unit and integration tests):
pytest tests/reporter --run-integration
Integration tests automatically create and destroy their own temporary PostgreSQL instances and require the PostgreSQL binaries (initdb, postgres) on your PATH. No manual database setup or environment variables are needed.
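To confirm the binaries are visible before running the integration suite (the paths below are just a common Debian/Ubuntu layout, where they are often not on PATH by default):
# Check that the PostgreSQL binaries the integration tests need are on PATH
command -v initdb postgres && initdb --version
# On Debian/Ubuntu they typically live under /usr/lib/postgresql/<version>/bin:
ls /usr/lib/postgresql/*/bin/initdb 2>/dev/null
export PATH="/usr/lib/postgresql/16/bin:$PATH"   # adjust the version to your installation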
Summary:
- pytest tests/reporter: unit tests only (integration tests skipped)
- pytest tests/reporter --run-integration: both unit and integration tests
Generate coverage report:
pytest tests/reporter -m unit --cov=reporter --cov-report=html
View the coverage report by opening htmlcov/index.html in your browser.
We welcome contributions from Postgres experts! Please check our GitLab repository for:
- Code standards and review process
- Dashboard design principles
- Testing requirements for monitoring components
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
postgres_ai monitoring is developed by PostgresAI, bringing years of Postgres expertise into automated monitoring and analysis tools. We provide enterprise consulting and advanced Postgres solutions for fast-growing companies.
- Get support
- Postgres.TV (YouTube)
- Postgres FM Podcast
- Report issues
- Enterprise support