Enterprise Server: Setup Overview

This guide walks through the general process of setting up pganalyze Enterprise Server in your environment. It covers what you'll need to provision, how the components fit together, and where to go for platform-specific instructions.

If you're upgrading an existing installation, see the upgrade instructions instead.

How it all fits together

A pganalyze Enterprise Server deployment has three main components:

  • pganalyze Enterprise Server — the central application that stores metrics, processes statistics, and serves the web UI. It runs as one or more containers (Docker, podman, or Kubernetes) and needs a PostgreSQL database for its internal data storage. This is what your team logs into.

  • Statistics database — a PostgreSQL database that you provision and manage. The pganalyze Enterprise Server uses it to store all collected metrics, query statistics, and configuration data. This is separate from the databases you are monitoring.

  • pganalyze collector — a lightweight agent that connects to your monitored PostgreSQL servers, gathers performance data, and reports it back to the pganalyze Enterprise Server. The collector can run as part of the pganalyze Enterprise Server container (simplest for getting started) or as a separate installation (recommended for production deployments with multiple servers or network boundaries).

The data flow is straightforward: each collector connects to one or more PostgreSQL servers, collects statistics and optionally streams logs, then sends everything to your pganalyze Enterprise Server over a secure WebSocket connection. End users access the pganalyze web UI through a load balancer or directly.

What you'll set up

Setting up pganalyze Enterprise Server happens in two phases. The first phase gets the central application running. The second phase prepares each database you want to monitor and connects it to the system.


Phase 1: Deploy pganalyze Enterprise Server

This phase is typically done once in each environment. By the end, you'll have a running pganalyze Enterprise Server instance that your team can log into.

1. Provision a statistics database

You need to supply an empty PostgreSQL database for the pganalyze Enterprise Server's internal use. pganalyze will connect to this database during setup to initialize the schema and seed the necessary data.

  • PostgreSQL 14 or newer
  • Self-hosted or cloud-managed (Amazon RDS, Azure Database for PostgreSQL, etc.)
  • At least 50 GB of storage (100 GB or more recommended for production)
  • A modest instance size is fine for evaluations; production deployments monitoring many servers will benefit from additional CPU, memory, and storage resources
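
As a concrete illustration, here is a minimal sketch of provisioning a self-hosted statistics database with standard PostgreSQL tooling. The host name, role name, and database name are placeholders, and cloud-managed services (Amazon RDS, Azure Database for PostgreSQL, etc.) have their own provisioning workflows instead:

    # Create a dedicated role and an empty database for pganalyze's internal storage.
    # Host, role, and database names below are illustrative placeholders.
    createuser --host=stats-db.internal --username=postgres --pwprompt pganalyze_enterprise
    createdb   --host=stats-db.internal --username=postgres \
               --owner=pganalyze_enterprise pganalyze_enterprise

    # Verify connectivity with the new credentials before continuing.
    psql "host=stats-db.internal dbname=pganalyze_enterprise user=pganalyze_enterprise" \
         -c "SELECT version();"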

2. Deploy the pganalyze Enterprise Server container

The pganalyze Enterprise Server is delivered as a container image. Choose the deployment method that fits your infrastructure. Each guide walks through pulling the image, configuring environment variables, initializing the database, creating an admin user, and starting the server. Resource requirements will vary — a proof-of-concept with a handful of servers needs far fewer resources than a production deployment monitoring dozens of instances.
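
For orientation, a Docker-based deployment typically looks something like the sketch below. The image reference and environment variable names are assumptions for illustration only; use the exact image, tag, and configuration values from the deployment guide for your platform and from your license information:

    # Illustrative sketch only: the image name and environment variable names are
    # assumptions, not authoritative values -- take those from your deployment guide.
    docker run -d \
      --name pganalyze-enterprise \
      -p 5000:5000 \
      -e DATABASE_URL="postgres://pganalyze_enterprise:secret@stats-db.internal:5432/pganalyze_enterprise" \
      -e LICENSE_KEY="your-license-key" \
      quay.io/pganalyze/enterprise:latest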

3. Configure network access

The following network connections need to be open between components:

Connection                                              Port   Protocol
Collector → monitored PostgreSQL databases              5432   TCP
Collector → pganalyze Enterprise Server                 443    WebSocket Secure
End users → pganalyze Enterprise Server                 443    HTTPS
Internal load balancer → pganalyze Enterprise Server    5000   HTTP
pganalyze Enterprise Server → SMTP server (optional)    587    SMTP (email alerts)

The pganalyze Enterprise Server exposes its web UI on port 5000 internally. In most deployments, a load balancer or reverse proxy terminates TLS on port 443 and forwards traffic to port 5000. See the firewall configuration guide for details.
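
Once the container is running, a quick way to confirm that the required ports are reachable is to probe them from the relevant network segments. This is a generic sketch using common tooling; the hostnames are placeholders for your own environment:

    # From the load balancer / reverse proxy host: the app should answer on port 5000.
    curl -sS -o /dev/null -w "HTTP %{http_code}\n" http://pganalyze-enterprise.internal:5000/

    # From a collector host: port 443 on the pganalyze Enterprise Server must be
    # reachable (the collector connects over a TLS-secured WebSocket on this port).
    nc -vz pganalyze.example.com 443

    # From a collector host: each monitored PostgreSQL server must accept
    # connections on its PostgreSQL port (5432 by default).
    nc -vz prod-db-1.internal 5432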

4. Run initial setup and verify

Each deployment guide includes steps to initialize the database schema, run the pganalyze Enterprise Server self-check, and create an initial admin user. Once complete, you can log in to the pganalyze web UI, create your organization, and optionally get the required API key(s) to allow a separately installed collector to submit monitoring data.


Phase 2: Prepare and connect each monitored database

Once the pganalyze Enterprise Server is running, you'll repeat these steps for each PostgreSQL server you want to monitor. You can start with a single server and add more over time.

Monitoring layers

pganalyze monitoring is made up of three layers. Enabling all three provides the most robust monitoring and optimization experience in pganalyze, and is what we recommend. However, some services (e.g., Azure Event Hubs for streaming Postgres logs in Azure) may require assistance from other teams in your organization and may not be possible to set up right away. Each of these layers can be enabled incrementally as time and resources allow.

  1. Query and metric collection — the core layer. Requires pg_stat_statements and a restricted monitoring user. This gives you query performance statistics, table and index stats, connection activity, vacuum metrics, and more. This layer is always required.

  2. Log Insights — streams and parses PostgreSQL logs into structured events: slow queries, lock waits, autovacuum activity, errors, and more. The mechanism for delivering logs to the collector varies by platform (for example, AWS uses IAM-based API access while Azure requires an Event Hub), but the end result is the same. Log Insights is also required for retrieving EXPLAIN plans for slow queries, which is the next layer.

  3. Automated EXPLAIN — once logs are flowing, pganalyze can automatically collect EXPLAIN plans for slow queries using auto_explain. This gives you the actual execution plan used at query time, including buffer usage and timing data.

For each monitored server

  • Enable pg_stat_statements — this extension is bundled with PostgreSQL but not enabled by default in most environments. It must be added to shared_preload_libraries, which requires a database restart. If you plan to also enable auto_explain, add both at the same time to avoid a second restart (see the combined sketch after this list).

  • Create a monitoring user and helper functions — we recommend a dedicated user with the pg_monitor role, which provides read access to all the statistics views pganalyze needs without requiring superuser privileges.

  • Configure log delivery (for Log Insights) — the setup varies by platform. See your platform's installation guide for specific instructions.

  • Enable auto_explain (for Automated EXPLAIN) — configure it to log execution plans for queries exceeding a duration threshold. With the recommended settings, the performance impact is minimal.
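
The following is a condensed sketch of the database-side preparation described above, using standard PostgreSQL commands. All names and passwords are placeholders, the exact monitoring-user grants and pganalyze helper functions come from the installation guide for your PostgreSQL version, and the auto_explain thresholds shown are illustrative rather than prescriptive:

    # 1. Load pg_stat_statements (and auto_explain, if desired) -- requires a restart.
    psql -h prod-db-1.internal -U postgres -d postgres <<'SQL'
    ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements,auto_explain';
    SQL
    # ...restart PostgreSQL here (platform-specific), then enable the extension:
    psql -h prod-db-1.internal -U postgres -d mydb \
         -c 'CREATE EXTENSION IF NOT EXISTS pg_stat_statements;'

    # 2. Create a restricted monitoring user with the pg_monitor role.
    #    (The pganalyze helper functions are created separately; see the install guide.)
    psql -h prod-db-1.internal -U postgres -d postgres <<'SQL'
    CREATE USER pganalyze WITH PASSWORD 'choose-a-strong-password' CONNECTION LIMIT 5;
    GRANT pg_monitor TO pganalyze;
    SQL

    # 3. Example auto_explain settings (thresholds shown are illustrative).
    psql -h prod-db-1.internal -U postgres -d postgres <<'SQL'
    ALTER SYSTEM SET auto_explain.log_min_duration = '1000ms';
    ALTER SYSTEM SET auto_explain.log_analyze = on;
    ALTER SYSTEM SET auto_explain.log_buffers = on;
    ALTER SYSTEM SET auto_explain.log_format = 'json';
    SELECT pg_reload_conf();
    SQL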

Install and configure the collector

The collector is what connects your monitored databases to the pganalyze Enterprise Server. You have two options:

  • Separate collector installation — runs on its own instance near the monitored databases. Recommended for production deployments with multiple servers, when databases span different networks or regions, or when you want the collector closer to the databases it monitors. The collector is also released on a separate, often more frequent schedule than the pganalyze Enterprise Server container, so a separate installation lets you apply collector patches and improvements as soon as they become available. See the separate collector installation guide.

  • Integrated collector — runs inside the pganalyze Enterprise Server container. This is the fastest way to get started, especially for evaluations or when monitoring a small number of databases. You configure monitored servers through the pganalyze web UI.

Each collector instance can monitor multiple PostgreSQL servers. Collectors authenticate to the pganalyze Enterprise Server using an API key.
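
As a sketch of what a separate collector installation might look like: the collector reads an INI-style configuration file (commonly /etc/pganalyze-collector.conf) with one [pganalyze] section pointing at your pganalyze Enterprise Server and one section per monitored server. The values below are placeholders, and the exact file location and available settings are documented in the collector installation guide:

    # Write an illustrative collector configuration (all values are placeholders).
    sudo tee /etc/pganalyze-collector.conf > /dev/null <<'EOF'
    [pganalyze]
    api_key = YOUR_ORGANIZATION_API_KEY
    api_base_url = https://pganalyze.example.com

    [prod-db-1]
    db_host = prod-db-1.internal
    db_port = 5432
    db_name = mydb
    db_username = pganalyze
    db_password = choose-a-strong-password
    EOF

    # Run a test collection to verify connectivity to both the monitored database
    # and the pganalyze Enterprise Server.
    sudo pganalyze-collector --test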

Platform-specific installation guides

These guides cover the full process for preparing a database and connecting the collector:

