MCP Server (Preview)

MCP Server is in Early Access

This feature is available to a limited number of customers in early access and may change without notice. Reach out to us if you're interested.

The pganalyze MCP (Model Context Protocol) server allows AI assistants to interact with your pganalyze data. This enables AI coding tools like Claude Code, Codex, or Cursor to query server metrics, inspect EXPLAIN plans, run the Index Advisor, and review active issues.

For a demo of the MCP server in action, watch the webinar recording:



Setup

Claude Code

Add the pganalyze MCP server using the CLI:

claude mcp add --transport http pganalyze https://app.pganalyze.com/mcp

Codex

Add the pganalyze MCP server in your ~/.codex/config.toml configuration:

[mcp_servers.pganalyze]
url = "https://app.pganalyze.com/mcp"

Cursor

In Cursor settings, add a new MCP server with the following configuration:

{
  "mcpServers": {
    "pganalyze": {
      "url": "https://app.pganalyze.com/mcp"
    }
  }
}

Other MCP clients

Any MCP client that supports HTTP transport can connect to the pganalyze MCP server at https://app.pganalyze.com/mcp. Refer to your client's documentation for how to configure HTTP-based MCP servers.
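Under the hood, MCP's HTTP transport exchanges JSON-RPC 2.0 messages, which your MCP client builds and sends for you (along with handling authentication). As a rough sketch of what that looks like on the wire, the first message a client POSTs to the server URL is an initialize request; the protocolVersion and clientInfo values below are illustrative:

```python
import json

# A minimal JSON-RPC 2.0 "initialize" request, as defined by the MCP
# specification. An MCP client POSTs this to the server URL to start a
# session; the version and client name shown here are placeholders.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

body = json.dumps(initialize_request)
print(body)
```

You should not need to construct these messages by hand; this is only to show what "HTTP transport" means in practice.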

Authentication

The MCP server uses OAuth for authentication. When you first connect, you will be prompted to authorize access through your pganalyze account. Access is further limited by your account permissions: for example, if your account can only view certain servers, the MCP server is restricted to those same servers.

OAuth authorization prompt when connecting to the pganalyze MCP server

Available tools

The MCP server exposes tools, organized by the type of data they access. Since this feature is in preview, the available tools and their parameters may change.

Servers

  • list_servers: List monitored PostgreSQL servers
  • get_server_details: Get details for a specific server
  • get_postgres_settings: Get PostgreSQL configuration settings

Databases

  • get_databases: List databases with size stats and issue counts

Queries

  • get_query_stats: Get top queries by runtime percentage
  • get_query_details: Get full normalized query text
  • get_query_samples: Get sample executions with runtime and parameters

Tables

  • get_tables: List tables with filtering and pagination
  • get_table_stats: Get time-series table statistics
  • get_index_selection: Get Index Advisor results for an existing run
  • run_index_selection: Run the Index Advisor for a table

EXPLAIN Plans

  • get_query_explains: List EXPLAIN plans for a query (last 7 days)
  • get_query_explain: Get a specific EXPLAIN plan with full output
  • get_query_explain_from_trace: Resolve a trace span to an EXPLAIN plan (requires OpenTelemetry integration)

Backends

  • get_backend_counts: Get time-series connection counts by state
  • get_backends: Get a point-in-time snapshot of active connections
  • get_backend_details: Get details for a specific connection

Issues

  • get_issues: Get active check-up issues and alerts
  • get_checkup_status: Get check-up status overview for a database
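When your AI tool invokes one of these tools, it sends a JSON-RPC tools/call request naming the tool and its arguments. As a sketch of that shape, here is a hypothetical call to get_query_stats; the argument names are placeholders, not the documented parameter schema, which may change while the feature is in preview:

```python
import json

# Hypothetical "tools/call" request invoking get_query_stats. The tool name
# comes from the list above; the "arguments" keys are illustrative only.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_query_stats",
        "arguments": {"database_id": "12345"},
    },
}

print(json.dumps(call_request, indent=2))
```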

Example use cases

  • Investigate slow queries during development: Ask your AI tool to pull the top queries by runtime for a specific database using get_query_stats, then inspect EXPLAIN plans for a query that regressed using get_query_explains and get_query_explain.
  • Review active issues: Check the current check-up status for a database using get_checkup_status to see if there are unresolved alerts, such as insufficient VACUUM frequency or unused indexes.
  • Trace a slow request to its query plan: If your application uses OpenTelemetry tracing, resolve a trace span to the corresponding EXPLAIN plan using get_query_explain_from_trace to understand why a specific request was slow.
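The first use case above amounts to a short sequence of tool calls. Sketched as data, with tool names from the list above and placeholder arguments:

```python
# The slow-query investigation as an ordered list of (tool, arguments)
# steps. Tool names match the MCP server's tool list; all argument names
# and IDs are illustrative placeholders.
workflow = [
    ("get_query_stats", {"database_id": "12345"}),    # find top queries by runtime
    ("get_query_explains", {"query_id": "67890"}),    # list recent EXPLAIN plans for one query
    ("get_query_explain", {"explain_id": "abcde"}),   # fetch a single plan's full output
]

for tool, args in workflow:
    print(f"{tool}({args})")
```

In practice your AI assistant chooses and chains these calls itself; the sketch only makes the order of operations explicit.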

Couldn't find what you were looking for or want to talk about something specific?
Start a conversation with us →