MCP Server (Preview)
MCP Server is in Early Access
This feature is available to a limited number of customers in early access and may change without notice. Reach out to us if you're interested.
The pganalyze MCP (Model Context Protocol) server allows AI assistants to interact with your pganalyze data. This enables AI coding tools like Claude Code, Codex, or Cursor to query server metrics, inspect EXPLAIN plans, run the Index Advisor, and review active issues.
For a demo of the MCP server in action, watch the webinar recording:
Setup
Claude Code
Add the pganalyze MCP server using the CLI:
claude mcp add --transport http pganalyze https://app.pganalyze.com/mcp
Codex
Add the pganalyze MCP server in your project's codex.json configuration:
{
"mcpServers": {
"pganalyze": {
"type": "url",
"url": "https://app.pganalyze.com/mcp"
}
}
}
Cursor
In Cursor settings, add a new MCP server with the following configuration:
{
"mcpServers": {
"pganalyze": {
"url": "https://app.pganalyze.com/mcp"
}
}
}
Other MCP clients
Any MCP client that supports HTTP transport can connect to the pganalyze MCP server at https://app.pganalyze.com/mcp. Refer to your client's documentation for how to configure HTTP-based MCP servers.
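At the protocol level, an HTTP-based MCP client exchanges JSON-RPC 2.0 messages with the server endpoint. As a rough illustration of what your client sends under the hood, here is a minimal sketch of the message used to discover the server's tool catalog. This is an assumption-laden sketch of the generic MCP wire format, not pganalyze-specific code: real clients also perform the `initialize` handshake and the OAuth flow described below before issuing tool requests.

```python
import json

# The pganalyze MCP endpoint (from the docs above).
MCP_ENDPOINT = "https://app.pganalyze.com/mcp"

def build_tools_list_request(request_id=1):
    """Build the JSON-RPC 2.0 message an MCP client posts to ask the
    server for its list of available tools (the "tools/list" method)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
        "params": {},
    }

# Serialize the payload as it would appear in the HTTP request body.
print(json.dumps(build_tools_list_request(), indent=2))
```

In practice you would let your MCP client library handle this exchange; the sketch only shows why any client with generic HTTP (Streamable HTTP) transport support can talk to the endpoint.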
Authentication
The MCP server uses OAuth for authentication. When you first connect, you will be prompted to authorize access through your pganalyze account. Access is further limited by your account permissions: for example, if you can only view certain servers, the MCP server is subject to the same restriction.
Available tools
The MCP server exposes tools, organized by the type of data they access. Since this feature is in preview, the available tools and their parameters may change.
| Tool | Description |
|---|---|
| Servers | |
| `list_servers` | List monitored PostgreSQL servers |
| `get_server_details` | Get details for a specific server |
| `get_postgres_settings` | Get PostgreSQL configuration settings |
| Databases | |
| `get_databases` | List databases with size stats and issue counts |
| Queries | |
| `get_query_stats` | Get top queries by runtime percentage |
| `get_query_details` | Get full normalized query text |
| `get_query_samples` | Get sample executions with runtime and parameters |
| Tables | |
| `get_tables` | List tables with filtering and pagination |
| `get_table_stats` | Get time-series table statistics |
| `get_index_selection` | Get Index Advisor results for an existing run |
| `run_index_selection` | Run the Index Advisor for a table |
| EXPLAIN Plans | |
| `get_query_explains` | List EXPLAIN plans for a query (last 7 days) |
| `get_query_explain` | Get a specific EXPLAIN plan with full output |
| `get_query_explain_from_trace` | Resolve a trace span to an EXPLAIN plan (requires OpenTelemetry integration) |
| Backends | |
| `get_backend_counts` | Get time-series connection counts by state |
| `get_backends` | Get a point-in-time snapshot of active connections |
| `get_backend_details` | Get details for a specific connection |
| Issues | |
| `get_issues` | Get active check-up issues and alerts |
| `get_checkup_status` | Get check-up status overview for a database |
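When an AI assistant invokes one of these tools, the request travels as a JSON-RPC `tools/call` message. The sketch below builds such a message for `get_query_stats` (a tool name from the table above); the argument names used here (`database_id`) are illustrative assumptions only — the actual parameter schemas are returned by the server's tool listing and may change while this feature is in preview.

```python
import json

def build_tool_call(name, arguments, request_id=2):
    """Build a JSON-RPC 2.0 "tools/call" message invoking an MCP tool.

    `name` is the tool identifier; `arguments` is a dict matching that
    tool's input schema (names here are hypothetical examples).
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Hypothetical call: fetch top queries by runtime for one database.
call = build_tool_call("get_query_stats", {"database_id": "db-123"})
print(json.dumps(call, indent=2))
```

Your MCP client constructs and sends these messages for you; this is only to show what "the assistant runs a tool" means on the wire.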
Example use cases
- Investigate slow queries during development: Ask your AI tool to pull the top queries by runtime for a specific database using `get_query_stats`, then inspect EXPLAIN plans for a query that regressed using `get_query_explains` and `get_query_explain`.
- Review active issues: Check the current check-up status for a database using `get_checkup_status` to see if there are unresolved alerts, such as insufficient VACUUM frequency or unused indexes.
- Trace a slow request to its query plan: If your application uses OpenTelemetry tracing, resolve a trace span to the corresponding EXPLAIN plan using `get_query_explain_from_trace` to understand why a specific request was slow.
Couldn't find what you were looking for or want to talk about something specific?
Start a conversation with us →