Active Queries
Check Frequency
Near Realtime (every 10 seconds)
Default Configuration
Detects queries currently running for longer than the specified threshold of 1800 seconds and creates an issue with severity "warning". Escalates to "critical" if any queries are running longer than 3600 seconds. Resolves automatically once these queries stop running.
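For reference, here is a minimal sketch of how you could list such queries yourself from pg_stat_activity (this is not how pganalyze implements the check; the 1800-second value simply mirrors the default threshold above):

-- List active queries that have been running longer than 1800 seconds
SELECT pid,
       now() - query_start AS runtime,
       state,
       wait_event_type,
       left(query, 100) AS query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '1800 seconds'
ORDER BY runtime DESC;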
Ignores queries from backup programs (pg_dump, Heroku Postgres Backups), as well as any queries that contain the /* pganalyze:no-alert */ or /* pganalyze=no-alert */ magic comment.
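As an illustration, the magic comment is included directly in the query text (the table name below is hypothetical):

/* pganalyze:no-alert */ SELECT count(*) FROM large_archive_table;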
This check is enabled by default. These parameters can be tuned in the Configure section of the Alerts & Check-Up page.
Guidance
Impact
Long-running queries can hold locks, blocking other queries. They can prevent VACUUM (including autovacuum) from cleaning up dead rows, leading to table or index bloat. They may also be consuming a lot of I/O or CPU resources, impacting overall performance.
Common Causes
Locks
The queries may be waiting for locks. Check the Wait Events page for the associated backends, and look for any wait events of type Lock. If you have log_lock_waits turned on, you can also see details about locking in the Logs page for the backend.
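If you want to inspect lock waits directly in Postgres, a sketch like the following can help (pg_blocking_pids is available on Postgres 9.6 and newer):

-- Show backends currently waiting on a lock and which PIDs are blocking them
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       wait_event,
       now() - query_start AS waiting_for,
       left(query, 100) AS query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';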
Long-running migration
If some of these queries are DDL that is part of a database migration, they may need to rewrite a table (e.g., changing the type of a column) or scan a whole table (e.g., CREATE INDEX without CONCURRENTLY), which can take much longer in production than in a staging or development environment. If your migration is transactional, consider canceling it and refactoring it to avoid table rewrites.
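For example, an index build can often be rewritten to avoid blocking writes for its full duration (the table and column names here are hypothetical, and the two statements are alternatives):

-- Blocks writes to the table for the duration of the build:
CREATE INDEX orders_customer_id_idx ON orders (customer_id);

-- Allows concurrent writes, at the cost of a longer build; note that
-- CREATE INDEX CONCURRENTLY cannot run inside a transaction block:
CREATE INDEX CONCURRENTLY orders_customer_id_idx ON orders (customer_id);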
Missing index
If a query lacks the necessary indexes to execute efficiently, it may take longer and use more I/O than necessary, negatively impacting the whole system. Check the Index Check tab for the referenced queries to review indexing recommendations.
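A quick way to confirm this manually is to look at the query's plan: a sequential scan over a large table where an index lookup was expected often points to a missing index (the query below is hypothetical, and EXPLAIN ANALYZE actually executes the query):

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;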
Data changes
If the data accessed by a query changes dramatically (either grows in size or changes in terms of statistics relevant to query planning), the query may start performing poorly. You may need to review the EXPLAIN plans for the query, change indexes, or create auxiliary statistics to give Postgres more information about the distribution of the underlying data. You should also check vacuum activity on the tables involved to make sure bloat is under control and that the tables have been analyzed recently.
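One way to check vacuum and analyze activity on the involved tables is to query pg_stat_user_tables directly; this is a generic sketch, not a pganalyze-specific query:

-- Check dead-row counts and when tables were last vacuumed/analyzed
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_vacuum,
       last_autovacuum,
       last_analyze,
       last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;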
Solution
If you've confirmed the queries are causing problems, you can cancel any of them with SELECT pg_cancel_backend('<query_pid>');. Note that this only treats the symptom: if the query runs again without any changes, it is likely to run slowly again.
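If you need to cancel several of them at once, a sketch like the following combines the lookup and the cancellation; this assumes you have already verified that these queries are safe to cancel:

-- Cancel all queries active for more than 1800 seconds, excluding this session
SELECT pid, pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '1800 seconds'
  AND pid <> pg_backend_pid();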