Out of Disk Space

Check Frequency

Every 30 minutes

Default Configuration

Detects when disk usage on the partition containing your data directory reaches 90% and creates an issue with severity "warning". Escalates to "critical" when disk usage reaches 98%. Resolves once usage drops below 90% again.

Ignores situations where more than 50 GB is still available, regardless of the percent utilization.

This check is enabled by default. These parameters can be tuned in the Configure section of the Alerts & Check-Up page.



The database may soon be unable to accept new writes.

Common Causes

  • Increase in table/index disk space usage

    Some tables or indexes may be growing faster than expected. You can check the server-wide Schema statistics overview to see which databases are using the most disk space, and then check database-specific schema statistics to see which tables or indexes are taking the most space within that database.

  • Temporary files

    If the Postgres temp_tablespaces setting is empty or set to a directory on the same partition as the data directory, you may have a long-running query using a lot of temporary files and consuming disk space. Canceling the query with pg_cancel_backend will release its temporary files and may reclaim some space.
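
If you prefer to check space usage directly in Postgres rather than through the schema statistics pages, queries along these lines use the standard size functions (the LIMIT is arbitrary; adjust to taste):

```sql
-- Largest databases on this server (requires sufficient privileges)
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;

-- Largest tables, materialized views and indexes in the current database
SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind IN ('r', 'm', 'i')
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 20;
```

Note that pg_total_relation_size includes a table's indexes and TOAST data, so it can exceed the sum of the visible columns.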


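To find queries that might be responsible for temporary file usage, a sketch like the following can help. Postgres does not report per-backend temporary file usage in pg_stat_activity, but temporary files on disk live under pgsql_tmp and include the owning backend's PID in their filename, so long-running active queries are the usual candidates (the pid 12345 below is a placeholder):

```sql
-- Long-running active queries (candidates for heavy temp file usage)
SELECT pid, now() - query_start AS runtime, left(query, 60) AS query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY runtime DESC NULLS LAST;

-- Cancel a specific backend once identified (replace 12345 with the pid)
SELECT pg_cancel_backend(12345);
```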
Expanding storage and deleting data can both resolve this issue. Note, however, that because of the MVCC mechanism Postgres uses, deleting rows is unlikely to reclaim much space until you can run VACUUM FULL (which takes a full table lock, blocking all other queries, and needs considerable extra disk space while executing) or use pg_repack to reclaim disk space. Running TRUNCATE to purge all data from some tables can also free space immediately.
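
As a sketch of the options above (the table and database names are placeholders):

```sql
-- Rewrites the table and its indexes, reclaiming dead-tuple space;
-- takes an ACCESS EXCLUSIVE lock and temporarily needs enough free
-- space for the new copy of the table
VACUUM FULL my_table;

-- Immediately frees all space used by a table whose contents you
-- no longer need
TRUNCATE my_table;

-- Alternatively, pg_repack (installed separately) rewrites the table
-- online with only brief locks; run from the shell, for example:
--   pg_repack --table=my_table mydb
```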
