S5: Out of memory

Category: Server Events
SQLSTATE: 53200 (Class 53 — Insufficient Resources: out_of_memory)
Urgency: high

Example Postgres Log Output:

ERROR: out of memory
DETAIL: Failed on request of size 324589128.
STATEMENT: SELECT 1
TopMemoryContext: 68688 total in 10 blocks; 2184 free (7 chunks); 66504 used
  TopTransactionContext: 8192 total in 1 blocks; 7328 free (0 chunks); 864 used
  TableSpace cache: 8192 total in 1 blocks; 3216 free (0 chunks); 4976 used
  Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 used
  MessageContext: 8192 total in 1 blocks; 6976 free (0 chunks); 1216 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  smgr relation table: 24576 total in 2 blocks; 9808 free (4 chunks); 14768 used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 8192 total in 1 blocks; 7888 free (0 chunks); 304 used
    PortalHeapMemory: 15360 total in 4 blocks; 6304 free (0 chunks); 9056 used
      ExecutorState: 452800624 total in 4 blocks; 18040 free (9 chunks); 452782584 used
        ExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
  Relcache by OID: 24576 total in 2 blocks; 13872 free (3 chunks); 10704 used
  CacheMemoryContext: 817840 total in 20 blocks; 53136 free (1 chunks); 764704 used
    ...
  MdSmgr: 8192 total in 1 blocks; 7872 free (0 chunks); 320 used
  LOCALLOCK hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  Timezones: 83472 total in 2 blocks; 3744 free (0 chunks); 79728 used
  ErrorContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
LOG: server process (PID 123) was terminated by signal 9: Killed

Explanation:

This low-level out-of-memory (OOM) error occurs when Postgres is unable to allocate the memory required to run a query, or to complete some other internal operation.

Note that, depending on your operating system configuration, you might also see the "terminated by signal 9: Killed" message, which occurs when the operating system's OOM killer goes into action because the whole system is running out of memory.

Postgres is usually good at handling explicit out-of-memory errors: if the out-of-memory condition is only momentary, it will recover without a restart and without crashing.

On the other hand, if you get the OOM killer message indicating that a process was terminated by the kernel, Postgres will restart and then switch into S3 - Server Start (Recovering).

In general, it is not advisable to operate your database under a recurring OOM condition, as unexpected error messages may appear at any time and processes are at risk of being terminated by the OS.

Recommended Action:

First of all, if you see this log event, it makes sense to capture the full message for later analysis, including the last part that starts with TopMemoryContext and describes individual memory contexts in detail. This quite lengthy memory statistics dump can help determine which part of Postgres was using RAM.
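
As a side note, if you are running Postgres 14 or newer, similar memory context statistics can also be requested on demand whilst a backend is still running. The following is a minimal sketch; the PID 123 simply mirrors the example log above and is purely illustrative, and both calls require a suitably privileged role:

-- Ask the backend with PID 123 to write its memory context statistics to the server log
SELECT pg_log_backend_memory_contexts(123);

-- Or inspect the current session's largest memory contexts directly
SELECT name, used_bytes, total_bytes
  FROM pg_backend_memory_contexts
 ORDER BY used_bytes DESC
 LIMIT 10;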

More generally, out-of-memory errors often occur when a too-high work_mem setting is combined with a high number of active connections. Since each connection can use up to work_mem per sort or hash operation, the total adds up quickly: for example, 100 active connections each performing a single sort with work_mem set to 256MB could request around 25GB of memory.

If you are getting frequent OOM errors, a good first step is to reduce work_mem, which should improve stability at the cost of more sort and hash operations going to disk instead.
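
As a minimal sketch, assuming you want to lower the cluster-wide default from a previously raised value, this can be done with ALTER SYSTEM followed by a configuration reload. The 16MB value is purely illustrative; choose a value based on your workload and connection count:

-- Check the current setting first
SHOW work_mem;

-- Illustrative value only: lower the cluster-wide default and reload the configuration
ALTER SYSTEM SET work_mem = '16MB';
SELECT pg_reload_conf();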

It also often makes sense to monitor the number of active connections at the time of the issue, e.g. through pganalyze's Connection Tracing feature.
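
If you want a quick point-in-time view directly from Postgres, a sketch like the following counts connections grouped by their state, assuming you are able to connect whilst the issue is occurring:

-- Count connections per state (active, idle, idle in transaction, ...)
SELECT state, count(*)
  FROM pg_stat_activity
 GROUP BY state
 ORDER BY count(*) DESC;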

Additionally, if you absolutely need more RAM to work with, you can evaluate reducing shared_buffers to leave more RAM available for memory directly used by connections. This should be done carefully, whilst actively watching Buffer Cache Hit Ratio statistics.
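
As a rough sketch of how you might watch this whilst making the change (the 2GB value is purely illustrative), the cache hit ratio can be calculated from pg_stat_database, and shared_buffers can be changed with ALTER SYSTEM followed by a server restart:

-- Buffer cache hit ratio across all databases
SELECT round(sum(blks_hit) * 100.0 / nullif(sum(blks_hit) + sum(blks_read), 0), 2) AS cache_hit_ratio_pct
  FROM pg_stat_database;

-- Illustrative value only: reduce shared_buffers (requires a server restart to take effect)
ALTER SYSTEM SET shared_buffers = '2GB';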
