S1: Server process was terminated / Terminating connection because of crash
SQLSTATE: 57P02 (Class 57 — Operator Intervention: crash_shutdown)
Urgency: high
Example Postgres Log Output:
LOG: server process (PID 660) was terminated by signal 6: Aborted
DETAIL: Failed process was running: SELECT pg_advisory_lock(1, 2);
LOG: terminating any other active server processes
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
LOG: all server processes terminated; reinitializing
Explanation:
One or more Postgres server processes have crashed and the server will be restarted to recover.
Typically this occurs due to a bug in Postgres itself or in a Postgres extension. It can also occur when an unexpected signal (e.g. SIGABRT) is sent to one of the Postgres processes.
Note that, due to the way Postgres uses shared memory, the crash of a single server process causes every server process to be restarted. That means you will not just see that one connection error out, but every other connection as well. The server transitions into S3 - Server Start (Recovering) after it has successfully terminated all processes.
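As a small illustration of the reconnect behavior described in the HINT above, a client can simply retry until recovery has finished. This is a minimal sketch using psql; the connection parameters are placeholders for your own:
# Hedged example: retry until the server accepts connections again
until psql -h localhost -U postgres -c 'SELECT 1' > /dev/null 2>&1; do
  echo "server still recovering, retrying in 1 second..."
  sleep 1
done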
Recommended Action:
Depending on your setup, you have different options for investigating the root cause of the crash. If this occurs on a hosted cloud platform such as Amazon RDS, you should reach out to their support team, as only they can access the necessary details on the underlying server system.
If this occurs on your own self-managed system, the next step is to review the core dump file for the crashed process. Note that you might need to enable core dumps first; see the Postgres wiki link at the end for details on how to do this.
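For reference, on a typical Linux system this involves raising the core file size limit for the process that starts Postgres, and optionally checking where the kernel writes core files. The exact steps depend on your distribution and service manager; the following is a sketch, not a complete guide:
# Raise the core file size limit in the shell that starts Postgres
ulimit -c unlimited
# With systemd, set the equivalent limit in the service unit instead:
#   [Service]
#   LimitCORE=infinity
# Check where and under which name the kernel writes core files
sysctl kernel.core_pattern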
Assuming you have already enabled core dumps and are on a Linux/BSD system, you can use gdb to access the core dump like this, replacing the locations of the postgres binary and the core dump (named postmaster.1271644870 here) to match the correct ones on your system (gdb expects the binary first, then the core file):
gdb /usr/lib/postgresql/9.6/bin/postgres /var/lib/postgresql/9.6/main/postmaster.1271644870
This puts you into the debugger; the most useful part is the backtrace:
(gdb) bt
#0 0x00007f5859747ec1 in memcpy () from /lib/libc.so.6
#1 0x00007f5857321621 in postgis_crash (fcinfo=0x7fff622f5f70) at lwgeom_functions_basic.c:114
#2 0x000000000058a7a9 in ExecMakeFunctionResult (fcache=0xc51310, econtext=0xc511e0, isNull=0xc518c0 "..", isDone=0xc51978)
at execQual.c:1351
#3 0x000000000058b0a4 in ExecEvalFunc (fcache=0xc51310, econtext=0xc511e0, isNull=0xc518c0 "..", isDone=0xc51978) at execQual.c:1753
#4 0x0000000000591a72 in ExecTargetList (targetlist=0xc51750, econtext=0xc511e0, values=0xc518a0, isnull=0xc518c0 "..",
itemIsDone=0xc51978, isDone=0x7fff622f6494) at execQual.c:4610
#5 0x0000000000591f3a in ExecProject (projInfo=0xc518e0, isDone=0x7fff622f6494) at execQual.c:4811
#6 0x00000000005a200a in ExecResult (node=0xc510c8) at nodeResult.c:155
#7 0x0000000000587bc1 in ExecProcNode (node=0xc510c8) at execProcnode.c:319
#8 0x00000000005853af in ExecutePlan (estate=0xc50e98, planstate=0xc510c8, operation=CMD_SELECT, numberTuples=0, direction=ForwardScanDirection, dest=0xc07008)
at execMain.c:1335
#9 0x0000000000583796 in ExecutorRun (queryDesc=0xbe9150, direction=ForwardScanDirection, count=0) at execMain.c:270
#10 0x000000000066022d in PortalRunSelect (portal=0xc08ba8, forward=1 '\001', count=0, dest=0xc07008) at pquery.c:943
#11 0x000000000065fe7d in PortalRun (portal=0xc08ba8, count=9223372036854775807, isTopLevel=1 '\001', dest=0xc07008, altdest=0xc07008, completionTag=0x7fff622f6890 "")
at pquery.c:769
#12 0x000000000065a32e in exec_simple_query (query_string=0xbcb078 "select postgis_crash();") at postgres.c:1004
#13 0x000000000065e232 in PostgresMain (argc=4, argv=0xb360c0, username=0xb36080 "pg83") at postgres.c:3631
#14 0x000000000061f9f2 in BackendRun (port=0xb4bce0) at postmaster.c:3207
#15 0x000000000061efc2 in BackendStartup (port=0xb4bce0) at postmaster.c:2830
#16 0x000000000061cb35 in ServerLoop () at postmaster.c:1274
#17 0x000000000061c570 in PostmasterMain (argc=3, argv=0xb33310) at postmaster.c:1029
#18 0x00000000005b61ba in main (argc=3, argv=0xb33310) at main.c:188
For example, in this case we can see that the actual crash occurred in a PostGIS function, i.e. in an extension to Postgres rather than in Postgres itself. It usually makes sense to report such extension crashes directly to the project in question.
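If you want to capture the backtrace non-interactively, for example to attach it to a bug report, gdb's batch mode can print it directly. This sketch reuses the example paths from above; adjust them to your system:
gdb --batch -ex bt /usr/lib/postgresql/9.6/bin/postgres /var/lib/postgresql/9.6/main/postmaster.1271644870 > backtrace.txt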
If you are seeing a lot of ?? entries in the backtrace, together with errors about missing debug symbols, you will need to install the debug info packages for your Postgres binary. After you've installed the debugging information, the symbols in the core dump should resolve correctly.
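Package names vary by distribution and Postgres version; as an assumed example for the 9.6 packages used above, the debug symbols might be installed like this:
# Debian/Ubuntu (PGDG packages; name varies by Postgres major version)
apt-get install postgresql-9.6-dbg
# RHEL/CentOS (requires yum-utils)
debuginfo-install postgresql96-server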
You may also want to first search for known issues based on the top lines of the backtrace, especially if you are not running the latest Postgres or extension version.
Learn More:
Postgres wiki: Getting a stack trace of a running PostgreSQL backend on Linux/BSD (https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD)