L71: Process still waiting for lock on tuple / relation / object
SQLSTATE: n/a
Urgency: low
Example Postgres Log Output:
LOG: process 2078 still waiting for ShareLock on transaction 1045207414 after 1000.100 ms
DETAIL: Process holding the lock: 583. Wait queue: 2078, 456
QUERY: INSERT INTO x (y) VALUES (1)
CONTEXT: PL/pgSQL function insert_helper(text) line 5 at EXECUTE statement
STATEMENT: SELECT insert_helper($1)
LOG: process 663 still waiting for ShareLock on virtual transaction 2/7 after 1000.123 ms
LOG: process 663 still waiting for ExclusiveLock on advisory lock [233136,1,2,2] after 1000.365 ms
DETAIL: Process holding the lock: 660. Wait queue: 663.
STATEMENT: SELECT pg_advisory_lock(1, 2);
Explanation:
You will see this log message when log_lock_waits = on is configured, deadlock_timeout has been exceeded (default 1s), and a process is still waiting for a lock.
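If you need to turn this logging on, a minimal sketch using ALTER SYSTEM (assumes superuser access; the deadlock_timeout value shown is simply the default):

-- Log a message whenever a session waits longer than deadlock_timeout for a lock
ALTER SYSTEM SET log_lock_waits = on;
-- deadlock_timeout also controls when the wait is reported (1s is the default)
ALTER SYSTEM SET deadlock_timeout = '1s';
-- Apply the changed settings without a restart
SELECT pg_reload_conf();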
This typically occurs when another long-running query takes a conflicting lock, e.g. a schema migration that accidentally holds an exclusive lock on a table for longer than expected.
You may also see this with lock contention around specific rows in a table, especially when you have complex transactions that take a long time to COMMIT.
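As a hypothetical illustration of how such a wait arises (the table x is taken from the example log output above, the session split is assumed):

-- Session A: a migration step that takes an ACCESS EXCLUSIVE lock and keeps its transaction open
BEGIN;
ALTER TABLE x ADD COLUMN z int;
-- (not yet committed, e.g. because later migration steps are slow)

-- Session B: blocks on the relation lock; after deadlock_timeout the "still waiting" message is logged
INSERT INTO x (y) VALUES (1);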
The details of the log message tell you which other process is holding the lock, as well as the complete wait queue for the lock in question. Note that the current process PID is also listed in the wait queue, and that the first in the list will be granted the lock next.
Note that even though the logging of this message is tied to deadlock_timeout, it doesn't necessarily indicate a deadlock; it is merely a side effect of that same timeout, since lock checks are relatively expensive for the database to run. If you are worried about deadlocks, you should instead watch for L73: Deadlock detected.
Recommended Action:
Review the current activity on the database, in particular the query that is being run by the process holding the lock.
You may have a long-running DDL or DML statement holding the offending lock, or a connection in "idle in transaction" state where the client has disappeared but didn't disconnect cleanly.
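While the wait is ongoing, one way to see what the blocking process is doing is pg_blocking_pids() together with pg_stat_activity. A sketch, assuming 2078 is the PID of the waiting process (as in the example log output above):

SELECT pid, state, wait_event_type, wait_event, xact_start, query
  FROM pg_stat_activity
 WHERE pid = ANY (pg_blocking_pids(2078));

A blocking session that shows up as idle in transaction with an old xact_start is a good indication that a client is holding a transaction open without doing work.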
If you see these messages frequently, but the problem has resolved itself by the time you investigate, it makes most sense to look for a pattern in the acquired locks and determine where the contention is.
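A sketch of one way to look for that pattern, using the pg_locks view to aggregate which lock types and relations are currently being waited on (relation is NULL for non-relation lock types):

SELECT locktype, relation::regclass AS relation, mode, count(*) AS waiting
  FROM pg_locks
 WHERE NOT granted
 GROUP BY 1, 2, 3
 ORDER BY waiting DESC;

Since pg_locks only shows the current state, this is most useful when run while the contention is happening, or sampled periodically.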
It is sometimes also useful to investigate the exact sequence of queries executed and any other work done by the client, since locks are held for the duration of a transaction. If an application worker executes an external API call inside a transaction, all other workers affecting the same rows have to queue until the API call finishes and the transaction commits.