
GitLab's challenge with Postgres LWLock lock_manager contention

In E91 of “5mins of Postgres”, we're going to talk about lock_manager LWLock contention in Postgres and how the team at GitLab has addressed this production problem.





Transcript

Debugging a problem with LWLock lock_manager contention

We'll start with this issue by Matt Smiley from the GitLab team. I want to first mention that it's great that GitLab documents so much of their work publicly. We can all learn a lot from their production use of Postgres: they run a lot of Postgres databases on virtual machines in Google Cloud, and there is a lot of operational knowledge documented here. I think this is really gold for all of us trying to learn and improve our own use of Postgres in production.

In his post, Matt follows up on an incident from earlier in the year, investigating a particular problem they encountered in production. For a quick recap, this issue essentially caused their API and web services to be slow for a period of about two hours in April. That may not sound too bad, but at GitLab's scale, two hours of slowdown across your most important services is a big deal – and what they were really worried about was this repeating without them knowing why it was happening or why it kept recurring.

One of the things they determined back then, in the original incident review, was that the problem was caused by consolidating their database servers onto one fewer replica. Because of that, there was more query activity on each individual server, which then caused the contention that caused the slowness.

The immediate fix was to simply bring back that replica, so that the query workload was spread out over more servers again. But of course, it'd be interesting to understand why there was this contention to begin with. Matt took a close look at all the potential issues and why this problem might be happening.

Using pg_stat_activity for more insights

A lot of the investigation they've done here is looking at various metrics on the database. One of the most telling graphs Matt shows is where they looked at the pg_stat_activity view in Postgres and checked which wait events, if any, the currently running queries were waiting on.

You can see that when this contention was occurring, they saw a spike of hundreds of queries in the lock_manager LWLock wait event, meaning that clearly this was what was slowing down the workload. This was unexpected: it was not as if they had a big change in workload when this was happening!
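If you want to do similar wait event sampling on your own database, a minimal sketch looks like the query below. Note that starting with Postgres 13 this LWLock wait event is reported as "LockManager", whilst older releases show it as "lock_manager".

```sql
-- Count active backends per wait event. A spike of wait_event_type = 'LWLock'
-- and wait_event = 'LockManager' points at lock manager contention like
-- the one GitLab saw.
SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE state = 'active'
GROUP BY wait_event_type, wait_event
ORDER BY count(*) DESC;
```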

Another important aspect they mention is that, while there is a correlation between more queries and more contention here, not all queries cause the same amount of lock acquisition. So really, this is mainly a question of how many of these locks are being acquired.

Heavyweight locks vs. lightweight locks in Postgres

Now, talking about locks, an important note here is that there are different kinds of locks. When we talk about the lock manager, what this really is, is a structure in Postgres that handles heavyweight locks. These are the user-facing locks you would be familiar with, either from an explicit LOCK TABLE, or from the AccessExclusive or AccessShare lock that your query or schema modification takes.

These are called heavyweight locks, but there are also lightweight locks, or “LWLock” for short, in Postgres. Those are the ones used for internal concurrency control between different processes in Postgres. For example, the lock manager that maintains these heavyweight locks is a shared data structure, so you can't have two backends writing to the same part of that data structure at the same time.
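To make the distinction concrete, here is a small sketch, with a hypothetical table name, of the heavyweight locks a session holds. They are visible in the pg_locks view, whereas lightweight locks never show up there:

```sql
BEGIN;
-- A plain query takes an AccessShareLock on the table it reads (and, via
-- planning, on its indexes); an explicit LOCK TABLE takes a strong lock.
SELECT count(*) FROM projects;               -- "projects" is a placeholder
LOCK TABLE projects IN ACCESS EXCLUSIVE MODE;

-- Inspect the heavyweight locks held by this backend:
SELECT locktype, relation::regclass, mode, granted
FROM pg_locks
WHERE pid = pg_backend_pid();
ROLLBACK;
```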

When we see this contention, ultimately what's happening is that there is too much activity on a particular partition: the lock manager is essentially partitioned into 16 pieces, and each of these partitions has one lightweight lock controlling it. If you have too much concurrency on these individual lightweight locks, that's where you see this contention and this slowdown.


When a fast path happens

Now, a connection acquires these heavyweight locks during planning or query execution. There's what's called the fast path: that's when the connection does not need to go through the lock_manager LWLock and can acquire the lock with less concurrency control. On the other hand, there is the regular slow path, and it's in the slow path that we see this contention.

Now, for the fast path to be taken, there are a couple of rules that Matt summarizes. First of all, the requested lock has to be on a relation, like a table, an index or a view. Second, it has to be a weak mode of locking, meaning an AccessShare lock or individual rows being locked, but not the whole table being locked.

Because if you lock the whole table, then everybody else needs to know about that very explicitly. Third, and also important, no other transaction may have that same relation locked in a strong mode. The other important note is that there is only a limited number of fast path lock slots per connection, and that's a fixed number, limited to 16. We'll come back to that in a moment.
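The pg_locks view exposes whether a given heavyweight lock was taken via the fast path, so a rough way to see how often backends overflow those 16 slots is a sketch like this:

```sql
-- Relation locks per backend, split by whether they used the fast path.
-- Backends with many "fastpath = false" rows had to go through the shared
-- lock manager and its LWLock partitions.
SELECT pid, fastpath, count(*) AS relation_locks
FROM pg_locks
WHERE locktype = 'relation'
GROUP BY pid, fastpath
ORDER BY pid, fastpath;
```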

I think what's great here is that Matt shares some actual numbers from their production workload, gathered in a two-hour survey of their production database using bpftrace. On the read replicas, most of the locks meet those first three criteria, but the problem is the limit of 16 fast path locks, because that essentially means that a lot of cases cannot use the fast path.

There are different causes for this. When you look at a particular query, during query planning Postgres has to lock each of the tables involved in that query as well as their indexes. If you have a lot of tables being joined, that by definition means most of those locks cannot use the fast path, since you only have 16 fast path lock slots available.
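As a hypothetical illustration (the table names here are made up), every table in a join, plus every index the planner has to look at, needs its own lock, and only 16 of them per backend can use the fast path:

```sql
BEGIN;
-- With enough joined tables, and each of their indexes, the 16 fast path
-- slots run out and the remaining locks go through the shared lock manager.
SELECT count(*)
FROM issues i
JOIN projects p   ON p.id = i.project_id
JOIN namespaces n ON n.id = p.namespace_id
JOIN users u      ON u.id = i.author_id;

-- How many of this backend's relation locks overflowed past the fast path?
SELECT fastpath, count(*)
FROM pg_locks
WHERE pid = pg_backend_pid() AND locktype = 'relation'
GROUP BY fastpath;
ROLLBACK;
```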

Now, Nikolay, who works with the GitLab team on some of these issues, reported this to the mailing list, asking what Postgres could do better. Nikolay's suggestion was that the FP_LOCK_SLOTS_PER_BACKEND define could be raised, because that is ultimately the limit here: if we had more fast path lock slots, this would be less of an issue. Unfortunately, it's not as simple as that.

There is a lot of conversation here. Tomas Vondra actually had a working version where he raised that limit, but then there were clear slowdowns for certain workloads where you only need a few locks. And so it's not as simple as raising that define.

A helpful change in Postgres 16

The one thing I want to mention here, which is actionable for those of you encountering this contention today, is this response by Andres Freund to Matt and Nikolay. He points out that, first of all, Postgres 16 has a change that improves the quadratic behavior of these LWLock wait lists. Before Postgres 16, when you have multiple connections waiting on the same LWLock, things get very slow once there are more than a couple of waiting connections: the more connections are waiting, the more work Postgres has to do to walk that wait list.

Second of all, one of the important things to understand is that Postgres will take more locks during planning than during execution. So if you find something similar to what GitLab found, you could consider using prepared statements more heavily. Prepared statements have the big benefit that you pay the planning-time lock acquisition penalty only once. When you execute the query again, fewer locks are needed, because an index that's not actually used by the plan does not need to be locked during execution, whereas during planning the planner has to look at, and therefore lock, every index on the tables involved. That is a way to improve performance, because the execution would use fast path locking more often.
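As a rough sketch of what that could look like (the statement name and table are hypothetical), a prepared statement pays the planning-time lock acquisition up front, and once Postgres switches to a generic plan, repeated executions skip planning entirely:

```sql
-- Planning, and its lock acquisition, happens here and during the first few
-- executions while Postgres still builds custom plans for each parameter.
PREPARE issues_by_project (bigint) AS
  SELECT id, title FROM issues WHERE project_id = $1;

EXECUTE issues_by_project(42);
-- By default, after five executions Postgres considers a generic plan; once
-- it uses one, further EXECUTEs skip planning, so indexes the plan doesn't
-- use no longer need to be locked.
EXECUTE issues_by_project(42);
```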

Thank you so much for listening. This was five minutes of Postgres. Subscribe to our YouTube channel to hear about next week's episode and talk to you next week.

If you enjoyed E91 of 5mins of Postgres, feel free to subscribe to our YouTube channel, sign up for our newsletter and follow us on LinkedIn and Twitter to get updates about new episodes!


