PgBouncer 1.21 adds prepared statement support in transaction mode
In this episode, we're going through the new PgBouncer 1.21 release, which adds prepared statement support when PgBouncer runs in transaction mode. We talked about this previously in episode 73 of 5mins of Postgres a couple of months ago, and now it's actually released in a production version of PgBouncer.
Transcript
What is transaction mode in PgBouncer?
For those of you who don't know, transaction mode in PgBouncer is when PgBouncer reuses the same database connections for different transactions. Each BEGIN and COMMIT, or each standalone query, marks the boundary within which PgBouncer keeps the same server connection assigned to a particular client connection. Once the transaction is finished, it hands that server connection to the next transaction or the next query to be executed.
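To make that concrete, here is a minimal sketch from the client's point of view (the table and queries are made up for illustration):

```sql
-- In transaction pooling mode, PgBouncer pins a server connection to a
-- client only for the duration of a transaction (or a single query).
BEGIN;
SELECT balance FROM accounts WHERE id = 1;               -- runs on server connection A
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;                                                   -- connection A goes back into the pool

-- This next statement may well run on a different server connection,
-- because the previous transaction has already released connection A.
SELECT now();
```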
That's a very important mode to run PgBouncer in. When you use PgBouncer in production settings, what I've usually seen people do is install PgBouncer on the database side, next to the Postgres server, as a way to scale connections in Postgres more easily.
They use it in transaction mode so that when they have a lot of connections from the client side, they don't need as many connections to the actual database server.
Now, up until the 1.21 release of PgBouncer, you could not use prepared statements in such configurations, which means you couldn't benefit from things like the query plan caching that Postgres does for prepared statements.
Added support for prepared statements
As Greg Sabino Mullane describes on the Crunchy Data blog, this new release adds support for this. Let's dive in and take a look at how that works. Greg starts by describing prepared statements.
Better Postgres performance with prepared statements
He ran a test to check how much faster prepared statements are. What he was testing was essentially the fact that the planner takes a lot of time to come up with plans when complex joins are involved. In his example, he is joining pg_class with itself 14 times. That's something where Postgres will spend a lot of time thinking about the optimal join strategy, which makes planning expensive. You can see in Greg's article that the first time around he gets a planning time of 170 milliseconds, the next time he gets a very similar planning time, and the third time is also similar. Even if he runs the same statement multiple times, he's not going to get any reduction in planning time.
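As a rough sketch of the kind of query involved (this is not Greg's exact statement, just the same idea with fewer joins):

```sql
-- Self-joining a catalog table several times forces the planner to consider
-- many join orders, so planning time dominates the total runtime.
EXPLAIN (ANALYZE)
SELECT count(*)
FROM pg_class c1
JOIN pg_class c2 USING (oid)
JOIN pg_class c3 USING (oid)
JOIN pg_class c4 USING (oid)
JOIN pg_class c5 USING (oid)
JOIN pg_class c6 USING (oid)
JOIN pg_class c7 USING (oid)
JOIN pg_class c8 USING (oid);
-- EXPLAIN (ANALYZE) reports "Planning Time" and "Execution Time" separately;
-- in Greg's 14-way self-join, planning time was around 170 ms on every plain run.
```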
Reduction in planning time
If you run this same example with prepared statements, on the other hand, you'll see that the first execution is slow, but then the second and third executions are fast. This is one of the biggest benefits that prepared statements can give you, a reduction in planning time.
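Here is a minimal sketch of the same idea with a prepared statement (again, a smaller stand-in for Greg's query, with a made-up statement name):

```sql
-- PREPARE parses and analyzes the statement once; since this statement has
-- no parameters, Postgres can cache and reuse the plan on re-execution.
PREPARE big_join AS
SELECT count(*)
FROM pg_class c1
JOIN pg_class c2 USING (oid)
JOIN pg_class c3 USING (oid)
JOIN pg_class c4 USING (oid);

EXECUTE big_join;  -- first execution pays the planning cost
EXECUTE big_join;  -- later executions reuse the cached plan, planning time is near zero
EXECUTE big_join;
```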
Previous issues
Previously, PgBouncer in transaction pooling mode would actually complain with errors if you were trying to use prepared statements. It would say that it didn’t know about this prepared statement.
Really, the issue here was that PgBouncer would not recognize what you were doing with prepared statements, so it couldn't guarantee that a prepared statement you were trying to use in a transaction or in a query had actually been prepared on the server connection you were given. On the server side, Postgres keeps prepared statements local to each backend, to each process.
So you must guarantee that you're going back to the same connection where that prepared statement exists, or you need to re-PREPARE it. The work that was done in PgBouncer adds support for exactly that: PgBouncer prepares statements on the server side if needed, and it's transparent to the client.
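To make the old failure mode concrete, here is a sketch of roughly what went wrong before this release (statement name and query are hypothetical, and the protocol messages are shown as comments):

```sql
-- Transaction 1: the driver prepares "s1" via the extended query protocol.
-- PgBouncer happens to route this client to server connection A.
BEGIN;
-- Parse        "s1": SELECT * FROM users WHERE id = $1
-- Bind/Execute "s1" with id = 42    -- works, "s1" exists on connection A
COMMIT;

-- Transaction 2: PgBouncer may hand this client server connection B,
-- where "s1" was never prepared.
BEGIN;
-- Bind/Execute "s1" with id = 43
-- ERROR:  prepared statement "s1" does not exist
-- (the transaction then fails and has to be rolled back)
```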
New max_prepared_statements setting in PgBouncer 1.21
Greg wrote a little Perl test script that essentially tests the prepared statement support. On older PgBouncer versions, this will give an error saying "prepared statement does not exist". In PgBouncer 1.21, we set the new max_prepared_statements setting, which lets PgBouncer manage a certain number of prepared statements on each server connection.
Now, presumably you wouldn't want to set this to a very high value because that would cause higher memory utilization on the database side. So 10 is probably a good starting point. You could probably go a little bit higher, I imagine, but I would watch the connection memory utilization closely.
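As a sketch, enabling this in pgbouncer.ini could look like the following (the value of 10 is just the starting point discussed above, not a recommendation for every workload):

```ini
[pgbouncer]
pool_mode = transaction
; Maximum number of prepared statements PgBouncer tracks per server
; connection; it needs to be set to a non-zero value to enable the feature.
max_prepared_statements = 10
```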
In Greg’s article you can see that, with this in place, his Perl test script just works, right? You didn't have to change anything on the client side. It just works.
Caveats when using prepared statements in transaction mode
There are a couple of caveats that Greg mentions, and I think they're very important. First of all, you can only use this with protocol-level prepared statements. That means that, depending on your client library, it may not work if the library isn't using those protocol-level functions and instead sends raw SQL statements like PREPARE and EXECUTE.
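For contrast, these are the SQL-level commands; they travel as plain query text, so the new PgBouncer support does not apply to them (the statement name and table are made up). Protocol-level prepared statements are what a driver creates through the extended query protocol, for example via libpq's PQprepare, rather than by sending PREPARE as SQL:

```sql
-- SQL-level prepared statements: plain text commands that PgBouncer's new
-- prepared statement handling does not cover in transaction mode.
PREPARE get_user AS SELECT * FROM users WHERE id = $1;
EXECUTE get_user(42);
DEALLOCATE get_user;
```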
The second thing, and that's actually something I didn't know about, is that currently there is no way to do a protocol-level DEALLOCATE.
Only Postgres 17, which will come out next year, adds support for closing a prepared statement that was prepared through the protocol. That means right now you will see issues with client-side drivers doing DEALLOCATE and that not working as expected, and so you will get errors.
PgBouncer has a way to clean up the statements itself, so the problem isn't really on the database side as much as it is on the client side, where you'll get these weird errors. In the Perl driver there is a new method to skip deallocations to avoid these error cases. Maybe we'll see other ORMs, other libraries, also adopt a way to turn off deallocations because we trust PgBouncer to issue the DEALLOCATEs for us.
All in all, I think this is really cool. I also want to come back to the people who initially worked on this. We previously mentioned this, but Jelte on the PgBouncer team and Konstantin drove a lot of this effort to get this change into PgBouncer.
You can see here, there were a lot of reviewers also on the PgBouncer PR, and so I think that just shows how much work was behind this. I'm just very excited to see so many improvements to PgBouncer recently, and I look forward to using this myself.
Thanks for your interest in E88 of 5mins of Postgres! Subscribe to our YouTube channel, sign up for our newsletter and follow us on LinkedIn and Twitter to get updates about new episodes!
What we have discussed in this episode of 5mins of Postgres
- 5mins of Postgres E73: Speed up Postgres with transaction pooling with prepared statements
- PgBouncer 1.21.0 released - Now with prepared statements! - on postgresql.org
- Prepared Statements in Transaction Mode for PgBouncer - by Greg Sabino Mullane
- Support of prepared statements - PR on the PgBouncer repo on GitHub