5mins of Postgres E15: Tuning Postgres' Deterministic Query Planner, Extended Statistics and Join Collapse Limits

Today, we'll talk about optimizing with the Postgres deterministic query planner, have a look at using extended statistics, and show how to tune join collapse limits.





Transcript

Let's jump in!

Optimizing with Postgres’ deterministic query planner

In this blog post, Alex Bartlow from the engineering team at Aha! talks about three different ways they've tuned the Postgres planner, and shows a specific case where they improved query performance by up to 20,000%. Pretty bold claim. We'll talk more about that in a second.

Setting costs for page access

First of all, they mention setting the cost for page access. For each table in your query, the planner weighs a full table scan against an index scan and the other access paths that are available. It uses two important parameters to cost that disk access:

  • random_page_cost is used when the planner expects to make random reads, for example when following an index, where the pages it needs are scattered all over the table.
  • seq_page_cost comes into play for a sequential scan, where big, contiguous portions of the table have to be loaded.

Pages in Postgres are usually eight kilobytes, which means each of these cost settings describes the cost of fetching eight kilobytes worth of data.

random_page_cost defaults to 4.0, and seq_page_cost to 1.0. These defaults model a random page access as forty times slower than a sequential one, but they also assume that 90% of random reads hit pages that are already cached in memory. That's why random_page_cost is 4 rather than 40: only the 10% of reads that actually go to disk pay the 40x penalty, and 0.1 × 40 = 4.

That model assumes a spinning metal storage device, a good old hard disk. With a typical NVMe drive on a cloud provider these days, random access is probably closer to 5 to 10x slower than sequential access. It's still more expensive, but not as expensive as it was on older drives.

When they updated their random_page_cost setting, they didn't just mess around with the parameter until a plan looked good, they actually tried to reason about it. They said: we think random access is 5 to 10x slower, let's err on the side of five times, and let's assume a higher cache hit ratio of 99%. That's why they set the random_page_cost parameter to 1.05. In their particular case, this gave them a drastic improvement in query performance.
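
If you want to try this on your own system, a minimal sketch could look like the following. The value 1.05 is the one from the post; you would pick your own based on your storage and cache hit ratio.

  -- Check the current values first:
  SHOW random_page_cost;
  SHOW seq_page_cost;

  -- Apply the tuned setting cluster-wide and reload the configuration:
  ALTER SYSTEM SET random_page_cost = 1.05;
  SELECT pg_reload_conf();

  -- Or try it out in a single session before committing to it:
  SET random_page_cost = 1.05;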

Using Extended Statistics in Postgres

The second thing they discovered: using extended statistics when columns have dependencies on each other. They give the example of a features table where they query by the account_id, the product_id and the initiative_id columns.

By default, Postgres will assume that these are independent columns that have no relation to one another. There might be some overlap between them, but none is contained within another.

In Aha’s particular case, the data is correlated: all the initiative_id "3" rows also had account_id "1", and most of those rows also had product_id "2". If you were to filter just by initiative_id "3", you would get very high selectivity, a good indicator that you would want an index on initiative_id. But when Postgres sees multiple of these indexes, it doesn't know about that correlation, so it might choose the wrong index, or combine multiple indexes unnecessarily.
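
For illustration, a query in the spirit of their example might look like this. The features table and the specific values are hypothetical, taken from the scenario described above.

  -- Filtering by three columns that are, in reality, highly correlated.
  -- With per-column statistics only, Postgres multiplies the individual
  -- selectivities together and misjudges how related they are.
  SELECT *
    FROM features
   WHERE account_id = 1
     AND product_id = 2
     AND initiative_id = 3;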

CREATE STATISTICS and the dependencies statistics type in Postgres

What they're suggesting is to use the CREATE STATISTICS command with the dependencies statistics type, to tell Postgres to collect additional statistics on the columns that depend on one another. They then run ANALYZE to have Postgres collect the new statistics.
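
A minimal sketch of that, again assuming the hypothetical features table and columns from above (the name of the statistics object is made up):

  -- Collect functional-dependency statistics across the correlated columns:
  CREATE STATISTICS features_deps (dependencies)
    ON account_id, product_id, initiative_id
    FROM features;

  -- Have Postgres gather the new statistics:
  ANALYZE features;

  -- On Postgres 12 and newer you can inspect the computed dependencies:
  SELECT statistics_name, dependencies
    FROM pg_stats_ext
   WHERE statistics_name = 'features_deps';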

With these statistics in place, when two or more of those columns are involved in a query, the planner can estimate much more accurately how each of them adjusts the selectivity of the query.
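
One way to check whether this helped is to compare the planner's row estimates against the actual row counts, for example with the hypothetical query from above:

  -- Run before and after adding the extended statistics and running ANALYZE,
  -- and compare the estimated rows against the actual rows in the plan:
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT *
    FROM features
   WHERE account_id = 1
     AND product_id = 2
     AND initiative_id = 3;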

This was the pathological case where a query was taking 20 seconds, and then with these statistics, they were able to reduce that down to a millisecond.

Tuning Join Collapse Limits in Postgres

The last thing they mention: when Postgres joins a lot of tables, there's logic in Postgres that switches how the planning process behaves. Once you exceed a certain number of tables, finding the optimal JOIN order becomes much more expensive.

By default, Postgres uses a deterministic planning algorithm, but if too many tables are involved, it switches to a genetic algorithm to search for a good JOIN order. You might think a genetic algorithm sounds fancy and good to have, but the trade-offs are different: in many cases it won't give you the best plan.

If you can afford it, deterministic planning is probably better.

What they ended up doing was raising geqo_threshold, so the genetic algorithm triggers less often. They also increased geqo_effort, which makes the genetic algorithm run more generations.
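
As a sketch, those settings could be raised like this. The values shown are illustrative, not the ones from the Aha! post; geqo_threshold defaults to 12 and geqo_effort to 5.

  -- Only fall back to the genetic planner for queries with 16 or more
  -- FROM items, and spend more effort (scale 1 to 10) when it does run:
  SET geqo_threshold = 16;
  SET geqo_effort = 8;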

The other thing you can tune to handle a larger number of tables better are the join_collapse_limit and from_collapse_limit settings.

If you have a lot of tables involved and you do want to pay the extra cost at planning time, it may make sense to raise these limits.
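
A minimal sketch, again with illustrative values. Both settings default to 8, and since letting the planner flatten more items can push a query over geqo_threshold and into the genetic planner, these settings are usually tuned together.

  -- Let the deterministic planner reorder more explicit JOINs and
  -- more FROM-list items when searching for the best join order:
  SET join_collapse_limit = 16;
  SET from_collapse_limit = 16;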

Overall, their guiding principle here is:

Instead of using tricks, give Postgres better data.

Tell Postgres what your data looks like, give it better statistics. That's how you tune the planner. If you just used hints, for example, you would be overriding what Postgres is doing, instead of letting it adapt as your data changes.

This was five minutes of Postgres. Thank you so much for listening. Subscribe to our YouTube channel, follow us on Twitter, and we're looking forward to seeing you again for next week’s episode!
