Performance implications of medium-sized values and TOAST in Postgres and how to mitigate them

Today, we're going to talk about the surprising impact of medium-sized texts on Postgres performance. We take a look at a great blog post by Haki Benita.





Transcript

Understanding medium-sized values and TOAST performance in Postgres

This article by Haki Benita from back in 2020 is a really good post that's worth revisiting. I was recently discussing with a colleague how to tune the toast_tuple_target setting that controls when TOAST takes effect, and I think this will be very useful to you in real-world workloads.

Categories of text in your database

To start with, Haki describes three different categories of text you might find in your database:

- Small texts, like usernames or emails
- Large texts, like full blog posts or articles
- Medium texts, which could be things like descriptions, blog post comments, or stack traces if you're tracking errors

Now, medium texts are the category for which Haki found surprising performance behavior in this blog post.

How TOAST works

To start with, Haki gives a quick overview of how TOAST works in Postgres. If you remember, Postgres structures its tables into pages, typically 8 kB in size. So if we have a 1 MB value, how do we store that in Postgres? That's where TOAST comes in. TOAST is a mechanism where Postgres compresses data, stores it out of line separately from the main table, or both.


That means, once a row exceeds the TOAST threshold, by default 2 kB (this is what the toast_tuple_target storage parameter controls), TOAST will first try to compress the large values, and if compression doesn't make the row small enough, it will store them separately in what's called the TOAST table.
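As a quick aside, you can see which TOAST table backs a given table by querying pg_class; the table name here is just a placeholder:

```sql
-- Every table with TOAST-able columns has an associated TOAST table:
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE relname = 'my_table';
```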

TOAST tables are usually something you don't have to worry about. When you run a query, Postgres will transparently read the data back from TOAST as needed.

Now, TOAST performance is really why we're here.


TOAST performance in Postgres

Let's look at the small, medium, and large texts and see how they perform. In his blog post, Haki creates 500,000 records in each of three tables. In the "small" table, the value is literally the string "small value". In the "medium" table, it's a 1.8 kB random string, just under the toast_tuple_target. And in the "large" table, it's a 4 kB random string, clearly above the toast_tuple_target.
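For reference, here's a minimal sketch of what the "medium" table setup could look like. Haki's post uses its own random-string helper, so the table name and generation method here are assumptions on my part:

```sql
CREATE TABLE toast_test_medium (id SERIAL, value TEXT);

-- 500,000 rows, each with a ~1.8 kB random string (56 concatenated
-- 32-character md5 hashes), just under the 2 kB TOAST threshold:
INSERT INTO toast_test_medium (value)
SELECT string_agg(md5(random()::text), '')
FROM generate_series(1, 500000) AS row_id,
     generate_series(1, 56) AS chunk
GROUP BY row_id;
```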

For the first test, we're selecting just one of the rows. Note that we don't have any indexes or primary key, so Postgres does a sequential scan across the table until it finds the matching row. Note also that we're matching on the id column, which lives in the main table, not in the TOAST table.
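The lookup itself is just a filter on the id column (the specific value is arbitrary, and the table name follows the sketch above):

```sql
-- Without an index on id, this is a sequential scan over the main
-- table; the TOAST table is only consulted for the one matching row:
EXPLAIN (ANALYZE)
SELECT * FROM toast_test_medium WHERE id = 6000;
```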

We can see what Haki describes as surprising performance behavior, and I think I would agree: the small table is fast, the large table is fast, but the medium table is slow.

Slow performance with medium-sized values

The reason the medium table is slow is that it has a lot more data stored in the main table. The large table, by contrast, is roughly the same size as the small table if you consider just the main table portion, because its actual text is stored in the separate TOAST table. So when Postgres does that sequential scan, since it's matching on the id column, it doesn't have to go to the TOAST table to find the matching row; it can stay in the main table, and only once it has found that row does it look up the TOAST table for that single match. That's why the large table performs much better in Haki's example: it avoids all the expense of reading over text values that the query doesn't actually need.
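One way to verify where the bytes actually live is to compare the main relation's size against the total size including TOAST, using the standard size functions (table names again follow the sketch above):

```sql
SELECT pg_size_pretty(pg_relation_size('toast_test_large')) AS main_table,
       pg_size_pretty(pg_table_size('toast_test_large'))    AS incl_toast;
-- For the "large" table the main relation stays small, since the
-- 4 kB values live out of line; for "medium", nearly all the data
-- sits in the main relation.
```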

Now, of course, if you run a test where you're looking for the value that's TOASTed, the picture looks different. In that case, when you do a text comparison on the value field, whether small, medium, or large, large is actually the slowest, because Postgres has to look at all the TOASTed values. Medium is slightly better, and small, of course, is a lot faster.
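A sketch of what such a test could look like; the exact filter is illustrative, not Haki's query:

```sql
-- Filtering on the text column forces Postgres to fetch (and, for
-- the large table, detoast) every value before comparing it:
SELECT * FROM toast_test_large WHERE value LIKE 'abc%';
```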

Indexes

If you have indexes, of course, this picture changes a little bit. If we do the comparison on the id column and that column is indexed, the query is going to be fast on all three tables. But oftentimes indexes are not perfect matches, right?

Oftentimes, an index just gives you an idea of where in the table to look, because you might have a filter condition the index doesn't fully cover, or you might be using other index types like BRIN. In such situations, the more pages in the main table you have to look at, the more expensive the query is going to be.

So in that case, medium-sized text can really become a performance hit again.
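As a hypothetical illustration with BRIN: a BRIN index only narrows the scan down to ranges of pages, and inline medium-sized values mean many more heap pages per range:

```sql
-- BRIN points at page ranges, not individual rows; the resulting
-- bitmap heap scan still reads every page in the matching ranges:
CREATE INDEX ON toast_test_medium USING brin (id);
EXPLAIN SELECT * FROM toast_test_medium WHERE id = 6000;
```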

Mitigating the medium-sized value performance drop

How can we avoid this problem? There are two things you can do. The first is to adjust toast_tuple_target so that more values are considered for TOAST compression or out-of-line storage. Haki, for example, sets it to 128 bytes in his article. That would likely cause all of the medium table's values to behave like the large table's, so the first performance test would be much faster.
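Changing the setting is a one-line ALTER TABLE. 128 bytes is the value from Haki's article, and also the minimum Postgres allows for this parameter:

```sql
-- Rows larger than ~128 bytes now become candidates for TOAST
-- compression and/or out-of-line storage:
ALTER TABLE toast_test_medium SET (toast_tuple_target = 128);
```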

This is a table storage parameter that you can either set at table creation time or change on an existing table, but it only applies to newly written values. So, to really see the performance benefit, you may need to rewrite your table, for example by copying the data with INSERT INTO ... SELECT into a new table and then renaming that table, as sketched below.
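Here's a sketch of such a rewrite, assuming you can afford a lock and there are no concurrent writes; in production you'd also need to recreate indexes, constraints, sequences, grants, and so on:

```sql
BEGIN;
-- Create the new table with the storage parameter already set, so
-- the copied values get re-evaluated against the lower target:
CREATE TABLE toast_test_medium_new (id INT, value TEXT)
  WITH (toast_tuple_target = 128);
INSERT INTO toast_test_medium_new SELECT * FROM toast_test_medium;
DROP TABLE toast_test_medium;
ALTER TABLE toast_test_medium_new RENAME TO toast_test_medium;
COMMIT;
```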

The other thing that Haki suggests, and I think it's a good practice in general, is that when you have text values you only need some of the time, it's better to keep them in a separate table altogether. That way the main table stays small and compact, and only when you actually need the text do you access it through that separate table.
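For example, if you're tracking errors, the stack traces could live in their own table and be joined in only when needed. This schema is illustrative, not from the article:

```sql
CREATE TABLE errors (
    id          BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    message     TEXT NOT NULL,
    occurred_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- The big, rarely-read text lives in a side table:
CREATE TABLE error_stacktraces (
    error_id   BIGINT PRIMARY KEY REFERENCES errors (id),
    stacktrace TEXT NOT NULL
);

-- Hot-path queries scan only the compact errors table; the stack
-- trace is joined in just for the row that's actually displayed:
SELECT e.message, s.stacktrace
FROM errors e
JOIN error_stacktraces s ON s.error_id = e.id
WHERE e.id = 42;
```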

That of course requires application-side changes, so toast_tuple_target might be easier to implement.

Overall, I would say this can have significant performance benefits, especially for heavily used tables that are very large just because of medium-sized text. I would encourage you to take a look at your TOAST table sizes and your data values to better understand whether you should adjust these settings.
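As a starting point, a catalog query along these lines lists the tables with the largest TOAST tables:

```sql
SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid))           AS main_size,
       pg_size_pretty(pg_relation_size(c.reltoastrelid)) AS toast_size
FROM pg_class c
WHERE c.reltoastrelid <> 0
ORDER BY pg_relation_size(c.reltoastrelid) DESC
LIMIT 10;
```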

This was episode 89 of 5mins of Postgres! Subscribe to our YouTube channel, sign up for our newsletter and follow us on LinkedIn and Twitter to get updates about new episodes!
