How to partition Postgres tables by timestamp-based UUIDs

In today’s E98 of “5mins of Postgres” we're going to talk about partitioning Postgres tables by timestamp-based UUIDs. We're also going to talk about the status of UUIDv7 in the current Postgres development branch.



Partitioning Postgres tables by ULID

To start with, this is a forum post by Chris O'Brien on the Elixir forum.

I'll preface this by saying I'm not familiar with Elixir personally, but I found this post quite insightful. And we're going to take a look at an example that doesn't require Elixir at all.

What Chris describes in his post is an approach to partitioning a table by ULID.

What are ULIDs?

ULIDs are essentially timestamp-based UUIDs that are encoded slightly differently for display purposes. For now, you can think of a ULID as a UUID that has a timestamp in front.
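To make that structure concrete, here's a minimal Python sketch of how a ULID is put together: 48 bits of Unix millisecond timestamp followed by 80 random bits, displayed as 26 characters of Crockford base32. This is an illustration of the layout, not a spec-compliant ULID library.

```python
import os
import time

# Crockford base32 alphabet (no I, L, O, U) - lexicographic order matches numeric order
CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def make_ulid(ms=None):
    """Build a ULID-like value: 48-bit millisecond timestamp + 80 random bits."""
    if ms is None:
        ms = int(time.time() * 1000)
    value = (ms << 80) | int.from_bytes(os.urandom(10), "big")
    # 128 bits -> 26 base32 characters (26 * 5 = 130; the top 2 bits are zero padding)
    return "".join(CROCKFORD[(value >> shift) & 0x1F] for shift in range(125, -1, -5))

a = make_ulid(1_700_000_000_000)
b = make_ulid(1_700_000_000_000)
# Same millisecond -> same 10-character timestamp prefix, different random tail
print(a[:10], b[:10])
```

Because the timestamp sits in the most significant bits and the alphabet sorts correctly, ULIDs generated later always sort after ULIDs generated earlier, which is the property the partitioning scheme below relies on.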

Chris has an application where they are doing a lot of lookups based on the id column. Now, oftentimes in Postgres, the best practice for large tables is partitioning: if you have a lot of data, it's more efficient to use a partitioned table than to keep everything in one big table.

Smaller tables are faster to do lookups on, vacuum is going to run faster, etc. So, in general, small tables are good.

Challenges with time-based partitioning

Chris and team wanted to speed up performance by partitioning the table based on the insertion time of the record. This can make a lot of sense: if you know that most of your users access recent data, and that old data is usually only accessed for archival purposes, then partitioning your table by insertion time means you can assume the most recent partition will stay in cache.

The problem is that if you want an id column that is still unique, and you use random UUIDs for example, you can't actually guarantee uniqueness across the different partitions when you do time-based partitioning on the insertion time, because Postgres does not let you create globally unique indexes across partitions.

This is a documented limitation of Postgres, and it is very intentional. There are projects that try to address this, but it's complicated, and I would say it's not something we'll see in the near future.

Now, the other thing Chris mentioned is that without a global unique index, he was worried that queries for a particular id would have to check every partition. Of course, if you create indexes, each of those checks is going to be fast. But the problem is that you can't make use of Postgres' partition pruning: Postgres has to look at every partition and ask the index in each of them whether the id is present there.

Partitioning with ULIDs

The idea in this post, and this is quite neat, is that Chris partitions the table based on the timestamp component of the ULID. Each ULID (or each UUID with a timestamp component) you generate is still globally unique, but because it starts with a timestamp, it sorts by creation time. That means we can have daily partitions, each owning the range of UUIDs that can be generated on that day.

If you're using Elixir, I would recommend taking a look at the rest of Chris' post, which describes in detail how to set up the migrations in Elixir and how to create the daily partitions automatically. And again, this is using ULIDs.

Now, I think what's actually more actionable for many of you is to look at this at the SQL level, because then you could use this technique in Rails, Django or other frameworks as well. All you need is a timestamp-prefixed UUID or ULID.

Before we take a look at an example of how this can work, I wanted to use this opportunity to take a look at the status of the UUIDv7 work.


Let's quickly recap what UUIDv7 is. UUIDv7s are UUIDs that have a timestamp component in front, which is the core feature we need both for index performance and to make use of the partitioning we're discussing here.
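As a rough sketch of that layout in Python: per the draft, a UUIDv7 carries a 48-bit Unix millisecond timestamp in the most significant bits, then the version (7) and variant bits, with the remaining bits random. This is an illustration of the bit layout, not a production generator.

```python
import os
import time
import uuid

def uuid_v7(ms=None):
    """Build a draft-RFC-style UUIDv7: 48-bit Unix-millisecond timestamp,
    then version and variant bits, with the rest random."""
    if ms is None:
        ms = int(time.time() * 1000)
    value = (ms << 80) | int.from_bytes(os.urandom(10), "big")
    value = (value & ~(0xF << 76)) | (0x7 << 76)  # version = 7 (bits 76..79)
    value = (value & ~(0x3 << 62)) | (0x2 << 62)  # variant = 0b10 (bits 62..63)
    return uuid.UUID(int=value)

u = uuid_v7()
print(u, u.version)  # the third hex group always starts with '7'
```

Because the version and variant bits sit below bit 80, they never disturb the timestamp prefix, so UUIDv7 values still sort by generation time.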

This is still a draft. But, as of November 2023, this new UUID standard is actually in the process of being published. Looking at the current queue, there's a good chance it will be published soon. It's been in the queue for a little under two months, and they are saying it currently takes about one to two months for documents to become an RFC. So I think we are right at the point where this new standard may become official very soon!

Now, on the Postgres side, there has been an effort to add UUIDv7 support to Postgres. Part of the reason this didn't land in Postgres 16 is that the standard is still a draft being worked on, but there's actually a good chance that this patch lands in Postgres 17.

Postgres 17 and UUIDv7

I wanted to test the patch from this thread. Looking at the history from earlier this week, there's a patch that's still being discussed, with a bunch of feedback. The function names are still going to change, but I just wanted to try this out and see how it could work.

What I did is: Locally, I pulled down the latest Postgres development branch. Totally experimental. Don't do this at home for any production use. But I would encourage you to engage with the community on testing patches like this. Anyways, I downloaded the patch from the mailing list.


Running this fully on the Postgres side

I applied the patch locally to my development checkout with the patch utility, then built Postgres and connected to it. What we have then is a fresh Postgres 17 development branch with that UUIDv7 patch that's still being discussed. What I want to show you now is what Chris implemented on the Elixir forum with a ULID generated on the client side, but this time using the UUIDv7 support being discussed for Postgres, entirely on the Postgres side, without any client-side support needed!

By the way, you could achieve the same outcome with some of the UUIDv7 extensions that exist for Postgres today, so you don't necessarily have to wait for Postgres 17 (or potentially 18) to get this support. But I think this shows how convenient it will be in the future.


For your information, you can find the full code for this example further below at the end of this article.

First of all, we're going to create a table called test. The important thing we do here is make an id column of type uuid and make that the primary key. Then we also have some other columns, and we partition the table by range on the id. You could do this today on any Postgres version; where it starts getting challenging is when you define which partitions to create. First we create an index on the other data field. And now the real magic comes in.

What we're doing is creating a table for each day, a daily partition, and using the new gen_uuid_v7 function (the name may still change) to say: from the lowest possible UUID on this day to the highest possible UUID on this day, these are all the UUIDs that will be contained in this day's partition. We can do the same for tomorrow, and of course we could repeat this for many other days.

This is the same as regular time-based partitioning, but the point is that we know that if we generate a random UUID today, it's going to fit into today's partition, just by virtue of how UUIDv7 works.
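You can compute the same partition boundaries client-side. Here's a Python sketch of the idea behind those FROM/TO values: the lowest possible UUIDv7 for a given day is that day's millisecond timestamp in the top 48 bits followed by all zeros. (The SQL version achieves roughly the same thing with an overlay() trick that zeroes the UUID text from character 10 onward; the sketch below computes the exact boundary instead.)

```python
from datetime import datetime, timezone
import uuid

def day_lower_bound(year, month, day):
    """Lowest UUID a timestamp-ordered (UUIDv7-style) generator can produce
    on the given UTC day: timestamp bits set, all lower bits zero."""
    ms = int(datetime(year, month, day, tzinfo=timezone.utc).timestamp() * 1000)
    return uuid.UUID(int=ms << 80)

lo = day_lower_bound(2024, 1, 18)
hi = day_lower_bound(2024, 1, 19)
print(lo)
print(hi)
# Every UUIDv7 generated on 2024-01-18 (UTC) sorts between lo (inclusive)
# and hi (exclusive), so it lands in the test_p20240118 partition.
```

Using the next day's lower bound as the exclusive upper bound is exactly how the FOR VALUES FROM ... TO ... clause works in the partition definitions below.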

Now, I can insert a test value with example1 as the data, and then a second one with example2 as the data. Let's suppose we want to search for some of the data. If I search on the indexed data field, which is not the primary key, Postgres will look at the index on each of the partitions: we get an Append node and a bitmap index scan for each of these partition indexes.

If I search for an id, on the other hand, there's no Append node, because Postgres does partition pruning: it knows this UUID value can only ever be in one partition. From a lookup perspective that's going to be faster, and you also have to load less data.

Personally, I could see a big benefit in this technique when most of the data you're accessing was created recently, with recent ids, because then you can essentially guarantee that the most recent partition is going to be kept in cache!

Let's imagine you have a bunch of older data. Over time you could repackage it, move it to an archive, but still keep it around for lookups. Given an id, you'll know which partition to look in. That makes all the difference!
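That works because the timestamp can be read straight back out of the id. A small Python sketch, assuming UUIDv7-style ids where the top 48 bits are Unix milliseconds:

```python
from datetime import datetime, timezone
import uuid

def uuid_v7_day(u):
    """Recover the UTC date encoded in a UUIDv7's top 48 bits, which is
    exactly what maps an id to its daily partition."""
    ms = uuid.UUID(str(u)).int >> 80  # top 48 bits = Unix milliseconds
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).date()

# The id inserted in the example below maps back to its partition's day:
print(uuid_v7_day("018d1a75-1818-70c4-867a-4bae6925034d"))  # → 2024-01-18
```

Whether the lookup is done by Postgres' partition pruning or by an archival job deciding where to file old rows, it's this timestamp-to-date mapping doing the work.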

I'm really excited about this. You could start using this today, as mentioned, but in Postgres 17, assuming the UUIDv7 patch gets committed, this will be something you can do built in.

Full example code

lfittl@starfish bin % ./psql postgres
psql (17devel)
Type "help" for help.

postgres=# SET timezone = 'utc';
postgres=# SET enable_seqscan = off;
postgres=# CREATE TABLE test (id uuid PRIMARY KEY, data text) PARTITION BY RANGE(id);
postgres=# CREATE INDEX ON test(data);
postgres=# CREATE TABLE test_p20240118 PARTITION OF test FOR VALUES
postgres-#   FROM (overlay(gen_uuid_v7('2024-01-18')::text placing '0000-0000-0000-000000000000' from 10)::uuid)
postgres-#   TO (overlay(gen_uuid_v7('2024-01-19')::text placing '0000-0000-0000-000000000000' from 10)::uuid);
postgres=# CREATE TABLE test_p20240119 PARTITION OF test FOR VALUES
postgres-#   FROM (overlay(gen_uuid_v7('2024-01-19')::text placing '0000-0000-0000-000000000000' from 10)::uuid)
postgres-#   TO (overlay(gen_uuid_v7('2024-01-20')::text placing '0000-0000-0000-000000000000' from 10)::uuid);
postgres=# INSERT INTO test VALUES(gen_uuid_v7(now()), 'example1');
postgres=# INSERT INTO test VALUES(gen_uuid_v7(now()), 'example2');
postgres=# SELECT * FROM test WHERE data = 'example1';
                  id                  |   data   
--------------------------------------+----------
 018d1a75-1818-70c4-867a-4bae6925034d | example1
(1 row)

postgres=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM test WHERE data = 'example1';
                                                              QUERY PLAN                                                              
--------------------------------------------------------------------------------------------------------------------------------------
 Append  (cost=4.19..25.38 rows=10 width=48) (actual time=0.012..0.014 rows=1 loops=1)
   Buffers: shared hit=4
   ->  Bitmap Heap Scan on test_p20240118 test_1  (cost=4.19..12.66 rows=5 width=48) (actual time=0.012..0.012 rows=1 loops=1)
         Recheck Cond: (data = 'example1'::text)
         Heap Blocks: exact=1
         Buffers: shared hit=2
         ->  Bitmap Index Scan on test_p20240118_data_idx  (cost=0.00..4.19 rows=5 width=0) (actual time=0.009..0.009 rows=1 loops=1)
               Index Cond: (data = 'example1'::text)
               Buffers: shared hit=1
   ->  Bitmap Heap Scan on test_p20240119 test_2  (cost=4.19..12.66 rows=5 width=48) (actual time=0.001..0.001 rows=0 loops=1)
         Recheck Cond: (data = 'example1'::text)
         Buffers: shared hit=2
         ->  Bitmap Index Scan on test_p20240119_data_idx  (cost=0.00..4.19 rows=5 width=0) (actual time=0.001..0.001 rows=0 loops=1)
               Index Cond: (data = 'example1'::text)
               Buffers: shared hit=2
   Buffers: shared hit=2
 Planning Time: 0.084 ms
 Execution Time: 0.033 ms
(19 rows)

postgres=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM test WHERE id = '018d1a75-1818-70c4-867a-4bae6925034d';
                                                                QUERY PLAN                                                                
------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using test_p20240118_pkey on test_p20240118 test  (cost=0.15..8.17 rows=1 width=48) (actual time=0.007..0.008 rows=1 loops=1)
   Index Cond: (id = '018d1a75-1818-70c4-867a-4bae6925034d'::uuid)
   Buffers: shared hit=2
 Planning Time: 0.161 ms
 Execution Time: 0.028 ms
(5 rows)


I hope you liked E98 of 5mins of Postgres. Feel free to subscribe to our YouTube channel, sign up for our newsletter or follow us on LinkedIn and Twitter to get updates about new episodes!
