
5mins of Postgres E43: Logical replication in Postgres 15 & why the REPLICA IDENTITY matters

Today, we're going to talk about logical replication improvements in Postgres 15. In particular, we’ll talk about improvements to data filtering, why the REPLICA IDENTITY is important, and how logical replication deals with errors.





Transcript

Let's dive in!

Postgres 15 logical replication improvements

In this blog post, Amit Kapila describes some of the logical replication features that were improved in Postgres 15.

He starts out by talking about prepared transactions. If you're not familiar with prepared transactions or two-phase commit: it's typically used when you have two servers that both need to commit the same data, and you want to make sure that the transaction either fails on both servers or succeeds on both servers – so you don't get, say, a constraint violation on one of them but not the other.

The way you would use this with logical replication is by enabling the two_phase = true setting on the subscription. This makes sure that logical replication replicates data at PREPARE time instead of at COMMIT time.
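As a minimal sketch of what enabling this could look like (the subscription, publication, and connection string here are placeholders, not taken from Amit's post):

    -- Replicate prepared transactions at PREPARE time instead of
    -- at COMMIT time. In Postgres 15, two_phase can only be set
    -- when the subscription is created.
    CREATE SUBSCRIPTION my_sub
      CONNECTION 'host=publisher dbname=app user=repl'
      PUBLICATION my_pub
      WITH (two_phase = true);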

Now, you still have to do work to actually use this to build what Amit calls in his article a “conflict-free logical replication architecture”. Postgres doesn't necessarily take care of everything here for you, but it gives you an important building block to then be able to build an effective architecture.

Logical replication in Postgres 15: Defining which data gets replicated in the replication stream

Next, the article walks through a few ways that help you better define which data gets replicated in the replication stream.

Amit starts out by using a new setting that lets you define that all tables in a particular schema should be replicated. This saves you from having to list tables manually. You could achieve something similar in previous Postgres versions, but there, whenever new tables were added to the schema, you had to go back and add them to the publication manually.

This new option does it automatically – when a new table gets created, it automatically gets added to the replication stream! Note: on the subscriber, you do have to make sure that the table exists, since this does not replicate DDL – it only makes sure the replication stream contains the data. You still have to update the table definitions on the subscriber ahead of time.
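For illustration, a schema-level publication could look like this (the publication and schema names are made up):

    -- Publish all current and future tables in the "inventory" schema.
    CREATE PUBLICATION inventory_pub FOR TABLES IN SCHEMA inventory;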


And there are two improvements on how to filter the data in the tables that are being replicated:

1. Row filters
2. Column lists

Postgres data filtering in replicated tables: Row filters

First of all, there are row filters. Row filters are simple expressions that are applied to the data in the replication stream and they can do things like: “this column needs to be larger than this value”, or “this column needs to match this LIKE text expression”. Basically, these expressions let you select a subset of your data and just direct that data to your replica.
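A hypothetical example of such a row filter (the table, columns, and values here are assumptions, not Amit's exact example):

    -- Only rows matching the WHERE expression enter the replication
    -- stream. Note: if this table publishes UPDATEs and DELETEs, the
    -- filtered columns must be covered by the REPLICA IDENTITY
    -- (see the restriction discussed below).
    CREATE PUBLICATION big_orders_pub FOR TABLE orders
      WHERE (amount > 1000 AND status LIKE 'open%');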

There are important restrictions here. The most important one that I would note is that the UPDATEs and DELETEs that go into your replication stream will only contain what's referred to as the REPLICA IDENTITY – usually, that would be the primary key. The REPLICA IDENTITY is something you can set on a per-table basis. If you are setting a row filter on a table that publishes UPDATEs and DELETEs, that row filter can only reference REPLICA IDENTITY columns, because of the way this is built inside Postgres.


In Amit’s particular example, if I have UPDATEs on this table, I would have to make sure that both "c1" and "c2" are included in the REPLICA IDENTITY.
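A sketch of how you could make such a row filter valid for UPDATEs and DELETEs (the table definition here is illustrative, not Amit's verbatim example):

    -- A unique index over c1 and c2 can serve as the replica identity;
    -- both columns have to be NOT NULL for this to be valid.
    CREATE UNIQUE INDEX t1_c1_c2_idx ON t1 (c1, c2);
    ALTER TABLE t1 REPLICA IDENTITY USING INDEX t1_c1_c2_idx;

    -- Now a row filter referencing c1 and c2 is permitted even when
    -- the publication publishes UPDATEs and DELETEs.
    CREATE PUBLICATION t1_pub FOR TABLE t1 WHERE (c1 > 10 AND c2 > 100);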

Postgres data filtering in replicated tables: Column lists

The other thing that was added was column lists. Previously, you always had to replicate the whole table, never just a subset of its columns – and sometimes that is not desirable.

For example, imagine you have a "users" table. You’d like to replicate the metadata, but you don't really want to replicate your hashed password field. With Postgres 15, you now can restrict which columns get replicated!
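A sketch of that users table scenario (the column names are assumptions):

    -- Replicate only the listed columns; a password_hash column never
    -- enters the replication stream. The column list must still cover
    -- the REPLICA IDENTITY (here: the primary key, id).
    CREATE PUBLICATION users_pub FOR TABLE users (id, email, created_at);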


Similar to the row filters, you also still have to make sure that your column lists include the REPLICA IDENTITY, because otherwise Postgres wouldn't be able to identify which row an UPDATE or DELETE is referring to.

How logical replication deals with errors in Postgres 15

Now, there are a couple of other improvements here. The one I find quite noteworthy is how logical replication deals with errors. If you've used logical replication in production, you've probably run into a problem where you added a column on the publisher and forgot to do the same on your subscriber side.

Turning off replication when there are errors

Postgres 15 now has a better way of dealing with errors like that. First of all, it lets you turn off replication for a subscription when it runs into an error. You can set disable_on_error = true, which means that when the subscription hits an error the first time, it gets disabled instead of retrying the exact same failing transaction again and again – and you can go in and fix the problem.
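A minimal sketch, assuming a subscription named my_sub:

    -- Disable the subscription on the first error instead of retrying
    -- the same failing transaction in a loop.
    ALTER SUBSCRIPTION my_sub SET (disable_on_error = true);

    -- After fixing the problem (e.g. adding the missing column on the
    -- subscriber), re-enable the subscription:
    ALTER SUBSCRIPTION my_sub ENABLE;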

Skipping transactions

There is also a better way to skip transactions. Let's say for some reason you have data that conflicts with what's on the subscriber side and you're okay with skipping over the bad data. There is now a way to skip a particular LSN explicitly, with additional safety checks that make this easier and less error-prone than earlier workarounds.
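A minimal sketch, again assuming a subscription named my_sub (the LSN value is a placeholder you would take from the error message in the subscriber's log):

    -- Skip the transaction whose finish LSN is reported in the
    -- subscriber's log; replication then continues past it.
    ALTER SUBSCRIPTION my_sub SKIP (lsn = '0/14C0378');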

Statistics tracking and monitoring

Last but not least, there is an improvement in terms of statistics tracking and monitoring. There's a new pg_stat_subscription_stats view. If you look at this more closely, there are really two main things here for each subscription that you have.

1. The apply_error_count
2. The sync_error_count

"Sync" would apply at the initial synchronization time and "apply" would apply when the replication stream runs and when you run into errors. The scenario that I've seen often in production is: you mess something up, you forget something, and then suddenly the replication breaks and errors start happening on the subscriber.

This is where I would watch out for the apply_error_count becoming non-zero, or incrementing from the previous count, because that would mean that there is something that needs my attention and I need to go in and fix it.
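A simple check against this view could look like the following:

    -- Per-subscription error counters; a non-zero or growing
    -- apply_error_count means the subscription needs attention.
    SELECT subname, apply_error_count, sync_error_count
    FROM pg_stat_subscription_stats;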

Thanks so much for joining us for episode 43 of 5mins of Postgres. Subscribe to our YouTube channel, sign up for our newsletter and follow us on LinkedIn and Twitter to get updates about new episodes!


