5mins of Postgres E9: PostGIS vs GPUs, tuning parallel queries, and automating Citus extension benchmarks with HammerDB

In this episode, we'll talk about PostGIS versus GPU performance for spatial queries and also take a look at using HammerDB and custom automation scripts to benchmark the Citus extension.





Transcript

Let's dive in!

Looking at PostGIS vs. GPU for Performance and Spatial Joins

In this blog post, Paul Ramsey compares PostGIS performance against GPU performance, using a simple dataset of Philadelphia parking infractions.

He's trying to answer the question: should we be using GPUs for this? Does it make sense to optimize PostGIS with GPUs?

Paul uses Crunchy Bridge, a database-as-a-service, to test PostGIS: he loads the Philadelphia data into the database and then creates a spatial index on the table. He then runs a rather simple query and looks at the performance output. The plan had no parallelism, so he was really only using one core! The reason is that Postgres has a minimum table size for parallel queries.
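To make this concrete, here is a minimal sketch of that kind of setup. The table and column names are assumptions for illustration, not Paul's exact schema:

```sql
-- Hypothetical table of Philadelphia parking infractions,
-- with a geometry column holding the ticket location
CREATE TABLE phl_parking (
    ticket_id      bigint,
    issue_datetime timestamptz,
    location       geometry(Point, 4326)
);

-- Spatial index so point-in-polygon joins can use an index scan
CREATE INDEX phl_parking_location_idx
    ON phl_parking
    USING GIST (location);
```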

If you have a very small table, Postgres is not going to parallelize the query because the overhead of parallelism may be higher than the benefit.
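The planner setting that drives this behavior is min_parallel_table_scan_size, which defaults to 8MB: tables smaller than that won't be considered for a parallel sequential scan. A minimal sketch of inspecting and lowering it for a session:

```sql
-- Show the current threshold (default: 8MB); tables smaller
-- than this won't get a parallel sequential scan
SHOW min_parallel_table_scan_size;

-- Lower the threshold for this session so even small tables
-- can be scanned in parallel
SET min_parallel_table_scan_size = 0;
```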

Now in this case, Paul wanted to make sure the query always uses parallelism, even though it's a small dataset. The result: it took 24 seconds to execute a join of 9 million parking infractions against 115 neighborhoods. The answer to the initial question, is this faster than a GPU? It is actually not!
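As a sketch of what such a forced-parallel spatial join could look like, here is one common way to coax out a parallel plan, reusing the hypothetical table names from above. These are not necessarily Paul's exact settings or query:

```sql
-- Make parallel plans look cheap to the planner
-- (a common trick; not necessarily Paul's exact settings)
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET max_parallel_workers_per_gather = 8;

-- Hypothetical spatial join: count infractions per neighborhood
EXPLAIN ANALYZE
SELECT n.name, count(*)
FROM phl_hoods AS n
JOIN phl_parking AS p
  ON ST_Intersects(n.geom, p.location)
GROUP BY n.name;
```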

The purpose-built GPU solution took just eight seconds to execute the same kind of data analysis.

GPU-based analysis is actually faster, but you will often find yourself in a situation where you want the flexibility of PostGIS. That said, there could be a benefit to using GPUs to optimize these queries in the future.

Great article by Paul, who you might remember from when we highlighted another of his blog posts in our 5mins of Postgres E2: Using unnest(..), generate_series(), and PostGIS.

If you're looking to optimize PostGIS performance, you may also be interested in the PostGIS performance benchmark tooling that the PostGIS team itself uses for testing the extension.

The importance of automating performance benchmarks

Now, the other blog post that I found interesting was this article written by Jelte Fennema from the Citus team. I think it's a good complement to Paul's earlier blog post. What Jelte is saying is: when you run performance benchmarks, the important part is automating them.

If you run a benchmark, you should automate it.

In his blog post, Jelte goes into a lot of detail about different kinds of benchmarks, comparing OLTP and OLAP benchmarks, among others. I'll skip over most of that, but do take a look at his post if you're interested in the basics of running benchmarks.

Using HammerDB to drive a TPC-C or TPC-H benchmark

The Citus team ended up using HammerDB to do a lot of the testing for the Citus extension. HammerDB is a nice, standard benchmark suite that you can use to compare database performance. I'd encourage you to take a look if you're trying to benchmark your configuration settings or your database hardware. HammerDB might be useful to you as a way to drive a typical TPC-C or TPC-H benchmark.

They've also built their own tooling, which we'll get to in a moment. What I think they did nicely here is that with one command they can provision some databases, run the benchmark suite, and get the results, which made it easy and cheap for them to run repeated benchmarks.

If you're building your own extension, or even if you're just trying to find the best hardware or the best configuration settings, I think making it as simple as this, a single command to run your benchmark, is really key to being able to benchmark well.


What they were trying to prove here, by the way, is that adding more nodes to a Citus cluster gives you higher transaction throughput. They were able to prove that successfully. They also optimized the Citus extension using this approach.
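For context, scaling out a Citus cluster means registering additional worker nodes with the coordinator and rebalancing shards onto them. A minimal sketch, where the table, column, and hostname are placeholders:

```sql
-- Distribute a table across the cluster by a chosen column
-- ('orders' and 'customer_id' are placeholders)
SELECT create_distributed_table('orders', 'customer_id');

-- Register an additional worker node with the coordinator
-- ('worker-3' and 5432 are placeholders)
SELECT citus_add_node('worker-3', 5432);

-- Move some shards onto the new node so it shares the load
SELECT rebalance_table_shards();
```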

You may find the automation tooling they built quite useful and worth a look. Ever since Citus got acquired by Microsoft these things run on Azure, of course, but I think the lesson here is more general: look at infrastructure automation, not just database query automation, because ultimately you need some kind of hardware to run your benchmark on.

This is much more useful than running something on your laptop, where your data is less comparable and the caches might be warm, so your results are less repeatable.

What's useful, if you can afford it, is to have infrastructure automation. In the Citus benchmark suite they actually provision new servers: you can see they used Azure's new Bicep templating to provision the hardware that they then run the benchmarks on.

Ultimately, these are just scripts, right? It's nothing fancy really, but I think going through the effort of getting the scripts working and doing repeated runs is well worth it. Maybe you even automate that as part of your CI. If you're developing an extension like they are, I think that's really beneficial. That's what I take away here myself:

When I optimize database performance, I should make sure that I can run a script to see if my change actually had a benefit.
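As a trivial sketch of that idea, even a script that just times the same query before and after a change goes a long way. The query and index here are hypothetical, reusing the parking table from above:

```sql
-- Baseline: record the execution time of the workload query
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM phl_parking
WHERE issue_datetime >= now() - interval '30 days';

-- Apply the change under test (a hypothetical index)
CREATE INDEX phl_parking_issue_datetime_idx
    ON phl_parking (issue_datetime);

-- Re-run the exact same query and compare execution times
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM phl_parking
WHERE issue_datetime >= now() - interval '30 days';
```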

All right. That was 5mins of Postgres. Thank you so much for listening. Subscribe to our YouTube channel to hear about new episodes and talk to you next week.


