Log Insights: Tuning Log Config Settings
Here we describe the individual Postgres configuration parameters you can use to get useful log output that can then be read and analyzed by pganalyze Log Insights.
If you want to skip the details, this is the recommended configuration to start with:
log_min_duration_statement = 1000
log_lock_waits = on
log_temp_files = 0
log_checkpoints = on
log_connections = on
log_disconnections = on
log_autovacuum_min_duration = 0
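If you prefer not to edit postgresql.conf directly, the same settings can be applied with ALTER SYSTEM followed by a configuration reload; as a sketch (none of these particular settings require a restart):

```sql
-- Written to postgresql.auto.conf, then activated by reloading the config:
ALTER SYSTEM SET log_min_duration_statement = 1000;
ALTER SYSTEM SET log_lock_waits = on;
ALTER SYSTEM SET log_temp_files = 0;
ALTER SYSTEM SET log_checkpoints = on;
ALTER SYSTEM SET log_connections = on;
ALTER SYSTEM SET log_disconnections = on;
ALTER SYSTEM SET log_autovacuum_min_duration = 0;
SELECT pg_reload_conf();
```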
For logging specific slow queries with their actual values, instead of the generalized form which has $1 placeholders instead of values, the log_min_duration_statement setting can be quite helpful. As a start, you can set this to 1000 milliseconds, which logs every query that takes longer than that:
log_min_duration_statement = 1000
pganalyze will then show these as query samples on the query detail page:
We also support extraction of EXPLAIN plans from
auto_explain output. Please
review the auto_explain documentation
carefully to see what makes sense for your system.
We've had good experience with a configuration like this, which will log the EXPLAIN output for every query slower than 1s:
auto_explain.log_analyze = 1
auto_explain.log_buffers = 1
auto_explain.log_timing = 0
auto_explain.log_triggers = 1
auto_explain.log_verbose = 1
auto_explain.log_format = json
auto_explain.log_min_duration = 1000
auto_explain.log_nested_statements = 1
auto_explain.sample_rate = 1
The important detail here is that log_timing is disabled, as it would otherwise have a negative performance impact on query execution. You may also want to disable log_analyze if you see increased CPU utilization after enabling it.

Note that you will also need to add auto_explain to your shared_preload_libraries setting if you haven't done so already.
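As a sketch, preloading the module looks like this in postgresql.conf (changing shared_preload_libraries requires a server restart to take effect):

```
# postgresql.conf -- requires a restart to take effect
shared_preload_libraries = 'auto_explain'
```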
EXPLAIN plans will be classified as T84: EXPLAIN plan for slow query and can also be accessed on the associated query detail page:
For logging lock waits, add the following:

log_lock_waits = on
Note that this will only report on lock waits that exceed deadlock_timeout, which defaults to 1s. Whilst you can lower this a bit, the lock checks can be expensive if run much more often than the default setting.
See Guide: Monitoring Postgres locks using Log Insights for more details.
For getting S7: Temporary file notices, add the following:
log_temp_files = 0
Note that the value here is a threshold, but in practice it's often very useful to start with a threshold of 0 (i.e. log all temporary files) and then raise the setting value if you are getting too much log output.
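For reference, log_temp_files is specified in kilobytes. If 0 turns out to be too noisy, you could raise it to, for example, only log temporary files larger than 10 MB:

```
log_temp_files = 10240  # in kB, i.e. only log temporary files larger than 10 MB
```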
Checkpoints can be I/O intensive events in Postgres, and so it's highly recommended to enable logging for them to get more details:
log_checkpoints = on
pganalyze will also do analysis on the checkpoint reason for you:
See Guide: Tuning checkpoint intervals to reduce I/O spikes for more details.
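To cross-check how frequently checkpoints run, and whether they are timed (triggered by checkpoint_timeout) or requested (triggered by WAL volume or an explicit CHECKPOINT), you can also query the cumulative statistics views; a sketch for Postgres 16 and older (on Postgres 17+ the equivalent counters live in pg_stat_checkpointer):

```sql
-- A high checkpoints_req relative to checkpoints_timed often indicates
-- max_wal_size is too small for the write workload:
SELECT checkpoints_timed, checkpoints_req
  FROM pg_stat_bgwriter;
```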
For logging connections, add the following:

log_connections = on
For disconnects (C23: Disconnection) add the following:
log_disconnections = on
In general it is very useful to enable connection logging, both from an auditing perspective as well as to be able to trace back when a connection was opened and closed.
That said, if you are not using a connection pooler, and you are getting a lot of very short connections, you might find the amount of log data from connection logs overwhelming, and it'd be better to disable it in such cases.
In order to log changes to your schema, or other database-affecting operations, you can enable logging of all DDL statements:
log_statement = ddl
These will then show up as T83: Statement notice.
Note that there is also a setting that would log every single query (log_statement = all), which is not recommended for production systems, since the overhead of the log output would slow down your database.
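As an illustration of what log_statement = ddl covers (the table name here is a hypothetical example): schema-changing statements are written to the log, while plain DML is not:

```sql
-- Logged with log_statement = ddl:
ALTER TABLE orders ADD COLUMN note text;

-- Not logged (DML, not DDL):
UPDATE orders SET note = 'backfill' WHERE id = 1;
```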
To log autovacuum activity, add the following:

log_autovacuum_min_duration = 0
If you are seeing a lot of autovacuum activity it may make sense to specify a higher threshold so only slow vacuums get logged. Note that as a starting point 0 is actually useful, since you will also see if you have tables that get vacuumed a lot, but where VACUUM runs too fast to be seen by other methods (e.g. sampling of pg_stat_activity).
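Independent of the log output, you can also see which tables autovacuum touches most often by checking the statistics views; a sketch:

```sql
-- Tables ranked by how often autovacuum has run since stats were last reset:
SELECT relname, autovacuum_count, last_autovacuum
  FROM pg_stat_user_tables
 ORDER BY autovacuum_count DESC
 LIMIT 10;
```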