pganalyze Log Insights

PostgreSQL logs often contain critical details about what's going on in your database. pganalyze Log Insights automatically extracts log events into structured data and filters out any sensitive information.

2017-11-17 16:42:23 UTC C21 LOG: connection authorized: user=myuser database=mydb SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, compression=off)
2017-11-17 16:41:21 UTC V102 ERROR:  null value in column "reference_id" violates not-null constraint
2017-11-17 16:41:21 UTC V102 DETAIL:  Failing row contains (null, secretvalue).
2017-11-17 16:41:21 UTC V102 STATEMENT: INSERT INTO secrets (reference_id, secret) VALUES (null, 'secretvalue')
2017-11-17 16:00:41 UTC A65 LOG:  automatic vacuum of table "mydb.public.mytable": index scans: 1
    pages: 0 removed, 15092 remain, 0 skipped due to pins, 10999 skipped frozen
    tuples: 17675 removed, 300160 remain, 0 are dead but not yet removable, oldest xmin: 1033269669
    buffer usage: 58297 hits, 337478 misses, 10646 dirtied
    avg read rate: 3.938 MB/s, avg write rate: 0.124 MB/s
    system usage: CPU: user: 1.22 s, system: 2.45 s, elapsed: 669.58 s
2017-11-17 16:00:03 UTC T84 LOG:  duration: 2334.085 ms  plan:
    {
          "Query Text": "SELECT reference_id FROM table WHERE secret = 'verysecret';",
          "Plan": {
            "Node Type": "Index Scan",

Screenshot of query normalization

Detailed Log Insights Documentation

Working with hundreds of different log events can be confusing, which is why we’ve written detailed documentation, with links to additional community resources where available, as well as instructions on how to resolve critical events.

Learn more about how to handle out-of-memory errors, server crashes, and more. Explore the pganalyze Log Insights documentation.

auto_explain Integration

Oftentimes, the most important piece of information for debugging a query performance problem is the execution plan.

Using pganalyze Log Insights, you can automatically extract the output of the auto_explain extension that comes bundled with Postgres.
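
For the plans to show up in your logs in the first place, auto_explain needs to be enabled on the database. A minimal sketch, with thresholds you would tune for your workload (shared_preload_libraries is the usual way to enable it for all sessions; LOAD works for trying it out in a single session):

-- Try auto_explain in the current session
-- (set shared_preload_libraries = 'auto_explain' to enable it for all sessions)
LOAD 'auto_explain';

-- Log the plan of any statement running longer than one second, in the JSON format shown above
SET auto_explain.log_min_duration = '1s';
SET auto_explain.log_format = 'json';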

Screenshot of EXPLAIN plan

Learn how Atlassian uses pganalyze Enterprise

Screenshot of Vacuum Monitoring feature

Postgres Vacuum Monitoring

There are multiple sources of vacuum information in Postgres. The logs contain important details about how efficiently a vacuum ran, and whether there were any tuples it could not clean up.

pganalyze Log Insights automatically combines this log information with the vacuum statistics data and presents it in one unified interface.
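
Both sources are easy to check yourself. As a rough sketch, assuming sufficient privileges: the log side depends on autovacuum logging being enabled, and the statistics side lives in pg_stat_user_tables:

-- Log every autovacuum run (the threshold is in milliseconds; 0 logs all runs)
ALTER SYSTEM SET log_autovacuum_min_duration = 0;
SELECT pg_reload_conf();

-- The statistics side: dead tuples and the last autovacuum run per table
SELECT relname, n_dead_tup, last_autovacuum, autovacuum_count
  FROM pg_stat_user_tables
 ORDER BY n_dead_tup DESC;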

PII Filtering

Postgres logging can produce a wide variety of output, which often includes details of the query that ran, or the data being inserted into the database.

pganalyze Log Insights has over 100 log filters for known Postgres log events, and can distinguish the log message itself from any sensitive information it might contain. This enables the pganalyze collector to filter out such PII before it is sent to the pganalyze service.
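
As a purely hypothetical illustration of the idea (not the collector's actual output format; the redaction behavior is configurable, so consult the PII filtering documentation for specifics), the error from the log excerpt at the top of this page would keep its message structure while the sensitive values are stripped:

2017-11-17 16:41:21 UTC V102 ERROR:  null value in column "reference_id" violates not-null constraint
2017-11-17 16:41:21 UTC V102 DETAIL:  [redacted]
2017-11-17 16:41:21 UTC V102 STATEMENT: [redacted]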

Learn more about PII filtering settings

Example of PII filtering

Hundreds Of Companies Monitor Their Production PostgreSQL Databases With pganalyze

Atlassian
DoorDash
Moody's
fuboTV
Salsify
CounterPath
Ipsy