
Step 1: Create helper function and test

On Crunchy Bridge, the collector retrieves log output directly from the database server using SQL functions.

If the collector connects as the application user, you need to create an additional helper function that grants it access to the log files. Run the following with superuser credentials:

CREATE SCHEMA IF NOT EXISTS pganalyze;
GRANT USAGE ON SCHEMA pganalyze TO application;
CREATE OR REPLACE FUNCTION pganalyze.read_log_file(log_filename text, read_offset bigint, read_length bigint) RETURNS text AS
$$
DECLARE
  result text;
BEGIN
  -- Accept only plain filenames (letters, digits, "_", "." and "-") to
  -- prevent path traversal outside the server's log directory
  IF log_filename !~ '\A[\w\.-]+\Z' THEN
    RAISE EXCEPTION 'invalid log filename';
  END IF;

  -- Read the requested byte range from the log file; runs with the
  -- privileges of the superuser that created the function (SECURITY DEFINER)
  SELECT pg_catalog.pg_read_file(
    pg_catalog.current_setting('data_directory') || '/' || pg_catalog.current_setting('log_directory') || '/' || log_filename,
    read_offset,
    read_length
  ) INTO result;

  RETURN result;
END
$$ LANGUAGE plpgsql VOLATILE SECURITY DEFINER;
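If you want to confirm the function works before running the full collector test, you can call it directly as the application user. The filename below is only an example; substitute an actual file from your server's log_directory:

```sql
-- Hypothetical smoke test: read the first 1000 bytes of a log file
-- ('postgresql-Mon.log' is an example name, not guaranteed to exist)
SELECT pganalyze.read_log_file('postgresql-Mon.log', 0, 1000);
```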

Next, enable file logging on your cluster by setting the following configuration parameters in the dashboard:

  • log_destination: syslog,stderr
  • log_line_prefix: '%m [%p] %q[user=%u,db=%d,app=%a] '
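With this log_line_prefix, each line in the server's log files starts with a timestamp (%m), the backend PID (%p), and the session's user, database, and application name. The values below are purely illustrative:

```
2025-03-05 12:00:00.123 UTC [12345] [user=application,db=postgres,app=psql] LOG:  duration: 1001.000 ms  statement: SELECT pg_sleep(1)
```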

You can then verify that everything is working by running the collector test command:

SELECT run_container('-t -e DB_URL="REPLACE_ME" -e PGA_API_KEY="API_KEY" -e CRUNCHY_BRIDGE_API_KEY="REPLACE_ME" -e DB_ALL_NAMES=1 --privileged quay.io/pganalyze/collector:stable test');

This should produce output similar to the following:

I Running collector test with pganalyze-collector X.XX.X
I [default] Testing statistics collection...
I [default]   Test collection successful for PostgreSQL XX.X
I [default] Submitted full snapshot successfully
I [default] Testing activity snapshots...
I [default] Testing log download...
I [default]   Log test successful

...Test summary

Test successful

Note the "Log test successful" line, which indicates that the helper function works correctly.

You will start seeing log data in pganalyze Log Insights within a few minutes.

We recommend setting up Automated EXPLAIN as a follow-up step, to automatically run EXPLAIN on slow queries in Postgres.

