Step 1: Deploy the Collector
To monitor a Crunchy Bridge database, you run the pganalyze collector directly on your database server using Postgres Container Apps.
We recommend using a role with limited privileges for monitoring. You can use Crunchy Bridge's application user as the monitoring user, or create a separate role. The instructions here assume you are using the application user.
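If you prefer not to reuse the application user, a minimal sketch of creating a dedicated monitoring role follows. The role name `pganalyze_monitoring`, the password placeholder, and the connection limit are illustrative choices, not Crunchy Bridge defaults:

```sql
-- Hypothetical dedicated monitoring role; adjust name, password, and limits as needed
CREATE ROLE pganalyze_monitoring WITH LOGIN PASSWORD 'REPLACE_ME' CONNECTION LIMIT 5;
```

If you go this route, substitute this role for `application` in the GRANT statements and helper function setup below.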
To ensure the application user can access monitoring statistics for all users (not just itself), run the following with superuser credentials:
```sql
GRANT pg_monitor TO application;
```
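Optionally, you can confirm the grant took effect using the standard `pg_has_role` function; `application` is the user granted above:

```sql
-- Should return true once the application user is a member of pg_monitor
SELECT pg_has_role('application', 'pg_monitor', 'member');
```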
Additionally, run the following to allow the application user to read the Postgres server log:
```sql
CREATE SCHEMA IF NOT EXISTS pganalyze;
GRANT USAGE ON SCHEMA pganalyze TO application;

CREATE OR REPLACE FUNCTION pganalyze.read_log_file(log_filename text, read_offset bigint, read_length bigint) RETURNS text AS
$$
DECLARE
  result text;
BEGIN
  IF log_filename !~ '\A[\w\.-]+\Z' THEN
    RAISE EXCEPTION 'invalid log filename';
  END IF;

  SELECT pg_catalog.pg_read_file(
    pg_catalog.current_setting('data_directory') || '/' ||
    pg_catalog.current_setting('log_directory') || '/' ||
    log_filename,
    read_offset,
    read_length
  ) INTO result;

  RETURN result;
END
$$ LANGUAGE plpgsql VOLATILE SECURITY DEFINER;
```
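To sanity-check the helper, you can call it as the application user. The filename below is an assumption; actual log file names depend on your server's `log_filename` setting:

```sql
-- Read the first 1000 bytes of a (hypothetical) log file
SELECT pganalyze.read_log_file('postgresql.log', 0, 1000);
```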
Then, connect to each database that you plan to monitor on this server as a superuser (or equivalent) and run the following to enable the collection of additional column statistics:
```sql
CREATE SCHEMA IF NOT EXISTS pganalyze;
GRANT USAGE ON SCHEMA pganalyze TO application;

CREATE OR REPLACE FUNCTION pganalyze.get_column_stats() RETURNS SETOF pg_stats AS
$$
  /* pganalyze-collector */
  SELECT schemaname, tablename, attname, inherited, null_frac, avg_width,
    n_distinct, NULL::anyarray, most_common_freqs, NULL::anyarray,
    correlation, NULL::anyarray, most_common_elem_freqs, elem_count_histogram
  FROM pg_catalog.pg_stats;
$$ LANGUAGE sql VOLATILE SECURITY DEFINER;
```
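As a quick check that the helper works and returns only statistics, you can query it as the application user:

```sql
-- Returns one row per column with statistics; the value columns replaced
-- with NULL in the function definition come back empty
SELECT schemaname, tablename, attname
FROM pganalyze.get_column_stats()
LIMIT 5;
```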
Note: We never collect actual table data through this method (see the NULL values in the function), but we do collect statistics about the distribution of values in your tables. You can skip creating the get_column_stats helper function if the database contains highly sensitive information and statistics about it should not be collected. This will impact the accuracy of Index Advisor recommendations.
In later steps you can now specify the application user credentials as the DB_URL for the collector.
If you have not yet enabled Postgres Container Apps, connect to your Crunchy Bridge Postgres database, and run the following commands as a superuser:
```sql
CREATE EXTENSION IF NOT EXISTS pgpodman;
```
Run the following command to verify the configuration. For DB_URL, specify the application user credentials:
```sql
SELECT run_container('-t -e DB_URL="REPLACE_ME" -e PGA_API_KEY="API_KEY" -e DB_ALL_NAMES=1 quay.io/pganalyze/collector:stable test');
```
This should return output like this:
```
                                  run_container
--------------------------------------------------------------------------------
 I Running collector test with pganalyze-collector X.XX.X
 I [default] Testing statistics collection...
 I [default] Test collection successful for PostgreSQL XX.X
 I [default] Test snapshot successful (27.9 KB)
 I [default] Testing activity snapshots...
 I [default] Test snapshot successful (2.28 KB)
 I [default] Testing log download...
 I [default] Test snapshot successful (1.45 KB)
 I [default] Log test successful
 Test successful. View servers in pganalyze:
 - [default]: https://app.pganalyze.com/servers/SERVER_ID
(1 row)
```
With a successful test, you can now start the collector container in the background:
```sql
SELECT run_container('-d -t -e DB_URL="REPLACE_ME" -e PGA_API_KEY="API_KEY" -e DB_ALL_NAMES=1 quay.io/pganalyze/collector:stable');
```
Once you've confirmed the install is successful and you're receiving query data in pganalyze, we recommend setting up Log Insights as a follow-up step, to automatically track log events in your database.