Step 4: Configure the Collector

Configuring the collector on Amazon EKS

To start, download the example values.yaml file for the Helm chart:

helm show values pganalyze/pganalyze-collector > values.yaml

Update the serviceAccount section in the values.yaml file to use the role you created in the previous step. For the name, use "pganalyze-service-account", as specified in the Pod Identity association (for Pod Identity) or the assume role policy (for IRSA):

serviceAccount:
  # -- Specifies whether a service account should be created
  create: true
  # -- Annotations to add to the service account
  annotations: {}
  # Update above annotation if you're using IRSA
  # annotations:
  #   eks.amazonaws.com/role-arn: "arn:aws:iam::<aws-account-id>:role/pganalyzeServiceAccountRole"
  # -- The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: "pganalyze-service-account"

Next, update the extraEnv section in the values.yaml file with the following. This example uses the extraEnv field for demonstration purposes only; for production environments, or whenever you're handling sensitive data, follow your organization's security best practices and use Kubernetes Secrets instead (see below).

extraEnv:
  PGA_API_KEY: your-organization-api-key
  DB_HOST: your_database_host
  DB_NAME: your_database_name
  DB_USERNAME: your_monitoring_user
  DB_PASSWORD: your_monitoring_user_password
  DB_SSLROOTCERT: rds-ca-global
  DB_SSLMODE: verify-full

Fill in the values step-by-step:

  1. The PGA_API_KEY can be found on the pganalyze Settings page for your organization, under the API keys tab
  2. The DB_HOST is the hostname / endpoint of your RDS instance (for Amazon Aurora you can use the cluster endpoint in many cases; see the details below)
  3. The DB_NAME is the name of the database on the Postgres instance you want to monitor
  4. The DB_USERNAME and DB_PASSWORD should be the credentials of the monitoring user we created in Step 2 (you can sanity-check them with the psql command after this list)
  5. The DB_SSLROOTCERT and DB_SSLMODE values are the recommended SSL connection configuration, which you can usually keep as specified above
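
Before deploying, you can optionally sanity-check the credentials and SSL settings with psql. This is just a local verification sketch: the connection values are the placeholders from above, and the sslrootcert path assumes you've downloaded the AWS RDS global CA bundle (e.g. global-bundle.pem) into your working directory:

psql "host=your_database_host dbname=your_database_name user=your_monitoring_user sslmode=verify-full sslrootcert=global-bundle.pem" -c "SELECT 1"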

When using Kubernetes secrets, it's easier to use the INI file style for the config. You can find an example of this in the Multiple DB Instances example above, where it is used as the value of the CONFIG_CONTENTS environment variable.

For instance, you can create a secret called pganalyze-secrets and add a key pganalyze-collector.conf, setting the INI collector config as its value.

apiVersion: v1
kind: Secret
metadata:
  name: pganalyze-secrets
# stringData accepts plain-text values (the data field would require base64 encoding)
stringData:
  pganalyze-collector.conf: |
    [pganalyze]
    api_key = your_api_key
    [instance1]
    db_host = your_database_host
    db_name = your_database_name
    ...
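
Alternatively, if you keep the INI config in a local file, you can create the same secret with kubectl instead of applying a YAML manifest. This assumes the config lives in ./pganalyze-collector.conf:

kubectl create secret generic pganalyze-secrets \
  --from-file=pganalyze-collector.conf=./pganalyze-collector.conf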

Then, create a secret volume config-volume and mount it at the /config path, so that the config file is placed in the default config file location:

# -- List of volumes to attach to the pod
volumes:
  - name: scratch
    emptyDir: {}
  - name: config-volume
    secret:
      secretName: pganalyze-secrets

# -- List of volume mounts to attach to the container
volumeMounts:
  - mountPath: /tmp
    name: scratch
    subPath: tmp
  - mountPath: /state
    name: scratch
    subPath: state
  - mountPath: /config
    name: config-volume

With this, the config-volume volume will contain a pganalyze-collector.conf file whose content is the value of the secret, mounted at /config/pganalyze-collector.conf.
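
After the chart is installed (see the steps below), you can confirm the mount by reading the file back from a running collector pod; the pod name here is a placeholder:

kubectl exec <collector-pod-name> -- cat /config/pganalyze-collector.conf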

Note: The pganalyze collector supports further optional settings (e.g. AWS access keys, multiple database names).

Handling Amazon Aurora clusters vs instances

In the case of Amazon Aurora, the collector automatically resolves cluster endpoints to the underlying writer instance.

DB_HOST: mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com

This will only monitor the writer instance. If you also want to monitor a reader instance, you'll need to use the Multiple Instances method above and specify the reader instance as a second instance within the CONFIG_CONTENTS environment variable.

[pganalyze]
api_key = 'your_pga_organization_api_key'

[writer_instance]
db_host = mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com
...

[reader_instance]
db_host = mydbcluster.cluster-ro-123456789012.us-east-1.rds.amazonaws.com
...
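
For example, with the extraEnv approach shown earlier, one way to pass this config is to set CONFIG_CONTENTS as a multi-line value. This is a sketch that mirrors the template above; for production, prefer sourcing it from a Kubernetes secret as described earlier:

extraEnv:
  CONFIG_CONTENTS: |
    [pganalyze]
    api_key = 'your_pga_organization_api_key'

    [writer_instance]
    db_host = mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com
    ...

    [reader_instance]
    db_host = mydbcluster.cluster-ro-123456789012.us-east-1.rds.amazonaws.com
    ...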

Alternatively, you can install the Helm chart under a separate release name to monitor a reader instance. Use the cluster-ro endpoint as the DB_HOST environment variable:

DB_HOST: mydbcluster.cluster-ro-123456789012.us-east-1.rds.amazonaws.com

If you have multiple readers you want to monitor, you either need to specify them using the CONFIG_CONTENTS environment variable, or install one pganalyze collector Helm release for each instance.
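
For the separate-release approach, the install might look like this; values-reader.yaml is a hypothetical second values file whose DB_HOST points at the cluster-ro endpoint:

helm upgrade --install my-collector-reader pganalyze/pganalyze-collector --values=values-reader.yaml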

Install a Helm chart

You can now install the Helm chart to your EKS cluster. First, add the pganalyze repository:

helm repo add pganalyze https://charts.pganalyze.com/
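
If you've added the repository before, refresh your local chart index so Helm picks up the latest collector chart:

helm repo update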

Then, install the chart, using the values.yaml file created earlier:

helm upgrade --install my-collector pganalyze/pganalyze-collector --values=values.yaml

Adjust the namespace if needed (see the example below). You can re-run the same command whenever you make a change to the values.yaml file.
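
For example, to install into a dedicated namespace (the namespace name pganalyze is just an example), pass the standard Helm flags:

helm upgrade --install my-collector pganalyze/pganalyze-collector --values=values.yaml --namespace pganalyze --create-namespace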

To verify that the install went well, check the deployment:

$ kubectl get deployment
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
my-collector-pganalyze-collector   1/1     1            1           1m10s

The deployment will create one pod for the collector. Check the pod name and obtain the logs to make sure that the collector is running successfully:

$ kubectl get pods
NAME                                                READY   STATUS    RESTARTS   AGE
my-collector-pganalyze-collector-7d599b49c8-dgxzk   1/1     Running   0          1m10s
$ kubectl logs my-collector-pganalyze-collector-7d599b49c8-dgxzk
I [default] Submitted compact snapshots successfully: 6 activity, 2 logs

The Submitted compact snapshots successfully message indicates that you have configured the collector correctly.
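
If you'd like to keep watching the output without copying the pod name each time, you can also stream logs through the deployment (using the deployment name from above):

kubectl logs -f deployment/my-collector-pganalyze-collector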

Your setup is complete. The dashboard will start showing data within 15 minutes.

Once you've confirmed the install is successful and you're receiving query data in pganalyze, we recommend setting up Log Insights as a follow-up step, to automatically track log events in your database.

