Step 4: Configure the Collector
Configuring the collector on Amazon ECS
To start, create SSM secrets for storing the pganalyze API key and database password:
aws ssm put-parameter --name /pganalyze/DB_PASSWORD --type SecureString --value "YOUR_MONITORING_USER_PASSWORD"
aws ssm put-parameter --name /pganalyze/PGA_API_KEY --type SecureString --value "YOUR_PGANALYZE_API_KEY"
Replace YOUR_PGANALYZE_API_KEY with the API key from your organization's Settings page (under the API Keys tab), and YOUR_MONITORING_USER_PASSWORD with the monitoring user password for your database.
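If you want to confirm the parameters were stored correctly, you can read them back; the --with-decryption flag reveals the SecureString value (this is a verification step, not part of the pganalyze setup itself):

```shell
# Read back a stored SecureString parameter to confirm it was saved correctly
aws ssm get-parameter \
  --name /pganalyze/PGA_API_KEY \
  --with-decryption \
  --query Parameter.Value \
  --output text
```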
Now, save the following ECS task definition to a file, for example pganalyze_task.json:
{
    "family": "pganalyze-fargate",
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "executionRoleArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/pganalyzeTaskRole",
    "taskRoleArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/pganalyzeTaskRole",
    "networkMode": "awsvpc",
    "memory": "512",
    "cpu": "256",
    "containerDefinitions": [
        {
            "name": "pganalyze",
            "image": "quay.io/pganalyze/collector:stable",
            "essential": true,
            "environment": [
                {"name": "DB_HOST", "value": "your_database_host"},
                {"name": "DB_USERNAME", "value": "your_monitoring_user"},
                {"name": "DB_NAME", "value": "your_database_name"},
                {"name": "AWS_REGION", "value": "us-east-1"},
                {"name": "AWS_INSTANCE_ID", "value": "your_rds_instance_id"}
            ],
            "secrets": [
                {"name": "PGA_API_KEY", "valueFrom": "/pganalyze/PGA_API_KEY"},
                {"name": "DB_PASSWORD", "valueFrom": "/pganalyze/DB_PASSWORD"}
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/pganalyze",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "pganalyze"
                }
            },
            "user": "1000",
            "readonlyRootFilesystem": false,
            "mountPoints": []
        }
    ]
}
Make sure to modify the values for DB_HOST, DB_USERNAME, DB_NAME, AWS_REGION, and AWS_INSTANCE_ID to be correct for your RDS instance or Aurora cluster. Also adjust YOUR_ACCOUNT_ID to be your AWS account ID (in the executionRoleArn and taskRoleArn fields).
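Note that the pganalyzeTaskRole referenced above must allow ECS to pull the container image, write to CloudWatch Logs, and read the two SSM parameters. A minimal sketch of granting those permissions, assuming the role already exists and trusts ecs-tasks.amazonaws.com (adjust the region, account ID, and policy name to your setup):

```shell
# Attach the standard ECS task execution policy (image pull + log writing)
aws iam attach-role-policy \
  --role-name pganalyzeTaskRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy

# Add an inline policy so the execution role can read the two SSM parameters
# created earlier (the policy name "pganalyze-ssm-read" is just an example)
aws iam put-role-policy \
  --role-name pganalyzeTaskRole \
  --policy-name pganalyze-ssm-read \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "ssm:GetParameters",
      "Resource": "arn:aws:ssm:us-east-1:YOUR_ACCOUNT_ID:parameter/pganalyze/*"
    }]
  }'
```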
Handling Amazon Aurora clusters vs instances
In the case of Amazon Aurora, the collector automatically resolves cluster endpoints to the underlying writer instance, so you can set DB_HOST to the cluster endpoint:
{"name": "DB_HOST", "value": "mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com"},
This will only monitor the writer instance. If you also want to monitor a reader instance, you'll need to use the Multiple Instances method above and specify the reader instance as a second instance within the pganalyze_collector.conf file, then update the /pganalyze/CONFIG_CONTENTS SSM secret:
[pganalyze]
api_key = 'your_pga_organization_api_key'
[writer_instance]
db_host = mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com
...
[reader_instance]
db_host = mydbcluster.cluster-ro-123456789012.us-east-1.rds.amazonaws.com
...
Alternatively, you can run a separate Docker container to monitor a reader instance. Use the cluster-ro endpoint as the DB_HOST environment variable:
{"name": "DB_HOST", "value": "mydbcluster.cluster-ro-123456789012.us-east-1.rds.amazonaws.com"},
If you have multiple readers you want to monitor, you either need to add them to the pganalyze_collector.conf file and update the /pganalyze/CONFIG_CONTENTS SSM secret, or run one pganalyze collector Docker container for each instance.
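After editing pganalyze_collector.conf locally, one way to update the SSM secret is to overwrite it with the new file contents (assuming the /pganalyze/CONFIG_CONTENTS parameter already exists):

```shell
# Overwrite the existing SecureString parameter with the edited config file
aws ssm put-parameter \
  --name /pganalyze/CONFIG_CONTENTS \
  --type SecureString \
  --value "$(cat pganalyze_collector.conf)" \
  --overwrite
```

The running collector task will not pick up the change automatically; stop and re-launch the task so it reads the updated secret.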
Registering the task and launching it
We can now register the task definition and create the log group:
aws ecs register-task-definition --cli-input-json file://pganalyze_task.json
aws logs create-log-group --log-group-name /ecs/pganalyze
And then launch the task like this:
aws ecs run-task --task-definition pganalyze-fargate --launch-type FARGATE --platform-version 1.3.0 --cluster test-cluster --network-configuration "awsvpcConfiguration={assignPublicIp=ENABLED,subnets=[subnet-YOUR_SUBNET],securityGroups=[sg-YOUR_SECURITYGROUP]}"
You will need to make sure that subnet-YOUR_SUBNET and sg-YOUR_SECURITYGROUP are correctly specified.
To verify that the task is running successfully, first retrieve the task ID:
$ aws ecs list-tasks --cluster test-cluster
{
    "taskArns": [
        "arn:aws:ecs:us-east-1:ACCOUNTID:task/TASKID"
    ]
}
Now you can request the logs, which should look like this:
$ aws logs get-log-events --log-group-name /ecs/pganalyze --log-stream-name pganalyze/pganalyze/TASKID
{
    "nextForwardToken": "...",
    "events": [
        {
            "ingestionTime": 1564856657429,
            "timestamp": 1564856653493,
            "message": "I [default] Submitted compact snapshots successfully: 5 activity, 2 logs"
        },
        ...
The Submitted compact snapshots successfully message indicates that you have configured the collector correctly.
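Rather than paging through get-log-events output, you can also search all streams in the log group for the success message directly (assuming the /ecs/pganalyze log group name from the task definition):

```shell
# Search the whole log group for collector success messages
aws logs filter-log-events \
  --log-group-name /ecs/pganalyze \
  --filter-pattern '"Submitted compact snapshots successfully"' \
  --query 'events[].message' \
  --output text
```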
Your setup is complete. The dashboard will start showing data within 15 minutes.
Once you've confirmed the install is successful and you're receiving query data in pganalyze, we recommend setting up Log Insights as a follow-up step, to automatically track log events in your database.