Set up metrics ingestion using AWS Distro for OpenTelemetry
This section describes how to configure the AWS Distro for OpenTelemetry (ADOT) Collector
to scrape metrics from a Prometheus-instrumented application and send them to Amazon
Managed Service for Prometheus (AMP). For more information about
the ADOT Collector, see AWS Distro for OpenTelemetry.
Collecting Prometheus metrics with ADOT involves two OpenTelemetry components: the Prometheus Receiver and the AWS Prometheus Remote Write Exporter.
You can configure the Prometheus Receiver using your existing Prometheus configuration
to perform service discovery
and metric scraping. The Prometheus Receiver scrapes metrics in the Prometheus exposition
format.
Any applications or endpoints that you want to scrape should be configured with the
Prometheus client library.
The Prometheus Receiver supports the full set of Prometheus scraping and re-labeling
configurations described in
Configuration.
The AWS Prometheus Remote Write Exporter uses the remote_write
endpoint to send the scraped metrics to your AMP workspace. The HTTP
requests that export data are signed with AWS SigV4, the AWS protocol for secure
authentication. For more information, see Signature
Version 4 signing process.
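For illustration only, the core of SigV4 is a per-request signing key derived from the secret key, date, Region, and service. The following is a sketch in Python using only the standard library; the credential and string-to-sign values are made up, and in practice the exporter delegates all of this to the AWS SDK.

```python
# Sketch of the SigV4 signing-key derivation (illustrative only; real
# clients such as the ADOT exporter delegate this to the AWS SDK).
# The credential values below are made-up placeholders.
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the per-request signing key as defined by SigV4."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Sign a (simplified) string-to-sign with the derived key.
key = derive_signing_key("EXAMPLE_SECRET_KEY", "20201127", "us-west-2", "aps")
signature = hmac.new(key, b"example-string-to-sign", hashlib.sha256).hexdigest()
print(len(signature))  # a SHA-256 hex digest is 64 characters
```

Note that the service name here is aps, matching the exporter configuration later in this section.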
The collector
automatically discovers Prometheus metrics endpoints on Amazon EKS and uses the configuration
found in
<kubernetes_sd_config>.
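Taken together, the two components form a single metrics pipeline in the collector configuration. The following is a minimal sketch; the static scrape target, endpoint, and Region values are placeholders to adapt to your environment.

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: sample
          static_configs:
            - targets: ['localhost:8080']

exporters:
  awsprometheusremotewrite:
    endpoint: "https://aws-managed-prometheus-endpoint/v1/api/remote_write"
    aws_auth:
      service: "aps"
      region: "user-region"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [awsprometheusremotewrite]
```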
The following demo is an example of this configuration on a cluster running Amazon
Elastic Kubernetes Service (Amazon EKS) or self-managed Kubernetes. To perform these steps,
you must have AWS credentials from any of the potential options in the default AWS
credentials chain. For more information, see
Configuring
the AWS SDK for Go. This demo uses a sample app that is used for integration tests of the process.
The sample app exposes metrics at the /metrics
endpoint, just as the Prometheus client library does.
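For reference, a /metrics endpoint served by a Prometheus client library returns plain text in the exposition format, along these lines (the metric name and value below are illustrative):

```
# HELP test_gauge0 This is my gauge
# TYPE test_gauge0 gauge
test_gauge0 0.0
```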
Prerequisites
Before you begin the ingestion setup steps below, you must set up your IAM role for the service account and the trust policy.
To set up the IAM role for service account and trust policy
-
Create the IAM role for the service account by following the steps in Set up service roles for the ingestion of metrics from Amazon EKS clusters.
The ADOT Collector will use this role when it scrapes and exports metrics.
-
Next, edit the trust policy. Open the IAM console at https://console.aws.amazon.com/iam/.
-
In the left navigation pane, choose Roles and find the amp-iamproxy-ingest-role that you created in step 1.
-
Choose the Trust relationships tab and choose Edit trust relationship.
-
In the trust relationship policy JSON, replace aws-amp with adot-col and then choose Update Trust Policy. Your resulting trust policy should look like the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::account-id:oidc-provider/oidc.eks.aws_region.amazonaws.com/id/openid"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.aws_region.amazonaws.com/id/openid:sub": "system:serviceaccount:adot-col:amp-iamproxy-ingest-service-account"
        }
      }
    }
  ]
}
-
Choose the Permissions tab and make sure that the following permissions policy is attached to the role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "aps:RemoteWrite",
        "aps:GetSeries",
        "aps:GetLabels",
        "aps:GetMetricMetadata"
      ],
      "Resource": "*"
    }
  ]
}
Enabling Prometheus metric collection
To enable Prometheus collection on an Amazon EKS or Kubernetes cluster
-
Fork and clone the sample app from the repository at aws-otel-community
. Then run the following commands.
cd ./sample-apps/prometheus
docker build . -t prometheus-sample-app:latest
-
Push this image to a registry such as Amazon ECR or Docker Hub.
-
Deploy the sample app in the cluster by copying this Kubernetes configuration and applying it. Change the image to the image that you just pushed by replacing {{PUBLIC_SAMPLE_APP_IMAGE}} in the prometheus-sample-app.yaml file.
curl https://raw.githubusercontent.com/aws-observability/aws-otel-collector/main/examples/eks/prometheus-pipeline/prometheus-sample-app.yaml -o prometheus-sample-app.yaml
kubectl apply -f prometheus-sample-app.yaml
-
Enter the following command to verify that the sample app has started. In the output of the command, you will see prometheus-sample-app in the NAME column.
kubectl get all -n aoc-prometheus-pipeline-demo
-
Start a default instance of the ADOT Collector. To do so, first enter the following command to pull the Kubernetes configuration for ADOT Collector.
curl https://raw.githubusercontent.com/aws-observability/aws-otel-collector/main/examples/eks/prometheus-pipeline/eks-prometheus-daemonset.yaml -o eks-prometheus-daemonset.yaml
Then edit the template file, substituting the remote_write endpoint for your AMP workspace for YOUR_ENDPOINT and your Region for YOUR_REGION. Use the remote_write endpoint that is displayed in the AMP console when you look at your workspace details. You'll also need to change YOUR_ACCOUNT_ID in the service account section of the Kubernetes configuration to your AWS account ID.
In this example, the ADOT Collector configuration uses an annotation (scrape=true) to indicate which target endpoints to scrape. This allows the ADOT Collector to distinguish the sample app endpoint from kube-system endpoints in your cluster. You can remove this from the re-label configurations if you want to scrape a different sample app.
-
Enter the following command to deploy the ADOT Collector.
kubectl apply -f eks-prometheus-daemonset.yaml
-
Enter the following command to verify that the ADOT Collector has started. Look for adot-col in the NAMESPACE column.
kubectl get pods -n adot-col
-
Verify that the pipeline works by using the logging exporter. Our example template is already integrated with the logging exporter. Enter the following commands.
kubectl get pods -A
kubectl logs -n adot-col name_of_your_adot_collector_pod
Some of the scraped metrics from the sample app will look like the following example.
Resource labels:
     -> service.name: STRING(kubernetes-service-endpoints)
     -> host.name: STRING(192.168.16.238)
     -> port: STRING(8080)
     -> scheme: STRING(http)
InstrumentationLibraryMetrics #0
Metric #0
Descriptor:
     -> Name: test_gauge0
     -> Description: This is my gauge
     -> Unit:
     -> DataType: DoubleGauge
DoubleDataPoints #0
StartTime: 0
Timestamp: 1606511460471000000
Value: 0.000000
-
To test whether AMP received the metrics, use awscurl. This tool enables you to send HTTP requests through the command line with AWS SigV4 authentication, so you must have AWS credentials set up locally with the correct permissions to query from AMP. For instructions on installing awscurl, see awscurl.
In the following command, replace AMP_REGION and AMP_ENDPOINT with the information for your AMP workspace.
awscurl --service="aps" --region="AMP_REGION" "https://AMP_ENDPOINT/api/v1/query?query=adot_test_gauge0"
{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"adot_test_gauge0"},"value":[1606512592.493,"16.87214000011479"]}]}}
If you receive a metric as the response, your pipeline setup has been successful and the metric has propagated from the sample app into AMP.
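If you want to check the result programmatically rather than by eye, the query response follows the Prometheus HTTP API format and can be unpacked with any JSON parser. A sketch in Python, using the sample response shown in this demo:

```python
import json

# Sample instant-query response in the Prometheus HTTP API format
# (this is the response body from the awscurl demo above).
response = json.loads(
    '{"status":"success","data":{"resultType":"vector","result":'
    '[{"metric":{"__name__":"adot_test_gauge0"},'
    '"value":[1606512592.493,"16.87214000011479"]}]}}'
)

assert response["status"] == "success"
for series in response["data"]["result"]:
    name = series["metric"]["__name__"]
    timestamp, value = series["value"]  # the value is returned as a string
    print(name, timestamp, float(value))
```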
Cleaning up
To clean up this demo, enter the following commands.
kubectl delete namespace aoc-prometheus-pipeline-demo
kubectl delete namespace adot-col
Advanced configuration
The Prometheus Receiver supports the full set of Prometheus scraping and re-labeling
configurations described in
Configuration.
The configuration for the Prometheus Receiver includes your service discovery, scraping configurations, and re-labeling configurations. The receiver configuration looks like the following.
receivers:
  prometheus:
    config:
      [Your Prometheus configuration]
The following is an example configuration.
receivers:
  prometheus:
    config:
      global:
        scrape_interval: 1m
        scrape_timeout: 10s
      scrape_configs:
        - job_name: kubernetes-service-endpoints
          sample_limit: 10000
          kubernetes_sd_configs:
            - role: endpoints
          tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            insecure_skip_verify: true
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
If you have an existing Prometheus configuration, you must replace the $
characters with $$
to avoid having the values replaced with environment variables. This is especially
important for the
replacement value of the relabel_configurations. For example, if you start with the
following relabel_configuration:
relabel_configs:
  - source_labels: [__meta_kubernetes_ingress_scheme, __address__, __meta_kubernetes_ingress_path]
    regex: (.+);(.+);(.+)
    replacement: ${1}://${2}${3}
    target_label: __param_target
It would become the following:
relabel_configs:
  - source_labels: [__meta_kubernetes_ingress_scheme, __address__, __meta_kubernetes_ingress_path]
    regex: (.+);(.+);(.+)
    replacement: $${1}://$${2}$${3}
    target_label: __param_target
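The escaping rule can be applied mechanically. A minimal sketch in Python, assuming the configuration is handled as a plain string; escape_dollars is a hypothetical helper, not part of any AWS or OpenTelemetry tooling:

```python
# Escape "$" in Prometheus relabel replacement values so the collector,
# which treats "$" as the start of an environment-variable reference,
# passes the value through unchanged.
def escape_dollars(prometheus_config: str) -> str:
    """Replace each "$" with "$$"."""
    return prometheus_config.replace("$", "$$")

print(escape_dollars("replacement: ${1}://${2}${3}"))
# replacement: $${1}://$${2}$${3}
```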
AWS Prometheus remote write exporter
The configuration for the AWS Prometheus Remote Write Exporter is simpler than that of the Prometheus Receiver. At this stage in the pipeline, metrics have already been ingested, and we're ready to export this data to AMP. The minimum requirement for a successful configuration to communicate with AMP is shown in the following example.
exporters:
  awsprometheusremotewrite:
    endpoint: "https://aws-managed-prometheus-endpoint/v1/api/remote_write"
    aws_auth:
      service: "aps"
      region: "user-region"
This configuration sends an HTTPS request that is signed by AWS SigV4 using AWS credentials
from the default AWS credentials chain.
For more information, see
Configuring
the AWS SDK for Go. You must specify the service to be aps.
Regardless of the method of deployment, the ADOT Collector must have access to one of the listed options in the default AWS credentials chain. The AWS Prometheus Remote Write Exporter depends on the AWS SDK for Go to fetch credentials and authenticate. You must ensure that these credentials have remote write permissions for AMP.
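For example, when the collector runs on Amazon EKS, one common way to satisfy the default credentials chain is an IAM role for the service account (IRSA). A sketch of the relevant Kubernetes fragment, using the role and service account names from this demo; YOUR_ACCOUNT_ID is a placeholder:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: amp-iamproxy-ingest-service-account
  namespace: adot-col
  annotations:
    # The EKS pod identity webhook injects credentials for this role
    # into pods that use this service account.
    eks.amazonaws.com/role-arn: arn:aws:iam::YOUR_ACCOUNT_ID:role/amp-iamproxy-ingest-role
```

This is why the trust policy earlier in this section scopes the role to system:serviceaccount:adot-col:amp-iamproxy-ingest-service-account.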