otelcol.processor.resourcedetection eks detector node name is empty error #5481

@armsnyder

Description

Component(s)

otelcol.processor.resourcedetection

What's wrong?

When installing Alloy via the Helm chart, the otelcol.processor.resourcedetection component fails to start when the eks detector is enabled.

Component error

component shut down with error: no components started successfully: can't get K8s Instance Metadata; node name is empty

Steps to reproduce

Install Alloy using the latest Helm chart (1.6.0) and declare an otelcol.processor.resourcedetection block with the eks detector enabled.
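For concreteness, here is a minimal repro sketch, assuming the grafana/alloy chart's alloy.configMap values (value paths may differ in your setup):

# values.yaml (sketch; value paths assume the grafana/alloy Helm chart)
alloy:
  configMap:
    create: true
    content: |
      // Paste the Alloy configuration from the "Configuration" section below.

Then install with helm install alloy grafana/alloy --version 1.6.0 -f values.yaml.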

System information

No response

Software version

Alloy 1.13.0, Alloy Helm chart 1.6.0

Configuration

otelcol.processor.resourcedetection "default" {
  detectors = ["env", "eks", "ec2", "system"]
  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

Logs

No response

Workarounds

Workaround 1: Roll back Alloy to 1.9.0.

I rolled back to Alloy 1.9.0, which resolves the component error. Alloy 1.10.0 has the same error, so the regression appears to begin there.

Workaround 2: Use the kubernetes_node detector.

This config works:

otelcol.processor.resourcedetection "default" {
  // TODO(asnyder): Use the eks detector once this issue is fixed:
  // https://github.com/grafana/alloy/issues/5481
  detectors = ["env", "kubernetes_node", "ec2", "system"]
  kubernetes_node {
    node_from_env_var = "HOSTNAME"
  }
  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

Additional Context

The Helm chart does not set the K8S_NODE_NAME environment variable, which is the default the otelcol.processor.resourcedetection eks detector reads the node name from.
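For reference, a sketch of setting the variable through chart values, assuming the chart's alloy.extraEnv passthrough (though, as shown below, setting the variable alone did not resolve the error for me):

alloy:
  extraEnv:
    # Expose the node name to the pod; the eks detector reads it from
    # K8S_NODE_NAME by default.
    - name: K8S_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName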

The upstream opentelemetry-collector-contrib detector attempts to read the node name from an environment variable:

https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/fe329e4f6ebb2a7c8115730206c540abacde9b11/processor/resourcedetectionprocessor/internal/aws/eks/detector.go#L73

When that environment variable is empty, the NewProvider function returns an error here:

https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/fe329e4f6ebb2a7c8115730206c540abacde9b11/internal/metadataproviders/aws/eks/metadata.go#L89
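The check there is roughly the following (a paraphrase of the linked metadata.go, not verbatim):

// Paraphrased sketch of the upstream NewProvider check.
if nodeName == "" {
    return nil, errors.New("can't get K8s Instance Metadata; node name is empty")
}

which matches the component error above.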

However, even after adding the K8S_NODE_NAME environment variable to the pod via a manual DaemonSet edit:

    - name: K8S_NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName

I am still seeing an error:

component shut down with error: no components started successfully: can't get K8s Instance Metadata; node name is empty

This leads me to believe that the eks metadata provider is somehow not being configured correctly, beginning in Alloy 1.10.0.

Looking into the opentelemetry-collector-contrib history, I see that the eks provider gained a node_from_env_var config field in v0.145.0:

open-telemetry/opentelemetry-collector-contrib@4c1d8af

That field has no default, and there is no corresponding field in Alloy's eks config block.
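If Alloy exposed the upstream field, the fix might look like this sketch (hypothetical: Alloy's eks block does not currently accept this attribute):

otelcol.processor.resourcedetection "default" {
  detectors = ["env", "eks", "ec2", "system"]
  eks {
    // Hypothetical attribute mirroring upstream's node_from_env_var;
    // it does not exist in Alloy today.
    node_from_env_var = "K8S_NODE_NAME"
  }
  output {
    traces = [otelcol.processor.batch.default.input]
  }
}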
