ab-ghosh/knative-otel-integrator

Knative OpenTelemetry (OTEL) Controller

A Kubernetes operator built with Knative's controller framework that automatically applies custom labels to Deployments based on Labeler custom resources.

Overview

The Labeler Controller watches for Labeler custom resources and automatically applies specified labels to all Deployments in the same namespace. This is useful for:

  • Automated label management across multiple deployments
  • Enforcing organizational labeling standards
  • Dynamic label updates without manual intervention
  • Centralized label configuration

Architecture

This project follows the standard Kubernetes Operator pattern with three main components:

1. Custom Resource Definition (CRD)

Defines the Labeler resource type and its schema.

2. Controller

A Knative-based controller that:

  • Watches for Labeler custom resources
  • Lists all Deployments in the Labeler's namespace
  • Applies/updates labels on those Deployments
  • Reconciles on CR create, update, delete, and periodic resync

3. Custom Resources (CR)

Instances of Labeler that specify which labels to apply.

┌─────────────────────────────────────────────────────┐
│  User creates Labeler CR                            │
│  ↓                                                  │
│  Controller detects CR                              │
│  ↓                                                  │
│  Lists Deployments in namespace                     │
│  ↓                                                  │
│  Patches each Deployment with custom labels         │
└─────────────────────────────────────────────────────┘

Prerequisites

  • Kubernetes cluster (v1.25+)
  • kubectl configured to access your cluster
  • ko for building and deploying Go applications
  • Helm 3 (for Prometheus setup)
  • Go 1.25+ (for development)

Complete Installation Guide

This guide covers the complete setup from scratch, including Prometheus monitoring.

💡 Quick Install: Use the automated installation script:

# Install without Prometheus
./install.sh

# Install with Prometheus monitoring
./install.sh --with-prometheus

# Skip Prometheus checks entirely
./install.sh --skip-prometheus

Step 1: Install Prometheus Operator (Optional but Recommended)

The controller exposes Prometheus metrics for production monitoring. Install Prometheus Operator first to enable metrics collection.

Add Helm Repository

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

Create Monitoring Namespace

kubectl create namespace monitoring

Install kube-prometheus-stack

This includes Prometheus Operator, Prometheus, Alertmanager, and Grafana:

helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitoring

Verify Prometheus Installation

kubectl get pods -n monitoring

Wait for all pods to be ready (this may take 1-2 minutes):

kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=prometheus -n monitoring --timeout=300s

Verify ServiceMonitor CRD

kubectl get crd servicemonitors.monitoring.coreos.com

Expected output:

NAME                                    CREATED AT
servicemonitors.monitoring.coreos.com   2025-01-XX...

Note: If you don't want metrics/monitoring, you can skip this step and continue to Step 2.


Step 2: Install the Labeler CRD

kubectl apply -f config/crd/clusterops.io_labelers.yaml

Verify the CRD is installed:

kubectl get crd labelers.clusterops.io

Expected output:

NAME                      CREATED AT
labelers.clusterops.io    2025-10-26T...

Step 3: Create Labeler Namespace

kubectl create namespace labeler

Step 4: Deploy the Controller

Deploy RBAC, Controller, Metrics Service, ServiceMonitor, and Example CR:

ko apply -Rf config/ -- -n labeler

This deploys:

  • ✅ ServiceAccount (clusterops)
  • ✅ Role and RoleBinding (permissions for Deployments and Labelers)
  • ✅ Controller Deployment (with metrics enabled on port 9090)
  • ✅ Config-observability ConfigMap (Prometheus configuration)
  • ✅ Metrics Service (label-controller-metrics)
  • ✅ ServiceMonitor (tells Prometheus where to scrape)
  • ✅ Example Labeler CR

Step 5: Verify Installation

Check Controller is Running

kubectl get pods -n labeler

Expected output:

NAME                                READY   STATUS    RESTARTS   AGE
label-controller-xxxxx-yyyyy        1/1     Running   0          30s

Check Labeler CR

kubectl get labeler -n labeler

Expected output:

NAME                  AGE
example-labeler       30s

Check Metrics Service

kubectl get svc label-controller-metrics -n labeler

Expected output:

NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
label-controller-metrics    ClusterIP   10.96.xxx.xxx   <none>        9090/TCP   30s

Check ServiceMonitor (if Prometheus is installed)

kubectl get servicemonitor -n labeler

Expected output:

NAME                 AGE
labeler-controller   30s

Step 6: Verify Prometheus Integration (Optional)

If you installed Prometheus in Step 1, verify it's scraping metrics:

Port-forward to Prometheus UI

kubectl port-forward -n monitoring svc/prometheus-operated 9090:9090

Open Prometheus UI

open http://localhost:9090

Or visit: http://localhost:9090

Check Targets

  1. Go to: Status → Targets
  2. Look for: labeler/labeler-controller
  3. Status should be: UP (green)

Test Metrics Query

Go to Graph tab and run:

kn_workqueue_depth{name="main.Reconciler"}

You should see metrics data returned.

Troubleshooting: If target shows DOWN or no data, see the Prometheus Metrics section below.


Step 7: Verify Labels are Applied

Check that your Deployments now have the custom labels:

kubectl get deployment -n labeler -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels}{"\n"}{end}'

Example output:

label-controller    {"clusterops.io/release":"devel","environment":"production","managed-by":"labeler-controller","team":"platform"}

✅ Installation Complete!

Your Knative Labeler Controller is now fully installed and running with metrics enabled.

What's been set up:

  • ✅ Labeler CRD installed
  • ✅ Controller running and reconciling
  • ✅ Example labels applied to Deployments
  • ✅ Prometheus metrics exposed (if Prometheus installed)
  • ✅ Automatic monitoring configured

Usage

Update Labels

Edit your Labeler CR to change or add labels:

spec:
  customLabels:
    environment: "staging"     # Changed
    team: "platform"
    version: "v2.0"            # Added

Apply the changes:

kubectl apply -f config/cr.yaml -n labeler

The controller will automatically detect the change and update all Deployment labels.

View Controller Logs

kubectl logs -n labeler -l app=label-controller -f

Example log output:

{"severity":"INFO","message":"Reconciling Labeler : example-labeler"}
{"severity":"INFO","message":"Found 1 deployments in namespace labeler"}
{"severity":"INFO","message":"Reconcile succeeded","duration":"10.97ms"}

Configuration

Labeler Spec

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| customLabels | map[string]string | Yes | Key-value pairs of labels to apply to Deployments |

Example:

spec:
  customLabels:
    environment: "production"
    team: "devops"
    cost-center: "engineering"
    app-version: "2.0"

Controller Configuration

The controller supports Knative's standard configuration via ConfigMaps:

Logging Configuration:

kubectl apply -f config/config-logging.yaml -n labeler

Adjust log levels (debug, info, warn, error) in config/config-logging.yaml.

How It Works

Reconciliation Triggers

The controller reconciles (applies labels) when:

  1. Labeler CR is created - Initial label application
  2. Labeler CR is updated - Labels are re-applied with new values
  3. Labeler CR is deleted - (Future: cleanup logic)
  4. Controller restarts - Resyncs all existing CRs
  5. Periodic resync - Every 10 hours (default)

Label Merging

The controller merges labels rather than replacing them:

  • Existing labels on Deployments are preserved
  • Only specified labels in the Labeler CR are added/updated
  • No labels are removed

Example:

# Deployment has: {"app": "nginx", "version": "1.0"}
# Labeler adds: {"team": "devops", "env": "prod"}
# Result: {"app": "nginx", "version": "1.0", "team": "devops", "env": "prod"}

Development

Prerequisites

  • Go 1.25+
  • Docker or compatible container runtime
  • Access to a Kubernetes cluster (kind, minikube, etc.)

Project Structure

.
├── cmd/
│   └── labeler/
│       ├── main.go          # Entry point
│       ├── controller.go    # Controller setup
│       └── reconciler.go    # Reconciliation logic
├── pkg/
│   ├── apis/
│   │   └── clusterops/
│   │       └── v1alpha1/
│   │           ├── doc.go        # Package documentation
│   │           ├── types.go      # API types (Labeler, LabelerSpec)
│   │           ├── register.go   # Scheme registration
│   │           └── zz_generated.deepcopy.go  # Auto-generated
│   └── client/               # Auto-generated clientsets, listers, informers
├── config/
│   ├── crd/                  # CRD definitions
│   ├── 100-serviceaccount.yaml
│   ├── 200-role.yaml
│   ├── 201-rolebinding.yaml
│   ├── controller.yaml       # Controller Deployment
│   ├── config-logging.yaml
│   └── cr.yaml               # Example Custom Resource
├── hack/
│   ├── update-codegen.sh     # Code generation script
│   └── tools.go              # Tool dependencies
├── vendor/                   # Vendored dependencies
├── go.mod
└── README.md

Building

# Build locally
go build -o bin/labeler ./cmd/labeler

# Build and push container image with ko
ko publish github.com/ab-ghosh/knative-otel-integrator/cmd/labeler

Code Generation

After modifying API types in pkg/apis/clusterops/v1alpha1/types.go, regenerate code:

# Regenerate deepcopy, clientset, listers, informers, and injection code
./hack/update-codegen.sh

# Regenerate CRDs
GOFLAGS=-mod=mod controller-gen crd paths=./pkg/apis/... output:crd:artifacts:config=config/crd

Local Development

  1. Create a local cluster:

    kind create cluster
  2. Install CRD and RBAC:

    kubectl apply -f config/crd/
    kubectl create namespace labeler
    kubectl apply -f config/100-serviceaccount.yaml -n labeler
    kubectl apply -f config/200-role.yaml
    kubectl apply -f config/201-rolebinding.yaml
  3. Deploy controller:

    ko apply -f config/controller.yaml -n labeler
  4. Test with example CR:

    kubectl apply -f config/cr.yaml -n labeler
  5. Watch logs:

    kubectl logs -n labeler -l app=label-controller -f

Running Tests

# Run unit tests
go test ./...

# Run with coverage
go test -cover ./...

Troubleshooting

Installation Issues

Prometheus Operator not found:

# Verify Prometheus Operator is installed
kubectl get crd servicemonitors.monitoring.coreos.com

# If missing, install using Step 1 above

ServiceMonitor not discovered by Prometheus:

Check if Prometheus is using a label selector:

kubectl get prometheus -n monitoring -o jsonpath='{.items[0].spec.serviceMonitorSelector}'

If it returns {"matchLabels":{"release":"kube-prometheus-stack"}}, ensure your ServiceMonitor has the correct label (already included in config/servicemonitor.yaml).


Controller not reconciling

Check if controller is running:

kubectl get pods -n labeler -l app=label-controller

View controller logs:

kubectl logs -n labeler -l app=label-controller --tail=50

Labels not applied

Verify Labeler CR exists:

kubectl get labeler -n labeler
kubectl describe labeler example-labeler -n labeler

Check RBAC permissions:

kubectl auth can-i list deployments --as=system:serviceaccount:labeler:clusterops -n labeler
kubectl auth can-i patch deployments --as=system:serviceaccount:labeler:clusterops -n labeler

Trigger manual reconciliation:

kubectl annotate labeler example-labeler reconcile=trigger -n labeler --overwrite

Controller pod fails to start

Check service account exists:

kubectl get sa clusterops -n labeler

View pod events:

kubectl describe pod -n labeler -l app=label-controller

Prometheus Metrics

The controller exposes Prometheus metrics on port 9090 for production monitoring.

How to View Available Metrics

To see all metrics that your controller is currently exposing:

# Port-forward to the metrics service
kubectl port-forward -n labeler svc/label-controller-metrics 9090:9090

# In another terminal, view all metrics
curl http://localhost:9090/metrics

# Filter for specific metric types
curl http://localhost:9090/metrics | grep kn_workqueue
curl http://localhost:9090/metrics | grep kn_k8s_client
curl http://localhost:9090/metrics | grep go_

Direct pod access:

# Get pod name
POD_NAME=$(kubectl get pod -n labeler -l app=label-controller -o jsonpath='{.items[0].metadata.name}')

# Access metrics directly from pod
kubectl port-forward -n labeler $POD_NAME 9090:9090
curl http://localhost:9090/metrics

Example output:

# HELP kn_workqueue_depth Current depth of workqueue
# TYPE kn_workqueue_depth gauge
kn_workqueue_depth{name="main.Reconciler"} 0

# HELP kn_workqueue_adds_total Total number of adds handled by workqueue
# TYPE kn_workqueue_adds_total counter
kn_workqueue_adds_total{name="main.Reconciler"} 15

Available Metrics

Workqueue Metrics

| Metric | Type | Description |
|--------|------|-------------|
| kn_workqueue_depth | Gauge | Current queue depth |
| kn_workqueue_adds_total | Counter | Total items added |
| kn_workqueue_queue_duration_seconds | Histogram | Time waiting in queue |
| kn_workqueue_process_duration_seconds | Histogram | Processing time |
| kn_workqueue_unfinished_work_seconds | Gauge | Unfinished work duration |
| kn_workqueue_longest_running_processor_seconds | Gauge | Longest running item |
| kn_workqueue_retries_total | Counter | Retry count |

Kubernetes Client Metrics

| Metric | Type | Description |
|--------|------|-------------|
| kn_k8s_client_http_request_duration_seconds | Histogram | K8s API request latency |
| kn_k8s_client_http_response_status_code_total | Counter | K8s API request count by status |

Go Runtime Metrics

| Metric | Description |
|--------|-------------|
| go_memory_used_bytes | Memory used |
| go_goroutine_count | Goroutine count |
| go_memory_allocated_bytes | Heap allocations |

...and more standard Go runtime metrics.

Query Examples

Access Prometheus UI:

kubectl port-forward -n monitoring svc/prometheus-operated 9090:9090
open http://localhost:9090

Current queue depth:

kn_workqueue_depth{name="main.Reconciler"}

Items processed per second (5min average):

rate(kn_workqueue_adds_total{name="main.Reconciler"}[5m])

95th percentile processing time:

histogram_quantile(0.95, 
  rate(kn_workqueue_process_duration_seconds_bucket{name="main.Reconciler"}[5m])
)

Memory usage (MB):

go_memory_used_bytes / 1024 / 1024

Active goroutines:

go_goroutine_count

K8s API requests by method:

sum by (http_request_method) (
  rate(kn_k8s_client_http_response_status_code_total[5m])
)

Grafana Dashboard (Optional)

If you installed kube-prometheus-stack, Grafana is available:

# Port-forward to Grafana
kubectl port-forward -n monitoring svc/kube-prometheus-stack-grafana 3000:80

# Open Grafana (default credentials: admin/prom-operator)
open http://localhost:3000

Add Dashboard Panels:

  1. Queue Depth (Graph):

    kn_workqueue_depth{name="main.Reconciler"}
    
  2. Processing Rate (Graph):

    rate(kn_workqueue_adds_total{name="main.Reconciler"}[5m])
    
  3. Processing Duration p95 (Graph):

    histogram_quantile(0.95, 
      rate(kn_workqueue_process_duration_seconds_bucket{name="main.Reconciler"}[5m])
    )
    
  4. Memory Usage (Graph):

    go_memory_used_bytes / 1024 / 1024
    

Alerting Rules (Optional)

Create alerts for production monitoring:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: labeler-alerts
  namespace: labeler
  labels:
    release: kube-prometheus-stack
spec:
  groups:
  - name: labeler
    interval: 30s
    rules:
    - alert: HighQueueDepth
      expr: kn_workqueue_depth{name="main.Reconciler"} > 100
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "High work queue depth"
        description: "Queue depth is {{ $value }} items"
    
    - alert: HighProcessingLatency
      expr: |
        histogram_quantile(0.95, 
          rate(kn_workqueue_process_duration_seconds_bucket[5m])
        ) > 1
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: "High processing latency"
        description: "95th percentile is {{ $value }}s"

Apply:

kubectl apply -f alerts.yaml

Troubleshooting Metrics

Metrics endpoint not accessible:

# Test metrics endpoint directly
kubectl port-forward -n labeler svc/label-controller-metrics 9090:9090
curl http://localhost:9090/metrics

Target shows DOWN in Prometheus:

# Check Service exists
kubectl get svc label-controller-metrics -n labeler

# Check pod is running
kubectl get pods -n labeler -l app=label-controller

# Check ServiceMonitor matches Service labels
kubectl get svc label-controller-metrics -n labeler -o yaml | grep -A5 labels
kubectl get servicemonitor labeler-controller -n labeler -o yaml | grep -A5 selector

No metrics appearing:

# Verify config-observability is correct
kubectl get cm config-observability -n labeler -o yaml

# Should have:
#   metrics-protocol: prometheus
#   metrics-endpoint: ":9090"

# Check controller logs
kubectl logs -n labeler -l app=label-controller | grep -i observability

# Restart controller if needed
kubectl rollout restart deployment/label-controller -n labeler
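For reference, a config-observability ConfigMap matching the keys checked above might look like the following (a sketch based on the expected keys listed in the comments; adjust metadata to your deployment):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-observability
  namespace: labeler
data:
  metrics-protocol: prometheus
  metrics-endpoint: ":9090"
```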

Custom Metrics

The controller includes a custom metric that tracks CR reconciliations. You can add your own custom metrics following the same pattern.

Available Custom Metrics

| Metric | Type | Description |
|--------|------|-------------|
| labeler_cr_reconcile_count_total | Counter | Total number of Labeler CR reconciliations (create/update) |

How Custom Metrics Flow Through the System

This diagram shows the complete journey of a custom metric from creation to Prometheus:

┌──────────────────────────────────────────────────────────────────┐
│ 1. APPLICATION STARTUP (main.go)                                 │
│    sharedmain.Main("custom-labeler", NewController)              │
│                             ↓                                    │
│ 2. SHAREDMAIN SETUP (vendor/knative.dev/pkg/...)                 │
│    Line 286: SetupObservabilityOrDie()                           │
│         ├─ Line 393: Gets config-observability ConfigMap         │
│         ├─ Line 402: Creates MeterProvider                       │
│         ├─ Line 412: otel.SetMeterProvider(meterProvider) ← GLOBAL
│         └─ Starts Prometheus exporter on :9090                   │
│                             ↓                                    │
│ 3. YOUR CONTROLLER INIT (controller.go)                          │
│    Line 26: meter = otel.Meter("labeler-controller")             │
│         └─ Gets the GLOBAL MeterProvider set above               │
│    Line 27-30: Creates counter metric                            │
│                             ↓                                    │
│ 4. RECONCILIATION (reconciler.go)                                │
│    Line 36: crReconcileCounter.Add(ctx, 1)                       │
│         └─ Increments counter with attributes                    │
│                             ↓                                    │
│ 5. OPENTELEMETRY SDK (automatic)                                 │
│    - Collects all metric values                                  │
│    - Applies naming conventions                                  │
│    - Adds otel_scope_* attributes                                │
│                             ↓                                    │
│ 6. PROMETHEUS EXPORTER (automatic)                               │
│    - Converts OTel metrics to Prometheus format                  │
│    - Adds "_total" suffix to counters                            │
│    - Serves on http://localhost:9090/metrics                     │
│                             ↓                                    │
│ 7. YOUR CURL COMMAND                                             │
│    curl http://localhost:9090/metrics                            │
│         └─ Returns: labeler_cr_reconcile_count_total{...} 1      │
└──────────────────────────────────────────────────────────────────┘

Key Points:

  • ✅ No manual setup needed - Knative's sharedmain handles all OpenTelemetry configuration
  • ✅ Global MeterProvider - Set once at startup, used by all metrics
  • ✅ Automatic export - Prometheus exporter runs automatically on port 9090
  • ✅ ConfigMap driven - All configuration comes from the config-observability ConfigMap

Testing Custom Metrics

Step 1: Deploy the Controller with Prometheus

./install.sh --with-prometheus

This installs:

  • The Labeler controller with metrics enabled
  • Prometheus Operator (kube-prometheus-stack)
  • ServiceMonitor for automatic metrics discovery

Step 2: Verify Metrics Endpoint

Port-forward to the controller's metrics service:

kubectl port-forward -n labeler svc/label-controller-metrics 9090:9090

Step 3: Check Custom Metric

In another terminal, query the custom metric:

curl -s http://localhost:9090/metrics | grep -A 2 "labeler_cr_reconcile_count"

Expected output:

# HELP labeler_cr_reconcile_count_total Total number of Labeler CR reconciliations (create/update)
# TYPE labeler_cr_reconcile_count_total counter
labeler_cr_reconcile_count_total{name="example-labeler",namespace="labeler",otel_scope_name="labeler-controller"} 1

Step 4: Trigger Metric Updates

Trigger reconciliations to see the counter increment:

# Update the CR to trigger reconciliation
kubectl annotate labeler example-labeler test-trigger=run-$(date +%s) -n labeler --overwrite

# Check the metric again - count should increase
curl -s http://localhost:9090/metrics | grep "labeler_cr_reconcile_count_total"

Step 5: View in Prometheus UI

Port-forward to Prometheus (note: different port to avoid conflict):

kubectl port-forward -n monitoring svc/prometheus-operated 9091:9090

Open Prometheus UI:

open http://localhost:9091

Run queries in the Prometheus UI:

Basic query:

labeler_cr_reconcile_count_total

Query by namespace:

labeler_cr_reconcile_count_total{namespace="labeler"}

Rate of reconciliations (per second over 5 minutes):

rate(labeler_cr_reconcile_count_total[5m])

Total reconciliations across all CRs:

sum(labeler_cr_reconcile_count_total)

Adding Your Own Custom Metrics

To add custom metrics to your controller:

1. Import OpenTelemetry packages (controller.go):

import (
    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/metric"
)

2. Create the metric (controller.go):

// Get the global meter
meter := otel.Meter("labeler-controller")

// Create a counter
myCounter, err := meter.Int64Counter(
    "my.custom.metric",
    metric.WithDescription("Description of what this metric tracks"),
    metric.WithUnit("{units}"),
)
if err != nil {
    logger.Warnw("Failed to create custom metric", "error", err)
}

// Add to reconciler struct
reconciler := &Reconciler{
    // ... other fields
    myCounter: myCounter,
}

3. Record metric values (reconciler.go):

// reconciler.go additionally imports:
//   "go.opentelemetry.io/otel/attribute"
//   "go.opentelemetry.io/otel/metric"

func (r *Reconciler) ReconcileKind(ctx context.Context, labeler *v1alpha1.Labeler) reconciler.Event {
    // Record the metric
    if r.myCounter != nil {
        r.myCounter.Add(ctx, 1,
            metric.WithAttributes(
                attribute.String("key", "value"),
            ),
        )
    }

    // ... rest of reconciliation logic
}

4. Deploy and verify:

ko apply -Rf config/ -- -n labeler
kubectl port-forward -n labeler svc/label-controller-metrics 9090:9090
curl http://localhost:9090/metrics | grep my_custom_metric

Metric Types

OpenTelemetry supports different metric types:

| Type | Use Case | Example |
|------|----------|---------|
| Int64Counter | Monotonically increasing values | Request count, errors |
| Float64Counter | Monotonically increasing decimal values | Total bytes processed |
| Int64UpDownCounter | Values that go up and down (gauges) | Active connections, queue size |
| Float64UpDownCounter | Gauge with decimal values | Temperature, load average |
| Int64Histogram | Distribution of values | Request latency, payload size |
| Float64Histogram | Distribution of decimal values | Response time in seconds |

Example: Tracking Deployment Count

Add a metric to track how many deployments are labeled:

// In controller.go - create the metric
// (store it on the reconciler struct, e.g. as r.deploymentsLabeled, as in the pattern above)
deploymentsLabeled, _ := meter.Int64Counter(
    "labeler.deployments.labeled",
    metric.WithDescription("Total number of deployments labeled"),
    metric.WithUnit("{deployments}"),
)

// In reconciler.go - record the metric
labeledCount := 0
for _, deployment := range deployments {
    _, err := r.kubeclient.AppsV1().Deployments(deployment.Namespace).Patch(...)
    if err == nil {
        labeledCount++
    }
}

r.deploymentsLabeled.Add(ctx, int64(labeledCount),
    metric.WithAttributes(
        attribute.String("namespace", labeler.Namespace),
    ),
)

Examples

Example 1: Environment Labels

apiVersion: clusterops.io/v1alpha1
kind: Labeler
metadata:
  name: env-labeler
  namespace: production
spec:
  customLabels:
    environment: "production"
    tier: "frontend"
    region: "us-west-2"

Example 2: Team Ownership

apiVersion: clusterops.io/v1alpha1
kind: Labeler
metadata:
  name: team-labeler
  namespace: platform-team
spec:
  customLabels:
    team: "platform"
    owner: "john.doe@company.com"
    cost-center: "engineering-123"

Example 3: Compliance Labels

apiVersion: clusterops.io/v1alpha1
kind: Labeler
metadata:
  name: compliance-labeler
  namespace: secure-apps
spec:
  customLabels:
    compliance: "pci-dss"
    data-classification: "confidential"
    backup-required: "true"

Comparison with Other Solutions

vs Manual kubectl label

  • ✅ Automated - no manual intervention needed
  • ✅ Declarative - specify desired state
  • ✅ Namespace-wide - applies to all deployments
  • ✅ Self-healing - reapplies on drift

vs Admission Webhooks

  • ✅ Post-creation modification supported
  • ✅ Doesn't require webhook infrastructure
  • ✅ Can update existing resources
  • ❌ Not preventive (a webhook would reject at creation)

vs Kyverno/OPA

  • ✅ Simpler - focused use case
  • ✅ Lighter weight
  • ❌ Less flexible - only label management

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Submit a pull request

License

Licensed under the Apache License, Version 2.0. See LICENSE file for details.

Credits

Built with Knative's controller framework (knative.dev/pkg), OpenTelemetry metrics, and ko.

Contact

For questions or issues, please open a GitHub issue.
