
Conversation

@ghost commented Oct 28, 2025

This PR contains the following updates:

| Package | Update | Change |
| --- | --- | --- |
| ghcr.io/prometheus-community/charts/kube-prometheus-stack (source) | major | `77.14.0` -> `79.2.1` |

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


- [ ] If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

@ghost (Author) commented Oct 28, 2025

--- k8s/apps/observability/kube-prometheus-stack/app Kustomization: observability/kube-prometheus-stack OCIRepository: observability/kube-prometheus-stack

+++ k8s/apps/observability/kube-prometheus-stack/app Kustomization: observability/kube-prometheus-stack OCIRepository: observability/kube-prometheus-stack

@@ -11,9 +11,9 @@

 spec:
   interval: 5m
   layerSelector:
     mediaType: application/vnd.cncf.helm.chart.content.v1.tar+gzip
     operation: copy
   ref:
-    tag: 77.14.0
+    tag: 79.2.1
   url: oci://ghcr.io/prometheus-community/charts/kube-prometheus-stack
 

@ghost (Author) commented Oct 28, 2025

--- HelmRelease: observability/kube-prometheus-stack ClusterRole: observability/kube-prometheus-stack-operator

+++ HelmRelease: observability/kube-prometheus-stack ClusterRole: observability/kube-prometheus-stack-operator

@@ -27,16 +27,19 @@

   - prometheusagents/finalizers
   - prometheusagents/status
   - thanosrulers
   - thanosrulers/finalizers
   - thanosrulers/status
   - scrapeconfigs
+  - scrapeconfigs/status
   - servicemonitors
   - servicemonitors/status
   - podmonitors
+  - podmonitors/status
   - probes
+  - probes/status
   - prometheusrules
   verbs:
   - '*'
 - apiGroups:
   - apps
   resources:
@@ -82,12 +85,13 @@

   verbs:
   - get
   - list
   - watch
 - apiGroups:
   - ''
+  - events.k8s.io
   resources:
   - events
   verbs:
   - patch
   - create
 - apiGroups:
@@ -101,7 +105,18 @@

 - apiGroups:
   - storage.k8s.io
   resources:
   - storageclasses
   verbs:
   - get
+- apiGroups:
+  - discovery.k8s.io
+  resources:
+  - endpointslices
+  verbs:
+  - get
+  - create
+  - list
+  - watch
+  - update
+  - delete
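
The added `scrapeconfigs/status`, `podmonitors/status`, and `probes/status` rules, together with the new `discovery.k8s.io` endpointslices and `events.k8s.io` permissions, suggest the operator (bumped to v0.86.1 further down) now writes status for these CRDs and manages EndpointSlices directly. Below is a minimal, hypothetical ScrapeConfig of the kind the expanded rules apply to; the name, target, and labels are placeholders, not values from this cluster.

```yaml
# Hypothetical example, not part of this PR: a minimal ScrapeConfig whose
# status subresource the operator can now update under the expanded RBAC.
apiVersion: monitoring.coreos.com/v1alpha1
kind: ScrapeConfig
metadata:
  name: example-static-targets         # placeholder name
  namespace: observability
spec:
  staticConfigs:
    - targets:
        - example-host.internal:9100   # placeholder target
      labels:
        env: lab                       # placeholder label
```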
 
--- HelmRelease: observability/kube-prometheus-stack DaemonSet: observability/node-exporter

+++ HelmRelease: observability/kube-prometheus-stack DaemonSet: observability/node-exporter

@@ -40,13 +40,13 @@

         runAsGroup: 65534
         runAsNonRoot: true
         runAsUser: 65534
       serviceAccountName: node-exporter
       containers:
       - name: node-exporter
-        image: quay.io/prometheus/node-exporter:v1.9.1
+        image: quay.io/prometheus/node-exporter:v1.10.2
         imagePullPolicy: IfNotPresent
         args:
         - --path.procfs=/host/proc
         - --path.sysfs=/host/sys
         - --path.rootfs=/host/root
         - --path.udev.data=/host/root/run/udev/data
--- HelmRelease: observability/kube-prometheus-stack Deployment: observability/kube-prometheus-stack-operator

+++ HelmRelease: observability/kube-prometheus-stack Deployment: observability/kube-prometheus-stack-operator

@@ -31,25 +31,25 @@

         app: kube-prometheus-stack-operator
         app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
         app.kubernetes.io/component: prometheus-operator
     spec:
       containers:
       - name: kube-prometheus-stack
-        image: quay.io/prometheus-operator/prometheus-operator:v0.85.0
+        image: quay.io/prometheus-operator/prometheus-operator:v0.86.1
         imagePullPolicy: IfNotPresent
         args:
         - --kubelet-service=kube-system/kube-prometheus-stack-kubelet
         - --kubelet-endpoints=true
         - --kubelet-endpointslice=false
         - --localhost=127.0.0.1
-        - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.85.0
+        - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.86.1
         - --config-reloader-cpu-request=0
         - --config-reloader-cpu-limit=0
         - --config-reloader-memory-request=0
         - --config-reloader-memory-limit=0
-        - --thanos-default-base-image=quay.io/thanos/thanos:v0.39.2
+        - --thanos-default-base-image=quay.io/thanos/thanos:v0.40.1
         - --secret-field-selector=type!=kubernetes.io/dockercfg,type!=kubernetes.io/service-account-token,type!=helm.sh/release.v1
         - --web.enable-tls=true
         - --web.cert-file=/cert/cert
         - --web.key-file=/cert/key
         - --web.listen-address=:10250
         - --web.tls-min-version=VersionTLS13
--- HelmRelease: observability/kube-prometheus-stack Alertmanager: observability/kube-prometheus-stack

+++ HelmRelease: observability/kube-prometheus-stack Alertmanager: observability/kube-prometheus-stack

@@ -9,15 +9,15 @@

     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/part-of: kube-prometheus-stack
     release: kube-prometheus-stack
     heritage: Helm
 spec:
-  image: quay.io/prometheus/alertmanager:v0.28.1
+  image: quay.io/prometheus/alertmanager:v0.29.0
   imagePullPolicy: IfNotPresent
-  version: v0.28.1
+  version: v0.29.0
   replicas: 1
   listenLocal: false
   serviceAccountName: kube-prometheus-stack-alertmanager
   automountServiceAccountToken: true
   externalUrl: https://alertmanager.sbbh.cloud
   paused: false
--- HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-alertmanager.rules

+++ HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-alertmanager.rules

@@ -21,13 +21,13 @@

           $labels.pod}}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedreload
         summary: Reloading an Alertmanager configuration has failed.
       expr: |-
         # Without max_over_time, failed scrapes could create false negatives, see
         # https://www.robustperception.io/alerting-on-gauges-in-prometheus-2-0 for details.
-        max_over_time(alertmanager_config_last_reload_successful{job="kube-prometheus-stack-alertmanager",namespace="observability"}[5m]) == 0
+        max_over_time(alertmanager_config_last_reload_successful{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[5m]) == 0
       for: 10m
       labels:
         severity: critical
     - alert: AlertmanagerMembersInconsistent
       annotations:
         description: Alertmanager {{ $labels.namespace }}/{{ $labels.pod}} has only
@@ -35,30 +35,30 @@

         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagermembersinconsistent
         summary: A member of an Alertmanager cluster has not found all other cluster
           members.
       expr: |-
         # Without max_over_time, failed scrapes could create false negatives, see
         # https://www.robustperception.io/alerting-on-gauges-in-prometheus-2-0 for details.
-          max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",namespace="observability"}[5m])
+          max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[5m])
         < on (namespace,service,cluster) group_left
-          count by (namespace,service,cluster) (max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",namespace="observability"}[5m]))
+          count by (namespace,service,cluster) (max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[5m]))
       for: 15m
       labels:
         severity: critical
     - alert: AlertmanagerFailedToSendAlerts
       annotations:
         description: Alertmanager {{ $labels.namespace }}/{{ $labels.pod}} failed
           to send {{ $value | humanizePercentage }} of notifications to {{ $labels.integration
           }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedtosendalerts
         summary: An Alertmanager instance failed to send notifications.
       expr: |-
         (
-          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",namespace="observability"}[15m])
+          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[15m])
         /
-          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",namespace="observability"}[15m])
+          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[15m])
         )
         > 0.01
       for: 5m
       labels:
         severity: warning
     - alert: AlertmanagerClusterFailedToSendAlerts
@@ -68,15 +68,15 @@

           humanizePercentage }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterfailedtosendalerts
         summary: All Alertmanager instances in a cluster failed to send notifications
           to a critical integration.
       expr: |-
         min by (namespace,service, integration) (
-          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",namespace="observability", integration=~`.*`}[15m])
+          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability", integration=~`.*`}[15m])
         /
-          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",namespace="observability", integration=~`.*`}[15m])
+          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability", integration=~`.*`}[15m])
         )
         > 0.01
       for: 5m
       labels:
         severity: critical
     - alert: AlertmanagerClusterFailedToSendAlerts
@@ -86,15 +86,15 @@

           humanizePercentage }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterfailedtosendalerts
         summary: All Alertmanager instances in a cluster failed to send notifications
           to a non-critical integration.
       expr: |-
         min by (namespace,service, integration) (
-          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",namespace="observability", integration!~`.*`}[15m])
+          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability", integration!~`.*`}[15m])
         /
-          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",namespace="observability", integration!~`.*`}[15m])
+          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability", integration!~`.*`}[15m])
         )
         > 0.01
       for: 5m
       labels:
         severity: warning
     - alert: AlertmanagerConfigInconsistent
@@ -102,13 +102,13 @@

         description: Alertmanager instances within the {{$labels.job}} cluster have
           different configurations.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerconfiginconsistent
         summary: Alertmanager instances within the same cluster have different configurations.
       expr: |-
         count by (namespace,service,cluster) (
-          count_values by (namespace,service,cluster) ("config_hash", alertmanager_config_hash{job="kube-prometheus-stack-alertmanager",namespace="observability"})
+          count_values by (namespace,service,cluster) ("config_hash", alertmanager_config_hash{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"})
         )
         != 1
       for: 20m
       labels:
         severity: critical
     - alert: AlertmanagerClusterDown
@@ -119,17 +119,17 @@

         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterdown
         summary: Half or more of the Alertmanager instances within the same cluster
           are down.
       expr: |-
         (
           count by (namespace,service,cluster) (
-            avg_over_time(up{job="kube-prometheus-stack-alertmanager",namespace="observability"}[5m]) < 0.5
+            avg_over_time(up{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[5m]) < 0.5
           )
         /
           count by (namespace,service,cluster) (
-            up{job="kube-prometheus-stack-alertmanager",namespace="observability"}
+            up{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}
           )
         )
         >= 0.5
       for: 5m
       labels:
         severity: critical
@@ -141,17 +141,17 @@

         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclustercrashlooping
         summary: Half or more of the Alertmanager instances within the same cluster
           are crashlooping.
       expr: |-
         (
           count by (namespace,service,cluster) (
-            changes(process_start_time_seconds{job="kube-prometheus-stack-alertmanager",namespace="observability"}[10m]) > 4
+            changes(process_start_time_seconds{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[10m]) > 4
           )
         /
           count by (namespace,service,cluster) (
-            up{job="kube-prometheus-stack-alertmanager",namespace="observability"}
+            up{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}
           )
         )
         >= 0.5
       for: 5m
       labels:
         severity: critical
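
The only functional change across these alert rules is the added `container="alertmanager"` matcher, which scopes each query to series from the Alertmanager container itself (avoiding, e.g., sidecar series that share the same job). A custom rule that reuses the same pattern might look like the hypothetical sketch below; the alert name and threshold are placeholders, not part of this chart.

```yaml
# Hypothetical example, not from this chart: a custom PrometheusRule reusing the
# container-scoped selector pattern introduced by this upgrade.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: alertmanager-extra-rules          # placeholder name
  namespace: observability
spec:
  groups:
    - name: alertmanager.extra
      rules:
        - alert: AlertmanagerActiveSilences   # placeholder alert
          expr: |
            sum by (namespace, service) (
              alertmanager_silences{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability",state="active"}
            ) > 0
          for: 1h
          labels:
            severity: info
```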
--- HelmRelease: observability/kube-prometheus-stack ValidatingWebhookConfiguration: observability/kube-prometheus-stack-admission

+++ HelmRelease: observability/kube-prometheus-stack ValidatingWebhookConfiguration: observability/kube-prometheus-stack-admission

@@ -10,13 +10,13 @@

     app.kubernetes.io/part-of: kube-prometheus-stack
     release: kube-prometheus-stack
     heritage: Helm
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator-webhook
 webhooks:
-- name: prometheusrulemutate.monitoring.coreos.com
+- name: prometheusrulevalidate.monitoring.coreos.com
   failurePolicy: Ignore
   rules:
   - apiGroups:
     - monitoring.coreos.com
     apiVersions:
     - '*'
@@ -32,7 +32,29 @@

       path: /admission-prometheusrules/validate
   timeoutSeconds: 10
   admissionReviewVersions:
   - v1
   - v1beta1
   sideEffects: None
+- name: alertmanagerconfigsvalidate.monitoring.coreos.com
+  failurePolicy: Ignore
+  rules:
+  - apiGroups:
+    - monitoring.coreos.com
+    apiVersions:
+    - v1alpha1
+    resources:
+    - alertmanagerconfigs
+    operations:
+    - CREATE
+    - UPDATE
+  clientConfig:
+    service:
+      namespace: observability
+      name: kube-prometheus-stack-operator
+      path: /admission-alertmanagerconfigs/validate
+  timeoutSeconds: 10
+  admissionReviewVersions:
+  - v1
+  - v1beta1
+  sideEffects: None
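
Alongside the renamed PrometheusRule webhook, the upgraded chart registers a second validating webhook for AlertmanagerConfig objects, so invalid configs can be flagged at admission (advisory only here, since failurePolicy is Ignore). A minimal, hypothetical AlertmanagerConfig of the kind it checks on CREATE/UPDATE is sketched below; the receiver name and URL are placeholders.

```yaml
# Hypothetical example, not part of this PR: a minimal AlertmanagerConfig of the
# kind the new alertmanagerconfigsvalidate webhook admits or rejects.
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: example-routes               # placeholder name
  namespace: observability
spec:
  route:
    receiver: example-webhook        # must reference a receiver defined below
    groupBy: ["alertname"]
  receivers:
    - name: example-webhook
      webhookConfigs:
        - url: https://hooks.example.internal/alerts   # placeholder URL
```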
 
--- HelmRelease: observability/kube-prometheus-stack ConfigMap: observability/kube-prometheus-stack-crds-upgrade

+++ HelmRelease: observability/kube-prometheus-stack ConfigMap: observability/kube-prometheus-stack-crds-upgrade

@@ -15,8 +15,8 @@

     release: kube-prometheus-stack
     heritage: Helm
     app: crds-operator
     app.kubernetes.io/name: crds-prometheus-operator
     app.kubernetes.io/component: crds-upgrade
 binaryData:
[Diff truncated by flux-local]
--- HelmRelease: observability/kube-prometheus-stack Job: observability/kube-prometheus-stack-admission-create

+++ HelmRelease: observability/kube-prometheus-stack Job: observability/kube-prometheus-stack-admission-create

@@ -30,13 +30,13 @@

         heritage: Helm
         app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
         app.kubernetes.io/component: prometheus-operator-webhook
     spec:
       containers:
       - name: create
-        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
+        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
         imagePullPolicy: IfNotPresent
         args:
         - create
         - --host=kube-prometheus-stack-operator,kube-prometheus-stack-operator.observability.svc
         - --namespace=observability
         - --secret-name=kube-prometheus-stack-admission
--- HelmRelease: observability/kube-prometheus-stack Job: observability/kube-prometheus-stack-admission-patch

+++ HelmRelease: observability/kube-prometheus-stack Job: observability/kube-prometheus-stack-admission-patch

@@ -30,13 +30,13 @@

         heritage: Helm
         app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
         app.kubernetes.io/component: prometheus-operator-webhook
     spec:
       containers:
       - name: patch
-        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
+        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
         imagePullPolicy: IfNotPresent
         args:
         - patch
         - --webhook-name=kube-prometheus-stack-admission
         - --namespace=observability
         - --secret-name=kube-prometheus-stack-admission

@ghost force-pushed the renovate/ghcr.io-prometheus-community-charts-kube-prometheus-stack-79.x branch from d09cafb to 18201c1 on October 31, 2025 00:46
@ghost changed the title from "feat(container)!: Update image ghcr.io/prometheus-community/charts/kube-prometheus-stack ( 77.14.0 ➔ 79.0.0 )" to "feat(container)!: Update image ghcr.io/prometheus-community/charts/kube-prometheus-stack ( 77.14.0 ➔ 79.0.1 )" on Oct 31, 2025
@ghost force-pushed the renovate/ghcr.io-prometheus-community-charts-kube-prometheus-stack-79.x branch from 18201c1 to 6c8da86 on November 2, 2025 00:50
@ghost changed the title from "feat(container)!: Update image ghcr.io/prometheus-community/charts/kube-prometheus-stack ( 77.14.0 ➔ 79.0.1 )" to "feat(container)!: Update image ghcr.io/prometheus-community/charts/kube-prometheus-stack ( 77.14.0 ➔ 79.1.0 )" on Nov 2, 2025
@ghost force-pushed the renovate/ghcr.io-prometheus-community-charts-kube-prometheus-stack-79.x branch from 6c8da86 to 097b79a on November 3, 2025 00:50
@ghost changed the title from "feat(container)!: Update image ghcr.io/prometheus-community/charts/kube-prometheus-stack ( 77.14.0 ➔ 79.1.0 )" to "feat(container)!: Update image ghcr.io/prometheus-community/charts/kube-prometheus-stack ( 77.14.0 ➔ 79.1.1 )" on Nov 3, 2025
homelab-runner-sbbh[bot] added 4 commits November 4, 2025 00:48
….0 ➔ 6.46.0 ) (#110)

Co-authored-by: homelab-runner-sbbh[bot] <222798589+homelab-runner-sbbh[bot]@users.noreply.github.com>
…be-prometheus-stack ( 77.14.0 ➔ 79.2.1 )
@ghost force-pushed the renovate/ghcr.io-prometheus-community-charts-kube-prometheus-stack-79.x branch from 097b79a to 390cfe0 on November 7, 2025 00:48
@ghost changed the title from "feat(container)!: Update image ghcr.io/prometheus-community/charts/kube-prometheus-stack ( 77.14.0 ➔ 79.1.1 )" to "feat(container)!: Update image ghcr.io/prometheus-community/charts/kube-prometheus-stack ( 77.14.0 ➔ 79.2.1 )" on Nov 7, 2025