
Conversation

@prashanthjos
Contributor

@prashanthjos prashanthjos commented Dec 24, 2025

Description

This PR adds observability for the websocket connection between the activator and autoscaler components. When the autoscaler is not reachable, operators currently have no easy way to identify this issue, which can lead to autoscaling failures.

Changes

New Metric

  • Name: kn.activator.autoscaler.reachable
  • Type: Int64Gauge
  • Values: 1 (reachable), 0 (not reachable)
  • Description: Whether the autoscaler is reachable from the activator

New Logging

  • ERROR level log when autoscaler is not reachable:
    • "Autoscaler is not reachable from activator. Stats were not sent." (on send failure)
    • "Autoscaler is not reachable from activator." (on connection check failure)

How It Works

The metric is recorded in two scenarios (the send path is sketched after this list):

  1. Periodic check (every 5s):
    • Uses `conn.Status()` to check whether the connection is established
    • Catches: connection never established, DNS failures, autoscaler not running at startup
  2. On stat send:
    • Detects `SendRaw()` failures during actual stat message transmission
    • Catches: network timeouts, connection drops, autoscaler becoming unreachable
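For illustration only, here is a minimal sketch of the second path (recording around `SendRaw`); the `rawSender` interface, the helper name `reportStatSend`, and the gauge wiring are assumptions rather than the exact code in this PR:

```go
package activator

import (
	"context"

	"github.com/gorilla/websocket"
	"go.opentelemetry.io/otel/metric"
	"go.uber.org/zap"
)

// rawSender captures just the part of the stats connection used here; the
// SendRaw signature mirrors the websocket connection used by the activator.
type rawSender interface {
	SendRaw(messageType int, msg []byte) error
}

// reportStatSend is a hypothetical helper: it sends one stats payload and
// records the reachability gauge based on the outcome.
func reportStatSend(ctx context.Context, logger *zap.SugaredLogger, conn rawSender, reachable metric.Int64Gauge, msg []byte) {
	if err := conn.SendRaw(websocket.BinaryMessage, msg); err != nil {
		// Send failed: the autoscaler is not reachable right now.
		logger.Errorw("Autoscaler is not reachable from activator. Stats were not sent.", zap.Error(err))
		reachable.Record(ctx, 0)
		return
	}
	reachable.Record(ctx, 1)
}
```

The idea is that the existing stat-reporting path would call something like this each time it pushes stats over the websocket.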

Testing

  • Unit tests pass (go test ./pkg/activator/... -v)
  • New test TestAutoscalerConnectionStatusMonitor added

Release Note

Include two new activator metrics (`kn.activator.stats.conn.reachable`, `kn.activator.stats.conn.errors`) that reflect the stats reporter connection status

@knative-prow

knative-prow bot commented Dec 24, 2025

Welcome @prashanthjos! It looks like this is your first PR to knative/serving 🎉

@knative-prow knative-prow bot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Dec 24, 2025
@knative-prow

knative-prow bot commented Dec 24, 2025

Hi @prashanthjos. Thanks for your PR.

I'm waiting for a knative member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@knative-prow knative-prow bot requested review from dsimansk and skonto December 24, 2025 02:33
This change adds observability for the websocket connection between the
activator and autoscaler components:

- Add `activator_autoscaler_reachable` gauge metric (1=reachable, 0=not reachable)
- Log ERROR when autoscaler is not reachable during stat sending
- Add periodic connection status monitor (every 5s) to detect connection
  establishment failures
- Add unit tests for the new AutoscalerConnectionStatusMonitor function

The metric is recorded in two scenarios:
1. When SendRaw fails/succeeds during stat message sending
2. When the periodic status check detects connection not established

This helps operators identify connectivity issues between activator and
autoscaler that could impact autoscaling decisions.
@thiagomedina
Member

/ok-to-test

@knative-prow knative-prow bot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Dec 24, 2025
@codecov

codecov bot commented Dec 24, 2025

Codecov Report

❌ Patch coverage is 69.04762% with 13 lines in your changes missing coverage. Please review.
✅ Project coverage is 80.09%. Comparing base (a8803aa) to head (b8ef157).
⚠️ Report is 14 commits behind head on main.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| pkg/activator/stat_reporter.go | 50.00% | 8 Missing and 1 partial ⚠️ |
| pkg/activator/metrics.go | 83.33% | 2 Missing and 2 partials ⚠️ |
Additional details and impacted files
@@           Coverage Diff           @@
##             main   #16318   +/-   ##
=======================================
  Coverage   80.09%   80.09%           
=======================================
  Files         215      216    +1     
  Lines       13391    13429   +38     
=======================================
+ Hits        10725    10756   +31     
- Misses       2304     2311    +7     
  Partials      362      362           


@prashanthjos
Contributor Author

/retest

@prashanthjos
Contributor Author

Related docs PR:

knative/docs#6548

@prashanthjos
Contributor Author

@skonto @dsimansk can you please take a look at the PR when you get some time?

Contributor

@linkvt linkvt left a comment

Thanks for your PR! I understand the requirement, but I think we could simplify/change the current implementation a bit; see my comments.

I haven't worked that much with this code so far, so a more experienced maintainer might have more thoughts about the changes; I'm not sure whether they are still on their winter break, though.


// AutoscalerConnectionStatusMonitor periodically checks if the autoscaler is reachable
// and emits metrics and logs accordingly.
func AutoscalerConnectionStatusMonitor(ctx context.Context, logger *zap.SugaredLogger, conn StatusChecker, mp metric.MeterProvider) {
Contributor

I don't think we need this monitor as the stats are already reported every second, see

const reportInterval = time.Second

This means errors would be detected there already.

Contributor Author

Good point! You're right that errors are already detected every second via ReportStats when stats are being sent. However, the monitor handles one edge case: when there's no traffic. If no requests are coming in, ConcurrencyReporter sends nothing (len(msgs) == 0), so ReportStats never calls SendRaw(), and we wouldn't detect a broken connection.
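To make that edge case concrete, here is a rough sketch of such a monitor; it assumes the connection exposes a `Status() error` method (matching the `StatusChecker` parameter above) and that the gauge has already been created. The 5s interval and the names are illustrative:

```go
package activator

import (
	"context"
	"time"

	"go.opentelemetry.io/otel/metric"
	"go.uber.org/zap"
)

// StatusChecker is assumed to expose the "is the connection established?" check.
type StatusChecker interface {
	Status() error
}

// connectionStatusMonitor is a sketch of a periodic reachability probe that
// keeps the gauge up to date even when no stats are flowing.
func connectionStatusMonitor(ctx context.Context, logger *zap.SugaredLogger, conn StatusChecker, reachable metric.Int64Gauge) {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := conn.Status(); err != nil {
				logger.Errorw("Autoscaler is not reachable from activator.", zap.Error(err))
				reachable.Record(ctx, 0)
			} else {
				reachable.Record(ctx, 1)
			}
		}
	}
}
```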


meter := provider.Meter(scopeName)

m.autoscalerReachable, err = meter.Int64Gauge(
Contributor

I think we should use counters with labels here instead of a gauge. If the connection is flaky, we might end up always sampling the gauge at moments when it is 1 and never see the errors.
If we have a counter with result=success or result=error (sketched below) we would:

  • not miss any errors anymore
  • be able to create an alert based on the success rate, e.g. fire if the success rate over the last 5 minutes drops below 95%
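A minimal sketch of that alternative, assuming a single OTel `Int64Counter` with a `result` attribute (the names here are illustrative, not from the PR):

```go
package activator

import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// recordCheck is a sketch of the single-counter alternative: every check is
// counted once, tagged with a result of "success" or "error".
func recordCheck(ctx context.Context, checks metric.Int64Counter, checkErr error) {
	result := "success"
	if checkErr != nil {
		result = "error"
	}
	checks.Add(ctx, 1, metric.WithAttributes(attribute.String("result", result)))
}
```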

Member

This is a good question - @prashanthjos do you have an opinion here

I'm wondering if adding a second metric that tracks reconnects is sufficient to answer @linkvt's question

Contributor Author

Thanks for the feedback! I've implemented a hybrid approach:

  • Keep the gauge (kn.activator.autoscaler.reachable): provides instant visibility into the current connection state
  • Add a counter (kn.activator.autoscaler.connection_errors_total): monotonically increasing, tracks every error

This addresses @linkvt's concern:

  • The counter never misses errors (it accumulates, doesn't sample)
  • Enables rate-based alerting: rate(connection_errors_total[5m]) > threshold
  • The gauge still answers "is it reachable right now?" for real-time dashboards

The counter increments on every error event (both periodic health checks and send failures), so flaky connections will be fully captured.
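Condensed, the hybrid error path could look roughly like this (the wiring of both instruments is assumed):

```go
package activator

import (
	"context"

	"go.opentelemetry.io/otel/metric"
)

// recordUnreachable is a sketch of the hybrid error path: the gauge reflects
// the current state, while the counter accumulates every error.
func recordUnreachable(ctx context.Context, reachable metric.Int64Gauge, connErrors metric.Int64Counter) {
	reachable.Record(ctx, 0) // answers "is it reachable right now?"
	connErrors.Add(ctx, 1)   // never lost, even between gauge scrapes
}
```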

Contributor

I understand the argument for keeping the gauge; sounds good to me 👍

With regard to the error counter, I did more research and found the following evaluation, which gives a recommendation and also advises against the approach I proposed: https://promlabs.com/blog/2023/09/19/errors-successes-totals-which-metrics-should-i-expose-to-prometheus/#recommended-for-binary-outcomes-exposing-errors-and-totals

In summary:

  • Add a counter for total connection checks
  • Keep the counter for connection checks with errors (exists right now)
  • This allows you to track not only the absolute error rate but also the relative error ratio, which in my experience is more important than an absolute number (see the sketch after this list)
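Sketched under the same assumptions as above, the errors-plus-totals variant would increment a total on every check and an error counter only on failure (the names are illustrative):

```go
package activator

import (
	"context"

	"go.opentelemetry.io/otel/metric"
)

// recordCheckOutcome is a sketch of the errors + totals pattern: the total is
// incremented on every check, the error counter only on failure, so both the
// absolute error rate and the relative error ratio can be derived.
func recordCheckOutcome(ctx context.Context, checksTotal, checkErrors metric.Int64Counter, checkErr error) {
	checksTotal.Add(ctx, 1)
	if checkErr != nil {
		checkErrors.Add(ctx, 1)
	}
}
```

An alert can then be written on the ratio of errors to totals over a window, instead of on an absolute error count.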

Contributor Author

The "errors + totals" pattern makes sense for variable-rate operations like HTTP requests.

However, for this connection health check, I'm not sure the totals counter adds value because:

  • Check frequency is fixed (every 5s), so the rate is predictable/constant
  • Any error is actionable; we don't need a ratio to decide whether the connection is healthy
  • The error counter alone lets us alert on rate(connection_errors_total[5m]) > 0, which captures "autoscaler became unreachable".

Contributor

Partially agree and also thought about that but:

  • there can be multiple activator pods, so there is a difference between seeing 1 error with 1 activator and with 10 activators
  • if your environment is sufficiently large, there are outages/hiccups at every level; it's a statistical certainty that network partitions, disk errors, memory errors, etc. happen at some point in time. Not every error is actionable, since errors are expected in large environments; this is also why error budgets are common.
  • following common patterns (error + total metric) makes it easier for infra teams to write their alerts based on error budget burn rate.

This is at least my experience from working in some large environments handling PBs of data and on the German railways' sales platform, to give my statements some credibility 🙂.

Contributor Author

@prashanthjos prashanthjos Jan 15, 2026

@linkvt thank you for the detailed feedback and context, this is really helpful! Your points about multiple activator pods and error budgets in large environments make a lot of sense. Following the error + total pattern for burn rate alerting is definitely the right approach for production observability.

Could I accommodate this in a follow-up PR? I'd like to get the current changes merged first and then add the connection_checks_total counter alongside the existing connection_errors_total in a subsequent PR.

@prashanthjos
Contributor Author

Thank you @linkvt for the comments. I will wait for @dsimansk's and @skonto's comments as well before I start addressing them all together.



@prashanthjos
Contributor Author

/retest

@prashanthjos
Contributor Author

@dprotaso @linkvt I addressed the comments; can you please take a look?

@dprotaso
Member

Chatted on Slack: I think we should do the pkg change; it'll make the diff here simpler. I explained how our automation works, so once the pkg change is merged, our automation will update the deps here.

Member

@dprotaso dprotaso left a comment

Minor thing on my end (re: total in metric name).

}

m.autoscalerConnectionErrors, err = meter.Int64Counter(
"kn.activator.autoscaler.connection_errors_total",
Member

According to the OTel documentation, `total` shouldn't be in the name:

https://opentelemetry.io/docs/specs/semconv/general/naming/#do-not-use-total

meter := provider.Meter(scopeName)

m.autoscalerReachable, err = meter.Int64Gauge(
"kn.activator.autoscaler.reachable",
Member

Not a huge deal in my mind, but I'm curious whether we should make this generic.

For example, kn.activator.open_connections with a peer attribute of autoscaler.

Then, if the activator were ever to connect to anything else, we would just add a new peer attribute value.

Member

And likewise tweak the metric name below
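For illustration, the generic variant suggested here might look roughly like this; the metric name kn.activator.open_connections comes from the comment above, the counter drops the `total` suffix per the linked OTel naming guidance, and everything else (helper names, descriptions) is an assumption rather than the code in this PR:

```go
package activator

import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// newConnectionMetrics is a sketch of the generic instruments with a peer
// attribute, so a new peer only needs a new attribute value, not a new metric.
func newConnectionMetrics(meter metric.Meter) (metric.Int64Gauge, metric.Int64Counter, error) {
	open, err := meter.Int64Gauge(
		"kn.activator.open_connections",
		metric.WithDescription("Whether the activator's connection to a peer is established"),
	)
	if err != nil {
		return nil, nil, err
	}
	// No "_total" suffix, per the OTel naming guidance linked above.
	connErrors, err := meter.Int64Counter(
		"kn.activator.connection_errors",
		metric.WithDescription("Connection errors from the activator to a peer"),
	)
	if err != nil {
		return nil, nil, err
	}
	return open, connErrors, nil
}

// recordPeerState tags each measurement with the peer it refers to.
func recordPeerState(ctx context.Context, open metric.Int64Gauge, connErrors metric.Int64Counter, peer string, connected bool) {
	attrs := metric.WithAttributes(attribute.String("peer", peer))
	if connected {
		open.Record(ctx, 1, attrs)
		return
	}
	open.Record(ctx, 0, attrs)
	connErrors.Add(ctx, 1, attrs)
}
```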

@prashanthjos
Contributor Author

@dprotaso done, I addressed your comments!

@dprotaso
Member

thanks for the change @prashanthjos

/lgtm
/approve

@knative-prow knative-prow bot added the lgtm Indicates that a PR is ready to be merged. label Jan 15, 2026
@knative-prow

knative-prow bot commented Jan 15, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dprotaso, prashanthjos

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@knative-prow knative-prow bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 15, 2026
@knative-prow knative-prow bot merged commit ca98904 into knative:main Jan 15, 2026
91 checks passed