Add metric and logging for activator-autoscaler connectivity #16318
Conversation
Welcome @prashanthjos! It looks like this is your first PR to knative/serving 🎉
Hi @prashanthjos. Thanks for your PR. I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This change adds observability for the websocket connection between the activator and autoscaler components:
- Add `activator_autoscaler_reachable` gauge metric (1 = reachable, 0 = not reachable)
- Log ERROR when the autoscaler is not reachable during stat sending
- Add a periodic connection status monitor (every 5s) to detect connection establishment failures
- Add unit tests for the new AutoscalerConnectionStatusMonitor function

The metric is recorded in two scenarios:
1. When SendRaw fails/succeeds during stat message sending
2. When the periodic status check detects the connection is not established

This helps operators identify connectivity issues between the activator and autoscaler that could impact autoscaling decisions.
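For context, here is a minimal sketch of the "on stat send" scenario. The helper name sendStat, the rawSender interface, and the wiring are illustrative assumptions, not the exact code in pkg/activator/stat_reporter.go.

```go
// Sketch only: records the reachability gauge around a stat send.
// sendStat and rawSender are hypothetical names, not the PR's identifiers.
package activator

import (
	"context"

	"go.opentelemetry.io/otel/metric"
	"go.uber.org/zap"
)

// rawSender abstracts the websocket connection to the autoscaler (assumed shape).
type rawSender interface {
	SendRaw(msgType int, msg []byte) error
}

func sendStat(ctx context.Context, logger *zap.SugaredLogger, conn rawSender,
	reachable metric.Int64Gauge, msgType int, payload []byte) {
	if err := conn.SendRaw(msgType, payload); err != nil {
		// Mirrors the ERROR log described above and marks the autoscaler unreachable.
		logger.Errorw("Autoscaler is not reachable from activator. Stats were not sent.", "error", err)
		reachable.Record(ctx, 0)
		return
	}
	reachable.Record(ctx, 1)
}
```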
/ok-to-test
Codecov Report
❌ Patch coverage is

Additional details and impacted files

@@            Coverage Diff            @@
##             main   #16318     +/-   ##
=========================================
  Coverage   80.09%   80.09%
=========================================
  Files         215      216      +1
  Lines       13391    13429     +38
=========================================
+ Hits        10725    10756     +31
- Misses       2304     2311      +7
  Partials      362      362

☔ View full report in Codecov by Sentry.
/retest
Related docs PR:
linkvt left a comment
Thanks for your PR! I understand the requirement, but I think we could simplify/change the current implementation a bit; see my comments.
I haven't worked much with this code so far, so a more experienced maintainer might have more thoughts about the changes. I'm not sure if they are still on their winter break, though.
pkg/activator/stat_reporter.go (Outdated)

// AutoscalerConnectionStatusMonitor periodically checks if the autoscaler is reachable
// and emits metrics and logs accordingly.
func AutoscalerConnectionStatusMonitor(ctx context.Context, logger *zap.SugaredLogger, conn StatusChecker, mp metric.MeterProvider) {
I don't think we need this monitor as the stats are already reported every second, see
const reportInterval = time.Second
This means errors would be detected there already.
Good point! You're right that errors are already detected every second via ReportStats when stats are being sent. However, the monitor handles one edge case: when there's no traffic. If no requests are coming in, ConcurrencyReporter sends nothing (len(msgs) == 0), so ReportStats never calls SendRaw(), and we wouldn't detect a broken connection.
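To make the edge case concrete, here is a minimal sketch of such a periodic check, assuming StatusChecker exposes a Status() error method and that the gauge is created elsewhere; the actual AutoscalerConnectionStatusMonitor in this PR may differ in signature and wiring.

```go
// Minimal sketch of a periodic connection check; not the PR's exact implementation.
package activator

import (
	"context"
	"time"

	"go.opentelemetry.io/otel/metric"
	"go.uber.org/zap"
)

// StatusChecker is assumed to report whether the websocket connection is established.
type StatusChecker interface {
	Status() error
}

func monitorConnection(ctx context.Context, logger *zap.SugaredLogger,
	conn StatusChecker, reachable metric.Int64Gauge, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := conn.Status(); err != nil {
				// Covers the no-traffic case: SendRaw is never called, but the
				// broken connection is still surfaced via logs and the gauge.
				logger.Errorw("Autoscaler is not reachable from activator.", "error", err)
				reachable.Record(ctx, 0)
			} else {
				reachable.Record(ctx, 1)
			}
		}
	}
}
```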
pkg/activator/metrics.go (Outdated)

meter := provider.Meter(scopeName)

m.autoscalerReachable, err = meter.Int64Gauge(
I think we should use counters with labels here instead of a gauge. If the connection is flaky, we might hit the case where we always happen to check while the gauge is 1.
If we had a counter with result=success or result=error we would:
- not miss any errors anymore
- be able to create an alert based on the success rate, e.g. if the success rate during the last 5 minutes drops below 95%
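A rough sketch of that alternative, with illustrative metric and attribute names (the counter and the result attribute are not taken from the PR):

```go
// Sketch of a single counter labelled by outcome, as suggested above.
package activator

import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

func recordCheck(ctx context.Context, checks metric.Int64Counter, err error) {
	result := "success"
	if err != nil {
		result = "error"
	}
	// Every check increments the same counter, so flaky periods show up as a
	// drop in the success ratio over any time window.
	checks.Add(ctx, 1, metric.WithAttributes(attribute.String("result", result)))
}
```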
This is a good question - @prashanthjos do you have an opinion here?
I'm wondering if adding a second metric that tracks reconnects is sufficient to answer @linkvt's question.
Thanks for the feedback! I've implemented a hybrid approach:
- Keep the gauge (kn.activator.autoscaler.reachable): provides instant visibility into the current connection state
- Add a counter (kn.activator.autoscaler.connection_errors_total): monotonically increasing, tracks every error

This addresses @linkvt's concern:
- The counter never misses errors (it accumulates rather than samples)
- It enables rate-based alerting: rate(connection_errors_total[5m]) > threshold
- The gauge still answers "is it reachable right now?" for real-time dashboards

The counter increments on every error event (both periodic health checks and send failures), so flaky connections will be fully captured.
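A sketch of how the two instruments could be registered together; the struct and field names and the scopeName value are assumptions, though the metric names match the diff above.

```go
// Illustrative registration of the gauge plus error counter (hybrid approach).
package activator

import "go.opentelemetry.io/otel/metric"

const scopeName = "knative.dev/serving/pkg/activator" // assumed scope name

type connectionMetrics struct {
	autoscalerReachable        metric.Int64Gauge
	autoscalerConnectionErrors metric.Int64Counter
}

func newConnectionMetrics(provider metric.MeterProvider) (*connectionMetrics, error) {
	meter := provider.Meter(scopeName)
	m := &connectionMetrics{}

	var err error
	if m.autoscalerReachable, err = meter.Int64Gauge(
		"kn.activator.autoscaler.reachable",
		metric.WithDescription("Whether the autoscaler is reachable from the activator (1) or not (0)"),
	); err != nil {
		return nil, err
	}
	if m.autoscalerConnectionErrors, err = meter.Int64Counter(
		"kn.activator.autoscaler.connection_errors_total",
		metric.WithDescription("Number of activator-to-autoscaler connection errors"),
	); err != nil {
		return nil, err
	}
	return m, nil
}
```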
I understand the argument for keeping the gauge, sounds good to me 👍
With regard to the error counter, I did more research and found the following evaluation, which gives a recommendation and advises against the approach I proposed: https://promlabs.com/blog/2023/09/19/errors-successes-totals-which-metrics-should-i-expose-to-prometheus/#recommended-for-binary-outcomes-exposing-errors-and-totals
In summary:
- Add a counter for total connection checks
- Keep the counter for connection checks with errors (exists right now)
- This allows you to track not only the absolute error rate but also the relative error ratio, which in my experience is more important than an absolute number
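A small sketch of the errors-plus-totals shape described in that article; the checksTotal counter is an assumption (it does not exist in the PR yet):

```go
// Sketch of the "errors + totals" pattern: one counter for all checks, one for failures.
package activator

import (
	"context"

	"go.opentelemetry.io/otel/metric"
)

type checkCounters struct {
	checksTotal metric.Int64Counter // every connection check (hypothetical new metric)
	checkErrors metric.Int64Counter // failed checks only (the existing error counter)
}

func (c checkCounters) record(ctx context.Context, err error) {
	c.checksTotal.Add(ctx, 1)
	if err != nil {
		c.checkErrors.Add(ctx, 1)
	}
	// The relative error ratio over a window can then be computed as
	// errors / total, e.g. rate(errors[5m]) / rate(total[5m]) in Prometheus.
}
```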
The "errors + totals" pattern makes sense for variable-rate operations like HTTP requests.
However, for this connection health check, I'm not sure the totals counter adds value because:
- Check frequency is fixed (every 5s) - the rate is predictable/constant
- Any error is actionable, we don't need a ratio to decide if the connection is healthy
- The error counter alone lets us alert on rate(connection_errors_total[5m]) > 0, which captures "autoscaler became unreachable.
Partially agree, and I also thought about that, but:
- there can be multiple activator pods, so there is a difference between seeing 1 error across 1 activator and across 10 activators
- if your environment is sufficiently large, there are outages/hiccups at every level; it's a statistical certainty that network partitions, disk errors, memory errors, etc. happen at some point in time. Not every error is actionable, since errors are expected in large environments; this is also why error budgets are common.
- following common patterns (error + total metric) makes it easier for infra teams to write their alerts based on error budget burn rate.
This is at least my experience from working in some large environments handling PBs of data and on the German railroads sales platform, to give my statements some credibility 🙂.
@linkvt thank you for the detailed feedback and context, this is really helpful! Your points about multiple activator pods and error budgets in large environments make a lot of sense. Following the error + total pattern for burn rate alerting is definitely the right approach for production observability.
Could I accommodate this in a follow-up PR? I'd like to get the current changes merged first and then add the connection_checks_total counter alongside the existing connection_errors_total in a subsequent PR.
/retest
Chatted on slack - I think we should do the
Minor thing on my end (re: total in metric name).
pkg/activator/metrics.go (Outdated)

}

m.autoscalerConnectionErrors, err = meter.Int64Counter(
	"kn.activator.autoscaler.connection_errors_total",
According to the OTel documentation, "total" shouldn't be in the name:
https://opentelemetry.io/docs/specs/semconv/general/naming/#do-not-use-total
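Applied to the counter above, the rename would look roughly like this; whether the merged PR uses exactly this name is not confirmed here:

```go
// Sketch: counter name without a "total" suffix, per OTel naming guidance.
package activator

import "go.opentelemetry.io/otel/metric"

func newConnectionErrorCounter(meter metric.Meter) (metric.Int64Counter, error) {
	return meter.Int64Counter(
		"kn.activator.autoscaler.connection_errors", // no "_total"; exporters may append their own suffix
		metric.WithDescription("Number of activator-to-autoscaler connection errors"),
	)
}
```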
pkg/activator/metrics.go (Outdated)

meter := provider.Meter(scopeName)

m.autoscalerReachable, err = meter.Int64Gauge(
	"kn.activator.autoscaler.reachable",
Not a huge deal in my mind, but curious if we should make this generic.
For example, kn.activator.open_connections with a peer attribute of autoscaler.
Then, if the activator were ever to connect to anything else, we would just add a new peer attribute value.
And likewise tweak the metric name below
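A sketch of that more generic shape; the kn.activator.open_connections name comes from the comment above, while the peer attribute key and the helper are illustrative:

```go
// Sketch: one generic gauge plus a "peer" attribute instead of an autoscaler-specific name.
package activator

import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

func recordOpenConnection(ctx context.Context, openConns metric.Int64Gauge, peer string, up bool) {
	val := int64(0)
	if up {
		val = 1
	}
	// e.g. peer="autoscaler" today; a future connection to another component
	// would reuse the same gauge with a different peer value.
	openConns.Record(ctx, val, metric.WithAttributes(attribute.String("peer", peer)))
}
```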
@dprotaso done, addressed your comments!
thanks for the change @prashanthjos
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dprotaso, prashanthjos

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Description
This PR adds observability for the websocket connection between the activator and autoscaler components. When the autoscaler is not reachable, operators currently have no easy way to identify this issue, which can lead to autoscaling failures.

Changes

New Metric
- kn.activator.autoscaler.reachable: 1 (reachable), 0 (not reachable)

New Logging
- "Autoscaler is not reachable from activator. Stats were not sent." (on send failure)
- "Autoscaler is not reachable from activator." (on connection check failure)

How It Works
The metric is recorded in two scenarios:
- Periodic check (every 5s):
- On stat send:

Testing
- go test ./pkg/activator/... -v

Release Note