Runtime performance metrics #3276
base: main
Conversation
Force-pushed from eabb945 to 97a5cfc
gdorsi left a comment:
Love the feature!
I think we should start way more slowly, and minimize the information exposed as much as possible.
Beyond the performance impact concerns, I think we should carefully expose only the most valuable information; otherwise we end up like React's performance marks, which I myself never managed to understand.
Advanced users can use the performance profiler, which gives detailed info anyway.
Also, React Native is failing with: com.facebook.react.common.JavascriptException: TypeError: undefined is not a function
homepage/homepage/content/docs/code-snippets/tooling-and-resources/inspector/index.tsx
loadTime: number;
loadFromStorage?: number;
loadFromPeer?: number;
transactionParsing?: number;
I like the idea of measuring storage vs. network load time, but instead of exposing it this way, I think it would be easier to understand if we gave people a type flag that is storage if the value was found in storage and network otherwise.
A single loadTime number is more than enough to catch slow queries; the rest is for advanced investigations, and even then I think it would be hard to gain good insights given how fine-grained these numbers are.
When things do get slow (I expect with thousands of CoValues), we need to show only the meaningful data and avoid polluting the devtools with too many marks.
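A minimal sketch of the shape this suggestion implies, with illustrative names rather than the actual API:

```ts
// Hypothetical simplified metric: a single duration plus a source flag,
// instead of separate storage/peer/parsing breakdowns.
type CoValueLoadMetric = {
  /** Time from subscription start to first load, in milliseconds. */
  loadTime: number;
  /** "storage" if the value was found in storage, "network" otherwise. */
  type: "storage" | "network";
};
```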
PR updated with latest feedback:
This PR is being reviewed by Cursor Bugbot
  | typeof CoValueLoadingState.UNAVAILABLE
  | undefined;

trackPerformanceMark("subscriptionLoadStart", id);
Performance marks leak when subscription destroyed early
The subscriptionLoadStart performance mark is created unconditionally in the constructor, but cleanup only occurs in extractStartEndMarks when both start and end marks are synchronized. If a subscription is destroyed before trackFirstLoad() is called (e.g., unmounted while still loading), the start mark is never cleaned up. The destroy() method doesn't clear these orphaned performance marks, causing a minor memory leak in the performance timeline.
Additional Locations (1)
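One possible direction for a fix, sketched with the standard Performance API; the mark-name format and the call site in destroy() are assumptions based on the description above, not taken from the actual diff:

```ts
// Sketch: drop an orphaned start mark when a subscription is torn down before
// trackFirstLoad() ever runs, so entries don't accumulate in the timeline.
function clearOrphanedLoadMarks(coValueId: string) {
  // clearMarks(name) removes every mark with that exact name, including the
  // "subscriptionLoadStart" entry left behind for this CoValue id.
  // The name format below is an assumption.
  performance.clearMarks(`subscriptionLoadStart:${coValueId}`);
}

// Would be called from SubscriptionScope.destroy() alongside the existing teardown.
```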
  id: this.id,
  source_id: this.sourceId,
  resolve: JSON.stringify(this.resolve),
});
JSON.stringify runs unconditionally despite opt-in documentation
The activeSubCounter.add() calls include JSON.stringify(this.resolve) which executes unconditionally on every subscription creation and destruction, even when OpenTelemetry instrumentation is disabled. The PR discussion notes that performance.mark is too slow for performance-critical code paths, yet JSON.stringify on potentially complex nested resolve objects is more expensive. The span creation at line 143 is correctly guarded by isOpenTelemetryInstrumentationEnabled(), but the counter operations are not, contradicting the documentation stating metrics are "opt-in and disabled by default for performance reasons."
Additional Locations (1)
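A sketch of how the counter update could sit behind the same guard as the span creation; the toggle function, counter name, and attribute shape follow the discussion and diff above, but the exact wiring is an assumption:

```ts
import { metrics, type UpDownCounter } from "@opentelemetry/api";

// Assumed to exist per the PR: the global opt-in toggle.
declare function isOpenTelemetryInstrumentationEnabled(): boolean;

// Meter name "jazz" is illustrative; the counter name comes from the PR summary.
const activeSubCounter: UpDownCounter = metrics
  .getMeter("jazz")
  .createUpDownCounter("jazz.subscription.active");

function recordActiveSubscription(
  scope: { id: string; sourceId: string; resolve: unknown },
  delta: 1 | -1,
) {
  // Bail out before doing any work, so JSON.stringify and the counter update
  // only run when instrumentation is actually enabled.
  if (!isOpenTelemetryInstrumentationEnabled()) return;

  activeSubCounter.add(delta, {
    id: scope.id,
    source_id: scope.sourceId,
    resolve: JSON.stringify(scope.resolve),
  });
}
```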
This OTel counter is also used to track the number of active subscriptions in the inspector. We need an additional check for whether the inspector is installed.
PR Updated:
// Assuming they are all sync, pick the last endMark entry position
const startMarkEntry = startMarks.at(startMarks.length - 1);
const endMarkEntry = endMarks.at(-1);
Performance marks mismatch for concurrent subscriptions
The extractStartEndMarks function uses startMarks.at(startMarks.length - 1) and endMarks.at(-1) to get the last entries from each array. When multiple SubscriptionScope instances exist for the same coId concurrently, this causes mismatched start/end mark pairing. For example, if subscription A and B both start for the same coId (adding two start marks), and A completes first (adding one end mark), the measurement pairs B's start mark with A's end mark, producing an incorrect (shorter) duration. Each subscription should track its own unique marks rather than relying on global mark arrays keyed only by coId.
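A sketch of the per-subscription keying this comment suggests; the mark names and sequence counter are illustrative, not the PR's implementation:

```ts
// Sketch: key marks by a unique per-subscription id instead of by coId alone,
// so concurrent subscriptions to the same CoValue can't pair each other's marks.
let nextSubscriptionSeq = 0;

function startLoadMeasurement(coId: string) {
  const markId = `${coId}#${nextSubscriptionSeq++}`;
  const startName = `subscriptionLoadStart:${markId}`;
  const endName = `subscriptionLoadEnd:${markId}`;

  performance.mark(startName);
  return {
    end() {
      performance.mark(endName);
      // Measures exactly this subscription's own start/end pair.
      performance.measure(`subscriptionLoad:${markId}`, startName, endName);
      performance.clearMarks(startName);
      performance.clearMarks(endName);
    },
  };
}
```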
export function recordMetrics() {
  const globalMeterHasBeenSet = !metrics
    .getMeterProvider()
    .constructor.name.startsWith("Noop");
Constructor name check fragile under minification
The recordMetrics function checks whether a global meter provider is configured by testing if constructor.name.startsWith("Noop"). In production builds with minification enabled, class names can be mangled (e.g., NoopMeterProvider becomes "n" or "t"). When minified, the check would incorrectly return true for globalMeterHasBeenSet (since a minified name like "n" doesn't start with "Noop"), causing the function to exit early without setting up the Jazz meter provider. This would silently break metrics collection in minified production builds.
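One minification-safe alternative, sketched as an assumption rather than the PR's actual fix: track registration with an explicit setter instead of inspecting constructor names:

```ts
import { metrics, type MeterProvider } from "@opentelemetry/api";

// Set only when the host app registers its own provider through this setter,
// so recordMetrics never has to guess from (possibly minified) class names.
let appMeterProvider: MeterProvider | undefined;

export function setAppMeterProvider(provider: MeterProvider) {
  appMeterProvider = provider;
  metrics.setGlobalMeterProvider(provider);
}

export function recordMetrics() {
  // If the app already registered a provider, don't install Jazz's own.
  if (appMeterProvider !== undefined) return;
  // ...set up the Jazz meter provider here...
}
```

The tradeoff is that the app would have to register its provider through the explicit setter for Jazz to detect it.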


This PR introduces comprehensive observability support for Jazz using the OpenTelemetry protocol. It instruments the SubscriptionScope with metrics and tracing spans, allowing developers to integrate Jazz's internal performance data with their existing observability stack.
Metrics are based on `performance.mark` and `performance.measure`, and they are visible in the DevTools. A new performance page has been added to the Inspector, displaying load times with a detailed breakdown (storage, peer, parsing). It is available only for the in-app inspector, as it uses the same context.

Note
Adds end-to-end observability for subscription performance and exposes it in-app.
- `SubscriptionScope` with `jazz.subscription.active` (UpDownCounter) and `jazz.subscription.first_load` (Histogram), plus tracing spans (`jazz.subscription.*`). New `unstable_setOpenTelemetryInstrumentationEnabled` toggle.
- CoJSON adds `performance` marks for storage/peer loads and exports `perf.performanceMarks`.
- `jazzMetricReader` and auto meter setup in `JazzInspector`.
- `JazzInspector` within provider.

Written by Cursor Bugbot for commit a11bedb.
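For context, a minimal consumer-side setup sketch; the import path for the toggle and the console exporter are assumptions, not something this PR prescribes:

```ts
import { metrics } from "@opentelemetry/api";
import {
  ConsoleMetricExporter,
  MeterProvider,
  PeriodicExportingMetricReader,
} from "@opentelemetry/sdk-metrics";
// Import path assumed; the toggle name comes from the PR summary above.
import { unstable_setOpenTelemetryInstrumentationEnabled } from "jazz-tools";

// Register a global meter provider so jazz.subscription.* metrics are exported.
const meterProvider = new MeterProvider({
  readers: [
    new PeriodicExportingMetricReader({
      exporter: new ConsoleMetricExporter(),
      exportIntervalMillis: 10_000,
    }),
  ],
});
metrics.setGlobalMeterProvider(meterProvider);

// Metrics are opt-in and disabled by default; enable them explicitly.
unstable_setOpenTelemetryInstrumentationEnabled(true);
```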