Performance observability gaps (benchmarks, profiling, leak/race detection) #1015

@emil14

Description

Problem

We currently lack systematic performance observability. There are very few benchmarks, little to no profiling, and limited detection for goroutine leaks, data races, memory leaks, or CPU regressions. This makes it hard to catch performance regressions early or to build confidence in runtime/compiler changes.

Why this matters

  • Runtime performance is a top priority, but we don’t have consistent signals to track it.
  • Compiler performance is also important, yet we lack easy ways to measure and compare changes.
  • E2E tests are valuable but don’t replace targeted perf coverage.

What we want (open questions, 80/20 focus)

Please propose low‑effort, high‑impact ways to improve observability without committing to large or risky changes. Ideas might include:

  • Minimal benchmark set (runtime + compiler) that gives useful baselines.
  • A lightweight CI lane or manual workflow for perf/race/leak checks.
  • Simple profiling entry points (pprof or similar) to debug regressions when they appear.
  • Suggestions for keeping runtime stdlib‑only while still catching goroutine leaks in tests.

Constraints

  • Avoid heavy dependencies or big refactors.
  • Favor stable, battle‑tested tools if any are used.
  • Keep changes incremental and safe for a team without deep perf expertise.
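Within these constraints, a "lightweight CI lane" could be nothing more than standard `go` toolchain invocations, which are stable and dependency-free. A sketch of such a manual or scheduled workflow (package paths and file names here are illustrative):

```shell
set -e

# Race detection across the test suite (battle-tested, built into the toolchain).
go test -race ./...

# Benchmarks with allocation stats; -count=5 produces multiple samples,
# which makes before/after comparison more trustworthy.
go test -bench=. -benchmem -count=5 ./... | tee bench.txt

# CPU profile for a single package when a regression needs debugging;
# replace ./path/to/pkg with the package under investigation.
go test -bench=. -cpuprofile=cpu.out ./path/to/pkg
go tool pprof -top cpu.out
```

Running this on demand (or nightly) rather than on every PR keeps CI cost low while still producing comparable numbers over time.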

Success criteria

  • Clear, practical next steps with small surface area.
  • A minimal set of metrics/checks that deliver meaningful signal.

(If you have concrete suggestions, please share—but prioritize low‑hanging fruit and leave room for discussion.)

Metadata

Assignees

No one assigned

    Labels

    mediumDays
    optimisation: Make it fast
    p1: We can live without it but it's very important

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
