Conversation
@mfinean mfinean commented Dec 26, 2025

Summary

  • Split the large bench_cfg.py into 10 focused modules under bench_cfg/
  • Group related parameters: cache, execution, display, visualization, time, server
  • Use composition instead of deep inheritance for BenchRunCfg
  • Maintain full backward compatibility via dynamic delegation
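The composition-plus-delegation approach described above can be sketched in plain Python. This is a simplified, hypothetical illustration only: the real classes are `param.Parameterized` and the delegation map covers every legacy attribute, but the routing idea is the same.

```python
# Hypothetical sketch of composition + dynamic delegation (not the real
# bencher implementation; real classes are param.Parameterized).

class CacheCfg:
    def __init__(self, cache_results=False):
        self.cache_results = cache_results

class ExecutionCfg:
    def __init__(self, repeats=1):
        self.repeats = repeats

# Maps legacy flat attribute names to the sub-config that owns them.
_DELEGATION_MAP = {"cache_results": "cache", "repeats": "execution"}

class BenchRunCfg:
    def __init__(self, **params):
        # Sub-configs may be passed explicitly or default-constructed.
        object.__setattr__(self, "cache", params.pop("cache", None) or CacheCfg())
        object.__setattr__(self, "execution", params.pop("execution", None) or ExecutionCfg())
        # Route legacy flat kwargs into the owning sub-config.
        for name, value in params.items():
            setattr(self, name, value)

    def __getattr__(self, name):
        # Called only when normal lookup fails: forward mapped names.
        if name in _DELEGATION_MAP:
            return getattr(object.__getattribute__(self, _DELEGATION_MAP[name]), name)
        raise AttributeError(name)

    def __setattr__(self, name, value):
        if name in _DELEGATION_MAP:
            setattr(getattr(self, _DELEGATION_MAP[name]), name, value)
        else:
            object.__setattr__(self, name, value)

cfg = BenchRunCfg(cache_results=True, repeats=5)  # flat, legacy style
assert cfg.cache.cache_results is True            # grouped, new style
cfg.repeats = 10                                  # flat write routes to execution
assert cfg.execution.repeats == 10
```

Flat reads and writes stay synchronized with grouped access because both resolve to the same sub-config attribute.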

Changes

  • New bencher/bench_cfg/ package with focused config classes
  • Original bench_cfg.py becomes a thin re-export layer
  • No changes to public API or parameter names/defaults

Summary by Sourcery

Modularize benchmark configuration into focused submodules while preserving the existing public API via a thin compatibility layer.

New Features:

  • Introduce dedicated configuration classes for caching, execution, display, visualization, time, and plot server settings under the new bencher.bench_cfg package.
  • Add a composed BenchRunCfg that groups the sub-configuration objects while still supporting flat parameter access and command-line construction.
  • Expose BenchCfg, BenchRunCfg, DimsCfg, and the new sub-config classes through the bencher.bench_cfg package for both grouped and flat configuration usage.

Enhancements:

  • Refactor the previous monolithic bench_cfg.py into smaller, concern-focused modules, improving structure and maintainability without changing parameter names or defaults.

Tests:

  • Add unit tests covering default values of the new sub-config classes, BenchRunCfg composition and delegation behavior, BenchCfg inheritance, and backward-compatible import paths.

@mfinean mfinean requested a review from blooop as a code owner December 26, 2025 22:08

sourcery-ai bot commented Dec 26, 2025

Reviewer's Guide

Refactors the monolithic bencher.bench_cfg module into a bench_cfg package of focused configuration classes, introduces a compositional BenchRunCfg with dynamic delegation to sub-configs, and preserves backward compatibility via re-exports and attribute delegation, with tests covering defaults, composition, and import behavior.

Sequence diagram for BenchRunCfg initialization and delegated attribute access

sequenceDiagram
    actor User
    participant BenchRunCfg
    participant BenchPlotSrvCfg as ServerCfg
    participant CacheCfg
    participant ExecutionCfg
    participant DisplayCfg
    participant VisualizationCfg
    participant TimeCfg

    User->>BenchRunCfg: new BenchRunCfg(cache_results=True, repeats=5)
    activate BenchRunCfg
    Note over BenchRunCfg: __init__(**params)

    BenchRunCfg->>BenchPlotSrvCfg: create default instance
    BenchRunCfg->>CacheCfg: create default instance
    BenchRunCfg->>ExecutionCfg: create default instance
    BenchRunCfg->>DisplayCfg: create default instance
    BenchRunCfg->>VisualizationCfg: create default instance
    BenchRunCfg->>TimeCfg: create default instance

    Note over BenchRunCfg: Route legacy flat params via _DELEGATION_MAP
    BenchRunCfg->>CacheCfg: set cache_results=True
    BenchRunCfg->>ExecutionCfg: set repeats=5

    BenchRunCfg-->>User: BenchRunCfg instance
    deactivate BenchRunCfg

    User->>BenchRunCfg: cfg.cache_results
    activate BenchRunCfg
    Note over BenchRunCfg: __getattr__("cache_results")
    BenchRunCfg->>CacheCfg: get cache_results
    CacheCfg-->>BenchRunCfg: True
    BenchRunCfg-->>User: True
    deactivate BenchRunCfg

    User->>BenchRunCfg: cfg.repeats = 10
    activate BenchRunCfg
    Note over BenchRunCfg: __setattr__("repeats", 10)
    BenchRunCfg->>ExecutionCfg: set repeats=10
    deactivate BenchRunCfg

Class diagram for refactored benchmark configuration classes

classDiagram
    class BenchPlotSrvCfg {
        <<param.Parameterized>>
        +int port
        +bool allow_ws_origin
        +bool show
    }

    class CacheCfg {
        <<param.Parameterized>>
        +bool cache_results
        +bool clear_cache
        +bool cache_samples
        +bool only_hash_tag
        +bool clear_sample_cache
        +bool overwrite_sample_cache
        +bool only_plot
    }

    class ExecutionCfg {
        <<param.Parameterized>>
        +int repeats
        +int level
        +Executors executor
        +bool nightly
        +bool headless
    }

    class DisplayCfg {
        <<param.Parameterized>>
        +bool summarise_constant_inputs
        +bool print_bench_inputs
        +bool print_bench_results
        +bool print_pandas
        +bool print_xarray
        +bool serve_pandas
        +bool serve_pandas_flat
        +bool serve_xarray
    }

    class VisualizationCfg {
        <<param.Parameterized>>
        +bool auto_plot
        +bool use_holoview
        +bool use_optuna
        +bool render_plotly
        +bool raise_duplicate_exception
        +int plot_size
        +int plot_width
        +int plot_height
    }

    class TimeCfg {
        <<param.Parameterized>>
        +bool over_time
        +bool clear_history
        +str time_event
        +str run_tag
        +datetime run_date
    }

    class BenchRunCfg {
        <<param.Parameterized>>
        +BenchPlotSrvCfg server
        +CacheCfg cache
        +ExecutionCfg execution
        +DisplayCfg display
        +VisualizationCfg visualization
        +TimeCfg time
        +deep()
        +from_cmd_line()
        +__getattr__(name)
        +__setattr__(name, value)
    }

    class BenchCfg {
        <<param.Parameterized>>
        +list input_vars
        +list result_vars
        +list const_vars
        +list result_hmaps
        +list meta_vars
        +list all_vars
        +list iv_time
        +list iv_time_event
        +str name
        +str title
        +bool raise_duplicate_exception
        +str bench_name
        +str description
        +str post_description
        +bool has_results
        +bool pass_repeat
        +str tag
        +str hash_value
        +list plot_callbacks
        +hash_persistent(include_repeats)
        +inputs_as_str()
        +to_latex()
        +describe_sweep(width, accordion)
        +sweep_sentence()
        +describe_benchmark()
        +to_title(panel_name)
        +to_description(width)
        +to_post_description(width)
        +to_sweep_summary(name, description, describe_sweep, results_suffix, title)
        +optuna_targets(as_var)
    }

    class DimsCfg {
        +list dims_name
        +list dim_ranges
        +list dims_size
        +list dim_ranges_index
        +list dim_ranges_str
        +dict coords
    }

    BenchRunCfg o-- BenchPlotSrvCfg : server
    BenchRunCfg o-- CacheCfg : cache
    BenchRunCfg o-- ExecutionCfg : execution
    BenchRunCfg o-- DisplayCfg : display
    BenchRunCfg o-- VisualizationCfg : visualization
    BenchRunCfg o-- TimeCfg : time

    BenchCfg --|> BenchRunCfg
    DimsCfg ..> BenchCfg : uses

File-Level Changes

Convert bencher.bench_cfg from a large implementation module into a thin compatibility layer that re-exports configuration classes from the new bench_cfg package.
  • Remove in-module definitions of BenchPlotSrvCfg, BenchRunCfg, BenchCfg, and DimsCfg and all related logic from bencher.bench_cfg
  • Import configuration classes from the new bencher.bench_cfg package and expose them via `__all__`
  • Document recommended new import style while maintaining backward-compatible symbols for existing code
bencher/bench_cfg.py
Introduce a bench_cfg package that organizes configuration concerns into focused submodules while reassembling them via BenchRunCfg and BenchCfg.
  • Create bench_cfg/__init__.py to export all configuration classes and document grouped vs flat usage
  • Add server_cfg, cache_cfg, execution_cfg, display_cfg, visualization_cfg, and time_cfg modules each defining a dedicated param.Parameterized config class for its concern
  • Add dims_cfg module to host DimsCfg unchanged in behavior but relocated into the package
  • Add bench_cfg_class.BenchCfg that subclasses BenchRunCfg and contains the benchmark variable/metadata logic previously in BenchCfg
bencher/bench_cfg/__init__.py
bencher/bench_cfg/server_cfg.py
bencher/bench_cfg/cache_cfg.py
bencher/bench_cfg/execution_cfg.py
bencher/bench_cfg/display_cfg.py
bencher/bench_cfg/visualization_cfg.py
bencher/bench_cfg/time_cfg.py
bencher/bench_cfg/dims_cfg.py
bencher/bench_cfg/bench_cfg_class.py
Redesign BenchRunCfg to use composition of sub-configs with dynamic attribute delegation for backward-compatible flat access.
  • Implement BenchRunCfg as a param.Parameterized that owns sub-config instances (server, cache, execution, display, visualization, time) as ClassSelectors
  • Add a delegation map that routes legacy flat attribute names (e.g., cache_results, repeats, over_time) to the appropriate sub-config
  • Override `__init__` to accept both sub-config objects and flat keyword arguments, routing flat parameters into sub-configs
  • Override `__getattr__` and `__setattr__` to dynamically delegate known attributes to the correct sub-config, preserving the old BenchRunCfg interface
  • Reimplement from_cmd_line to construct BenchRunCfg using the new sub-config classes and to keep CLI flags compatible
  • Provide deep() helper that returns a deepcopy of the configuration
bencher/bench_cfg/run_cfg.py
Add tests ensuring sub-config defaults, BenchRunCfg composition/delegation, BenchCfg inheritance, and import/backward compatibility.
  • Verify default values of each sub-config class match prior behavior (cache, execution, display, visualization, time, server)
  • Test that BenchRunCfg instantiates all sub-configs and supports both flat and grouped parameter access and initialization
  • Ensure flat and grouped attributes stay synchronized and that deep() produces independent copies
  • Confirm that BenchCfg inherits flat parameter access from BenchRunCfg and that key parameters like over_time work as expected
  • Add backward-compatibility tests for importing configuration classes from bencher.bench_cfg and bencher, including new sub-configs
test/test_bench_cfg.py


@sourcery-ai sourcery-ai bot left a comment
Hey - I've found 6 issues

Prompt for AI Agents
Please address the comments from this code review:

## Individual Comments

### Comment 1
<location> `bencher/bench_cfg/run_cfg.py:147-149` </location>
<code_context>
+            **params,
+        )
+
+    def __getattr__(self, name):
+        """Delegate attribute access to sub-configs for backward compatibility."""
+        if name in _DELEGATION_MAP:
+            sub_name = _DELEGATION_MAP[name]
+            sub_cfg = object.__getattribute__(self, sub_name)
</code_context>

<issue_to_address>
**issue (bug_risk):** Fallback in __getattr__ should delegate to super().__getattr__ to avoid breaking Parameterized behavior.

Because this implementation raises `AttributeError` for any name not in `_DELEGATION_MAP`, it bypasses `param.Parameterized`'s own `__getattr__` handling and can break dynamic parameters or other attributes `Parameterized` normally resolves. Instead, delegate non-mapped names to `super().__getattr__(name)` and only raise if that also fails, e.g.

```python
def __getattr__(self, name):
    if name in _DELEGATION_MAP:
        sub_name = _DELEGATION_MAP[name]
        sub_cfg = object.__getattribute__(self, sub_name)
        return getattr(sub_cfg, name)
    return super().__getattr__(name)
```
</issue_to_address>

### Comment 2
<location> `test/test_bench_cfg.py:57-66` </location>
<code_context>
+class TestBenchRunCfgComposition(unittest.TestCase):
</code_context>

<issue_to_address>
**suggestion (testing):** Add tests to verify that sub-config instances are not shared between BenchRunCfg copies

The `repeats` parameter exercises the `deep()` helper, but only at the top level. Please also assert that `cfg.cache is not copy.cache` (and likewise for other sub-configs) and that mutating `copy.cache.cache_results` does not affect `cfg.cache.cache_results`. This will ensure sub-configs are truly deep-copied and protect against future shared-state regressions.

Suggested implementation:

```python
        self.assertIsInstance(cfg.display, DisplayCfg)
        self.assertIsInstance(cfg.visualization, VisualizationCfg)
        self.assertIsInstance(cfg.time, TimeCfg)

    def test_sub_configs_not_shared_between_copies(self):
        """Sub-config instances are deep-copied, not shared, when using repeats."""
        cfg = BenchRunCfg()
        # Set a known value so we can verify it does not change when mutating the copy.
        cfg.cache.cache_results = False

        # Create a copy that exercises the deep() helper via the repeats parameter.
        cfg_copy = cfg.copy(repeats=2)

        # Sub-config objects themselves must not be shared between cfg and cfg_copy.
        self.assertIsNot(cfg.cache, cfg_copy.cache)
        self.assertIsNot(cfg.execution, cfg_copy.execution)
        self.assertIsNot(cfg.display, cfg_copy.display)
        self.assertIsNot(cfg.visualization, cfg_copy.visualization)
        self.assertIsNot(cfg.time, cfg_copy.time)

        # Mutating a sub-config on the copy must not affect the original.
        cfg_copy.cache.cache_results = True
        self.assertTrue(cfg_copy.cache.cache_results)
        self.assertFalse(cfg.cache.cache_results)

```

If `BenchRunCfg` does not expose a `copy(repeats=...)` API, update the line `cfg_copy = cfg.copy(repeats=2)` to call whichever method/constructor actually uses the `repeats` parameter and internally invokes the `deep()` helper (for example, `cfg_copy = cfg.with_repeats(2)` or similar). The rest of the test assertions can remain as-is.
</issue_to_address>

### Comment 3
<location> `test/test_bench_cfg.py:80-87` </location>
<code_context>
+        _ = cfg.over_time
+        _ = cfg.port
+
+    def test_flat_parameter_init(self):
+        """Flat parameters can be set via constructor."""
+        cfg = BenchRunCfg(cache_results=True, repeats=5, auto_plot=False)
+        self.assertTrue(cfg.cache_results)
+        self.assertEqual(cfg.repeats, 5)
+        self.assertFalse(cfg.auto_plot)
+
+    def test_flat_parameter_setter(self):
</code_context>

<issue_to_address>
**suggestion (testing):** Add a test case for initializing BenchRunCfg with explicit sub-config objects (new-style API)

Current tests only cover flat-parameter construction on `BenchRunCfg`, not the new composition API where callers pass `cache=CacheCfg(...)`, `execution=ExecutionCfg(...)`, etc. Please add a test that builds `BenchRunCfg(cache=CacheCfg(cache_results=True), execution=ExecutionCfg(repeats=7))` and asserts both the grouped (`cfg.cache.cache_results`, `cfg.execution.repeats`) and flat (`cfg.cache_results`, `cfg.repeats`) attributes to validate this path.

```suggestion
    def test_flat_parameter_init(self):
        """Flat parameters can be set via constructor."""
        cfg = BenchRunCfg(cache_results=True, repeats=5, auto_plot=False)
        self.assertTrue(cfg.cache_results)
        self.assertEqual(cfg.repeats, 5)
        self.assertFalse(cfg.auto_plot)

    def test_subconfig_init(self):
        """Sub-config objects can be passed explicitly and are reflected in flat attributes."""
        cfg = BenchRunCfg(
            cache=CacheCfg(cache_results=True),
            execution=ExecutionCfg(repeats=7),
        )
        # grouped access
        self.assertTrue(cfg.cache.cache_results)
        self.assertEqual(cfg.execution.repeats, 7)
        # flat access
        self.assertTrue(cfg.cache_results)
        self.assertEqual(cfg.repeats, 7)

    def test_flat_parameter_setter(self):
```
</issue_to_address>

### Comment 4
<location> `test/test_bench_cfg.py:70-85` </location>
<code_context>
+        self.assertIsInstance(cfg.time, TimeCfg)
+        self.assertIsInstance(cfg.server, BenchPlotSrvCfg)
+
+    def test_flat_parameter_access(self):
+        """Parameters accessible directly on BenchRunCfg."""
+        cfg = BenchRunCfg()
+        # Should not raise
+        _ = cfg.cache_results
+        _ = cfg.repeats
+        _ = cfg.auto_plot
+        _ = cfg.over_time
+        _ = cfg.port
+
+    def test_flat_parameter_init(self):
</code_context>

<issue_to_address>
**suggestion (testing):** Consider testing error behavior of delegated attribute access for unknown attributes

Current tests only verify successful delegation. Please add a test that accessing/setting an unknown attribute (e.g. `cfg.nonexistent_param`) raises `AttributeError`, and confirm this is consistent for both `BenchRunCfg` and `BenchCfg`, to guard against future delegation-map regressions.

```suggestion
    def test_flat_parameter_access(self):
        """Parameters accessible directly on BenchRunCfg."""
        cfg = BenchRunCfg()
        # Should not raise
        _ = cfg.cache_results
        _ = cfg.repeats
        _ = cfg.auto_plot
        _ = cfg.over_time
        _ = cfg.port

    def test_flat_parameter_init(self):
        """Flat parameters can be set via constructor."""
        cfg = BenchRunCfg(cache_results=True, repeats=5, auto_plot=False)
        self.assertTrue(cfg.cache_results)
        self.assertEqual(cfg.repeats, 5)
        self.assertFalse(cfg.auto_plot)

    def test_unknown_flat_parameter_access_raises(self):
        """Accessing unknown delegated attributes raises AttributeError."""
        for cfg_cls in (BenchRunCfg, BenchCfg):
            with self.subTest(cfg_cls=cfg_cls.__name__):
                cfg = cfg_cls()
                with self.assertRaises(AttributeError):
                    _ = cfg.nonexistent_param

    def test_unknown_flat_parameter_set_raises(self):
        """Setting unknown delegated attributes raises AttributeError."""
        for cfg_cls in (BenchRunCfg, BenchCfg):
            with self.subTest(cfg_cls=cfg_cls.__name__):
                cfg = cfg_cls()
                with self.assertRaises(AttributeError):
                    cfg.nonexistent_param = 42
```
</issue_to_address>

### Comment 5
<location> `test/test_bench_cfg.py:119-128` </location>
<code_context>
+        self.assertEqual(copy.repeats, 10)
+
+
+class TestBenchCfgInheritance(unittest.TestCase):
+    """Test BenchCfg inherits delegation from BenchRunCfg."""
+
+    def test_inherits_flat_access(self):
+        """BenchCfg has flat parameter access."""
+        cfg = BenchCfg(
+            input_vars=[],
+            result_vars=[],
+            const_vars=[],
+            bench_name="test",
+            title="test",
+            cache_results=True,
+            repeats=3,
+        )
+        self.assertTrue(cfg.cache_results)
+        self.assertEqual(cfg.repeats, 3)
+
+    def test_over_time_works(self):
+        """over_time parameter works correctly."""
+        cfg = BenchCfg(
+            input_vars=[],
+            result_vars=[],
+            const_vars=[],
+            bench_name="test",
+            title="test",
+            over_time=True,
+        )
+        self.assertTrue(cfg.over_time)
+
+
</code_context>

<issue_to_address>
**suggestion (testing):** Extend BenchCfg tests to cover methods that depend on delegated fields (e.g., hash_persistent, describe_benchmark)

These tests only verify delegated fields individually. Please add a small config-based test that exercises higher-level helpers that combine delegated and BenchCfg-specific attributes, e.g. asserting:
- `hash_persistent(True) != hash_persistent(False)` when `repeats` differs
- `describe_benchmark()` output includes `run_date`, `run_tag`, `level`, `cache_results`, and `cache_samples`.
This will better protect behavior across refactors.
</issue_to_address>

### Comment 6
<location> `bencher/bench_cfg/run_cfg.py:18` </location>
<code_context>
+from bencher.bench_cfg.time_cfg import TimeCfg
+
+# Mapping of delegated attribute names to their sub-config
+_DELEGATION_MAP = {
+    # Server
+    "port": "server",
</code_context>

<issue_to_address>
**issue (complexity):** Consider generating `_DELEGATION_MAP` from the sub-config classes’ `param` definitions so delegation stays in sync without maintaining a large manual map.

You can keep the current behavior (including backwards-compatible flat attributes) while removing the “second source of truth” and a chunk of manual wiring by generating `_DELEGATION_MAP` from the sub-configs’ `param` definitions.

### 1. Replace manual `_DELEGATION_MAP` with introspection

Instead of hand-maintaining every attribute in `_DELEGATION_MAP`, derive the delegation map from the sub-config classes once, so adding a new param only requires touching the sub-config.

```python
# Before: large static map
_DELEGATION_MAP = {
    # Server
    "port": "server",
    "allow_ws_origin": "server",
    "show": "server",
    # Cache
    "cache_results": "cache",
    "clear_cache": "cache",
    ...
}
```

You can replace that with:

```python
_SUBCONFIG_SPECS = {
    "server": BenchPlotSrvCfg,
    "cache": CacheCfg,
    "execution": ExecutionCfg,
    "display": DisplayCfg,
    "visualization": VisualizationCfg,
    "time": TimeCfg,
}

# Optional: if you only want to expose a subset of parameters for each sub-config,
# define an allow-list per sub-config here instead of the large flat map.
_EXPOSED_PARAMS = {
    # None/empty means "all param parameters" on that sub-config
    "server": None,
    "cache": None,
    "execution": None,
    "display": None,
    "visualization": None,
    "time": None,
}

_DELEGATION_MAP = {}

for sub_name, cfg_cls in _SUBCONFIG_SPECS.items():
    allowed = _EXPOSED_PARAMS[sub_name]
    for pname in cfg_cls.param:
        if allowed is not None and pname not in allowed:
            continue
        # skip param system internals if necessary
        if pname in ("name", "objects", "parameters"):
            continue
        if pname in _DELEGATION_MAP:
            # in case of a clash, you can decide to raise or keep first
            raise ValueError(f"Delegated param '{pname}' defined on multiple sub-configs")
        _DELEGATION_MAP[pname] = sub_name
```

Now:

- Adding/removing parameters happens only in `CacheCfg`, `ExecutionCfg`, etc.
- `_DELEGATION_MAP` is always consistent with the sub-config definitions.
- The existing `__getattr__`, `__setattr__`, and `__init__` routing logic work unchanged.

### 2. Minor cleanup in `__init__` routing

You can also make the legacy-parameter routing slightly clearer and less error-prone by using the same `_SUBCONFIG_SPECS` keys and avoiding repeating the literal strings:

```python
def __init__(self, **params):
    server = params.pop("server", None) or BenchPlotSrvCfg()
    cache = params.pop("cache", None) or CacheCfg()
    execution = params.pop("execution", None) or ExecutionCfg()
    display = params.pop("display", None) or DisplayCfg()
    visualization = params.pop("visualization", None) or VisualizationCfg()
    time_cfg = params.pop("time", None) or TimeCfg()

    sub_configs = {
        "server": server,
        "cache": cache,
        "execution": execution,
        "display": display,
        "visualization": visualization,
        "time": time_cfg,
    }

    # Route legacy flat parameters
    for key in list(params):
        sub_name = _DELEGATION_MAP.get(key)
        if sub_name is not None:
            setattr(sub_configs[sub_name], key, params.pop(key))

    super().__init__(
        server=server,
        cache=cache,
        execution=execution,
        display=display,
        visualization=visualization,
        time=time_cfg,
        **params,
    )
```

This keeps all existing functionality (flat access, sub-configs, CLI wiring) but eliminates a large manual map and makes the system data-driven and easier to extend.
</issue_to_address>


@codecov
codecov bot commented Dec 26, 2025

Codecov Report

❌ Patch coverage is 98.81890% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 89.58%. Comparing base (bb6ba75) to head (d57f57d).
⚠️ Report is 6 commits behind head on main.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| bencher/bench_cfg/bench_cfg_class.py | 97.58% | 3 Missing ⚠️ |
Additional details and impacted files

Impacted file tree graph

```diff
@@            Coverage Diff             @@
##             main     #688      +/-   ##
==========================================
+ Coverage   89.39%   89.58%   +0.18%
==========================================
  Files          91       99       +8
  Lines        4432     4503      +71
==========================================
+ Hits         3962     4034      +72
+ Misses        470      469       -1
```
| Files with missing lines | Coverage Δ |
|---|---|
| bencher/bench_cfg/cache_cfg.py | 100.00% <100.00%> (ø) |
| bencher/bench_cfg/dims_cfg.py | 100.00% <100.00%> (ø) |
| bencher/bench_cfg/display_cfg.py | 100.00% <100.00%> (ø) |
| bencher/bench_cfg/execution_cfg.py | 100.00% <100.00%> (ø) |
| bencher/bench_cfg/run_cfg.py | 100.00% <100.00%> (ø) |
| bencher/bench_cfg/server_cfg.py | 100.00% <100.00%> (ø) |
| bencher/bench_cfg/time_cfg.py | 100.00% <100.00%> (ø) |
| bencher/bench_cfg/visualization_cfg.py | 100.00% <100.00%> (ø) |
| bencher/bench_cfg/bench_cfg_class.py | 97.58% <97.58%> (ø) |



```python
only_hash_tag: bool = param.Boolean(
    False,
    doc="By default when checking if a sample has been calculated before it includes the "
    "hash of the greater benchmarking context. This is safer because it means that data "
```

@blooop (Owner): Weird multiple line strings?

@mfinean (Collaborator, Author): I think the multi-line strings were from ruff auto-formatting - can revert to single lines if preferred.

```python
    default=0,
    bounds=[0, 12],
    doc="The level parameter is a method of defining the number samples to sweep over in a "
    "variable agnostic way, i.e you don't need to specify the number of samples for each "
```

@blooop (Owner): More


blooop commented Dec 26, 2025

I like the look of this in principle, but less sure when it comes to the implementation. I.e. before, the code was "bad" because it was monolithic, but it was relatively simple because it was all flat, which made all the hashing logic easy to verify. Now it's "better" and more modular, but it seems there is more room for error and hashing bugs. There is also more logic now to go wrong. Maybe this is covered in the tests? I'm not sure if this will be easier to maintain.


mfinean commented Dec 27, 2025

> I like the look of this in principle, but less sure when it comes to the implementation. I.e. before, the code was "bad" because it was monolithic, but it was relatively simple because it was all flat, which made all the hashing logic easy to verify. Now it's "better" and more modular, but it seems there is more room for error and hashing bugs. There is also more logic now to go wrong. Maybe this is covered in the tests? I'm not sure if this will be easier to maintain.

Your call :) The hashing logic is unchanged - hash_persistent still accesses self.repeats, self.over_time etc., which now delegate to sub-configs but return identical values. The tests verify that delegation works and hash behavior is preserved.
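The point above can be shown with a toy example. This is a hypothetical simplification (the real hash_persistent is a method on a param.Parameterized class and hashes several fields), but it illustrates why a hash computed from attribute values is unaffected by delegation: the flat read returns exactly the same value the sub-config holds.

```python
# Toy illustration (not the real bencher hash_persistent): a hash over
# attribute *values* is unchanged by delegation, because cfg.repeats and
# cfg.execution.repeats resolve to the same value.
from hashlib import sha256

class ExecutionCfg:
    def __init__(self, repeats=1):
        self.repeats = repeats

class BenchRunCfg:
    def __init__(self, repeats=1):
        self.execution = ExecutionCfg(repeats)

    def __getattr__(self, name):
        # Flat access delegates to the sub-config.
        if name == "repeats":
            return self.execution.repeats
        raise AttributeError(name)

    def hash_persistent(self):
        # Reads self.repeats exactly as the pre-refactor flat code did.
        return sha256(str(self.repeats).encode()).hexdigest()

# The hash a fully flat implementation would have produced for repeats=5:
flat_style = sha256(str(5).encode()).hexdigest()
assert BenchRunCfg(repeats=5).hash_persistent() == flat_style
```

Since the delegated read is value-identical to the old flat attribute, any hash derived from it is byte-identical too.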


blooop commented Dec 27, 2025

Can't test on computer, but does bench_cfg.xxx still auto complete as before? If it does then happy to try this. If it doesn't would an inheritance approach work?

@blooop blooop enabled auto-merge December 28, 2025 10:40
@blooop blooop merged commit 61917e2 into blooop:main Dec 28, 2025
8 of 9 checks passed

blooop commented Dec 28, 2025

Now I have been able to check out on my computer and see in more detail, looks good.

blooop pushed a commit that referenced this pull request Dec 29, 2025
…rameter access

This commit completes the config migration started in PR #688 by fully converting
to the new grouped config structure and removing backwards compatibility infrastructure.

Key changes:
- Converted all flat parameter accesses (e.g., cfg.cache_results) to grouped structure (e.g., cfg.cache.cache_results)
- Removed __getattr__ and __setattr__ delegation methods from BenchRunCfg
- Removed _build_delegation_map and _DELEGATION_MAP
- Simplified BenchRunCfg.__init__ by removing legacy flat parameter routing
- Updated all core files: bencher.py, bench_runner.py, sweep_executor.py, result_collector.py
- Updated result and plotting files: bench_result_base.py, optuna_result.py, surface_result.py, plt_cnt_cfg.py
- Updated bench_cfg_class.py to use grouped access in hash_persistent() and describe_benchmark()
- Updated from_cmd_line() to create sub-config objects
- Updated documentation to reflect new structure
- Updated tests to use new config structure

All config parameters now accessed via sub-configs:
- cfg.cache.cache_results, cfg.cache.cache_samples, cfg.cache.only_plot
- cfg.execution.repeats, cfg.execution.level, cfg.execution.executor
- cfg.display.print_bench_inputs, cfg.display.print_pandas
- cfg.visualization.auto_plot, cfg.visualization.plot_size
- cfg.time.over_time, cfg.time.run_tag, cfg.time.run_date
- cfg.server.port, cfg.server.show

This simplifies the codebase by removing the complexity of maintaining dual access patterns.
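Illustratively, the grouped-only access pattern after this commit looks like the following (SimpleNamespace stand-ins for brevity; the real classes are the param.Parameterized sub-configs in bencher.bench_cfg):

```python
from types import SimpleNamespace

# Minimal stand-in for the grouped config structure described above
# (illustrative only; attribute names taken from the commit message).
cfg = SimpleNamespace(
    cache=SimpleNamespace(cache_results=True, cache_samples=False, only_plot=False),
    execution=SimpleNamespace(repeats=5, level=0),
    time=SimpleNamespace(over_time=False, run_tag=""),
)

# Grouped access replaces the removed flat attributes one-for-one:
assert cfg.execution.repeats == 5       # was: cfg.repeats
assert cfg.cache.cache_results is True  # was: cfg.cache_results
assert cfg.time.over_time is False      # was: cfg.over_time
```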