
Conversation

@sdhiscocks
Member

This adds components which can be used to estimate sensor bias in time, translation and/or orientation, using model wrappers and feeders.
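Conceptually, these components augment the target state with the bias and estimate both jointly. A minimal plain-NumPy sketch of that idea (an illustrative two-sensor setup with made-up values, not the Stone Soup API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Augmented state [x, b]: target position x plus sensor bias b.
F = np.eye(2)                    # both roughly constant over time
Q = 1e-8 * np.eye(2)             # small process noise
H = np.array([[1., 0.],          # sensor 1: unbiased, reads x
              [1., 1.]])         # sensor 2: biased, reads x + b
R = 0.01 * np.eye(2)

true_x, true_b = 5.0, 2.0
est = np.zeros((2, 1))
P = 10.0 * np.eye(2)

for _ in range(300):
    z = np.array([[true_x], [true_x + true_b]]) + 0.1 * rng.standard_normal((2, 1))
    # Kalman predict
    P = F @ P @ F.T + Q
    # Kalman update on the joint [x, b] state
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    est = est + K @ (z - H @ est)
    P = (np.eye(2) - K @ H) @ P

bias_estimate = float(est[1, 0])  # converges towards the true bias of 2.0
```

The same augmentation idea underlies the joint state constructed in the updater below, where the bias block is appended after the track predictions.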

@sdhiscocks sdhiscocks requested a review from a team as a code owner September 11, 2025 16:26
@sdhiscocks sdhiscocks requested review from jswright-dstl and removed request for a team September 11, 2025 16:26
@codecov

codecov bot commented Sep 12, 2025

Codecov Report

❌ Patch coverage is 99.11504% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 94.13%. Comparing base (c65bd69) to head (96d5420).
⚠️ Report is 245 commits behind head on main.

Files with missing lines | Patch % | Lines
stonesoup/updater/bias.py | 97.22% | 1 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1217      +/-   ##
==========================================
+ Coverage   94.05%   94.13%   +0.07%     
==========================================
  Files         227      230       +3     
  Lines       15754    15979     +225     
  Branches     2175     2203      +28     
==========================================
+ Hits        14818    15042     +224     
  Misses        652      652              
- Partials      284      285       +1     
Flag | Coverage Δ
integration | 66.70% <66.81%> (+0.14%) ⬆️
unittests | 91.55% <98.67%> (+0.06%) ⬆️

Flags with carried forward coverage won't be shown.



This moves the update logic from the feeder to an updater class, which is a better fit for the wider design. It also now includes measurement prediction that factors in sensor bias, and as such better handles data association, etc.

This enables the example to include a prior bias, making the bias more visible and demonstrating this functionality.
@sdhiscocks sdhiscocks requested review from csherman-dstl and removed request for spike-dstl October 24, 2025 13:32
Contributor

@csherman-dstl csherman-dstl left a comment

All looks good, just needs some docstrings.

class TimeBiasFeeder(_BiasFeeder):
    """Time Bias Feeder
    Apply bias to detection timestamp and overall timestamp yielded.
Contributor

My understanding is that these feeders remove the bias from the detection, but the docstrings for all of them state that they add the bias

Comment on lines 108 to 109
expected_rotated = build_rotation_matrix(-feeder.bias) \
    @ (det.state_vector - det.measurement_model.translation_offset)
Contributor

Suggested change
-expected_rotated = build_rotation_matrix(-feeder.bias) \
-    @ (det.state_vector - det.measurement_model.translation_offset)
+expected_rotated = build_rotation_matrix(-feeder.bias[:3]) \
+    @ (det.state_vector + feeder.bias[3:])
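The intuition behind applying the negated orientation bias can be checked standalone; `rot_z` below is a hypothetical stand-in for `build_rotation_matrix`, restricted to a yaw-only bias:

```python
import numpy as np

def rot_z(theta):
    # Rotation about the z-axis; stand-in for a full Euler-angle rotation matrix
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

true_vector = np.array([[10.], [5.], [0.]])
yaw_bias = 0.1  # rad: sensor reports detections rotated by this unknown offset
biased_detection = rot_z(yaw_bias) @ true_vector

# Removing the bias is a rotation by the negated angle
recovered = rot_z(-yaw_bias) @ biased_detection
```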



class BiasModelWrapper(MeasurementModel):
    """Abstract wrapper that applies bias values to an existing MeasurementModel.
Contributor

Similarly to the comment on the feeders, are these not taking in an already biased sensor and removing the bias?

Comment on lines 86 to 89
new_bias_state = self.bias_predictor.predict(self.bias_state, timestamp=pred_time)
self.bias_state.state_vector = new_bias_state.state_vector
self.bias_state.covar = new_bias_state.covar
self.bias_state.timestamp = new_bias_state.timestamp
Contributor

This logic is presumably only required due to the fact that the bias_feeder and bias_updater both use the bias_state but don't get it from each other.
Does it make more sense for the bias_feeder to take the updater as a property instead of the state, or to use a bias_track (like in the example) to just add the updated bias to

Contributor

@hpritchett-dstl hpritchett-dstl left a comment

Looks good to me. Minor changes in wording, especially for the updater, as it's a bit more involved than a typical updater.

Comment on lines 81 to 132
# Predict bias
pred_time = max(hypothesis.prediction.timestamp for hypothesis in hypotheses)
if self.bias_state.timestamp is None:
    self.bias_state.timestamp = pred_time
else:
    new_bias_state = self.bias_predictor.predict(self.bias_state, timestamp=pred_time)
    self.bias_state.state_vector = new_bias_state.state_vector
    self.bias_state.covar = new_bias_state.covar
    self.bias_state.timestamp = new_bias_state.timestamp

# Create joint state
states = [hypothesis.prediction for hypothesis in hypotheses]
applied_bias = next(
    (h.measurement.measurement_model.applied_bias
     for h in hypotheses
     if hasattr(h.measurement.measurement_model, 'applied_bias')),
    np.zeros((ndim_bias, 1)))
delta_bias = self.bias_state.state_vector - applied_bias
states.append(GaussianState(delta_bias, self.bias_state.covar))
combined_pred = GaussianState(
    np.vstack([state.state_vector for state in states]).view(StateVector),
    block_diag(*[state.covar for state in states]).view(CovarianceMatrix),
    timestamp=pred_time,
)

# Create joint measurement
offset = 0
models = []
for prediction, measurement in (
        (hypothesis.prediction, hypothesis.measurement) for hypothesis in hypotheses):
    models.append(self.bias_model_wrapper(
        ndim_state=combined_pred.state_vector.shape[0],
        measurement_model=measurement.measurement_model,
        state_mapping=[offset + n for n in range(prediction.ndim)],
        bias_mapping=list(range(-ndim_bias, 0))
    ))
    offset += prediction.ndim
combined_meas = Detection(
    np.vstack([hypothesis.measurement.state_vector for hypothesis in hypotheses]),
    timestamp=pred_time,
    measurement_model=CombinedReversibleGaussianMeasurementModel(models))

# Update bias
update = self.updater.update(SingleHypothesis(combined_pred, combined_meas), **kwargs)
rel_delta_bias = update.state_vector[-ndim_bias:, :] - delta_bias
self.bias_state.state_vector = self.bias_state.state_vector + rel_delta_bias
if self.max_bias is not None:
    self.bias_state.state_vector = \
        np.min([abs(self.bias_state.state_vector), self.max_bias], axis=0) \
        * np.sign(self.bias_state.state_vector)
self.bias_state.covar = update.covar[-ndim_bias:, -ndim_bias:]

Contributor

Suggest using an intermediate predicted_bias_state variable to help with clarity when reading the code. something like:

Suggested change
-# Predict bias
-pred_time = max(hypothesis.prediction.timestamp for hypothesis in hypotheses)
-if self.bias_state.timestamp is None:
-    self.bias_state.timestamp = pred_time
-else:
-    new_bias_state = self.bias_predictor.predict(self.bias_state, timestamp=pred_time)
-    self.bias_state.state_vector = new_bias_state.state_vector
-    self.bias_state.covar = new_bias_state.covar
-    self.bias_state.timestamp = new_bias_state.timestamp
-# Create joint state
-states = [hypothesis.prediction for hypothesis in hypotheses]
-applied_bias = next(
-    (h.measurement.measurement_model.applied_bias
-     for h in hypotheses
-     if hasattr(h.measurement.measurement_model, 'applied_bias')),
-    np.zeros((ndim_bias, 1)))
-delta_bias = self.bias_state.state_vector - applied_bias
-states.append(GaussianState(delta_bias, self.bias_state.covar))
-combined_pred = GaussianState(
-    np.vstack([state.state_vector for state in states]).view(StateVector),
-    block_diag(*[state.covar for state in states]).view(CovarianceMatrix),
-    timestamp=pred_time,
-)
-# Create joint measurement
-offset = 0
-models = []
-for prediction, measurement in (
-        (hypothesis.prediction, hypothesis.measurement) for hypothesis in hypotheses):
-    models.append(self.bias_model_wrapper(
-        ndim_state=combined_pred.state_vector.shape[0],
-        measurement_model=measurement.measurement_model,
-        state_mapping=[offset + n for n in range(prediction.ndim)],
-        bias_mapping=list(range(-ndim_bias, 0))
-    ))
-    offset += prediction.ndim
-combined_meas = Detection(
-    np.vstack([hypothesis.measurement.state_vector for hypothesis in hypotheses]),
-    timestamp=pred_time,
-    measurement_model=CombinedReversibleGaussianMeasurementModel(models))
-# Update bias
-update = self.updater.update(SingleHypothesis(combined_pred, combined_meas), **kwargs)
-rel_delta_bias = update.state_vector[-ndim_bias:, :] - delta_bias
-self.bias_state.state_vector = self.bias_state.state_vector + rel_delta_bias
-if self.max_bias is not None:
-    self.bias_state.state_vector = \
-        np.min([abs(self.bias_state.state_vector), self.max_bias], axis=0) \
-        * np.sign(self.bias_state.state_vector)
-self.bias_state.covar = update.covar[-ndim_bias:, -ndim_bias:]
+# Predict bias
+prediction_time = max(hypothesis.prediction.timestamp for hypothesis in hypotheses)
+if self.bias_state.timestamp is not None:
+    predicted_bias_state = self.bias_predictor.predict(self.bias_state, timestamp=prediction_time)
+else:
+    predicted_bias_state = self.bias_state
+    predicted_bias_state.timestamp = prediction_time
+# Create joint state
+states = [hypothesis.prediction for hypothesis in hypotheses]
+applied_bias = next(
+    (h.measurement.measurement_model.applied_bias
+     for h in hypotheses
+     if hasattr(h.measurement.measurement_model, 'applied_bias')),
+    np.zeros((ndim_bias, 1)))
+delta_bias = predicted_bias_state.state_vector - applied_bias
+states.append(GaussianState(delta_bias, predicted_bias_state.covar))
+combined_pred = GaussianState(
+    np.vstack([state.state_vector for state in states]).view(StateVector),
+    block_diag(*[state.covar for state in states]).view(CovarianceMatrix),
+    timestamp=prediction_time,
+)
+# Create joint measurement
+offset = 0
+models = []
+for prediction, measurement in (
+        (hypothesis.prediction, hypothesis.measurement) for hypothesis in hypotheses):
+    models.append(self.bias_model_wrapper(
+        ndim_state=combined_pred.state_vector.shape[0],
+        measurement_model=measurement.measurement_model,
+        state_mapping=[offset + n for n in range(prediction.ndim)],
+        bias_mapping=list(range(-ndim_bias, 0))
+    ))
+    offset += prediction.ndim
+combined_meas = Detection(
+    np.vstack([hypothesis.measurement.state_vector for hypothesis in hypotheses]),
+    timestamp=prediction_time,
+    measurement_model=CombinedReversibleGaussianMeasurementModel(models))
+# Update bias
+update = self.updater.update(SingleHypothesis(combined_pred, combined_meas), **kwargs)
+rel_delta_bias = update.state_vector[-ndim_bias:, :] - delta_bias
+self.bias_state.state_vector = predicted_bias_state.state_vector + rel_delta_bias
+if self.max_bias is not None:
+    self.bias_state.state_vector = \
+        np.min([abs(self.bias_state.state_vector), self.max_bias], axis=0) \
+        * np.sign(self.bias_state.state_vector)
+self.bias_state.covar = update.covar[-ndim_bias:, -ndim_bias:]
+self.bias_state.timestamp = predicted_bias_state.timestamp
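The max_bias limiting at the end of both versions clamps each bias component to a symmetric range while preserving its sign; a standalone sketch with made-up numbers:

```python
import numpy as np

bias = np.array([[3.5], [-0.4], [2.0]])
max_bias = np.array([[2.0], [1.0], [2.5]])

# Element-wise: take the smaller of |bias| and the limit, then restore the sign,
# so each component is clamped to [-max_bias, +max_bias]
clamped = np.min([np.abs(bias), max_bias], axis=0) * np.sign(bias)
```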

Contributor

Can you confirm if this is accounted for in c09ac5a?

Contributor

Mostly. I think it's still worth labelling self.bias_state as "prior_bias_state" on line 79, which then gets changed to predicted_bias_state after the prediction step on line 87; then we create an updated state which is appended to the track.


# Update bias
update = self.updater.update(SingleHypothesis(combined_pred, combined_meas), **kwargs)
rel_delta_bias = update.state_vector[-ndim_bias:, :] - delta_bias
Contributor

What does rel stand for?

bias i.e. all measurements from the same sensor.
"""
measurement_model = None
bias_state: GaussianState = Property(doc="Prior bias")
Contributor

I feel like there should be a bit more explicit documentation somewhere (possibly not on this exact line) expanding on the fact that this object is manipulated within this class and is not returned, so you must point to the same object used in your feeder.
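The pattern being described, with the feeder and updater holding references to one mutable state object, can be sketched in plain Python (hypothetical class, not Stone Soup's GaussianState):

```python
class SharedBiasState:
    """Hypothetical stand-in for GaussianState: mutated in place, never replaced."""
    def __init__(self, state_vector):
        self.state_vector = state_vector

bias_state = SharedBiasState([0.0])
feeder_state = bias_state   # feeder keeps a reference to the same object
updater_state = bias_state  # updater mutates that object's attributes

updater_state.state_vector = [0.7]  # in-place update after processing detections
# The feeder now sees the updated bias without being handed a new object
```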

Comment on lines 145 to 151
delta_orient = state_vector[self.bias_mapping[:3], :]
delta_trans = state_vector[self.bias_mapping[3:], :]
bias_model = copy.copy(self.measurement_model)
bias_model.rotation_offset = bias_model.rotation_offset - delta_orient
bias_model.translation_offset = bias_model.translation_offset - delta_trans
state_vectors.append(bias_model.function(
    State(state_vector[self.state_mapping, :]), noise=noise, **kwargs))
Contributor

Suggested change
-delta_orient = state_vector[self.bias_mapping[:3], :]
-delta_trans = state_vector[self.bias_mapping[3:], :]
-bias_model = copy.copy(self.measurement_model)
-bias_model.rotation_offset = bias_model.rotation_offset - delta_orient
-bias_model.translation_offset = bias_model.translation_offset - delta_trans
-state_vectors.append(bias_model.function(
-    State(state_vector[self.state_mapping, :]), noise=noise, **kwargs))
+delta_orientation = state_vector[self.bias_mapping[:3], :]
+delta_translation = state_vector[self.bias_mapping[3:], :]
+bias_model = copy.copy(self.measurement_model)
+bias_model.rotation_offset = bias_model.rotation_offset - delta_orientation
+bias_model.translation_offset = bias_model.translation_offset - delta_translation
+state_vectors.append(bias_model.function(
+    State(state_vector[self.state_mapping, :]), noise=noise, **kwargs))

Comment on lines 119 to 127
delta_trans = state_vector[self.bias_mapping, :]
bias_model = copy.copy(self.measurement_model)
bias_model.translation_offset = bias_model.translation_offset - delta_trans
state_vectors.append(bias_model.function(
    State(state_vector[self.state_mapping, :]), noise=noise, **kwargs))
if len(state_vectors) == 1:
    return state_vectors[0]
else:
    return StateVectors(state_vectors)
Contributor

Suggested change
-delta_trans = state_vector[self.bias_mapping, :]
-bias_model = copy.copy(self.measurement_model)
-bias_model.translation_offset = bias_model.translation_offset - delta_trans
-state_vectors.append(bias_model.function(
-    State(state_vector[self.state_mapping, :]), noise=noise, **kwargs))
-if len(state_vectors) == 1:
-    return state_vectors[0]
-else:
-    return StateVectors(state_vectors)
+delta_translation = state_vector[self.bias_mapping, :]
+bias_model = copy.copy(self.measurement_model)
+bias_model.translation_offset = bias_model.translation_offset - delta_translation
+state_vectors.append(bias_model.function(
+    State(state_vector[self.state_mapping, :]), noise=noise, **kwargs))
+if len(state_vectors) == 1:
+    return state_vectors[0]
+else:
+    return StateVectors(state_vectors)

Comment on lines 94 to 96
delta_orient = state_vector[self.bias_mapping, :]
bias_model = copy.copy(self.measurement_model)
bias_model.rotation_offset = bias_model.rotation_offset - delta_orient
Contributor

Suggested change
-delta_orient = state_vector[self.bias_mapping, :]
-bias_model = copy.copy(self.measurement_model)
-bias_model.rotation_offset = bias_model.rotation_offset - delta_orient
+delta_orientation = state_vector[self.bias_mapping, :]
+bias_model = copy.copy(self.measurement_model)
+bias_model.rotation_offset = bias_model.rotation_offset - delta_orientation

Comment on lines 68 to 71
delta_t = state_vector[self.bias_mapping[0], 0]
predicted_state_vectors.append(self.transition_model.function(
    State(state_vector[self.state_mapping, :]),
    time_interval=datetime.timedelta(seconds=-delta_t),
Contributor

Suggested change
-delta_t = state_vector[self.bias_mapping[0], 0]
-predicted_state_vectors.append(self.transition_model.function(
-    State(state_vector[self.state_mapping, :]),
-    time_interval=datetime.timedelta(seconds=-delta_t),
+delta_time = state_vector[self.bias_mapping[0], 0]
+predicted_state_vectors.append(self.transition_model.function(
+    State(state_vector[self.state_mapping, :]),
+    time_interval=datetime.timedelta(seconds=-delta_time),
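The negative time_interval here rolls the state back by the estimated timestamp bias. A self-contained sketch with a hypothetical constant-velocity transition (not Stone Soup's transition model API):

```python
import datetime
import numpy as np

def cv_transition(state_vector, time_interval):
    # Hypothetical constant-velocity model over [position, velocity]
    dt = time_interval.total_seconds()
    transition_matrix = np.array([[1., dt], [0., 1.]])
    return transition_matrix @ state_vector

state = np.array([[10.], [2.]])  # position 10 m, velocity 2 m/s
delta_time = 0.5  # estimated timestamp bias in seconds

# Propagate by the negated bias to undo the timing offset:
# position moves back by velocity * delta_time
rolled_back = cv_transition(state, datetime.timedelta(seconds=-delta_time))
```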

# %%
# This example demonstrates how to simulate and estimate a drifting bias in the position of a sensor platform.
# Specifically, the platform at index 0 (and its sensor) will have a time-varying bias applied to its position.
# We use Stone-Soup's bias wrappers and feeders to estimate this changing bias from sensor measurements.
Contributor

Suggested change
-# We use Stone-Soup's bias wrappers and feeders to estimate this changing bias from sensor measurements.
+# We use Stone-Soup's bias wrappers, feeders, and updater to estimate this changing bias from sensor measurements.


# sphinx_gallery_thumbnail_number = 3

plotter = Plotterly(dimension=1, axis_labels=['Bias', 'Time'])
Contributor

Suggested change
-plotter = Plotterly(dimension=1, axis_labels=['Bias', 'Time'])
+plotter = Plotterly(dimension=1, axis_labels=['Time', 'Bias'])

text=[self._format_state_text(state) for state in track],
**scatter_kwargs)

if uncertainty:
Contributor

@gawebb-dstl gawebb-dstl Nov 10, 2025

Nice work on implementing the uncertainty in 1D, it's definitely a needed addition and looks good.

I just wanted to share an alternative approach I’ve used locally: plotting error bars. They give a slightly different visual cue for uncertainty.

Here’s a snippet in case you want to explore it:

                if particle:
                    raise NotImplementedError

                # Create copy of scatter kwargs
                track_scatter_kwargs = {**scatter_kwargs}
                if uncertainty:
                    mapping_0 = mapping[0]
                    err_y = [np.sqrt(state.covar[mapping_0, mapping_0])
                             for state in track.states]

                    track_scatter_kwargs['error_y'] = dict(
                        type='data', # value of error bar given in data coordinates
                        array=err_y,
                        visible=True
                    )

                self.fig.add_scatter(
                    x=[state.timestamp for state in track],
                    y=[float(getattr(state, 'mean', state.state_vector)[mapping[0]])
                       for state in track],
                    text=[self._format_state_text(state) for state in track],
                    **track_scatter_kwargs)

Your current implementation works well, but I thought I'd offer this as another option, as I'd already written the code but hadn't got around to a PR. Let me know what you think.
