
Conversation

@raphaelschwinger
Contributor

No description provided.

- Implement ECE based on torchmetrics.BinaryCalibrationError
- Support marginal (class-wise computation of ECE, then averaging), weighted (class-wise, weighted by the number of samples per class), and global (flatten predictions and therefore ignore classes) aggregation; see the sketch after this list
- Top-k selection of classes based on max prediction scores, predicted classes, and target classes
- Tidy up the code by removing unused code
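A minimal sketch of these aggregation modes (not the PR's actual implementation; the function name, the multi-label `(N, C)` input layout, and reading "samples per class" as positive labels per class are assumptions):

```python
import torch
from torchmetrics.functional.classification import binary_calibration_error


def classwise_ece(preds: torch.Tensor, target: torch.Tensor,
                  average: str = "marginal", n_bins: int = 15) -> torch.Tensor:
    """preds: (N, C) scores in [0, 1]; target: (N, C) binary labels."""
    if average == "global":
        # Flatten predictions and therefore ignore class identity.
        return binary_calibration_error(
            preds.flatten(), target.flatten().int(), n_bins=n_bins, norm="l1"
        )

    # One binary ECE per class (the "marginal" view before averaging).
    per_class = torch.stack([
        binary_calibration_error(preds[:, c], target[:, c].int(),
                                 n_bins=n_bins, norm="l1")
        for c in range(preds.shape[1])
    ])
    if average == "marginal":
        # Unweighted mean over classes.
        return per_class.mean()
    if average == "weighted":
        # Weight each class by its number of positive samples.
        weights = target.sum(dim=0).float()
        return (per_class * weights).sum() / weights.sum()
    raise ValueError(f"Unknown average mode: {average}")
```

In this sketch, any top-k selection of classes would happen before calling the function, by slicing `preds` and `target` to the chosen class columns.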

This PR adds config parameters (module.metrics.num_labels and module.mask_logits) to run evaluation without masking, i.e. without restricting the logits to the classes present in the test set.
This makes it possible to aggregate predictions across all BirdSet test sets.
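As an illustration of how such a flag might behave (a sketch with assumed names, not the actual BirdSet evaluation code): with masking enabled the logits are restricted to the test-set classes as before, and with masking disabled the full `num_labels`-wide output is kept so that predictions from different test sets share one label space.

```python
import torch


def prepare_logits(logits: torch.Tensor, test_class_indices: torch.Tensor,
                   mask_logits: bool = True) -> torch.Tensor:
    """Sketch: logits has shape (N, num_labels) over the full label set.

    mask_logits=True  -> restrict scores to the classes in the current test set.
    mask_logits=False -> keep the full label space, so predictions can be
                         aggregated across all BirdSet test sets.
    """
    if mask_logits:
        return logits[:, test_class_indices]
    return logits
```

The config keys named in the PR (`module.metrics.num_labels`, `module.mask_logits`) would presumably feed the label-space size and this masking switch, respectively.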

Bonus:

- Add launch.json for debugging.
- Analyse calibration of ConvNext_BS on BirdSet.
- Add a metric config without metrics for fast computation.