Visualize metrics

We are happy to announce that the first major release of TorchMetrics, v1.0.0, is publicly available. We have
worked hard on a couple of new features for this milestone release, and along the way we have also managed to
implement over 100 metrics in torchmetrics.

Plotting

The big new feature of v1.0 is a built-in plotting feature. As the old saying goes, "a picture is worth a thousand words", and that certainly holds in machine learning.
Metrics are one area that, in many cases, is better showcased in a figure than as a list of floats. The only requirement for getting started with the plotting feature is installing matplotlib: either install with pip install matplotlib or pip install torchmetrics[visual] (the latter option also installs SciencePlots and uses it as the default plotting style).

The basic interface is the same for any metric. Just call the new .plot method:

metric = AnyMetricYouLike()
for i in range(num_updates):
    metric.update(preds[i], target[i])
fig, ax = metric.plot()

The plot method by default does not require any arguments and will automatically call metric.compute internally on
whatever metric states have been accumulated.
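For concreteness, here is a minimal end-to-end sketch, assuming torchmetrics>=1.0 and matplotlib are installed; the choice of BinaryAccuracy, the number of updates, and the random data are illustrative only:

import torch
from torchmetrics.classification import BinaryAccuracy

metric = BinaryAccuracy()
for _ in range(10):
    preds = torch.rand(100)               # predicted probabilities
    target = torch.randint(0, 2, (100,))  # binary ground-truth labels
    metric.update(preds, target)

# .plot() computes on the accumulated state and returns a matplotlib figure and axis
fig, ax = metric.plot()
fig.savefig("binary_accuracy.png")

Saving the figure is optional; in a notebook the returned figure renders inline, and the returned ax can be customized further with the usual matplotlib calls.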

[1.0.0] - 2023-07-04

Added

  • Added prefix and postfix arguments to ClasswiseWrapper (#1866)
  • Added speech-to-reverberation modulation energy ratio (SRMR) metric (#1792, #1872)
  • Added new global arg compute_with_cache to control caching behaviour after compute method (#1754)
  • Added ComplexScaleInvariantSignalNoiseRatio for audio package (#1785)
  • Added Running wrapper for calculating running statistics (#1752); see the sketch after this list
  • Added RelativeAverageSpectralError and RootMeanSquaredErrorUsingSlidingWindow to image package (#816)
  • Added support for SpecificityAtSensitivity Metric (#1432)
  • Added support for plotting of metrics through .plot() method (#1328, #1481, #1480, #1490, #1581, #1585, #1593, #1600, #1605, #1610, #1609, #1621, #1624, #1623, #1638, #1631, #1650, #1639, #1660, #1682, #1786)
  • Added support for plotting of audio metrics through .plot() method (#1434)
  • Added classes to output from MAP metric (#1419)
  • Added Binary group fairness metrics to classification package (#1404)
  • Added MinkowskiDistance to regression package (#1362)
  • Added pairwise_minkowski_distance to pairwise package (#1362)
  • Added new detection metric PanopticQuality (#929, #1527)
  • Added PSNRB metric (#1421)
  • Added ClassificationTask Enum and use in metrics (#1479)
  • Added ignore_index option to exact_match metric (#1540)
  • Add parameter top_k to RetrievalMAP (#1501)
  • Added support for deterministic evaluation on GPU for metrics that use the torch.cumsum operator (#1499)
  • Added support for plotting of aggregation metrics through .plot() method (#1485)
  • Added support for python 3.11 (#1612)
  • Added support for auto clamping of input for metrics that use data_range (#1606)
  • Added ModifiedPanopticQuality metric to detection package (#1627)
  • Added PrecisionAtFixedRecall metric to classification package (#1683)
  • Added multiple metrics to detection package (#1284)
    • IntersectionOverUnion
    • GeneralizedIntersectionOverUnion
    • CompleteIntersectionOverUnion
    • DistanceIntersectionOverUnion
  • Added MultitaskWrapper to wrapper package (#1762)
  • Added RelativeSquaredError metric to regression package (#1765)
  • Added MemorizationInformedFrechetInceptionDistance metric to image package (#1580)
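
To illustrate one of the new additions, below is a minimal sketch of the Running wrapper mentioned above; the window size of 3 and the toy values are assumptions for illustration, not part of the changelog:

import torch
from torchmetrics.aggregation import MeanMetric
from torchmetrics.wrappers import Running

# wrap a base metric so that compute() only reflects the last `window` updates
running_mean = Running(MeanMetric(), window=3)
for value in [1.0, 2.0, 3.0, 4.0]:
    running_mean.update(torch.tensor(value))

# expected result: the mean over the last three updates, i.e. (2 + 3 + 4) / 3 = 3.0
print(running_mean.compute())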

Changed

  • Changed permutation_invariant_training to allow using a 'permutation-wise' metric function (#1794)
  • Changed update_count and update_called from private to public methods (#1370)
  • Raise exception for invalid kwargs in Metric base class (#1427)
  • Extended EnumStr to raise ValueError for invalid values (#1479)
  • Improved speed and memory consumption of binned PrecisionRecallCurve with a large number of samples (#1493)
  • Changed __iter__ method from raising NotImplementedError to TypeError by setting to None (#1538)
  • FID metric will now raise an error if too few samples are provided (#1655)
  • Allowed FID with torch.float64 (#1628)
  • Changed LPIPS implementation to no longer rely on a third-party package (#1575)
  • Changed FID matrix square root calculation from scipy to torch (#1708)
  • Changed calculation in PearsonCorrCoef to be more robust in certain cases (#1729)
  • Changed MeanAveragePrecision to pycocotools backend (#1832)

Deprecated

Removed

  • Support for python 3.7 (#1640)

Fixed

  • Fixed support in MetricTracker for MultioutputWrapper and nested structures (#1608)
  • Fixed restrictive check in PearsonCorrCoef (#1649)
  • Fixed integration with jsonargparse and LightningCLI (#1651)
  • Fixed corner case in calibration error for zero confidence input (#1648)
  • Fixed precision-recall-curve-based computations for float targets (#1642)
  • Fixed missing kwarg squeeze in MultiOutputWrapper (#1675)
  • Fixed padding removal for 3d input in MSSSIM (#1674)
  • Fixed max_det_threshold in MAP detection (#1712)
  • Fixed states being saved in metrics that use register_buffer (#1728)
  • Fixed states not being correctly synced and device transferred in MeanAveragePrecision for iou_type="segm" (#1763)
  • Fixed use of prefix and postfix in nested MetricCollection (#1773)
  • Fixed ax plotting logging in MetricCollection (#1783)
  • Fixed lookup for punkt sources being downloaded in RougeScore (#1789)
  • Fixed integration with lightning for CompositionalMetric (#1761)
  • Fixed several bugs in SpectralDistortionIndex metric (#1808)
  • Fixed bug for corner cases in MatthewsCorrCoef (#1812, #1863)
  • Fixed support for half precision in PearsonCorrCoef (#1819)
  • Fixed a number of bugs related to average="macro" in classification metrics (#1821)
  • Fixed off-by-one issue when ignore_index = num_classes + 1 in MulticlassJaccardIndex (#1860)

Contributors

@alexkrz, @AndresAlgaba, @basveeling, @Bomme, @Borda, @Callidior, @clueless-skywatcher, @Dibz15, @EPronovost, @fkroeber, @ItamarChinn, @marcocaccin, @martinmeinke, @niberger, @Piyush-97, @quancs, @relativityhd, @shenoynikhil, @shhs29, @SkafteNicki, @soma2000-lang, @srishti-git1110, @stancld, @twsl, @ValerianRey, @venomouscyanide, @wbeardall

If we forgot someone due to not matching commit email with GitHub account, let us know :]