v0.1.4
- Adds support for bfloat16 mixed precision training via fastxtend's `MixedPrecision` callback
- Adds two callback utilities for callback developers:
  - `CallbackScheduler`: a mixin for scheduling callback values during training
  - `LogDispatch`: a new default callback for logging values from callbacks to `WandBCallback` & `TensorBoardCallback`
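The exact `CallbackScheduler` API isn't shown in these notes; as a rough illustration of the idea of scheduling a callback value over training, a minimal mixin might look like the following (all names here are hypothetical, not fastxtend's actual interface):

```python
# Hypothetical sketch of a value-scheduling mixin; not fastxtend's
# actual CallbackScheduler API.

class ScheduleMixin:
    "Mixin providing a linearly scheduled value keyed on training progress."
    def schedule_value(self, start, end, pct):
        # pct is training progress in [0, 1]; clamp to that range
        pct = min(max(pct, 0.0), 1.0)
        return start + (end - start) * pct

class WarmupCallback(ScheduleMixin):
    "Toy callback that warms a multiplier from 0.1 to 1.0 over training."
    def __init__(self, total_steps):
        self.total_steps = total_steps

    def value_at(self, step):
        return self.schedule_value(0.1, 1.0, step / self.total_steps)
```

A callback mixing this in can call `schedule_value` each batch to interpolate any hyperparameter across training.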
- Adds `GradientAccumulation` callback which logs full batches instead of micro-batches
- Adds `GradientAccumulationSchedule` callback which supports batch size warmup via a schedulable accumulation batch size
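Batch size warmup via scheduled accumulation can be sketched as follows. This is a hypothetical illustration of the technique, not `GradientAccumulationSchedule`'s actual API: the effective batch size is linearly warmed toward a target, and the number of micro-batches to accumulate is derived from it.

```python
# Hypothetical sketch of batch size warmup via a scheduled accumulation
# target; not fastxtend's GradientAccumulationSchedule API.

def accum_batches(micro_bs, target_bs, pct, start_bs=None):
    "Micro-batches to accumulate at training progress `pct` (0 to 1)."
    if start_bs is None:
        start_bs = micro_bs
    pct = min(max(pct, 0.0), 1.0)
    # linearly warm the effective batch size from start_bs to target_bs
    eff_bs = start_bs + (target_bs - start_bs) * pct
    # accumulate enough micro-batches to reach the effective batch size
    return max(1, round(eff_bs / micro_bs))
```

For example, with a micro-batch size of 16 and a target of 64, accumulation starts at 1 micro-batch and ends at 4, so the optimizer sees a gradually growing effective batch.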
Full Changelog: v0.1.3...v0.1.4