This repository was archived by the owner on Dec 16, 2022. It is now read-only.

Commit 1e365b1

Prepare for release v2.5.0
1 parent b92fd9a commit 1e365b1

File tree

1 file changed: +15 −12 lines changed


CHANGELOG.md

Lines changed: 15 additions & 12 deletions
@@ -8,18 +8,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ## Unreleased
 
 
-### Changed
-
-- Use `dist_reduce_sum` in distributed metrics.
-- Allow Google Cloud Storage paths in `cached_path` ("gs://...").
-- Renamed `nn.util.load_state_dict()` to `read_state_dict` to avoid confusion with `torch.nn.Module.load_state_dict()`.
-- `TransformerModule.from_pretrained_module` now only accepts a pretrained model ID (e.g. "bert-base-cased") instead of
-  an actual `torch.nn.Module`. Other parameters to this method have changed as well.
-- Print the first batch to the console by default.
-- Renamed `sanity_checks` to `confidence_checks` (`sanity_checks` is deprecated and will be removed in AllenNLP 3.0).
-- Trainer callbacks can now store and restore state in case a training run gets interrupted.
-- VilBERT backbone now rolls and unrolls extra dimensions to handle input with > 3 dimensions.
-- `BeamSearch` is now a `Registrable` class.
+## [v2.5.0](https://github.com/allenai/allennlp/releases/tag/v2.5.0) - 2021-06-03
 
 ### Added
 
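One item in the "Changed" list above deserves a gloss: the VilBERT backbone "rolls and unrolls extra dimensions" by collapsing any leading extra dimensions into the batch dimension before running, then restoring them afterwards. A minimal pure-Python sketch of that reshape round-trip (no torch; the function names `roll` and `unroll` are illustrative, not AllenNLP's actual API):

```python
def roll(batch):
    """Collapse the first two dimensions: (b, n, *rest) -> (b*n, *rest).

    `batch` is a nested list; returns the flattened list plus the inner
    size `n` needed to undo the operation later.
    """
    n = len(batch[0])
    flat = [item for group in batch for item in group]
    return flat, n


def unroll(flat, n):
    """Restore the collapsed dimension: (b*n, *rest) -> (b, n, *rest)."""
    return [flat[i:i + n] for i in range(0, len(flat), n)]


# A backbone that only handles (batch, features) input can be wrapped like this:
inputs = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]      # shape (2, 2, 2)
flat, n = roll(inputs)                             # shape (4, 2)
outputs = [[x * 10 for x in row] for row in flat]  # per-row computation
restored = unroll(outputs, n)                      # back to (2, 2, 2)
```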

@@ -44,6 +33,19 @@ on a downstream task.
   along with a `RepeatedNGramBlockingConstraint` constraint implementation, which allows for preventing repeated n-grams in the output from `BeamSearch`.
 - Added `DataCollator` for dynamic operations for each batch.
 
+### Changed
+
+- Use `dist_reduce_sum` in distributed metrics.
+- Allow Google Cloud Storage paths in `cached_path` ("gs://...").
+- Renamed `nn.util.load_state_dict()` to `read_state_dict` to avoid confusion with `torch.nn.Module.load_state_dict()`.
+- `TransformerModule.from_pretrained_module` now only accepts a pretrained model ID (e.g. "bert-base-cased") instead of
+  an actual `torch.nn.Module`. Other parameters to this method have changed as well.
+- Print the first batch to the console by default.
+- Renamed `sanity_checks` to `confidence_checks` (`sanity_checks` is deprecated and will be removed in AllenNLP 3.0).
+- Trainer callbacks can now store and restore state in case a training run gets interrupted.
+- VilBERT backbone now rolls and unrolls extra dimensions to handle input with > 3 dimensions.
+- `BeamSearch` is now a `Registrable` class.
+
 ### Fixed
 
 - When `PretrainedTransformerIndexer` folds long sequences, it no longer loses the information from token type ids.
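Making `BeamSearch` a `Registrable` class means beam-search implementations can be looked up by name from configuration rather than constructed directly. A minimal sketch of that registry pattern in plain Python (illustrative only; this is not AllenNLP's actual `Registrable` implementation):

```python
class Registrable:
    """Base class whose subclass hierarchies act as name-based registries."""

    _registry = {}  # maps base class -> {name: subclass}

    @classmethod
    def register(cls, name):
        def decorator(subclass):
            cls._registry.setdefault(cls, {})[name] = subclass
            return subclass
        return decorator

    @classmethod
    def by_name(cls, name):
        try:
            return cls._registry[cls][name]
        except KeyError:
            raise KeyError(f"{name!r} is not registered under {cls.__name__}")


class BeamSearch(Registrable):
    """Base class; concrete search strategies register themselves below."""


@BeamSearch.register("beam_search")
class StandardBeamSearch(BeamSearch):
    def __init__(self, beam_size=10):
        self.beam_size = beam_size


# Construct an implementation from a config-style name:
search_cls = BeamSearch.by_name("beam_search")
search = search_cls(beam_size=5)
```

The payoff is that a config file can name the strategy as a string, and new strategies plug in by registering under a new name.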
@@ -56,6 +58,7 @@ on a downstream task.
 - Fixed `wandb` callback to work in distributed training.
 - Fixed `tqdm` logging into multiple files with `allennlp-optuna`.
 
+
 ## [v2.4.0](https://github.com/allenai/allennlp/releases/tag/v2.4.0) - 2021-04-22
 
 ### Added
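Among the changes in this release, the `sanity_checks` → `confidence_checks` rename follows the usual soft-deprecation pattern: the old name keeps working but emits a warning until its scheduled removal in AllenNLP 3.0. A hedged sketch of that pattern in plain Python (the `Trainer` class here is a hypothetical stand-in, not AllenNLP's real trainer):

```python
import warnings


class Trainer:
    """Hypothetical stand-in illustrating the rename pattern only."""

    def __init__(self, confidence_checks=None, sanity_checks=None):
        if sanity_checks is not None:
            warnings.warn(
                "'sanity_checks' is deprecated and will be removed; "
                "use 'confidence_checks' instead.",
                DeprecationWarning,
            )
            # Honor the old argument unless the new one is also given.
            if confidence_checks is None:
                confidence_checks = sanity_checks
        self.confidence_checks = confidence_checks or []


# The old keyword still works, but now warns:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    legacy = Trainer(sanity_checks=["nan_check"])

# The new keyword is the supported spelling:
current = Trainer(confidence_checks=["nan_check"])
```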
