- Added a `RepeatedNGramBlockingConstraint` constraint implementation, which allows for preventing repeated n-grams in the output from `BeamSearch` (see the sketch after this list).
- Added `DataCollator` for dynamic operations on each batch (see the sketch after this list).
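
A minimal sketch of wiring the n-gram blocking constraint into `BeamSearch`. The import path and the argument names (`end_index`, `constraints`, `ngram_size`) are assumptions based on the entry above, not a verified API reference.

```python
# Sketch only: argument names and import path are assumed from the changelog entry.
from allennlp.nn.beam_search import BeamSearch, RepeatedNGramBlockingConstraint

beam_search = BeamSearch(
    end_index=2,      # index of your END/EOS token in the target vocabulary
    max_steps=50,
    beam_size=5,
    # Block any trigram from appearing twice in a decoded sequence.
    constraints=[RepeatedNGramBlockingConstraint(ngram_size=3)],
)
```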
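And a hedged sketch of what a custom `DataCollator` might look like; the base-class location, the registered name, and the `__call__` signature are assumptions drawn from the entry above rather than a verified API.

```python
from typing import List

from allennlp.data import Instance
from allennlp.data.data_loaders.data_collator import DataCollator, DefaultDataCollator


@DataCollator.register("drop-one")  # hypothetical registered name, for illustration
class DropOneDataCollator(DataCollator):
    """Toy per-batch dynamic operation: drop the last instance of every batch."""

    def __call__(self, instances: List[Instance]):
        trimmed = instances[:-1] if len(instances) > 1 else instances
        # Defer the actual batching/tensorization to the default collator.
        return DefaultDataCollator()(trimmed)
```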
### Changed
- Use `dist_reduce_sum` in distributed metrics (see the sketch after this list).
- Allow Google Cloud Storage paths in `cached_path` ("gs://..."); see the example after this list.
- Renamed `nn.util.load_state_dict()` to `read_state_dict` to avoid confusion with `torch.nn.Module.load_state_dict()` (see the sketch after this list).
- `TransformerModule.from_pretrained_module` now only accepts a pretrained model ID (e.g. "bert-base-cased") instead of an actual `torch.nn.Module`. Other parameters to this method have changed as well (see the sketch after this list).
- Print the first batch to the console by default.
- Renamed `sanity_checks` to `confidence_checks` (`sanity_checks` is deprecated and will be removed in AllenNLP 3.0).
- Trainer callbacks can now store and restore state in case a training run gets interrupted (see the sketch after this list).
- VilBERT backbone now rolls and unrolls extra dimensions to handle input with > 3 dimensions.
- `BeamSearch` is now a `Registrable` class.
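
For the `dist_reduce_sum` entry above, here is a hedged sketch of how a custom metric might use it; the import path and the behaviour (summing a per-worker value across distributed workers, passing it through unchanged otherwise) are assumptions based on that entry.

```python
from allennlp.nn.util import dist_reduce_sum
from allennlp.training.metrics import Metric


@Metric.register("example-count")  # hypothetical metric, for illustration only
class ExampleCount(Metric):
    """Counts examples seen, aggregating the count across distributed workers."""

    def __init__(self) -> None:
        self._count = 0

    def __call__(self, batch_size: int) -> None:
        # Sum this worker's contribution across all workers (assumed to be a
        # no-op when not running distributed), instead of hand-rolling
        # torch.distributed calls inside the metric.
        self._count += dist_reduce_sum(batch_size)

    def get_metric(self, reset: bool = False) -> float:
        count = float(self._count)
        if reset:
            self.reset()
        return count

    def reset(self) -> None:
        self._count = 0
```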
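A one-line illustration of the new Google Cloud Storage support in `cached_path`; the bucket and object names are placeholders.

```python
from allennlp.common.file_utils import cached_path

# Downloads the object once and returns the local cache path on later calls.
local_weights = cached_path("gs://my-bucket/path/to/model.tar.gz")
```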
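For the `read_state_dict` rename, a minimal usage sketch; it assumes the function takes a path to a serialized checkpoint and returns a plain state-dict mapping, and the path shown is a placeholder.

```python
from allennlp.nn.util import read_state_dict

# Load weights from disk without shadowing torch.nn.Module.load_state_dict().
state = read_state_dict("/path/to/best.th")
# model.load_state_dict(state)  # then apply them to a module as usual
```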
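A sketch of the new `from_pretrained_module` call; the concrete subclass (`TransformerEmbeddings`) is only one plausible example, and extra keyword arguments are omitted since the entry notes that the other parameters changed.

```python
from allennlp.modules.transformer import TransformerEmbeddings

# Build the submodule directly from a pretrained model ID rather than from an
# already-instantiated torch.nn.Module.
embeddings = TransformerEmbeddings.from_pretrained_module("bert-base-cased")
```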
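Finally, a hedged sketch of a trainer callback that persists state across an interrupted run; the `state_dict`/`load_state_dict` hook names and the `on_batch` signature are assumptions based on the entry above.

```python
from typing import Any, Dict

from allennlp.training.callbacks import TrainerCallback


@TrainerCallback.register("running-loss")  # hypothetical callback, for illustration
class RunningLossCallback(TrainerCallback):
    """Tracks a running sum of training losses and survives checkpoint/restore."""

    def __init__(self, serialization_dir: str) -> None:
        super().__init__(serialization_dir)
        self.total_loss = 0.0

    def on_batch(self, trainer, batch_inputs, batch_outputs, batch_metrics,
                 epoch, batch_number, is_training, **kwargs) -> None:
        if is_training:
            self.total_loss += batch_metrics.get("loss", 0.0)

    # Assumed hooks that let the trainer save and restore callback state.
    def state_dict(self) -> Dict[str, Any]:
        return {"total_loss": self.total_loss}

    def load_state_dict(self, state: Dict[str, Any]) -> None:
        self.total_loss = state["total_loss"]
```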
### Fixed
- When `PretrainedTransformerIndexer` folds long sequences, it no longer loses the information from token type ids.
- Fixed `wandb` callback to work in distributed training.
- Fixed `tqdm` logging into multiple files with `allennlp-optuna`.