Commit

Apply suggestions from code review
Co-authored-by: 8bitmp3 <[email protected]>
marcvanzee and 8bitmp3 authored May 18, 2021
1 parent 7788281 commit e54f8e7
Showing 1 changed file with 7 additions and 6 deletions.

CHANGELOG.md
@@ -44,18 +44,19 @@ Other changes:
 - Added an NLP text classification example (on the SST-2 dataset) to
   [`examples/sst2`](https://github.com/google/flax/tree/master/examples/sst2).
   that uses a bidirectional LSTM (BiLSTM) to encode the input text.
-- Added flax.training.train_state to simplifying using Optax optimizers.
+- Added `flax.training.train_state` to simplify using Optax optimizers.
 - `mutable` argument is now available on `Module.init` and `Module.init_with_outputs`
-- Bug Fix: Correctly handle non-default parameters of Linen Modules with nested inheritance.
-- Expose dot_product_attention_weights, allowing access to attention weights.
+- Bug fix: Correctly handle non-default parameters of Linen Modules with nested inheritance.
+- Expose `dot_product_attention_weights`, allowing access to attention weights.
 - `BatchNorm` instances will behave correctly during init when called multiple times.
 - Added a more extensive "how to contribute" guide in `contributing.md`.
-- Add proper cache behavior for lift.jit, fixing cache misses.
+- Add proper cache behavior for [`lift.jit`](https://flax.readthedocs.io/en/latest/_autosummary/flax.linen.jit.html#flax.linen.jit),
+  fixing cache misses.
 - Fix bug in Embed layer: make sure it behaves correctly when embedding is np.array.
-- Fix linen.Module for deep inheritance chains.
+- Fix `linen.Module` for deep inheritance chains.
 - Fix bug in DenseGeneral: correctly expand bias to account for batch & noncontracting dimensions.
 - Allow Flax lifted transforms to work on partially applied Modules.
-- Make MultiOptimizer use apply_gradient instead of apply_param_gradient
+- Make `MultiOptimizer` use `apply_gradient` instead of `apply_param_gradient`.
 
 0.3.3
 ------
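The headline entry in the diff above is the new `flax.training.train_state` helper. As a rough illustration (not part of this commit), here is a minimal sketch of how `TrainState` bundles a model's apply function, its parameters, and an Optax optimizer into one object; the toy model, data, and hyperparameters are invented for the example:

```python
import jax
import jax.numpy as jnp
import optax
from flax import linen as nn
from flax.training import train_state

class MLP(nn.Module):  # hypothetical toy model, not from the commit
    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(16)(x))
        return nn.Dense(1)(x)

model = MLP()
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 4)))["params"]

# TrainState carries apply_fn, params, and the Optax optimizer state together,
# so a training step only has to thread a single object through.
state = train_state.TrainState.create(
    apply_fn=model.apply, params=params, tx=optax.sgd(learning_rate=0.01))

def loss_fn(params, x, y):
    pred = state.apply_fn({"params": params}, x)
    return jnp.mean((pred - y) ** 2)

x, y = jnp.ones((8, 4)), jnp.zeros((8, 1))
grads = jax.grad(loss_fn)(state.params, x, y)
state = state.apply_gradients(grads=grads)  # one optimizer step
```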

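Similarly, the `dot_product_attention_weights` entry exposes the function that computes the softmax-normalized attention weight matrix before it is applied to the values. A hedged sketch, assuming the function is reachable at `flax.linen.attention.dot_product_attention_weights` and takes query/key arrays shaped `[batch, length, num_heads, head_dim]` (shapes and import path are inferred, not taken from this commit's diff):

```python
import jax.numpy as jnp
from flax.linen.attention import dot_product_attention_weights

q = jnp.ones((1, 5, 2, 8))  # [batch, q_length, num_heads, head_dim]
k = jnp.ones((1, 7, 2, 8))  # [batch, kv_length, num_heads, head_dim]

# Returns the attention weights themselves rather than the attended values,
# so they can be inspected or visualized directly.
weights = dot_product_attention_weights(q, k)
print(weights.shape)  # expected: (1, 2, 5, 7) -> [batch, heads, q_len, kv_len]
```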