Release 2.0.0
Breaking API changes
- sim.train and sim.loss now accept a single data argument, which combines the previous inputs and targets arguments. For example, sim.train({my_node: x}, {my_probe: y}, ...) is now equivalent to sim.train({my_node: x, my_probe: y}, ...). The motivation for this change is that not all objective functions require target values. Switching to the more generic data argument simplifies the API and makes it more flexible, allowing users to specify whatever training/loss data is actually required.
- The objective argument in sim.train / sim.loss is now always specified as a dictionary mapping probes to objective functions. This format was supported but optional previously; it was also possible to pass a single value for the objective function, which would be applied to all probes in targets. The latter is no longer supported. For example, sim.train(..., objective="mse") must now be explicitly specified as sim.train(..., objective={my_probe: "mse"}). The motivation for this change is that, especially with the other new features introduced in the 2.0 update, there were a lot of different ways to specify the objective argument. This made it somewhat unclear how exactly this argument worked, and the automatic "broadcasting" was also ambiguous (e.g., should the single objective be applied to each probe individually, or to all of them together?). Making the argument explicit helps clarify the mental model. Both breaking changes are illustrated in the sketch below.
Added
- An integer number of steps can now be passed for the sim.loss / sim.train data argument, if no input/target data is required.
- The objective dict in sim.train / sim.loss can now contain tuples of probes as the keys, in which case the objective function will be called with a corresponding tuple of probe/target values as each argument (see the sketch after this list).
- Added the sim.run_batch function. This exposes all the functionality that the sim.run / sim.train / sim.loss functions are based on, allowing advanced users full control over how to run a NengoDL simulation.
- Added option to disable the progress bar in sim.train and sim.loss.
- Added a training argument to sim.loss to control whether the loss is evaluated in training or inference mode.
- Added support for the new Nengo Transform API (see nengo/nengo#1481).
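A sketch of how the tuple-of-probes objective and the new training flag might be used together, reusing the imports from the sketch above; the network, probes (probe_a, probe_b), target arrays, and joint_mse are all illustrative placeholders:

```python
with nengo.Network() as net_pair:
    my_node = nengo.Node(np.zeros(1))
    ens_a = nengo.Ensemble(10, 1)
    ens_b = nengo.Ensemble(10, 1)
    nengo.Connection(my_node, ens_a)
    nengo.Connection(my_node, ens_b)
    probe_a = nengo.Probe(ens_a)
    probe_b = nengo.Probe(ens_b)

x = np.random.uniform(-1, 1, size=(32, 1, 1))
y_a = 2 * x
y_b = -x

def joint_mse(outputs, targets):
    # with a tuple key (probe_a, probe_b), `outputs` and `targets` are
    # corresponding tuples of per-probe tensors
    return (tf.reduce_mean(tf.square(outputs[0] - targets[0]))
            + tf.reduce_mean(tf.square(outputs[1] - targets[1])))

with nengo_dl.Simulator(net_pair, minibatch_size=32) as sim:
    print(sim.loss(
        {my_node: x, probe_a: y_a, probe_b: y_b},
        objective={(probe_a, probe_b): joint_mse},
        training=True,  # evaluate the loss in training rather than inference mode
    ))
```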
Changed
- Custom objective functions passed to sim.train / sim.loss can now accept a single argument (my_objective(outputs): ... instead of my_objective(outputs, targets): ...) if no target values are required (see the sketch after this list).
- utils.minibatch_generator now accepts a single data argument rather than inputs and targets (see discussion in "Breaking API changes").
- sim.training_step is now the same as tf.train.get_or_create_global_step().
- Switched documentation to new nengo-sphinx-theme.
- Reorganized documentation into "User guide" and "API reference" sections.
- Improved build speed of models with large constants (#69).
- Moved op-specific merge logic into the OpBuilder classes.
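A sketch of the single-argument form, reusing the first toy network above (the one defining my_node and my_probe); since such an objective needs no targets, it also pairs naturally with the integer-steps option from "Added":

```python
def activity_reg(outputs):
    # no target values required, so the objective takes a single argument
    return tf.reduce_mean(tf.square(outputs))

with nengo_dl.Simulator(net, minibatch_size=32) as sim:
    # only input data needs to be provided, since the objective has no targets
    sim.train(
        {my_node: x},
        tf.train.GradientDescentOptimizer(0.1),
        objective={my_probe: activity_reg},
    )

    # or, with no input/target data at all, just pass a number of timesteps
    print(sim.loss(5, objective={my_probe: activity_reg}))
```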
Fixed
- Ensured that the training step is always updated before TensorBoard events are added (previously it could be updated before or after, depending on the platform).
Deprecated
- The sim.run input_feeds argument has been renamed to data (for consistency with other simulator functions).
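For example (continuing with the placeholder my_node, and assuming x_run is an input array shaped appropriately for the run):

```python
# previously: sim.run(1.0, input_feeds={my_node: x_run})
sim.run(1.0, data={my_node: x_run})
```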
Removed
- NengoDL no longer supports Python 2 (see https://python3statement.org/ for more information).