
Releases: pytorch/tensordict

v0.2.1

26 Oct 20:57
c3caa76

What's Changed

Full Changelog: v0.2.0...v0.2.1

0.2.0

05 Oct 06:54

New features

What's Changed


v0.1.2

09 May 15:38
8913d81

What's Changed

Full Changelog: v0.1.1...v0.1.2

v0.1.1

06 May 21:08
f60e078

What's Changed

  • [CI] Added workflow to let contributors self-assign issues by @sugatoray in #281
  • [BugFix] Fix reshape with non-expanded sizes by @vmoens in #283
  • [BugFix] Fix reshape with empty shape by @vmoens in #284
  • [BugFix] Improve utils (pad_sequence and make_tensordict) by @vmoens in #285
  • [BugFix] make_tensordict batch-size with tuple keys by @vmoens in #286
  • [BugFix] Fix memmap ownership to make it process-wise and allow for indexed memmap persistance by @vmoens in #288
  • [Refactor] Deprecate set_default by @tcbegley in #236
  • [BugFix] Fix get_functional and functional call with stateful envs by @vmoens in #287
  • [BugFix] Fix irecv for lazy tensordicts by @vmoens in #274
  • [Feature] h5 compatibility by @vmoens in #289
  • [Test, Refactor, Doc] add explicit test on set + remove hardcoded values by @apbard in #294
  • [Refactor] Make TensorDictBase available at root by @vmoens in #295
  • [Test, BugFix] execute h5 tests only if h5py is installed by @apbard in #298
  • [Refactor] Some improvement on modules by @vmoens in #296
  • [BugFix] Remove functionalized check in _decorate_funs by @vmoens in #300
  • [BugFix] deprecate CLASSES_DICT and _get_typed_output by @apbard in #299
  • [BugFix] add set/get and set_at,get_at methods to tensorclass by @apbard in #293
  • [BugFix] Fix function signature by @vmoens in #304
  • [Refactor] Faster functional module by @vmoens in #303
  • [Feature] forward getattr to wrapped module by @apbard in #290
  • [Feature] support tensorclasses in call by @apbard in #291
  • [Test] consolidate test_tensorclass and test_tensorclass_nofuture by @apbard in #302
  • [Test, Validation] Validate input model and add tests on input checks by @apbard in #305
  • [BugFix] Fix sequential calls to make_functional by @vmoens in #306
  • [Refactor] avoid adding _TENSORCLASS flag by @apbard in #301
  • [BugFix] Fix slow functional calls by @vmoens in #309
  • [BugFix] Fix deepcopy in benchmarks by @vmoens in #310
  • [BugFix] Fix sub-stack of td modules by @vmoens in #311
  • [Feature] Promote tensorclass by @vmoens in #307
  • [Test] Increase timeout for distributed and memmap tests by @apbard in #312
  • [BugFix] dispatch with empty batch-size by @vmoens in #315
  • [BugFix] Fix __setitem__ with broadcasting of tensordicts by @vmoens in #316
  • [BugFix] Allow for optional disabling of auto-batch size determination in dispatch by @vmoens in #317
  • [Refactor] td.set and sampling efficiency by @vmoens in #318
  • [CI] RL pipeline by @vmoens in #319
  • [BugFix] Fix sub-tensordict indexing and updating by @vmoens in #320
  • [Benchmark] More item benchmarks by @vmoens in #323
  • [Refactor] Use slots for faster creation by @vmoens in #321
  • [CI] Continuous benchmark trigger by @vmoens in #325
  • [CI] Continuous benchmark trigger (2) by @vmoens in #326
  • [Refactor] No check on batch-size when _run_checks=False by @vmoens in #322
  • [BugFix,CI] Codecov SHA error by @vmoens in #330
  • [Doc] Updated Docs with conda installation instruction by @sugatoray in #329
  • [Refactor] Compatibility with np.bool_ by @vmoens in #331
  • Deprecate interaction_mode with interaction_type by @Goldspear in #332
  • [CI] add benchmark test under regular pipeline by @apbard in #327
  • [Refactor] Make NormalParamExtractor available at tensordict.nn level by @vmoens in #334
  • [Refactor] Introduce InteractoinType Enum by @Goldspear in #333
  • [Feature] Recursive key selection for sequences by @vmoens in #335
  • [BugFix] nested tds in persistent tds may have the wrong batch-size by @vmoens in #336
  • [Refactor] TensorDictModuleBase by @vmoens in #337
  • [Minor] Doc and vmap fixes by @vmoens in #338
  • [Feature] Close for h5 tds by @vmoens in #339
  • [Benchmark] TDModule benchmarks by @vmoens in #343
  • [BugFix] Key checks in TensorDictSequential by @tcbegley in #340
  • [Feature] set_skip_existing and related by @vmoens in #342
  • [Refactor] copy _contextlib by @vmoens in #344
  • [BugFix] Add dispatch decorator to probabilistic modules by @tcbegley in #345
  • [BugFix] Add sample_log_prob to out_keys when return_log_prob=True by @tcbegley in #346
  • [BugFix] Fix missing "sample_log_prob" when no sample is needed by @vmoens in #347
  • [Doc] Fix doc workflow by @vmoens in #348
  • [Feature] select_out_keys by @vmoens in #350
  • [BugFix] Fix ModuleBase __new__ attribute and property creation by @vmoens in #353
  • [Feature] tensordict.flatten by @vmoens in #354
  • [BugFix] Fix none indexing by @vmoens in #357
  • [Feature] Named dims by @vmoens in #356
  • [BugFix] Fixing set_at_ with names by @vmoens in #359
  • [BugFix] Changing tensordict batch size with names by @vmoens in #360
  • [BugFix] Populate tensordict without names by @vmoens in #361
  • [BugFix] Fix nested names, to(device) names and other bugs by @vmoens in #362
  • [Refactor] Upgrade vmap imports by @vmoens in #308
  • [Feature] as_tensor by @vmoens in #363
  • [BugFix] Fix contiguous names by @vmoens in #364
  • [CI] Upgrade ubuntu version in GHA by @vmoens in #365
  • [Feature] tensordist.reduce by @vmoens in #366
  • [BugFix] Assigning None to names in lazy stacked td by @vmoens in #367
  • [Feature] Modules that output dicts by @vmoens in #368
  • [BugFix] Fix functional calls by @vmoens in #369
  • [BugFix] Fix functional check for non TensorDictModuleBase modules by @vmoens in #370
  • [BugFix] Fix pop for stacked tds by @vmoens in #371
  • [Feature] Keep dimension names in vmap by @vmoens in #372
  • [Versioning] v0.1.1 by @vmoens in #373

New Contributors

Full Changelog: 0.1.0...v0.1.1

0.1.0 [Beta]

16 Mar 12:20
5c277af

First official release of tensordict!

What's Changed

Full Changelog: 0.0.3...v0.1.0

0.0.3

08 Mar 21:02

What's Changed

  • [BugFix] tensordict.set(nested_key, value) points to the wrong metadata dict by @vmoens in #40
  • [Feature] nested LazyStack indexing by @vmoens in #42
  • [Feature] Support nested keys in select method by @tcbegley in #39
  • [Feature] Support nested keys in exclude method by @tcbegley in #41
  • [Feature] non-strict select by @vmoens in #44
  • [BugFix] Use GitHub for flake8 pre-commit hook by @tcbegley in #47
  • [BugFix] Accept nested dicts for update by @vmoens in #46
  • [BugFix] Fix exclude by @vmoens in #50
  • [Feature] Nested keys support for set_default by @tcbegley in #45
  • [Feature] TensorDictSequential nested keys by @tcbegley in #49
  • [BugFix] Typo in README by @tcbegley in #53
  • [Feature] Faster construction by @vmoens in #54
  • [BugFix] Nested tensordict collision by @khundman in #51
  • [BugFix] Fix memmap creation for nested TDs by @vmoens in #57
  • [BugFix] Fix locking mechanism by @vmoens in #58
  • [Test] test that cloned locked TDs aren't locked by @vmoens in #59
  • [Bugfix] Fix docs build workflow by @tcbegley in #61
  • [BugFix] Exclude potentially top-level packages from setup.py by @vmoens in #63
  • [Doc] Doc Badge by @vmoens in #64
  • [BugFix] Default version to None if not found by @vmoens in #65
  • [NOMERGE] Migrate TorchRL to tensordict.nn.TensorDictModule by @tcbegley in #66
  • [Formatting] FBCode formatting by @vmoens in #67
  • [Feature] TorchRec support: indexing keyedjaggedtensors by @vmoens in #60
  • [BugFix] Optional functorch dependency by @vmoens in #68
  • [Feature] TensorDict.split method by @wonnor-pro in #56
  • [Doc, Test] Some more formatting for FBCode by @vmoens in #71
  • [Formatting] Final FBCode formatting by @vmoens in #72
  • [Doc] Some more badges by @vmoens in #69
  • [Formatting] ignore formatting commits by @vmoens in #73
  • [BugFix] Mutable default arguments by @vmoens in #74
  • [Formatting] Minor formatting improvements by @vmoens in #75
  • [BugFix] functorch fixes for old-deps by @tcbegley in #76
  • [Formatting] Fix expand by @vmoens in #78
  • Revert "[Formatting] Fix expand" by @vmoens in #79
  • [Formatting] Fix expand by @vmoens in #80
  • [BugFix] TensorDictSequential inheritance fix by @tcbegley in #81
  • [BugFix] TensorDictSequential inheritance fix by @tcbegley in #82
  • [Feature] In-place functionalization by @vmoens in #11
  • [BugFix] Ensure that all modules are visited during module population by @vmoens in #83
  • [Feature] Add leaves_only kwarg to keys / values / items methods by @tcbegley in #84
  • [Perf] Faster functional modules by @vmoens in #89
  • [Feature] Benchmarks by @vmoens in #90
  • [Feature] Lazystack insert / append by @tcbegley in #85
  • Revert "[Feature] Benchmarks" by @vmoens in #91
  • [Feature] Create a TensorDict from kwargs in TensorDictModule.forward() by @alexanderlobov in #87
  • [Docs] Add TensorDict overview to docs by @tcbegley in #93
  • [Feature] Benchmarks by @vmoens in #92
  • [Versioning] v0.0.1b by @vmoens in #95
  • docs: fix esemble -> ensemble typo by @rmax in #96
  • [BugFix] Functorch fix by @vmoens in #103
  • [Feature] Refactor probabilistic module wrapper by @tcbegley in #104
  • [BugFix] patched nn.Module deserialization by @vmoens in #108
  • [Feature] Prototype of dataclass like interface for TensorDict by @roccajoseph in #107
  • [Feature] Add TensorDict.pop() by @salaxieb in #111
  • [Feature] Allow for tensorclass construction with unnamed args by @vmoens in #112
  • [Refactor] Refactor dispatch_kwargs for easier usage by @vmoens in #113
  • [Feature] Add metaclass for TensorClass instances by @tcbegley in #114
  • [Feature] Replace ProbablilisticTensorDictModule with prototype by @tcbegley in #109
  • [Refactor] Speedup select method by @vmoens in #120
  • [Typo] fist -> first by @tcbegley in #123
  • [Feature] Tensorclass device by @tcbegley in #122
  • [Feature] Tensorclass updates by @tcbegley in #124
  • [BugFix] Fix items_meta, values_meta by @tcbegley in #125
  • [Refactor] Various speed improvements by @vmoens in #121
  • [BugFix] TorchRec test failure by @vmoens in #126
  • [BugFix] Fix functorch vmap imports by @vmoens in #127
  • [BugFix] Fix functorch imports by @vmoens in #128
  • [Refactor] Remove the unsqueeze for tensors that match the tensordict in shape by @vmoens in #115
  • [BugFix] Fix functorch test import by @vmoens in #129
  • [Refactor] torch.cat with destination td refactoring by @vmoens in #130
  • [BugFix] Fix test_cat device error by @vmoens in #131
  • [BugFix] add skipif not _has_functorch by @apbard in #133
  • [Formatting] Fix F401 lint advisory by @apbard in #134
  • [Feature] Stacking tensors of different shape by @vmoens in #135
  • [BugFix] LazyStackedTensorDict indexing along stack_dim by @tcbegley in #138
  • [Feature] Support range in indexing operations by @tcbegley in #139
  • [BugFix] improving select for LazyStackedTD by @vmoens in #137
  • [BugFix] Unbind for lazy stacked TD by @vmoens in #140
  • [BugFix] Stack dimension indexing by @tcbegley in #141
  • [BugFix] Fix LazyStackedTensorDict.update for LazyStackedTensorDict -> LazyStackedTensorDict by @vmoens in #142
  • [BugFix] Prevent calls to get_nestedtensor when stack_dim is not 0 by @tcbegley in #143
  • [Doc] Some more doc for tensorclass by @vmoens in #136
  • [Versioning] Version 0.0.1c by @vmoens in #144
  • [Refactor] Remove prototype import patch by @tcbegley in #117
  • [Feature] Add codecov checks by @tcbegley in #86
  • [BugFix] Add len method to tensorclass by @tcbegley in #150
  • [Doc] Tutorials with sphinx-gallery by @vmoens in #147
  • [Doc] More tutorials: ImageNet by @vmoens in #152
  • [Doc] Adding external link badges to the documentation by @se-yi in #156
  • [Doc] Badges by @vmoens in #158
  • [Docs] Typo and formatting in overview example by @tcbegley in ht...

v0.0.2-beta

11 Feb 10:29
55d08e5
Pre-release

What's Changed


v0.0.2-alpha

23 Jan 21:01
Pre-release

What's Changed


v0.0.1-gamma

03 Jan 20:23
Pre-release

Summary

  • Introduces a new @tensorclass prototype.
  • New features for lazy stacked tensordicts, such as insert and append, as well as support for nested tensors.
  • Faster code execution for most features.
  • In-place functionalization.

What's Changed

New Contributors


v0.0.1b alpha release

30 Nov 23:59
Pre-release

OVERVIEW

TensorDict makes it easy to organise data and write reusable, generic PyTorch code. Originally developed for TorchRL, it has since been spun out into a separate library.

TensorDict is primarily a dictionary, but also a tensor-like class: it supports multiple tensor operations that are mostly shape- and storage-related. It is designed to be efficiently serialised or transmitted from node to node or from process to process. Finally, it ships with its own tensordict.nn module, which is compatible with functorch and aims to make model ensembling and parameter manipulation easier.

On this page we will motivate TensorDict and give some examples of what it can do.

Motivation

TensorDict allows you to write generic code modules that are re-usable across paradigms. For instance, the following loop can be re-used across most SL, SSL, UL and RL tasks.

>>> for i, tensordict in enumerate(dataset):
...     # the model reads and writes tensordicts
...     tensordict = model(tensordict)
...     loss = loss_module(tensordict)
...     loss.backward()
...     optimizer.step()
...     optimizer.zero_grad()

With its tensordict.nn module, the package provides many tools to use TensorDict in a code base with little or no effort.

In multiprocessing or distributed settings, tensordict allows you to seamlessly dispatch data to each worker:

>>> # creates batches of 10 datapoints
>>> splits = torch.arange(tensordict.shape[0]).split(10)
>>> for worker in range(workers):
...     idx = splits[worker]
...     pipe[worker].send(tensordict[idx])

Some operations offered by TensorDict can also be done with tree_map, but at the cost of extra complexity:

>>> td = TensorDict(
...     {"a": torch.randn(3, 11), "b": torch.randn(3, 3)}, batch_size=3
... )
>>> regular_dict = {"a": td["a"], "b": td["b"]}
>>> td0, td1, td2 = td.unbind(0)
>>> # similar structure with pytree
>>> from torch.utils._pytree import tree_map
>>> regular_dicts = tree_map(lambda x: x.unbind(0), regular_dict)
>>> regular_dict1, regular_dict2, regular_dict3 = [
...     {"a": regular_dicts["a"][i], "b": regular_dicts["b"][i]}
...     for i in range(3)]

The nested case is even more compelling:

>>> td = TensorDict(
...     {"a": {"c": torch.randn(3, 11)}, "b": torch.randn(3, 3)}, batch_size=3
... )
>>> regular_dict = {"a": {"c": td["a", "c"]}, "b": td["b"]}
>>> td0, td1, td2 = td.unbind(0)
>>> # similar structure with pytree
>>> regular_dicts = tree_map(lambda x: x.unbind(0), regular_dict)
>>> regular_dict1, regular_dict2, regular_dict3 = [
...     {"a": {"c": regular_dicts["a"]["c"][i]}, "b": regular_dicts["b"][i]}
...     for i in range(3)]

Decomposing the output dictionary into three similarly structured dictionaries after the unbind operation quickly becomes cumbersome when working naively with pytree. With tensordict, we instead provide a simple API for users who want to unbind or split nested structures, without having to rebuild the nested result by hand.

Features

A TensorDict is a dict-like container for tensors. To instantiate a TensorDict, you must specify key-value pairs as well as the batch size. The leading dimensions of any values in the TensorDict must be compatible with the batch size.

>>> import torch
>>> from tensordict import TensorDict
>>> tensordict = TensorDict(
...     {"zeros": torch.zeros(2, 3, 4), "ones": torch.ones(2, 3, 4, 5)},
...     batch_size=[2, 3],
... )

The syntax for setting or retrieving values is much like that for a regular dictionary.

>>> zeros = tensordict["zeros"]
>>> tensordict["twos"] = 2 * torch.ones(2, 3)

One can also index a tensordict along its batch dimensions, which makes it possible to obtain congruent slices of data in just a few characters (indexing the n leading dimensions with tree_map using an ellipsis would require noticeably more code):

>>> sub_tensordict = tensordict[..., :2]

One can also use the set method with inplace=True or the set_ method to do inplace updates of the contents. The former is a fault-tolerant version of the latter: if no matching key is found, it will write a new one.
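
For instance, reusing the tensordict defined above, a short sketch of both variants (the values here are arbitrary):

>>> # in-place write; creates the entry if "zeros" were missing
>>> tensordict.set("zeros", torch.zeros(2, 3, 4), inplace=True)
>>> # strict in-place write; errors if "ones" does not already exist
>>> tensordict.set_("ones", torch.ones(2, 3, 4, 5))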

The contents of the TensorDict can now be manipulated collectively. For example, to place all of the contents onto a particular device one can simply do

>>> tensordict = tensordict.to("cuda:0")

To reshape the batch dimensions one can do

>>> tensordict = tensordict.reshape(6)

The class supports many other operations, including squeeze, unsqueeze, view, permute, unbind, stack, cat and many more. If an operation is not available directly, the TensorDict.apply method usually offers a way to express it.
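
As a sketch, apply runs a callable over every leaf tensor and returns a new TensorDict with the same structure (the transformation here is arbitrary):

>>> doubled = tensordict.apply(lambda x: x * 2)  # applied to every tensor in the TensorDict
>>> assert (doubled["ones"] == 2).all()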

Nested TensorDicts

The values in a TensorDict can themselves be TensorDicts (the nested dictionaries in the example below will be converted to nested TensorDicts).

>>> tensordict = TensorDict(
...     {
...         "inputs": {
...             "image": torch.rand(100, 28, 28),
...             "mask": torch.randint(2, (100, 28, 28), dtype=torch.uint8)
...         },
...         "outputs": {"logits": torch.randn(100, 10)},
...     },
...     batch_size=[100],
... )

Accessing or setting nested keys can be done with tuples of strings

>>> image = tensordict["inputs", "image"]
>>> logits = tensordict.get(("outputs", "logits"))  # alternative way to access
>>> tensordict["outputs", "probabilities"] = torch.sigmoid(logits)

Lazy evaluation

Some operations on TensorDict defer execution until items are accessed. For example stacking, squeezing, unsqueezing, permuting batch dimensions and creating a view are not executed immediately on all the contents of the TensorDict. Instead they are performed lazily when values in the TensorDict are accessed. This can save a lot of unnecessary calculation should the TensorDict contain many values.

>>> tensordicts = [TensorDict({
...     "a": torch.rand(10),
...     "b": torch.rand(10, 1000, 1000)}, [10])
...     for _ in range(3)]
>>> stacked = torch.stack(tensordicts, 0)  # no stacking happens here
>>> stacked_a = stacked["a"]  # we stack the a values, b values are not stacked

It also has the advantage that we can manipulate the original tensordicts in a stack:

>>> stacked["a"] = torch.zeros_like(stacked["a"])
>>> assert (tensordicts[0]["a"] == 0).all()

The caveat is that the get method has now become an expensive operation which, if repeated many times, may cause some overhead. One can avoid this by simply calling tensordict.contiguous() after stacking. To further mitigate this, TensorDict comes with its own metadata class (MetaTensor) that keeps track of the type, shape, dtype and device of each entry of the dict without performing the expensive operation.
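
A minimal sketch of the contiguous() approach, continuing the stacked example above:

>>> stacked = stacked.contiguous()  # materialises the lazy stack into a regular TensorDict
>>> stacked_b = stacked["b"]        # subsequent gets are now plain lookups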

Lazy pre-allocation

Suppose we have some function foo() -> TensorDict and that we do something like the following:

>>> tensordict = TensorDict({}, batch_size=[N])
>>> for i in range(N):
...     tensordict[i] = foo()

When i == 0, the empty TensorDict is automatically populated with empty tensors of batch size N. In subsequent iterations of the loop, the updates are all written in-place.
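
As a concrete sketch of this behaviour (foo and its keys are illustrative assumptions):

>>> N = 10
>>> def foo():
...     return TensorDict({"x": torch.randn(4), "y": torch.zeros(3, 2)}, batch_size=[])
>>> tensordict = TensorDict({}, batch_size=[N])
>>> for i in range(N):
...     tensordict[i] = foo()
>>> tensordict["x"].shape  # the first assignment pre-allocated a (10, 4) tensor
torch.Size([10, 4])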

TensorDictModule

To make it easy to integrate TensorDict in one’s code base, we provide a tensordict.nn package that allows users to pass TensorDict instances to nn.Module objects.

TensorDictModule wraps nn.Module and accepts a single TensorDict as an input. You can specify where the underlying module should take its input from, and where it should write its output. This is a key reason we can write reusable, generic high-level code such as the training loop in the motivation section.

>>> from torch import nn
>>> from tensordict.nn import TensorDictModule
>>> class Net(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.linear = nn.LazyLinear(1)
...
...     def forward(self, x):
...         logits = self.linear(x)
...         return logits, torch.sigmoid(logits)
>>> module = TensorDictModule(
...     Net(),
...     in_keys=["input"],
...     out_keys=[("outputs", "logits"), ("outputs", "probabilities")],
... )
>>> tensordict = TensorDict({"input": torch.randn(32, 100)}, [32])
>>> tensordict = module(tensordict)
>>> # outputs can now be retrieved from the tensordict
>>> logits = tensordict["outputs", "logits"]
>>> probabilities = tensordict.get(("outputs", "probabilities"))

To facilitate the adoption of this class, one can also pass the tensors as kwargs:

>>> tensordict = module(input=torch.randn(32, 100))

which returns a TensorDict identical to the one in the previous code block.

A key pain point for many PyTorch users is the inability of nn.Sequential to handle modules with multiple inputs. Working with key-based graphs solves that problem, as each node in the sequence knows what data it needs to read and where to write its output.

For this purpose, we provide the TensorDictSequential class, which passes data through a sequence of TensorDictModules. Each module in the sequence takes its input from, and writes its output to, the original TensorDict, meaning that modules in the sequence can ignore the output of their predecessors or take additional input from the tensordict as necessary. Here's an example.

>>> class Net(nn.Module):
...     def __init__(self, input_size=100, hidden_size=50, output_size=10):
...         super().__init__()
...         self.fc1 = nn.Linear(input_size, hidden_size)
...         self.fc2 = nn.Linear(hidden_size, output_size)
...
...     def forward(self, x):
...         x = torch.relu(self.fc1(x))
...         return self.fc2(x)
>>> class Masker(nn.Module):
...     def forward(self, x, mask):
...         return torch.so...
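
For completeness, a minimal, self-contained sketch of chaining two modules with TensorDictSequential (the module structure, key names and sizes below are illustrative assumptions):

>>> import torch
>>> from torch import nn
>>> from tensordict import TensorDict
>>> from tensordict.nn import TensorDictModule, TensorDictSequential
>>> backbone = TensorDictModule(
...     nn.Linear(100, 50), in_keys=["input"], out_keys=["hidden"]
... )
>>> head = TensorDictModule(
...     nn.Linear(50, 10), in_keys=["hidden"], out_keys=["logits"]
... )
>>> model = TensorDictSequential(backbone, head)
>>> tensordict = TensorDict({"input": torch.randn(32, 100)}, batch_size=[32])
>>> tensordict = model(tensordict)  # intermediate and final keys are both written to the tensordict
>>> logits = tensordict["logits"]
>>> hidden = tensordict["hidden"]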