This PR contains the following updates:

| Package | Change | Pending |
| --- | --- | --- |
|  | `==3.0.6` -> `==3.1.3` |  |
| huggingface/accelerate (accelerate) | `==v1.0.1` -> `==1.3.0` |  |
|  | `==0.44.1` -> `==0.45.1` |  |
|  | `==8.1.7` -> `==8.1.8` |  |
|  | `==43.0.1` -> `==43.0.3` |  |
|  | `==3.1.0` -> `==3.2.0` |  |
|  | `==8.15.0` -> `==8.17.1` |  |
|  | `==2.14.0` -> `==2.14.1` |  |
|  | `==0.114.0` -> `==0.115.7` | `0.115.8` |
|  | `==2024.9.0` -> `==2024.12.0` |  |
|  | `~=2.34.0` -> `~=2.38.0` |  |
|  | `==1.65.0` -> `==1.78.0` | `1.79.0` |
|  | `~=2.23.0` -> `~=2.27.2` | `2.27.3` |
|  | `==2.18.2` -> `==2.19.0` |  |
|  | `==5.11.0` -> `==5.13.0` | `5.13.2` |
| (+1) | `==0.2.16` -> `==0.3.15` | `0.3.17` |
| (+1) | `==0.0.2` -> `==0.0.3` |  |
|  | `==2.17.2` -> `==2.20.0` | `2.20.1` |
|  | `==2.1.1` -> `==2.2.2` |  |
|  | `==2.2.2` -> `==2.2.3` |  |
|  | `==v0.13.2` -> `==0.14.0` |  |
|  | `==0.3.2` -> `==0.3.6` |  |
|  | `==6.1.0` -> `==6.1.1` |  |
|  | `==2.9.9` -> `==2.9.10` |  |
|  | `==1.24.10` -> `==1.25.2` |  |
|  | `==1.11.1` -> `==1.13.2` |  |
|  | `==5.0.8` -> `==5.2.1` |  |
|  | `>=0.6,<=0.6.4` -> `>=0.9,<=0.9.3` | `0.9.4` |
|  | `==1.5.1` -> `==1.6.1` |  |
|  | `==1.14.1` -> `==1.15.1` |  |
|  | `==1.16.0` -> `==1.17.0` |  |
|  | `==1.38.0` -> `==1.41.1` |  |
|  | `==2.17.0` -> `==2.18.0` |  |
|  | `==0.0.3` -> `==0.0.5` |  |
|  | `==4.44.2` -> `==4.48.1` | `4.48.2` |
|  | `==v4.46.1` -> `==4.48.1` | `4.48.2` |
|  | `==v0.11.4` -> `==0.13.0` | `0.14.0` |
|  | `==2024.1` -> `==2024.2` |  |
|  | `==0.30.6` -> `==0.34.0` |  |
|  | `==4.7.1` -> `==4.10.4` |  |
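Most of the pins in the table above are exact (`==`); a few use the compatible-release operator (`~=`), and one row uses an explicit range (`>=0.9,<=0.9.3`). A minimal sketch of how these specifiers resolve, using the `packaging` library (an illustration only; this PR does not touch `packaging`):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Exact pin: only this version satisfies it.
print(SpecifierSet("==3.1.3").contains(Version("3.1.3")))        # True

# Compatible release: ~=2.38.0 means >=2.38.0,<2.39.0.
print(SpecifierSet("~=2.38.0").contains(Version("2.38.5")))      # True
print(SpecifierSet("~=2.38.0").contains(Version("2.39.0")))      # False

# Explicit range, as used for the >=0.9,<=0.9.3 row above.
print(SpecifierSet(">=0.9,<=0.9.3").contains(Version("0.9.3")))  # True
print(SpecifierSet(">=0.9,<=0.9.3").contains(Version("0.9.4")))  # False
```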
Warning: Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
Release Notes
huggingface/accelerate (accelerate)
v1.3.0: Bug fixes + Require torch 2.0

Torch 2.0

As it's been ~2 years since torch 2.0 was first released, we are now requiring this as the minimum version for Accelerate, which similarly was done in `transformers` as of its last release.

Core

- `keep_torch_compile` param to `unwrap_model` and `extract_model_from_parallel` for distributed compiled model, by @ggoggam in https://github.com/huggingface/accelerate/pull/3282 (usage sketch after this version's notes)

Big Modeling

Examples

Full Changelog

What's Changed

- `keep_torch_compile` param to `unwrap_model` and `extract_model_from_parallel` for distributed compiled model, by @ggoggam in https://github.com/huggingface/accelerate/pull/3282

New Contributors

Full Changelog: huggingface/accelerate@v1.2.1...v1.3.0
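As a hedged illustration of the `keep_torch_compile` entry above: the sketch below assumes accelerate >= 1.3.0 and a single process, and since the note does not state the flag's default value, it is passed explicitly.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Compile a small model, then let accelerate prepare it.
model = torch.compile(torch.nn.Linear(8, 2))
model = accelerator.prepare(model)

# Per PR #3282, unwrap_model()/extract_model_from_parallel() accept a
# keep_torch_compile flag controlling whether the compiled wrapper is
# preserved or stripped when unwrapping a distributed, compiled model.
unwrapped = accelerator.unwrap_model(model, keep_torch_compile=True)

# With keep_torch_compile=True the compiled wrapper is expected to be kept.
print(type(unwrapped))
```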
v1.2.1: Patch fix

Full Changelog: huggingface/accelerate@v1.2.0...v1.2.1
v1.2.0: Bug Squashing & Fixes across the board

Core

- `find_executable_batch_size` on XPU by @faaany in https://github.com/huggingface/accelerate/pull/3236 (usage sketch after this version's notes)
- `numpy._core` instead of `numpy.core` by @qgallouedec in https://github.com/huggingface/accelerate/pull/3247
- [`data_loader`] Optionally also propagate set_epoch to batch sampler by @tomaarsen in https://github.com/huggingface/accelerate/pull/3246
- `accelerate config` prompt text by @faaany in https://github.com/huggingface/accelerate/pull/3268

Big Modeling

- `align_module_device`, ensure only cpu tensors for `get_state_dict_offloaded_model` by @kylesayrs in https://github.com/huggingface/accelerate/pull/3217
- `get_state_dict_from_offload` by @kylesayrs in https://github.com/huggingface/accelerate/pull/3253
- `preload_module_classes` is lost for nested modules by @wejoncy in https://github.com/huggingface/accelerate/pull/3248

DeepSpeed

Documentation

- Update code in tracking documentation by @faaany in https://github.com/huggingface/accelerate/pull/3235
- Replaced set/check breakpoint with set/check trigger in the troubleshooting documentation by @relh in https://github.com/huggingface/accelerate/pull/3259
- Update set-seed by @faaany in https://github.com/huggingface/accelerate/pull/3228
- Fix typo by @faaany in https://github.com/huggingface/accelerate/pull/3221
- Use real path for `checkpoint` by @faaany in https://github.com/huggingface/accelerate/pull/3220
- Fixed multiple typos for Tutorials and Guides docs by @henryhmko in https://github.com/huggingface/accelerate/pull/3274

New Contributors

Full Changelog

- `align_module_device`, ensure only cpu tensors for `get_state_dict_offloaded_model` by @kylesayrs in https://github.com/huggingface/accelerate/pull/3217
- `find_executable_batch_size` on XPU by @faaany in https://github.com/huggingface/accelerate/pull/3236
- [`data_loader`] Optionally also propagate set_epoch to batch sampler by @tomaarsen in https://github.com/huggingface/accelerate/pull/3246
- `numpy._core` instead of `numpy.core` by @qgallouedec in https://github.com/huggingface/accelerate/pull/3247
- `accelerate config` prompt text by @faaany in https://github.com/huggingface/accelerate/pull/3268
- `get_state_dict_from_offload` by @kylesayrs in https://github.com/huggingface/accelerate/pull/3253
- `preload_module_classes` is lost for nested modules by @wejoncy in https://github.com/huggingface/accelerate/pull/3248
- Use real path for `checkpoint` by @faaany in https://github.com/huggingface/accelerate/pull/3220

Code Diff

Release diff: huggingface/accelerate@v1.1.1...v1.2.0
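The `find_executable_batch_size` entry above (https://github.com/huggingface/accelerate/pull/3236) touches accelerate's out-of-memory retry helper. The sketch below shows typical decorator usage only, not the XPU-specific fix itself; the training body is a placeholder.

```python
from accelerate.utils import find_executable_batch_size

# The decorator reruns the wrapped function with a halved batch size each
# time it raises an out-of-memory error, starting from starting_batch_size.
@find_executable_batch_size(starting_batch_size=64)
def train(batch_size):
    print(f"trying batch_size={batch_size}")
    # ... build dataloaders and run the training loop with `batch_size` ...

train()
```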
v1.1.1
v1.1.0: Python 3.9 minimum, torch dynamo deepspeed support, and bug fixes

Internals:

- `data_seed` argument in https://github.com/huggingface/accelerate/pull/3150
- `weights_only=True` by default for all compatible objects when checkpointing and saving with `torch.save` in https://github.com/huggingface/accelerate/pull/3036 (see the torch.load sketch after this version's notes)
- `dim` input in `pad_across_processes` in https://github.com/huggingface/accelerate/pull/3114

DeepSpeed

Megatron

Big Model Inference

- `has_offloaded_params` utility added in https://github.com/huggingface/accelerate/pull/3188

Examples

Full Changelog

- `dim` input in `pad_across_processes` by @mariusarvinte in https://github.com/huggingface/accelerate/pull/3114
- `data_seed` by @muellerzr in https://github.com/huggingface/accelerate/pull/3150
- `save_model` by @muellerzr in https://github.com/huggingface/accelerate/pull/3146
- `weights_only=True` by default for all compatible objects by @muellerzr in https://github.com/huggingface/accelerate/pull/3036
- `get_xpu_available_memory` by @faaany in https://github.com/huggingface/accelerate/pull/3165
- `has_offloaded_params` by @kylesayrs in https://github.com/huggingface/accelerate/pull/3188
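For the `weights_only=True` entry above, here is a plain-PyTorch sketch of what the flag means when loading a checkpoint; it is context only, not accelerate's internal checkpointing code.

```python
import torch

model = torch.nn.Linear(4, 2)

# Save a plain state dict, then reload it with weights_only=True, which
# refuses to unpickle arbitrary Python objects and therefore only works
# for "compatible objects" such as tensors and simple containers.
torch.save(model.state_dict(), "checkpoint.pt")
state = torch.load("checkpoint.pt", weights_only=True)
model.load_state_dict(state)
```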
Configuration

📅 Schedule: Branch creation - "* 0-3 * * 1" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
This PR was generated by Mend Renovate. View the repository job log.