Releases: bghira/SimpleTuner

v0.4.1 - grumble grumble edition

16 Sep 21:20
11a0c10

What's Changed

  • AWS downloading improvements for speed, plus luminance filtration
  • Small-image fix for batch sizes exceeding the configured value, by @bghira in #141

Full Changelog: v0.4.0...v0.4.1

v0.4.0 - dolphin beach edition

15 Sep 18:42
1cb195a

Changelog

Enhancements:

  • Multi-GPU Support: Added multi-GPU support and various fixes for SD2.1 and SDXL. See PR #88
  • Multi-GPU Support Follow-up: Enhanced support for multi-GPU systems. See PR #92
  • VAE Cache Extension: Extended the VAE Cache to operate across multiple GPUs.
  • SDXL Trainer: Multiple improvements, including correct directory export for the pipeline, enhanced state saving/loading, and various other trainer fixes to enhance multi-GPU training sessions.
  • Sampling:
    • Improved multi-process sampler to efficiently track seen images.
    • Enhanced multi-aspect sampler to remove log_state from the exhausted method.
    • Adjusted the multi-process sampler to remove underfilled buckets, split dataset by process, and reduced log state frequency.
  • Learning Rate: Fixed the learning rate schedule calculation for gradient accumulation steps. See PR #98
  • Weights & Biases Tracker: Fixed project and run names. See PR #100
  • Jupyter Folders: Introduced the PROTECT_JUPYTER_FOLDERS environment variable. If unset, existing Jupyter folders will be removed. See PR #101
  • Validation Options: Introduced options for validation seed randomization and updated OPTIONS. Also added a workaround to fully unload SDXL text encoders. See PR #106
  • CSV/Parquet Support: Added support for mixed folder (CSV and Parquet) uploads to S3 and improved name cleanup for the CSV to S3 uploader. See PR #118, #119, #120, #121, #122, #123
  • Validation UI: Enhanced validation UI to show a progress bar instead of individual image prompts. See PR #111
  • Bug Fixes: Addressed multiple issues, including random seed fixes for multi-GPU systems, corrected epoch/max step count for SDXL trainers, and resolved logging errors.
  • Miscellaneous:
    • Enhanced unit tests for better dataset structure.
    • Improved the filename generation logic with more robust clean-ups.
    • Added an internal epoch step count to transition to the next epoch. See PR #113
    • Addressed mixed tensor batch issues. See PR #112
    • Released updates. See PR #114
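
The learning-rate fix in PR #98 concerns how the scheduler's total step count is derived when gradient accumulation is in play. A minimal sketch of the arithmetic involved (function and variable names are illustrative, not SimpleTuner's actual code):

```python
import math

def scheduler_total_steps(num_samples, batch_size, grad_accum_steps, num_epochs):
    """Number of optimizer updates the LR scheduler must cover.

    With gradient accumulation, an optimizer step happens only once every
    `grad_accum_steps` batches, so the schedule length must be divided by
    that factor or the learning rate decays at the wrong pace.
    """
    batches_per_epoch = math.ceil(num_samples / batch_size)
    updates_per_epoch = math.ceil(batches_per_epoch / grad_accum_steps)
    return updates_per_epoch * num_epochs
```

For example, 10,000 images at batch size 4 with 8 accumulation steps yields 313 optimizer updates per epoch, not 2,500.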

For a complete list of changes, see the Full Changelog.

v0.3.4 - aspect bucketing fix

03 Sep 22:51
00e5ff1

What's Changed

  • Resolves a condition where a single aspect bucket would be sampled until the trainer restarted or the bucket was exhausted.
  • Resolves a condition where a large batch size and a chronically-underfilled bucket could combine to produce a mismatched tensor size.
  • Resolves a condition after training where the model fails to push to the Hugging Face Hub (SD 2.1).
  • Resolves a condition during training and latent caching where the image preparation logic resized an image unnecessarily.
  • Adds a built-in retry mechanism for failed S3 reads and writes
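
The retry behaviour added for S3 reads and writes (PR #76) amounts to a bounded retry loop with backoff. A hedged sketch of the general pattern, not SimpleTuner's actual implementation:

```python
import time

def with_retries(operation, max_attempts=3, base_delay=0.5):
    """Run `operation`, retrying transient failures with exponential backoff.

    Re-raises the last exception if every attempt fails.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise
            # Back off: 0.5s, 1s, 2s, ... before the next attempt.
            time.sleep(base_delay * 2 ** (attempt - 1))
```

A caller would wrap each S3 read or write, e.g. `with_retries(lambda: download(key))`, so a transient network error no longer aborts a training run.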

Pull requests

  • S3DataBackend: Retry reads/writes by default by @bghira in #76
  • SD 2.x: fix the final push_to_hub call, so that it creates the repo before pushing to repo_id by @bghira in #77
  • Fix regression for aspect bucketing sampler randomness by changing bucket after a batch by @bghira in #78
  • MultiaspectImage resizing images unnecessarily by @bghira in #80
  • MultiaspectImage: Fix return value by @bghira in #81
  • add unit test framework by @bghira in #82

Full Changelog: v0.3.3...v0.3.4

v0.3.3 - the return of SD 2.1

02 Sep 05:24
523a945

What's Changed

  • SD 2.x: refactor trainer to use new aspect sampler infrastructure by @bghira in #72
  • Remove legacy aspect bucketing and data sampler code by @bghira in #73
  • Import Xformers with Sd 2.1 by @bghira in #74

Full Changelog: v0.3.2...v0.3.3

v0.3.2 - pytorch dependency fixes

31 Aug 20:16
8021bc3

What's Changed

  • Downgrade Torch to 2.0.1 and fix Triton library requirements by @bghira in #69

Full Changelog: v0.3.1...v0.3.2

v0.3.1 - resolution bugfix

31 Aug 04:51
e6c777e

This release is an important update: VAE-cached latents were previously forced to 1024px.

  • Documentation: 578c49c, 39aac5c, 767fc75
  • Update env example to fix terminal SNR parameters and reorganise it: 03e82fc
  • Validation resolution can now be set separately: 167ab10
  • AWS: strip the bucket name from the file list: 821e3bf
  • Update S3 downloader for better threshold defaults: 1fde1e9
  • csv_to_s3: similarity bump: 9058e14
  • VAECache: shuffle samples to allow multiple machines to assist in caching: 5f91f81
  • Revert Ubuntu script changes: a4e1cf8
  • Updates: 36c47ff
  • MultiAspectSampler: do not resize to 1024px unconditionally: 8d554d9, d446636
  • train_sdxl: fix typo in log line: 401cf82
  • VAECache: fix init with the resolution arg; image latents were always being resized to 1024px.

v0.3.0 - Cloudy edition

28 Aug 20:56
7d04f3f

This version of the software requires re-seeding the cache directories.
It is incompatible with previous versions, and swapping between versions for training runs will be difficult.

Changelog:

🌟 New Features:

  • Introducing DataBackend:
    • A powerful new abstraction that brings flexibility to your data I/O operations.
    • With the new DataBackend, you can now seamlessly switch between different storage solutions. This release introduces support for:
      • Local Filesystem: Continue using your local storage just like before.
      • S3-compatible storage: Scale up your operations and store data on S3 providers such as R2 or Wasabi, without changing your workflow.
  • Progress bars, cleaner startup messaging.
    • Use env SIMPLETUNER_LOG_LEVEL=INFO (or DEBUG, WARNING, ERROR) to change the verbosity.
  • SDXL to CivitAI-compatible Safetensors Conversion Script:
    • Making it easier than ever to convert your trained SDXL checkpoints into CivitAI-compatible Safetensors.
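
The SIMPLETUNER_LOG_LEVEL variable maps directly onto Python's standard logging levels; a minimal sketch of how such an env-driven setup typically works (illustrative, not the project's exact code):

```python
import logging
import os

def configure_logging():
    """Read SIMPLETUNER_LOG_LEVEL and apply it, defaulting to WARNING.

    Accepts INFO, DEBUG, WARNING, or ERROR, case-insensitively; an
    unrecognised value falls back to WARNING.
    """
    level_name = os.environ.get("SIMPLETUNER_LOG_LEVEL", "WARNING").upper()
    level = getattr(logging, level_name, logging.WARNING)
    logging.basicConfig(level=level)
    return level
```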

Local backend

  • All of the old logic, abstracted away into a hidey-hole class.
  • Still works as it used to! No changes needed for you.

S3 Data Backend (Experimental)

  • Mostly a drop-in replacement for local filesystem operations
  • Retrieve images from S3
  • Store the VAE latent cache in S3
  • Store the aspect bucket cache inside the S3 bucket
  • Compatible with S3-compatible providers, e.g. Wasabi
  • Does not currently use or support prefixes

S3 Limitations

  • The text embed cache is still stored locally, due to the inefficiency of many small files
  • S3 metadata is not yet used for storing image properties, e.g. size/luminance
  • Some optimizations have not been made yet; notably, egress costs will be higher than they should be
  • VAE cache latents are stored in the bucket to make them portable between machines.
    • This is perhaps as bandwidth-optimized as it can be, with startup trying to read as few files as possible.
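
The DataBackend idea is an interface over byte I/O, so the trainer does not care whether data lives on a local disk or in an S3 bucket. A hedged sketch of the shape of such an abstraction (method and class names here are assumptions, not SimpleTuner's actual API):

```python
from abc import ABC, abstractmethod
from pathlib import Path

class DataBackend(ABC):
    """Storage-agnostic byte I/O: callers never touch open() or boto3 directly."""

    @abstractmethod
    def read(self, path: str) -> bytes: ...

    @abstractmethod
    def write(self, path: str, data: bytes) -> None: ...

class LocalDataBackend(DataBackend):
    """Filesystem implementation -- behaves like the pre-0.3.0 code paths."""

    def read(self, path: str) -> bytes:
        return Path(path).read_bytes()

    def write(self, path: str, data: bytes) -> None:
        Path(path).write_bytes(data)

# An S3 implementation would provide the same two methods via boto3
# get_object/put_object calls, so calling code stays unchanged.
```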

🔧 Improvements & Refinements:

  • Enhanced logging capabilities for prompt embeddings, offering better insights into your operations.
  • Improved data storage with UTF-8 encoding, ensuring compatibility and consistency across platforms.

🐛 Bug Fixes:

  • Addressed an issue with the use_captions behavior in the training script.
  • Made minor fixes to ensure seamless interaction with the S3 storage backend.
  • Fixed #60
  • Fixed #59
  • Fixed #58

Full Changelog: v0.2.3...v0.3.0

v0.3.0-rc1 - cloudy edition

27 Aug 22:10
f5a036f

Merge pull request #61 from bghira/main

v0.3.0 changes

v0.2.3 - Geisha edition

25 Aug 17:33
5ec31ad

Aspect bucketing changes

  • More robust handling of seen_images, improving on v0.2.2 changes.
  • More efficient bucket changing mechanism. No longer considers exhausted buckets.
  • Fix for running out of images early and looping forever.
  • Running out of images in a single bucket is downgraded from a WARNING log to DEBUG, as this is a normal operating condition.
  • Once ALL buckets are exhausted, we will log the training state, and bump current_epoch by one.
  • Fixed a logging message so that the reported image count is correctly labelled as the total image count rather than the remaining count
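
The bucket-changing behaviour above boils down to: skip exhausted buckets when picking the next one, and bump the epoch once nothing is left. A simplified illustration of that logic (a toy, not the real sampler):

```python
import random

class BucketSampler:
    """Toy aspect-bucket sampler: draws from non-exhausted buckets only,
    and bumps the epoch (refilling all buckets) once every bucket drains."""

    def __init__(self, buckets):
        self._all = {k: list(v) for k, v in buckets.items()}
        self._remaining = {k: list(v) for k, v in buckets.items()}
        self.current_epoch = 0

    def next_image(self):
        # Only consider buckets that still have unseen images.
        active = [k for k, v in self._remaining.items() if v]
        if not active:
            # ALL buckets exhausted: start the next epoch and refill.
            self.current_epoch += 1
            self._remaining = {k: list(v) for k, v in self._all.items()}
            active = list(self._remaining)
        bucket = random.choice(active)
        return self._remaining[bucket].pop()
```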

Optimizers

  • Added --use_adafactor_optimizer without configurable flags, as a drop-in replacement for AdamW8Bit. Use with caution, as it is not well tested; however, it could help with consumer-GPU (24G) training support.

Prompt library

  • Adjusted some prompts and removed prompt weighting
  • A user prompt library can now be created and used in addition to, or in place of, the SimpleTuner prompt library. See the --user_prompt_library option's help and the user_prompt_library.json.example document for more information.
  • New prompts added to SimpleTuner library:
    • portrait photography of a beautiful Japanese young geisha with make-up looking at the camera, intimate portrait composition, high detail, a hyper-realistic close-up portrait, symmetrical portrait, in-focus, portrait photography, hard edge lighting photography, essay, portrait photo man, photorealism, serious eyes, leica 50mm, DSLRs, f1. 8, artgerm, dramatic, moody lighting, post-processing highly detailed
      • Seems to be a really strong concept in SDXL; if it breaks, you are in danger.
    • The Great Wave off Kanagawa
    • a stunning portrait of a soviet television news show in a 1977 wes anderson style 70mm film shoot
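
The user prompt library referenced by --user_prompt_library is a JSON document; a hypothetical sketch of producing one (the mapping of shortname to prompt text is an assumption based on user_prompt_library.json.example, and the entries here are made up):

```python
import json

# Hypothetical prompt library: assumed to map a short validation name
# to the full prompt text used during validation image generation.
library = {
    "geisha_portrait": "portrait photography of a geisha, leica 50mm, f1.8",
    "great_wave": "The Great Wave off Kanagawa",
}

with open("user_prompt_library.json", "w", encoding="utf-8") as handle:
    json.dump(library, handle, indent=4)
```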

General changes

  • Better logging, with less noise
  • Trainer statistics are now printed on every epoch refresh, when SIMPLETUNER_LOG_LEVEL=INFO or higher
  • Renamed seen_state.json to seen_images.json
  • Added --tracker_project_name and adjusted default value of --tracker_run_name
  • Adjusted value of --learning_rate_end for Polynomial scheduler to 4e-7 from 1e-7
  • Refactored data loaders and bucket samplers to be more robust and maintainable
  • Training state is now saved as a part of the checkpoint itself. Upon resume, the training state is resumed from the checkpoint. This is a breaking change.
  • Fix save path for final checkpoint.
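
Bundling training state into the checkpoint means a resume no longer depends on sidecar files. A minimal sketch of the idea using JSON for clarity (field names are illustrative; the real trainer persists model weights separately):

```python
import json

def save_checkpoint(path, model_state, step, epoch, seen_images):
    """Persist model state together with trainer/sampler state, so a
    resume restores exactly where training stopped."""
    payload = {
        "model_state": model_state,
        "training_state": {
            "global_step": step,
            "current_epoch": epoch,
            "seen_images": sorted(seen_images),
        },
    }
    with open(path, "w", encoding="utf-8") as handle:
        json.dump(payload, handle)

def load_checkpoint(path):
    with open(path, encoding="utf-8") as handle:
        payload = json.load(handle)
    return payload["model_state"], payload["training_state"]
```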

Full Changelog: v0.2.2...v0.2.3

v0.2.2 - astronaut edition

21 Aug 22:15
adc5605

SDXL v_prediction sample, (under)trained via SimpleTuner.

What's Changed

  • Resolve an issue with over-sampling of images, increasing randomness
  • Resolve post-training validation error

Full Changelog: v0.2.1...v0.2.2