Merged
Commits (75)
922e273 - drop python 3.8 (sayakpaul, Oct 21, 2025)
5aa4f1d - remove list, tuple, dict from typing (sayakpaul, Oct 21, 2025)
19921e9 - fold Unions into | (sayakpaul, Oct 21, 2025)
11bf2cf - up (sayakpaul, Oct 21, 2025)
2b72bee - fix a bunch and please me. (sayakpaul, Oct 21, 2025)
a076cd8 - up (sayakpaul, Oct 21, 2025)
61c6eae - up (sayakpaul, Oct 22, 2025)
d1e6fff - up (sayakpaul, Oct 22, 2025)
85b7478 - up (sayakpaul, Oct 22, 2025)
a33ef35 - up (sayakpaul, Oct 22, 2025)
56d2986 - up (sayakpaul, Oct 22, 2025)
fbc4c99 - up (sayakpaul, Oct 22, 2025)
fbb25a0 - resolve conflicts (sayakpaul, Oct 27, 2025)
6c066f0 - enforce 3.10.0. (sayakpaul, Oct 27, 2025)
ca5afae - up (sayakpaul, Oct 27, 2025)
585c32b - up (sayakpaul, Oct 27, 2025)
27c1ac4 - up (sayakpaul, Oct 27, 2025)
4490e4c - Merge branch 'main' into remove-explicit-typing (sayakpaul, Oct 27, 2025)
bcada5b - up (sayakpaul, Oct 27, 2025)
41381b1 - up (sayakpaul, Oct 27, 2025)
19fe631 - up (sayakpaul, Oct 27, 2025)
3a00e23 - up (sayakpaul, Oct 27, 2025)
219a8ab - Merge branch 'main' into remove-explicit-typing (sayakpaul, Oct 27, 2025)
6d2a80c - up (sayakpaul, Oct 28, 2025)
6f2ded5 - Merge branch 'main' into remove-explicit-typing (sayakpaul, Oct 28, 2025)
dccc206 - Merge branch 'main' into remove-explicit-typing (sayakpaul, Oct 28, 2025)
e68c936 - Merge branch 'main' into remove-explicit-typing (sayakpaul, Nov 1, 2025)
a2a6abc - Update setup.py (sayakpaul, Jan 12, 2026)
8390581 - getting pro at resolving conflicts. (sayakpaul, Jan 12, 2026)
7407ada - up. (sayakpaul, Jan 12, 2026)
e47af9b - python 3.10. (sayakpaul, Jan 12, 2026)
ac3bd4b - ifx (sayakpaul, Jan 12, 2026)
c13b264 - up (sayakpaul, Jan 12, 2026)
6983485 - up (sayakpaul, Jan 12, 2026)
f9f6758 - up (sayakpaul, Jan 12, 2026)
db62765 - up (sayakpaul, Jan 12, 2026)
a9af091 - up (sayakpaul, Jan 12, 2026)
d77d61b - final (sayakpaul, Jan 12, 2026)
8063353 - up (sayakpaul, Jan 12, 2026)
2b1f19d - fix typing utils. (sayakpaul, Jan 12, 2026)
f364948 - up (sayakpaul, Jan 12, 2026)
a72f61a - up (sayakpaul, Jan 12, 2026)
7192d4b - up (sayakpaul, Jan 12, 2026)
f2ced21 - up (sayakpaul, Jan 12, 2026)
4e1ce3d - up (sayakpaul, Jan 12, 2026)
0d52188 - up (sayakpaul, Jan 12, 2026)
19558cb - fix (sayakpaul, Jan 12, 2026)
e604854 - up (sayakpaul, Jan 12, 2026)
53a943d - up (sayakpaul, Jan 12, 2026)
78233be - up (sayakpaul, Jan 12, 2026)
3a0efa3 - up (sayakpaul, Jan 12, 2026)
4b020c5 - up (sayakpaul, Jan 12, 2026)
beede72 - up (sayakpaul, Jan 12, 2026)
337ac57 - Merge branch 'main' into remove-explicit-typing (sayakpaul, Jan 12, 2026)
5ee4e19 - handle modern types. (sayakpaul, Jan 13, 2026)
34388bd - Merge branch 'main' into remove-explicit-typing (sayakpaul, Jan 13, 2026)
1426c33 - up (sayakpaul, Jan 13, 2026)
aca3b78 - Merge branch 'main' into remove-explicit-typing (sayakpaul, Jan 13, 2026)
4cbe1aa - up (sayakpaul, Jan 13, 2026)
b30be7d - fix ip adapter type checking. (sayakpaul, Jan 13, 2026)
987412b - up (sayakpaul, Jan 13, 2026)
463367d - up (sayakpaul, Jan 13, 2026)
7ad97d4 - resolve conflicts. (sayakpaul, Jan 15, 2026)
765eb50 - up (sayakpaul, Jan 15, 2026)
b24f1c0 - up (sayakpaul, Jan 15, 2026)
a6ac560 - resolve big conflicts. (sayakpaul, Feb 12, 2026)
7d7c76e - up (sayakpaul, Feb 12, 2026)
51cbafa - up (sayakpaul, Feb 12, 2026)
0f775ae - up (sayakpaul, Feb 12, 2026)
94adaa2 - finish (sayakpaul, Feb 13, 2026)
f876ea8 - Merge branch 'main' into remove-explicit-typing (sayakpaul, Feb 13, 2026)
b7b6081 - revert docstring changes. (sayakpaul, Feb 13, 2026)
f78ef74 - keep deleted files deleted. (sayakpaul, Feb 13, 2026)
de19e0a - keep deleted files deleted. (sayakpaul, Feb 13, 2026)
d3fdeb1 - up (sayakpaul, Feb 13, 2026)
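The commit messages above ("drop python 3.8", "remove list, tuple, dict from typing", "fold Unions into |", "handle modern types.") describe the pattern applied throughout the file diffs below: with Python 3.10 as the new minimum, annotations use the built-in generics from PEP 585 and the `|` union syntax from PEP 604 instead of `typing.List`, `typing.Dict`, `typing.Optional`, and `typing.Union`. A minimal before/after sketch of the rewrite (illustrative only; the function here is hypothetical, not code from this PR):

```python
# Before: Python 3.8-compatible annotations spelled via the typing module.
from typing import Any, Dict, List, Optional, Union

def encode(prompt: Union[str, List[str]], extra: Optional[Dict[str, Any]] = None) -> List[float]: ...

# After: Python 3.10+ built-in generics (PEP 585) and | unions (PEP 604).
from typing import Any

def encode(prompt: str | list[str], extra: dict[str, Any] | None = None) -> list[float]: ...
```

Names such as `Any` and `Callable` still come from `typing`, which is why the benchmarks/benchmarking_utils.py hunk below keeps `from typing import Any, Callable` while dropping `Dict`, `Optional`, and `Union`.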
2 changes: 1 addition & 1 deletion .github/workflows/notify_slack_about_release.yml
@@ -15,7 +15,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v6
with:
- python-version: '3.8'
+ python-version: '3.10'

- name: Notify Slack about the release
env:
2 changes: 1 addition & 1 deletion .github/workflows/pr_dependency_test.yml
@@ -22,7 +22,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
- python-version: "3.8"
+ python-version: "3.10"
- name: Install dependencies
run: |
pip install -e .
4 changes: 2 additions & 2 deletions .github/workflows/pr_tests.yml
@@ -35,7 +35,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
- python-version: "3.8"
+ python-version: "3.10"
- name: Install dependencies
run: |
pip install --upgrade pip
@@ -55,7 +55,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
- python-version: "3.8"
+ python-version: "3.10"
- name: Install dependencies
run: |
pip install --upgrade pip
4 changes: 2 additions & 2 deletions .github/workflows/pr_tests_gpu.yml
@@ -36,7 +36,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
- python-version: "3.8"
+ python-version: "3.10"
- name: Install dependencies
run: |
pip install --upgrade pip
@@ -56,7 +56,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
- python-version: "3.8"
+ python-version: "3.10"
- name: Install dependencies
run: |
pip install --upgrade pip
2 changes: 1 addition & 1 deletion .github/workflows/pr_torch_dependency_test.yml
@@ -22,7 +22,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
- python-version: "3.8"
+ python-version: "3.10"
- name: Install dependencies
run: |
pip install -e .
4 changes: 2 additions & 2 deletions .github/workflows/pypi_publish.yaml
@@ -20,7 +20,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
- python-version: '3.8'
+ python-version: '3.10'

- name: Fetch latest branch
id: fetch_latest_branch
@@ -47,7 +47,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v6
with:
- python-version: "3.8"
+ python-version: "3.10"

- name: Install dependencies
run: |
2 changes: 1 addition & 1 deletion .github/workflows/stale.yml
@@ -20,7 +20,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v6
with:
- python-version: 3.8
+ python-version: 3.10

- name: Install requirements
run: |
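Unlike the other workflow hunks in this PR, stale.yml leaves the version unquoted, and YAML resolves a bare 3.10 scalar as the float 3.1 rather than the string "3.10", which is the usual reason setup-python versions are quoted. A quick way to see the difference with PyYAML (an illustration of YAML scalar resolution only, assuming the Actions parser resolves plain scalars the same way):

```python
# pip install pyyaml
import yaml

print(yaml.safe_load("python-version: 3.10"))    # {'python-version': 3.1}    -> parsed as a float
print(yaml.safe_load("python-version: '3.10'"))  # {'python-version': '3.10'} -> kept as a string
```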
2 changes: 1 addition & 1 deletion LICENSE
@@ -144,7 +144,7 @@
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
+ implied, including, without limitation, Any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
14 changes: 7 additions & 7 deletions benchmarks/benchmarking_utils.py
@@ -6,7 +6,7 @@
import threading
from contextlib import nullcontext
from dataclasses import dataclass
- from typing import Any, Callable, Dict, Optional, Union
+ from typing import Any, Callable

import pandas as pd
import torch
@@ -91,10 +91,10 @@ def model_init_fn(model_cls, group_offload_kwargs=None, layerwise_upcasting=Fals
class BenchmarkScenario:
name: str
model_cls: ModelMixin
- model_init_kwargs: Dict[str, Any]
+ model_init_kwargs: dict[str, Any]
model_init_fn: Callable
get_model_input_dict: Callable
- compile_kwargs: Optional[Dict[str, Any]] = None
+ compile_kwargs: dict[str, Any] | None = None


@require_torch_gpu
@@ -176,7 +176,7 @@ def run_benchmark(self, scenario: BenchmarkScenario):
result["fullgraph"], result["mode"] = None, None
return result

- def run_bencmarks_and_collate(self, scenarios: Union[BenchmarkScenario, list[BenchmarkScenario]], filename: str):
+ def run_bencmarks_and_collate(self, scenarios: BenchmarkScenario | list[BenchmarkScenario], filename: str):
if not isinstance(scenarios, list):
scenarios = [scenarios]
record_queue = queue.Queue()
@@ -214,10 +214,10 @@ def _run_phase(
*,
model_cls: ModelMixin,
init_fn: Callable,
- init_kwargs: Dict[str, Any],
+ init_kwargs: dict[str, Any],
get_input_fn: Callable,
- compile_kwargs: Optional[Dict[str, Any]],
- ) -> Dict[str, float]:
+ compile_kwargs: dict[str, Any] | None = None,
+ ) -> dict[str, float]:
# setup
self.pre_benchmark()

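The later commits "handle modern types." and "fix typing utils." point at a side effect of this rewrite: at runtime, an annotation written as `X | None` is a `types.UnionType`, while `Optional[X]`/`Union[X, None]` report `typing.Union`, so introspection code that only checks for `typing.Union` stops matching the new spelling. A small standalone sketch of the distinction (an assumption about what those commits address, not the library's actual helper code):

```python
import types
import typing

def is_optional(annotation) -> bool:
    """Return True if the annotation is a union that includes None.

    Covers both the legacy typing.Optional/typing.Union spelling and the
    PEP 604 `X | None` spelling, which typing.get_origin() reports differently.
    """
    origin = typing.get_origin(annotation)
    if origin is typing.Union or origin is types.UnionType:
        return type(None) in typing.get_args(annotation)
    return False

print(is_optional(typing.Optional[int]))  # True: origin is typing.Union
print(is_optional(int | None))            # True: origin is types.UnionType
print(is_optional(list[int]))             # False: origin is list
```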
10 changes: 5 additions & 5 deletions examples/cogvideo/train_cogvideox_image_to_video_lora.py
@@ -432,9 +432,9 @@ def get_args():
class VideoDataset(Dataset):
def __init__(
self,
- instance_data_root: Optional[str] = None,
- dataset_name: Optional[str] = None,
- dataset_config_name: Optional[str] = None,
+ instance_data_root: str | None = None,
+ dataset_name: str | None = None,
+ dataset_config_name: str | None = None,
caption_column: str = "text",
video_column: str = "video",
height: int = 480,
@@ -443,8 +443,8 @@ def __init__(
max_num_frames: int = 49,
skip_frames_start: int = 0,
skip_frames_end: int = 0,
- cache_dir: Optional[str] = None,
- id_token: Optional[str] = None,
+ cache_dir: str | None = None,
+ id_token: str | None = None,
) -> None:
super().__init__()

10 changes: 5 additions & 5 deletions examples/cogvideo/train_cogvideox_lora.py
@@ -416,9 +416,9 @@ def get_args():
class VideoDataset(Dataset):
def __init__(
self,
- instance_data_root: Optional[str] = None,
- dataset_name: Optional[str] = None,
- dataset_config_name: Optional[str] = None,
+ instance_data_root: str | None = None,
+ dataset_name: str | None = None,
+ dataset_config_name: str | None = None,
caption_column: str = "text",
video_column: str = "video",
height: int = 480,
@@ -428,8 +428,8 @@ def __init__(
max_num_frames: int = 49,
skip_frames_start: int = 0,
skip_frames_end: int = 0,
- cache_dir: Optional[str] = None,
- id_token: Optional[str] = None,
+ cache_dir: str | None = None,
+ id_token: str | None = None,
) -> None:
super().__init__()

4 changes: 2 additions & 2 deletions examples/community/README_community_scripts.md
@@ -260,7 +260,7 @@ class SDPromptSchedulingCallback(PipelineCallback):

def callback_fn(
self, pipeline, step_index, timestep, callback_kwargs
- ) -> Dict[str, Any]:
+ ) -> dict[str, Any]:
cutoff_step_ratio = self.config.cutoff_step_ratio
cutoff_step_index = self.config.cutoff_step_index
if isinstance(self.config.encoded_prompt, tuple):
@@ -343,7 +343,7 @@ class SDXLPromptSchedulingCallback(PipelineCallback):

def callback_fn(
self, pipeline, step_index, timestep, callback_kwargs
- ) -> Dict[str, Any]:
+ ) -> dict[str, Any]:
cutoff_step_ratio = self.config.cutoff_step_ratio
cutoff_step_index = self.config.cutoff_step_index
if isinstance(self.config.encoded_prompt, tuple):
2 changes: 1 addition & 1 deletion examples/community/adaptive_mask_inpainting.py
@@ -871,7 +871,7 @@ def __call__(
latents: Optional[torch.FloatTensor] = None,
prompt_embeds: Optional[torch.FloatTensor] = None,
negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
+ output_type: str | None = "pil",
return_dict: bool = True,
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
callback_steps: int = 1,
4 changes: 2 additions & 2 deletions examples/community/bit_diffusion.py
@@ -231,9 +231,9 @@ def __call__(
height: Optional[int] = 256,
width: Optional[int] = 256,
num_inference_steps: Optional[int] = 50,
- generator: Optional[torch.Generator] = None,
+ generator: torch.Generator | None = None,
batch_size: Optional[int] = 1,
- output_type: Optional[str] = "pil",
+ output_type: str | None = "pil",
return_dict: bool = True,
**kwargs,
) -> Union[Tuple, ImagePipelineOutput]:
@@ -235,8 +235,8 @@ def __call__(
self,
style_image: Union[torch.Tensor, PIL.Image.Image],
content_image: Union[torch.Tensor, PIL.Image.Image],
- style_prompt: Optional[str] = None,
- content_prompt: Optional[str] = None,
+ style_prompt: str | None = None,
+ content_prompt: str | None = None,
height: Optional[int] = 512,
width: Optional[int] = 512,
noise_strength: float = 0.6,
@@ -245,8 +245,8 @@ def __call__(
batch_size: Optional[int] = 1,
eta: float = 0.0,
clip_guidance_scale: Optional[float] = 100,
- generator: Optional[torch.Generator] = None,
- output_type: Optional[str] = "pil",
+ generator: torch.Generator | None = None,
+ output_type: str | None = "pil",
return_dict: bool = True,
slerp_latent_style_strength: float = 0.8,
slerp_prompt_style_strength: float = 0.1,
4 changes: 2 additions & 2 deletions examples/community/clip_guided_stable_diffusion.py
@@ -179,9 +179,9 @@ def __call__(
clip_prompt: Optional[Union[str, List[str]]] = None,
num_cutouts: Optional[int] = 4,
use_cutouts: Optional[bool] = True,
- generator: Optional[torch.Generator] = None,
+ generator: torch.Generator | None = None,
latents: Optional[torch.Tensor] = None,
- output_type: Optional[str] = "pil",
+ output_type: str | None = "pil",
return_dict: bool = True,
):
if isinstance(prompt, str):
4 changes: 2 additions & 2 deletions examples/community/clip_guided_stable_diffusion_img2img.py
@@ -316,9 +316,9 @@ def __call__(
clip_prompt: Optional[Union[str, List[str]]] = None,
num_cutouts: Optional[int] = 4,
use_cutouts: Optional[bool] = True,
- generator: Optional[torch.Generator] = None,
+ generator: torch.Generator | None = None,
latents: Optional[torch.Tensor] = None,
- output_type: Optional[str] = "pil",
+ output_type: str | None = "pil",
return_dict: bool = True,
):
if isinstance(prompt, str):
6 changes: 3 additions & 3 deletions examples/community/composable_stable_diffusion.py
@@ -357,13 +357,13 @@ def __call__(
negative_prompt: Optional[Union[str, List[str]]] = None,
num_images_per_prompt: Optional[int] = 1,
eta: float = 0.0,
- generator: Optional[torch.Generator] = None,
+ generator: torch.Generator | None = None,
latents: Optional[torch.Tensor] = None,
- output_type: Optional[str] = "pil",
+ output_type: str | None = "pil",
return_dict: bool = True,
callback: Optional[Callable[[int, int, torch.Tensor], None]] = None,
callback_steps: int = 1,
- weights: Optional[str] = "",
+ weights: str | None = "",
):
r"""
Function invoked when calling the pipeline for generation.
2 changes: 1 addition & 1 deletion examples/community/ddim_noise_comparative_analysis.py
@@ -110,7 +110,7 @@ def __call__(
eta: float = 0.0,
num_inference_steps: int = 50,
use_clipped_model_output: Optional[bool] = None,
- output_type: Optional[str] = "pil",
+ output_type: str | None = "pil",
return_dict: bool = True,
) -> Union[ImagePipelineOutput, Tuple]:
r"""
2 changes: 1 addition & 1 deletion examples/community/dps_pipeline.py
@@ -54,7 +54,7 @@ def __call__(
batch_size: int = 1,
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
num_inference_steps: int = 1000,
- output_type: Optional[str] = "pil",
+ output_type: str | None = "pil",
return_dict: bool = True,
zeta: float = 0.3,
) -> Union[ImagePipelineOutput, Tuple]:
12 changes: 5 additions & 7 deletions examples/community/edict_pipeline.py
@@ -1,5 +1,3 @@
- from typing import Optional
-
import torch
from PIL import Image
from tqdm.auto import tqdm
@@ -39,7 +37,7 @@ def __init__(
self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)

def _encode_prompt(
- self, prompt: str, negative_prompt: Optional[str] = None, do_classifier_free_guidance: bool = False
+ self, prompt: str, negative_prompt: str | None = None, do_classifier_free_guidance: bool = False
):
text_inputs = self.tokenizer(
prompt,
@@ -141,7 +139,7 @@ def prepare_latents(
text_embeds: torch.Tensor,
timesteps: torch.Tensor,
guidance_scale: float,
- generator: Optional[torch.Generator] = None,
+ generator: torch.Generator | None = None,
):
do_classifier_free_guidance = guidance_scale > 1.0

@@ -194,9 +192,9 @@ def __call__(
guidance_scale: float = 3.0,
num_inference_steps: int = 50,
strength: float = 0.8,
- negative_prompt: Optional[str] = None,
- generator: Optional[torch.Generator] = None,
- output_type: Optional[str] = "pil",
+ negative_prompt: str | None = None,
+ generator: torch.Generator | None = None,
+ output_type: str | None = "pil",
):
do_classifier_free_guidance = guidance_scale > 1.0

4 changes: 2 additions & 2 deletions examples/community/fresco_v2v.py
@@ -1208,7 +1208,7 @@ def apply_FRESCO_attn(pipe):


def retrieve_latents(
- encoder_output: torch.Tensor, generator: Optional[torch.Generator] = None, sample_mode: str = "sample"
+ encoder_output: torch.Tensor, generator: torch.Generator | None = None, sample_mode: str = "sample"
):
if hasattr(encoder_output, "latent_dist") and sample_mode == "sample":
return encoder_output.latent_dist.sample(generator)
@@ -2064,7 +2064,7 @@ def __call__(
negative_prompt_embeds: Optional[torch.FloatTensor] = None,
ip_adapter_image: Optional[PipelineImageInput] = None,
ip_adapter_image_embeds: Optional[List[torch.FloatTensor]] = None,
- output_type: Optional[str] = "pil",
+ output_type: str | None = "pil",
return_dict: bool = True,
cross_attention_kwargs: Optional[Dict[str, Any]] = None,
controlnet_conditioning_scale: Union[float, List[float]] = 0.8,
2 changes: 1 addition & 1 deletion examples/community/gluegen.py
@@ -597,7 +597,7 @@ def __call__(
latents: Optional[torch.Tensor] = None,
prompt_embeds: Optional[torch.Tensor] = None,
negative_prompt_embeds: Optional[torch.Tensor] = None,
- output_type: Optional[str] = "pil",
+ output_type: str | None = "pil",
return_dict: bool = True,
cross_attention_kwargs: Optional[Dict[str, Any]] = None,
guidance_rescale: float = 0.0,
4 changes: 2 additions & 2 deletions examples/community/hd_painter.py
@@ -462,7 +462,7 @@ def __call__(
num_inference_steps: int = 50,
timesteps: List[int] = None,
guidance_scale: float = 7.5,
- positive_prompt: Optional[str] = "",
+ positive_prompt: str | None = "",
negative_prompt: Optional[Union[str, List[str]]] = None,
num_images_per_prompt: Optional[int] = 1,
eta: float = 0.01,
@@ -471,7 +471,7 @@ def __call__(
prompt_embeds: Optional[torch.Tensor] = None,
negative_prompt_embeds: Optional[torch.Tensor] = None,
ip_adapter_image: Optional[PipelineImageInput] = None,
- output_type: Optional[str] = "pil",
+ output_type: str | None = "pil",
return_dict: bool = True,
cross_attention_kwargs: Optional[Dict[str, Any]] = None,
clip_skip: int = None,