The model is not trained when using DPMSolverMultistepScheduler; it works in previous versions #9452
Unanswered. Alexandr1111111 asked this question in Q&A. Replies: 0 comments.
With diffusers 0.16.1 the model trained correctly. In the new diffusers version, the DPMSolverMultistepScheduler algorithm has been changed: after running model inference (image generation), the model stops learning. Changing the scheduler settings does not change this behavior.
Settings:

```python
num_train_timesteps: int = 1000,
beta_start: float = 0.0001,
beta_end: float = 0.02,
beta_schedule: str = "linear",
trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
solver_order: int = 2,
prediction_type: str = "epsilon",
# prediction_type: str = "v_prediction",
thresholding: bool = False,
dynamic_thresholding_ratio: float = 0.995,
sample_max_value: float = 1.0,
algorithm_type: str = "dpmsolver++",
solver_type: str = "midpoint",
lower_order_final: bool = True,
euler_at_final: bool = False,
use_karras_sigmas: Optional[bool] = False,
use_lu_lambdas: Optional[bool] = False,
final_sigmas_type: Optional[str] = "zero",  # "zero" or "sigma_min"
lambda_min_clipped: float = -float("inf"),
variance_type: Optional[str] = "learned",
timestep_spacing: str = "linspace",
steps_offset: int = 0,
rescale_betas_zero_snr: bool = False,
```
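For context, here is a minimal sketch of how these settings would be passed to the scheduler, assuming they map one-to-one onto the constructor keywords (most values shown are the library defaults and could be omitted; `noise_scheduler` is the name the generation code below uses):

```python
from diffusers import DPMSolverMultistepScheduler

# Sketch of instantiating the scheduler with the settings listed above.
# Only the non-default values strictly need to be given.
noise_scheduler = DPMSolverMultistepScheduler(
    num_train_timesteps=1000,
    beta_start=0.0001,
    beta_end=0.02,
    beta_schedule="linear",
    solver_order=2,
    prediction_type="epsilon",
    algorithm_type="dpmsolver++",
    solver_type="midpoint",
    variance_type="learned",        # model is expected to predict variance channels
    timestep_spacing="linspace",
    final_sigmas_type="zero",
)
```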
How can I fix this?

Generation code:
```python
import torch
from tqdm import tqdm

# `model` and `noise_scheduler` are assumed to be created elsewhere.
noise_scheduler.set_timesteps(num_inference_steps=100)
x = torch.randn(1, 3, 256, 256).to("cuda")

for i, t in tqdm(enumerate(noise_scheduler.timesteps)):
    with torch.no_grad():
        t = torch.as_tensor([t]).cuda()
        residual = model(x, t)
    x = noise_scheduler.step(residual, t, x).prev_sample

x = x.detach().cpu().clip(-1, 1)[0]
img1 = (x + 1) / 2  # rescale from [-1, 1] to [0, 1]
```
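One plausible cause (an assumption, not confirmed for this issue): `DPMSolverMultistepScheduler` keeps internal state across `step()` calls (the buffer of previous model outputs and the step index), so reusing the training scheduler instance for sampling can leave it in a post-inference state. A minimal sketch of a workaround is to sample with a throwaway copy built via `from_config`, so the training instance is never mutated:

```python
from diffusers import DPMSolverMultistepScheduler

# Hypothetical workaround: clone the scheduler config for inference so the
# multistep state of the training scheduler is untouched by sampling.
sampling_scheduler = DPMSolverMultistepScheduler.from_config(noise_scheduler.config)
sampling_scheduler.set_timesteps(num_inference_steps=100)

x = torch.randn(1, 3, 256, 256).to("cuda")
with torch.no_grad():
    for t in sampling_scheduler.timesteps:
        residual = model(x, torch.as_tensor([t]).cuda())
        x = sampling_scheduler.step(residual, t, x).prev_sample
```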