33 changes: 33 additions & 0 deletions docs/source/en/api/pipelines/bria_fibo_edit.md
@@ -0,0 +1,33 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Bria Fibo Edit

Fibo Edit is an 8B-parameter image-to-image model that introduces a structured-control paradigm: it operates on JSON inputs paired with source images to enable deterministic, repeatable editing workflows.
Featuring native masking for granular precision, it moves beyond simple prompt-based diffusion to offer explicit, interpretable control optimized for production environments.
Its lightweight architecture is designed for deep customization, letting researchers build specialized "Edit" models for domain-specific tasks while delivering top-tier aesthetic quality.

## Usage
_The model is gated: before using it with diffusers, you first need to go to the [Bria Fibo Hugging Face page](https://huggingface.co/briaai/Fibo-Edit), fill in the form, and accept the gate. Once you have access, log in so that your system knows you've accepted the gate._

Use the command below to log in:

```bash
hf auth login
```
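Once logged in, the structured-control workflow centers on serializing an edit instruction as JSON and passing it to the pipeline alongside a source image. A minimal sketch follows; note that the JSON schema, key names, and pipeline arguments shown here are illustrative assumptions, not the documented schema (see the model card for the authoritative format):

```python
import json

# Hypothetical structured edit instruction -- the exact schema is defined by
# the model card, so the keys below are illustrative only.
edit_instruction = {
    "task": "object_replacement",
    "target": "the red car in the foreground",
    "edit": "replace it with a blue bicycle",
}
prompt = json.dumps(edit_instruction)

# The pipeline itself would then be invoked roughly like this (requires gate
# access and a GPU; argument names may differ):
#
#   import torch
#   from diffusers import BriaFiboEditPipeline
#   from diffusers.utils import load_image
#
#   pipe = BriaFiboEditPipeline.from_pretrained(
#       "briaai/Fibo-Edit", torch_dtype=torch.bfloat16
#   ).to("cuda")
#   source = load_image("path/to/source.png")
#   result = pipe(prompt=prompt, image=source).images[0]
#   result.save("edited.png")
```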


## BriaFiboEditPipeline

[[autodoc]] BriaFiboEditPipeline
- all
- __call__
87 changes: 87 additions & 0 deletions examples/dreambooth/README_fibo_edit.md
@@ -0,0 +1,87 @@
# DreamBooth LoRA training example for Bria Fibo Edit

[DreamBooth](https://huggingface.co/papers/2208.12242) is a method to personalize text-to-image models given just a few images of a subject.

The `train_dreambooth_fibo_edit.py` script shows how to implement LoRA fine-tuning for [Bria Fibo Edit](https://huggingface.co/briaai/Fibo-edit), an image editing model.

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```

Then cd into the `examples/dreambooth` folder and run:
```bash
pip install -r requirements_fibo_edit.txt
```

And initialize an [Accelerate](https://github.com/huggingface/accelerate/) environment:

```bash
accelerate config default
```

### Dataset format

The training script expects a dataset with the following columns:
- `input_image`: Source image (before editing)
- `image`: Target image (after editing)
- `caption`: Edit instruction in JSON format

You can use a Hugging Face dataset via `--dataset_name` or a local directory via `--instance_data_dir`.
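An individual training example might look like the sketch below; the file paths and the caption's JSON schema are illustrative assumptions, not requirements of the script:

```python
import json

# Illustrative example of a single training row; the caption is itself a
# JSON-encoded edit instruction (schema assumed here for illustration).
row = {
    "input_image": "edits/0001_before.png",   # source image (before editing)
    "image": "edits/0001_after.png",          # target image (after editing)
    "caption": json.dumps({"edit": "make the sky overcast"}),
}

# Minimal validation: every row must carry the three columns, and the
# caption must parse as JSON.
required = {"input_image", "image", "caption"}
assert required.issubset(row), f"missing columns: {required - row.keys()}"
instruction = json.loads(row["caption"])
print(instruction)  # {'edit': 'make the sky overcast'}
```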

### Training

```bash
export MODEL_NAME="briaai/Fibo-edit"
export DATASET_NAME="your-dataset"
export OUTPUT_DIR="fibo-edit-dreambooth-lora"

accelerate launch train_dreambooth_fibo_edit.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$DATASET_NAME \
--output_dir=$OUTPUT_DIR \
--mixed_precision="bf16" \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--learning_rate=1e-4 \
--lr_scheduler="cosine_with_warmup" \
--lr_warmup_steps=100 \
--max_train_steps=1500 \
--lora_rank=128 \
--checkpointing_steps=250 \
--seed=10
```
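With `--train_batch_size=1` and `--gradient_accumulation_steps=4`, the effective batch size is the product of the per-device batch size, the accumulation steps, and the number of processes — a quick sanity check:

```python
train_batch_size = 1
gradient_accumulation_steps = 4
num_processes = 1  # assumption: single-GPU run

effective_batch_size = train_batch_size * gradient_accumulation_steps * num_processes
print(effective_batch_size)  # → 4
```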

### Key arguments

| Argument | Default | Description |
|----------|---------|-------------|
| `--lora_rank` | 128 | LoRA rank for fine-tuning |
| `--learning_rate` | 1e-4 | Initial learning rate |
| `--lr_scheduler` | cosine_with_warmup | Learning rate scheduler |
| `--optimizer` | AdamW | Optimizer (AdamW or prodigy) |
| `--gradient_checkpointing` | 1 | Enable gradient checkpointing to save memory |
| `--mixed_precision` | bf16 | Mixed precision training mode |

### Resume from checkpoint

To resume training from a checkpoint:

```bash
accelerate launch train_dreambooth_fibo_edit.py \
... \
--resume_from_checkpoint="latest"
```

Or point to a specific checkpoint path:

```bash
--resume_from_checkpoint="/path/to/checkpoint_500"
```
7 changes: 7 additions & 0 deletions examples/dreambooth/requirements_fibo_edit.txt
@@ -0,0 +1,7 @@
accelerate>=0.31.0
torchvision
transformers>=4.41.2
peft>=0.11.1
ujson
Pillow
tqdm