✨ feat: release code, model, dataset
ZhendongWang6 committed Aug 27, 2023
1 parent e3f77db commit fcc22b4
Showing 35 changed files with 6,054 additions and 27 deletions.
46 changes: 43 additions & 3 deletions .gitignore
@@ -20,7 +20,6 @@ parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
@@ -50,6 +49,7 @@ coverage.xml
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
@@ -72,6 +72,7 @@ instance/
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
@@ -82,7 +83,9 @@ profile_default/
ipython_config.py

# pyenv
.python-version
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
@@ -91,7 +94,22 @@ ipython_config.py
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
@@ -127,3 +145,25 @@ dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

.DS_Store
.idea/
*/.DS_Store
.vscode/

data
*.tar.gz
*.jpg
*.png
*.csv
clear.sh
backup.sh
.tarignore
pyrightconfig.json
models
107 changes: 83 additions & 24 deletions README.md
@@ -1,24 +1,83 @@
# DIRE for Diffusion-Generated Image Detection
<b>Zhendong Wang, <a href='https://jianminbao.github.io/'>Jianmin Bao</a>, <a href='http://staff.ustc.edu.cn/~zhwg/'>Wengang Zhou</a>, Weilun Wang, Hezhen Hu, Hong Chen, <a href='http://staff.ustc.edu.cn/~lihq/en/'>Houqiang Li </a> </b>

[[Paper](https://arxiv.org/abs/2303.09295)] [[Code (Comming Soon)]()] [[Dataset (Comming Soon)]()]


## Abstract
> Diffusion models have shown remarkable success in visual synthesis, but have also raised concerns about potential abuse for malicious purposes. In this paper, we seek to build a detector for telling apart real images from diffusion-generated images. We find that existing detectors struggle to detect images generated by diffusion models, even if we include generated images from a specific diffusion model in their training data. To address this issue, we propose a novel image representation called DIffusion Reconstruction Error (DIRE), which measures the error between an input image and its reconstruction counterpart by a pre-trained diffusion model. We observe that diffusion-generated images can be approximately reconstructed by a diffusion model while real images cannot. It provides a hint that DIRE can serve as a bridge to distinguish generated and real images. DIRE provides an effective way to detect images generated by most diffusion models, and it is general for detecting generated images from unseen diffusion models and robust to various perturbations. Furthermore, we establish a comprehensive diffusion-generated benchmark including images generated by eight diffusion models to evaluate the performance of diffusion-generated image detectors. Extensive experiments on our collected benchmark demonstrate that DIRE exhibits superiority over previous generated-image detectors.

<p align="center">
<img src="figs/teaser.png" width=60%>
</p>

## DIRE
<p align="center">
<img src="figs/dire.png" width=60%>
</p>

## TODO
- [ ] Release code.
- [ ] Release dataset.

## Acknowledgments
Our code is developed based on [guided-diffusion](https://github.com/openai/guided-diffusion) and [CNNDetection](https://github.com/peterwang512/CNNDetection). Thanks for their sharing codes and models.
# DIRE for Diffusion-Generated Image Detection (ICCV 2023)
<b> <a href='https://zhendongwang6.github.io/'>Zhendong Wang</a>, <a href='https://jianminbao.github.io/'>Jianmin Bao</a>, <a href='http://staff.ustc.edu.cn/~zhwg/'>Wengang Zhou</a>, Weilun Wang, Hezhen Hu, Hong Chen, <a href='http://staff.ustc.edu.cn/~lihq/en/'>Houqiang Li </a> </b>

[[arXiv](https://arxiv.org/abs/2303.09295)] [[DiffusionForensics Dataset (code: ustc)](https://rec.ustc.edu.cn/share/61d2ec20-3b83-11ee-942f-d111ecdbde6f)] [[Pre-trained Model (code: ustc)](https://rec.ustc.edu.cn/share/61d2ec20-3b83-11ee-942f-d111ecdbde6f)]

## News
- [2023/08/27] :fire: Release code, dataset and pre-trained models.
- [2023/07/14] :tada: DIRE is accepted by ICCV 2023.
- [2023/03/16] :sparkles: Release [paper](https://arxiv.org/abs/2303.09295).

## Abstract
> Diffusion models have shown remarkable success in visual synthesis, but have also raised concerns about potential abuse for malicious purposes. In this paper, we seek to build a detector for telling apart real images from diffusion-generated images. We find that existing detectors struggle to detect images generated by diffusion models, even if we include generated images from a specific diffusion model in their training data. To address this issue, we propose a novel image representation called DIffusion Reconstruction Error (DIRE), which measures the error between an input image and its reconstruction counterpart by a pre-trained diffusion model. We observe that diffusion-generated images can be approximately reconstructed by a diffusion model while real images cannot. It provides a hint that DIRE can serve as a bridge to distinguish generated and real images. DIRE provides an effective way to detect images generated by most diffusion models, and it is general for detecting generated images from unseen diffusion models and robust to various perturbations. Furthermore, we establish a comprehensive diffusion-generated benchmark including images generated by eight diffusion models to evaluate the performance of diffusion-generated image detectors. Extensive experiments on our collected benchmark demonstrate that DIRE exhibits superiority over previous generated-image detectors.

<p align="center">
<img src="figs/teaser.png" width=60%>
</p>

## DIRE pipeline
<p align="center">
<img src="figs/dire.png" width=60%>
</p>
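At its core, the DIRE representation in the pipeline above is the per-pixel error between an input image and its reconstruction by a pre-trained diffusion model (via inversion and regeneration). A minimal sketch of that final step, assuming the reconstruction has already been computed; the function name and tensor shapes are illustrative, not this repository's API:

```python
import torch

def compute_dire(x: torch.Tensor, x_recon: torch.Tensor) -> torch.Tensor:
    """Per-pixel absolute reconstruction error (DIRE).

    x, x_recon: image tensors in [0, 1], shape (C, H, W) or (B, C, H, W).
    Diffusion-generated images tend to reconstruct well (small error), while
    real images do not, which is the signal the detector exploits.
    """
    return (x - x_recon).abs()

# Toy usage: a near-perfect reconstruction yields a much smaller DIRE than a
# poor one.
x_gen = torch.rand(3, 8, 8)
dire_gen = compute_dire(x_gen, x_gen + 0.01)       # "generated": tiny error
x_real = torch.rand(3, 8, 8)
dire_real = compute_dire(x_real, torch.rand(3, 8, 8))  # "real": large error
print(dire_gen.mean() < dire_real.mean())
```

In the actual method, the DIRE maps (not the raw images) are what the binary classifier is trained on.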

## Requirements
```
conda create -n dire python=3.9
conda activate dire
pip install torch==2.0.0+cu117 torchvision==0.15.1+cu117 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
```
## DiffusionForensics Dataset
The DiffusionForensics dataset can be downloaded from [here](https://rec.ustc.edu.cn/share/61d2ec20-3b83-11ee-942f-d111ecdbde6f) (code: ustc). The dataset is organized as follows:
```
images/recons/dire
└── train/val/test
├── lsun_bedroom
│ ├── real
│ │ └──img1.png...
│ ├── adm
│ │ └──img1.png...
│ ├── ...
├── imagenet
│ ├── real
│ │ └──img1.png...
│ ├── adm
│ │ └──img1.png...
│ ├── ...
└── celebahq
├── real
│ └──img1.png...
├── adm
│ └──img1.png...
├── ...
```
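Under this layout, all images for one (split, source, generator) combination live in a single directory, e.g. `train/lsun_bedroom/real`. A small helper to resolve such a directory and list its images (the root path and the scratch tree below are assumptions for illustration, mirroring the tree above):

```python
import os

def list_split(root, split, source, label):
    """Return sorted image paths under <root>/<split>/<source>/<label>."""
    d = os.path.join(root, split, source, label)
    return sorted(
        os.path.join(d, f) for f in os.listdir(d) if f.lower().endswith(".png")
    )

# Toy usage against a scratch tree mirroring the dataset layout.
root = "df_demo"
os.makedirs(os.path.join(root, "train", "lsun_bedroom", "real"), exist_ok=True)
open(os.path.join(root, "train", "lsun_bedroom", "real", "img1.png"), "w").close()
paths = list_split(root, "train", "lsun_bedroom", "real")
print(paths)  # one entry: the img1.png we just created
```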
## Training
Before training, link the real and DIRE images for training into the `data/train` folder. For example, link the DIRE images of real LSUN-Bedroom to `data/train/lsun_adm/0_real` and the DIRE images of ADM-generated LSUN-Bedroom to `data/train/lsun_adm/1_fake`. Do the same for the validation and test sets, replacing `data/train` with `data/val` and `data/test`. Then train the DIRE model by running the following command:
```
sh train.sh
```
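The linking step described above can be sketched with `os.symlink`. The destination names follow the `lsun_adm` example in the text; the `dire_out/...` source directories are placeholders for wherever you computed or downloaded the DIRE images:

```python
import os

def link_class_dir(src_dir, dst_dir):
    """Symlink a directory of DIRE images into a data/<split>/<task>/<label> slot."""
    os.makedirs(os.path.dirname(dst_dir), exist_ok=True)
    if not os.path.islink(dst_dir):
        os.symlink(os.path.abspath(src_dir), dst_dir)

# Example: wire up the lsun_adm training task from assumed DIRE output folders.
os.makedirs("dire_out/lsun_bedroom/real", exist_ok=True)  # placeholder sources
os.makedirs("dire_out/lsun_bedroom/adm", exist_ok=True)
link_class_dir("dire_out/lsun_bedroom/real", "data/train/lsun_adm/0_real")
link_class_dir("dire_out/lsun_bedroom/adm", "data/train/lsun_adm/1_fake")
print(os.path.islink("data/train/lsun_adm/0_real"))  # True
```

Repeat with `data/val` and `data/test` for the other splits.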
## Evaluation
We provide the pre-trained DIRE model [here](https://rec.ustc.edu.cn/share/61d2ec20-3b83-11ee-942f-d111ecdbde6f) (code: ustc).
You can evaluate the DIRE model by running the following command:
```
sh test.sh
```
## Inference
We also provide an inference demo, `demo.py`. You can run the following command to run inference on a single image or a folder of images:
```
python demo.py -f [image_path/image_dir] -m [model_path]
```

## Acknowledgments
Our code is developed based on [guided-diffusion](https://github.com/openai/guided-diffusion) and [CNNDetection](https://github.com/peterwang512/CNNDetection). Thanks to the authors for sharing their code and models.

## Citation
If you find this work useful for your research, please cite our paper:
```
@article{wang2023dire,
title={DIRE for Diffusion-Generated Image Detection},
author={Wang, Zhendong and Bao, Jianmin and Zhou, Wengang and Wang, Weilun and Hu, Hezhen and Chen, Hong and Li, Houqiang},
journal={arXiv preprint arXiv:2303.09295},
year={2023}
}
```
69 changes: 69 additions & 0 deletions demo.py
@@ -0,0 +1,69 @@
import argparse
import glob
import os

import torch
import torchvision.transforms as transforms
import torchvision.transforms.functional as TF
from PIL import Image
from tqdm import tqdm

from utils.utils import get_network, str2bool, to_cuda

parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument(
    "-f", "--file", default="data/test/lsun_adm/1_fake/0.png", type=str, help="path to image file or directory of images"
)
parser.add_argument(
    "-m",
    "--model_path",
    type=str,
    default="data/exp/ckpt/lsun_adm/model_epoch_latest.pth",
)
parser.add_argument("--use_cpu", action="store_true", help="uses gpu by default, turn on to use cpu")
parser.add_argument("--arch", type=str, default="resnet50")
parser.add_argument("--aug_norm", type=str2bool, default=True)

args = parser.parse_args()

if os.path.isfile(args.file):
    print(f"Testing on image '{args.file}'")
    file_list = [args.file]
elif os.path.isdir(args.file):
    file_list = sorted(
        glob.glob(os.path.join(args.file, "*.jpg"))
        + glob.glob(os.path.join(args.file, "*.png"))
        + glob.glob(os.path.join(args.file, "*.JPEG"))
    )
    print(f"Testing images from '{args.file}'")
else:
    raise FileNotFoundError(f"Invalid file path: '{args.file}'")


model = get_network(args.arch)
state_dict = torch.load(args.model_path, map_location="cpu")
if "model" in state_dict:
    state_dict = state_dict["model"]
model.load_state_dict(state_dict)
model.eval()
if not args.use_cpu:
    model.cuda()

print("*" * 50)

trans = transforms.Compose(
    [
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ]
)

for img_path in tqdm(file_list, dynamic_ncols=True, disable=len(file_list) <= 1):
    img = Image.open(img_path).convert("RGB")
    img = trans(img)
    if args.aug_norm:
        img = TF.normalize(img, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    in_tens = img.unsqueeze(0)
    if not args.use_cpu:
        in_tens = in_tens.cuda()

    with torch.no_grad():
        prob = model(in_tens).sigmoid().item()
    print(f"Prob of being synthetic: {prob:.4f}")
21 changes: 21 additions & 0 deletions guided-diffusion/LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2021 OpenAI

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.