Commit a6f45eb

Documentation

1 parent 375a436 commit a6f45eb

11 files changed: +439 -203 lines

README.md

Lines changed: 15 additions & 15 deletions
@@ -203,7 +203,7 @@ python dataset_tool.py --source=~/downloads/afhq/train/wild --dest=~/datasets/af
 python dataset_tool.py --source=~/downloads/cifar-10-python.tar.gz --dest=~/datasets/cifar10.zip
 ```
 
-**LSUN**: Download the desired LSUN categories in LMDB format from the [LSUN project page](https://www.yf.io/p/lsun/) and convert to ZIP archive:
+**LSUN**: Download the desired categories from the [LSUN project page](https://www.yf.io/p/lsun/) and convert to ZIP archive:
 
 ```.bash
 python dataset_tool.py --source=~/downloads/lsun/raw/cat_lmdb --dest=~/datasets/lsuncat200k.zip \
@@ -262,7 +262,7 @@ The training configuration can be further customized with additional command lin
 * `--cond=1` enables class-conditional training (requires a dataset with labels).
 * `--mirror=1` amplifies the dataset with x-flips. Often beneficial, even with ADA.
 * `--resume=ffhq1024 --snap=10` performs transfer learning from FFHQ trained at 1024x1024.
-* `--resume=~/training-runs/<NAME>/network-snapshot-<KIMG>.pkl` resumes a previous training run where it left off.
+* `--resume=~/training-runs/<NAME>/network-snapshot-<INT>.pkl` resumes a previous training run.
 * `--gamma=10` overrides R1 gamma. We recommend trying a couple of different values for each new dataset.
 * `--aug=ada --target=0.7` adjusts ADA target value (default: 0.6).
 * `--augpipe=blit` enables pixel blitting but disables all other augmentations.
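
For instance, several of these options can be combined in a single run; an illustrative invocation (the dataset path is a placeholder, not part of this commit):

```.bash
python train.py --outdir=~/training-runs --data=~/datasets/my-dataset.zip \
    --gpus=2 --cond=1 --mirror=1 --gamma=10 --aug=ada --target=0.7
```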
@@ -293,7 +293,7 @@ The total training time depends heavily on resolution, number of GPUs, dataset,
 | 1024x1024 | 4 | 11h 36m | 12d 02h | 40.1&ndash;40.8 | 8.4 GB | 21.9 GB
 | 1024x1024 | 8 | 5h 54m | 6d 03h | 20.2&ndash;20.6 | 8.3 GB | 44.7 GB
 
-The above measurements were done using NVIDIA Tesla V100 GPUs with default settings (`--cfg=auto --aug=ada --metrics=fid50k_full`). "sec/kimg" shows the expected range of variation in raw training performance, as reported in `log.txt`, and "GPU mem" and "CPU mem" show the peak memory consumption observed over the course of training.
+The above measurements were done using NVIDIA Tesla V100 GPUs with default settings (`--cfg=auto --aug=ada --metrics=fid50k_full`). "sec/kimg" shows the expected range of variation in raw training performance, as reported in `log.txt`. "GPU mem" and "CPU mem" show the highest observed memory consumption, excluding the peak at the beginning caused by `torch.backends.cudnn.benchmark`.
 
 In typical cases, 25000 kimg or more is needed to reach convergence, but the results are already quite reasonable around 5000 kimg. 1000 kimg is often enough for transfer learning, which tends to converge significantly faster. The following figure shows example convergence curves for different datasets as a function of wallclock time, using the same settings as above:
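
The startup peak attributed to `torch.backends.cudnn.benchmark` above comes from cuDNN auto-tuning its convolution algorithms. If that peak is a problem on memory-constrained GPUs, benchmarking can be disabled with the `--nobench` option documented in `docs/train-help.txt` below, e.g. (placeholder paths):

```.bash
python train.py --outdir=~/training-runs --data=~/datasets/my-dataset.zip --nobench=1
```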

@@ -325,23 +325,23 @@ We employ the following metrics in the ADA paper. Execution time and GPU memory
 
 | Metric | Time | GPU mem | Description |
 | :----- | :----: | :-----: | :---------- |
-| `fid50k_full` | 13 min | 1.8 GB | Fr&eacute;chet inception distance<sup>[1]</sup> against the full dataset.
-| `kid50k_full` | 13 min | 1.8 GB | Kernel inception distance<sup>[2]</sup> against the full dataset.
-| `pr50k3_full` | 13 min | 4.1 GB | Precision and recall<sup>[3]</sup> againt the full dataset.
-| `is50k` | 13 min | 1.8 GB | Inception score<sup>[4]</sup> for CIFAR-10.
+| `fid50k_full` | 13 min | 1.8 GB | Fr&eacute;chet inception distance<sup>[1]</sup> against the full dataset
+| `kid50k_full` | 13 min | 1.8 GB | Kernel inception distance<sup>[2]</sup> against the full dataset
+| `pr50k3_full` | 13 min | 4.1 GB | Precision and recall<sup>[3]</sup> against the full dataset
+| `is50k` | 13 min | 1.8 GB | Inception score<sup>[4]</sup> for CIFAR-10
 
 In addition, the following metrics from the [StyleGAN](https://github.com/NVlabs/stylegan) and [StyleGAN2](https://github.com/NVlabs/stylegan2) papers are also supported:
 
 | Metric | Time | GPU mem | Description |
 | :------------ | :----: | :-----: | :---------- |
-| `fid50k` | 13 min | 1.8 GB | Fr&eacute;chet inception distance against 50k real images.
-| `kid50k` | 13 min | 1.8 GB | Kernel inception distance against 50k real images.
-| `pr50k3` | 13 min | 4.1 GB | Precision and recall against 50k real images.
-| `ppl2_wend` | 36 min | 2.4 GB | Perceptual path length<sup>[5]</sup> in W at path endpoints against full image.
-| `ppl_zfull` | 36 min | 2.4 GB | Perceptual path length in Z for full paths against cropped image.
-| `ppl_wfull` | 36 min | 2.4 GB | Perceptual path length in W for full paths against cropped image.
-| `ppl_zend` | 36 min | 2.4 GB | Perceptual path length in Z at path endpoints against cropped image.
-| `ppl_wend` | 36 min | 2.4 GB | Perceptual path length in W at path endpoints against cropped image.
+| `fid50k` | 13 min | 1.8 GB | Fr&eacute;chet inception distance against 50k real images
+| `kid50k` | 13 min | 1.8 GB | Kernel inception distance against 50k real images
+| `pr50k3` | 13 min | 4.1 GB | Precision and recall against 50k real images
+| `ppl2_wend` | 36 min | 2.4 GB | Perceptual path length<sup>[5]</sup> in W, endpoints, full image
+| `ppl_zfull` | 36 min | 2.4 GB | Perceptual path length in Z, full paths, cropped image
+| `ppl_wfull` | 36 min | 2.4 GB | Perceptual path length in W, full paths, cropped image
+| `ppl_zend` | 36 min | 2.4 GB | Perceptual path length in Z, endpoints, cropped image
+| `ppl_wend` | 36 min | 2.4 GB | Perceptual path length in W, endpoints, cropped image
 
 References:
 1. [GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium](https://arxiv.org/abs/1706.08500), Heusel et al. 2017
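
These metrics can also be recomputed after training; a hedged example, assuming the repository's `calc_metrics.py` entry point (not shown in this diff) and placeholder paths:

```.bash
python calc_metrics.py --metrics=fid50k_full,pr50k3_full --data=~/datasets/my-dataset.zip \
    --network=~/training-runs/<NAME>/network-snapshot-<INT>.pkl
```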

dnnlib/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@ (the changed line differs only in characters not visible in this view)
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
+# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
 #
 # NVIDIA CORPORATION and its licensors retain all intellectual property
 # and proprietary rights in and to this software, related documentation

docs/dataset-tool-help.txt

Lines changed: 50 additions & 50 deletions
@@ -1,50 +1,50 @@ (removed and re-added lines are textually identical in this view; the file content is shown once below)

Usage: dataset_tool.py [OPTIONS]

  Convert an image dataset into a dataset archive usable with StyleGAN2 ADA
  PyTorch.

  The input dataset format is guessed from the --source argument:

  --source *_lmdb/                - Load LSUN dataset
  --source cifar-10-python.tar.gz - Load CIFAR-10 dataset
  --source path/                  - Recursively load all images from path/
  --source dataset.zip            - Recursively load all images from dataset.zip

  The output dataset format can be either an image folder or a zip archive.
  Specifying the output format and path:

  --dest /path/to/dir             - Save output files under /path/to/dir
  --dest /path/to/dataset.zip     - Save output files into /path/to/dataset.zip archive

  Images within the dataset archive will be stored as uncompressed PNG.

  Image scale/crop and resolution requirements:

  Output images must be square-shaped and they must all have the same power-
  of-two dimensions.

  To scale arbitrary input image size to a specific width and height, use
  the --width and --height options. Output resolution will be either the
  original input resolution (if --width/--height was not specified) or the
  one specified with --width/height.

  Use the --transform=center-crop or --transform=center-crop-wide options to
  apply a center crop transform on the input image. These options should be
  used with the --width and --height options. For example:

  python dataset_tool.py --source LSUN/raw/cat_lmdb --dest /tmp/lsun_cat \
      --transform=center-crop-wide --width 512 --height=384

Options:
  --source PATH                   Directory or archive name for input dataset
                                  [required]
  --dest PATH                     Output directory or archive name for output
                                  dataset  [required]
  --max-images INTEGER            Output only up to `max-images` images
  --resize-filter [box|lanczos]   Filter to use when resizing images for
                                  output resolution  [default: lanczos]
  --transform [center-crop|center-crop-wide]
                                  Input crop/resize mode
  --width INTEGER                 Output width
  --height INTEGER                Output height
  --help                          Show this message and exit.
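
Putting the documented options together, a minimal conversion run might look like this (input and output paths are placeholders):

```.bash
python dataset_tool.py --source=~/my-image-folder --dest=~/datasets/my-dataset.zip \
    --transform=center-crop --width=256 --height=256
```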

docs/train-help.txt

Lines changed: 69 additions & 69 deletions
@@ -1,69 +1,69 @@ (removed and re-added lines are textually identical in this view; the file content is shown once below)

Usage: train.py [OPTIONS]

  Train a GAN using the techniques described in the paper "Training
  Generative Adversarial Networks with Limited Data".

  Examples:

  # Train with custom images using 1 GPU.
  python train.py --outdir=~/training-runs --data=~/my-image-folder

  # Train class-conditional CIFAR-10 using 2 GPUs.
  python train.py --outdir=~/training-runs --data=~/datasets/cifar10.zip \
      --gpus=2 --cfg=cifar --cond=1

  # Transfer learn MetFaces from FFHQ using 4 GPUs.
  python train.py --outdir=~/training-runs --data=~/datasets/metfaces.zip \
      --gpus=4 --cfg=paper1024 --mirror=1 --resume=ffhq1024 --snap=10

  # Reproduce original StyleGAN2 config F.
  python train.py --outdir=~/training-runs --data=~/datasets/ffhq.zip \
      --gpus=8 --cfg=stylegan2 --mirror=1 --aug=noaug

  Base configs (--cfg):
    auto       Automatically select reasonable defaults based on resolution
               and GPU count. Good starting point for new datasets.
    stylegan2  Reproduce results for StyleGAN2 config F at 1024x1024.
    paper256   Reproduce results for FFHQ and LSUN Cat at 256x256.
    paper512   Reproduce results for BreCaHAD and AFHQ at 512x512.
    paper1024  Reproduce results for MetFaces at 1024x1024.
    cifar      Reproduce results for CIFAR-10 at 32x32.

  Transfer learning source networks (--resume):
    ffhq256        FFHQ trained at 256x256 resolution.
    ffhq512        FFHQ trained at 512x512 resolution.
    ffhq1024       FFHQ trained at 1024x1024 resolution.
    celebahq256    CelebA-HQ trained at 256x256 resolution.
    lsundog256     LSUN Dog trained at 256x256 resolution.
    <PATH or URL>  Custom network pickle.

Options:
  --outdir DIR                    Where to save the results  [required]
  --gpus INT                      Number of GPUs to use  [default: 1]
  --snap INT                      Snapshot interval  [default: 50 ticks]
  --metrics LIST                  Comma-separated list or "none"  [default:
                                  fid50k_full]
  --seed INT                      Random seed  [default: 0]
  -n, --dry-run                   Print training options and exit
  --data PATH                     Training data (directory or zip)  [required]
  --cond BOOL                     Train conditional model based on dataset
                                  labels  [default: false]
  --subset INT                    Train with only N images  [default: all]
  --mirror BOOL                   Enable dataset x-flips  [default: false]
  --cfg [auto|stylegan2|paper256|paper512|paper1024|cifar]
                                  Base config  [default: auto]
  --gamma FLOAT                   Override R1 gamma
  --kimg INT                      Override training duration
  --batch INT                     Override batch size
  --aug [noaug|ada|fixed]         Augmentation mode  [default: ada]
  --p FLOAT                       Augmentation probability for --aug=fixed
  --target FLOAT                  ADA target value for --aug=ada
  --augpipe [blit|geom|color|filter|noise|cutout|bg|bgc|bgcf|bgcfn|bgcfnc]
                                  Augmentation pipeline  [default: bgc]
  --resume PKL                    Resume training  [default: noresume]
  --freezed INT                   Freeze-D  [default: 0 layers]
  --fp32 BOOL                     Disable mixed-precision training
  --nhwc BOOL                     Use NHWC memory format with FP16
  --nobench BOOL                  Disable cuDNN benchmarking
  --workers INT                   Override number of DataLoader workers
  --help                          Show this message and exit.
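
Before launching a long run, the documented `-n, --dry-run` flag prints the resolved training options without starting training; for example:

```.bash
python train.py --outdir=~/training-runs --data=~/datasets/cifar10.zip --cfg=cifar --dry-run
```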

metrics/frechet_inception_distance.py

Lines changed: 7 additions & 2 deletions
@@ -6,16 +6,21 @@
 # distribution of this software and related documentation without an express
 # license agreement from NVIDIA CORPORATION is strictly prohibited.
 
+"""Frechet Inception Distance (FID) from the paper
+"GANs trained by a two time-scale update rule converge to a local Nash
+equilibrium". Matches the original implementation by Heusel et al. at
+https://github.com/bioinf-jku/TTUR/blob/master/fid.py"""
+
 import numpy as np
 import scipy.linalg
-
 from . import metric_utils
 
 #----------------------------------------------------------------------------
 
 def compute_fid(opts, max_real, num_gen):
+    # Direct TorchScript translation of http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz
     detector_url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt'
-    detector_kwargs = dict(return_features=True)
+    detector_kwargs = dict(return_features=True) # Return raw features before the softmax layer.
 
     mu_real, sigma_real = metric_utils.compute_feature_stats_for_dataset(
         opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs,
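
The feature means and covariances gathered by `metric_utils` feed the closed-form Fréchet distance between two Gaussians, as in Heusel et al. 2017; a minimal NumPy/SciPy sketch of that final step (illustrative only, not the repository's exact code):

```.python
import numpy as np
import scipy.linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """||mu1 - mu2||^2 + Tr(s1 + s2 - 2*sqrt(s1 @ s2)), per Heusel et al. 2017."""
    diff = mu1 - mu2
    # Matrix square root of the covariance product; numerical error can
    # introduce small imaginary components, so keep only the real part.
    covmean, _ = scipy.linalg.sqrtm(sigma1 @ sigma2, disp=False)
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * np.real(covmean)))
```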
