This project is inspired by the paper "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation".
In this work, I aim to transfer facial features (such as expression, lighting, and facial details) from a secondary image to a primary image while preserving the identity of the primary image.
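Conceptually, this works like reference-guided image translation: a style encoder distills the secondary image into a compact style code, and a generator re-renders the primary image under that code. The sketch below illustrates the idea with toy PyTorch modules; the layer sizes, module names, and 256x256 resolution are illustrative assumptions, not this repo's actual architecture.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real networks; shapes are illustrative only.
style_dim = 64

style_encoder = nn.Sequential(            # secondary image -> compact style code
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, style_dim),
)

generator = nn.Sequential(                # primary image + broadcast style -> output
    nn.Conv2d(3 + style_dim, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)

primary = torch.randn(1, 3, 256, 256)     # identity source
secondary = torch.randn(1, 3, 256, 256)   # expression/lighting source

style = style_encoder(secondary)                              # (1, 64)
style_map = style[:, :, None, None].expand(-1, -1, 256, 256)  # broadcast spatially
output = generator(torch.cat([primary, style_map], dim=1))    # (1, 3, 256, 256)
```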
To train the model, you'll need the CelebA-HQ dataset. Follow these steps:
- Download the dataset by running:

  ```bash
  bash download.sh celeba-hq-dataset
  ```
- After this, you will have a zip archive (`celeba_hq`) inside the `data` folder. Extract it there, either manually or from Python as sketched below.
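  If you prefer to script the extraction, Python's standard `zipfile` module does the same job; the archive path below is an assumption about where `download.sh` places the file:

  ```python
  from zipfile import ZipFile

  # Equivalent to extracting manually; the exact archive name is an
  # assumption, so adjust the path if yours differs.
  with ZipFile("data/celeba_hq.zip") as archive:
      archive.extractall("data")
  ```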
- The dataset will then have this layout:

  ```
  data
  ├── celeba_hq
  │
  ```
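  A quick way to confirm the extraction worked is to count the images; the snippet below searches recursively, since the exact subfolder layout under `celeba_hq` depends on the archive contents:

  ```python
  from pathlib import Path

  # Counts images anywhere under the extracted folder, so it works
  # regardless of the exact subdirectory structure.
  images = [p for p in Path("data/celeba_hq").rglob("*")
            if p.suffix.lower() in {".jpg", ".png"}]
  print(f"found {len(images)} images")
  ```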
- To train the model, run:

  ```bash
  python train.py --data_path /path/to/data --output_path /path/to/output
  ```

- For example:

  ```bash
  python train.py --data_path ./data --output_path ./output
  ```
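Under the hood, `train.py` presumably consumes these two flags with something like the following; this is a sketch of plausible argument handling, not the repo's actual code:

```python
import argparse

# A plausible reading of the two flags used above; train.py's real
# argument handling may differ.
parser = argparse.ArgumentParser(description="Train the feature-transfer model")
parser.add_argument("--data_path", required=True,
                    help="root folder containing celeba_hq")
parser.add_argument("--output_path", required=True,
                    help="folder where checkpoints and samples are written")
args = parser.parse_args()
print(f"training on {args.data_path}, writing results to {args.output_path}")
```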
For testing, see the `test.ipynb` notebook.
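Typical input preprocessing for a 256x256 face model looks like the following; the resolution, normalization, and file names here are assumptions rather than the notebook's exact code:

```python
from PIL import Image
from torchvision import transforms

# Standard face-model preprocessing; the 256x256 resolution and
# [-1, 1] normalization are assumptions about this repo.
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

# Placeholder file names for the two inputs.
primary = preprocess(Image.open("primary.jpg").convert("RGB")).unsqueeze(0)
secondary = preprocess(Image.open("secondary.jpg").convert("RGB")).unsqueeze(0)
```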
Below are the results showcasing the feature transfer:
- The first image is the primary input.
- The second image is the secondary input.
- The third image is the generated output.