This repository accompanies our mid-project report and final poster on investigating the interplay between model compression and algorithmic fairness in convolutional neural networks. We focus on quantization-based compression and two post-hoc bias mitigation strategies:
- Calibrated Equalized Odds (CalEqOd)
- FairALM (Augmented Lagrangian Method)
A detailed write-up of our methodology, datasets, and preliminary approach is available in the PDF:
[Fairness_and_Compression.pdf](Fairness_and_Compression.pdf)
Our poster summarizes key findings, including compression ratios and fairness trade-offs:
[View Poster (PDF)](fairness_of_model_poster_aryan.pdf)
## Setup

- Python ≥ 3.8
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
## Repository Structure

```
.
├── data/                              # CelebA subsets and preprocessing scripts
├── src/                               # Training, quantization & evaluation code
│   ├── train.py                       # Model training with/without fairness constraints
│   ├── quantize.py                    # Post-training quantization routines
│   └── eval.py                        # Compute accuracy, EO, FPR disparities
├── trials/                            # Saved checkpoints & log files
├── Fairness_and_Compression.pdf       # Mid-project report
├── fairness_of_model_poster_aryan.pdf # Final poster
├── requirements.txt
└── README.md
```
## Usage

### 1. Train & evaluate (baseline CNN)

```bash
python src/train.py \
    --dataset celeba \
    --epochs 20 \
    --save-dir trials/baseline/

python src/eval.py \
    --checkpoint trials/baseline/model.pt
```
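For reference, the fairness metrics that `eval.py` reports can be computed as below. This is a minimal NumPy sketch of the equalized-odds (EO) gap for a binary task with a binary protected attribute; the function names are illustrative, not the repo's actual API:

```python
import numpy as np

def group_rates(y_true, y_pred, mask):
    """TPR and FPR restricted to one protected group (given by a boolean mask)."""
    y_true, y_pred = y_true[mask], y_pred[mask]
    tpr = np.mean(y_pred[y_true == 1]) if np.any(y_true == 1) else 0.0
    fpr = np.mean(y_pred[y_true == 0]) if np.any(y_true == 0) else 0.0
    return tpr, fpr

def eo_gap(y_true, y_pred, sensitive):
    """Equalized-odds gap: worst disparity in TPR or FPR between the two groups."""
    tpr0, fpr0 = group_rates(y_true, y_pred, sensitive == 0)
    tpr1, fpr1 = group_rates(y_true, y_pred, sensitive == 1)
    return max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))
```

The FPR disparity alone is the `abs(fpr0 - fpr1)` term; EO requires both rates to match across groups.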
### 2. Quantize & measure fairness

```bash
python src/quantize.py \
    --checkpoint trials/baseline/model.pt \
    --output-dir trials/quantized/

python src/eval.py \
    --checkpoint trials/quantized/model_int8.pt
```
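Conceptually, post-training quantization maps float32 weights to 8-bit integers, which is where the 4× size reduction comes from (4 bytes → 1 byte per weight). The sketch below shows affine int8 quantization in plain NumPy for illustration; `quantize.py` may instead call a framework routine such as PyTorch's:

```python
import numpy as np

def quantize_int8(w):
    """Affine (asymmetric) int8 quantization: map [min, max] linearly onto [-128, 127]."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0          # guard against constant tensors
    zero_point = round(-128 - lo / scale)      # integer offset so lo maps to -128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 values for inference."""
    return (q.astype(np.float32) - zero_point) * scale
```

The round-trip error is bounded by the quantization step `scale`, which is why accuracy and fairness metrics shift only slightly after compression.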
### 3. Apply bias mitigation

Calibrated Equalized Odds:

```bash
python src/train.py \
    --dataset celeba \
    --epochs 10 \
    --fairness caleqod \
    --save-dir trials/caleqod/
```
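Calibrated Equalized Odds is a post-hoc method: it adjusts group-wise decisions after training. The full method (Pleiss et al.) mixes the calibrated classifier with a trivial predictor; the simplified stand-in below conveys the idea by choosing a per-group threshold that equalizes false-positive rates (all names are illustrative, not the repo's API):

```python
import numpy as np

def equalize_fpr_thresholds(scores, y_true, sensitive, target_fpr=0.1):
    """Per-group decision thresholds so each group's FPR matches a common target.
    A simplified sketch in the spirit of calibrated equalized odds, not the
    full randomized-mixing procedure."""
    thresholds = {}
    for g in np.unique(sensitive):
        # negatives of this group: threshold at the (1 - target_fpr) quantile
        neg = scores[(sensitive == g) & (y_true == 0)]
        thresholds[g] = np.quantile(neg, 1.0 - target_fpr)
    return thresholds
```

Because only thresholds move, the underlying (possibly quantized) model is untouched, which is what makes this family of methods attractive for compressed models.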
FairALM:

```bash
python src/train.py \
    --dataset celeba \
    --epochs 10 \
    --fairness fairalm \
    --save-dir trials/fairalm/
```
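Unlike the post-hoc route, FairALM enforces fairness constraints during training via an augmented Lagrangian. The toy below shows only the mechanics on a 2-D quadratic, with an equality constraint standing in for a fairness surrogate; FairALM's actual surrogates and updates differ:

```python
import numpy as np

# Minimize f(w) = (w0 - 2)^2 + (w1 + 2)^2 subject to the stand-in
# "fairness" constraint c(w) = w0 - w1 = 0, using the augmented Lagrangian
#     L(w) = f(w) + lam * c(w) + (rho / 2) * c(w)^2
# with dual ascent lam += rho * c(w) after each primal solve.

def solve(rho=1.0, lr=0.1, outer=20, inner=50):
    w, lam = np.array([3.0, -1.0]), 0.0
    for _ in range(outer):
        for _ in range(inner):                        # primal: minimize L in w
            c = w[0] - w[1]
            grad = np.array([2 * (w[0] - 2), 2 * (w[1] + 2)])
            grad += (lam + rho * c) * np.array([1.0, -1.0])
            w = w - lr * grad
        lam += rho * (w[0] - w[1])                    # dual: raise the multiplier
    return w                                          # ≈ (0, 0), the constrained optimum
```

The multiplier grows until violating the constraint is no longer worth the gain in task loss, which is how a hard fairness requirement gets folded into ordinary gradient training.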
## Key Results

- **Bias mitigation:** Implemented post-training Calibrated Equalized Odds and FairALM, reducing demographic disparity by 9.5 % relative (EO gap 0.21 → 0.19) at the cost of a three-point accuracy drop (82 % → 79 %).
- **Compression:** Achieved a 4× model-size reduction via post-training int8 quantization with minimal fairness degradation.
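Both headline numbers follow directly from the definitions (relative EO-gap reduction; float32 → int8 storage):

```python
# Relative reduction of the equalized-odds gap reported above.
eo_before, eo_after = 0.21, 0.19
reduction = (eo_before - eo_after) / eo_before   # ≈ 0.095, i.e. ~9.5 %

# Size ratio from storing int8 weights instead of float32.
bytes_fp32, bytes_int8 = 4, 1
compression = bytes_fp32 / bytes_int8            # 4x
```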
## Future Work

- Explore joint optimization of compression and fairness constraints during training.
- Evaluate on additional protected attributes and datasets.
- Integrate more advanced compression schemes (e.g., pruning + quantization).