Developed for the project course CS 299 and for work extending beyond it, under the supervision of Prof. Manisha Padala.

Fairness of Compression Techniques in CNNs

This repository accompanies our mid-project report and final poster on investigating the interplay between model compression and algorithmic fairness in convolutional neural networks. We focus on quantization-based compression and two post-hoc bias mitigation strategies:

  • Calibrated Equalized Odds (CalEqOd)
  • FairALM (Augmented Lagrangian Method)
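As a rough sketch of the augmented Lagrangian idea behind FairALM (the names and simplifications here are ours, not the paper's exact formulation): the fairness disparity c(θ) is treated as a constraint, entering the training loss through a multiplier term plus a quadratic penalty, and the multiplier is raised whenever the constraint is violated.

```python
# Schematic FairALM-style update (illustrative only, not the repository's
# actual training code). `disparity` stands for a differentiable fairness
# violation c(theta), e.g. a gap in group-wise positive rates.

def augmented_loss(task_loss, disparity, lam, rho):
    """Task loss plus the augmented Lagrangian fairness penalty."""
    return task_loss + lam * disparity + 0.5 * rho * disparity ** 2

def dual_update(lam, disparity, rho):
    """Gradient-ascent step on the multiplier, taken e.g. once per epoch."""
    return max(0.0, lam + rho * disparity)
```

Each primal step minimizes `augmented_loss` over the model parameters; the dual step then increases `lam` in proportion to the remaining disparity, so persistent unfairness is penalized more and more heavily.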

📄 Mid-Project Report

A detailed write-up of our methodology, datasets, and preliminary approach is available in the PDF:
Fairness_and_Compression.pdf

Final Poster

Our poster summarizes key findings, including compression ratios and fairness trade-offs:
View Poster (PDF)


Requirements & Setup

  • Python: ≥3.8
  • Install dependencies
    pip install -r requirements.txt

Repository Structure

.
├── data/                   # CelebA subsets and preprocessing scripts
├── src/                    # Training, quantization & evaluation code
│   ├── train.py            # Model training with/without fairness constraints
│   ├── quantize.py         # Post-training quantization routines
│   └── eval.py             # Compute accuracy, EO, FPR disparities
├── trials/                 # Saved checkpoints & log files
├── Fairness_and_Compression.pdf  # Mid-project report
├── fairness_of_model_poster_aryan.pdf  # Final poster
├── requirements.txt
└── README.md
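The fairness quantities that eval.py reports (equalized-odds and FPR disparities) can be illustrated with a minimal standalone computation; the function names below are hypothetical, not the repository's actual API.

```python
# Illustrative fairness metrics for binary classification with a binary
# protected attribute (0/1). Not the repository's eval.py, just the idea.

def group_rates(y_true, y_pred, group, g):
    """True-positive and false-positive rates within protected group g."""
    idx = [i for i, gi in enumerate(group) if gi == g]
    tp = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 1)
    fn = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 0)
    fp = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 1)
    tn = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 0)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr

def eo_gap(y_true, y_pred, group):
    """Equalized-odds gap: worst disparity in TPR or FPR across groups."""
    tpr0, fpr0 = group_rates(y_true, y_pred, group, 0)
    tpr1, fpr1 = group_rates(y_true, y_pred, group, 1)
    return max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))

def fpr_disparity(y_true, y_pred, group):
    """Absolute gap in false-positive rates between the two groups."""
    return abs(group_rates(y_true, y_pred, group, 0)[1]
               - group_rates(y_true, y_pred, group, 1)[1])
```

A classifier satisfies equalized odds when both the TPR and FPR gaps are zero; the EO gap reported in our results is the larger of the two.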

Usage

  1. Train & Evaluate (baseline CNN)

    python src/train.py \
      --dataset celeba \
      --epochs 20 \
      --save-dir trials/baseline/
    python src/eval.py \
      --checkpoint trials/baseline/model.pt
  2. Quantize & Measure Fairness

    python src/quantize.py \
      --checkpoint trials/baseline/model.pt \
      --output-dir trials/quantized/
    
    python src/eval.py \
      --checkpoint trials/quantized/model_int8.pt
  3. Apply Bias Mitigation

    • Calibrated Equalized Odds
      python src/train.py \
        --dataset celeba \
        --epochs 10 \
        --fairness caleqod \
        --save-dir trials/caleqod/
    • FairALM
      python src/train.py \
        --dataset celeba \
        --epochs 10 \
        --fairness fairalm \
        --save-dir trials/fairalm/
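The post-training quantization in step 2 can be sketched with a toy per-tensor affine int8 scheme: each float weight is mapped to an 8-bit integer via a scale and zero point fitted to the tensor's range. This is a self-contained illustration of the technique, not the actual quantize.py implementation.

```python
# Toy per-tensor affine (asymmetric) int8 quantization, for illustration.

def quantize_int8(weights):
    """Map float weights to int8 using a per-tensor scale and zero point."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0          # 256 representable levels
    zero_point = round(-lo / scale) - 128     # int8 value that encodes 0.0's offset
    return ([max(-128, min(127, round(w / scale) + zero_point)) for w in weights],
            scale, zero_point)

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from int8 values."""
    return [(qi - zero_point) * scale for qi in q]

w = [-0.5, 0.0, 0.25, 1.0]
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)   # each entry within one quantization step of w
```

The round-trip error is bounded by half a quantization step (scale/2), which is why accuracy and fairness metrics can shift slightly after quantization even though the architecture is unchanged.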

Key Results

  • Bias mitigation: Implemented post-training Calibrated Equalized Odds and FairALM, reducing demographic disparity by 9.5% (EO gap 0.21→0.19) at the cost of a modest accuracy drop (82%→79%).
  • Compression: Achieved 4× model size reduction via post-training quantization with minimal fairness degradation.
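The 4× size reduction follows directly from the storage widths involved: int8 stores each weight in 1 byte versus 4 bytes for fp32. The parameter count below is an arbitrary example, not a measurement from our models.

```python
# Why int8 post-training quantization gives ~4x size reduction:
# fp32 weights take 4 bytes each, int8 weights take 1 byte each.

params = 11_000_000              # e.g. a ResNet-18-sized CNN (illustrative)
fp32_mb = params * 4 / 1e6       # 44.0 MB at 4 bytes per weight
int8_mb = params * 1 / 1e6       # 11.0 MB at 1 byte per weight
print(fp32_mb / int8_mb)         # -> 4.0
```

In practice the on-disk ratio is slightly below 4× because scales, zero points, and any layers left in float add a small overhead.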

Future Prospects

  • Explore joint optimization of compression and fairness constraints during training.
  • Evaluate on additional protected attributes and datasets.
  • Integrate more advanced compression schemes (e.g., pruning + quantization).
