diff --git a/README.md b/README.md
index ec4bcd3..057cf48 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
# Attribute Information Leakage of SER Application in Federated Learning
-This repository contains the official implementation (in [PyTorch](https://pytorch.org/) and [PyTorch Lightning](https://www.pytorchlightning.ai/)) of Attribute Inference Attack of Speech Emotion Recognition in Federated Learning.
+This repository contains the official implementation (in [PyTorch](https://pytorch.org/)) of Attribute Inference Attack of Speech Emotion Recognition in Federated Learning.
## Speech Features
@@ -34,11 +34,11 @@ Two common scenarios in FL are:
#### 1. FedSGD (gradients are shared):
-![](model/fed_sgd.png)

+![](img/fed_sgd.png)
#### 2. FedAvg (model parameters are shared):
-![](model/fed_avg.png)
+![](img/fed_avg.png)
The table shows the prediction results of the SER model trained in the two FL scenarios: FedSGD and FedAvg. We report the accuracy and unweighted average recall (UAR) scores of the SER task on each individual data set. In the baseline experiment, we set the learning rate to 0.05 in FedSGD and 0.0005 in FedAvg. The local batch size is 20, and the number of global training epochs is set to 200; 10% of the clients participate in each global training epoch.
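The FedAvg training loop described above can be sketched as follows. This is a minimal, illustrative stand-in (not the repository's implementation): `local_update` is a hypothetical per-client training step with a toy gradient so the sketch runs end to end; in the real setup each client trains the PyTorch SER model on its local batches.

```python
import random

def local_update(params, client_data, lr):
    # Hypothetical local SGD step: the "gradient" here is a toy
    # stand-in (client_data minus params) so the sketch is runnable.
    grads = [x - p for p, x in zip(params, client_data)]
    return [p + lr * g for p, g in zip(params, grads)]

def fed_avg_round(global_params, clients, lr=0.0005, frac=0.1):
    """One FedAvg round: sample a fraction of the clients (10% in the
    baseline), run local training, then average the returned model
    parameters (FedAvg shares parameters, not gradients)."""
    k = max(1, int(frac * len(clients)))
    sampled = random.sample(clients, k)
    updates = [local_update(global_params, c, lr) for c in sampled]
    # Element-wise average of the sampled clients' parameters
    return [sum(ps) / k for ps in zip(*updates)]
```

In FedSGD the same loop would instead average the clients' raw gradients and apply one server-side step with the (larger) learning rate.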
@@ -50,14 +50,14 @@ Table shows the prediction results of the SER model trained in two FL scenarios:
The figure shows the problem setup of the attribute inference attack in this work. **The primary application is SER**, where the **adversaries (an outside attacker or the curious server) attempt to predict the gender (the sensitive attribute)** of a client using the model updates shared while training the SER model.
-![](model/attack_problem.png)
+![](img/attack_problem.png)
## Attack Framework
Our attack framework mimics the framework commonly used in membership inference attacks (MIA). It consists of training shadow models, forming the attack training data set, and training the attack model, as shown below.
-![](model/attack_framework.png)
+![](img/attack_framework.png)
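The three stages above can be sketched as one pipeline. All four callables are hypothetical stand-ins for the corresponding components in the repository; the point is the data flow: shadow models produce per-client updates, each labelled with that client's known gender.

```python
def shadow_attack_pipeline(aux_datasets, train_shadow,
                           collect_updates, train_attacker):
    """MIA-style recipe: (1) train shadow SER models on auxiliary
    data, (2) log each client's shared updates together with its
    known gender, (3) fit the attack model on those pairs."""
    features, labels = [], []
    for data in aux_datasets:
        shadow = train_shadow(data)                    # 1. shadow training
        for update, gender in collect_updates(shadow, data):
            features.append(update)                    # 2. attack data set
            labels.append(gender)
    return train_attacker(features, labels)            # 3. attack model
```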
#### 1. Shadow Training
@@ -72,27 +72,27 @@ Here, we construct our attack training data set using the gradients input data a
Our attack model architecture is shown below:
-![](model/attack_model.png)
+![](img/attack_model.png)
## So how easy is the attack?
The short answer is: inferring the gender of a client from the shared model updates (UAR scores in the table) is trivial when training the SER model in both FedSGD and FedAvg.
-![](results/attack_result.png)
+![](img/attack_result.png)
## So which layer leaks most information in this attack?
The short answer is: the shared updates between the feature input and the first dense layer (UAR scores in the table).
-![](results/attack_layer_result.png)
+![](img/attack_layer_result.png)
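In practice this means the attacker only needs the update of the first dense layer's weights as its feature vector. A minimal sketch, assuming the shared update is a dict mapping layer names (the name `dense1.weight` is illustrative) to nested lists of weight deltas:

```python
def first_layer_features(update, layer_name="dense1.weight"):
    """Extract the most informative attack input: the update of the
    weights between the feature input and the first dense layer,
    flattened into a 1-D feature vector for the attack model."""
    rows = update[layer_name]            # 2-D weight-delta matrix
    return [w for row in rows for w in row]
```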
## So will the dropout decrease the attack performance?
The short answer is: increasing the dropout rate makes the attack stronger (UAR scores in the table).
-![](results/attack_dropout.png)
+![](img/attack_dropout.png)
## References
diff --git a/results/attack_dropout.png b/img/attack_dropout.png
similarity index 100%
rename from results/attack_dropout.png
rename to img/attack_dropout.png
diff --git a/model/attack_framework.png b/img/attack_framework.png
similarity index 100%
rename from model/attack_framework.png
rename to img/attack_framework.png
diff --git a/results/attack_layer_result.png b/img/attack_layer_result.png
similarity index 100%
rename from results/attack_layer_result.png
rename to img/attack_layer_result.png
diff --git a/model/attack_model.png b/img/attack_model.png
similarity index 100%
rename from model/attack_model.png
rename to img/attack_model.png
diff --git a/model/attack_problem.png b/img/attack_problem.png
similarity index 100%
rename from model/attack_problem.png
rename to img/attack_problem.png
diff --git a/results/attack_result.png b/img/attack_result.png
similarity index 100%
rename from results/attack_result.png
rename to img/attack_result.png
diff --git a/model/fed_avg.png b/img/fed_avg.png
similarity index 100%
rename from model/fed_avg.png
rename to img/fed_avg.png
diff --git a/model/fed_sgd.png b/img/fed_sgd.png
similarity index 100%
rename from model/fed_sgd.png
rename to img/fed_sgd.png
diff --git a/model/fl_global.png b/img/fl_global.png
similarity index 100%
rename from model/fl_global.png
rename to img/fl_global.png
diff --git a/results/fl_result.png b/img/fl_result.png
similarity index 100%
rename from results/fl_result.png
rename to img/fl_result.png