@@ -6,8 +6,9 @@ Partial FC is a distributed deep learning training framework for face recognitio
 
 ## Contents
 [Partial FC](https://arxiv.org/abs/2010.05222)
-- [Largest Face Recognition Dataset: **Glint360k**](#Glint360k)
-- [Distributed Training Performance](#Performance)
+- [Largest Face Recognition Dataset: **Glint360k**](#Glint360K)
+- [Docker](#Docker)
+- [Performance On Million Identities](#Benchmark)
 - [FAQ](#FAQ)
 - [Citation](#Citation)
 
@@ -18,7 +19,7 @@ which contains **`17091657`** images of **`360232`** individuals.
 By employing the Partial FC training strategy, baseline models trained on Glint360K can easily achieve state-of-the-art performance.
 Detailed evaluation results on the large-scale test sets (e.g. IFRT, IJB-C and Megaface) are as follows:
 
-#### Evaluation on IFRT
+### 1. Evaluation on IFRT
 **`r`** denotes the sampling rate of negative class centers.
 | Backbone | Dataset | African | Caucasian | Indian | Asian | ALL |
 | ------------ | ----------- | ----- | ----- | ------ | ----- | ----- |
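The sampling rate `r` above controls how many negative class centers take part in the softmax. As a rough illustration, the idea can be sketched in NumPy (a toy sketch with assumed shapes and a made-up helper name, not the repository's actual implementation):

```python
import numpy as np

def sample_class_centers(labels, num_classes, r, rng=None):
    """Toy sketch of Partial FC's negative class-center sampling.

    Keeps every positive class present in the batch and fills the rest
    of the budget (a fraction `r` of all classes) with randomly sampled
    negative class centers.
    """
    rng = rng or np.random.default_rng()
    positives = np.unique(labels)
    num_sample = max(int(r * num_classes), len(positives))
    negatives = np.setdiff1d(np.arange(num_classes), positives)
    sampled_neg = rng.choice(negatives, size=num_sample - len(positives),
                             replace=False)
    return np.sort(np.concatenate([positives, sampled_neg]))

# With r=0.1 and 1000 classes, only ~100 centers enter the softmax.
idx = sample_class_centers(np.array([3, 42, 7]), num_classes=1000, r=0.1)
```

The full softmax is then computed only over the selected centers, which is what makes the r=0.1 rows in the table so much cheaper than r=1.0 while staying close in accuracy.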
@@ -27,19 +28,19 @@ Detailed evaluation results on the large-scale test set (e.g. IFRT, IJB-C and Me
 | R100 | **Glint360k** (r=1.0) | 89.50 | 94.23 | 93.54 | **65.07** | **88.67** |
 | R100 | **Glint360k** (r=0.1) | **90.45** | **94.60** | **93.96** | 63.91 | 88.23 |
 
-#### Evaluation on IJB-C and Megaface
+### 2. Evaluation on IJB-C and Megaface
 We employ ResNet100 as the backbone and CosFace (m=0.4) as the loss function.
 TAR@FAR=1e-4 is reported on the IJB-C dataset, and TAR@FAR=1e-6 is reported on the Megaface dataset.
 | Test Dataset | IJB-C | Megaface_Id | Megaface_Ver |
 | :--- | :---: | :---: | :---: |
 | MS1MV2 | 96.4 | 98.3 | 98.6 |
 | **Glint360k** | **97.3** | **99.1** | **99.1** |
 
-#### License
+### 3. License
 
 The Glint360K dataset and the models trained on it are available for non-commercial research purposes only.
 
-#### Download
+### 4. Download
 - [x] [**Baidu Drive**](https://pan.baidu.com/s/1GsYqTTt7_Dn8BfxxsLFN0w) (code: o3az)
 - [x] **Magnet URI**: `magnet:?xt=urn:btih:E5F46EE502B9E76DA8CC3A0E4F7C17E4000C7B1E&dn=glint360k`
 
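TAR@FAR figures like those in the table come from verification score distributions: pick the similarity threshold at which the impostor (false-accept) rate equals the target FAR, then report the fraction of genuine pairs accepted at that threshold. A minimal sketch with synthetic scores (our own illustration; this is not the benchmarks' official evaluation code):

```python
import numpy as np

def tar_at_far(genuine, impostor, far):
    """Toy TAR@FAR: find the threshold where the impostor acceptance
    rate equals `far`, then report genuine acceptance at it."""
    impostor = np.sort(impostor)[::-1]      # descending impostor scores
    k = max(int(far * len(impostor)), 1)    # number of allowed false accepts
    threshold = impostor[k - 1]
    return float(np.mean(genuine >= threshold))

# Synthetic similarity scores just to exercise the function.
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 10000)
impostor = rng.normal(0.2, 0.1, 10000)
tar = tar_at_far(genuine, impostor, far=1e-3)
```

Note that a FAR of 1e-6, as used for Megaface, requires on the order of millions of impostor pairs for the threshold to be meaningful.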
@@ -62,7 +63,7 @@ cat glint360k_* | tar -xzvf -
 ```
 Use [unpack_glint360k.py](./unpack_glint360k.py) to unpack.
 
-#### Pretrain models
+### 5. Pretrained models
 - [x] [**Baidu Drive**](https://pan.baidu.com/s/1sd9ZRsV2c_dWHW84kz1P1Q) (code: befi)
 - [x] [**Google Drive**](https://drive.google.com/drive/folders/1WLjDzEs1wC1K1jxDHNJ7dhEmQ3rOOILl?usp=sharing)
 
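For reference, the `cat glint360k_* | tar -xzvf -` step shown above works because the download is a single gzipped tar split into sequential chunks, so concatenating the chunks restores the original archive stream. A small self-contained demonstration with dummy data (the part names here are made up to mimic the real download):

```shell
cd "$(mktemp -d)"
mkdir demo && echo "sample" > demo/file.txt
tar -czf archive.tar.gz demo                 # one gzipped tar archive
split -b 64 archive.tar.gz glint360k_part_   # mimic the chunked download
rm -rf demo archive.tar.gz
cat glint360k_part_* | tar -xzvf -           # concatenation restores the stream
```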
@@ -73,7 +74,7 @@ Use [unpack_glint360k.py](./unpack_glint360k.py) to unpack.
 | pytorch | [R50](https://drive.google.com/drive/folders/16hjOGRJpwsJCRjIBbO13z3SrSgvPTaMV?usp=sharing) | 1.0 | 97.0 | - |
 | pytorch | [R100](https://drive.google.com/drive/folders/19EHffHN0Yn8DjYm5ofrgVOf_xfkrVgqc?usp=sharing) | 1.0 | 97.4 | - |
 
-## Docker For Partial-FC
+## Docker
 Make sure you have installed the NVIDIA driver and the Docker engine for your Linux distribution. Note that you do not need to
 install the CUDA Toolkit or other dependencies on the host system; only the NVIDIA driver is required.
 Because the CUDA version used in the image is 10.1,
@@ -97,7 +98,7 @@ sudo docker run -it -v /train_tmp:/train_tmp --net=host --privileged --gpus 8 --
 `/train_tmp` is where you put your training set (if you have enough RAM,
 you can mount it as a `tmpfs` first).
 
-## Training Speed Benchmark
+## Benchmark
 ### 1. Train Glint360K Using MXNet
 
 | Backbone | GPU | FP16 | Batch size / iter | Throughput (img/sec) |
@@ -112,7 +113,7 @@ you can turn it into `tmpfs` first).
 | R50 | 8 * Tesla V100-SXM2-32GB | True | 128 | 6112 |
 
 
-## Performance On Million Identities
+### 2. Performance On Million Identities
 We neglect the influence of I/O. All experiments use mixed-precision training, and the backbone is ResNet50.
 #### 1 Million Identities On 8 RTX2080Ti
 
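A back-of-envelope calculation (our own arithmetic under assumed fp32 weights and SGD-with-momentum state, not numbers from the paper) shows why the final classification layer dominates memory at this scale:

```python
# Rough memory estimate for the final FC layer with 1M identities.
embedding_dim = 512            # assumed embedding size
num_identities = 1_000_000
bytes_per_param = 4            # fp32 master weights

weight_gb = embedding_dim * num_identities * bytes_per_param / 1024**3
# Weights + gradients + momentum buffers roughly triple the cost.
optimizer_state_gb = 3 * weight_gb

num_gpus = 8
shard_gb = optimizer_state_gb / num_gpus   # per-GPU share under model parallel
# Partial FC keeps the same sharded storage, but computes the softmax over
# only a sampled fraction (e.g. r=0.1) of the centers, so the logits and
# activation memory no longer scale with the full identity count.
```

This is why plain model parallel runs out of headroom as batch size grows in the table below, while Partial FC sustains larger batches on the same hardware.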
@@ -127,11 +128,6 @@ We neglect the influence of IO. All experiments use mixed-precision training, an
 | Model Parallel | 64 | 2048 | 9684 | 4483 | GPU |
 | **Partial FC (Ours)** | **64** | **4096** | **6722** | **12600** | GPU |
 
-## TODO
-- [ ] Mixed precision training (pytorch)
-- [ ] Pipeline Parallel (pytorch)
-- [ ] Docker (include mxnet and pytorch)
-- [ ] A Wonderful Documents
 
 ## FAQ
 #### Glint360K's Face Alignment Settings?