
Commit 8ef59dc

Authored by FileSystemGuy, izzet, gaikwadabhishek, enakta, and 0xE0F
Sync: Reset to upstream/main (#4)
* refactor: convert direct imports to lazy imports in profiler_factory (argonne-lcf#325)

  - Move profiler imports inside the get_profiler() method
  - Benefits:
    - Avoids loading TFProfiler (which imports tensorflow) unless needed
    - Reduces import overhead for users not using the TENSORBOARD profiler
    - The default profiler (IOSTAT) no longer triggers a tensorflow import
  - No breaking changes: same API, same behavior

* feat: add native AIStore storage backend (argonne-lcf#321)

  Add a native AIStore storage handler that uses the official AIStore Python SDK for direct access, bypassing the S3 compatibility layer for better performance and simpler configuration.

  Changes:
  - Add AIStoreStorage class with full CRUD operations, range reads, and prefix-based object listing
  - Add StorageType.AISTORE enum and wire it through StorageFactory, GeneratorFactory, and ReaderFactory (reuses S3 generators/readers)
  - Add AIStore endpoint configuration support in ConfigArguments
  - Add 'aistore' optional dependency in setup.py
  - Add mock-based test suite with full AIStore SDK simulation
  - Add CI workflow for AIStore tests
  - Add storage configuration section to documentation

  Supported formats: NPY, NPZ, JPEG
  Supported frameworks: PyTorch, TensorFlow

  Signed-off-by: Abhishek Gaikwad <gaikwadabhishek1997@gmail.com>

* fix(counters): train phase was not evaluated (argonne-lcf#328)

  PR argonne-lcf#302 moved the loop-breaking condition from the end of the loop to its start. As a result, self.stats.end_block of the current block never fires, because the closing iteration never starts. Trying a regular PyTorch loader from a local filesystem:

  ```
  [OUTPUT] 2026-02-27T06:58:50.214359 Running DLIO [Training & Evaluation] with 2 process(es)
  [WARNING] The amount of dataset is smaller than the host memory; data might be cached after the first epoch. Increase the size of dataset to eliminate the caching effect!!!
  [OUTPUT] 2026-02-27T06:58:50.229669 Max steps per epoch: 128 = 1 * 1024 / 4 / 2 (samples per file * num files / batch size / comm size)
  [OUTPUT] 2026-02-27T06:58:50.229764 Steps per eval: 32 = 1 * 64 / 1 / 2 (samples per file * num files / batch size eval / comm size)
  [OUTPUT] 2026-02-27T06:58:50.278417 Starting epoch 1: 128 steps expected
  [OUTPUT] 2026-02-27T06:58:50.278614 Starting block 1
  [OUTPUT] 2026-02-27T06:59:03.743752 Ending epoch 1 - 128 steps completed in 13.47 s
  [OUTPUT] 2026-02-27T06:59:03.747196 Starting eval - 32 steps expected
  [OUTPUT] 2026-02-27T06:59:07.122980 Ending eval - 32 steps completed in 3.38 s
  [OUTPUT] 2026-02-27T06:59:07.124598 Epoch 1 [Eval] Accelerator Utilization [AU] (%): 99.4141
  [OUTPUT] 2026-02-27T06:59:07.124644 Epoch 1 [Eval] Throughput (samples/second): 18.9592
  [OUTPUT] 2026-02-27T06:59:07.130596 Starting epoch 2: 128 steps expected
  [OUTPUT] 2026-02-27T06:59:07.130832 Starting block 1
  [OUTPUT] 2026-02-27T06:59:20.047588 Ending epoch 2 - 128 steps completed in 12.92 s
  [OUTPUT] 2026-02-27T06:59:20.048553 Starting eval - 32 steps expected
  [OUTPUT] 2026-02-27T06:59:23.276666 Ending eval - 32 steps completed in 3.23 s
  [OUTPUT] 2026-02-27T06:59:23.277556 Epoch 2 [Eval] Accelerator Utilization [AU] (%): 99.4022
  [OUTPUT] 2026-02-27T06:59:23.277595 Epoch 2 [Eval] Throughput (samples/second): 19.8261
  [OUTPUT] 2026-02-27T06:59:23.280422 Starting epoch 3: 128 steps expected
  [OUTPUT] 2026-02-27T06:59:23.280591 Starting block 1
  [OUTPUT] 2026-02-27T06:59:36.196122 Ending epoch 3 - 128 steps completed in 12.92 s
  [OUTPUT] 2026-02-27T06:59:36.197005 Starting eval - 32 steps expected
  [OUTPUT] 2026-02-27T06:59:39.425806 Ending eval - 32 steps completed in 3.23 s
  [OUTPUT] 2026-02-27T06:59:39.426645 Epoch 3 [Eval] Accelerator Utilization [AU] (%): 99.4032
  [OUTPUT] 2026-02-27T06:59:39.426682 Epoch 3 [Eval] Throughput (samples/second): 19.8219
  [OUTPUT] 2026-02-27T06:59:39.469524 Saved outputs in /lus/flare/projects/DAOS_Testing/PAP166/hydra_log/default/2026-02-27-06-58-50
  [OUTPUT] Averaged metric over all steps/epochs
  [METRIC] ==========================================================
  [METRIC] Number of Simulated Accelerators: 2
  [METRIC] Training Accelerator Utilization [AU] (%): 0.0000 (0.0000)
  [METRIC] Training Throughput (samples/second): 0.0000 (0.0000)
  [METRIC] Training I/O Throughput (MB/second): 0.0000 (0.0000)
  [METRIC] train_au_meet_expectation: fail
  [METRIC] Eval Accelerator Utilization [AU] (%): 49.7048 (0.0028)
  [METRIC] Eval Throughput (samples/second): 9.765259 (0.206374)
  [METRIC] Eval Throughput (MB/second): 0.038146 (0.000806)
  [METRIC] eval_au_meet_expectation: fail
  [METRIC] ==========================================================
  [OUTPUT] 2026-02-27T06:59:39.484237 outputs saved in RANKID_output.json
  ```

  Notice that the logs only show the start of each block and never its end. After the fix:

  ```
  [OUTPUT] 2026-02-28T12:30:28.000590 Running DLIO [Training & Evaluation] with 2 process(es)
  [WARNING] The amount of dataset is smaller than the host memory; data might be cached after the first epoch. Increase the size of dataset to eliminate the caching effect!!!
  [WARNING] Number of files for training in /dataset/train (4000) is more than requested (64). A subset of files will be used
  [WARNING] Number of files for training in /dataset/train (4000) is more than requested (64). A subset of files will be used
  [OUTPUT] 2026-02-28T12:30:28.102857 Max steps per epoch: 8 = 1 * 64 / 4 / 2 (samples per file * num files / batch size / comm size)
  [OUTPUT] 2026-02-28T12:30:28.102992 Steps per eval: 4 = 1 * 8 / 1 / 2 (samples per file * num files / batch size eval / comm size)
  [OUTPUT] 2026-02-28T12:30:30.572480 Starting epoch 1: 8 steps expected
  [OUTPUT] 2026-02-28T12:30:30.573084 Starting block 1
  [OUTPUT] 2026-02-28T12:30:30.734535 Ending block 1 - 8 steps completed in 0.16 s
  [OUTPUT] 2026-02-28T12:30:30.740906 Epoch 1 - Block 1 [Training] Accelerator Utilization [AU] (%): 0.1428
  [OUTPUT] 2026-02-28T12:30:30.740994 Epoch 1 - Block 1 [Training] Throughput (samples/second): 1753.1357
  [OUTPUT] 2026-02-28T12:30:30.741060 Epoch 1 - Block 1 [Training] Computation time per step (second): 0.0000+/-0.0000 (set value: {})
  [OUTPUT] 2026-02-28T12:30:30.741497 Ending epoch 1 - 8 steps completed in 0.17 s
  [OUTPUT] 2026-02-28T12:30:30.742789 Starting eval - 4 steps expected
  [OUTPUT] 2026-02-28T12:30:30.889307 Ending eval - 4 steps completed in 0.15 s
  [OUTPUT] 2026-02-28T12:30:30.891985 Epoch 1 [Eval] Accelerator Utilization [AU] (%): 0.0720
  [OUTPUT] 2026-02-28T12:30:30.892054 Epoch 1 [Eval] Throughput (samples/second): 54.6620
  [OUTPUT] 2026-02-28T12:30:30.900919 Starting epoch 2: 8 steps expected
  [OUTPUT] 2026-02-28T12:30:30.901249 Starting block 1
  [OUTPUT] 2026-02-28T12:30:30.914273 Ending block 1 - 8 steps completed in 0.01 s
  [OUTPUT] 2026-02-28T12:30:30.915472 Epoch 2 - Block 1 [Training] Accelerator Utilization [AU] (%): 1.9055
  [OUTPUT] 2026-02-28T12:30:30.915541 Epoch 2 - Block 1 [Training] Throughput (samples/second): 7765.7316
  [OUTPUT] 2026-02-28T12:30:30.915595 Epoch 2 - Block 1 [Training] Computation time per step (second): 0.0000+/-0.0000 (set value: {})
  [OUTPUT] 2026-02-28T12:30:30.915931 Ending epoch 2 - 8 steps completed in 0.02 s
  [OUTPUT] 2026-02-28T12:30:30.917061 Starting eval - 4 steps expected
  [OUTPUT] 2026-02-28T12:30:30.958733 Ending eval - 4 steps completed in 0.04 s
  [OUTPUT] 2026-02-28T12:30:30.959729 Epoch 2 [Eval] Accelerator Utilization [AU] (%): 0.0381
  [OUTPUT] 2026-02-28T12:30:30.959768 Epoch 2 [Eval] Throughput (samples/second): 192.2493
  [OUTPUT] 2026-02-28T12:30:30.960091 Starting epoch 3: 8 steps expected
  [OUTPUT] 2026-02-28T12:30:30.960275 Starting block 1
  [OUTPUT] 2026-02-28T12:30:30.976061 Ending block 1 - 8 steps completed in 0.02 s
  [OUTPUT] 2026-02-28T12:30:30.977423 Epoch 3 - Block 1 [Training] Accelerator Utilization [AU] (%): 0.6369
  [OUTPUT] 2026-02-28T12:30:30.977483 Epoch 3 - Block 1 [Training] Throughput (samples/second): 6020.3520
  [OUTPUT] 2026-02-28T12:30:30.977534 Epoch 3 - Block 1 [Training] Computation time per step (second): 0.0000+/-0.0000 (set value: {})
  [OUTPUT] 2026-02-28T12:30:30.977792 Ending epoch 3 - 8 steps completed in 0.02 s
  [OUTPUT] 2026-02-28T12:30:30.978884 Starting eval - 4 steps expected
  [OUTPUT] 2026-02-28T12:30:30.983803 Ending eval - 4 steps completed in 0.00 s
  [OUTPUT] 2026-02-28T12:30:30.984927 Epoch 3 [Eval] Accelerator Utilization [AU] (%): 1.3682
  [OUTPUT] 2026-02-28T12:30:30.984986 Epoch 3 [Eval] Throughput (samples/second): 1641.1245
  [OUTPUT] 2026-02-28T12:30:30.986010 Saved outputs in /home/denis/dev/enakta/dlio_benchmark/hydra_log/default/2026-02-28-12-30-25
  [OUTPUT] Averaged metric over all steps/epochs
  [METRIC] ==========================================================
  [METRIC] Number of Simulated Accelerators: 2
  [METRIC] Training Accelerator Utilization [AU] (%): 0.5939 (0.4129)
  [METRIC] Training Throughput (samples/second): 4948.3957 (2466.6534)
  [METRIC] Training I/O Throughput (MB/second): 19.3297 (9.6354)
  [METRIC] train_au_meet_expectation: fail
  [METRIC] Eval Accelerator Utilization [AU] (%): 0.4704 (0.5038)
  [METRIC] Eval Throughput (samples/second): 444.414075 (396.070635)
  [METRIC] Eval Throughput (MB/second): 1.735992 (1.547151)
  [METRIC] eval_au_meet_expectation: fail
  [METRIC] ==========================================================
  [OUTPUT] 2026-02-28T12:30:30.987839 outputs saved in RANKID_output.json
  ```

  Signed-off-by: Denis Barakhtanov <dbarahtanov@enakta.com>

  * fix: remove unreachable branch

    Signed-off-by: Denis Barakhtanov <dbarahtanov@enakta.com>

  ---------

  Signed-off-by: Denis Barakhtanov <dbarahtanov@enakta.com>
  Co-authored-by: Denis Barakhtanov <denis.barahtanov@gmail.com>

* refactor(generators): unify generators to work with any storage backend (argonne-lcf#329)

  Every new storage backend required copy-pasting each generator into an _XXX sibling file: npz_generator_s3.py, npy_generator_s3.py, and so on. The only difference was whether the output was written locally to disk, directly via numpy/PIL, or via the storage interface. This makes the pattern unsustainable: two duplicated formats today, more with each new backend, and a significant maintenance burden.

  Since all generators already held a storage instance and used it to generate file names, we can leverage it. The single set of generators can now check whether the storage is locally available via `islocalfs` and use local optimisations, if any. If the storage is not local, the sample is serialized to an io.BytesIO buffer, buf.getvalue() is called, and the bytes are delegated to self.storage.put_data(). All storage backends now receive plain bytes, as the storage interface was designed, removing the type inspection, seek(), and getvalue() calls scattered across backends.

  - FileStorage.put_data was never called, opened files in text mode, and called get_uri twice (once from the generator, once inside put_data itself). It is now the default write path for LOCAL_FS, used by almost every workload config. get_data was aligned to binary mode ("rb") for consistency.
  - AIStoreStorage.put_data: remove the isinstance dispatch, accept bytes directly.
  - S3TorchStorage.put_data: remove data.getvalue(); just write data.
  - generator_factory: removed S3/AIStore branching for NPZ, NPY, JPEG.
  - The factory referenced jpeg_generator_s3.JPEGGeneratorS3, which never existed; JPEG with S3/AIStore would have crashed at import time.

  After this patch, adding a new storage backend requires no changes in any generator, and adding a new data format automatically works with all backends.

  Signed-off-by: Denis Barakhtanov <dbarahtanov@enakta.com>
  Co-authored-by: Denis Barakhtanov <denis.barahtanov@gmail.com>

---------

Signed-off-by: Abhishek Gaikwad <gaikwadabhishek1997@gmail.com>
Signed-off-by: Denis Barakhtanov <dbarahtanov@enakta.com>
Co-authored-by: Izzet Yildirim <yildirim2@llnl.gov>
Co-authored-by: Abhishek Gaikwad <gaikwadabhishek1997@gmail.com>
Co-authored-by: enakta <140368024+enakta@users.noreply.github.com>
Co-authored-by: Denis Barakhtanov <denis.barahtanov@gmail.com>
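The fix(counters) change described above can be modeled in miniature. This is a sketch with hypothetical names (the real loop lives in DLIO's training driver): with the break check at the top of the loop, the block is opened but never closed, so "Ending block" is never logged and the training metrics average to zero.

```python
# Miniature model (hypothetical names) of the counters bug: the same loop
# run with the break check at the top (the bug) vs. at the bottom (the fix).

class Stats:
    def __init__(self):
        self.events = []

    def start_block(self, block):
        self.events.append(("start_block", block))

    def end_block(self, block):
        self.events.append(("end_block", block))


def run_epoch(stats, batches, max_steps, check_at_top):
    stats.start_block(1)
    step = 0
    for _ in range(batches):
        if check_at_top and step >= max_steps:
            break                  # bug: leaves before end_block ever runs
        step += 1                  # simulated train step
        if not check_at_top and step >= max_steps:
            stats.end_block(1)     # fix: close the block, then stop
            break
    return step
```

With `check_at_top=True` the event list never contains an `end_block` entry, matching the "Starting block 1" lines without a matching "Ending block 1" in the broken log.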
1 parent 7017ba2 commit 8ef59dc

25 files changed: +954 −155 lines

.github/workflows/ci.yml

Lines changed: 23 additions & 0 deletions
@@ -358,3 +358,26 @@ jobs:
       run: |
         source ${VENV_PATH}/bin/activate
         mpirun -np 1 pytest -k test_s3_checkpoint_step -v
+    # AIStore-specific tests (mock-based, no real cluster needed)
+    - name: test_aistore_gen_data
+      run: |
+        source ${VENV_PATH}/bin/activate
+        mpirun -np 1 pytest -k test_aistore_gen_data[npy-pytorch] -v
+        mpirun -np 1 pytest -k test_aistore_gen_data[npz-pytorch] -v
+    - name: test_aistore_train
+      run: |
+        source ${VENV_PATH}/bin/activate
+        mpirun -np 1 pytest -k test_aistore_train[npy-pytorch-True] -v
+        mpirun -np 1 pytest -k test_aistore_train[npz-pytorch-True] -v
+        mpirun -np 1 pytest -k test_aistore_train[npy-pytorch-False] -v
+        mpirun -np 1 pytest -k test_aistore_train[npz-pytorch-False] -v
+    - name: test_aistore_eval
+      run: |
+        source ${VENV_PATH}/bin/activate
+        mpirun -np 1 pytest -k test_aistore_eval -v
+    - name: test_aistore_multi_threads
+      run: |
+        source ${VENV_PATH}/bin/activate
+        mpirun -np 1 pytest -k test_aistore_multi_threads[pytorch-0] -v
+        mpirun -np 1 pytest -k test_aistore_multi_threads[pytorch-1] -v
+        mpirun -np 1 pytest -k test_aistore_multi_threads[pytorch-2] -v

README.md

Lines changed: 12 additions & 2 deletions
@@ -17,6 +17,14 @@ pip install .
 dlio_benchmark ++workload.workflow.generate_data=True
 ```
 
+### Bare metal installation with AIStore support
+
+```bash
+git clone https://github.com/argonne-lcf/dlio_benchmark
+cd dlio_benchmark/
+pip install .[aistore]
+```
+
 ### Bare metal installation with profiler
 
 ```bash
@@ -150,7 +158,9 @@ The YAML file is loaded through hydra (https://hydra.cc/). The default setting a
 
 * We assume the data/label pairs are stored in the same file. Storing data and labels in separate files will be supported in future.
 
-* File format support: we only support tfrecord, hdf5, npz, csv, jpg, jpeg formats. Other data formats can be extended.
+* File format support: we only support tfrecord, hdf5, npz, csv, jpg, jpeg formats. Other data formats can be extended.
+
+* Storage backend support: we support local filesystem, AWS S3, and AIStore as storage backends. Other storage backends can be extended.
 
 * Data Loader support: we support reading datasets using TensorFlow tf.data data loader, PyTorch DataLoader, and a set of custom data readers implemented in ./reader. For TensorFlow tf.data data loader, PyTorch DataLoader
 - We have complete support for tfrecord format in TensorFlow data loader.
@@ -163,7 +173,7 @@ General new features needed including:
 * support for new workloads: if you think that your workload(s) would be interested to the public, and would like to provide the yaml file to be included in the repo, please submit an issue.
 * support for new data loaders, such as DALI loader, MxNet loader, etc
 * support for new frameworks, such as MxNet
-* support for noval file systems or storage, such as AWS S3.
+* support for novel file systems or storage, such as AWS S3, AIStore, etc.
 * support for loading new data formats.
 
 If you would like to contribute, please submit an issue to https://github.com/argonne-lcf/dlio_benchmark/issues, and contact ALCF DLIO team, Huihuo Zheng at huihuo.zheng@anl.gov

dlio_benchmark/common/enumerations.py

Lines changed: 1 addition & 0 deletions
@@ -58,6 +58,7 @@ class StorageType(Enum):
     LOCAL_FS = 'local_fs'
     PARALLEL_FS = 'parallel_fs'
     S3 = 's3'
+    AISTORE = 'aistore'
 
     def __str__(self):
         return self.value
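The new enum member can be exercised directly. The class body below mirrors the diff above; the lookup at the bottom is illustrative usage, not code from the repo:

```python
from enum import Enum

class StorageType(Enum):
    LOCAL_FS = 'local_fs'
    PARALLEL_FS = 'parallel_fs'
    S3 = 's3'
    AISTORE = 'aistore'

    def __str__(self):
        return self.value

# A config string such as storage_type: aistore maps onto the enum by value:
selected = StorageType('aistore')
```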

dlio_benchmark/data_generator/generator_factory.py

Lines changed: 5 additions & 16 deletions
@@ -14,9 +14,7 @@
 See the License for the specific language governing permissions and
 limitations under the License.
 """
-from dlio_benchmark.utils.config import ConfigArguments
-
-from dlio_benchmark.common.enumerations import FormatType, StorageType
+from dlio_benchmark.common.enumerations import FormatType
 from dlio_benchmark.common.error_code import ErrorCodes
 
 class GeneratorFactory(object):
@@ -25,7 +23,6 @@ def __init__(self):
 
     @staticmethod
     def get_generator(type):
-        _args = ConfigArguments.get_instance()
         if type == FormatType.TFRECORD:
             from dlio_benchmark.data_generator.tf_generator import TFRecordGenerator
             return TFRecordGenerator()
@@ -36,19 +33,11 @@ def get_generator(type):
             from dlio_benchmark.data_generator.csv_generator import CSVGenerator
             return CSVGenerator()
         elif type == FormatType.NPZ:
-            if _args.storage_type == StorageType.S3:
-                from dlio_benchmark.data_generator.npz_generator_s3 import NPZGeneratorS3
-                return NPZGeneratorS3()
-            else:
-                from dlio_benchmark.data_generator.npz_generator import NPZGenerator
-                return NPZGenerator()
+            from dlio_benchmark.data_generator.npz_generator import NPZGenerator
+            return NPZGenerator()
         elif type == FormatType.NPY:
-            if _args.storage_type == StorageType.S3:
-                from dlio_benchmark.data_generator.npy_generator_s3 import NPYGeneratorS3
-                return NPYGeneratorS3()
-            else:
-                from dlio_benchmark.data_generator.npy_generator import NPYGenerator
-                return NPYGenerator()
+            from dlio_benchmark.data_generator.npy_generator import NPYGenerator
+            return NPYGenerator()
         elif type == FormatType.JPEG:
             from dlio_benchmark.data_generator.jpeg_generator import JPEGGenerator
             return JPEGGenerator()
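The lazy-import style this factory shares with the profiler_factory change (argonne-lcf#325) can be sketched with stdlib stand-ins; json and pickle play the role of the real generator modules here, and neither is imported until its branch is taken:

```python
def get_serializer(fmt):
    # Each import lives inside its own branch, so choosing one format
    # never pays the import cost of the others.
    if fmt == "json":
        import json
        return json.dumps
    elif fmt == "pickle":
        import pickle
        return pickle.dumps
    raise ValueError(f"unsupported format: {fmt}")
```

The same shape is why requesting the default IOSTAT profiler no longer drags in tensorflow: the TFProfiler import only executes inside the TENSORBOARD branch.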

dlio_benchmark/data_generator/jpeg_generator.py

Lines changed: 5 additions & 1 deletion
@@ -14,6 +14,7 @@
 See the License for the specific language governing permissions and
 limitations under the License.
 """
+import io
 import numpy as np
 import PIL.Image as im
 
@@ -53,5 +54,8 @@ def generate(self):
             self.logger.info(f"Generated file {i}/{self.total_files_to_generate}")
         out_path_spec = self.storage.get_uri(self._file_list[i])
         progress(i+1, self.total_files_to_generate, "Generating JPEG Data")
-        img.save(out_path_spec, format='JPEG', bits=8)
+        output = out_path_spec if self.storage.islocalfs() else io.BytesIO()
+        img.save(output, format='JPEG', bits=8)
+        if not self.storage.islocalfs():
+            self.storage.put_data(out_path_spec, output.getvalue())
         np.random.seed()

dlio_benchmark/data_generator/npy_generator.py

Lines changed: 5 additions & 1 deletion
@@ -14,6 +14,7 @@
 See the License for the specific language governing permissions and
 limitations under the License.
 """
+import io
 import numpy as np
 
 from dlio_benchmark.data_generator.data_generator import DataGenerator
@@ -49,5 +50,8 @@ def generate(self):
 
         out_path_spec = self.storage.get_uri(self._file_list[i])
         progress(i+1, self.total_files_to_generate, "Generating NPY Data")
-        np.save(out_path_spec, records)
+        output = out_path_spec if self.storage.islocalfs() else io.BytesIO()
+        np.save(output, records)
+        if not self.storage.islocalfs():
+            self.storage.put_data(out_path_spec, output.getvalue())
         np.random.seed()
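The write path in this diff can be exercised end to end with a toy backend. InMemoryStorage below is a hypothetical stand-in for the storage interface (only islocalfs and put_data), not a class from the repo:

```python
import io

import numpy as np

class InMemoryStorage:
    """Hypothetical non-local backend: put_data(uri, data) takes plain bytes."""
    def __init__(self):
        self.objects = {}

    def islocalfs(self):
        return False

    def put_data(self, uri, data):
        assert isinstance(data, bytes)  # the interface contract after #329
        self.objects[uri] = data

def save_npy(storage, uri, records):
    # Same shape as the generator code above: serialize once, then either
    # let numpy write to the local path or hand the raw bytes to the backend.
    output = uri if storage.islocalfs() else io.BytesIO()
    np.save(output, records)
    if not storage.islocalfs():
        storage.put_data(uri, output.getvalue())

storage = InMemoryStorage()
save_npy(storage, "bucket/sample_0.npy", np.zeros((2, 2), dtype=np.float32))
blob = storage.objects["bucket/sample_0.npy"]
restored = np.load(io.BytesIO(blob))  # the bytes are a complete .npy file
```

Because np.save accepts any file-like object, the serialized bytes round-trip through np.load unchanged, which is what lets every backend receive the same plain-bytes payload.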

dlio_benchmark/data_generator/npy_generator_s3.py

Lines changed: 0 additions & 57 deletions
This file was deleted.

dlio_benchmark/data_generator/npz_generator.py

Lines changed: 6 additions & 2 deletions
@@ -14,6 +14,7 @@
 See the License for the specific language governing permissions and
 limitations under the License.
 """
+import io
 import numpy as np
 
 from dlio_benchmark.common.enumerations import Compression
@@ -48,8 +49,11 @@ def generate(self):
             records = gen_random_tensor(shape=(dim_, dim[2*i+1], self.num_samples), dtype=self._args.record_element_dtype, rng=rng)
         out_path_spec = self.storage.get_uri(self._file_list[i])
         progress(i+1, self.total_files_to_generate, "Generating NPZ Data")
+        output = out_path_spec if self.storage.islocalfs() else io.BytesIO()
         if self.compression != Compression.ZIP:
-            np.savez(out_path_spec, x=records, y=record_labels)
+            np.savez(output, x=records, y=record_labels)
         else:
-            np.savez_compressed(out_path_spec, x=records, y=record_labels)
+            np.savez_compressed(output, x=records, y=record_labels)
+        if not self.storage.islocalfs():
+            self.storage.put_data(out_path_spec, output.getvalue())
         np.random.seed()

dlio_benchmark/data_generator/npz_generator_s3.py

Lines changed: 0 additions & 59 deletions
This file was deleted.

dlio_benchmark/data_generator/png_generator.py

Lines changed: 5 additions & 1 deletion
@@ -14,6 +14,7 @@
 See the License for the specific language governing permissions and
 limitations under the License.
 """
+import io
 import numpy as np
 import PIL.Image as im
 
@@ -49,5 +50,8 @@ def generate(self):
             self.logger.info(f"Generated file {i}/{self.total_files_to_generate}")
         out_path_spec = self.storage.get_uri(self._file_list[i])
         progress(i+1, self.total_files_to_generate, "Generating PNG Data")
-        img.save(out_path_spec, format='PNG', bits=8)
+        output = out_path_spec if self.storage.islocalfs() else io.BytesIO()
+        img.save(output, format='PNG', bits=8)
+        if not self.storage.islocalfs():
+            self.storage.put_data(out_path_spec, output.getvalue())
         np.random.seed()
