How to add customized DistributedSampler? #1816
Replies: 3 comments 1 reply
-
Hey, did you successfully add a customized distributed sampler? I tried to, but it fails with an error like this:
-
Hi. To reload the dataloaders every epoch, additionally set this:
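The snippet the reply points to is not shown. A minimal sketch, assuming the flag meant is the Trainer's `reload_dataloaders_every_epoch` (renamed to `reload_dataloaders_every_n_epochs` in later releases):

```python
from pytorch_lightning import Trainer

# Assumption: the reply refers to reload_dataloaders_every_epoch, which makes
# Lightning call train_dataloader()/val_dataloader() again at the start of
# every epoch, so a freshly constructed sampler is picked up each time.
trainer = Trainer(
    gpus=2,
    replace_sampler_ddp=False,           # keep the custom DistributedSampler
    reload_dataloaders_every_epoch=True,
)
```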
-
Hello! This is how it looks in regular PyTorch:

```python
# regular PyTorch
test_data = MNIST(my_path, train=False, download=True)
train_data = MNIST(my_path, train=True, download=True)
train_data, val_data = random_split(train_data, [55000, 5000])

train_loader = DataLoader(train_data, batch_size=32)
val_loader = DataLoader(val_data, batch_size=32)
test_loader = DataLoader(test_data, batch_size=32)

trainer.fit(model, train_loader, val_loader)
trainer.test(test_loader)
```

So, I re-implemented the data pipeline using the PyTorch Lightning `DataModule`, and it runs as expected:

```python
mnist = MNISTDataModule(my_path)
model = LitClassifier()
trainer = Trainer()
trainer.fit(model, mnist)
```
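The `MNISTDataModule` itself is not shown in the reply. A minimal sketch of what such a module might look like, with a stock `torch.utils.data.DistributedSampler` built inside `train_dataloader()` as a stand-in for any custom sampler (the class body below is illustrative, not code from this thread):

```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader, random_split
from torch.utils.data.distributed import DistributedSampler
from torchvision.datasets import MNIST


class MNISTDataModule(pl.LightningDataModule):
    def __init__(self, data_dir, batch_size=32):
        super().__init__()
        self.data_dir = data_dir
        self.batch_size = batch_size

    def setup(self, stage=None):
        train_full = MNIST(self.data_dir, train=True, download=True)
        self.train_data, self.val_data = random_split(train_full, [55000, 5000])
        self.test_data = MNIST(self.data_dir, train=False, download=True)

    def train_dataloader(self):
        # hand-built sampler; with Trainer(replace_sampler_ddp=False)
        # Lightning will not overwrite it
        sampler = DistributedSampler(self.train_data, shuffle=True)
        return DataLoader(self.train_data, batch_size=self.batch_size, sampler=sampler)

    def val_dataloader(self):
        sampler = DistributedSampler(self.val_data, shuffle=False)
        return DataLoader(self.val_data, batch_size=self.batch_size, sampler=sampler)

    def test_dataloader(self):
        return DataLoader(self.test_data, batch_size=self.batch_size)
```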
-
When training in multi-GPU mode, we can disable the automatic replacement of the sampler with `Trainer(replace_sampler_ddp=False)`. However, how can we add our own customized `DistributedSampler` for the train/val dataloaders and reset it after each epoch?
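A minimal sketch of one common way to do this (names and hooks are illustrative, and hook names vary across Lightning versions): build the DataLoader with your own sampler, keep `replace_sampler_ddp=False`, and call `set_epoch()` at the start of every epoch so each epoch is shuffled differently across ranks.

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


class LitClassifier(pl.LightningModule):
    # ... model definition, training_step, configure_optimizers, etc. ...

    def setup(self, stage=None):
        # toy dataset standing in for the real training data
        self.train_data = TensorDataset(torch.randn(1000, 28 * 28),
                                        torch.randint(0, 10, (1000,)))

    def train_dataloader(self):
        # custom sampler instead of the one Lightning would normally inject
        self.train_sampler = DistributedSampler(self.train_data, shuffle=True)
        return DataLoader(self.train_data, batch_size=32, sampler=self.train_sampler)

    def on_train_epoch_start(self):
        # advancing the sampler's epoch is what "resetting" it per epoch
        # amounts to: it reseeds the shuffle on every rank
        self.train_sampler.set_epoch(self.current_epoch)


trainer = pl.Trainer(gpus=2, replace_sampler_ddp=False)
```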