zero.data.iloader¶

zero.data.iloader(size, *args, **kwargs)[source]¶

Make a DataLoader over batches of indices.

The shuffling logic is fully delegated to the native PyTorch DataLoader, i.e. no custom logic is performed under the hood.
- Parameters
  - size (int) – the size of the dataset (for example, len(dataset))
  - *args – positional arguments for torch.utils.data.DataLoader
  - **kwargs – keyword arguments for torch.utils.data.DataLoader
- Raises
  - AssertionError – if size is not positive
- Return type
  - torch.utils.data.DataLoader
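For intuition, the behavior described above can be sketched in plain PyTorch: wrap a trivial dataset whose i-th item is simply the index i in a native DataLoader and forward all arguments to it. This is a minimal sketch, not the library's actual implementation; the names _IndexDataset and iloader_sketch are hypothetical.

```python
import torch
from torch.utils.data import DataLoader, Dataset


class _IndexDataset(Dataset):
    """Hypothetical helper: item i is just the index i."""

    def __init__(self, size: int):
        assert size > 0, 'size must be positive'
        self._size = size

    def __len__(self) -> int:
        return self._size

    def __getitem__(self, i: int) -> int:
        return i


def iloader_sketch(size: int, *args, **kwargs) -> DataLoader:
    # All positional and keyword arguments are forwarded to DataLoader
    # unchanged, so batching, shuffling, drop_last etc. are handled natively.
    return DataLoader(_IndexDataset(size), *args, **kwargs)


# The default collate function turns each batch of Python ints into a tensor.
batches = list(iloader_sketch(10, batch_size=3))
```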
Examples
Usage for training:

train_loader = iloader(len(train_dataset), batch_size, shuffle=True)
for epoch in epochs:
    for batch_idx in train_loader:
        ...
Other examples:

>>> dataset_size = 10  # len(dataset)
>>> for batch_idx in iloader(dataset_size, batch_size=3):
...     print(batch_idx)
tensor([0, 1, 2])
tensor([3, 4, 5])
tensor([6, 7, 8])
tensor([9])
>>> dataset_size = 10  # len(dataset)
>>> for batch_idx in iloader(dataset_size, 3, drop_last=True):
...     print(batch_idx)
tensor([0, 1, 2])
tensor([3, 4, 5])
tensor([6, 7, 8])
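Each yielded batch is a tensor of indices, so it can be used directly for advanced indexing into data tensors. A hedged sketch of that pattern, where iloader_sketch stands in for iloader and X, y are toy data (all names here are assumptions for illustration):

```python
import torch
from torch.utils.data import DataLoader


# Hypothetical stand-in for iloader: a DataLoader over a plain list of
# indices (the default collate function turns each batch of ints into a tensor).
def iloader_sketch(size: int, *args, **kwargs) -> DataLoader:
    assert size > 0, 'size must be positive'
    return DataLoader(list(range(size)), *args, **kwargs)


X = torch.arange(20, dtype=torch.float32).reshape(10, 2)  # toy features
y = torch.zeros(10)                                       # toy labels

for batch_idx in iloader_sketch(len(X), batch_size=4):
    # Index into the full tensors with the batch of indices.
    x_batch, y_batch = X[batch_idx], y[batch_idx]
```

This avoids wrapping X and y in a TensorDataset: the tensors stay whole in memory and each batch is gathered with one indexing operation.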