This dataloader behaves identically to the standard PyTorch DataLoader, but transfers data to the GPU asynchronously, overlapping the transfer with training. You can also use it to wrap an existing DataLoader.
dataloader = AsynchronousLoader(DataLoader(ds, batch_size=16), device=device)
for b in dataloader:
    ...
- class pl_bolts.datamodules.async_dataloader.AsynchronousLoader(data, device=torch.device('cuda', 0), q_size=10, num_batches=None, **kwargs)
Class for asynchronously loading from CPU memory to device memory with a DataLoader.
Note that this only works for single-GPU training. Multi-GPU training uses PyTorch's DataParallel or DistributedDataParallel, which have their own logic for transferring data across GPUs; combining this loader with either of them could break things or make them slower.
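The idea behind the loader can be sketched with a plain producer/consumer pattern: a background thread loads batches ahead of the training loop into a bounded queue of size `q_size`. This is a minimal stdlib sketch of that pattern, not the pl_bolts implementation (which additionally uses pinned memory and CUDA streams for the device copies); `async_batches` and `load_batch` are hypothetical names.

```python
import threading
import queue

def async_batches(load_batch, n_batches, q_size=10):
    # Bounded queue: at most q_size batches sit "loaded ahead" of the
    # consumer at any time (mirrors the q_size argument above).
    q = queue.Queue(maxsize=q_size)
    sentinel = object()

    def producer():
        for i in range(n_batches):
            q.put(load_batch(i))  # blocks once q_size batches are waiting
        q.put(sentinel)           # signal end of iteration

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            break
        yield item

# Batches arrive in order while the producer keeps loading ahead.
out = list(async_batches(lambda i: [i] * 2, n_batches=3))
# out == [[0, 0], [1, 1], [2, 2]]
```

The training loop simply iterates the generator; loading of batch i+1 overlaps with the consumer's work on batch i, which is the same overlap AsynchronousLoader exploits for host-to-device copies.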
Parameters:
- data (Union[DataLoader, Dataset]) – The PyTorch Dataset or DataLoader we're using to load
- device (torch.device) – The PyTorch device we are loading to
- q_size (int) – Size of the queue used to store the batches loaded to the device
- num_batches (Optional[int]) – Number of batches to load. This must be set if the dataloader doesn't have a finite __len__. If set, it also overrides DataLoader.__len__ even when the DataLoader defines one; otherwise it can be left as None
- **kwargs – Any additional arguments to pass to the dataloader if we're constructing one here