AsynchronousLoader
This dataloader behaves identically to the standard PyTorch dataloader, but transfers data asynchronously to the GPU, overlapping the transfer with training. You can also use it to wrap an existing dataloader.
Example:
from torch.utils.data import DataLoader
from pl_bolts.datamodules.async_dataloader import AsynchronousLoader

dataloader = AsynchronousLoader(DataLoader(ds, batch_size=16), device=device)
for b in dataloader:
    ...
- class pl_bolts.datamodules.async_dataloader.AsynchronousLoader(data, device=torch.device('cuda', 0), q_size=10, num_batches=None, **kwargs)[source]
  Bases: object
Class for asynchronously loading from CPU memory to device memory with DataLoader.
Note that this only works for single-GPU training. Multi-GPU training with PyTorch's DataParallel or DistributedDataParallel uses its own code for transferring data across GPUs; combining this loader with either of them could break things or make them slower.
- Parameters
  - data (Union[DataLoader, Dataset]) – The PyTorch Dataset or DataLoader we're using to load.
  - device (torch.device) – The PyTorch device we are loading to.
  - q_size (int) – Size of the queue used to store the data loaded to the device.
  - num_batches (Optional[int]) – Number of batches to load. This must be set if the dataloader doesn't have a finite __len__. It will also override DataLoader.__len__ if set and DataLoader has a __len__. Otherwise it can be left as None.
  - **kwargs – Any additional arguments to pass to the DataLoader if we're constructing one here.
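To make the mechanism concrete, here is a minimal sketch of the idea behind this loader: a background thread pulls batches from an underlying iterable and stages them in a bounded queue of size `q_size`, so the consumer's work overlaps with loading. This is an illustrative simplification, not the pl_bolts implementation; the `PrefetchLoader` name and the `transfer` callable are hypothetical stand-ins (the real class moves each batch to the device with non-blocking copies at that step), and it omits CUDA streams and pinned memory entirely.

```python
import queue
import threading

class PrefetchLoader:
    """Sketch of queue-based prefetching (hypothetical name, not the
    pl_bolts class): a worker thread fills a bounded queue while the
    consumer iterates."""

    def __init__(self, data, transfer=lambda b: b, q_size=10):
        self.data = data          # any iterable of batches, e.g. a DataLoader
        self.transfer = transfer  # stand-in for the CPU-to-device copy
        self.q_size = q_size      # bounds memory staged ahead of the consumer
        self._sentinel = object() # marks the end of the stream

    def _worker(self, q):
        for batch in self.data:
            q.put(self.transfer(batch))  # blocks while the queue is full
        q.put(self._sentinel)

    def __iter__(self):
        q = queue.Queue(maxsize=self.q_size)
        threading.Thread(target=self._worker, args=(q,), daemon=True).start()
        while True:
            item = q.get()
            if item is self._sentinel:
                return
            yield item

batches = [[i, i + 1] for i in range(0, 8, 2)]  # fake CPU batches
loader = PrefetchLoader(batches, q_size=2)
print(list(loader))  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```

The bounded queue is the key design choice: `put` blocks once `q_size` batches are staged, so the worker can run ahead of training without loading the entire dataset into device memory.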