Learning Rate Schedulers¶
This package lists common learning rate schedulers used across research domains. If you have a learning rate scheduler you would like to contribute, please submit a PR!
Note
This module is a work in progress.
Your Learning Rate Scheduler¶
We’re cleaning up many of our learning rate schedulers, but in the meantime, submit a PR to add yours here!
Linear Warmup Cosine Annealing Learning Rate Scheduler¶
class pl_bolts.optimizers.lr_scheduler.LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs, max_epochs, warmup_start_lr=0.0, eta_min=0.0, last_epoch=-1)

Bases: torch.optim.lr_scheduler._LRScheduler
Sets the learning rate of each parameter group to follow a linear warmup schedule between warmup_start_lr and base_lr followed by a cosine annealing schedule between base_lr and eta_min.
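For intuition, the following is a minimal standalone sketch of the closed-form schedule described above, assuming step() is called once per epoch. It is illustrative only, not the library's implementation, and lr_at_epoch is a hypothetical helper name:

import math

def lr_at_epoch(epoch, warmup_epochs, max_epochs, base_lr, warmup_start_lr=0.0, eta_min=0.0):
    """Illustrative closed-form learning rate for a given epoch."""
    if epoch < warmup_epochs:
        # Linear warmup: interpolate from warmup_start_lr up to base_lr.
        return warmup_start_lr + epoch * (base_lr - warmup_start_lr) / max(1, warmup_epochs - 1)
    # Cosine annealing: decay from base_lr down to eta_min.
    progress = (epoch - warmup_epochs) / (max_epochs - warmup_epochs)
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * progress))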
Warning

It is recommended to call step() for LinearWarmupCosineAnnealingLR after each iteration, as calling it after each epoch will keep the starting lr at warmup_start_lr for the first epoch, which is 0 in most cases.
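If you do step per iteration, note that warmup_epochs and max_epochs count calls to step(), so they should be scaled to iteration counts. A minimal sketch, assuming a train_loader DataLoader exists (hypothetical name):

>>> # scale epoch counts to iteration counts when stepping per batch
>>> steps_per_epoch = len(train_loader)
>>> scheduler = LinearWarmupCosineAnnealingLR(
...     optimizer,
...     warmup_epochs=10 * steps_per_epoch,
...     max_epochs=40 * steps_per_epoch,
... )
>>> for epoch in range(40):
...     for batch in train_loader:
...         # training_step(batch)
...         scheduler.step()  # step once per iteration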
Warning

Passing epoch to step() is deprecated and comes with an EPOCH_DEPRECATION_WARNING. It calls the _get_closed_form_lr() method for this scheduler instead of get_lr(). Although this does not change the behavior of the scheduler, when passing the epoch param to step(), the user should call step() before calling the train and validation methods.

Example
>>> import torch.nn as nn
>>> from torch.optim import Adam
>>> layer = nn.Linear(10, 1)
>>> optimizer = Adam(layer.parameters(), lr=0.02)
>>> scheduler = LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs=10, max_epochs=40)
>>> #
>>> # the default case
>>> for epoch in range(40):
...     # train(...)
...     # validate(...)
...     scheduler.step()
>>> #
>>> # passing epoch param case
>>> for epoch in range(40):
...     scheduler.step(epoch)
...     # train(...)
...     # validate(...)