Linear Warmup Cosine Annealing
We rely on the community to keep these updated and working. If something doesn’t work, we’d really appreciate a contribution to fix!
- class pl_bolts.optimizers.lr_scheduler.LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs, max_epochs, warmup_start_lr=0.0, eta_min=0.0, last_epoch=-1)
The feature LinearWarmupCosineAnnealingLR is currently marked under review. Compatibility with other Lightning projects is not guaranteed, and the API and functionality may change without warning in future releases. More details: https://lightning-bolts.readthedocs.io/en/latest/stability.html
Sets the learning rate of each parameter group to follow a linear warmup schedule between warmup_start_lr and base_lr followed by a cosine annealing schedule between base_lr and eta_min.
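For intuition, the schedule can be written in closed form as a function of the epoch index. The helper below is only an illustrative sketch; the name lr_at_epoch and the exact boundary handling are assumptions for illustration, not part of the library's API.

import math

def lr_at_epoch(epoch, base_lr, warmup_epochs, max_epochs, warmup_start_lr=0.0, eta_min=0.0):
    # Illustrative sketch of the schedule described above (boundary handling is an assumption).
    if epoch < warmup_epochs:
        # Linear warmup: ramp from warmup_start_lr up to base_lr over the warmup epochs.
        return warmup_start_lr + epoch * (base_lr - warmup_start_lr) / max(1, warmup_epochs - 1)
    # Cosine annealing: decay from base_lr down to eta_min over the remaining epochs.
    progress = (epoch - warmup_epochs) / max(1, max_epochs - warmup_epochs)
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * progress))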
It is recommended to call step() for LinearWarmupCosineAnnealingLR after each iteration, as calling it after each epoch will keep the starting lr at warmup_start_lr for the first epoch, which is 0 in most cases (see the sketch below).
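A minimal sketch of per-iteration stepping follows; the steps_per_epoch value and the bare training loop are illustrative assumptions, not part of this API. When stepping per iteration, warmup_epochs and max_epochs are expressed in optimizer steps rather than epochs.

import torch.nn as nn
from torch.optim import SGD
from pl_bolts.optimizers.lr_scheduler import LinearWarmupCosineAnnealingLR

model = nn.Linear(10, 1)
optimizer = SGD(model.parameters(), lr=0.1)

# When stepping per iteration, pass warmup_epochs / max_epochs in units of steps.
steps_per_epoch = 100  # illustrative value
scheduler = LinearWarmupCosineAnnealingLR(
    optimizer,
    warmup_epochs=10 * steps_per_epoch,
    max_epochs=40 * steps_per_epoch,
)

for step in range(40 * steps_per_epoch):
    # training_step(...)
    optimizer.step()
    scheduler.step()  # called once per iteration, as recommended above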
Passing epoch to step() is being deprecated and comes with an EPOCH_DEPRECATION_WARNING. It calls the _get_closed_form_lr() method for this scheduler instead of get_lr(). Though this does not change the behavior of the scheduler, when passing the epoch param to step(), the user should call the step() function before calling the train and validation methods.
>>> import torch.nn as nn
>>> from torch.optim import Adam
>>> from pl_bolts.optimizers.lr_scheduler import LinearWarmupCosineAnnealingLR
>>> layer = nn.Linear(10, 1)
>>> optimizer = Adam(layer.parameters(), lr=0.02)
>>> scheduler = LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs=10, max_epochs=40)
>>> # the default case
>>> for epoch in range(40):
...     # train(...)
...     # validate(...)
...     scheduler.step()
>>> # passing epoch param case (deprecated, see the warning above)
>>> for epoch in range(40):
...     scheduler.step(epoch)
...     # train(...)
...     # validate(...)
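When using PyTorch Lightning, the scheduler is typically returned from configure_optimizers. The snippet below is a sketch under that assumption; LitModel is a hypothetical module, and training_step and data handling are omitted.

import torch.nn as nn
from torch.optim import Adam
import pytorch_lightning as pl
from pl_bolts.optimizers.lr_scheduler import LinearWarmupCosineAnnealingLR

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(10, 1)

    def forward(self, x):
        return self.layer(x)

    def configure_optimizers(self):
        optimizer = Adam(self.parameters(), lr=0.02)
        scheduler = LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs=10, max_epochs=40)
        # By default Lightning calls scheduler.step() once per epoch; return a
        # scheduler dict with {"interval": "step"} to step once per iteration instead.
        return [optimizer], [scheduler]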
optimizer (Optimizer) – Wrapped optimizer.