
Releases: Tony-Y/pytorch_warmup

PyPI Package Update

24 Oct 07:38
ed2b7bd
  • This release updates the PyPI package to include a license file. (#9)
  • In addition, the GitHub Actions workflows have been updated.

There are no further updates in this release.

Warmup for PyTorch v1.4.0 or Above

07 Apr 13:42
fcddf0a
  • A with statement now encapsulates the undampened learning rate.
  • Learning rate scheduler "chaining" works with this version of pytorch_warmup.

Why this change is needed

With the previous version, we had to work around the "chaining" problem with ugly code:

        optimizer.step()
        # Passing an explicit epoch to step() forces the deprecated closed form.
        lr_scheduler.step(lr_scheduler.last_epoch + 1)
        warmup_scheduler.dampen()

Worse, this code makes PyTorch emit a user warning:

UserWarning: The epoch parameter in scheduler.step() was not necessary and is being deprecated where possible. Please use scheduler.step() to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)

With this version, we simply write:

        optimizer.step()
        with warmup_scheduler.dampening():
            lr_scheduler.step()
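
For context, here is a minimal, self-contained sketch of a training loop using this pattern. The model, data, AdamW optimizer, ExponentialLR schedule, and the UntunedLinearWarmup choice are illustrative assumptions, not part of this release:

        import torch
        import pytorch_warmup as warmup

        # Toy model, optimizer, and data; all choices here are illustrative.
        model = torch.nn.Linear(10, 1)
        optimizer = torch.optim.AdamW(model.parameters(), lr=0.01)
        lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)
        # UntunedLinearWarmup derives the warmup period from the optimizer's beta2.
        warmup_scheduler = warmup.UntunedLinearWarmup(optimizer)

        for step in range(100):
            x, y = torch.randn(32, 10), torch.randn(32, 1)
            loss = torch.nn.functional.mse_loss(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # The LR scheduler steps on the undampened learning rate;
            # the warmup factor is re-applied when the with block exits.
            with warmup_scheduler.dampening():
                lr_scheduler.step()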

If you use no LR scheduler, you simply write pass in the with block:

        optimizer.step()
        with warmup_scheduler.dampening():
            pass
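
Even with an empty body, the with statement still advances the warmup schedule: it encapsulates the undampened learning rate on entry and re-applies the warmup factor on exit.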