LearningRateMonitor

class lightning.pytorch.callbacks.LearningRateMonitor(logging_interval=None, log_momentum=False)[source]

Bases: lightning.pytorch.callbacks.callback.Callback

Automatically monitors and logs the learning rate for learning rate schedulers during training.

Parameters
  • logging_interval (Optional[str]) – set to 'epoch' or 'step' to log the learning rate of all optimizers at the same interval, or set to None to log at individual intervals according to the interval key of each scheduler. Defaults to None.

  • log_momentum (bool) – option to also log the momentum values of the optimizer, if the optimizer has the momentum or betas attribute. Defaults to False.

Raises

MisconfigurationException – If logging_interval is none of "step", "epoch", or None.

Example:

>>> from lightning.pytorch import Trainer
>>> from lightning.pytorch.callbacks import LearningRateMonitor
>>> lr_monitor = LearningRateMonitor(logging_interval='step')
>>> trainer = Trainer(callbacks=[lr_monitor])
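
Momentum values can be logged alongside the learning rate when the optimizer exposes a momentum or betas attribute; a minimal sketch using the log_momentum flag documented above:

>>> lr_monitor = LearningRateMonitor(logging_interval='epoch', log_momentum=True)
>>> trainer = Trainer(callbacks=[lr_monitor])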

Logging names are automatically determined based on the optimizer class name. In the case of multiple optimizers of the same type, they will be named Adam, Adam-1, etc. If an optimizer has multiple parameter groups, they will be named Adam/pg1, Adam/pg2, etc. To control naming, pass a name key in the construction of the learning rate scheduler. A name key can also be used for parameter groups in the construction of the optimizer.

Example:

def configure_optimizers(self):
    optimizer = torch.optim.Adam(...)
    lr_scheduler = {
        'scheduler': torch.optim.lr_scheduler.LambdaLR(optimizer, ...),
        'name': 'my_logging_name'
    }
    return [optimizer], [lr_scheduler]

Example:

def configure_optimizers(self):
    optimizer = torch.optim.SGD(
        [{
            'params': [p for p in self.parameters()],
            'name': 'my_parameter_group_name'
        }],
        lr=0.1
    )
    lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, ...)
    return [optimizer], [lr_scheduler]
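
The Adam, Adam-1 naming described above applies when several optimizers of the same class are returned; a minimal sketch of that case (the generator/discriminator split is an assumption for illustration only, not part of the callback's API):

def configure_optimizers(self):
    # Two optimizers of the same class; LearningRateMonitor would log them
    # under the names 'Adam' and 'Adam-1' (assumed generator/discriminator split).
    opt_g = torch.optim.Adam(self.generator.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=1e-3)
    sched_g = torch.optim.lr_scheduler.StepLR(opt_g, step_size=10)
    sched_d = torch.optim.lr_scheduler.StepLR(opt_d, step_size=10)
    return [opt_g, opt_d], [sched_g, sched_d]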

on_train_batch_start(trainer, *args, **kwargs)[source]

Called when the train batch begins.

Return type

None

on_train_epoch_start(trainer, *args, **kwargs)[source]

Called when the train epoch begins.

Return type

None

on_train_start(trainer, *args, **kwargs)[source]

Called before training; determines unique names for all lr schedulers in the case of multiple schedulers of the same type or multiple parameter groups.

Raises

MisconfigurationException – If Trainer has no logger.

Return type

None
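
Because on_train_start raises MisconfigurationException when the Trainer has no logger, the callback needs to be paired with an active logger; a minimal sketch, assuming CSVLogger from lightning.pytorch.loggers:

>>> from lightning.pytorch import Trainer
>>> from lightning.pytorch.callbacks import LearningRateMonitor
>>> from lightning.pytorch.loggers import CSVLogger
>>> trainer = Trainer(logger=CSVLogger("logs"), callbacks=[LearningRateMonitor()])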