
Callback

class lightning.pytorch.callbacks.Callback[source]

Bases: object

Abstract base class used to build new callbacks.

Subclass this class and override any of the relevant hooks.

load_state_dict(state_dict)[source]

Called when loading a checkpoint. Implement to reload callback state given the callback's state_dict.

Parameters

state_dict (Dict[str, Any]) – the callback state returned by state_dict.

Return type

None

on_after_backward(trainer, pl_module)[source]

Called after loss.backward() and before optimizers are stepped.

Return type

None

on_before_backward(trainer, pl_module, loss)[source]

Called before loss.backward().

Return type

None

on_before_optimizer_step(trainer, pl_module, optimizer)[source]

Called before optimizer.step().

Return type

None

on_before_zero_grad(trainer, pl_module, optimizer)[source]

Called before optimizer.zero_grad().

Return type

None

on_exception(trainer, pl_module, exception)[source]

Called when any trainer execution is interrupted by an exception.

Return type

None

on_fit_end(trainer, pl_module)[source]

Called when fit ends.

Return type

None

on_fit_start(trainer, pl_module)[source]

Called when fit begins.

Return type

None

on_load_checkpoint(trainer, pl_module, checkpoint)[source]

Called when loading a model checkpoint; use to reload state.

Parameters

trainer (Trainer) – the current Trainer instance.

pl_module (LightningModule) – the current LightningModule instance.

checkpoint (Dict[str, Any]) – the full checkpoint dictionary that got loaded by the Trainer.

Return type

None

on_predict_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0)[source]

Called when the predict batch ends.

Return type

None

on_predict_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx=0)[source]

Called when the predict batch begins.

Return type

None

on_predict_end(trainer, pl_module)[source]

Called when prediction ends.

Return type

None

on_predict_epoch_end(trainer, pl_module)[source]

Called when the predict epoch ends.

Return type

None

on_predict_epoch_start(trainer, pl_module)[source]

Called when the predict epoch begins.

Return type

None

on_predict_start(trainer, pl_module)[source]

Called when prediction begins.

Return type

None

on_sanity_check_end(trainer, pl_module)[source]

Called when the validation sanity check ends.

Return type

None

on_sanity_check_start(trainer, pl_module)[source]

Called when the validation sanity check starts.

Return type

None

on_save_checkpoint(trainer, pl_module, checkpoint)[source]

Called when saving a checkpoint to give you a chance to store anything else you might want to save.

Parameters

trainer (Trainer) – the current Trainer instance.

pl_module (LightningModule) – the current LightningModule instance.

checkpoint (Dict[str, Any]) – the checkpoint dictionary that will be saved.

Return type

None

on_test_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0)[source]

Called when the test batch ends.

Return type

None

on_test_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx=0)[source]

Called when the test batch begins.

Return type

None

on_test_end(trainer, pl_module)[source]

Called when the test ends.

Return type

None

on_test_epoch_end(trainer, pl_module)[source]

Called when the test epoch ends.

Return type

None

on_test_epoch_start(trainer, pl_module)[source]

Called when the test epoch begins.

Return type

None

on_test_start(trainer, pl_module)[source]

Called when the test begins.

Return type

None

on_train_batch_end(trainer, pl_module, outputs, batch, batch_idx)[source]

Called when the train batch ends.

Note

The value outputs["loss"] here will be the normalized value w.r.t. accumulate_grad_batches of the loss returned from training_step.

Return type

None

on_train_batch_start(trainer, pl_module, batch, batch_idx)[source]

Called when the train batch begins.

Return type

None

on_train_end(trainer, pl_module)[source]

Called when the train ends.

Return type

None

on_train_epoch_end(trainer, pl_module)[source]

Called when the train epoch ends.

To access all batch outputs at the end of the epoch, you can cache step outputs as an attribute of the pytorch_lightning.LightningModule and access them in this hook:

class MyLightningModule(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.training_step_outputs = []

    def training_step(self, batch, batch_idx):
        loss = ...
        self.training_step_outputs.append(loss)
        return loss


class MyCallback(L.Callback):
    def on_train_epoch_end(self, trainer, pl_module):
        # do something with all training_step outputs, for example:
        epoch_mean = torch.stack(pl_module.training_step_outputs).mean()
        pl_module.log("training_epoch_mean", epoch_mean)
        # free up the memory
        pl_module.training_step_outputs.clear()

Return type

None

on_train_epoch_start(trainer, pl_module)[source]

Called when the train epoch begins.

Return type

None

on_train_start(trainer, pl_module)[source]

Called when the train begins.

Return type

None

on_validation_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0)[source]

Called when the validation batch ends.

Return type

None

on_validation_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx=0)[source]

Called when the validation batch begins.

Return type

None

on_validation_end(trainer, pl_module)[source]

Called when the validation loop ends.

Return type

None

on_validation_epoch_end(trainer, pl_module)[source]

Called when the val epoch ends.

Return type

None

on_validation_epoch_start(trainer, pl_module)[source]

Called when the val epoch begins.

Return type

None

on_validation_start(trainer, pl_module)[source]

Called when the validation loop begins.

Return type

None

setup(trainer, pl_module, stage)[source]

Called when fit, validate, test, predict, or tune begins.

Return type

None

state_dict()[source]

Called when saving a checkpoint. Implement to generate the callback's state_dict.

Return type

Dict[str, Any]

Returns

A dictionary containing callback state.

teardown(trainer, pl_module, stage)[source]

Called when fit, validate, test, predict, or tune ends.

Return type

None

property state_key: str

Identifier for the state of the callback.

Used to store and retrieve a callback's state from the checkpoint dictionary by checkpoint["callbacks"][state_key]. Implementations of a callback need to provide a unique state key if 1) the callback has state and 2) it is desired to maintain the state of multiple instances of that callback.

Return type

str