Callback
- class lightning.pytorch.callbacks.Callback[source]
  Bases: object
  Abstract base class used to build new callbacks.
  Subclass this class and override any of the relevant hooks.
- load_state_dict(state_dict)[source]
  Called when loading a checkpoint. Implement this to reload the callback's state from the given state_dict.
- on_after_backward(trainer, pl_module)[source]
  Called after loss.backward() and before optimizers are stepped.
  - Return type: None
- on_exception(trainer, pl_module, exception)[source]
  Called when any trainer execution is interrupted by an exception.
  - Return type: None
- on_load_checkpoint(trainer, pl_module, checkpoint)[source]
  Called when loading a model checkpoint; use this to reload state.
  - Parameters
    - pl_module (LightningModule) – the current LightningModule instance.
    - checkpoint (Dict[str, Any]) – the full checkpoint dictionary that got loaded by the Trainer.
  - Return type: None
- on_predict_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0)[source]
  Called when the predict batch ends.
  - Return type: None
- on_predict_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx=0)[source]
  Called when the predict batch begins.
  - Return type: None
- on_sanity_check_start(trainer, pl_module)[source]
  Called when the validation sanity check starts.
  - Return type: None
- on_save_checkpoint(trainer, pl_module, checkpoint)[source]
  Called when saving a checkpoint to give you a chance to store anything else you might want to save.
  - Parameters
    - pl_module (LightningModule) – the current LightningModule instance.
    - checkpoint (Dict[str, Any]) – the checkpoint dictionary that will be saved.
  - Return type: None
- on_test_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0)[source]
  Called when the test batch ends.
  - Return type: None
- on_test_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx=0)[source]
  Called when the test batch begins.
  - Return type: None
- on_train_batch_end(trainer, pl_module, outputs, batch, batch_idx)[source]
  Called when the train batch ends.
  Note
  The value outputs["loss"] here will be the normalized value w.r.t. accumulate_grad_batches of the loss returned from training_step.
  - Return type: None
- on_train_batch_start(trainer, pl_module, batch, batch_idx)[source]
  Called when the train batch begins.
  - Return type: None
- on_train_epoch_end(trainer, pl_module)[source]
  Called when the train epoch ends.
  To access all batch outputs at the end of the epoch, you can cache step outputs as an attribute of the pytorch_lightning.LightningModule and access them in this hook:

  class MyLightningModule(L.LightningModule):
      def __init__(self):
          super().__init__()
          self.training_step_outputs = []

      def training_step(self):
          loss = ...
          self.training_step_outputs.append(loss)
          return loss

  class MyCallback(L.Callback):
      def on_train_epoch_end(self, trainer, pl_module):
          # do something with all training_step outputs, for example:
          epoch_mean = torch.stack(pl_module.training_step_outputs).mean()
          pl_module.log("training_epoch_mean", epoch_mean)
          # free up the memory
          pl_module.training_step_outputs.clear()

  - Return type: None
- on_validation_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0)[source]
  Called when the validation batch ends.
  - Return type: None
- on_validation_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx=0)[source]
  Called when the validation batch begins.
  - Return type: None
- setup(trainer, pl_module, stage)[source]
  Called when fit, validate, test, predict, or tune begins.
  - Return type: None
- teardown(trainer, pl_module, stage)[source]
  Called when fit, validate, test, predict, or tune ends.
  - Return type: None
- property state_key: str
  Identifier for the state of the callback.
  Used to store and retrieve a callback's state from the checkpoint dictionary by checkpoint["callbacks"][state_key]. Implementations of a callback need to provide a unique state key if 1) the callback has state and 2) it is desired to maintain the state of multiple instances of that callback.
  - Return type: str