BaseFinetuning

class lightning.pytorch.callbacks.BaseFinetuning[source]

Bases: lightning.pytorch.callbacks.callback.Callback

This class implements the base logic for writing your own Finetuning Callback.

Warning

This is an experimental feature.

Override freeze_before_training and finetune_function methods with your own logic.

freeze_before_training: This method is called before configure_optimizers and should be used to freeze any module's parameters.

finetune_function: This method is called on every train epoch start and should be used to unfreeze any parameters. Those parameters need to be added in a new param_group within the optimizer.

Note

Make sure to filter the parameters based on requires_grad.

Example:

>>> import lightning.pytorch as pl
>>> from torch.optim import Adam
>>> class MyModel(pl.LightningModule):
...     def configure_optimizers(self):
...         # Make sure to filter the parameters based on `requires_grad`
...         return Adam(filter(lambda p: p.requires_grad, self.parameters()))
...
>>> class FeatureExtractorFreezeUnfreeze(BaseFinetuning):
...     def __init__(self, unfreeze_at_epoch=10):
...         super().__init__()
...         self._unfreeze_at_epoch = unfreeze_at_epoch
...
...     def freeze_before_training(self, pl_module):
...         # freeze any module you want
...         # Here, we are freezing `feature_extractor`
...         self.freeze(pl_module.feature_extractor)
...
...     def finetune_function(self, pl_module, current_epoch, optimizer):
...         # When `current_epoch` is 10, feature_extractor will start training.
...         if current_epoch == self._unfreeze_at_epoch:
...             self.unfreeze_and_add_param_group(
...                 modules=pl_module.feature_extractor,
...                 optimizer=optimizer,
...                 train_bn=True,
...             )
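
A subclass like the one above is attached to the Trainer like any other callback. A minimal sketch (the model and epoch value are illustrative):

>>> from lightning.pytorch import Trainer
>>> callback = FeatureExtractorFreezeUnfreeze(unfreeze_at_epoch=10)
>>> trainer = Trainer(callbacks=[callback])
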
static filter_on_optimizer(optimizer, params)[source]

This function is used to exclude any parameter that already exists in this optimizer.

Parameters
  • optimizer (Optimizer) – Optimizer used for parameter exclusion

  • params (Iterable) – Iterable of parameters used to check against the provided optimizer

Return type

List

Returns

List of parameters not contained in this optimizer's param groups
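
A minimal sketch of the intended use, deduplicating parameters before adding a new param group; the two Linear layers stand in for a real model:

>>> import torch.nn as nn
>>> from torch.optim import Adam
>>> from lightning.pytorch.callbacks import BaseFinetuning
>>> body, head = nn.Linear(4, 4), nn.Linear(4, 2)
>>> optimizer = Adam(body.parameters())
>>> # `body`'s parameters are already tracked, so only `head`'s survive the filter
>>> params = BaseFinetuning.filter_on_optimizer(optimizer, list(body.parameters()) + list(head.parameters()))
>>> optimizer.add_param_group({"params": params})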

static filter_params(modules, train_bn=True, requires_grad=True)[source]

Yields the requires_grad parameters of a given module or list of modules.

Parameters
  • modules (Union[Module, Iterable[Union[Module, Iterable]]]) – A given module or an iterable of modules

  • train_bn (bool) – Whether to train the BatchNorm modules

  • requires_grad (bool) – Whether to create a generator for trainable or non-trainable parameters.

Return type

Generator

Returns

Generator
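
For example, with train_bn=False the BatchNorm weight and bias are skipped and only the Linear parameters are yielded (the model is illustrative):

>>> import torch.nn as nn
>>> from lightning.pytorch.callbacks import BaseFinetuning
>>> model = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4), nn.Linear(4, 2))
>>> params = list(BaseFinetuning.filter_params(model, train_bn=False, requires_grad=True))
>>> len(params)  # weight and bias of the two Linear layers
4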

finetune_function(pl_module, epoch, optimizer)[source]

Override to add your unfreeze logic.

Return type

None

static flatten_modules(modules)[source]

This function is used to flatten a module or an iterable of modules into a list of its leaf modules (modules with no children) and parent modules that directly hold parameters themselves.

Parameters

modules (Union[Module, Iterable[Union[Module, Iterable]]]) – A given module or an iterable of modules

Return type

List[Module]

Returns

List of modules
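
A minimal sketch; the nested Sequential containers hold no parameters themselves, so only the leaves remain:

>>> import torch.nn as nn
>>> from lightning.pytorch.callbacks import BaseFinetuning
>>> net = nn.Sequential(nn.Sequential(nn.Linear(4, 4), nn.ReLU()), nn.Linear(4, 2))
>>> [type(m).__name__ for m in BaseFinetuning.flatten_modules(net)]
['Linear', 'ReLU', 'Linear']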

static freeze(modules, train_bn=True)[source]

Freezes the parameters of the provided modules.

Parameters
  • modules (Union[Module, Iterable[Union[Module, Iterable]]]) – A given module or an iterable of modules

  • train_bn (bool) – If True, leave the BatchNorm layers in training mode

Return type

None
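
A minimal sketch; with train_bn=False the BatchNorm parameters are frozen as well:

>>> import torch.nn as nn
>>> from lightning.pytorch.callbacks import BaseFinetuning
>>> backbone = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4))
>>> BaseFinetuning.freeze(backbone, train_bn=False)
>>> any(p.requires_grad for p in backbone.parameters())
False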

freeze_before_training(pl_module)[source]

Override to add your freeze logic.

Return type

None

static freeze_module(module)[source]

Freezes the parameters of the provided module.

Parameters

module (Module) – A given module

Return type

None
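
A minimal sketch; both the weight and the bias of the layer stop requiring gradients:

>>> import torch.nn as nn
>>> from lightning.pytorch.callbacks import BaseFinetuning
>>> layer = nn.Linear(4, 2)
>>> BaseFinetuning.freeze_module(layer)
>>> [p.requires_grad for p in layer.parameters()]
[False, False]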

load_state_dict(state_dict)[source]

Called when loading a checkpoint. Implement to reload callback state given the callback's state_dict.

Parameters

state_dict (Dict[str, Any]) – the callback state returned by state_dict.

Return type

None

static make_trainable(modules)[source]

Unfreezes the parameters of the provided modules.

Parameters

modules (Union[Module, Iterable[Union[Module, Iterable]]]) – A given module or an iterable of modules

Return type

None
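
A minimal sketch, reversing an earlier freeze:

>>> import torch.nn as nn
>>> from lightning.pytorch.callbacks import BaseFinetuning
>>> block = nn.Linear(4, 2)
>>> BaseFinetuning.freeze(block)
>>> BaseFinetuning.make_trainable(block)
>>> all(p.requires_grad for p in block.parameters())
True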

on_fit_start(trainer, pl_module)[source]

Called when fit begins.

Return type

None

on_train_epoch_start(trainer, pl_module)[source]

Called when the train epoch begins.

Return type

None

setup(trainer, pl_module, stage)[source]

Called when fit, validate, test, predict, or tune begins.

Return type

None

state_dict()[source]

Called when saving a checkpoint. Implement to generate the callback's state_dict.

Return type

Dict[str, Any]

Returns

A dictionary containing callback state.
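
A sketch of persisting extra state in a subclass; the already_unfrozen field is hypothetical, and super() is called so the base class state is kept intact:

>>> class MyFinetuning(BaseFinetuning):
...     def state_dict(self):
...         state = super().state_dict()
...         state["already_unfrozen"] = getattr(self, "_already_unfrozen", False)  # hypothetical field
...         return state
...
...     def load_state_dict(self, state_dict):
...         # pop the custom key before delegating, so the base class only sees its own state
...         self._already_unfrozen = state_dict.pop("already_unfrozen", False)
...         super().load_state_dict(state_dict)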

static unfreeze_and_add_param_group(modules, optimizer, lr=None, initial_denom_lr=10.0, train_bn=True)[source]

Unfreezes a module and adds its parameters to an optimizer.

Parameters
  • modules (Union[Module, Iterable[Union[Module, Iterable]]]) – A module or iterable of modules to unfreeze. Their parameters will be added to an optimizer as a new param group.

  • optimizer (Optimizer) – The optimizer that will receive the new parameters via add_param_group

  • lr (Optional[float]) – Learning rate for the new param group.

  • initial_denom_lr (float) – If no lr is provided, the learning rate from the first param group will be used and divided by initial_denom_lr.

  • train_bn (bool) – Whether to train the BatchNormalization layers.

Return type

None
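
A sketch of the default learning-rate behavior (model and optimizer are illustrative): with lr=None, the new group receives the first group's learning rate divided by initial_denom_lr:

>>> import torch.nn as nn
>>> from torch.optim import Adam
>>> from lightning.pytorch.callbacks import BaseFinetuning
>>> head, backbone = nn.Linear(4, 2), nn.Linear(4, 4)
>>> BaseFinetuning.freeze(backbone)
>>> optimizer = Adam(head.parameters(), lr=0.01)
>>> BaseFinetuning.unfreeze_and_add_param_group(backbone, optimizer)
>>> optimizer.param_groups[1]["lr"]  # 0.01 / 10.0
0.001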