Strategy

class lightning.pytorch.strategies.Strategy(accelerator=None, checkpoint_io=None, precision_plugin=None)[source]

Bases: abc.ABC

Base class for all strategies that change the behaviour of the training, validation and test loops.

abstract all_gather(tensor, group=None, sync_grads=False)[source]

Perform an all_gather on all processes.

Parameters
  • tensor (Tensor) – the tensor to all_gather

  • group (Optional[Any]) – the process group to gather results from

  • sync_grads (bool) – flag that allows users to synchronize gradients for the all_gather operation

Return type

Tensor
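To illustrate the semantics: every process contributes a tensor, and every process receives the result stacked across all ranks. Below is a minimal single-process sketch in plain Python (lists stand in for tensors; `simulated_all_gather` and the four-rank setup are illustrative, not part of the Lightning API):

```python
# Simulate all_gather across 4 "ranks" in a single process.
# Each rank contributes its own tensor; every rank receives the full stack.

def simulated_all_gather(per_rank_tensors):
    """Return, for every rank, a copy of all ranks' tensors stacked together."""
    gathered = list(per_rank_tensors)
    return [list(gathered) for _ in per_rank_tensors]

# Rank r holds the "tensor" [r, r] before the collective.
world = [[r, r] for r in range(4)]
results = simulated_all_gather(world)

# After all_gather, every rank sees the same stacked result.
assert all(res == [[0, 0], [1, 1], [2, 2], [3, 3]] for res in results)
```

In the real API, each rank calls `strategy.all_gather(tensor)` on its own tensor and receives a tensor with an extra leading world-size dimension.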

backward(closure_loss, optimizer, *args, **kwargs)[source]

Forwards backward-calls to the precision plugin.

Parameters
  • closure_loss (Tensor) – a tensor holding the loss value to backpropagate

  • optimizer (Optional[Optimizer]) – an optional optimizer that gets passed down to the precision plugin's backward

  • *args – positional arguments that get passed down to the precision plugin's backward, intended as arguments for the actual function that performs the backward, like backward()

  • **kwargs – keyword arguments for the same purpose as *args

Return type

Tensor

abstract barrier(name=None)[source]

Synchronizes all processes, blocking each one until the whole group has entered this function.

Parameters

name (Optional[str]) – an optional name to pass into barrier.

Return type

None

batch_to_device(batch, device=None, dataloader_idx=0)[source]

Moves the batch to the correct device.

The returned batch is of the same type as the input batch, just with all tensors on the correct device.

Parameters
  • batch (Any) – the batch of samples to move to the correct device

  • device (Optional[device]) – the target device

  • dataloader_idx (int) – the index of the dataloader to which the batch belongs

Return type

Any
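The "same type as the input batch" guarantee means nested containers are traversed and rebuilt, with only the leaves moved. A rough pure-Python sketch of that traversal, with a stand-in `to_device` callable instead of a real tensor move (`move_to_device` and the device-tagging lambda are illustrative names, not the Lightning implementation):

```python
def move_to_device(batch, to_device):
    """Recursively apply `to_device` to leaves, preserving container types."""
    if isinstance(batch, dict):
        return type(batch)((k, move_to_device(v, to_device)) for k, v in batch.items())
    if isinstance(batch, (list, tuple)):
        return type(batch)(move_to_device(v, to_device) for v in batch)
    # A leaf: in Lightning this would be something like tensor.to(device).
    return to_device(batch)

# Stand-in "move": tag each leaf with the target device name.
batch = {"x": [1, 2], "y": (3,)}
moved = move_to_device(batch, lambda leaf: ("cuda:0", leaf))

assert moved == {"x": [("cuda:0", 1), ("cuda:0", 2)], "y": (("cuda:0", 3),)}
```

Note how the dict stays a dict and the tuple stays a tuple; only the leaf values are transformed.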

abstract broadcast(obj, src=0)[source]

Broadcasts an object to all processes.

Parameters
  • obj (TypeVar(TBroadcast)) – the object to broadcast

  • src (int) – source rank

Return type

TypeVar(TBroadcast)
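Conceptually, after the broadcast every rank holds the object that the `src` rank started with, regardless of what it held before. A single-process sketch of that behaviour (`simulated_broadcast` is an illustrative name, not a Lightning API):

```python
def simulated_broadcast(per_rank_objects, src=0):
    """Every rank ends up with the src rank's object."""
    obj = per_rank_objects[src]
    return [obj for _ in per_rank_objects]

# Each rank starts with its own object; after broadcast all hold rank 2's.
ranks = ["a", "b", "c", "d"]
after = simulated_broadcast(ranks, src=2)
assert after == ["c", "c", "c", "c"]
```

In the real API each rank calls `strategy.broadcast(obj, src=2)` and receives the same return value everywhere.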

connect(model)[source]

Called by the accelerator to connect the accelerator and the model with this plugin.

Return type

None

lightning_module_state_dict()[source]

Returns the model state.

Return type

Dict[str, Any]

model_sharded_context()[source]

Provides a hook to create modules in a distributed-aware context. This is useful when we want to shard the model immediately on instantiation, which can save memory and initialization time for extremely large models.

Returns: Model parallel context.

Return type

Generator

abstract model_to_device()[source]

Moves the model to the correct device.

Return type

None

on_exception(exception)[source]

Called when the trainer execution is interrupted by an exception.

Return type

None

on_predict_end()[source]

Called when predict ends.

Return type

None

on_predict_start()[source]

Called when predict begins.

Return type

None

on_test_end()[source]

Called when test ends.

Return type

None

on_test_start()[source]

Called when test begins.

Return type

None

on_train_batch_start(batch, batch_idx)[source]

Called in the training loop before anything happens for that batch.

Return type

None

on_train_end()[source]

Called when train ends.

Return type

None

on_train_start()[source]

Called when train begins.

Return type

None

on_validation_end()[source]

Called when validation ends.

Return type

None

on_validation_start()[source]

Called when validation begins.

Return type

None

optimizer_state(optimizer)[source]

Returns the state of an optimizer.

Allows for syncing/collating optimizer state from processes in custom plugins.

Return type

Dict[str, Tensor]

optimizer_step(optimizer, closure, model=None, **kwargs)[source]

Performs the actual optimizer step.

Parameters
  • optimizer (Optimizer) – the optimizer performing the step

  • closure (Callable[[], Any]) – closure calculating the loss value

  • model (Union[LightningModule, Module, None]) – reference to the model, optionally defining optimizer step related hooks

  • **kwargs – keyword arguments passed to optimizer.step

Return type

Any
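The closure pattern is the key detail here: the optimizer (or precision plugin) calls the closure to compute the loss and gradients, possibly more than once (e.g. for LBFGS or mixed precision), before applying the update. A toy sketch of that control flow with a hand-rolled scalar "optimizer" (`ToyOptimizer` is purely illustrative, not part of torch or Lightning):

```python
class ToyOptimizer:
    """Gradient descent on a single scalar parameter."""

    def __init__(self, lr=0.1):
        self.param = 4.0
        self.grad = None
        self.lr = lr

    def step(self, closure):
        loss = closure()                 # evaluate loss and populate gradients
        self.param -= self.lr * self.grad
        return loss

opt = ToyOptimizer()

def closure():
    # Loss is param**2, so the gradient is 2 * param.
    opt.grad = 2.0 * opt.param
    return opt.param ** 2

loss = opt.step(closure)
assert loss == 16.0                      # loss evaluated at param = 4.0
assert opt.param == 4.0 - 0.1 * 8.0      # one gradient-descent update
```

In Lightning the closure additionally runs the training step and `backward`, which is why strategies must pass it through to `optimizer.step` rather than calling it eagerly.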

post_backward(closure_loss)[source]

Run after the precision plugin executes backward.

Return type

None

pre_backward(closure_loss)[source]

Run before the precision plugin executes backward.

Return type

None

predict_step(*args, **kwargs)[source]

The actual predict step.

See predict_step() for more details.

Return type

Union[Tensor, Dict[str, Any]]

process_dataloader(dataloader)[source]

Wraps the dataloader if necessary.

Parameters

dataloader (object) – an iterable, ideally of type torch.utils.data.DataLoader

Return type

object

abstract reduce(tensor, group=None, reduce_op='mean')[source]

Reduces the given tensor (e.g. across GPUs/processes).

Parameters
  • tensor (Union[Tensor, Any]) – the tensor to sync and reduce

  • group (Optional[Any]) – the process group to reduce

  • reduce_op (Union[ReduceOp, str, None]) – the reduction operation. Defaults to 'mean'. Can also be the string 'sum' or a ReduceOp.

Return type

Union[Tensor, Any]
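A reduce combines one value per process into a single synchronized result, e.g. averaging the per-GPU loss for logging. The sketch below shows the 'mean' and 'sum' semantics on plain Python floats (`simulated_reduce` is an illustrative name; the real method operates on tensors via the backend's collective ops):

```python
def simulated_reduce(per_rank_values, reduce_op="mean"):
    """Combine one value per rank into a single synchronized result."""
    total = sum(per_rank_values)
    if reduce_op == "mean":
        return total / len(per_rank_values)
    if reduce_op == "sum":
        return total
    raise ValueError(f"unsupported reduce_op: {reduce_op!r}")

# e.g. one loss value per GPU (dyadic values, so the arithmetic is exact)
losses = [0.25, 0.75, 0.5, 0.5]
assert simulated_reduce(losses, "mean") == 0.5
assert simulated_reduce(losses, "sum") == 2.0
```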

reduce_boolean_decision(decision, all=True)[source]

Reduce a boolean decision across all processes.

Return type

bool
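With all=True every rank must agree (logical AND); with all=False a single rank suffices (logical OR). This is how decisions such as early stopping are kept consistent across ranks. In plain Python terms (the parameter is renamed `all_ranks` here to avoid shadowing the builtin; the sketch is illustrative):

```python
def simulated_reduce_boolean_decision(per_rank_decisions, all_ranks=True):
    """AND across ranks when all_ranks=True, OR otherwise."""
    return all(per_rank_decisions) if all_ranks else any(per_rank_decisions)

# e.g. "should we stop early?" as decided independently on each rank
decisions = [True, True, False, True]
assert simulated_reduce_boolean_decision(decisions, all_ranks=True) is False
assert simulated_reduce_boolean_decision(decisions, all_ranks=False) is True
```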

remove_checkpoint(filepath)[source]

Removes the checkpoint file at filepath from the filesystem.

Parameters

filepath (Union[str, Path]) – path to the checkpoint

Return type

None

save_checkpoint(checkpoint, filepath, storage_options=None)[source]

Save model/training states as a checkpoint file through state-dump and file-write.

Parameters
  • checkpoint (Dict[str, Any]) – dict containing model and trainer state

  • filepath (Union[str, Path]) – write-target file's path

  • storage_options (Optional[Any]) – parameter for how to save to storage, passed to the CheckpointIO plugin

Return type

None
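The "state-dump and file-write" pattern behind save_checkpoint and remove_checkpoint can be sketched as follows. This is a simplified stand-in: the function names are illustrative, real checkpoints are serialized with torch.save via a CheckpointIO plugin, and typically only rank zero writes; pickle is used here only to keep the sketch self-contained:

```python
import pickle
import tempfile
from pathlib import Path

def sketch_save_checkpoint(checkpoint, filepath):
    """State-dump and file-write: serialize the state dict to disk."""
    filepath = Path(filepath)
    filepath.parent.mkdir(parents=True, exist_ok=True)
    with open(filepath, "wb") as f:
        pickle.dump(checkpoint, f)  # real code: CheckpointIO / torch.save

def sketch_remove_checkpoint(filepath):
    """Remove the checkpoint file from the filesystem."""
    Path(filepath).unlink(missing_ok=True)

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "epoch=3.ckpt"
    ckpt = {"epoch": 3, "state_dict": {"layer.weight": [0.1, 0.2]}}

    sketch_save_checkpoint(ckpt, path)
    with open(path, "rb") as f:
        restored = pickle.load(f)
    assert restored == ckpt             # round-trips the full state dict

    sketch_remove_checkpoint(path)
    assert not path.exists()
```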

setup(trainer)[source]

Sets up plugins for the trainer fit and creates optimizers.

Parameters

trainer (Trainer) – the trainer instance

Return type

None

setup_environment()[source]

Sets up any processes or distributed connections.

This is called before the LightningModule/DataModule setup hook, which allows the user to access the accelerator environment before setup is complete.

Return type

None

setup_optimizers(trainer)[source]

Creates optimizers and schedulers.

Parameters

trainer (Trainer) – the Trainer these optimizers should be connected to

Return type

None

setup_precision_plugin()[source]

Attaches the precision plugin to the accelerator.

Return type

None

teardown()[source]

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type

None

test_step(*args, **kwargs)[source]

The actual test step.

See test_step() for more details.

Return type

Union[Tensor, Dict[str, Any], None]

training_step(*args, **kwargs)[source]

The actual training step.

See training_step() for more details.

Return type

Union[Tensor, Dict[str, Any]]

validation_step(*args, **kwargs)[source]

The actual validation step.

See validation_step() for more details.

Return type

Union[Tensor, Dict[str, Any], None]

property handles_gradient_accumulation: bool

Whether the plugin handles gradient accumulation internally.

Return type

bool

abstract property is_global_zero: bool

Whether the current process is the rank zero process, not only on the local node but across all nodes.

Return type

bool

property lightning_module: Optional[lightning.pytorch.core.module.LightningModule]

Returns the pure LightningModule without potential wrappers.

Return type

Optional[LightningModule]

property lightning_restore_optimizer: bool

Override to disable Lightning restoring optimizers/schedulers.

This is useful for plugins which manage restoring optimizers/schedulers themselves.

Return type

bool

property model: Optional[torch.nn.modules.module.Module]

Returns the potentially wrapped LightningModule.

Return type

Optional[Module]

property restore_checkpoint_after_setup: bool

Override to delay restoring from checkpoint until after the setup phase has completed. This is useful when the strategy requires all the setup hooks to run before loading the checkpoint.

Return type

bool

Returns

If True, restore the checkpoint after strategy setup.

abstract property root_device: torch.device

Returns the root device.

Return type

device