ParallelStrategy

class lightning.pytorch.strategies.ParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None)[source]

Bases: lightning.pytorch.strategies.strategy.Strategy, abc.ABC

Plugin for training with multiple processes in parallel.

all_gather(tensor, group=None, sync_grads=False)[source]

Perform an all_gather on all processes.

Return type

Tensor
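
A minimal sketch of how this is typically consumed. Inside a LightningModule, self.all_gather delegates to the active strategy; GatherExample and its metric are hypothetical:

    import torch
    import lightning.pytorch as pl

    class GatherExample(pl.LightningModule):
        def validation_step(self, batch, batch_idx):
            # Hypothetical per-rank scalar metric.
            local_metric = torch.tensor([1.0], device=self.device)
            # Delegates to the strategy; under DDP with N processes the
            # result stacks to shape (N, 1).
            gathered = self.all_gather(local_metric, sync_grads=False)
            if self.trainer.is_global_zero:
                self.print("mean across ranks:", gathered.mean().item())

With sync_grads=True the gather participates in autograd, which matters when the gathered tensor feeds back into a loss.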

block_backward_sync()[source]

Blocks DDP gradient synchronization behaviour on the backward pass.

This is useful for skipping the sync when accumulating gradients, reducing communication overhead.

Returns

A context manager with sync behaviour off.

Return type

Generator
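
Lightning's built-in loop already wraps intermediate accumulation steps in this context manager; a manual-optimization sketch that calls it directly, assuming training runs under a parallel strategy such as DDP (ManualAccumulation and its sizes are hypothetical):

    import torch
    import lightning.pytorch as pl

    class ManualAccumulation(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.automatic_optimization = False  # drive the loop by hand
            self.layer = torch.nn.Linear(32, 2)
            self.accumulate = 4

        def training_step(self, batch, batch_idx):
            opt = self.optimizers()
            loss = self.layer(batch).sum()
            if (batch_idx + 1) % self.accumulate != 0:
                # Skip DDP gradient sync on intermediate accumulation steps.
                with self.trainer.strategy.block_backward_sync():
                    self.manual_backward(loss)
            else:
                self.manual_backward(loss)  # final step: sync gradients
                opt.step()
                opt.zero_grad()

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)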

reduce_boolean_decision(decision, all=True)[source]

Reduces a boolean decision over distributed processes. By default this is analogous to all from the standard library, returning True only if all input decisions evaluate to True. If all is set to False, it behaves like any instead.

Parameters
  • decision (bool) – A single input decision.

  • all (bool) – Whether to logically emulate all or any. Defaults to True.

Returns

The reduced boolean decision.

Return type

bool
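
A sketch of a typical use: agreeing on an early stop across ranks. StopOnNan is hypothetical and assumes training_step returns a dict with a "loss" key:

    import lightning.pytorch as pl

    class StopOnNan(pl.Callback):
        def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
            local_nan = bool(outputs["loss"].isnan().any())
            # all=False behaves like any(): one affected rank is enough
            # to stop every process consistently.
            if trainer.strategy.reduce_boolean_decision(local_nan, all=False):
                trainer.should_stop = True

Reducing the decision keeps all ranks in lockstep; letting a single rank set trainer.should_stop on its own could leave the other processes waiting at their next collective call.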

teardown()[source]

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type

None
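
A sketch of a custom strategy releasing its own resources before deferring to the base class; CleanupDDPStrategy and its scratch buffers are hypothetical:

    from lightning.pytorch.strategies import DDPStrategy

    class CleanupDDPStrategy(DDPStrategy):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self._scratch_buffers = []  # hypothetical per-process memory

        def teardown(self):
            # Free our own resources first, then let the base class
            # unwrap the model and release accelerator state.
            self._scratch_buffers.clear()
            super().teardown()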

property is_global_zero: bool

Whether the current process is the rank zero process, not only on the local node but across all nodes.

Return type

bool
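
A common guard built on this property: perform side effects once per run rather than once per process. ReportOnce and the output filename are hypothetical:

    import lightning.pytorch as pl

    class ReportOnce(pl.LightningModule):
        def on_fit_end(self):
            # True on exactly one process across all nodes.
            if self.trainer.strategy.is_global_zero:
                with open("run_summary.txt", "w") as f:
                    f.write("training finished\n")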

abstract property root_device: torch.device

Return the root device.

Return type

device
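
Because the property is abstract, concrete subclasses must provide it. A fragment mirroring what Lightning's built-in parallel strategies do (MyParallelStrategy is hypothetical, and a real subclass must also implement the remaining abstract members of Strategy):

    import torch
    from lightning.pytorch.strategies import ParallelStrategy

    class MyParallelStrategy(ParallelStrategy):
        @property
        def root_device(self) -> torch.device:
            # The device assigned to this process, indexed by local rank.
            assert self.parallel_devices is not None
            return self.parallel_devices[self.local_rank]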