ParallelStrategy
- class lightning.pytorch.strategies.ParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None)
 Bases: lightning.pytorch.strategies.strategy.Strategy, abc.ABC
Plugin for training with multiple processes in parallel.
- block_backward_sync()
 Blocks DDP gradient synchronization behaviour on the backward pass.
This is useful for skipping synchronization when accumulating gradients, reducing communication overhead.
Returns: a context manager with sync behaviour off.
- Return type
  Generator
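As an illustration, here is a minimal sketch of how this context manager might be used during gradient accumulation; `strategy`, `loss`, and `accumulating` are assumed names for this example, not part of the documented API:

```python
# Hypothetical gradient-accumulation step (names are illustrative).
if accumulating:
    # Intermediate micro-batch: gradients accumulate locally and
    # no cross-process all-reduce is issued on backward.
    with strategy.block_backward_sync():
        loss.backward()
else:
    # Final micro-batch: backward triggers the usual DDP gradient sync.
    loss.backward()
```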
- reduce_boolean_decision(decision, all=True)
 Reduces a boolean decision over distributed processes. By default this is analogous to
`all` from the standard library, returning `True` only if all input decisions evaluate to `True`. If `all` is set to `False`, it behaves like `any` instead.
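For example, this is the kind of call an early-stopping check might make so that every rank ends up with the same decision. A sketch under the assumption that `strategy` is an initialized ParallelStrategy instance; the loss values are illustrative:

```python
# Illustrative values; in practice these come from validation logic.
val_loss, best_loss = 0.42, 0.40
local_should_stop = val_loss > best_loss  # this rank's own opinion

# all=True (default): True only if every rank votes to stop.
stop = strategy.reduce_boolean_decision(local_should_stop)

# all=False: True as soon as any single rank votes to stop.
stop_any = strategy.reduce_boolean_decision(local_should_stop, all=False)
```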
- teardown()
 This method is called to tear down the training process.
It is the right place to release memory and free other resources.
- Return type
  None
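A subclass holding extra resources would typically override this hook and still call the base implementation. A rough sketch; `MyStrategy` is hypothetical and omits the abstract members a real strategy must define:

```python
import torch
from lightning.pytorch.strategies import ParallelStrategy

class MyStrategy(ParallelStrategy):
    # ... abstract members omitted for brevity ...

    def teardown(self) -> None:
        # Free strategy-specific resources first ...
        torch.cuda.empty_cache()
        # ... then let the base class release its own state.
        super().teardown()
```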
- property is_global_zero: bool
 Whether the current process is the rank zero process not only on the local node, but for all nodes.
- Return type
  bool
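This is typically used to guard side effects that should happen exactly once per training run rather than once per node. A sketch assuming `strategy` is an initialized ParallelStrategy instance:

```python
# `strategy` is assumed to be an initialized ParallelStrategy instance.
if strategy.is_global_zero:
    # Executes on exactly one process in the whole cluster
    # (rank 0 of node 0), not once per node.
    print("Saving aggregated metrics ...")
```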
- abstract property root_device: torch.device
 Return the root device.
- Return type
  torch.device
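Since this property is abstract, every concrete strategy must implement it. A minimal sketch of what a subclass might return, loosely modelled on how device-parallel strategies map ranks to devices; `ExampleStrategy` is hypothetical:

```python
import torch
from lightning.pytorch.strategies import ParallelStrategy

class ExampleStrategy(ParallelStrategy):
    @property
    def root_device(self) -> torch.device:
        # Each process reports the device it owns,
        # e.g. cuda:0, cuda:1, ... indexed by local rank.
        return self.parallel_devices[self.local_rank]
```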