DDPStrategy¶
- class lightning.pytorch.strategies.DDPStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, model_averaging_period=None, process_group_backend=None, timeout=datetime.timedelta(seconds=1800), start_method='popen', **kwargs)[source]¶
Bases:
lightning.pytorch.strategies.parallel.ParallelStrategy
Strategy for multi-process single-device training on one or multiple nodes.
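A minimal configuration sketch of how this strategy is typically passed to a `Trainer`. The parameter values shown (backend, device count) are hypothetical examples, and `MyModel` stands in for a user-defined `LightningModule`:

```python
# Hypothetical configuration sketch: construct a DDPStrategy and hand it
# to the Trainer. The shorthand Trainer(strategy="ddp") selects the same
# strategy with its defaults.
from datetime import timedelta

from lightning.pytorch import Trainer
from lightning.pytorch.strategies import DDPStrategy

strategy = DDPStrategy(
    process_group_backend="nccl",     # torch.distributed backend (example value)
    timeout=timedelta(seconds=1800),  # collective-operation timeout (the default)
    start_method="popen",             # how worker processes are launched (the default)
)

trainer = Trainer(accelerator="gpu", devices=4, strategy=strategy)
# trainer.fit(MyModel())  # MyModel is a user-defined LightningModule
```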
- barrier(*args, **kwargs)[source]¶
Synchronizes all processes, blocking each one until the whole group has entered this function.
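The semantics can be illustrated with a plain `threading.Barrier` as a stand-in for the distributed barrier (no Lightning APIs involved; the worker function and counts are illustrative only):

```python
import threading

# Stand-in for DDPStrategy.barrier(): no worker proceeds past the barrier
# until every member of the group has reached it.
n_workers = 4
barrier = threading.Barrier(n_workers)
order = []  # records the phase each worker is in (append is GIL-protected)

def worker(rank):
    order.append(("before", rank))
    barrier.wait()              # blocks until all n_workers have called wait()
    order.append(("after", rank))

threads = [threading.Thread(target=worker, args=(r,)) for r in range(n_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every "before" entry was recorded before any "after" entry could be.
last_before = max(i for i, (phase, _) in enumerate(order) if phase == "before")
first_after = min(i for i, (phase, _) in enumerate(order) if phase == "after")
```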
- on_exception(exception)[source]¶
Called when the trainer execution is interrupted by an exception.
- Return type
- optimizer_step(optimizer, closure, model=None, **kwargs)[source]¶
Performs the actual optimizer step.
- Parameters
- Return type
- reduce(tensor, group=None, reduce_op='mean')[source]¶
Reduces a tensor from several distributed processes to one aggregated tensor.
- Parameters
- Return type
- Returns
The reduced value; if the input was not a tensor, the output is returned unchanged.
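A pure-Python sketch of the reduce semantics, with a hypothetical helper standing in for the distributed collective; `'mean'` and `'sum'` are common `reduce_op` values, and the pass-through branch mirrors the documented behaviour for non-tensor input:

```python
# Stand-in for DDPStrategy.reduce(): combine one value per process into a
# single aggregated value. Non-numeric input is returned unchanged, mirroring
# the documented non-tensor pass-through.
def reduce_across_processes(per_process_values, reduce_op="mean"):
    if not all(isinstance(v, (int, float)) for v in per_process_values):
        return per_process_values  # non-tensor input: output is unchanged
    total = sum(per_process_values)
    if reduce_op == "sum":
        return total
    if reduce_op == "mean":
        return total / len(per_process_values)
    raise ValueError(f"unsupported reduce_op: {reduce_op!r}")

losses = [1.0, 2.0, 3.0, 2.0]                 # one loss value per process
mean_loss = reduce_across_processes(losses)            # -> 2.0
sum_loss = reduce_across_processes(losses, "sum")      # -> 8.0
```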
- setup_environment()[source]¶
Setup any processes or distributed connections.
This is called before the LightningModule/DataModule setup hook, which allows the user to access the accelerator environment before setup is complete.
- Return type
- teardown()[source]¶
This method is called to tear down the training process.
It is the right place to release memory and free other resources.
- Return type
- validation_step(*args, **kwargs)[source]¶
The actual validation step.
See validation_step() for more details.
- property root_device: torch.device¶
Return the root device.
- Return type