FSDPStrategy
- class lightning.pytorch.strategies.FSDPStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None, process_group_backend=None, cpu_offload=None, mixed_precision=None, activation_checkpointing=None, **kwargs)[source]
Bases: lightning.pytorch.strategies.parallel.ParallelStrategy
Strategy for Fully Sharded Data Parallel provided by torch.distributed.
Warning
This is an experimental feature.
Fully Sharded Training shards the entire model across all available GPUs, allowing you to scale model size, whilst using efficient communication to reduce overhead. In practice, this means we can remain at parity with PyTorch DDP, whilst scaling our model sizes dramatically. The technique is similar to ZeRO-Stage 3.
For more information, check out this blogpost.
Defaults have been set and options have been exposed, but they may require tuning depending on the memory/speed trade-off you need. We suggest having a look at this tutorial for more information.
- Parameters
  - cpu_offload (Union[bool, CPUOffload, None]) – See the cpu_offload parameter in torch.distributed.fsdp.FullyShardedDataParallel.
  - mixed_precision (Optional[MixedPrecision]) – See the mixed_precision parameter in torch.distributed.fsdp.FullyShardedDataParallel.
  - activation_checkpointing (Union[Type[Module], List[Type[Module]], None]) – A single layer or a list of layer classes for which you want to enable activation checkpointing. This is typically your transformer block (including attention + feed-forward). Enabling this can free up a significant amount of memory at the cost of speed, since activations in these layers need to be recomputed during backpropagation. See the sketch after this list.
  - **kwargs – See the available parameters in torch.distributed.fsdp.FullyShardedDataParallel.
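A minimal sketch of wiring these options into a Trainer; MyTransformerBlock is a hypothetical stand-in for your own transformer layer class:

    import torch.nn as nn
    from lightning.pytorch import Trainer
    from lightning.pytorch.strategies import FSDPStrategy
    from torch.distributed.fsdp import CPUOffload

    class MyTransformerBlock(nn.Module):  # hypothetical; use your own block here
        def __init__(self):
            super().__init__()
            self.attn = nn.MultiheadAttention(embed_dim=128, num_heads=4)
            self.ff = nn.Linear(128, 128)

    strategy = FSDPStrategy(
        cpu_offload=CPUOffload(offload_params=True),  # offload sharded parameters to CPU
        activation_checkpointing=MyTransformerBlock,  # recompute this block's activations in backward
    )
    trainer = Trainer(accelerator="gpu", devices=4, strategy=strategy)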
- barrier(name=None)[source]
Synchronizes all processes, blocking each process until the whole group has entered this function.
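A hedged sketch of a common use, assuming the strategy is reached through a running Trainer and download_data is a hypothetical helper:

    # e.g. inside a LightningModule hook such as on_fit_start
    if self.trainer.global_rank == 0:
        download_data()  # hypothetical helper; only rank 0 does the work
    self.trainer.strategy.barrier()  # all other ranks block here until rank 0 arrives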
- model_sharded_context()[source]
Provides a hook to create modules in a distributed-aware context, so that the model can be sharded immediately upon instantiation. For extremely large models, this can save memory and initialization time. See the sketch below.
Returns: Model parallel context.
- Return type
Generator
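In practice the Trainer enters this context around the LightningModule.configure_sharded_model hook rather than the user calling it directly; a minimal sketch, assuming that hook is available in this release:

    import torch.nn as nn
    from lightning.pytorch import LightningModule

    class MyModel(LightningModule):
        def configure_sharded_model(self):
            # Runs inside model_sharded_context(), so the layers are sharded
            # as they are created rather than after full materialization.
            self.block = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU())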
- reduce(tensor, group=None, reduce_op='mean')[source]
Reduces a tensor from several distributed processes to one aggregated tensor.
- Parameters
  - tensor (Union[Tensor, Any]) – the tensor to sync and reduce
  - group (Optional[Any]) – the process group to gather results from. Defaults to all processes (world)
  - reduce_op (Union[ReduceOp, str, None]) – the reduction operation. Defaults to 'mean'/'avg'. Can also be a string 'sum' to calculate the sum during reduction.
- Return type
Tensor
- Returns
The reduced value, except when the input was not a tensor, in which case the output is unchanged.
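A brief hedged sketch of aggregating metrics across ranks from inside a LightningModule; val_loss and batch_count are hypothetical per-process tensors:

    # e.g. inside a LightningModule hook with access to the trainer
    avg_loss = self.trainer.strategy.reduce(val_loss, reduce_op="mean")  # average over all ranks
    n_samples = self.trainer.strategy.reduce(batch_count, reduce_op="sum")  # sum over all ranks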
- setup_environment()[source]
Set up any processes or distributed connections.
This is called before the LightningModule/DataModule setup hook, which allows the user to access the accelerator environment before setup is complete.
- Return type
None
- teardown()[source]
This method is called to tear down the training process.
It is the right place to release memory and free other resources.
- Return type
None
- validation_step(*args, **kwargs)[source]
The actual validation step.
See validation_step() for more details.
- property root_device: torch.device
Return the root device.
- Return type
torch.device