reduce_bucket_size: the bucket size used for reduce operations.
contiguous_gradients: whether to make gradients contiguous.

Speed (left is faster than right):
Stage 0 (DDP) > Stage 1 > Stage 2 > Stage 2 + offload > Stage 3 > Stage 3 + offload

GPU memory usage (the right side is more memory-efficient than the left):
Stage 0 (DDP) < Stage 1 < Stage 2 < Stage 2 + offload < ...
In the change above, we set the stage field to 2 and configured the other optimization options available in ZeRO Stage 2. For example, we enabled contiguous_gradients to reduce memory fragmentation during the backward pass. A complete description of these optimization settings can be found at https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training. With these changes in place, we can now launch training.
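As a minimal sketch of the steps above, the Stage 2 settings can be written out as a config file and handed to the DeepSpeed launcher. The file names ds_config.json and train.py, and the batch size, are assumptions for illustration, not values from the original:

```python
import json

# Minimal ZeRO Stage 2 config sketch; the values here are illustrative,
# not tuned recommendations.
ds_config = {
    "train_batch_size": 16,               # assumed batch size
    "zero_optimization": {
        "stage": 2,
        "contiguous_gradients": True,     # reduce fragmentation in backward
        "overlap_comm": True,             # overlap reduction with backward compute
    },
}

# Write the config so the launcher can pick it up.
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)

# Training would then be launched with, e.g.:
#   deepspeed train.py --deepspeed --deepspeed_config ds_config.json
```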
"contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": 1e6, "stage3_prefetch_bucket_size": 0.94e6, "stage3_param_persistence_threshold": 1e4, "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": ...
The Zero Redundancy Optimizer (ZeRO) removes the memory redundancies across data-parallel processes by partitioning the three model states (optimizer states, gradients, and parameters) across data-parallel processes instead of replicating them. By doing this, it boosts memory efficiency compared to classic data parallelism.
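To make the partitioning concrete, the ZeRO paper's per-GPU memory accounting for the three model states can be sketched in a few lines. This assumes mixed-precision Adam (K = 12 bytes of optimizer state per parameter: fp32 master weights, momentum, and variance) and ignores activations and temporary buffers:

```python
# Per-GPU memory estimate (bytes) for a model with `psi` parameters trained
# with mixed-precision Adam, following the ZeRO paper's accounting:
# fp16 params (2 bytes) + fp16 grads (2 bytes) + optimizer states (K = 12 bytes).
def zero_memory_per_gpu(psi, n_gpus, stage):
    K = 12
    if stage == 0:  # classic data parallelism: everything replicated
        return (2 + 2 + K) * psi
    if stage == 1:  # optimizer states partitioned
        return (2 + 2) * psi + K * psi / n_gpus
    if stage == 2:  # + gradients partitioned
        return 2 * psi + (2 + K) * psi / n_gpus
    if stage == 3:  # + parameters partitioned
        return (2 + 2 + K) * psi / n_gpus
    raise ValueError(f"unknown stage {stage}")

# Example: a 7.5B-parameter model on 64 GPUs.
psi, n = 7.5e9, 64
for s in range(4):
    print(f"stage {s}: {zero_memory_per_gpu(psi, n, s) / 2**30:.1f} GiB")
```

Running this shows why each successive stage on the memory-efficiency ladder above frees more GPU memory: Stage 3 divides all three states by the data-parallel degree.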
{ "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", } "contiguous_gradients": true, "overlap_comm": true } } 如上所述,除了将stage字段设置为2(启用ZeRO Stage 2,但Stage 1也可以),我们还需要将offload_optimizer设备设置为cpu以启用ZeRO-Offload优化。此外,我们可以...
contiguous_gradients: controls whether gradients are copied into a contiguous buffer as they are produced, to avoid memory fragmentation; defaults to ...
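Conceptually, this option gathers each parameter's gradient into one pre-allocated flat buffer instead of leaving them scattered across separately allocated tensors, so a reduce can run over a single contiguous region. A toy NumPy sketch of that idea, not DeepSpeed's actual implementation:

```python
import numpy as np

# Gradients for three parameters of different shapes, each allocated separately
# (this is the fragmented layout contiguous_gradients avoids).
grads = [np.ones((4, 4)), np.ones(10) * 2.0, np.ones((2, 3)) * 3.0]

# Pre-allocate one contiguous buffer large enough for all gradients ...
total = sum(g.size for g in grads)
flat = np.empty(total, dtype=grads[0].dtype)

# ... and give each parameter a view into it, copying values over.
views, offset = [], 0
for g in grads:
    view = flat[offset:offset + g.size].reshape(g.shape)
    view[...] = g                     # copy into the contiguous buffer
    views.append(view)
    offset += g.size

# All per-parameter gradients now live back-to-back in one allocation,
# so a reduce/all-reduce can operate on `flat` in a single call.
assert all(np.shares_memory(v, flat) for v in views)
```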
{"stage": 2,"allgather_partitions": True,"allgather_bucket_size": 2e8,"overlap_comm": True,"reduce_scatter": True,"reduce_bucket_size": 2e8,"contiguous_gradients": True,"cpu_offload": False}}model_engine, optimizer, _, _ = deepspeed.initialize(args=params,model=model,model_parameters...
"contiguous_gradients":true, "sub_group_size":1e9, "reduce_bucket_size":"auto", "stage3_prefetch_bucket_size":"auto", "stage3_param_persistence_threshold":"auto", "stage3_max_live_parameters":1e9, "stage3_max_reuse_distance":1e9, ...
"contiguous_gradients": true, "cpu_offload": true, "cpu_offload_params": false, "cpu_offload_use_pin_memory": false, "sub_group_size": 1e9, "stage3_prefetch_bucket_size": 5e7, "stage3_param_persistence_threshold": 1e6, "stage3_max_live_parameters": 1e9, ...
"loss_scale":0,"initial_scale_power":16,"loss_scale_window":1000,"hysteresis":2,"min_loss_scale":1},"zero_optimization":{"stage":2,"allgather_partitions":true,"allgather_bucket_size":5e8,"reduce_scatter":true,"reduce_bucket_size":5e8,"overlap_comm":false,"contiguous_gradients":true...