pytorch/pytorch · Workflow run: Assign User on Comment · Create and send full_tensor on ProcessGroup-supported device in _broadcast_tensors #158828
Traceback excerpt (raised during DTensor sharding propagation):

…:46, in __call__
    return self.cache(*args, **kwargs)
File ~/nfs/anaconda3/envs/geochat/lib/python3.10/site-packages/torch/distributed/tensor/_sharding_prop.py:450, in propagate_op_sharding_non_cached
    raise NotImplementedError(
NotImplementedError: Operator aten.to.dtype_layout does not have a ...
Tensors and Dynamic neural networks in Python with strong GPU acceleration - Create and send `full_tensor` on `ProcessGroup`-supported device in `_broadcast_tensors` · pytorch/pytorch@278b67b
[C10D] Support group_dst/group_src in c10d send/recv object_list (#1… · pytorch/pytorch@98e6e69
It would be useful for torch.distributed.send and .recv to be able to send arbitrary objects. I have two requests: 1. A version of send and recv that does not copy into a caller-provided tensor but instead returns a new tensor. This way, we can send tensors of arbitrar...
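The object-collective APIs in torch.distributed (`broadcast_object_list`, `send_object_list`, `recv_object_list`) address this kind of request by pickling arbitrary Python objects into tensors under the hood. A minimal sketch on a single-rank gloo group (point-to-point `send_object_list`/`recv_object_list` need at least two ranks, so only the broadcast variant is exercised here):

```python
import os

import torch.distributed as dist

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group("gloo", rank=0, world_size=1)

# Any picklable object can ride in the list; it is updated in place
# on every rank after the collective.
payload = [{"step": 3, "metrics": [0.1, 0.2]}]
dist.broadcast_object_list(payload, src=0)
print(payload[0]["step"])  # 3

dist.destroy_process_group()
```

Because the objects are serialized with pickle, this path is slower than tensor collectives and should not be used for untrusted data.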
Create and send `full_tensor` on `ProcessGroup`-supported device in `_broadcast_tensors` · pytorch/pytorch@56edec3
Workflow run: Create and send full_tensor on ProcessGroup-supported device in _broadcast_tensors #213119 · Job: bc_linter · Re-run triggered March 11, 2025 20:58 by mori360 · PR #148865 (ringohoffman:fix-_broadcast_tensors-local_state-on-cpu)
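The idea behind the fix named in these runs is that a broadcast collective requires the tensor to live on a device the ProcessGroup backend supports (e.g. CUDA for nccl, CPU for gloo), so the full tensor should be created or moved there before the collective rather than broadcast from wherever the local state happens to sit. A hedged sketch of that pattern; the helper name is illustrative, not the actual `_broadcast_tensors` implementation:

```python
import os

import torch
import torch.distributed as dist


def broadcast_on_pg_device(tensor: torch.Tensor, src: int = 0) -> torch.Tensor:
    """Move `tensor` to a device the default ProcessGroup supports, then broadcast."""
    backend = dist.get_backend()
    # nccl only handles CUDA tensors; gloo handles CPU tensors.
    pg_device = torch.device("cuda") if backend == "nccl" else torch.device("cpu")
    tensor = tensor.to(pg_device)  # no-op if already on the right device
    dist.broadcast(tensor, src=src)
    return tensor


os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29502")
dist.init_process_group("gloo", rank=0, world_size=1)

out = broadcast_on_pg_device(torch.arange(4.0))
print(out.tolist())  # [0.0, 1.0, 2.0, 3.0]

dist.destroy_process_group()
```

With a single gloo rank the broadcast is a no-op, but the device-selection step is the part the PR title is about: building `full_tensor` on a ProcessGroup-supported device instead of assuming the local state's device is usable by the backend.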