
• 8 min read
Collective Communication in Distributed Systems with PyTorch
The full code for this article is on GitHub
Today, we will explore PyTorch's distributed collective communication primitives. When working with multiple GPUs, tensors must be shared across them, and this is where torch.distributed comes in. It provides a set of APIs to