PyTorch/XLA 1.10 release
Cloud TPUs now support the PyTorch 1.10 release, via PyTorch/XLA integration. The release includes daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.
On top of the underlying improvements and bug fixes in PyTorch's 1.10 release, this release adds several PyTorch/XLA-specific bug fixes and feature additions; brief usage sketches for a few of these items follow the list:
- Add support for reduce_scatter
- Introduce the AMP zero-gradients optimization for XLA:GPU
- Introduce the environment variables XLA_DOWNCAST_BF16 and XLA_DOWNCAST_FP16 to downcast input tensors
- adaptive_max_pool2d lowering
- nan_to_num lowering
- sgn lowering
- logical_not/logical_xor/logical_or/logical_and lowering
- amax lowering
- amin lowering
- std_mean lowering
- var_mean lowering
- lerp lowering
- isnan lowering
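
The reduce_scatter item above refers to the new collective exposed through torch_xla.core.xla_model. The sketch below is a minimal, hedged example of how it can be called; the argument names (reduce_type, scale, scatter_dim, shard_count) and the xmp.spawn launch pattern are assumptions based on the existing xm collectives, not a definitive recipe.

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp


def _mp_fn(index):
    device = xm.xla_device()
    world_size = xm.xrt_world_size()

    # Each replica contributes one tensor; reduce_scatter sums across replicas
    # and leaves each replica holding its 1/world_size shard along scatter_dim.
    t = torch.ones(world_size * 4, 128, device=device) * (index + 1)
    shard = xm.reduce_scatter(
        xm.REDUCE_SUM,        # reduction type (sum)
        t,                    # input tensor on the XLA device
        scale=1.0,            # post-reduction scaling factor
        scatter_dim=0,        # dimension that gets sharded across replicas
        shard_count=world_size)
    xm.mark_step()
    print(index, tuple(shard.shape))


if __name__ == '__main__':
    xmp.spawn(_mp_fn, args=())
```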
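For the new downcast environment variables, the sketch below assumes they behave like the existing XLA_USE_BF16 flag: they are read when the XLA backend starts, so they must be set before torch_xla is imported, and they change the on-device storage type rather than the dtype PyTorch reports.

```python
import os

# Assumed behavior: set the variable before importing torch_xla so the
# backend picks it up at startup.
os.environ['XLA_DOWNCAST_BF16'] = '1'   # or: os.environ['XLA_DOWNCAST_FP16'] = '1'

import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
x = torch.randn(4, 4, device=device)
y = x @ x
# PyTorch still reports torch.float32 for x and y, but the underlying XLA
# data is stored and computed in bfloat16 (float16 with XLA_DOWNCAST_FP16).
print(x.dtype, y.dtype)
```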
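The op lowerings listed above mean these operators are now compiled directly into the XLA graph instead of falling back to a CPU implementation. The following sketch simply exercises several of them on an XLA device tensor; the benefit of staying in the compiled graph is the assumed motivation, the ops themselves are standard PyTorch calls.

```python
import torch
import torch.nn.functional as F
import torch_xla.core.xla_model as xm

device = xm.xla_device()
a = torch.randn(8, 8, device=device)
b = torch.randn(8, 8, device=device)
img = torch.randn(1, 3, 32, 32, device=device)

pooled = F.adaptive_max_pool2d(img, output_size=4)  # adaptive_max_pool2d lowering
cleaned = torch.nan_to_num(a)                       # nan_to_num lowering
signs = torch.sgn(a)                                # sgn lowering
mask = torch.logical_xor(a > 0, b > 0)              # logical_* lowerings
row_max = torch.amax(a, dim=1)                      # amax lowering
row_min = torch.amin(a, dim=1)                      # amin lowering
std, mean = torch.std_mean(a, dim=0)                # std_mean lowering
var, mean2 = torch.var_mean(a, dim=0)               # var_mean lowering
mixed = torch.lerp(a, b, 0.5)                       # lerp lowering
has_nan = torch.isnan(a)                            # isnan lowering

xm.mark_step()  # materialize the compiled graph on the device
print(pooled.shape, row_max.shape, mask.dtype)
```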