
PyTorch/XLA 1.6 Release (GA)

Released by @jysohn23 on 19 Aug 21:19 (commit 9703109)

Highlights

Cloud TPUs now support the PyTorch 1.6 release via PyTorch/XLA integration. With this release we mark general availability (GA): models such as ResNet, the FairSeq Transformer, RoBERTa, and the HuggingFace GLUE task models have been rigorously tested and optimized.

In addition, with the PyTorch/XLA 1.6 release you no longer need to run the env-setup.py script on Colab/Kaggle, as those environments are now compatible with native torch wheels. See the sketch below for an example of the new Colab/Kaggle install step. You can still use that script if you would like to run with our latest unstable releases.
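
As a rough illustration, the new install step reduces to a single pip command in a Colab/Kaggle cell. The wheel URL below is illustrative and assumes the standard tpu-pytorch wheel location; check the official instructions for the exact URL for your Python version:

```
!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.6-cp36-cp36m-linux_x86_64.whl
```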

New Features

  • XLA RNG state checkpointing/loading (#2096), illustrated in the sketch after this list
  • Device Memory XRT API (#2295), also shown in the sketch after this list
  • [Kaggle/Colab] Small host VM memory environment utility (#2025)
  • [Advanced User] XLA Builder Support (#2125)
  • New ops supported on PyTorch/XLA
  • Dynamic shape support on XLA:CPU and XLA:GPU (experimental)
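
As a hedged illustration of the first two items above, here is a minimal sketch of checkpointing the XLA RNG state and querying device memory. The exact signatures of xm.get_rng_state, xm.set_rng_state, and xm.get_memory_info should be confirmed against the 1.6 API docs:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()

# Save the XLA RNG state alongside a training checkpoint so that
# stochastic ops (dropout, bernoulli, ...) resume reproducibly.
torch.save({'xla_rng_state': xm.get_rng_state()}, 'checkpoint.pt')

# Later, restore the RNG state before resuming training.
state = torch.load('checkpoint.pt')
xm.set_rng_state(state['xla_rng_state'])

# Query device memory through the new XRT device memory API.
mem_info = xm.get_memory_info(device)
print(mem_info)  # a dict of memory stats, e.g. free/total kilobytes
```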

Bug Fixes

  • RNG Fix (proper randomness with bernoulli and dropout) (#1932)
  • Manual all-reduce in backward pass (#2325)