Commit 8ba94e7

Release v0.2.1 (#72)
1 parent 7daeb10 commit 8ba94e7

File tree

2 files changed, +3 −3 lines changed


docs/online-inference-with-maxtext-engine.md (+2 −2)

````diff
@@ -21,8 +21,8 @@ Follow the steps in [Manage TPU resources | Google Cloud](https://cloud.google.c
 ## Step 1: Download JetStream and the MaxText github repository
 
 ```bash
-git clone -b jetstream-v0.2.0 https://github.com/google/maxtext.git
-git clone -b v0.2.0 https://github.com/google/JetStream.git
+git clone -b jetstream-v0.2.1 https://github.com/google/maxtext.git
+git clone -b v0.2.1 https://github.com/google/JetStream.git
 ```
 
 ## Step 2: Setup MaxText
````

setup.py (+1 −1)

```diff
@@ -24,7 +24,7 @@ def parse_requirements(filename):
 
 setup(
     name="google-jetstream",
-    version="0.2.0",
+    version="0.2.1",
     description=(
         "JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs welcome)."
     ),
```
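The commit is a patch-level semantic-version bump (0.2.0 to 0.2.1). A minimal sketch of how such dotted version strings order when compared as integer tuples; the `parse_version` helper here is hypothetical, not part of JetStream or its setup.py:

```python
# Sketch: confirm that 0.2.1 orders after 0.2.0 as a dotted version.
# parse_version is a hypothetical helper, not part of JetStream.
def parse_version(v: str) -> tuple:
    """Split a dotted version string into an integer tuple for comparison."""
    return tuple(int(part) for part in v.split("."))

old = parse_version("0.2.0")
new = parse_version("0.2.1")
assert new > old  # tuple comparison orders patch releases correctly
```

Comparing as tuples rather than strings avoids lexicographic surprises such as "0.10.0" sorting before "0.2.0".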
