
Actions: apple/axlearn

Showing runs from all workflows

565 workflow runs

improve GCS perf: Change resource limit to request
pre-commit #21: Pull request #851 opened by samos123
November 19, 2024 21:35 30m 52s samos123:improve-gcs-fuse-perf
improve GCS perf: Change resource limit to request
build-and-test #31: Pull request #851 opened by samos123
November 19, 2024 21:35 22m 54s samos123:improve-gcs-fuse-perf
Generalizes pre- and post-spmd setup with init modules. (#848)
build-and-test #30: Commit 1af2ba8 pushed by github-merge-queue bot
November 19, 2024 21:13 23m 22s main
Integrate Orbax's emergency checkpoint.
build-and-test #29: Pull request #820 synchronize by hanzhi713
November 19, 2024 21:05 21m 38s hanzhi713:in-mem-ckpt
Integrate Orbax's emergency checkpoint.
pre-commit #20: Pull request #820 synchronize by hanzhi713
November 19, 2024 21:05 29m 1s hanzhi713:in-mem-ckpt
Add llama 3 tokenizer
pre-commit #19: Pull request #850 synchronize by sychen52
November 19, 2024 20:39 28m 27s sychen52:llama
Add llama 3 tokenizer
build-and-test #27: Pull request #850 synchronize by sychen52
November 19, 2024 20:39 21m 19s sychen52:llama
Fix RLHF slowdown in attention multi steps extend_step. (#849)
build-and-test #26: Commit 2803b36 pushed by github-merge-queue bot
November 19, 2024 17:58 21m 52s main
Add llama 3 tokenizer
pre-commit #18: Pull request #850 opened by sychen52
November 19, 2024 17:22 30m 32s sychen52:llama
Add llama 3 tokenizer
build-and-test #24: Pull request #850 opened by sychen52
November 19, 2024 17:22 21m 41s sychen52:llama
Fix RLHF slowdown in attention multi steps extend_step.
pre-commit #17: Pull request #849 opened by ds-hwang
November 19, 2024 16:27 30m 31s ds-hwang:mult_bug_fix
Fix RLHF slowdown in attention multi steps extend_step.
build-and-test #23: Pull request #849 opened by ds-hwang
November 19, 2024 16:27 21m 26s ds-hwang:mult_bug_fix
Conv1D supports paddings. (#847)
build-and-test #22: Commit 420ed7a pushed by github-merge-queue bot
November 19, 2024 02:01 21m 9s main
Optimize TPU Flash Attention (400x speed-up on 32k long context)
build-and-test #20: Pull request #845 synchronize by ds-hwang
November 19, 2024 01:16 16m 44s ds-hwang:flsh_op
Optimize TPU Flash Attention (400x speed-up on 32k long context)
pre-commit #16: Pull request #845 synchronize by ds-hwang
November 19, 2024 01:16 11m 15s ds-hwang:flsh_op
Integrate Orbax's emergency checkpoint.
pre-commit #13: Pull request #820 synchronize by hanzhi713
November 19, 2024 00:37 30m 31s hanzhi713:in-mem-ckpt
Integrate Orbax's emergency checkpoint.
build-and-test #17: Pull request #820 synchronize by hanzhi713
November 19, 2024 00:37 22m 11s hanzhi713:in-mem-ckpt
Optimize TPU Flash Attention (400x speed-up on 32k long context)
pre-commit #9: Pull request #845 synchronize by ds-hwang
November 18, 2024 23:52 11m 13s ds-hwang:flsh_op
Optimize TPU Flash Attention (400x speed-up on 32k long context)
build-and-test #13: Pull request #845 synchronize by ds-hwang
November 18, 2024 23:52 16m 43s ds-hwang:flsh_op
Ensure iterators are saved in per-process dir. (#846)
build-and-test #11: Commit 2335d13 pushed by github-merge-queue bot
November 18, 2024 22:16 21m 43s main