Pinned
- bitsandbytes-foundation/bitsandbytes (Public): Accessible large language models via k-bit quantization for PyTorch.
- huggingface/accelerate (Public): 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support.
204 contributions in the last year
Activity overview
Contributed to bitsandbytes-foundation/bitsandbytes, huggingface/transformers, bitsandbytes-foundation/bitsandbytes-intel, and 9 other repositories.
Contribution activity
April 2025
Created 15 commits in 2 repositories
Created a pull request in bitsandbytes-foundation/bitsandbytes that received 1 comment
- Fix torch.compile issue for LLM.int8() with threshold=0
  Allows compilation of int8 quantized modules without graph breaks (when threshold=0).
  +8 −1 lines changed • 1 comment
Opened 4 other pull requests in bitsandbytes-foundation/bitsandbytes (2 open, 2 merged)
- WIP: Changes for device agnosticism (Apr 16)
- Support LLM.int8() inference with torch.compile (Apr 15)
- Add autoloading for backend packages (Apr 14)
- Fix #1588 - torch compatibility for <=2.4 (Apr 10)
Reviewed 3 pull requests in bitsandbytes-foundation/bitsandbytes
- fix typo __getitem__ (Apr 15)
- XPU backend support 8bit optimizer (Apr 15)
- Support LLM.int8() inference with torch.compile (Apr 15)
Opened 1 issue in bitsandbytes-foundation/bitsandbytes (open)
- Optimizer custom ops (Apr 15)
1 contribution in private repositories (Apr 15)