📚 Collection of token-level model compression resources.
- Survey: https://arxiv.org/pdf/2507.20198
- Official code for the paper "LLaVA-Scissor: Token Compression with Semantic Connected Components for Video LLMs" (see the merging sketch below for the general idea).
- 😎 Awesome papers on token redundancy reduction.
- [arXiv 2025 preprint] HiPrune, a training-free visual token pruning method for VLM acceleration (see the pruning sketch below).
- Integrates DyCoke's token compression method with VLMs such as Gemma3 and InternVL3.
- Official implementation of the TCSVT 2025 paper "DiViCo: Disentangled Visual Token Compression For Efficient Large Vision-Language Model".
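
The LLaVA-Scissor entry mentions compressing tokens via semantic connected components. As a rough illustration of the general connected-components idea only (not the paper's actual algorithm), the toy sketch below links tokens whose cosine similarity exceeds a threshold and replaces each connected component with its mean embedding; the function name and `sim_thresh` are illustrative assumptions.

```python
import torch

def merge_similar_tokens(tokens: torch.Tensor, sim_thresh: float = 0.9) -> torch.Tensor:
    """Toy connected-components token merging for a single sample.

    tokens: (N, D) token embeddings. Tokens whose pairwise cosine
    similarity exceeds `sim_thresh` are linked; each connected
    component is replaced by its mean embedding.
    """
    feats = torch.nn.functional.normalize(tokens, dim=-1)
    adj = (feats @ feats.T) >= sim_thresh            # (N, N) similarity graph

    N = tokens.shape[0]
    parent = list(range(N))                          # union-find over tokens

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]            # path halving
            i = parent[i]
        return i

    for i in range(N):
        for j in range(i + 1, N):
            if adj[i, j]:
                parent[find(i)] = find(j)            # union linked tokens

    # Average the tokens inside each connected component.
    groups = {}
    for i in range(N):
        groups.setdefault(find(i), []).append(i)
    return torch.stack([tokens[idx].mean(dim=0) for idx in groups.values()])

tokens = torch.randn(16, 8)
print(merge_similar_tokens(tokens).shape)  # at most (16, 8); fewer rows if merged
```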
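
The pruning-style entries above (e.g., HiPrune) share a common skeleton: score each visual token, keep the top-k, and feed only those to the language model. The sketch below is a minimal, generic version of that idea under stated assumptions, not any listed repo's method; `prune_visual_tokens`, the attention-mass scores, and `keep_ratio` are all illustrative names.

```python
import torch

def prune_visual_tokens(tokens: torch.Tensor,
                        attn: torch.Tensor,
                        keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the most important visual tokens.

    tokens: (B, N, D) visual token embeddings.
    attn:   (B, N) importance scores, e.g. the attention mass each token
            receives from text/CLS queries (illustrative choice).
    keep_ratio: fraction of tokens to keep (hypothetical default).
    """
    B, N, D = tokens.shape
    k = max(1, int(N * keep_ratio))
    # Top-k highest-scoring tokens per sample, re-sorted to keep spatial order.
    topk = attn.topk(k, dim=1).indices.sort(dim=1).values
    return tokens.gather(1, topk.unsqueeze(-1).expand(B, k, D))

# Toy usage: 1 image, 16 visual tokens, 8-dim embeddings.
tokens = torch.randn(1, 16, 8)
scores = torch.rand(1, 16)
pruned = prune_visual_tokens(tokens, scores, keep_ratio=0.25)
print(pruned.shape)  # torch.Size([1, 4, 8])
```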