
Transformer Tricks

A collection of tricks to simplify and speed up transformer models:

Many of these tricks follow a recent trend of removing parts from neural networks, such as RMSNorm's removal of mean centering from LayerNorm, PaLM's removal of bias parameters, NoPE's removal of positional encoding, GPT's removal of the encoder stack, and of course the transformer's revolutionary removal of recurrent layers. Specifically, our FlashNorm removes the weights from RMSNorm and merges them into the next linear layer. And slim attention removes the entire V-cache from the context memory of MHA transformers.
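To make the FlashNorm trick concrete, here is a minimal sketch in plain PyTorch (not the package's own code; the tensor shapes, epsilon value, and bias-free linear layer are illustrative assumptions) showing that folding the RMSNorm weights into the following linear layer leaves the output unchanged:

```python
import torch

torch.manual_seed(0)
d_in, d_out = 8, 16

g = torch.randn(d_in)         # RMSNorm weights
W = torch.randn(d_out, d_in)  # next linear layer, bias-free as in PaLM-style models
x = torch.randn(d_in)         # one activation vector

def rms(x, eps=1e-6):
    return torch.sqrt(torch.mean(x * x) + eps)

# Standard pipeline: weighted RMSNorm followed by the linear layer
y_standard = W @ ((x / rms(x)) * g)

# FlashNorm: scale the columns of W by g once, offline (i.e. W @ diag(g)),
# so the RMSNorm at inference time is left without any weights
W_merged = W * g
y_flash = W_merged @ (x / rms(x))

print(torch.allclose(y_standard, y_flash, atol=1e-5))  # True
```

After this offline merge, the normalization itself has no parameters, which saves one element-wise multiply per activation vector at inference time.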


Installation

Install the transformer tricks package:

pip install transformer-tricks
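A quick sanity check after installation (a minimal sketch; the module name follows from the file transformer_tricks.py mentioned under Contributing below):

```python
# The pip package transformer-tricks installs the module transformer_tricks,
# so this import succeeding confirms the installation worked.
import transformer_tricks as tt
```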

Documentation

Follow the links below for documentation of the Python code in this directory:


Notebooks

The papers are accompanied by the following Jupyter notebooks:

  • Slim attention: Colab
  • Flash normalization: Colab Colab
  • Removing weights from skipless transformers: Colab

Newsletter

Please subscribe to our [newsletter] on Substack to get the latest news about this project. We will never send you more than one email per month.

Contributing

Before making a change to this repo, please do the following:

  • Format your code by typing autopep8 *.py. It uses the config in pyproject.toml.
  • Whenever you change transformer_tricks.py, publish a new version of the package as follows:
    • First, update the version number in pyproject.toml and in requirements.txt
    • Then, push the package to PyPI by typing ./push_pypi.sh
    • Links for the Python package: PyPI, stats, source of this readme
  • Whenever you modify flashNorm_example.py or another python file, generate the corresponding notebook as follows:
    jupytext --to ipynb flashNorm_example.py -o notebooks/flashNorm_example.ipynb
    

Please give us a ⭐ if you like this repo, thanks!