With the increasing availability of lower-precision floating-point arithmetic beyond IEEE 64-bit double and 32-bit single precision, both in hardware and software simulation, reduced-precision formats such as 16-bit half precision have gained significant attention in scientific computing and machine learning. These formats offer higher computational throughput, reduced data transfer overhead, and lower energy consumption.
pychop is a Python library designed for efficient quantization, enabling the conversion of single- or double-precision numbers into low-bitwidth representations. It allows users to define custom floating-point formats with a specified number of exponent and significand bits, offering fine-grained control over precision and range.
pychop stands out for its versatility, efficiency, and ease of integration with NumPy, PyTorch, and JAX. Its key strengths, including customizability, hardware independence, GPU support, and comprehensive rounding options, make it a valuable tool for both practical applications and theoretical exploration in numerical computing. The library supports multiple rounding modes, optional denormal number handling, and runs efficiently on both CPU and GPU devices. This makes it particularly useful for research, experimentation, and optimization in areas like machine learning, numerical analysis, and hardware design, where reduced precision can provide computational advantages.
Inspired by MATLAB’s chop function by Nick Higham, pychop simulates low-precision floating-point formats as well as fixed-point and integer quantization based on single and double precision. It includes PyTorch and JAX backends, enabling low-precision neural network training simulations. By emulating low-precision formats within a high-precision environment (float32 or float64), pychop allows users to analyze quantization effects without requiring specialized hardware. The library supports both deterministic and stochastic rounding strategies and is optimized for vectorized operations with NumPy arrays, PyTorch tensors, and JAX arrays.
pychop requires a Python 3 environment; its backends build on NumPy, PyTorch, and JAX. To install the current release via pip, use:
pip install pychop
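After installation, a quick import confirms the package is available (a minimal check; the __version__ attribute is an assumption and may not be exposed, so a fallback is used):
import pychop  # verify that the package can be imported
print(getattr(pychop, "__version__", "pychop imported successfully"))  # falls back if __version__ is absent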
The pychop class offers several key advantages that make it a powerful tool for developers, researchers, and engineers working with numerical computations:
- Customizable Precision
- Multiple Rounding Modes
- Hardware-Independent Simulation
- Support for Denormal Numbers
- GPU Acceleration
- Reproducible Stochastic Rounding
- Ease of Integration
- Error Detection
The supported floating point arithmetic formats include:
| format | description | bits |
|---|---|---|
| 'q43', 'fp8-e4m3' | NVIDIA quarter precision | 4 exponent bits, 3 significand bits |
| 'q52', 'fp8-e5m2' | NVIDIA quarter precision | 5 exponent bits, 2 significand bits |
| 'b', 'bfloat16' | bfloat16 | 8 exponent bits, 7 significand bits |
| 't', 'tf32' | TensorFloat-32 | 8 exponent bits, 10 significand bits |
| 'h', 'half', 'fp16' | IEEE half precision | 5 exponent bits, 10 significand bits |
| 's', 'single', 'fp32' | IEEE single precision | 8 exponent bits, 23 significand bits |
| 'd', 'double', 'fp64' | IEEE double precision | 11 exponent bits, 52 significand bits |
| 'c', 'custom' | custom format | user-defined |
We will go through the main functionality of pychop; for details, refer to the documentation.
Users can specify the number of exponent (exp_bits) and significand (sig_bits) bits, enabling precise control over the trade-off between range and precision. For example, setting exp_bits=5 and sig_bits=4 creates a compact 10-bit format (1 sign, 5 exponent, 4 significand), ideal for testing minimal precision scenarios.
To round values to a specified precision format, pychop supports fast low-precision floating-point quantization and also enables GPU emulation (simply move the input to the GPU device), with different rounding functions:
import pychop
from pychop import LightChop
import numpy as np
np.random.seed(0)
X = np.random.randn(5000, 5000)
pychop.backend('numpy', 1) # Specify different backends, e.g., jax and torch
ch = LightChop(exp_bits=5, sig_bits=10, rmode=3) # half-precision parameters; rmode=3: round towards minus infinity
X_q = ch(X)
print(X_q[:10, 0])
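The same interface emulates quantization on a GPU: switch to the torch backend and move the input to a CUDA device (a hedged sketch; it assumes a CUDA-capable GPU is available and re-creates the rounding object after switching backends):
import torch
pychop.backend('torch') # switch from numpy to the PyTorch backend
ch = LightChop(exp_bits=5, sig_bits=10, rmode=1) # re-create the rounding object under the torch backend
X_gpu = torch.randn(5000, 5000, device='cuda') # input lives on the GPU device
X_q = ch(X_gpu) # quantization is emulated on the GPU
print(X_q[:10, 0])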
If optimized performance is not the priority and broader emulation support is desired, one can use the following interface.
pychop also provides the same functionality as Higham's chop [1], but with relatively faster rounding:
from pychop import Chop
ch = Chop('h') # Standard IEEE 754 half precision
X_q = ch(X) # Rounding values
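The other format strings from the table above work the same way; for instance, bfloat16 (a brief illustration):
ch_bf16 = Chop('b') # bfloat16: 8 exponent bits, 7 significand bits
X_bf16 = ch_bf16(X)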
One can also customize the precision via:
from pychop import Customs
pychop.backend('numpy', 1)
ct1 = Customs(exp_bits=5, sig_bits=10) # half precision: 5 exponent bits, 10 explicit significand bits (plus 1 implicit bit)
ch = Chop(customs=ct1, rmode=3) # Round towards minus infinity
X_q = ch(X)
print(X_q[:10, 0])
ct2 = Customs(emax=15, t=11) # equivalent half precision via emax (maximum exponent) and t (significand bits incl. the implicit bit)
ch = Chop(customs=ct2, rmode=3)
X_q = ch(X)
print(X_q[:10, 0])
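The compact 10-bit format mentioned earlier (1 sign, 5 exponent, 4 significand bits) can be emulated in the same way (a brief sketch):
ct3 = Customs(exp_bits=5, sig_bits=4) # compact 10-bit format for minimal-precision experiments
ch = Chop(customs=ct3, rmode=1) # round to nearest, ties to even
print(ch(X)[:10, 0])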
Set up a quantized layer (seamlessly integrated with the straight-through estimator):
import torch
from pychop.layers import QuantizedLayer
layer = QuantizedLayer(exp_bits=5, sig_bits=10, rmode=1) # half precision, round to nearest ties to even
input_tensor = torch.randn(3, 4)
output = layer(input_tensor)
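Because the layer is integrated with the straight-through estimator, gradients pass through the quantization step, so it can sit directly inside a training graph (a minimal sketch reusing the layer defined above):
input_tensor = torch.randn(3, 4, requires_grad=True)
output = layer(input_tensor)
output.sum().backward() # gradients flow through the quantized forward pass
print(input_tensor.grad.shape) # torch.Size([3, 4])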
A sequential neural network can be built with:
import torch.nn as nn
from pychop.layers import *

class MLP(nn.Module):
    def __init__(self, exp_bits=5, sig_bits=10, rmode=1):
        # 5 exponent bits, 10 explicit significand bits, round to nearest ties to even
        super(MLP, self).__init__()
        self.flatten = nn.Flatten()
        self.fc1 = QuantizedLinear(256, 256, exp_bits, sig_bits, rmode=rmode)
        self.relu1 = nn.ReLU()
        self.dropout = nn.Dropout(0.2)
        self.fc2 = QuantizedLinear(256, 10, exp_bits, sig_bits, rmode=rmode)

    def forward(self, x):
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.relu1(x)
        x = self.dropout(x)
        x = self.fc2(x)
        return x
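A brief usage sketch of the quantized network (the input size of 256 features matches the first layer; the batch size is illustrative):
model = MLP(exp_bits=5, sig_bits=10, rmode=1)
x = torch.randn(32, 256) # batch of 32 inputs with 256 features
logits = model(x)
print(logits.shape) # torch.Size([32, 10])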
Alternatively, one can opt for a less strict simulation, quantizing only the outputs of standard layers:
import torch.nn as nn
from pychop.layers import QuantizedLayer

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(256, 256)
        self.relu1 = nn.ReLU()
        self.dropout = nn.Dropout(0.2)
        self.fc2 = nn.Linear(256, 10)
        self.quant = QuantizedLayer(exp_bits=5, sig_bits=10, rmode=1)

    def forward(self, x):
        x = self.flatten(x)
        x = self.quant(self.fc1(x))   # quantize the output of the first linear layer
        x = self.relu1(x)
        x = self.dropout(x)
        x = self.quant(self.fc2(x))   # quantize the output of the second linear layer
        return x
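Either variant trains like an ordinary PyTorch model; a minimal training-step sketch (the loss function, optimizer, and random data are illustrative):
import torch
model = MLP()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()
x = torch.randn(32, 256)
y = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward() # the straight-through estimator lets gradients pass the quantization step
optimizer.step()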
Fixed-point quantization works analogously to floating-point quantization, and one can set the corresponding backend in the same way. The dominant parameters are ibits and fbits, the bitwidths of the integer part and the fractional part, respectively.
pychop.backend('numpy')
from pychop import Chopf
ch = Chopf(ibits=4, fbits=4)
X_q = ch(X)
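With ibits=4 and fbits=4, values land on a fixed-point grid with spacing 2^-4 = 0.0625; a quick check on a small array (a minimal sketch; exact outputs depend on the rounding behaviour):
x_small = np.array([0.1, 0.26, 1.7, 3.9])
print(ch(x_small)) # each entry is mapped to a nearby multiple of 0.0625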
Complete code examples can be found in guidance1 and guidance2.
Integer quantization is another important feature of pychop. Its purpose is to convert floating-point numbers into low bit-width integers, which speeds up computation on certain hardware, and it performs the quantization with user-defined bitwidths.
The integer arithmetic emulation of pychop is implemented by the Chopi interface. It can be used in many circumstances and offers flexible options, such as symmetric or asymmetric quantization and a configurable number of bits. Its usage is illustrated below:
import numpy as np
import pychop
from pychop import Chopi
pychop.backend('numpy')
x = np.array([[0.1, -0.2], [0.3, 0.4]])
ch = Chopi(num_bits=8, symmetric=False)
q = ch.quantize(x) # Convert to integers
dq = ch.dequantize(q) # Convert back to floating points
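Comparing the dequantized values with the originals gives a direct view of the quantization error (a brief check using only the arrays defined above; q and dq are assumed to be NumPy arrays under the numpy backend):
print(q) # low bit-width integer representation
print(dq) # reconstructed floating-point values
print(np.abs(x - dq).max()) # worst-case quantization error for this array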
If you use a Python virtual environment with MATLAB, ensure MATLAB detects it:
pe = pyenv('Version', 'your_env\python.exe'); % or simply pe = pyenv();
Similarly, to use pychop in your MATLAB environment, simply load the pychop module:
pc = py.importlib.import_module('pychop');
ch = pc.LightChop(exp_bits=5, sig_bits=10, rmode=1)
X = rand(100, 100);
X_q = ch(X);
Or, more explicitly, use:
np = py.importlib.import_module('numpy');
pc = py.importlib.import_module('pychop');
ch = pc.LightChop(exp_bits=5, sig_bits=10, rmode=1)
X = np.random.randn(int32(100), int32(100));
X_q = ch(X);
- Machine Learning: Test the impact of low-precision arithmetic on model accuracy and training stability, especially for resource-constrained environments like edge devices.
- Hardware Design: Simulate custom floating-point units before hardware implementation, optimizing bit allocations for specific applications.
- Numerical Analysis: Investigate quantization errors and numerical stability in scientific computations.
- Education: Teach concepts of floating-point representation, rounding, and denormal numbers with a hands-on, customizable tool.
We welcome contributions in any form! Assistance with documentation is always welcome. To contribute, feel free to open an issue, or fork the project, make your changes, and submit a pull request. We will do our best to work through any issues and requests.
This project is supported by the European Union (ERC, InEXASCALE, 101075632). Views and opinions expressed are those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
[1] Nicholas J. Higham and Srikara Pranesh, Simulating Low Precision Floating-Point Arithmetic, SIAM J. Sci. Comput., 2019.
[2] IEEE Standard for Floating-Point Arithmetic, IEEE Std 754-2019 (revision of IEEE Std 754-2008), IEEE, 2019.
[3] Intel Corporation, BFLOAT16 Hardware Numerics Definition, 2018.
[4] Jean-Michel Muller et al., Handbook of Floating-Point Arithmetic, Springer, 2018.