
Positional Encoding Benchmark for Time Series Classification


This repository provides a comprehensive evaluation framework for positional encoding methods in transformer-based time series models, along with implementations and benchmarking results.

Our work is available on arXiv: Positional Encoding in Transformer-Based Time Series Models: A Survey (https://arxiv.org/abs/2502.12370)

Models

We present a systematic analysis of positional encoding methods evaluated on two transformer architectures:

  1. Multivariate Time Series Transformer Framework (TST)
  2. Time Series Transformer with Patch Embedding (a minimal patch-embedding sketch follows this list)
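
The patch-embedding variant tokenizes each series into fixed-length patches before the transformer encoder. As a rough sketch only (this is not necessarily the repository's implementation; the module name and hyperparameters are illustrative), patch embedding of a multivariate series can be written as a strided Conv1d:

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split a multivariate series (batch, channels, time) into non-overlapping
    patches and project each patch to d_model: output (batch, n_patches, d_model)."""

    def __init__(self, in_channels: int, d_model: int, patch_len: int):
        super().__init__()
        # Conv1d with kernel_size == stride == patch_len is equivalent to
        # flattening each patch and applying a shared linear projection.
        self.proj = nn.Conv1d(in_channels, d_model,
                              kernel_size=patch_len, stride=patch_len)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, channels, time) -> (batch, d_model, n_patches) -> (batch, n_patches, d_model)
        return self.proj(x).transpose(1, 2)

# Example: 32 three-channel windows of 128 time steps, patch length 16 -> 8 tokens each
tokens = PatchEmbedding(in_channels=3, d_model=64, patch_len=16)(torch.randn(32, 3, 128))
print(tokens.shape)  # torch.Size([32, 8, 64])
```

Positional information is then attached to these patch tokens by whichever encoding method is under evaluation.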

Positional Encoding Methods

We implement and evaluate ten positional encoding methods, summarized below (a minimal sketch of the two main injection styles follows the legend):

| Method    | Type | Injection | Learnable | Parameters      | Memory   | Complexity |
|-----------|------|-----------|-----------|-----------------|----------|------------|
| Sin. PE   | Abs  | Add       | F         | 0               | O(Ld)    | O(Ld)      |
| Learn. PE | Abs  | Add       | L         | Ld              | O(Ld)    | O(Ld)      |
| RPE       | Rel  | Att       | F         | (2L−1)dl        | O(L²d)   | O(L²d)     |
| tAPE      | Abs  | Add       | F         | 0               | O(Ld)    | O(Ld)      |
| RoPE      | Hyb  | Att       | F         | 0               | O(Ld)    | O(L²d)     |
| eRPE      | Rel  | Att       | L         | 2L−1            | O(L²+L)  | O(L²)      |
| TUPE      | Hyb  | Att       | L         | 2dl             | O(Ld+d²) | O(Ld+d²)   |
| ConvSPE   | Rel  | Att       | L         | 3Kdh+dl         | O(LKR)   | O(LKR)     |
| T-PE      | Hyb  | Comb      | M         | 2d²l/h+(2L+2l)d | O(L²d)   | O(L²d)     |
| ALiBi     | Rel  | Att       | F         | 0               | O(L²h)   | O(L²h)     |

Legend:

  • Abs=Absolute, Rel=Relative, Hyb=Hybrid
  • Add=Additive, Att=Attention, Comb=Combined
  • F=Fixed, L=Learnable, M=Mixed
  • L: sequence length, d: embedding dimension, h: attention heads, K: kernel size, l: layers
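
The Injection column is the main practical distinction: additive methods (Sin. PE, Learn. PE, tAPE) add an (L, d) tensor to the input embeddings, while attention-based methods (RPE, eRPE, RoPE, ALiBi, ...) modify the attention computation itself. As a rough, illustrative sketch only (not the repository's code; the function names are ours, and the ALiBi variant shown is the symmetric, non-causal form often used for classification), the two simplest representatives look like this:

```python
import math
import torch

def sinusoidal_pe(seq_len: int, d_model: int) -> torch.Tensor:
    """Fixed sinusoidal encoding (additive): returns an (L, d) tensor that is
    simply added to the input embeddings. Assumes an even d_model."""
    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)    # (L, 1)
    div_term = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                         * (-math.log(10000.0) / d_model))                # (d/2,)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe

def alibi_bias(seq_len: int, n_heads: int) -> torch.Tensor:
    """ALiBi-style attention bias: a fixed per-head penalty -slope * |i - j|
    added to the attention logits; nothing is added to the embeddings."""
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / n_heads) for h in range(n_heads)])
    pos = torch.arange(seq_len)
    dist = (pos[None, :] - pos[:, None]).abs().float()                    # (L, L)
    return -slopes[:, None, None] * dist                                  # (heads, L, L)

# Usage sketch:
#   x = x + sinusoidal_pe(L, d)                                    # before the encoder
#   logits = q @ k.transpose(-2, -1) / math.sqrt(d_head) \
#            + alibi_bias(L, n_heads)                              # inside attention
```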

Dependencies

  • Python 3.10
  • PyTorch 2.4.1+cu121
  • NumPy
  • Scikit-learn
  • CUDA 12.2

Clone and Installation

# Clone the repository
git clone https://github.com/imics-lab/positional-encoding-benchmark.git
cd positional-encoding-benchmark

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Linux/Mac
# or
.\venv\Scripts\activate  # Windows

# Install dependencies
pip install -r requirements.txt

# Run benchmark with default config
python examples/run_benchmark.py

# Or with custom config
python examples/run_benchmark.py --config path/to/custom_config.yaml

Results

Our experimental evaluation covers ten distinct positional encoding methods tested across eleven diverse time series datasets using two transformer architectures.

Key Findings

📊 Sequence Length Impact

  • Long sequences (>100 steps): 5-6% improvement with advanced methods
  • Medium sequences (50-100 steps): 3-4% improvement
  • Short sequences (<50 steps): 2-3% improvement

⚙️ Architecture Performance

  • TST: more pronounced performance gaps between encoding methods
  • Patch Embedding: more balanced performance among the top methods

🏆 Average Rankings (lower is better, averaged over the eleven datasets)

  • SPE: 1.727 with TST (batch norm), 2.090 with patch embedding
  • TUPE: 1.909 with TST (batch norm), 2.272 with patch embedding
  • T-PE: 2.636 with TST (batch norm), 2.363 with patch embedding

Performance Analysis

Biomedical Signals (EEG, EMG)

  • TUPE achieves highest average accuracy
  • SPE shows strong performance
  • Both methods demonstrate effectiveness in capturing long-range dependencies

Environmental and Sensor Data

  • SPE exhibits superior performance
  • TUPE maintains competitive accuracy
  • Relative encoding methods show improved local pattern recognition

Computational Efficiency Analysis

Training times on the Melbourne Pedestrian dataset (100 epochs); the Ratio column is wall-clock time relative to the sinusoidal PE baseline (e.g., 101.6 s / 48.2 s ≈ 2.11 for ConvSPE):

| Method    | Time (s) | Ratio | Accuracy |
|-----------|----------|-------|----------|
| Sin. PE   | 48.2     | 1.00  | 66.8%    |
| Learn. PE | 60.1     | 1.25  | 70.2%    |
| RPE       | 128.4    | 2.66  | 72.4%    |
| tAPE      | 54.0     | 1.12  | 68.2%    |
| RoPE      | 67.8     | 1.41  | 69.0%    |
| eRPE      | 142.8    | 2.96  | 73.3%    |
| TUPE      | 118.3    | 2.45  | 74.5%    |
| ConvSPE   | 101.6    | 2.11  | 75.3%    |
| T-PE      | 134.7    | 2.79  | 74.2%    |
| ALiBi     | 93.8     | 1.94  | 67.2%    |

ConvSPE leads the efficiency frontier, achieving the highest accuracy (75.3%) with moderate computational overhead (2.11×).
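
For context, the Ratio column can be reproduced with a plain wall-clock measurement. The sketch below is illustrative only: it is not the repository's benchmarking harness, and `make_model` / `train_loader` are hypothetical stand-ins for building the same transformer with a chosen encoding and for the dataset loader.

```python
import time
import torch

def training_time_seconds(model, loader, epochs=100, lr=1e-3, device="cuda"):
    """Wall-clock time for a full training run of `epochs` epochs."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    start = time.perf_counter()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # flush pending GPU work before stopping the clock
    return time.perf_counter() - start

# Hypothetical usage: the ratio is each method's time over the sinusoidal baseline.
# baseline = training_time_seconds(make_model("sinusoidal"), train_loader)
# ratio = training_time_seconds(make_model("convspe"), train_loader) / baseline
```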

Method Selection Guidelines

Sequence Length-Based Recommendations

  • Short sequences (L ≤ 50): Learnable PE or tAPE (the small gains from heavier methods do not justify their extra computational cost)
  • Medium sequences (50 < L ≤ 100): SPE or eRPE (3-4% accuracy improvements)
  • Long sequences (L > 100): TUPE for complex patterns, SPE for regular data, ConvSPE for linear complexity

Domain-Specific Guidelines

  • Biomedical signals: TUPE > SPE > T-PE (best at handling complex physiological dynamics)
  • Environmental sensors: SPE > eRPE (regular sampling patterns)
  • High-dimensional data (d > 5): Advanced methods consistently outperform simple approaches

Computational Resource Framework

  • Limited resources: Sinusoidal PE, tAPE (O(Ld) complexity)
  • Balanced scenarios: SPE, TUPE (optimal accuracy-efficiency trade-off)
  • Performance-critical: TUPE, SPE regardless of computational cost

Architecture-Specific Considerations

  • Time Series Transformers: Prioritize content-position separation methods (TUPE) and relative positioning (eRPE, SPE)
  • Patch Embedding Transformers: Multi-scale approaches (T-PE, ConvSPE) handle hierarchical processing more effectively

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

Citation

@article{irani2025positional,
  title={Positional Encoding in Transformer-Based Time Series Models: A Survey},
  author={Irani, Habib and Metsis, Vangelis},
  journal={arXiv preprint arXiv:2502.12370},
  year={2025}
}
