🪴 jaden lorenc

Informer paper

Last updated Feb 8, 2024

#work/patientsim #paper_notes 2024-02-05

[2012.07436] Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting, published at AAAI 2021

It’s essentially a transformer whose self-attention is cut to $O(L \log{L})$ complexity: a ProbSparse self-attention mechanism computes full attention only for the most "active" queries, and a self-attention distilling operation halves the sequence length between encoder layers. The paper shows it works well on long-sequence time-series forecasting benchmarks.
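The core idea can be sketched roughly like this (a simplified NumPy sketch, not the paper's implementation: the sampling of keys for the sparsity score is omitted, and the constant `c` is a hypothetical hyperparameter name): queries are ranked by a sparsity measurement $M(q_i, K) = \max_j \frac{q_i k_j^T}{\sqrt{d}} - \frac{1}{L}\sum_j \frac{q_i k_j^T}{\sqrt{d}}$, only the top $u \approx c \ln L$ queries get full softmax attention, and the remaining "lazy" queries fall back to the mean of $V$.

```python
import numpy as np

def probsparse_attention(Q, K, V, c=5):
    """Simplified sketch of ProbSparse self-attention.

    Only the top-u most "active" queries (u ~ c * ln(L)) receive full
    softmax attention; all other queries output mean(V). This is what
    reduces the dominant cost from O(L^2) toward O(L log L).
    """
    L, d = Q.shape
    scale = 1.0 / np.sqrt(d)

    # Note: the paper estimates M on a random subset of keys;
    # here we score against all keys for clarity.
    scores = Q @ K.T * scale                       # (L, L)
    M = scores.max(axis=1) - scores.mean(axis=1)   # sparsity measurement

    u = max(1, int(np.ceil(c * np.log(L))))        # number of active queries
    top = np.argsort(M)[-u:]                       # indices of active queries

    out = np.tile(V.mean(axis=0), (L, 1))          # lazy queries -> mean(V)
    s = scores[top]                                # (u, L)
    w = np.exp(s - s.max(axis=1, keepdims=True))   # stable softmax
    w /= w.sum(axis=1, keepdims=True)
    out[top] = w @ V                               # active queries -> full attention
    return out
```

With `c=5` and `L=32`, for example, only $\lceil 5 \ln 32 \rceil = 18$ of the 32 queries are attended in full; the rest are approximated by the mean of the values.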