Alias-Free ViT: Fractional Shift Invariance via Linear Attention
Abstract
Transformers have emerged as a competitive alternative to convnets in vision tasks, yet they lack the architectural inductive biases of convnets, which may limit their performance. In particular, Vision Transformers (ViTs) are not translation-invariant and are more sensitive to small image translations than standard convnets. Previous studies have shown, however, that convnets are not perfectly shift-invariant either, owing to aliasing in their down-sampling and non-linear layers. Consequently, anti-aliasing approaches have been proposed to certify the translation robustness of convnets. Building on this line of work, we propose an Alias-Free ViT, which combines two main components. First, it uses alias-free down-sampling and non-linearities. Second, it uses linear cross-covariance attention that is shift-invariant to both integer and fractional translations. Our model maintains competitive performance on image classification and outperforms similarly sized models in robustness to adversarial translations.
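To make the first component concrete, the sketch below shows a minimal anti-aliased down-sampling layer in PyTorch, in the spirit of BlurPool (Zhang, 2019): low-pass filter, then subsample. The class name, padding mode, and the 3x3 binomial filter are illustrative assumptions, not the paper's exact construction, which may use a different filter design and must also handle non-linearities alias-free.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Anti-aliased down-sampling: low-pass filter, then subsample.

    Illustrative sketch only (BlurPool-style); the paper's filter
    design may differ. Blurring before striding keeps the signal
    approximately band-limited, so subsampling does not fold high
    frequencies back in as aliases.
    """
    def __init__(self, channels, stride=2):
        super().__init__()
        self.stride = stride
        self.channels = channels
        # Separable binomial (approximately Gaussian) 3x3 kernel,
        # normalized to sum to one, applied depthwise per channel.
        k = torch.tensor([1., 2., 1.])
        k = torch.outer(k, k)
        k = k / k.sum()
        self.register_buffer("kernel", k.expand(channels, 1, 3, 3).contiguous())

    def forward(self, x):  # x: (B, C, H, W)
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        return F.conv2d(x, self.kernel, stride=self.stride,
                        groups=self.channels)
```

Without the blur, strided subsampling aliases high-frequency content, which is one source of the shift sensitivity discussed above.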
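For the second component, the sketch below shows cross-covariance attention in the style of XCiT (El-Nouby et al., 2021), the mechanism that "linear cross-covariance attention" builds on; the class name and hyperparameters are assumptions for illustration. The key point is that the attention map is a d x d sum of per-token outer products, so it is unchanged by token reorderings; combined with alias-free resampling, this is what makes invariance to fractional shifts possible.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class XCA(nn.Module):
    """Cross-covariance attention (illustrative sketch, after XCiT).

    Attention is computed over the channel dimension: the d_h x d_h
    cross-covariance of queries and keys sums over all tokens, so the
    attention map does not depend on token positions or ordering.
    """
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (B, N, d)
        B, N, d = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, d // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 4, 1)     # each: (B, h, d_h, N)
        # L2-normalize each channel across tokens; the resulting
        # channel-by-channel attention is a global aggregate over tokens.
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature  # (B, h, d_h, d_h)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).permute(0, 3, 1, 2).reshape(B, N, d)
        return self.proj(out)
```

Note the cost is linear in the number of tokens N (the softmax is over a d_h x d_h matrix), in contrast to the quadratic token-by-token attention of a standard ViT.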