# CV

📎 Do You Even Need Attention? A Stack of Feed-Forward Layers Does Surprisingly Well on ImageNet

Replacing the Transformer attention layers in ViT with simple feed-forward layers gives surprisingly similar performance: 79.9 (ViT) vs 77.9 (FF layers only) top-1 on ImageNet.
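
A minimal sketch of the idea in PyTorch: the attention sublayer is swapped for a feed-forward network applied across the token (patch) dimension, while the usual channel-wise MLP is kept. Class and parameter names here are illustrative, not the paper's exact code.

```python
import torch
import torch.nn as nn

class FeedForwardOnlyBlock(nn.Module):
    """Transformer block with self-attention replaced by a feed-forward
    layer that mixes information across the token dimension (a sketch,
    not the authors' reference implementation)."""

    def __init__(self, num_tokens: int, dim: int, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # Stands in for self-attention: operates over tokens, not channels.
        self.token_ff = nn.Sequential(
            nn.Linear(num_tokens, num_tokens * mlp_ratio),
            nn.GELU(),
            nn.Linear(num_tokens * mlp_ratio, num_tokens),
        )
        self.norm2 = nn.LayerNorm(dim)
        # Standard channel-wise MLP, unchanged from ViT.
        self.channel_ff = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim)
        y = self.norm1(x).transpose(1, 2)         # (batch, dim, num_tokens)
        x = x + self.token_ff(y).transpose(1, 2)  # token mixing + residual
        x = x + self.channel_ff(self.norm2(x))    # channel mixing + residual
        return x

# Usage: one block over 197 tokens (196 patches + CLS) of width 384.
block = FeedForwardOnlyBlock(num_tokens=197, dim=384)
out = block(torch.randn(2, 197, 384))             # -> (2, 197, 384)
```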

Posted on Mon, May 24, 2021 · TLDR Paper Review · CV