paper_review

1. Paper Analysis: Attention is all you need

2. Paper Analysis: Big Bird: Transformers for Longer Sequences

3. Paper Review: DEEP DOUBLE DESCENT: WHERE BIGGER MODELS AND MORE DATA HURT

4. Paper Analysis: Going deeper with convolutions

5. Paper Analysis: Deep Residual Learning for Image Recognition

6. Paper Analysis: AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE

7. Paper Analysis: Playing Atari with Deep Reinforcement Learning

8. Paper Analysis: Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

9. Paper Analysis: Generative Adversarial Nets

10. Paper Analysis: Denoising Diffusion Probabilistic Models

11. Paper Analysis: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

12. Paper Analysis: LIFELONG LEARNING WITH DYNAMICALLY EXPANDABLE NETWORKS

13. Paper Analysis: Language Models are Few-Shot Learners

14. Paper Analysis: Swin Transformer: Hierarchical Vision Transformer using Shifted Windows

15. Paper Analysis: Learning Transferable Visual Models From Natural Language Supervision

16. Paper Analysis: CoCa: Contrastive Captioners are Image-Text Foundation Models

17. You Only Look Once: Unified, Real-Time Object Detection

18. Is Space-Time Attention All You Need for Video Understanding?

19. Long Text Generation via Adversarial Training with Leaked Information

20. Swin Transformer V2: Scaling Up Capacity and Resolution

21. End-to-End Object Detection with Transformers

22. Emerging Properties in Self-Supervised Vision Transformers
