NLP Paper Reviews

1. Word2Vec

2. Pointer-Generator

3. Style Transformer

4. Seq2Seq with Attention

5. Transformer - Implementation

6. BERT - Pre-training of Deep Bidirectional Transformers for Language Understanding

7. [GPT-1] Improving Language Understanding by Generative Pre-Training

8. [GPT-2] Language Models are Unsupervised Multitask Learners

9. Linear Transformer

10. Contextual Embedding - How Contextual are Contextualized Word Representations?

11. [TAPT, DAPT] Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
