Introduction: Data-Centric AI vs. Model-Centric AI / Label Errors / Dataset Creation and Curation / Data-centric Evaluation of ML Models / Class Imbalance, Outlier
paper: https://aclanthology.org/2022.naacl-main.290/ > velog: https://velog.io/@zvezda/On-Transferability-of-Prompt-Tuning-for-Natural-Language-Proces
paper: https://arxiv.org/pdf/2010.02502.pdf / Background: generative models, diffusion models. 1) a diffusion process that gradually adds noise to a given image; 2) sampling that generates an image starting from noise
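The two directions in the note can be sketched numerically. This is a generic forward-noising toy in NumPy; the schedule, shapes, and seed are my own assumptions for illustration, not the paper's actual DDIM implementation:

```python
import numpy as np

def forward_noising(x0, t, alpha_bar, rng):
    """q(x_t | x_0): blend the clean image with Gaussian noise at step t."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

# Toy schedule: alpha_bar decays from ~1 (little noise) toward 0 (pure noise).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))  # stand-in for an image
xt, eps = forward_noising(x0, t=T - 1, alpha_bar=alpha_bar, rng=rng)
# At t near T, x_t is almost pure noise; a trained model learns to invert this.
```

Sampling (direction 2 in the note) is the learned reversal of this process, stepping from noise back toward an image.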
goal: evaluate how well abstractive summarization systems handle long documents by comparing system outputs against human annotations. result: by ROUGE the systems looked good, so re…
background: OOD detection is an important problem, but there is no consensus yet on which method detects it best. approach: split OOD into 1) background shift and 2) semantic shift, and study which methods can detect each
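To make "detecting OOD" concrete, here is a maximum-softmax-probability baseline, one common scoring rule among the many such methods the note says are compared; it is not necessarily the method the paper favors, and the logits are invented:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def msp_score(logits):
    """Maximum softmax probability: higher = more likely in-distribution."""
    return float(softmax(np.asarray(logits, dtype=float)).max())

in_dist = msp_score([8.0, 0.5, 0.2])  # peaked distribution: confident
ood = msp_score([1.1, 1.0, 0.9])      # flat distribution: uncertain
```

An input is flagged as OOD when its score falls below a chosen threshold.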
8. Conclusions
Inversion / Permutation / Transliteration / Syntax / Bilingual Model / Monolingual Model
Goal: validate the performance of existing PMM models on a dataset covering 1,600 languages. Challenge: 1) small data volume, 2) narrow domains. Result: XLM-R performs well.
paper: https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf / Background: making LM models ever larger
Dialogue systems: an open-domain dialogue system converses freely on any topic; a task-oriented dialogue (TOD) system performs a specific task. TOD: single-domain TOD performs a single task; multi-domain TOD:
page: https://dcai.csail.mit.edu/
1/17/23: Data-Centric AI vs. Model-Centric AI
1/18/23: Label Errors
1/19/23: Dataset Creation and Curation
1/20/23:
paper: https://aclanthology.org/2022.emnlp-main.399.pdf / code: https://github.com/VanderpoelLiam/CPMI / Background: hallucination: the summary states content not supported by the source docu
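The PMI idea behind this kind of hallucination scoring can be sketched as follows. The probabilities here are invented for illustration; in practice they come from conditional and unconditional language models:

```python
import math

def pmi(p_token_given_source, p_token):
    """Pointwise mutual information between a summary token and the source:
    log p(y_t | source, context) - log p(y_t | context)."""
    return math.log(p_token_given_source) - math.log(p_token)

# A token the source supports becomes much more likely once the source is
# conditioned on (PMI > 0); a token the LM produced on its own does not.
supported = pmi(p_token_given_source=0.40, p_token=0.05)
hallucinated = pmi(p_token_given_source=0.01, p_token=0.20)
```

Tokens with low PMI are candidates for being hallucinated, since the source document contributed little to their probability.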
Paper: https://aclanthology.org/2020.acl-main.703/ / Code: Ko-BART: https://github.com/SKT-AI/KoBART
One-line summary: sentence embeddings can be extracted from both unlabeled and labeled data? Paper: https://aclanthology.org/2021.emnlp-main.552/ Code: https://github.co
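A minimal sketch of the contrastive intuition behind this kind of sentence-embedding work: encoding the same sentence twice (e.g. with different dropout masks) should yield embeddings closer to each other than to a different sentence. Random vectors stand in for encoder outputs here; this is not the paper's model:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
emb = rng.standard_normal(64)                   # sentence A, first pass
emb_aug = emb + 0.01 * rng.standard_normal(64)  # sentence A, perturbed pass
emb_other = rng.standard_normal(64)             # a different sentence B

pos = cosine(emb, emb_aug)    # positive pair: should be high
neg = cosine(emb, emb_other)  # negative pair: should be lower
```

A contrastive loss then pulls positive pairs together and pushes negatives apart in the batch.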
Paper: https://ojs.aaai.org/index.php/AAAI/article/view/4666 / Github: https://github.com/hainow/MCTN / Sections: Introduction, Related Work, Proposed Appro
A whopping 1,913 citations! ㅇ_ㅇ Summary / Introduction / Approach / Preliminaries: Bidirectional Encoder Representations from Transformers (BERT) / Text Representation Tr