During model inference, messages like the following were often printed to the terminal:
    The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
    Setting `pad_token_id` to `eos_token_id`:None for open-end generation.
    The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
These messages disappear once the following lines are added, which raise the transformers logging level so that only errors are shown:
import transformers

# Suppress all transformers log messages below ERROR level,
# including the generation warnings above.
transformers.logging.set_verbosity_error()
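Note that this only silences the log output; the warnings themselves describe the actual cause, namely that no attention_mask was passed and no pad_token_id was set. Below is a minimal sketch of fixing that cause instead, so the warnings never fire in the first place. The model name "gpt2" and the prompt are placeholders for illustration; any causal LM loaded the same way should behave alike.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the model actually in use
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenizing with return_tensors="pt" also produces an attention_mask.
inputs = tokenizer("Hello, world", return_tensors="pt")

# Pass the attention_mask explicitly and set pad_token_id,
# which addresses both warnings at their source.
outputs = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    pad_token_id=tokenizer.eos_token_id,
    max_new_tokens=20,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Either approach removes the messages; the difference is that the sketch above gives the model the information the warnings ask for, while set_verbosity_error() merely hides the complaint.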