Introduction to Artificial Intelligence and Machine Learning I - Ch1.3

Smiling Sammy · April 21, 2022

Incorporating Prior Knowledge

  • Bayes formulation
  • P(\theta): the term that carries the prior knowledge

Bayes Viewpoint

  • The prior knowledge needs to be represented well, i.e. so that:

    • it multiplies smoothly with the binomial likelihood
    • it does not complicate the formula (the posterior stays in the same family as the prior)

    => use the Beta distribution (a conjugate prior for the binomial likelihood)

Beta distribution

  • requires the \alpha, \beta parameters
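For completeness, the standard Beta density over \theta \in [0,1] with parameters \alpha, \beta is:

```latex
P(\theta) = \frac{\theta^{\alpha-1}(1-\theta)^{\beta-1}}{B(\alpha,\beta)},
\qquad
B(\alpha,\beta) = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}
```

The normalizing constant B(\alpha,\beta) does not depend on \theta, so it drops out when maximizing over \theta.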

Posterior
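Combining the binomial likelihood (with a_H heads and a_T tails) and the Beta prior via Bayes' rule, the posterior keeps the Beta form:

```latex
P(\theta \mid D) \propto P(D \mid \theta)\,P(\theta)
\propto \theta^{a_H}(1-\theta)^{a_T}\cdot\theta^{\alpha-1}(1-\theta)^{\beta-1}
= \theta^{a_H+\alpha-1}(1-\theta)^{a_T+\beta-1}
```

That is, the posterior is Beta(a_H+\alpha, a_T+\beta): the prior's \alpha, \beta act as pseudo-counts added to the observed counts.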

Maximum a Posteriori Estimation

  • MLE: maximize P(D|\theta), the likelihood of the data

  • MAP: maximize P(\theta|D), the posterior
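Both estimators have closed forms for the coin-flip model above. A minimal sketch (the counts and hyperparameters below are illustrative, not from the lecture):

```python
def mle(a_h, a_t):
    # MLE maximizes P(D|theta): theta_hat = a_H / (a_H + a_T)
    return a_h / (a_h + a_t)

def map_estimate(a_h, a_t, alpha, beta):
    # MAP maximizes P(theta|D) under a Beta(alpha, beta) prior:
    # theta_hat = (a_H + alpha - 1) / (a_H + a_T + alpha + beta - 2)
    return (a_h + alpha - 1) / (a_h + a_t + alpha + beta - 2)

# 3 heads, 2 tails, with a prior that leans toward heads
print(mle(3, 2))                 # 0.6
print(map_estimate(3, 2, 3, 2))  # (3+2)/(5+3) = 0.625
```

The prior pulls the MAP estimate away from the raw data frequency; how far depends on how strong \alpha, \beta are relative to the observed counts.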

Conclusion

  • Question: is MLE != MAP?
  • Answer: as a_H, a_T grow large, the influence of \alpha, \beta becomes negligible, so the MAP estimate converges to the MLE.
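The convergence claim can be checked numerically: holding the head/tail ratio fixed while growing the sample, the gap between MLE and MAP shrinks as the prior's pseudo-counts are swamped (a sketch with assumed, illustrative numbers):

```python
def mle(a_h, a_t):
    return a_h / (a_h + a_t)

def map_estimate(a_h, a_t, alpha, beta):
    return (a_h + alpha - 1) / (a_h + a_t + alpha + beta - 2)

for n in (10, 1_000, 100_000):
    a_h, a_t = 6 * n // 10, 4 * n // 10  # keep a 60/40 head/tail ratio
    gap = abs(mle(a_h, a_t) - map_estimate(a_h, a_t, alpha=5, beta=5))
    print(n, gap)  # the gap shrinks as n grows
```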