Chapter 1 - Introduction

이두현 · May 24, 2022

Useful reference site
http://norman3.github.io/prml/docs/chapter01/2

Notes on topics worth reviewing
p.40 - covariance of vectors
p.42 - frequentist and Bayesian perspectives
p.46 - Gaussian expression of likelihood function
p.47 - why maximum likelihood underestimates the variance by a factor of (N-1)/N (see the numerical check after this list)
p.49 - given an input x, the target t has a Gaussian distribution; maximizing the likelihood with respect to w is equivalent to minimizing the sum-of-squares (L2) error
p.51 - to be truly Bayesian we should treat w as a random variable and integrate over all of its possible values
p.56 - in high dimensions, most of the volume of a sphere is concentrated in a thin shell near its surface (see the short derivation after this list)
p.59 - decision making using Bayes' theorem
p.59, 60 - minimizing the misclassification rate and maximizing the probability of a correct decision lead to the same decision rule
p.63 - three approaches to decision problems (generative models, discriminative models, discriminant functions)
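
To make the p.47 point concrete, here is a minimal numerical sketch (assuming NumPy; the sample size, trial count, and generating Gaussian are arbitrary choices for illustration). It computes the maximum-likelihood variance estimate over many repeated samples and compares the average against (N-1)/N times the true variance.

```python
import numpy as np

rng = np.random.default_rng(0)

true_mean, true_var = 0.0, 4.0   # variance of the generating Gaussian
N = 5                            # small sample size makes the bias visible
trials = 200_000                 # number of repeated experiments

samples = rng.normal(true_mean, np.sqrt(true_var), size=(trials, N))
mu_ml = samples.mean(axis=1, keepdims=True)            # ML estimate of the mean
var_ml = ((samples - mu_ml) ** 2).mean(axis=1)         # ML variance: divide by N

print("average ML variance  :", var_ml.mean())                 # ~ (N-1)/N * true_var = 3.2
print("expected biased value:", (N - 1) / N * true_var)
print("unbiased correction  :", N / (N - 1) * var_ml.mean())   # ~ true_var
```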
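
For the p.56 note, a short derivation of the claim: using V_D(r) = K_D r^D for the volume of a D-dimensional sphere of radius r, the fraction of the unit sphere's volume lying in a thin shell of thickness ε just under the surface is

```latex
\[
  \frac{V_D(1) - V_D(1-\epsilon)}{V_D(1)}
  = 1 - (1-\epsilon)^{D}
  \;\longrightarrow\; 1 \quad (D \to \infty),
\]
```

so for large D almost all of the volume sits near the surface, even for very small ε.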

A short summary
2022.05.18
Training under the maximum likelihood criterion means learning the w that maximizes the likelihood function p(D|w).
The corresponding loss function is -log p(D|w), which is a monotonically decreasing function of the likelihood, so minimizing the loss is equivalent to maximizing the likelihood.
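
As a concrete instance (the curve-fitting setting from p.49, assuming Gaussian noise with precision β around a model y(x, w)), the negative log-likelihood becomes

```latex
\[
  -\ln p(\mathbf{t} \mid \mathbf{x}, w, \beta)
  = \frac{\beta}{2} \sum_{n=1}^{N} \{\, y(x_n, w) - t_n \,\}^{2}
    - \frac{N}{2}\ln\beta + \frac{N}{2}\ln(2\pi),
\]
```

and only the first term depends on w, so maximizing the likelihood over w is exactly minimizing the sum-of-squares (L2) error.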

ML: the value of w that maximizes the likelihood P(D|w), i.e., the w under which the observed data are most probable
MAP (maximum a posteriori): the most probable value of w given the data, i.e., the w that maximizes the posterior P(w|D)
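
The relation between the two, written out via Bayes' theorem:

```latex
\[
  p(w \mid D) = \frac{p(D \mid w)\, p(w)}{p(D)} \;\propto\; p(D \mid w)\, p(w),
\]
\[
  w_{\mathrm{ML}} = \arg\max_{w}\, p(D \mid w),
  \qquad
  w_{\mathrm{MAP}} = \arg\max_{w}\, p(w \mid D) = \arg\max_{w}\, p(D \mid w)\, p(w).
\]
```

So MAP differs from ML only by the prior factor p(w); with a flat prior the two coincide.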

p(C_k): prior probability
p(C_k|x): posterior probability
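
For reference, the two are connected through Bayes' theorem, with the class-conditional density p(x|C_k) playing the role of the likelihood:

```latex
\[
  p(C_k \mid x) = \frac{p(x \mid C_k)\, p(C_k)}{p(x)},
  \qquad
  p(x) = \sum_{k} p(x \mid C_k)\, p(C_k).
\]
```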
