VAE research

jj · Feb 26, 2021

https://towardsdatascience.com/variational-autoencoders-as-generative-models-with-keras-e0c79415a7eb

Auto Encoder

  • An autoencoder is essentially a neural network that takes a high-dimensional data point as input, compresses it into a lower-dimensional feature vector (i.e., a latent vector), and then reconstructs the original input sample from that latent representation alone, ideally without losing valuable information.
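
A minimal sketch of this idea in Keras (the layer sizes and the flattened 784-dimensional input, e.g. MNIST, are illustrative assumptions, not taken from the article):

```python
# Minimal dense autoencoder: high-dimensional input -> low-dimensional latent -> reconstruction.
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(784,))                        # high-dimensional data point
latent = layers.Dense(32, activation='relu')(inputs)       # lower-dimensional latent vector
outputs = layers.Dense(784, activation='sigmoid')(latent)  # reconstruction from the latent vector only

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='mse')
```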

Problem of AE

  • One issue with ordinary autoencoders is that they encode each input sample independently.
    → This means that samples belonging to the same class (or drawn from the same distribution) can end up with very different latent embeddings (i.e., distant encodings in the latent space).
    → Ideally, the latent features of samples from the same class should be somewhat similar (i.e., closer in the latent space).
  • This happens because we do not explicitly force the network to learn the distribution of the input dataset. As a result, the network may not be very good at reconstructing related but unseen data samples (i.e., it is less generalizable).

Variational Auto Encoder

  • Instead of directly learning latent features from the input samples, a VAE learns the distribution of the latent features.
  • The latent features of the input data are assumed to follow a standard normal distribution.
    → This means the learned latent vectors are supposed to be centered at zero, and their distribution can be described with two statistics: mean and variance.
    → For each sample, a VAE computes the mean and variance of the latent vector (instead of learning latent features directly) and forces them to follow a standard normal distribution.
  • The bottleneck part of the network is used to learn the mean and variance for each sample, so we define two separate fully connected (FC) layers to compute them (see the sketch after this list).
  • VAEs ensure that points that are very close to each other in the latent space represent very similar data samples (similar classes of data). The linked tutorial demonstrates this property.
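
As a rough sketch of this bottleneck (the layer sizes, names, and the 784-dimensional input are assumptions for illustration), the encoder can predict a mean and a log-variance per sample and draw z with the reparameterization trick:

```python
# VAE bottleneck sketch: two FC heads for mean and log-variance, plus reparameterized sampling.
import tensorflow as tf
from tensorflow.keras import layers, Model

latent_dim = 2

encoder_inputs = layers.Input(shape=(784,))
h = layers.Dense(256, activation='relu')(encoder_inputs)

z_mean = layers.Dense(latent_dim, name='z_mean')(h)        # mean of q(z|x)
z_log_var = layers.Dense(latent_dim, name='z_log_var')(h)  # log-variance of q(z|x)

def sample_z(args):
    # z = mean + sigma * epsilon, epsilon ~ N(0, I), so sampling stays differentiable.
    mean, log_var = args
    eps = tf.random.normal(shape=tf.shape(mean))
    return mean + tf.exp(0.5 * log_var) * eps

z = layers.Lambda(sample_z)([z_mean, z_log_var])
encoder = Model(encoder_inputs, [z_mean, z_log_var, z], name='encoder')
```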

Kullback–Leibler (KL) divergence

  • We enforce a standard normal distribution on the latent features of the input dataset; this is accomplished using the KL divergence.
  • KL divergence is a statistical measure of the difference between two probability distributions.
  • Thus, we use the KL-divergence value in the objective function (along with the reconstruction loss) to ensure that the learned distribution stays very close to the true distribution, which we have assumed to be a standard normal distribution.
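
For a diagonal Gaussian with mean μ and variance σ² per latent dimension, and the assumed standard normal prior, this KL term has the standard closed form (summed over the d latent dimensions):

```latex
D_{KL}\left(\mathcal{N}(\mu,\sigma^{2}) \,\|\, \mathcal{N}(0,1)\right)
  = -\frac{1}{2}\sum_{j=1}^{d}\left(1 + \log\sigma_{j}^{2} - \mu_{j}^{2} - \sigma_{j}^{2}\right)
```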

Objective = Reconstruction Loss + KL-Loss

  • Minimizing this combined objective means that the latent distribution is centered at zero and is well spread out in the latent space.
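
A hedged sketch of how this combined objective can be computed, assuming the z_mean / z_log_var tensors from the encoder sketch above and a decoder output called reconstruction (the original tutorial may use binary cross-entropy and a different weighting for the reconstruction term):

```python
# VAE objective sketch: reconstruction loss + KL loss.
import tensorflow as tf

def vae_loss(x, reconstruction, z_mean, z_log_var):
    # Reconstruction term: how faithfully the decoder rebuilds the input from z
    # (sum of squared errors per sample; assumed here, not taken from the article).
    recon_loss = tf.reduce_sum(tf.square(x - reconstruction), axis=-1)
    # KL term: closed-form divergence between N(z_mean, exp(z_log_var)) and N(0, I).
    kl_loss = -0.5 * tf.reduce_sum(
        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1
    )
    return tf.reduce_mean(recon_loss + kl_loss)
```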

https://taeu.github.io/paper/deeplearning-paper-vae/

https://excelsior-cjh.tistory.com/187

https://medium.com/datadriveninvestor/latent-variable-models-and-autoencoders-97c44858caa0

https://wikidocs.net/3413
