9-1. Introduction


Learning Objectives

  • Adjusting the size of a deep learning model
  • Learning regularization for training deep learning models

Topics Covered

  • Adjusting model size

    • Increasing model size
    • Decreasing model size
  • Regularization

    • L1 regularization
    • L2 regularization
    • L1-L2 regularization
  • Dropout



9-2. Adjusting Model Size


Adjusting model size

  • Increasing/decreasing the number of units per layer -> increases/decreases the total number of model parameters (see the sketch after this list)
  • Adding layers -> a deeper network -> a larger model
  • The larger the dataset -> the better large, deep models tend to perform
  • A model that is large relative to the dataset -> overfitting
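
A quick sketch (not part of the lesson code) of how the unit count drives the parameter count: a Dense layer holds units × (inputs + 1) parameters, one weight per input plus one bias per unit. For the two-hidden-layer stacks on 10,000-dimensional inputs used in this chapter, this reproduces the quoted totals.

def dense_params(inputs, units):
  # weight matrix (inputs x units) plus one bias per unit
  return inputs * units + units

for n in (16, 128, 2048):
  total = (dense_params(10000, n)  # input -> hidden 1
           + dense_params(n, n)    # hidden 1 -> hidden 2
           + dense_params(n, 1))   # hidden 2 -> sigmoid output
  print(n, total)                  # 160,305 / 1,296,769 / 24,680,449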

Loading and preprocessing the data

  • Download the IMDB dataset with imdb.load_data()
  • One-hot encode the training data into 10,000-dimensional vectors
from keras.datasets import imdb
import numpy as np

def one_hot_encoding(data, dim=10000): # dim is set to 10000 to match num_words=10000 passed to imdb.load_data() below
  results = np.zeros((len(data), dim))
  for i, d in enumerate(data):
    results[i, d] = 1.
  return results

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

x_train = one_hot_encoding(train_data)
x_test = one_hot_encoding(test_data)

y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')


Building and compiling the model

  • Model definition: three Dense layers
  • Compilation: rmsprop optimizer, binary_crossentropy loss function, accuracy metric
  • Total number of parameters: 1,296,769
import tensorflow as tf
from tensorflow.keras import models, layers

model = models.Sequential()
model.add(layers.Dense(128, activation='relu', input_shape=(10000, ), name='input'))
model.add(layers.Dense(128, activation='relu', name='hidden'))
model.add(layers.Dense(1, activation='sigmoid', name='output'))

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()


Training the model

history = model.fit(x_train, y_train,
                    epochs=30,
                    batch_size=512,
                    validation_data=(x_test, y_test))
Epoch 1/30
49/49 [==============================] - 5s 79ms/step - loss: 0.4225 - accuracy: 0.8194 - val_loss: 0.3182 - val_accuracy: 0.8705
Epoch 2/30
49/49 [==============================] - 1s 19ms/step - loss: 0.2303 - accuracy: 0.9106 - val_loss: 0.3179 - val_accuracy: 0.8699
Epoch 3/30
49/49 [==============================] - 1s 19ms/step - loss: 0.1632 - accuracy: 0.9374 - val_loss: 0.4061 - val_accuracy: 0.8460
Epoch 4/30
49/49 [==============================] - 1s 19ms/step - loss: 0.1176 - accuracy: 0.9558 - val_loss: 0.3590 - val_accuracy: 0.8665
Epoch 5/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0793 - accuracy: 0.9720 - val_loss: 0.3499 - val_accuracy: 0.8781
Epoch 6/30
49/49 [==============================] - 1s 19ms/step - loss: 0.0503 - accuracy: 0.9859 - val_loss: 0.4595 - val_accuracy: 0.8751
Epoch 7/30
49/49 [==============================] - 1s 19ms/step - loss: 0.0412 - accuracy: 0.9888 - val_loss: 0.5135 - val_accuracy: 0.8726
Epoch 8/30
49/49 [==============================] - 1s 19ms/step - loss: 0.0308 - accuracy: 0.9920 - val_loss: 0.5503 - val_accuracy: 0.8720
Epoch 9/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0318 - accuracy: 0.9932 - val_loss: 0.5555 - val_accuracy: 0.8688
Epoch 10/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0022 - accuracy: 0.9999 - val_loss: 0.7567 - val_accuracy: 0.8706
Epoch 11/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0429 - accuracy: 0.9929 - val_loss: 0.7270 - val_accuracy: 0.8678
Epoch 12/30
49/49 [==============================] - 1s 20ms/step - loss: 6.2821e-04 - accuracy: 1.0000 - val_loss: 0.8904 - val_accuracy: 0.8665
Epoch 13/30
49/49 [==============================] - 1s 20ms/step - loss: 0.0224 - accuracy: 0.9961 - val_loss: 2.1360 - val_accuracy: 0.7686
Epoch 14/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0083 - accuracy: 0.9984 - val_loss: 0.9488 - val_accuracy: 0.8665
Epoch 15/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0421 - accuracy: 0.9940 - val_loss: 0.9049 - val_accuracy: 0.8622
Epoch 16/30
49/49 [==============================] - 1s 20ms/step - loss: 3.2664e-04 - accuracy: 1.0000 - val_loss: 0.9771 - val_accuracy: 0.8672
Epoch 17/30
49/49 [==============================] - 1s 20ms/step - loss: 1.3790e-04 - accuracy: 1.0000 - val_loss: 1.1139 - val_accuracy: 0.8682
Epoch 18/30
49/49 [==============================] - 1s 19ms/step - loss: 0.0581 - accuracy: 0.9936 - val_loss: 1.1068 - val_accuracy: 0.8610
Epoch 19/30
49/49 [==============================] - 1s 17ms/step - loss: 1.7751e-04 - accuracy: 1.0000 - val_loss: 1.1626 - val_accuracy: 0.8628
Epoch 20/30
49/49 [==============================] - 1s 19ms/step - loss: 4.2915e-05 - accuracy: 1.0000 - val_loss: 1.2402 - val_accuracy: 0.8655
Epoch 21/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3894e-05 - accuracy: 1.0000 - val_loss: 1.4213 - val_accuracy: 0.8598
Epoch 22/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0366 - accuracy: 0.9946 - val_loss: 1.2643 - val_accuracy: 0.8663
Epoch 23/30
49/49 [==============================] - 1s 18ms/step - loss: 1.0974e-05 - accuracy: 1.0000 - val_loss: 1.2953 - val_accuracy: 0.8652
Epoch 24/30
49/49 [==============================] - 1s 19ms/step - loss: 6.8438e-06 - accuracy: 1.0000 - val_loss: 1.3652 - val_accuracy: 0.8644
Epoch 25/30
49/49 [==============================] - 1s 19ms/step - loss: 3.4267e-06 - accuracy: 1.0000 - val_loss: 1.5060 - val_accuracy: 0.8645
Epoch 26/30
49/49 [==============================] - 1s 20ms/step - loss: 0.0519 - accuracy: 0.9951 - val_loss: 1.5261 - val_accuracy: 0.8515
Epoch 27/30
49/49 [==============================] - 1s 19ms/step - loss: 1.9214e-05 - accuracy: 1.0000 - val_loss: 1.4817 - val_accuracy: 0.8602
Epoch 28/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3582e-05 - accuracy: 1.0000 - val_loss: 1.5187 - val_accuracy: 0.8601
Epoch 29/30
49/49 [==============================] - 1s 18ms/step - loss: 1.9513e-06 - accuracy: 1.0000 - val_loss: 1.5572 - val_accuracy: 0.8627
Epoch 30/30
49/49 [==============================] - 1s 18ms/step - loss: 5.9828e-07 - accuracy: 1.0000 - val_loss: 1.6797 - val_accuracy: 0.8629
  • Model metric results: loss, val_loss, accuracy, val_accuracy
    • Overfitting occurs!
import matplotlib.pyplot as plt

history_dict = history.history

loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(loss) + 1)

fig = plt.figure(figsize=(12, 5))

ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(epochs, loss, 'b--', label='train_loss')
ax1.plot(epochs, val_loss, 'r--', label='val_loss')
ax1.set_title('Train and Validation Loss')
ax1.set_xlabel('Epochs')
ax1.set_ylabel('Loss')
ax1.grid()
ax1.legend()

accuracy = history_dict['accuracy']
val_accuracy = history_dict['val_accuracy']

ax2 = fig.add_subplot(1, 2, 2)
ax2.plot(epochs, accuracy, 'b--', label='train_accuracy')
ax2.plot(epochs, val_accuracy, 'r--', label='val_accuracy')
ax2.set_title('Train and Validation Accuracy')
ax2.set_xlabel('Epochs')
ax2.set_ylabel('Accuracy')
ax2.grid()
ax2.legend()

plt.show()


Increasing model size

  • Increase the number of units in the Dense layers
    • 128 ➡️ 2048
    • Total number of parameters: 24,680,449
b_model = models.Sequential()
b_model.add(layers.Dense(2048, activation='relu', input_shape=(10000, ), name='input3'))
b_model.add(layers.Dense(2048, activation='relu', name='hidden3'))
b_model.add(layers.Dense(1, activation='sigmoid', name='output3'))
b_model.compile(optimizer='rmsprop',
                loss='binary_crossentropy',
                metrics=['accuracy'])
b_model.summary()

b_model_history = b_model.fit(x_train, y_train,
                              epochs=30,
                              batch_size=512, 
                              validation_data=(x_test, y_test))
Epoch 1/30
49/49 [==============================] - 5s 100ms/step - loss: 0.6946 - accuracy: 0.7818 - val_loss: 0.2831 - val_accuracy: 0.8844
Epoch 2/30
49/49 [==============================] - 2s 37ms/step - loss: 0.2267 - accuracy: 0.9112 - val_loss: 0.4239 - val_accuracy: 0.8088
Epoch 3/30
49/49 [==============================] - 2s 36ms/step - loss: 0.1424 - accuracy: 0.9536 - val_loss: 0.2914 - val_accuracy: 0.8838
Epoch 4/30
49/49 [==============================] - 2s 36ms/step - loss: 0.0612 - accuracy: 0.9828 - val_loss: 0.4297 - val_accuracy: 0.8871
Epoch 5/30
49/49 [==============================] - 2s 36ms/step - loss: 0.0974 - accuracy: 0.9848 - val_loss: 0.5125 - val_accuracy: 0.8820
Epoch 6/30
49/49 [==============================] - 2s 36ms/step - loss: 2.5356e-04 - accuracy: 1.0000 - val_loss: 0.6870 - val_accuracy: 0.8840
Epoch 7/30
49/49 [==============================] - 2s 37ms/step - loss: 1.5545e-05 - accuracy: 1.0000 - val_loss: 0.8217 - val_accuracy: 0.8857
Epoch 8/30
49/49 [==============================] - 2s 37ms/step - loss: 1.3582e-06 - accuracy: 1.0000 - val_loss: 0.9576 - val_accuracy: 0.8844
Epoch 9/30
49/49 [==============================] - 2s 37ms/step - loss: 2.0238e-07 - accuracy: 1.0000 - val_loss: 1.0894 - val_accuracy: 0.8852
Epoch 10/30
49/49 [==============================] - 2s 36ms/step - loss: 4.0886e-08 - accuracy: 1.0000 - val_loss: 1.1827 - val_accuracy: 0.8856
Epoch 11/30
49/49 [==============================] - 2s 36ms/step - loss: 1.4549e-08 - accuracy: 1.0000 - val_loss: 1.2354 - val_accuracy: 0.8854
Epoch 12/30
49/49 [==============================] - 2s 36ms/step - loss: 8.2828e-09 - accuracy: 1.0000 - val_loss: 1.2656 - val_accuracy: 0.8856
Epoch 13/30
49/49 [==============================] - 2s 36ms/step - loss: 5.7964e-09 - accuracy: 1.0000 - val_loss: 1.2864 - val_accuracy: 0.8856
Epoch 14/30
49/49 [==============================] - 2s 37ms/step - loss: 4.4881e-09 - accuracy: 1.0000 - val_loss: 1.3020 - val_accuracy: 0.8857
Epoch 15/30
49/49 [==============================] - 2s 36ms/step - loss: 3.6790e-09 - accuracy: 1.0000 - val_loss: 1.3145 - val_accuracy: 0.8856
Epoch 16/30
49/49 [==============================] - 2s 36ms/step - loss: 3.1227e-09 - accuracy: 1.0000 - val_loss: 1.3247 - val_accuracy: 0.8858
Epoch 17/30
49/49 [==============================] - 2s 36ms/step - loss: 2.7261e-09 - accuracy: 1.0000 - val_loss: 1.3334 - val_accuracy: 0.8857
Epoch 18/30
49/49 [==============================] - 2s 37ms/step - loss: 2.4276e-09 - accuracy: 1.0000 - val_loss: 1.3411 - val_accuracy: 0.8856
Epoch 19/30
49/49 [==============================] - 2s 36ms/step - loss: 2.1947e-09 - accuracy: 1.0000 - val_loss: 1.3480 - val_accuracy: 0.8855
Epoch 20/30
49/49 [==============================] - 2s 37ms/step - loss: 1.9992e-09 - accuracy: 1.0000 - val_loss: 1.3543 - val_accuracy: 0.8855
Epoch 21/30
49/49 [==============================] - 2s 39ms/step - loss: 1.8431e-09 - accuracy: 1.0000 - val_loss: 1.3600 - val_accuracy: 0.8853
Epoch 22/30
49/49 [==============================] - 2s 36ms/step - loss: 1.7230e-09 - accuracy: 1.0000 - val_loss: 1.3652 - val_accuracy: 0.8853
Epoch 23/30
49/49 [==============================] - 2s 37ms/step - loss: 1.6143e-09 - accuracy: 1.0000 - val_loss: 1.3699 - val_accuracy: 0.8853
Epoch 24/30
49/49 [==============================] - 2s 38ms/step - loss: 1.5247e-09 - accuracy: 1.0000 - val_loss: 1.3743 - val_accuracy: 0.8856
Epoch 25/30
49/49 [==============================] - 2s 38ms/step - loss: 1.4527e-09 - accuracy: 1.0000 - val_loss: 1.3782 - val_accuracy: 0.8856
Epoch 26/30
49/49 [==============================] - 2s 37ms/step - loss: 1.3732e-09 - accuracy: 1.0000 - val_loss: 1.3819 - val_accuracy: 0.8856
Epoch 27/30
49/49 [==============================] - 2s 36ms/step - loss: 1.3146e-09 - accuracy: 1.0000 - val_loss: 1.3856 - val_accuracy: 0.8854
Epoch 28/30
49/49 [==============================] - 2s 37ms/step - loss: 1.2626e-09 - accuracy: 1.0000 - val_loss: 1.3889 - val_accuracy: 0.8854
Epoch 29/30
49/49 [==============================] - 2s 37ms/step - loss: 1.2197e-09 - accuracy: 1.0000 - val_loss: 1.3918 - val_accuracy: 0.8855
Epoch 30/30
49/49 [==============================] - 2s 36ms/step - loss: 1.1757e-09 - accuracy: 1.0000 - val_loss: 1.3948 - val_accuracy: 0.8852

  • Visualization
b_history_dict = b_model_history.history

b_loss = b_history_dict['loss']
b_val_loss = b_history_dict['val_loss']
epochs = range(1, len(b_loss) + 1)

fig = plt.figure(figsize=(12, 5))

ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(epochs, b_loss, 'b-', label='train_loss(large)')
ax1.plot(epochs, b_val_loss, 'r-', label='val_loss(large)')
ax1.plot(epochs, loss, 'b--', label='train_loss')
ax1.plot(epochs, val_loss, 'r--', label='val_loss')
ax1.set_title('Train and Validation Loss')
ax1.set_xlabel('Epochs')
ax1.set_ylabel('Loss')
ax1.grid()
ax1.legend()

b_accuracy = b_history_dict['accuracy']
b_val_accuracy = b_history_dict['val_accuracy']

ax2 = fig.add_subplot(1, 2, 2)
ax2.plot(epochs, b_accuracy, 'b-', label='train_accuracy(large)')
ax2.plot(epochs, b_val_accuracy, 'r-', label='val_accuracy(large)')
ax2.plot(epochs, accuracy, 'b--', label='train_accuracy')
ax2.plot(epochs, val_accuracy, 'r--', label='val_accuracy')
ax2.set_title('Train and Validation Accuracy')
ax2.set_xlabel('Epochs')
ax2.set_ylabel('Accuracy')
ax2.grid()
ax2.legend()

plt.show()

Analysis

  • The larger the model (a network with more parameters) ➡️ the faster it can model the training data, and the lower its training loss
  • But it also becomes more sensitive to overfitting

Decreasing model size

  • Dense units: 128 -> 16
  • Total number of parameters: 160,305
s_model = models.Sequential()
s_model.add(layers.Dense(16, activation='relu', input_shape=(10000, ), name='input2'))
s_model.add(layers.Dense(16, activation='relu', name='hidden2'))
s_model.add(layers.Dense(1, activation='sigmoid', name='output2'))
s_model.compile(optimizer='rmsprop',
                loss='binary_crossentropy',
                metrics=['accuracy'])
s_model.summary()

s_model_history = s_model.fit(x_train, y_train,
                              epochs=30,
                              batch_size=512, 
                              validation_data=(x_test, y_test))
Epoch 1/30
49/49 [==============================] - 5s 85ms/step - loss: 0.4664 - accuracy: 0.8186 - val_loss: 0.3511 - val_accuracy: 0.8791
Epoch 2/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2694 - accuracy: 0.9095 - val_loss: 0.2975 - val_accuracy: 0.8831
Epoch 3/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2089 - accuracy: 0.9272 - val_loss: 0.2833 - val_accuracy: 0.8864
Epoch 4/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1726 - accuracy: 0.9410 - val_loss: 0.3047 - val_accuracy: 0.8792
Epoch 5/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1503 - accuracy: 0.9474 - val_loss: 0.3096 - val_accuracy: 0.8786
Epoch 6/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1328 - accuracy: 0.9558 - val_loss: 0.3343 - val_accuracy: 0.8747
Epoch 7/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1160 - accuracy: 0.9623 - val_loss: 0.3446 - val_accuracy: 0.8745
Epoch 8/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1043 - accuracy: 0.9648 - val_loss: 0.3776 - val_accuracy: 0.8664
Epoch 9/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0916 - accuracy: 0.9709 - val_loss: 0.3924 - val_accuracy: 0.8682
Epoch 10/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0829 - accuracy: 0.9736 - val_loss: 0.4231 - val_accuracy: 0.8635
Epoch 11/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0737 - accuracy: 0.9772 - val_loss: 0.5068 - val_accuracy: 0.8514
Epoch 12/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0647 - accuracy: 0.9800 - val_loss: 0.4746 - val_accuracy: 0.8605
Epoch 13/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0577 - accuracy: 0.9833 - val_loss: 0.5250 - val_accuracy: 0.8554
Epoch 14/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0505 - accuracy: 0.9867 - val_loss: 0.6160 - val_accuracy: 0.8439
Epoch 15/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0430 - accuracy: 0.9891 - val_loss: 0.6453 - val_accuracy: 0.8446
Epoch 16/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0405 - accuracy: 0.9895 - val_loss: 0.5865 - val_accuracy: 0.8544
Epoch 17/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0342 - accuracy: 0.9912 - val_loss: 0.6592 - val_accuracy: 0.8492
Epoch 18/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0289 - accuracy: 0.9930 - val_loss: 0.6543 - val_accuracy: 0.8518
Epoch 19/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0255 - accuracy: 0.9938 - val_loss: 0.6993 - val_accuracy: 0.8487
Epoch 20/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0225 - accuracy: 0.9950 - val_loss: 0.7189 - val_accuracy: 0.8484
Epoch 21/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0183 - accuracy: 0.9962 - val_loss: 0.8091 - val_accuracy: 0.8441
Epoch 22/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0157 - accuracy: 0.9970 - val_loss: 0.8267 - val_accuracy: 0.8450
Epoch 23/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0137 - accuracy: 0.9972 - val_loss: 0.8246 - val_accuracy: 0.8463
Epoch 24/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0099 - accuracy: 0.9986 - val_loss: 0.8916 - val_accuracy: 0.8448
Epoch 25/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0101 - accuracy: 0.9977 - val_loss: 0.9393 - val_accuracy: 0.8439
Epoch 26/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0085 - accuracy: 0.9982 - val_loss: 0.9653 - val_accuracy: 0.8437
Epoch 27/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0089 - accuracy: 0.9980 - val_loss: 1.0004 - val_accuracy: 0.8438
Epoch 28/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0061 - accuracy: 0.9986 - val_loss: 1.0295 - val_accuracy: 0.8442
Epoch 29/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0056 - accuracy: 0.9988 - val_loss: 1.0774 - val_accuracy: 0.8439
Epoch 30/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0049 - accuracy: 0.9989 - val_loss: 1.1057 - val_accuracy: 0.8437
  • Visualization
s_history_dict = s_model_history.history

s_loss = s_history_dict['loss']
s_val_loss = s_history_dict['val_loss']
epochs = range(1, len(s_loss) + 1)

fig = plt.figure(figsize=(12, 5))

ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(epochs, b_loss, 'b-', label='train_loss(large)')
ax1.plot(epochs, b_val_loss, 'r-', label='val_loss(large)')
ax1.plot(epochs, loss, 'b--', label='train_loss')
ax1.plot(epochs, val_loss, 'r--', label='val_loss')
ax1.plot(epochs, s_loss, 'b:', label='train_loss(small)')
ax1.plot(epochs, s_val_loss, 'r:', label='val_loss(small)')
ax1.set_title('Train and Validation Loss')
ax1.set_xlabel('Epochs')
ax1.set_ylabel('Loss')
ax1.grid()
ax1.legend()

s_accuracy = s_history_dict['accuracy']
s_val_accuracy = s_history_dict['val_accuracy']

ax2 = fig.add_subplot(1, 2, 2)
ax2.plot(epochs, b_accuracy, 'b-', label='train_accuracy(large)')
ax2.plot(epochs, b_val_accuracy, 'r-', label='val_accuracy(large)')
ax2.plot(epochs, accuracy, 'b--', label='train_accuracy')
ax2.plot(epochs, val_accuracy, 'r--', label='val_accuracy')
ax2.plot(epochs, s_accuracy, 'b:', label='train_accuracy(small)')
ax2.plot(epochs, s_val_accuracy, 'r:', label='val_accuracy(small)')
ax2.set_title('Train and Validation Accuracy')
ax2.set_xlabel('Epochs')
ax2.set_ylabel('Accuracy')
ax2.grid()
ax2.legend()

plt.show()

Analysis

  • The smaller model is less sensitive to the overfitting problem
  • This is why the parameter count must be tuned appropriately to optimize a model

Practice

  • Try adjusting the parameters yourself
your_model = models.Sequential()
your_model.add(layers.Dense(64, activation='relu', input_shape=(10000, ), name='input2'))
your_model.add(layers.Dense(32, activation='relu', name='hidden2'))
your_model.add(layers.Dense(1, activation='sigmoid', name='output2'))
your_model.compile(optimizer='rmsprop',
                loss='binary_crossentropy',
                metrics=['accuracy'])
your_model.summary()

your_model_history = your_model.fit(x_train, y_train, epochs=30, batch_size=512,  validation_data=(x_test, y_test))
Epoch 1/30
49/49 [==============================] - 4s 79ms/step - loss: 0.4214 - accuracy: 0.8101 - val_loss: 0.3304 - val_accuracy: 0.8646
Epoch 2/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2401 - accuracy: 0.9086 - val_loss: 0.2810 - val_accuracy: 0.8884
Epoch 3/30
49/49 [==============================] - 1s 17ms/step - loss: 0.1838 - accuracy: 0.9312 - val_loss: 0.2991 - val_accuracy: 0.8810
Epoch 4/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1470 - accuracy: 0.9450 - val_loss: 0.3374 - val_accuracy: 0.8729
Epoch 5/30
49/49 [==============================] - 1s 17ms/step - loss: 0.1167 - accuracy: 0.9568 - val_loss: 0.3809 - val_accuracy: 0.8666
Epoch 6/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0949 - accuracy: 0.9665 - val_loss: 0.4462 - val_accuracy: 0.8576
Epoch 7/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0743 - accuracy: 0.9742 - val_loss: 0.4140 - val_accuracy: 0.8712
Epoch 8/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0554 - accuracy: 0.9808 - val_loss: 0.4639 - val_accuracy: 0.8654
Epoch 9/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0382 - accuracy: 0.9880 - val_loss: 0.6972 - val_accuracy: 0.8308
Epoch 10/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0293 - accuracy: 0.9910 - val_loss: 0.5769 - val_accuracy: 0.8634
Epoch 11/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0265 - accuracy: 0.9922 - val_loss: 0.6065 - val_accuracy: 0.8638
Epoch 12/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0208 - accuracy: 0.9945 - val_loss: 0.6604 - val_accuracy: 0.8626
Epoch 13/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0209 - accuracy: 0.9947 - val_loss: 0.6992 - val_accuracy: 0.8623
Epoch 14/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0202 - accuracy: 0.9946 - val_loss: 0.7480 - val_accuracy: 0.8568
Epoch 15/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0025 - accuracy: 0.9999 - val_loss: 0.8273 - val_accuracy: 0.8574
Epoch 16/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0192 - accuracy: 0.9953 - val_loss: 0.8392 - val_accuracy: 0.8596
Epoch 17/30
49/49 [==============================] - 1s 18ms/step - loss: 9.1049e-04 - accuracy: 1.0000 - val_loss: 0.9079 - val_accuracy: 0.8587
Epoch 18/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0133 - accuracy: 0.9962 - val_loss: 0.9474 - val_accuracy: 0.8566
Epoch 19/30
49/49 [==============================] - 1s 18ms/step - loss: 4.3621e-04 - accuracy: 1.0000 - val_loss: 1.0234 - val_accuracy: 0.8576
Epoch 20/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0185 - accuracy: 0.9958 - val_loss: 1.0301 - val_accuracy: 0.8571
Epoch 21/30
49/49 [==============================] - 1s 17ms/step - loss: 2.2469e-04 - accuracy: 1.0000 - val_loss: 1.0908 - val_accuracy: 0.8565
Epoch 22/30
49/49 [==============================] - 1s 17ms/step - loss: 1.5990e-04 - accuracy: 1.0000 - val_loss: 1.1727 - val_accuracy: 0.8560
Epoch 23/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0255 - accuracy: 0.9956 - val_loss: 1.2020 - val_accuracy: 0.8548
Epoch 24/30
49/49 [==============================] - 1s 18ms/step - loss: 8.1451e-05 - accuracy: 1.0000 - val_loss: 1.2373 - val_accuracy: 0.8546
Epoch 25/30
49/49 [==============================] - 1s 18ms/step - loss: 5.4628e-05 - accuracy: 1.0000 - val_loss: 1.3161 - val_accuracy: 0.8555
Epoch 26/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0136 - accuracy: 0.9969 - val_loss: 1.3459 - val_accuracy: 0.8529
Epoch 27/30
49/49 [==============================] - 1s 16ms/step - loss: 5.8296e-05 - accuracy: 1.0000 - val_loss: 1.3678 - val_accuracy: 0.8528
Epoch 28/30
49/49 [==============================] - 1s 18ms/step - loss: 2.6224e-05 - accuracy: 1.0000 - val_loss: 1.4214 - val_accuracy: 0.8544
Epoch 29/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0190 - accuracy: 0.9970 - val_loss: 1.4380 - val_accuracy: 0.8549
Epoch 30/30
49/49 [==============================] - 1s 18ms/step - loss: 1.1256e-05 - accuracy: 1.0000 - val_loss: 1.4583 - val_accuracy: 0.8554
  • Visualization
your_history_dict = your_model_history.history

your_loss = your_history_dict['loss']
your_val_loss = your_history_dict['val_loss']
epochs = range(1, len(your_loss) + 1)

fig = plt.figure(figsize=(12, 5))

ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(epochs, b_loss, 'b-', label='train_loss(large)')
ax1.plot(epochs, b_val_loss, 'r-', label='val_loss(large)')
ax1.plot(epochs, loss, 'b--', label='train_loss')
ax1.plot(epochs, val_loss, 'r--', label='val_loss')
ax1.plot(epochs, your_loss, 'g-', label='train_loss(your)')
ax1.plot(epochs, your_val_loss, 'g--', label='val_loss(your)')
ax1.set_title('Train and Validation Loss')
ax1.set_xlabel('Epochs')
ax1.set_ylabel('Loss')
ax1.grid()
ax1.legend()

your_accuracy = your_history_dict['accuracy']
your_val_accuracy = your_history_dict['val_accuracy']

ax2 = fig.add_subplot(1, 2, 2)
ax2.plot(epochs, b_accuracy, 'b-', label='train_accuracy(large)')
ax2.plot(epochs, b_val_accuracy, 'r-', label='val_accuracy(large)')
ax2.plot(epochs, accuracy, 'b--', label='train_accuracy')
ax2.plot(epochs, val_accuracy, 'r--', label='val_accuracy')
ax2.plot(epochs, s_accuracy, 'b:', label='train_accuracy(small)')
ax2.plot(epochs, s_val_accuracy, 'r:', label='val_accuracy(small)')
ax2.plot(epochs, your_accuracy, 'g-', label='train_accuracy(your)')
ax2.plot(epochs, your_val_accuracy, 'g--', label='val_accuracy(your)')
ax2.set_title('Train and Validation Accuracy')
ax2.set_xlabel('Epochs')
ax2.set_ylabel('Accuracy')
ax2.grid()
ax2.legend()

plt.show()



9-3. Regularization


Regularization

  • One way to prevent model overfitting

  • Overfitting often occurs because the weight parameter values have grown large

    • Regularization is meant to prevent such cases
  • Large weight values ➡️ receive a large penalty

  • Makes the absolute values of the weights small -> pushes the weight entries toward 0 -> reduces the influence every feature has on the output

    • In other words, you can think of it as making the slopes smaller
  • Effect: the weight distribution becomes more uniform, and for a complex network it limits the network's complexity so the weights stay small

  • An appropriate regularization strength must be found

  • How it is applied: add a cost for large weights to the loss function (see the numpy sketch after this list)

  • Variants: L1, L2, and L1+L2 regularization
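
A minimal numpy sketch of the idea (the weights and base loss here are made-up values, not from the lesson): the penalty grows with the magnitude of the weights and is simply added to the data loss.

import numpy as np

weights = np.array([0.5, -1.2, 3.0, 0.01])  # hypothetical layer weights
base_loss = 0.25                            # hypothetical data loss L(y, y_hat)
alpha = lam = 0.01                          # regularization strengths

l1_cost = base_loss + alpha * np.sum(np.abs(weights))  # L1: alpha * sum|w|
l2_cost = base_loss + lam * np.sum(weights ** 2)       # L2: lambda * sum w^2
print(l1_cost, l2_cost)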


L1 Regularization

  • Loss function: $L(y_i, \hat{y}_i)$
  • A cost proportional to the sum of the absolute values of the weights -> added to the loss function
  • Sum of absolute weight values == the L1 norm

  • Total cost: $L(y_i, \hat{y}_i) + \alpha \sum_i |w_i|$, i.e., the penalty term scaled by α is added to the original loss L

  • The α value
    • works like a hyperparameter for tuning the regularization strength
    • α increases -> stronger regularization -> the sum of absolute weight values shrinks
      • unimportant features whose weights reach 0 are effectively excluded, which helps generalization and fit
    • α decreases -> weaker regularization -> weights grow, and the chance of overfitting rises

Applying L1 regularization in Keras

  • Set kernel_regularizer='l1' on a layer (a sketch of the related arguments follows)
    • It can also be applied through bias_regularizer (regularizes the bias) and activity_regularizer (regularizes the layer output)
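A sketch of the three regularizer arguments (the 0.001 coefficients are illustrative, not from the lesson); regularizers.l1() also lets you set α explicitly instead of relying on the string default.

from tensorflow.keras import layers, regularizers

demo_layer = layers.Dense(16,
                          kernel_regularizer=regularizers.l1(0.001),    # penalize the weight matrix
                          bias_regularizer=regularizers.l1(0.001),      # penalize the bias vector
                          activity_regularizer=regularizers.l1(0.001),  # penalize the layer output
                          activation='relu')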
l1_model =  models.Sequential()
l1_model.add(layers.Dense(16, 
                          kernel_regularizer='l1',
                          activation='relu', 
                          input_shape=(10000, )))
l1_model.add(layers.Dense(16, 
                          kernel_regularizer='l1',
                          activation='relu'))
l1_model.add(layers.Dense(1, activation='sigmoid'))
l1_model.compile(optimizer='rmsprop',
                 loss='binary_crossentropy',
                 metrics=['accuracy'])
l1_model.summary()

l1_model_hist = l1_model.fit(x_train, y_train,
                             epochs=30,
                             batch_size=512,
                             validation_data=(x_test, y_test))
Epoch 1/30
49/49 [==============================] - 4s 73ms/step - loss: 3.7170 - accuracy: 0.5985 - val_loss: 1.9185 - val_accuracy: 0.6300
Epoch 2/30
49/49 [==============================] - 1s 18ms/step - loss: 1.8514 - accuracy: 0.6736 - val_loss: 1.7856 - val_accuracy: 0.6722
Epoch 3/30
49/49 [==============================] - 1s 18ms/step - loss: 1.7447 - accuracy: 0.7056 - val_loss: 1.7049 - val_accuracy: 0.7178
Epoch 4/30
49/49 [==============================] - 1s 18ms/step - loss: 1.6560 - accuracy: 0.7290 - val_loss: 1.6074 - val_accuracy: 0.7516
Epoch 5/30
49/49 [==============================] - 1s 18ms/step - loss: 1.5791 - accuracy: 0.7505 - val_loss: 1.5507 - val_accuracy: 0.7556
Epoch 6/30
49/49 [==============================] - 1s 17ms/step - loss: 1.5150 - accuracy: 0.7674 - val_loss: 1.4810 - val_accuracy: 0.7716
Epoch 7/30
49/49 [==============================] - 1s 17ms/step - loss: 1.4646 - accuracy: 0.7808 - val_loss: 1.4541 - val_accuracy: 0.7825
Epoch 8/30
49/49 [==============================] - 1s 18ms/step - loss: 1.4301 - accuracy: 0.7907 - val_loss: 1.4120 - val_accuracy: 0.7937
Epoch 9/30
49/49 [==============================] - 1s 17ms/step - loss: 1.4077 - accuracy: 0.7995 - val_loss: 1.4085 - val_accuracy: 0.8002
Epoch 10/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3934 - accuracy: 0.8062 - val_loss: 1.3824 - val_accuracy: 0.8076
Epoch 11/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3807 - accuracy: 0.8150 - val_loss: 1.3872 - val_accuracy: 0.8136
Epoch 12/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3719 - accuracy: 0.8197 - val_loss: 1.3698 - val_accuracy: 0.8160
Epoch 13/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3633 - accuracy: 0.8263 - val_loss: 1.3683 - val_accuracy: 0.8252
Epoch 14/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3573 - accuracy: 0.8279 - val_loss: 1.3503 - val_accuracy: 0.8291
Epoch 15/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3497 - accuracy: 0.8330 - val_loss: 1.3579 - val_accuracy: 0.8306
Epoch 16/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3458 - accuracy: 0.8332 - val_loss: 1.3407 - val_accuracy: 0.8347
Epoch 17/30
49/49 [==============================] - 1s 19ms/step - loss: 1.3401 - accuracy: 0.8378 - val_loss: 1.3651 - val_accuracy: 0.8266
Epoch 18/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3359 - accuracy: 0.8399 - val_loss: 1.3310 - val_accuracy: 0.8378
Epoch 19/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3341 - accuracy: 0.8417 - val_loss: 1.3736 - val_accuracy: 0.8189
Epoch 20/30
49/49 [==============================] - 1s 19ms/step - loss: 1.3299 - accuracy: 0.8431 - val_loss: 1.3245 - val_accuracy: 0.8418
Epoch 21/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3270 - accuracy: 0.8441 - val_loss: 1.3340 - val_accuracy: 0.8446
Epoch 22/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3239 - accuracy: 0.8459 - val_loss: 1.3253 - val_accuracy: 0.8419
Epoch 23/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3222 - accuracy: 0.8464 - val_loss: 1.3339 - val_accuracy: 0.8450
Epoch 24/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3211 - accuracy: 0.8478 - val_loss: 1.3469 - val_accuracy: 0.8276
Epoch 25/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3188 - accuracy: 0.8474 - val_loss: 1.3259 - val_accuracy: 0.8494
Epoch 26/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3159 - accuracy: 0.8499 - val_loss: 1.3359 - val_accuracy: 0.8343
Epoch 27/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3140 - accuracy: 0.8498 - val_loss: 1.3192 - val_accuracy: 0.8505
Epoch 28/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3117 - accuracy: 0.8518 - val_loss: 1.3197 - val_accuracy: 0.8444
Epoch 29/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3087 - accuracy: 0.8524 - val_loss: 1.3245 - val_accuracy: 0.8478
Epoch 30/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3082 - accuracy: 0.8526 - val_loss: 1.3037 - val_accuracy: 0.8530
  • Visualization
l1_val_loss = l1_model_hist.history['val_loss']

epochs = range(1, 31)
plt.plot(epochs, val_loss, 'k--', label='Model')
plt.plot(epochs, l1_val_loss, 'b--', label='L1-regularized')
plt.xlabel('Epochs')
plt.ylabel('Validation Loss')
plt.legend()
plt.grid()
plt.show()

Analysis

  • L1 regularization result: the loss decreases steadily

👑 L2 Regularization

  • A cost proportional to the sum of the squared weights -> added to the loss function
  • Sum of squared weight values == the (squared) L2 norm

  • Total cost: $L(y_i, \hat{y}_i) + \lambda \sum_i w_i^2$, i.e., the penalty term scaled by λ is added to the original loss L (see the derivation below)
    • As λ grows -> the weight decay gets stronger
    • As λ shrinks -> the regularization weakens
  • Used more often than L1 because it produces more robust models
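
A one-line derivation (standard calculus, not from the original notes) of why the λ term shrinks weights: the penalty's gradient is proportional to the weight itself, so each gradient step with learning rate η decays the weight toward zero.

$$\frac{\partial}{\partial w}\Big(L + \lambda \sum_i w_i^2\Big) = \frac{\partial L}{\partial w} + 2\lambda w \quad\Rightarrow\quad w \leftarrow w - \eta\frac{\partial L}{\partial w} - 2\eta\lambda w$$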

Applying L2 regularization in Keras

  • kernel_regularizer='l2' (equivalent to the explicit form sketched below)
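As the practice section below notes, the string 'l2' uses the default factor of 0.01; passing a regularizer object makes the strength explicit. A sketch, assuming a recent tf.keras version:

from tensorflow.keras import layers, regularizers

# These two layers apply the same penalty: 'l2' is shorthand for the default 0.01.
layers.Dense(16, kernel_regularizer='l2', activation='relu')
layers.Dense(16, kernel_regularizer=regularizers.l2(0.01), activation='relu')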
l2_model =  models.Sequential()
l2_model.add(layers.Dense(16, 
                          kernel_regularizer='l2',
                          activation='relu', 
                          input_shape=(10000, )))
l2_model.add(layers.Dense(16, 
                          kernel_regularizer='l2',
                          activation='relu'))
l2_model.add(layers.Dense(1, activation='sigmoid'))
l2_model.compile(optimizer='rmsprop',
                 loss='binary_crossentropy',
                 metrics=['accuracy'])
l2_model.summary()

l2_model_hist = l2_model.fit(x_train, y_train,
                             epochs=30,
                             batch_size=512,
                             validation_data=(x_test, y_test))
Epoch 1/30
49/49 [==============================] - 4s 77ms/step - loss: 0.7143 - accuracy: 0.8118 - val_loss: 0.5874 - val_accuracy: 0.8638
Epoch 2/30
49/49 [==============================] - 1s 17ms/step - loss: 0.5342 - accuracy: 0.8753 - val_loss: 0.5332 - val_accuracy: 0.8588
Epoch 3/30
49/49 [==============================] - 1s 18ms/step - loss: 0.4842 - accuracy: 0.8855 - val_loss: 0.5029 - val_accuracy: 0.8619
Epoch 4/30
49/49 [==============================] - 1s 17ms/step - loss: 0.4560 - accuracy: 0.8886 - val_loss: 0.4680 - val_accuracy: 0.8756
Epoch 5/30
49/49 [==============================] - 1s 19ms/step - loss: 0.4365 - accuracy: 0.8896 - val_loss: 0.5221 - val_accuracy: 0.8328
Epoch 6/30
49/49 [==============================] - 1s 19ms/step - loss: 0.4219 - accuracy: 0.8922 - val_loss: 0.4800 - val_accuracy: 0.8535
Epoch 7/30
49/49 [==============================] - 1s 17ms/step - loss: 0.4151 - accuracy: 0.8911 - val_loss: 0.4446 - val_accuracy: 0.8742
Epoch 8/30
49/49 [==============================] - 1s 16ms/step - loss: 0.4026 - accuracy: 0.8950 - val_loss: 0.4589 - val_accuracy: 0.8607
Epoch 9/30
49/49 [==============================] - 1s 17ms/step - loss: 0.3998 - accuracy: 0.8960 - val_loss: 0.4237 - val_accuracy: 0.8817
Epoch 10/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3933 - accuracy: 0.8958 - val_loss: 0.4271 - val_accuracy: 0.8776
Epoch 11/30
49/49 [==============================] - 1s 17ms/step - loss: 0.3898 - accuracy: 0.8981 - val_loss: 0.4298 - val_accuracy: 0.8728
Epoch 12/30
49/49 [==============================] - 1s 19ms/step - loss: 0.3879 - accuracy: 0.8986 - val_loss: 0.4477 - val_accuracy: 0.8639
Epoch 13/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3811 - accuracy: 0.8998 - val_loss: 0.4127 - val_accuracy: 0.8813
Epoch 14/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3844 - accuracy: 0.8967 - val_loss: 0.4101 - val_accuracy: 0.8825
Epoch 15/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3771 - accuracy: 0.8994 - val_loss: 0.4087 - val_accuracy: 0.8833
Epoch 16/30
49/49 [==============================] - 1s 17ms/step - loss: 0.3704 - accuracy: 0.9035 - val_loss: 0.5141 - val_accuracy: 0.8264
Epoch 17/30
49/49 [==============================] - 1s 17ms/step - loss: 0.3764 - accuracy: 0.8968 - val_loss: 0.4555 - val_accuracy: 0.8558
Epoch 18/30
49/49 [==============================] - 1s 19ms/step - loss: 0.3692 - accuracy: 0.9014 - val_loss: 0.4928 - val_accuracy: 0.8371
Epoch 19/30
49/49 [==============================] - 1s 17ms/step - loss: 0.3651 - accuracy: 0.9028 - val_loss: 0.4177 - val_accuracy: 0.8736
Epoch 20/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3616 - accuracy: 0.9037 - val_loss: 0.4293 - val_accuracy: 0.8686
Epoch 21/30
49/49 [==============================] - 1s 17ms/step - loss: 0.3594 - accuracy: 0.9051 - val_loss: 0.3977 - val_accuracy: 0.8828
Epoch 22/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3566 - accuracy: 0.9040 - val_loss: 0.4135 - val_accuracy: 0.8738
Epoch 23/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3559 - accuracy: 0.9040 - val_loss: 0.3939 - val_accuracy: 0.8839
Epoch 24/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3505 - accuracy: 0.9062 - val_loss: 0.4175 - val_accuracy: 0.8707
Epoch 25/30
49/49 [==============================] - 1s 17ms/step - loss: 0.3490 - accuracy: 0.9060 - val_loss: 0.3910 - val_accuracy: 0.8842
Epoch 26/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3461 - accuracy: 0.9078 - val_loss: 0.3906 - val_accuracy: 0.8826
Epoch 27/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3427 - accuracy: 0.9103 - val_loss: 0.4068 - val_accuracy: 0.8727
Epoch 28/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3469 - accuracy: 0.9050 - val_loss: 0.3875 - val_accuracy: 0.8845
Epoch 29/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3397 - accuracy: 0.9096 - val_loss: 0.4162 - val_accuracy: 0.8696
Epoch 30/30
49/49 [==============================] - 1s 17ms/step - loss: 0.3385 - accuracy: 0.9087 - val_loss: 0.4108 - val_accuracy: 0.8721
l2_val_loss = l2_model_hist.history['val_loss']

epochs = range(1, 31)
plt.plot(epochs, val_loss, 'k--', label='Model')
plt.plot(epochs, l2_val_loss, 'r--', label='L2-regularized')
plt.xlabel('Epochs')
plt.ylabel('Validation Loss')
plt.legend()
plt.grid()
plt.show()

Analysis

  • The loss stays far lower than the base model's -> the overfitting problem is largely resolved

L1-L2 Regularization

  • kernel_regularizer='l1_l2' (see the sketch below for setting explicit strengths)
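To control the two strengths independently, tf.keras also provides regularizers.l1_l2; the coefficients below are illustrative, not from the lesson.

from tensorflow.keras import layers, regularizers

# Apply both penalties at once, with each strength set explicitly.
layers.Dense(16,
             kernel_regularizer=regularizers.l1_l2(l1=0.001, l2=0.001),
             activation='relu')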
l1_l2_model =  models.Sequential()
l1_l2_model.add(layers.Dense(16, 
                             kernel_regularizer='l1_l2',
                             activation='relu', input_shape=(10000, )))
l1_l2_model.add(layers.Dense(16, 
                             kernel_regularizer='l1_l2',
                             activation='relu'))
l1_l2_model.add(layers.Dense(1, activation='sigmoid'))
l1_l2_model.compile(optimizer='rmsprop',
                    loss='binary_crossentropy',
                    metrics=['accuracy'])
l1_l2_model.summary()

l1_l2_model_hist = l1_l2_model.fit(x_train, y_train,
                                  epochs=30,
                                  batch_size=512,
                                  validation_data=(x_test, y_test))
Epoch 1/30
49/49 [==============================] - 4s 81ms/step - loss: 3.9135 - accuracy: 0.5709 - val_loss: 2.0596 - val_accuracy: 0.6804
Epoch 2/30
49/49 [==============================] - 1s 17ms/step - loss: 1.9729 - accuracy: 0.6784 - val_loss: 1.8913 - val_accuracy: 0.6494
Epoch 3/30
49/49 [==============================] - 1s 17ms/step - loss: 1.8467 - accuracy: 0.6861 - val_loss: 1.8015 - val_accuracy: 0.6960
Epoch 4/30
49/49 [==============================] - 1s 18ms/step - loss: 1.7438 - accuracy: 0.6937 - val_loss: 1.6883 - val_accuracy: 0.7049
Epoch 5/30
49/49 [==============================] - 1s 18ms/step - loss: 1.6566 - accuracy: 0.7029 - val_loss: 1.6247 - val_accuracy: 0.7156
Epoch 6/30
49/49 [==============================] - 1s 17ms/step - loss: 1.5850 - accuracy: 0.7224 - val_loss: 1.5445 - val_accuracy: 0.7348
Epoch 7/30
49/49 [==============================] - 1s 18ms/step - loss: 1.5256 - accuracy: 0.7420 - val_loss: 1.5049 - val_accuracy: 0.7502
Epoch 8/30
49/49 [==============================] - 1s 17ms/step - loss: 1.4794 - accuracy: 0.7587 - val_loss: 1.4579 - val_accuracy: 0.7618
Epoch 9/30
49/49 [==============================] - 1s 19ms/step - loss: 1.4502 - accuracy: 0.7709 - val_loss: 1.4477 - val_accuracy: 0.7764
Epoch 10/30
49/49 [==============================] - 1s 17ms/step - loss: 1.4345 - accuracy: 0.7809 - val_loss: 1.4238 - val_accuracy: 0.7858
Epoch 11/30
49/49 [==============================] - 1s 19ms/step - loss: 1.4214 - accuracy: 0.7896 - val_loss: 1.4194 - val_accuracy: 0.7951
Epoch 12/30
49/49 [==============================] - 1s 18ms/step - loss: 1.4090 - accuracy: 0.7998 - val_loss: 1.4075 - val_accuracy: 0.7923
Epoch 13/30
49/49 [==============================] - 1s 20ms/step - loss: 1.3983 - accuracy: 0.8062 - val_loss: 1.3979 - val_accuracy: 0.8082
Epoch 14/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3898 - accuracy: 0.8117 - val_loss: 1.3844 - val_accuracy: 0.8141
Epoch 15/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3813 - accuracy: 0.8190 - val_loss: 1.4054 - val_accuracy: 0.7964
Epoch 16/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3764 - accuracy: 0.8208 - val_loss: 1.3707 - val_accuracy: 0.8230
Epoch 17/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3687 - accuracy: 0.8280 - val_loss: 1.3711 - val_accuracy: 0.8261
Epoch 18/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3641 - accuracy: 0.8300 - val_loss: 1.3620 - val_accuracy: 0.8302
Epoch 19/30
49/49 [==============================] - 1s 19ms/step - loss: 1.3604 - accuracy: 0.8323 - val_loss: 1.3639 - val_accuracy: 0.8318
Epoch 20/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3570 - accuracy: 0.8362 - val_loss: 1.3550 - val_accuracy: 0.8339
Epoch 21/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3535 - accuracy: 0.8366 - val_loss: 1.3551 - val_accuracy: 0.8358
Epoch 22/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3476 - accuracy: 0.8404 - val_loss: 1.3508 - val_accuracy: 0.8386
Epoch 23/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3478 - accuracy: 0.8415 - val_loss: 1.3564 - val_accuracy: 0.8366
Epoch 24/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3437 - accuracy: 0.8426 - val_loss: 1.3441 - val_accuracy: 0.8409
Epoch 25/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3431 - accuracy: 0.8427 - val_loss: 1.3432 - val_accuracy: 0.8419
Epoch 26/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3392 - accuracy: 0.8459 - val_loss: 1.3410 - val_accuracy: 0.8400
Epoch 27/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3371 - accuracy: 0.8474 - val_loss: 1.3398 - val_accuracy: 0.8438
Epoch 28/30
49/49 [==============================] - 1s 17ms/step - loss: 1.3352 - accuracy: 0.8465 - val_loss: 1.3457 - val_accuracy: 0.8414
Epoch 29/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3332 - accuracy: 0.8478 - val_loss: 1.3717 - val_accuracy: 0.8210
Epoch 30/30
49/49 [==============================] - 1s 18ms/step - loss: 1.3324 - accuracy: 0.8480 - val_loss: 1.3415 - val_accuracy: 0.8414
l1_l2_val_loss = l1_l2_model_hist.history['val_loss']

epochs = range(1, 31)
plt.plot(epochs, val_loss, 'k--', label='Model')
plt.plot(epochs, l1_l2_val_loss, 'g--', label='L1_L2-regularized')
plt.xlabel('Epochs')
plt.ylabel('Validation Loss')
plt.legend()
plt.grid()
plt.show()

Analysis

  • Not much different from the L1 regularization result

Overall comparison

epochs = range(1, 31)
plt.plot(epochs, val_loss, 'k--', label='Model')
plt.plot(epochs, l1_val_loss, 'b--', label='L1-regularized')
plt.plot(epochs, l2_val_loss, 'r--', label='L2-regularized')
plt.plot(epochs, l1_l2_val_loss, 'g--', label='L1_L2-regularized')
plt.xlabel('Epochs')
plt.ylabel('Validation Loss')
plt.legend()
plt.grid()
plt.show()


Practice

  • L2 regularization
# [playground]
# The default value for L2 regularization is 0.01. Adjust it to any strength you like, or try a different regularizer.
from tensorflow.keras import models, layers, regularizers

your_model = models.Sequential()
your_model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001), activation='relu', input_shape=(10000, )))
your_model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001), activation='relu'))
your_model.add(layers.Dense(1, activation='sigmoid'))
your_model.compile(optimizer='rmsprop',
                    loss='binary_crossentropy',
                    metrics=['accuracy'])
your_model.summary()

your_model_hist = your_model.fit(x_train, y_train,
                                 epochs=30,
                                 batch_size=512,
                                 validation_data=(x_test, y_test))
Epoch 1/30
49/49 [==============================] - 5s 85ms/step - loss: 0.4940 - accuracy: 0.8258 - val_loss: 0.3957 - val_accuracy: 0.8699
Epoch 2/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3225 - accuracy: 0.9053 - val_loss: 0.3409 - val_accuracy: 0.8873
Epoch 3/30
49/49 [==============================] - 1s 17ms/step - loss: 0.2778 - accuracy: 0.9185 - val_loss: 0.3426 - val_accuracy: 0.8831
Epoch 4/30
49/49 [==============================] - 1s 17ms/step - loss: 0.2575 - accuracy: 0.9278 - val_loss: 0.3372 - val_accuracy: 0.8854
Epoch 5/30
49/49 [==============================] - 1s 17ms/step - loss: 0.2477 - accuracy: 0.9302 - val_loss: 0.3481 - val_accuracy: 0.8820
Epoch 6/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2367 - accuracy: 0.9350 - val_loss: 0.3739 - val_accuracy: 0.8730
Epoch 7/30
49/49 [==============================] - 1s 17ms/step - loss: 0.2314 - accuracy: 0.9369 - val_loss: 0.3582 - val_accuracy: 0.8803
Epoch 8/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2302 - accuracy: 0.9361 - val_loss: 0.3633 - val_accuracy: 0.8782
Epoch 9/30
49/49 [==============================] - 1s 19ms/step - loss: 0.2246 - accuracy: 0.9379 - val_loss: 0.3687 - val_accuracy: 0.8786
Epoch 10/30
49/49 [==============================] - 1s 19ms/step - loss: 0.2180 - accuracy: 0.9422 - val_loss: 0.4169 - val_accuracy: 0.8622
Epoch 11/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2175 - accuracy: 0.9411 - val_loss: 0.4029 - val_accuracy: 0.8683
Epoch 12/30
49/49 [==============================] - 1s 19ms/step - loss: 0.2171 - accuracy: 0.9410 - val_loss: 0.4116 - val_accuracy: 0.8656
Epoch 13/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2088 - accuracy: 0.9458 - val_loss: 0.3935 - val_accuracy: 0.8714
Epoch 14/30
49/49 [==============================] - 1s 17ms/step - loss: 0.2117 - accuracy: 0.9426 - val_loss: 0.3867 - val_accuracy: 0.8745
Epoch 15/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2056 - accuracy: 0.9463 - val_loss: 0.3901 - val_accuracy: 0.8734
Epoch 16/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2058 - accuracy: 0.9445 - val_loss: 0.3941 - val_accuracy: 0.8712
Epoch 17/30
49/49 [==============================] - 1s 19ms/step - loss: 0.2013 - accuracy: 0.9467 - val_loss: 0.4419 - val_accuracy: 0.8580
Epoch 18/30
49/49 [==============================] - 1s 17ms/step - loss: 0.2017 - accuracy: 0.9470 - val_loss: 0.3992 - val_accuracy: 0.8694
Epoch 19/30
49/49 [==============================] - 1s 19ms/step - loss: 0.1946 - accuracy: 0.9510 - val_loss: 0.4038 - val_accuracy: 0.8707
Epoch 20/30
49/49 [==============================] - 1s 19ms/step - loss: 0.1949 - accuracy: 0.9499 - val_loss: 0.4302 - val_accuracy: 0.8651
Epoch 21/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1925 - accuracy: 0.9493 - val_loss: 0.4115 - val_accuracy: 0.8686
Epoch 22/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1874 - accuracy: 0.9511 - val_loss: 0.4243 - val_accuracy: 0.8674
Epoch 23/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1890 - accuracy: 0.9518 - val_loss: 0.4489 - val_accuracy: 0.8596
Epoch 24/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1844 - accuracy: 0.9538 - val_loss: 0.4480 - val_accuracy: 0.8622
Epoch 25/30
49/49 [==============================] - 1s 20ms/step - loss: 0.1785 - accuracy: 0.9571 - val_loss: 0.5140 - val_accuracy: 0.8461
Epoch 26/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1799 - accuracy: 0.9564 - val_loss: 0.4273 - val_accuracy: 0.8671
Epoch 27/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1785 - accuracy: 0.9560 - val_loss: 0.4281 - val_accuracy: 0.8673
Epoch 28/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1715 - accuracy: 0.9609 - val_loss: 0.4406 - val_accuracy: 0.8656
Epoch 29/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1715 - accuracy: 0.9571 - val_loss: 0.4854 - val_accuracy: 0.8550
Epoch 30/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1674 - accuracy: 0.9613 - val_loss: 0.5444 - val_accuracy: 0.8408
your_val_loss = your_model_hist.history['val_loss']

epochs = range(1, 31)
plt.plot(epochs, val_loss, 'k--', label='Model')
plt.plot(epochs, l1_val_loss, 'b--', label='L1-regularized')
plt.plot(epochs, l2_val_loss, 'r--', label='L2-regularized')
plt.plot(epochs, l1_l2_val_loss, 'g--', label='L1_L2-regularized')
plt.plot(epochs, your_val_loss, 'y--', label='Your L2-regularized')
plt.xlabel('Epochs')
plt.ylabel('Validation Loss')
plt.legend()
plt.grid()
plt.show()



9-4. Dropout


Dropout

  • One of the regularization methods for preventing overfitting in deep learning models
  • Widely used among regularization methods because the concept is simple, it is effective, and it is easy to apply
  • During training, only a subset of all the nodes is used
  • While training proceeds -> it works by randomly excluding some of each layer's nodes
  • The dropout rate is typically set between 20% and 50%
  • At test time nothing is dropped out -> instead the layer's outputs are scaled down according to the dropout rate (see the sketch after this list)
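
The lesson describes the classic formulation (scale the outputs down at test time); tf.keras implements the equivalent "inverted" form, scaling the surviving units up by 1/(1 - rate) during training so that inference needs no rescaling. A minimal sketch:

import tensorflow as tf
from tensorflow.keras import layers

drop = layers.Dropout(0.5)
x = tf.ones((1, 8))

print(drop(x, training=True))   # roughly half the entries zeroed, survivors scaled to 2.0
print(drop(x, training=False))  # identity at inference: every entry stays 1.0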

Q. If each of three layers is randomly dropped out at rates of 40%, 60%, and 40%, how many node combinations are possible?

A. Assuming 5-node layers (which the quoted numbers imply), by nCr = n!/((n-r)! r!): C(5,2) × C(5,3) × C(5,2) = 10 × 10 × 10 = 1,000 combinations
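
A quick check of that arithmetic with the standard library (the 5-node layers are the assumption stated above):

from math import comb

# nodes dropped per 5-node layer: 40% -> 2, 60% -> 3, 40% -> 2
print(comb(5, 2) * comb(5, 3) * comb(5, 2))  # 1000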


Dropout (20%)

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000, )))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()

  • Save the training history separately
drop_20_history = model.fit(x_train, y_train,
                            epochs=30,
                            batch_size=512,
                            validation_data=(x_test, y_test))
Epoch 1/30
49/49 [==============================] - 4s 73ms/step - loss: 0.5227 - accuracy: 0.7645 - val_loss: 0.3663 - val_accuracy: 0.8759
Epoch 2/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3361 - accuracy: 0.8756 - val_loss: 0.2977 - val_accuracy: 0.8880
Epoch 3/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2560 - accuracy: 0.9097 - val_loss: 0.2873 - val_accuracy: 0.8884
Epoch 4/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2097 - accuracy: 0.9276 - val_loss: 0.2952 - val_accuracy: 0.8851
Epoch 5/30
49/49 [==============================] - 1s 17ms/step - loss: 0.1788 - accuracy: 0.9379 - val_loss: 0.2994 - val_accuracy: 0.8836
Epoch 6/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1514 - accuracy: 0.9494 - val_loss: 0.3290 - val_accuracy: 0.8801
Epoch 7/30
49/49 [==============================] - 1s 17ms/step - loss: 0.1284 - accuracy: 0.9580 - val_loss: 0.3802 - val_accuracy: 0.8730
Epoch 8/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1138 - accuracy: 0.9621 - val_loss: 0.3602 - val_accuracy: 0.8774
Epoch 9/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0949 - accuracy: 0.9691 - val_loss: 0.4096 - val_accuracy: 0.8747
Epoch 10/30
49/49 [==============================] - 1s 19ms/step - loss: 0.0818 - accuracy: 0.9741 - val_loss: 0.4667 - val_accuracy: 0.8695
Epoch 11/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0715 - accuracy: 0.9774 - val_loss: 0.4805 - val_accuracy: 0.8720
Epoch 12/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0588 - accuracy: 0.9818 - val_loss: 0.5191 - val_accuracy: 0.8693
Epoch 13/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0510 - accuracy: 0.9846 - val_loss: 0.5811 - val_accuracy: 0.8679
Epoch 14/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0453 - accuracy: 0.9857 - val_loss: 0.6093 - val_accuracy: 0.8679
Epoch 15/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0403 - accuracy: 0.9879 - val_loss: 0.6695 - val_accuracy: 0.8659
Epoch 16/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0358 - accuracy: 0.9899 - val_loss: 0.7483 - val_accuracy: 0.8647
Epoch 17/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0331 - accuracy: 0.9902 - val_loss: 0.7662 - val_accuracy: 0.8661
Epoch 18/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0284 - accuracy: 0.9918 - val_loss: 0.8356 - val_accuracy: 0.8662
Epoch 19/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0275 - accuracy: 0.9922 - val_loss: 0.8191 - val_accuracy: 0.8628
Epoch 20/30
49/49 [==============================] - 1s 19ms/step - loss: 0.0240 - accuracy: 0.9931 - val_loss: 0.9135 - val_accuracy: 0.8652
Epoch 21/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0242 - accuracy: 0.9928 - val_loss: 0.9824 - val_accuracy: 0.8626
Epoch 22/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0227 - accuracy: 0.9937 - val_loss: 0.9898 - val_accuracy: 0.8643
Epoch 23/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0223 - accuracy: 0.9932 - val_loss: 1.0168 - val_accuracy: 0.8640
Epoch 24/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0230 - accuracy: 0.9937 - val_loss: 1.0815 - val_accuracy: 0.8610
Epoch 25/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0198 - accuracy: 0.9941 - val_loss: 1.0740 - val_accuracy: 0.8642
Epoch 26/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0223 - accuracy: 0.9939 - val_loss: 1.1504 - val_accuracy: 0.8590
Epoch 27/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0237 - accuracy: 0.9945 - val_loss: 1.1496 - val_accuracy: 0.8606
Epoch 28/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0234 - accuracy: 0.9937 - val_loss: 1.1620 - val_accuracy: 0.8606
Epoch 29/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0234 - accuracy: 0.9938 - val_loss: 1.1671 - val_accuracy: 0.8609
Epoch 30/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0210 - accuracy: 0.9944 - val_loss: 1.3228 - val_accuracy: 0.8580
  • Compare the base model's history with the 20% dropout history
drop_20_dict = drop_20_history.history

drop_20_loss = drop_20_dict['loss']
drop_20_val_loss = drop_20_dict['val_loss']
epochs = range(1, len(loss) + 1)

fig = plt.figure(figsize=(12, 5))

ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(epochs, loss, 'b-', label='train_loss')
ax1.plot(epochs, val_loss, 'r-', label='val_loss')
ax1.plot(epochs, drop_20_loss, 'b--', label='train_loss (dropout 20%)')
ax1.plot(epochs, drop_20_val_loss, 'r--', label='val_loss (dropout 20%)')
ax1.set_title('Train and Validation Loss')
ax1.set_xlabel('Epochs')
ax1.set_ylabel('Loss')
ax1.grid()
ax1.legend()

drop_20_accuracy = drop_20_dict['accuracy']
drop_20_val_accuracy = drop_20_dict['val_accuracy']

ax2 = fig.add_subplot(1, 2, 2)
ax2.plot(epochs, accuracy, 'b-', label='train_accuracy')
ax2.plot(epochs, val_accuracy, 'r-', label='val_accuracy')
ax2.plot(epochs, drop_20_accuracy, 'b--', label='train_accuracy (dropout 20%)')
ax2.plot(epochs, drop_20_val_accuracy, 'r--', label='val_accuracy (dropout 20%)')
ax2.set_title('Train and Validation Accuracy')
ax2.set_xlabel('Epochs')
ax2.set_ylabel('Accuracy')
ax2.grid()
ax2.legend()

plt.show()


Dropout (50%)

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000, )))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()

drop_50_history = model.fit(x_train, y_train,
                            epochs=30,
                            batch_size=512,
                            validation_data=(x_test, y_test))
Epoch 1/30
49/49 [==============================] - 4s 75ms/step - loss: 0.5882 - accuracy: 0.6964 - val_loss: 0.4378 - val_accuracy: 0.8714
Epoch 2/30
49/49 [==============================] - 1s 18ms/step - loss: 0.4315 - accuracy: 0.8289 - val_loss: 0.3365 - val_accuracy: 0.8836
Epoch 3/30
49/49 [==============================] - 1s 18ms/step - loss: 0.3393 - accuracy: 0.8811 - val_loss: 0.2975 - val_accuracy: 0.8816
Epoch 4/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2833 - accuracy: 0.9059 - val_loss: 0.2812 - val_accuracy: 0.8898
Epoch 5/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2495 - accuracy: 0.9168 - val_loss: 0.2856 - val_accuracy: 0.8890
Epoch 6/30
49/49 [==============================] - 1s 18ms/step - loss: 0.2189 - accuracy: 0.9281 - val_loss: 0.2917 - val_accuracy: 0.8865
Epoch 7/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1942 - accuracy: 0.9364 - val_loss: 0.3191 - val_accuracy: 0.8849
Epoch 8/30
49/49 [==============================] - 1s 19ms/step - loss: 0.1781 - accuracy: 0.9433 - val_loss: 0.3355 - val_accuracy: 0.8827
Epoch 9/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1629 - accuracy: 0.9481 - val_loss: 0.3695 - val_accuracy: 0.8824
Epoch 10/30
49/49 [==============================] - 1s 17ms/step - loss: 0.1540 - accuracy: 0.9506 - val_loss: 0.3838 - val_accuracy: 0.8802
Epoch 11/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1409 - accuracy: 0.9542 - val_loss: 0.4114 - val_accuracy: 0.8755
Epoch 12/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1344 - accuracy: 0.9576 - val_loss: 0.4253 - val_accuracy: 0.8778
Epoch 13/30
49/49 [==============================] - 1s 19ms/step - loss: 0.1277 - accuracy: 0.9601 - val_loss: 0.4442 - val_accuracy: 0.8772
Epoch 14/30
49/49 [==============================] - 1s 17ms/step - loss: 0.1176 - accuracy: 0.9618 - val_loss: 0.4756 - val_accuracy: 0.8782
Epoch 15/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1168 - accuracy: 0.9609 - val_loss: 0.4875 - val_accuracy: 0.8742
Epoch 16/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1121 - accuracy: 0.9627 - val_loss: 0.5101 - val_accuracy: 0.8710
Epoch 17/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1085 - accuracy: 0.9630 - val_loss: 0.5473 - val_accuracy: 0.8736
Epoch 18/30
49/49 [==============================] - 1s 17ms/step - loss: 0.1058 - accuracy: 0.9663 - val_loss: 0.5445 - val_accuracy: 0.8704
Epoch 19/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1045 - accuracy: 0.9657 - val_loss: 0.6082 - val_accuracy: 0.8731
Epoch 20/30
49/49 [==============================] - 1s 17ms/step - loss: 0.1016 - accuracy: 0.9656 - val_loss: 0.5971 - val_accuracy: 0.8697
Epoch 21/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0986 - accuracy: 0.9662 - val_loss: 0.6734 - val_accuracy: 0.8742
Epoch 22/30
49/49 [==============================] - 1s 18ms/step - loss: 0.1016 - accuracy: 0.9652 - val_loss: 0.6458 - val_accuracy: 0.8726
Epoch 23/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0990 - accuracy: 0.9693 - val_loss: 0.6504 - val_accuracy: 0.8714
Epoch 24/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0985 - accuracy: 0.9695 - val_loss: 0.6293 - val_accuracy: 0.8655
Epoch 25/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0921 - accuracy: 0.9694 - val_loss: 0.6891 - val_accuracy: 0.8711
Epoch 26/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0889 - accuracy: 0.9694 - val_loss: 0.7061 - val_accuracy: 0.8692
Epoch 27/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0922 - accuracy: 0.9714 - val_loss: 0.6933 - val_accuracy: 0.8682
Epoch 28/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0974 - accuracy: 0.9695 - val_loss: 0.7183 - val_accuracy: 0.8664
Epoch 29/30
49/49 [==============================] - 1s 17ms/step - loss: 0.0982 - accuracy: 0.9679 - val_loss: 0.7188 - val_accuracy: 0.8675
Epoch 30/30
49/49 [==============================] - 1s 18ms/step - loss: 0.0958 - accuracy: 0.9706 - val_loss: 0.7255 - val_accuracy: 0.8654
  • Compare the base model, 20% dropout, and 50% dropout
drop_50_dict = drop_50_history.history

drop_50_loss = drop_50_dict['loss']
drop_50_val_loss = drop_50_dict['val_loss']
epochs = range(1, len(loss) + 1)

fig = plt.figure(figsize=(12, 5))

ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(epochs, loss, 'b-', label='train_loss')
ax1.plot(epochs, val_loss, 'r-', label='val_loss')
ax1.plot(epochs, drop_20_loss, 'b--', label='train_loss (dropout 20%)')
ax1.plot(epochs, drop_20_val_loss, 'r--', label='val_loss (dropout 20%)')
ax1.plot(epochs, drop_50_loss, 'b:', label='train_loss (dropout 50%)')
ax1.plot(epochs, drop_50_val_loss, 'r:', label='val_loss (dropout 50%)')
ax1.set_title('Train and Validation Loss')
ax1.set_xlabel('Epochs')
ax1.set_ylabel('Loss')
ax1.grid()
ax1.legend()

drop_50_accuracy = drop_50_dict['accuracy']
drop_50_val_accuracy = drop_50_dict['val_accuracy']

ax2 = fig.add_subplot(1, 2, 2)
ax2.plot(epochs, accuracy, 'b-', label='train_accuracy')
ax2.plot(epochs, val_accuracy, 'r-', label='val_accuracy')
ax2.plot(epochs, drop_20_accuracy, 'b--', label='train_accuracy (dropout 20%)')
ax2.plot(epochs, drop_20_val_accuracy, 'r--', label='val_accuracy (dropout 20%)')
ax2.plot(epochs, drop_50_accuracy, 'b:', label='train_accuracy (dropout 50%)')
ax2.plot(epochs, drop_50_val_accuracy, 'r:', label='val_accuracy (dropout 50%)')
ax2.set_title('Train and Validation Accuracy')
ax2.set_xlabel('Epochs')
ax2.set_ylabel('Accuracy')
ax2.grid()
ax2.legend()

plt.show()

Analysis

  • The model with 50% dropout reduces overfitting the most
    • Though this is still not a complete fix


9-5. Wrapping Up


Q. What is the purpose of adjusting model size, regularization, and dropout?

A. To prevent underfitting or overfitting of the model
