- In this chapter, we implement softmax regression twice: once at a low level, and once using F.cross_entropy.
- Setup
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
x_train = [[1, 2, 1, 1],
           [2, 1, 3, 2],
           [3, 1, 3, 4],
           [4, 1, 5, 5],
           [1, 7, 5, 5],
           [1, 2, 5, 6],
           [1, 6, 6, 6],
           [1, 7, 7, 7]]
y_train = [2,2,2,1,1,1,0,0]
x_train = torch.FloatTensor(x_train)
y_train = torch.LongTensor(y_train)
- Each sample in x_train has 4 features, and there are 8 samples in total.
- y_train holds the label for each sample, drawn from 3 classes (0, 1, 2).
1. Implementing Softmax Regression (Low-Level)
print(x_train.shape)
print(y_train.shape)
>> torch.Size([8, 4])
>> torch.Size([8])
y_one_hot = torch.zeros(8, 3)                    # (num_samples, num_classes)
y_one_hot.scatter_(1, y_train.unsqueeze(1), 1)   # in-place one-hot encoding along dim 1
print(y_one_hot.shape)
>> torch.Size([8, 3])
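As an aside, the same encoding can be produced with F.one_hot, avoiding the in-place scatter_; a minimal sketch, reusing the y_train defined above:
y_one_hot_alt = F.one_hot(y_train, num_classes=3).float()  # scatter_-free one-hot
print(torch.equal(y_one_hot, y_one_hot_alt))
>> True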
W = torch.zeros((4, 3), requires_grad=True)   # weights: (num_features, num_classes)
b = torch.zeros(1, requires_grad=True)        # bias
optimizer = optim.SGD([W, b], lr=0.1)

nb_epochs = 1000
for epoch in range(nb_epochs + 1):
    # Hypothesis: softmax over the class dimension
    hypothesis = F.softmax(x_train.matmul(W) + b, dim=1)
    # Cross-entropy cost, written out by hand
    cost = (y_one_hot * -torch.log(hypothesis)).sum(dim=1).mean()

    optimizer.zero_grad()
    cost.backward()
    optimizer.step()

    if epoch % 100 == 0:
        print('Epoch {:4d}/{} Cost {:.6f}'.format(epoch, nb_epochs, cost.item()))
Epoch 0/1000 Cost 1.098612
Epoch 100/1000 Cost 0.761050
Epoch 200/1000 Cost 0.689991
Epoch 300/1000 Cost 0.643229
Epoch 400/1000 Cost 0.604117
Epoch 500/1000 Cost 0.568255
Epoch 600/1000 Cost 0.533922
Epoch 700/1000 Cost 0.500291
Epoch 800/1000 Cost 0.466908
Epoch 900/1000 Cost 0.433507
Epoch 1000/1000 Cost 0.399962
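One caveat about the low-level cost: computing torch.log(F.softmax(...)) in two steps can underflow for extreme logits. F.log_softmax fuses the two operations and is numerically stabler; a quick check, reusing W, b, and y_one_hot from above:
z = x_train.matmul(W) + b
cost_manual = (y_one_hot * -torch.log(F.softmax(z, dim=1))).sum(dim=1).mean()
cost_fused = (y_one_hot * -F.log_softmax(z, dim=1)).sum(dim=1).mean()
print(torch.allclose(cost_manual, cost_fused))  # equal up to float tolerance
>> True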
2. Implementing Softmax Regression (High-Level)
Implement the cost function with F.cross_entropy(). Note that F.cross_entropy expects raw logits: it applies log-softmax internally, so neither an explicit softmax nor one-hot encoded labels are needed.
W = torch.zeros((4, 3), requires_grad=True)
b = torch.zeros(1, requires_grad=True)
optimizer = optim.SGD([W, b], lr=0.1)

nb_epochs = 1000
for epoch in range(nb_epochs + 1):
    z = x_train.matmul(W) + b            # raw logits; no explicit softmax
    cost = F.cross_entropy(z, y_train)   # takes logits and integer labels

    optimizer.zero_grad()
    cost.backward()
    optimizer.step()

    if epoch % 100 == 0:
        print('Epoch {:4d} / {} Cost: {:.6f}'.format(epoch, nb_epochs, cost.item()))
Epoch 0 / 1000 Cost: 1.098612
Epoch 100 / 1000 Cost: 0.761050
Epoch 200 / 1000 Cost: 0.689991
Epoch 300 / 1000 Cost: 0.643229
Epoch 400 / 1000 Cost: 0.604117
Epoch 500 / 1000 Cost: 0.568255
Epoch 600 / 1000 Cost: 0.533922
Epoch 700 / 1000 Cost: 0.500291
Epoch 800 / 1000 Cost: 0.466908
Epoch 900 / 1000 Cost: 0.433507
Epoch 1000 / 1000 Cost: 0.399962
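For reference, F.cross_entropy is equivalent to F.log_softmax followed by F.nll_loss; a quick sanity check with the tensors above:
z = x_train.matmul(W) + b
print(torch.allclose(F.cross_entropy(z, y_train),
                     F.nll_loss(F.log_softmax(z, dim=1), y_train)))
>> True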
3. Implementing Softmax Regression with nn.Module
Use nn.Linear(), the same module we used to implement linear regression.
- output_dim must be the number of classes (3).
model = nn.Linear(4, 3)   # input_dim=4 features, output_dim=3 classes
optimizer = optim.SGD(model.parameters(), lr=0.1)

nb_epochs = 1000
for epoch in range(nb_epochs + 1):
    prediction = model(x_train)                   # logits of shape (8, 3)
    cost = F.cross_entropy(prediction, y_train)

    optimizer.zero_grad()
    cost.backward()
    optimizer.step()

    if epoch % 100 == 0:
        print('Epoch {:4d} / {} Cost: {:.6f}'.format(epoch, nb_epochs, cost.item()))
Epoch 0 / 1000 Cost: 1.616785
Epoch 100 / 1000 Cost: 0.658891
Epoch 200 / 1000 Cost: 0.573444
Epoch 300 / 1000 Cost: 0.518151
Epoch 400 / 1000 Cost: 0.473266
Epoch 500 / 1000 Cost: 0.433516
Epoch 600 / 1000 Cost: 0.396563
Epoch 700 / 1000 Cost: 0.360914
Epoch 800 / 1000 Cost: 0.325392
Epoch 900 / 1000 Cost: 0.289178
Epoch 1000 / 1000 Cost: 0.254148
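To turn the trained model's logits into class predictions, take the argmax over the class dimension (softmax is monotone, so it is not needed for the argmax itself); a minimal sketch using the model above:
with torch.no_grad():
    predicted = model(x_train).argmax(dim=1)           # index of the largest logit
    accuracy = (predicted == y_train).float().mean()
    print(predicted)   # should largely match y_train given the low final cost
    print(accuracy)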
4. Implementing Softmax Regression as a Class
- Implement softmax regression as a class that inherits from nn.Module.
class SoftmaxClassifierModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 3)   # input_dim=4, output_dim=3

    def forward(self, x):
        return self.linear(x)           # returns logits; F.cross_entropy handles the softmax

model = SoftmaxClassifierModel()
optimizer = optim.SGD(model.parameters(), lr=0.1)

nb_epochs = 1000
for epoch in range(nb_epochs + 1):
    prediction = model(x_train)
    cost = F.cross_entropy(prediction, y_train)

    optimizer.zero_grad()
    cost.backward()
    optimizer.step()

    if epoch % 100 == 0:
        print('Epoch {:4d} / {} Cost: {:.6f}'.format(epoch, nb_epochs, cost.item()))
Epoch 0 / 1000 Cost: 2.637636
Epoch 100 / 1000 Cost: 0.647903
Epoch 200 / 1000 Cost: 0.564643
Epoch 300 / 1000 Cost: 0.511043
Epoch 400 / 1000 Cost: 0.467249
Epoch 500 / 1000 Cost: 0.428281
Epoch 600 / 1000 Cost: 0.391924
Epoch 700 / 1000 Cost: 0.356742
Epoch 800 / 1000 Cost: 0.321577
Epoch 900 / 1000 Cost: 0.285617
Epoch 1000 / 1000 Cost: 0.250818
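Finally, inference on an unseen input works the same way; new_x below is a hypothetical sample made up for illustration:
new_x = torch.FloatTensor([[1, 8, 8, 8]])    # hypothetical 4-feature sample
with torch.no_grad():
    probs = F.softmax(model(new_x), dim=1)   # logits -> class probabilities
    print(probs)
    print(probs.argmax(dim=1))               # predicted class index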