PyTorch
- Deep learning framework
- TensorFlow: define-and-run (build a static graph first, then execute it)
- PyTorch: define-by-run (the graph is built dynamically as the code runs)
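Define-by-run can be illustrated with a short autograd sketch (variable names are illustrative): ordinary Python control flow decides the graph structure while the code executes.

```python
import torch

# Define-by-run: the graph is recorded as normal Python runs,
# so an if-statement can change its structure at runtime.
x = torch.tensor(2.0, requires_grad=True)

if x > 1:          # branch chosen while executing
    y = x * x      # y = x^2  ->  dy/dx = 2x
else:
    y = x + 1

y.backward()
print(x.grad)      # tensor(4.)
```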
tensor
Tensor initialization
Create from data
import torch
a = torch.tensor([[1,2],[3,4]], dtype=torch.int16)
b = torch.tensor([2], dtype=torch.float32)
c = torch.tensor([3], dtype=torch.float64)
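When no dtype is passed, torch.tensor infers one from the Python data; a quick sketch:

```python
import torch

# dtype is inferred from the data unless given explicitly
i = torch.tensor([[1, 2], [3, 4]])             # ints   -> torch.int64
f = torch.tensor([1.0, 2.0])                   # floats -> torch.float32
e = torch.tensor([1, 2], dtype=torch.float64)  # explicit override

print(i.dtype, f.dtype, e.dtype)
```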
Create from random or constant values
shape = (2, 1)
zero_t = torch.zeros(3)
one_t = torch.ones(shape)
random_t = torch.rand(3)
range_t = torch.arange(0,5)
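The `*_like` helpers are a convenient companion to zeros/ones: they reuse another tensor's shape (and dtype) instead of taking an explicit shape tuple. A minimal sketch:

```python
import torch

base = torch.rand(2, 3)

# *_like reuses base's shape and dtype for the new tensor
z = torch.zeros_like(base)
o = torch.ones_like(base)

print(z.shape, o.shape)   # both torch.Size([2, 3])
```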
Create from a NumPy array
import numpy as np
data = [[1,2],[3,4]]
np_array = np.array(data)
t_np = torch.from_numpy(np_array)
Convert back to a NumPy array
a = torch.arange(1, 13).view(4, 3)
a_np = a.numpy()
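Note that both conversions share memory rather than copy (for CPU tensors); mutating one side is visible through the other:

```python
import numpy as np
import torch

arr = np.array([1, 2, 3])
t = torch.from_numpy(arr)   # shares memory with arr (no copy)

arr[0] = 100                # the change is visible through the tensor
print(t)                    # tensor([100,   2,   3])

# .numpy() shares memory the same way (CPU tensors only)
back = t.numpy()
print(np.shares_memory(back, arr))
```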
Tensor attributes
t.shape
: the tensor's shape
t.dtype
: the tensor's data type
t.device
: the device the tensor is stored on (CPU or GPU)
t = torch.tensor([[1,2],[3,4]], dtype=torch.float64)
print("shape of tensor {}".format(t.shape))
print("datatype of tensor {}".format(t.dtype))
print("device tensor is stored on {}".format(t.device))
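To actually move a tensor between devices, the usual pattern is `.to(device)`, guarded by a CUDA availability check so the same code runs everywhere:

```python
import torch

t = torch.tensor([[1, 2], [3, 4]])
print(t.device)              # cpu by default

# pick the GPU only when one is available
device = "cuda" if torch.cuda.is_available() else "cpu"
t = t.to(device)
print(t.device)
```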
Tensor operations
Addition and subtraction
a = torch.tensor([3,2])
b = torch.tensor([5,3])
sum = a + b
print("sum : {}".format(sum))
sub = a - b
print("sub : {}".format(sub))
sum_element_a = a.sum()
print(sum_element_a)
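`sum` can also reduce along a single dimension via the `dim` argument; a small sketch:

```python
import torch

a = torch.arange(1, 7).view(2, 3)   # [[1, 2, 3], [4, 5, 6]]

print(a.sum())        # tensor(21)          - all elements
print(a.sum(dim=0))   # tensor([5, 7, 9])   - collapse rows
print(a.sum(dim=1))   # tensor([ 6, 15])    - collapse columns
```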
Multiplication
matmul
: matrix multiplication
mul
: element-wise multiplication (products of elements at the same positions)
a = torch.arange(0,9).view(3, 3)
b = torch.arange(0,9).view(3, 3)
mat_mul = torch.matmul(a, b)
print(mat_mul)
ele_mul = torch.mul(a, b)
print(ele_mul)
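Both operations also have operator shorthands: `@` for matmul and `*` for element-wise mul, which give the same results:

```python
import torch

a = torch.arange(0, 9).view(3, 3)
b = torch.arange(0, 9).view(3, 3)

# @ is matrix multiplication, * is element-wise multiplication
print(torch.equal(a @ b, torch.matmul(a, b)))   # True
print(torch.equal(a * b, torch.mul(a, b)))      # True
```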
tensor slicing
a = torch.arange(1, 13).view(4, 3)
print(a[:, 0])
print(a[0,:])
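Slicing also accepts ranges and negative indices, just like NumPy; a quick sketch on the same 4x3 tensor:

```python
import torch

a = torch.arange(1, 13).view(4, 3)

print(a[:, 0])     # first column: tensor([ 1,  4,  7, 10])
print(a[1:3, :2])  # rows 1-2, first two columns
print(a[-1])       # last row: tensor([10, 11, 12])
```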
Joining tensors
concat
a = torch.arange(1, 10).view(3,3)
b = torch.arange(10, 19).view(3,3)
c = torch.arange(19, 28).view(3,3)
abc = torch.cat([a,b,c], dim=0)
print("concat : \n {}".format(abc))
print("shape: {}".format(abc.shape))
abc = torch.cat([a,b,c], dim=1)
print("concat : \n {}".format(abc))
print("shape: {}".format(abc.shape))
stack
a = torch.arange(1, 10).view(3,3)
b = torch.arange(10, 19).view(3,3)
c = torch.arange(19, 28).view(3,3)
abc = torch.stack([a,b,c], dim=0)
print("stack : \n {}".format(abc))
print("shape: {}".format(abc.shape))
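The key difference between the two: cat joins along an existing dimension, while stack creates a new one. Comparing the resulting shapes makes this concrete:

```python
import torch

a = torch.arange(1, 10).view(3, 3)
b = torch.arange(10, 19).view(3, 3)

# cat extends an existing dim; stack adds a new leading dim
print(torch.cat([a, b], dim=0).shape)    # torch.Size([6, 3])
print(torch.stack([a, b], dim=0).shape)  # torch.Size([2, 3, 3])
```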
tensor reshape
view
: reshape into the desired shape
a = torch.tensor([2,4,5,6,7,8])
b = a.view(2, 3)
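One dimension can be left as `-1` and view will infer it from the element count; note also that view requires a contiguous tensor, while reshape works either way:

```python
import torch

a = torch.arange(12)

# -1 lets view infer one dimension from the number of elements
print(a.view(3, -1).shape)    # torch.Size([3, 4])
print(a.view(-1, 6).shape)    # torch.Size([2, 6])

# reshape works even on non-contiguous tensors
print(a.reshape(4, 3).shape)  # torch.Size([4, 3])
```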
tensor transpose
t
- t(): transposes a 2-D tensor
bt = b.t()
transpose
- torch.transpose(input, dim0, dim1): swaps dim0 and dim1
a = torch.arange(1, 10).view(3,3)
at = torch.transpose(a,0,1)
permute
- torch.permute(input, dims)
- rearranges all dimensions in the order given by the indices
b = torch.arange(1, 25).view(4, 3, 2)
bp = b.permute(2, 0, 1)
print("permute b : \n {}".format(bp))
print(bp.shape)
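One caveat worth noting: permute only rearranges strides, so the result is non-contiguous and view will fail on it until contiguous() copies the data into the new layout. A small sketch:

```python
import torch

b = torch.arange(1, 25).view(4, 3, 2)
bp = b.permute(2, 0, 1)      # shape (2, 4, 3)

print(bp.is_contiguous())    # False - only the strides changed
# bp.view(-1) would raise; contiguous() first copies to the new layout
flat = bp.contiguous().view(-1)
print(flat.shape)            # torch.Size([24])
```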