Lec9: Mathematics for Artificial Intelligence_Matrix
01_Matrix
1) Matrix?
- Matrix: a 2-D array whose elements are vectors.
![](https://velog.velcdn.com/images/ghkd1330/post/46cab36f-2979-4224-9f82-e46a74d21503/image.png)
```python
X = np.array([[1, -2, 3],
              [7, 5, 0],
              [-2, -1, 2]])
```
- A matrix is indexed by row and column.
![](https://velog.velcdn.com/images/ghkd1330/post/0b9b03ce-5839-4b83-ad29-00f2c1f3de2c/image.png)
![](https://velog.velcdn.com/images/ghkd1330/post/ee8f7d40-990f-4a87-a7b6-5a5f0adc01fa/image.png)
- Fixing a particular row (or column) of a matrix gives a row (or column) vector.
![](https://velog.velcdn.com/images/ghkd1330/post/208c55c9-aad8-4567-8cb0-45d74a708926/image.png)
- Transpose Matrix
![](https://velog.velcdn.com/images/ghkd1330/post/fc68b9e0-d260-4c02-882b-8460d0b66f73/image.png)
- While a vector represents a single point in space, a matrix represents a set of points.
![](https://velog.velcdn.com/images/ghkd1330/post/51517dab-c1c8-44b4-b9aa-49d93e67c88d/image.png)
![](https://velog.velcdn.com/images/ghkd1330/post/1a7df712-c62b-4447-b21b-2026de646bfa/image.png)
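The indexing and transpose operations above can be sketched in NumPy (reusing the matrix values from the example in this post):

```python
import numpy as np

# A 3x3 matrix: each row vector can be read as one point in 3-D space.
X = np.array([[1, -2, 3],
              [7, 5, 0],
              [-2, -1, 2]])

row = X[0]     # first row vector: [1, -2, 3]
col = X[:, 0]  # first column vector: [1, 7, -2]
Xt = X.T       # transpose: rows and columns swapped
```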
2) Matrix Addition, Subtraction, Component Product, Scalar Product
![](https://velog.velcdn.com/images/ghkd1330/post/55e88a2f-b863-4698-8ded-bbde43446883/image.png)
![](https://velog.velcdn.com/images/ghkd1330/post/505b7bbb-f299-4c34-b0ad-96548df18c23/image.png)
![](https://velog.velcdn.com/images/ghkd1330/post/b8fbaa5a-c454-49fd-8707-bdac44565352/image.png)
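These entrywise operations work the same way as for vectors; a minimal NumPy sketch with hypothetical 2x2 matrices:

```python
import numpy as np

X = np.array([[1, 2], [3, 4]])
Y = np.array([[5, 6], [7, 8]])

add = X + Y  # entrywise addition
sub = X - Y  # entrywise subtraction
had = X * Y  # component-wise (Hadamard) product
sca = 2 * X  # scalar product
```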
3) Matrix Multiplication
![](https://velog.velcdn.com/images/ghkd1330/post/a64827e9-03c5-4241-b9b2-fcd674a478fa/image.png)
```python
X = np.array([[1, -2, 3],
              [7, 5, 0],
              [-2, -1, 2]])
Y = np.array([[0, 1],
              [1, -1],
              [-2, 1]])
X @ Y
```
```
array([[-8,  6],
       [ 5,  2],
       [-5,  1]])
```
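The definition behind `X @ Y` — the (i, j) entry is the dot product of the i-th row of X with the j-th column of Y — can be spelled out with explicit loops:

```python
import numpy as np

X = np.array([[1, -2, 3],
              [7, 5, 0],
              [-2, -1, 2]])
Y = np.array([[0, 1],
              [1, -1],
              [-2, 1]])

# (XY)_{ij} = sum_k X_{ik} * Y_{kj}
Z = np.zeros((X.shape[0], Y.shape[1]), dtype=int)
for i in range(X.shape[0]):
    for j in range(Y.shape[1]):
        Z[i, j] = X[i, :] @ Y[:, j]  # row i of X dotted with column j of Y
```

Z matches `X @ Y` exactly.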
4) Dot Product of Matrix
![](https://velog.velcdn.com/images/ghkd1330/post/470ee5ef-d229-4c6d-9752-8c9253dfe935/image.png)
```python
X = np.array([[1, -2, 3],
              [7, 5, 0],
              [-2, -1, 2]])
Y = np.array([[0, 1, -1],
              [1, -1, 0]])
np.inner(X, Y)
```
```
array([[-5,  3],
       [ 5,  2],
       [-3, -1]])
```
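Note that `np.inner` is not ordinary matrix multiplication: it takes row-by-row dot products, which is equivalent to multiplying by the transpose of the second matrix:

```python
import numpy as np

X = np.array([[1, -2, 3],
              [7, 5, 0],
              [-2, -1, 2]])
Y = np.array([[0, 1, -1],
              [1, -1, 0]])

A = np.inner(X, Y)  # A[i, j] = X[i] . Y[j] (dot product of rows)
B = X @ Y.T         # the same result via matrix multiplication
```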
5) Matrix?#2
- A matrix can be understood as an operator acting on a vector space.
- Through matrix multiplication, a vector can be mapped into a space of a different dimension.
- Matrix multiplication can also be used to extract patterns from data and to compress data.
![](https://velog.velcdn.com/images/ghkd1330/post/36fb5d06-2bf3-4e32-ba0f-a9c28239590a/image.png)
![](https://velog.velcdn.com/images/ghkd1330/post/e5145f91-e2ba-4d86-811f-b312627a3566/image.png)
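A small sketch of this idea: a hypothetical 2x3 matrix sends points from 3-dimensional space to 2-dimensional space:

```python
import numpy as np

# A 2x3 matrix maps vectors from R^3 to R^2 (a toy dimensionality reduction)
A = np.array([[1, 0, 1],
              [0, 1, -1]])
x = np.array([2, 3, 4])  # a point in R^3
z = A @ x                # its image in R^2
```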
6) Inverse Matrix
- Inverse matrix: the matrix that reverses the operation of a given matrix.
- It can be computed only when the matrix is square (m = n) and its determinant is nonzero.
![](https://velog.velcdn.com/images/ghkd1330/post/f5c149e3-b8f7-424a-97a8-ff42082acca7/image.png)
```python
X = np.array([[1, -2, 3],
              [7, 5, 0],
              [-2, -1, 2]])
np.linalg.inv(X)      # inverse of X
X @ np.linalg.inv(X)  # approximately the identity matrix
```
![](https://velog.velcdn.com/images/ghkd1330/post/088c29c1-e8b6-4089-b3ec-4d1033481717/image.png)
![](https://velog.velcdn.com/images/ghkd1330/post/9141d1cc-001c-4ac7-b18b-2804eda09693/image.png)
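The two conditions can be checked numerically; for the X above the determinant is nonzero, so `X @ inv(X)` comes out as the identity up to floating-point error:

```python
import numpy as np

X = np.array([[1, -2, 3],
              [7, 5, 0],
              [-2, -1, 2]])

det = np.linalg.det(X)      # nonzero, so the inverse exists
X_inv = np.linalg.inv(X)

# X @ X_inv should be numerically close to the 3x3 identity matrix
close_to_identity = np.allclose(X @ X_inv, np.eye(3))
```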
- If the inverse matrix cannot be computed, use the pseudo-inverse (Moore-Penrose inverse) instead.
![](https://velog.velcdn.com/images/ghkd1330/post/69180c4e-496d-4f53-a456-ba3cee0068f7/image.png)
```python
Y = np.array([[0, 1],
              [1, -1],
              [-2, 1]])
np.linalg.pinv(Y)      # pseudo-inverse of Y
np.linalg.pinv(Y) @ Y  # approximately the identity matrix
```
![](https://velog.velcdn.com/images/ghkd1330/post/3f06663d-49f8-44c2-8fd8-4af3253b676f/image.png)
![](https://velog.velcdn.com/images/ghkd1330/post/3c49e75f-0160-403c-945f-72a94a37505d/image.png)
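For a tall matrix like Y (more rows than columns, full column rank), the pseudo-inverse acts as a left inverse only: `pinv(Y) @ Y` is the identity, but `Y @ pinv(Y)` is not:

```python
import numpy as np

Y = np.array([[0, 1],
              [1, -1],
              [-2, 1]])
Y_pinv = np.linalg.pinv(Y)

left = Y_pinv @ Y   # ~ 2x2 identity (left inverse)
right = Y @ Y_pinv  # 3x3 projection onto Y's column space, not the identity
```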
7) Apply#1 : Solve System of Equations
- Let us solve a system of linear equations using np.linalg.pinv.
- A system of equations can be expressed in matrix form as Ax = b.
![](https://velog.velcdn.com/images/ghkd1330/post/116db34d-c07b-4f76-bf98-9ad92f3bfd55/image.png)
![](https://velog.velcdn.com/images/ghkd1330/post/76414c31-457e-4493-9eb9-f94f7ac68af0/image.png)
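A minimal sketch with a hypothetical right-hand side b, reusing the invertible matrix from earlier as A; since A is square and invertible, the pinv solution agrees with `np.linalg.solve`:

```python
import numpy as np

A = np.array([[1, -2, 3],
              [7, 5, 0],
              [-2, -1, 2]])
b = np.array([4, 12, 1])  # hypothetical right-hand side

x = np.linalg.pinv(A) @ b  # x = A^+ b solves Ax = b
```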
8) Apply#2 : Linear Regression Analysis
- Using np.linalg.pinv, we can find the linear regression line that models the data with a linear model.
![](https://velog.velcdn.com/images/ghkd1330/post/2c0cc5b3-17df-4c0e-80a1-ea5a8874ee1f/image.png)
![](https://velog.velcdn.com/images/ghkd1330/post/99cadec7-8db4-4db9-aecf-64347e7444c0/image.png)
![](https://velog.velcdn.com/images/ghkd1330/post/7f0da065-fd7b-4c41-87cc-35ba354424b6/image.png)
- This gives the same result as sklearn's LinearRegression.
```python
# Using sklearn
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X, y)
y_test = model.predict(x_test)

# Using the Moore-Penrose pseudo-inverse directly
X_ = np.array([np.append(x, [1]) for x in X])  # append 1 to each row for the intercept
beta = np.linalg.pinv(X_) @ y                  # least-squares coefficients
y_test = np.append(x_test, [1]) @ beta
```
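A self-contained check of the pinv approach on hypothetical noiseless synthetic data (the coefficients and values below are made up for illustration): the least-squares fit should recover exactly the coefficients used to generate y:

```python
import numpy as np

# Synthetic data: y = 2*x1 - x2 + 3 with no noise
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = 2 * X[:, 0] - X[:, 1] + 3

# Append a constant 1 to each row so beta's last entry is the intercept
X_ = np.hstack([X, np.ones((X.shape[0], 1))])
beta = np.linalg.pinv(X_) @ y  # least-squares solution

x_test = np.array([1.0, 1.0])
y_pred = np.append(x_test, 1.0) @ beta  # prediction at the test point
```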