Ensemble

Jane's study note · November 30, 2022

0. Core Concepts and scikit-learn Algorithm API Links

ensemble

Part 1. Classification

1. Preparing the Data

import warnings
warnings.filterwarnings("ignore")
import pandas as pd

# Wisconsin breast cancer data: columns 1-9 are the features, "Class" is the target
data1=pd.read_csv('breast-cancer-wisconsin.csv', encoding='utf-8')
X=data1[data1.columns[1:10]]
y=data1[["Class"]]

# stratify=y keeps the class ratio the same in the train and test splits
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test=train_test_split(X, y, stratify=y, random_state=42)

# Fit the scaler on the training data only, then apply it to both splits
from sklearn.preprocessing import MinMaxScaler
scaler=MinMaxScaler()
scaler.fit(X_train)
X_scaled_train=scaler.transform(X_train)
X_scaled_test=scaler.transform(X_test)
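MinMaxScaler rescales each feature to [0, 1] using the training-set minimum and maximum: (x − min) / (max − min). A minimal NumPy sketch of what fit/transform does, using toy values rather than the actual dataset:

```python
import numpy as np

# Toy training column: "fit" learns the min and max from the training data only
train = np.array([2.0, 4.0, 6.0, 10.0])
col_min, col_max = train.min(), train.max()

# "transform" applies the same training min/max to any data, including the test set
def minmax(x):
    return (x - col_min) / (col_max - col_min)

print(minmax(train))            # → [0, 0.25, 0.5, 1]
print(minmax(np.array([8.0])))  # a "test" value, scaled with the training min/max → [0.75]
```

This is why the scaler is fit only on X_train: the test data must be scaled with the training statistics, or information leaks from the test set into preprocessing.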

2. Hard Voting

from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier

logit_model= LogisticRegression(random_state=42)
rnf_model = RandomForestClassifier(random_state=42)
svm_model = SVC(random_state=42)

voting_hard = VotingClassifier(
    estimators=[('lr', logit_model), ('rf', rnf_model), ('svc', svm_model)], voting='hard')
voting_hard.fit(X_scaled_train, y_train)

VotingClassifier(estimators=[('lr', LogisticRegression(random_state=42)),
                             ('rf', RandomForestClassifier(random_state=42)),
                             ('svc', SVC(random_state=42))])
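With voting='hard', the ensemble simply takes the majority class label across the three fitted classifiers. A minimal sketch of that rule for a single sample, using hypothetical predictions rather than output from the models above:

```python
from collections import Counter

# Hypothetical class labels predicted by three classifiers for one sample
predictions = ['benign', 'malignant', 'malignant']

# Hard voting: the most common label wins. (Counter breaks ties by first-seen
# order; scikit-learn breaks them in favor of the class that sorts first.)
majority = Counter(predictions).most_common(1)[0][0]
print(majority)  # → malignant
```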

from sklearn.metrics import accuracy_score

for clf in (logit_model, rnf_model, svm_model, voting_hard):
    clf.fit(X_scaled_train, y_train)
    y_pred = clf.predict(X_scaled_test)
    print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
    
LogisticRegression 0.9590643274853801
RandomForestClassifier 0.9649122807017544
SVC 0.9649122807017544
VotingClassifier 0.9649122807017544 

from sklearn.metrics import confusion_matrix
log_pred_train=logit_model.predict(X_scaled_train)
log_confusion_train=confusion_matrix(y_train, log_pred_train)
print("Logistic regression confusion matrix (train):\n", log_confusion_train)

log_pred_test=logit_model.predict(X_scaled_test)
log_confusion_test=confusion_matrix(y_test, log_pred_test)
print("Logistic regression confusion matrix (test):\n", log_confusion_test)

Logistic regression confusion matrix (train):
 [[328   5]
 [  9 170]]
Logistic regression confusion matrix (test):
 [[106   5]
 [  2  58]]
 
svm_pred_train=svm_model.predict(X_scaled_train)
svm_confusion_train=confusion_matrix(y_train, svm_pred_train)
print("SVM confusion matrix (train):\n", svm_confusion_train)

svm_pred_test=svm_model.predict(X_scaled_test)
svm_confusion_test=confusion_matrix(y_test, svm_pred_test)
print("SVM confusion matrix (test):\n", svm_confusion_test)

SVM confusion matrix (train):
 [[329   4]
 [  4 175]]
SVM confusion matrix (test):
 [[106   5]
 [  1  59]]
 
rnd_pred_train=rnf_model.predict(X_scaled_train)
rnd_confusion_train=confusion_matrix(y_train, rnd_pred_train)
print("Random forest confusion matrix (train):\n", rnd_confusion_train)

rnd_pred_test=rnf_model.predict(X_scaled_test)
rnd_confusion_test=confusion_matrix(y_test, rnd_pred_test)
print("Random forest confusion matrix (test):\n", rnd_confusion_test)

Random forest confusion matrix (train):
 [[333   0]
 [  0 179]]
Random forest confusion matrix (test):
 [[106   5]
 [  1  59]]

Note that the random forest classifies the training data perfectly (zero training errors), a typical sign of a low-bias model memorizing the training set; the test matrix is the one that reflects generalization.
 
voting_pred_train=voting_hard.predict(X_scaled_train)
voting_confusion_train=confusion_matrix(y_train, voting_pred_train)
print("Voting classifier confusion matrix (train):\n", voting_confusion_train)

voting_pred_test=voting_hard.predict(X_scaled_test)
voting_confusion_test=confusion_matrix(y_test, voting_pred_test)
print("Voting classifier confusion matrix (test):\n", voting_confusion_test)

Voting classifier confusion matrix (train):
 [[329   4]
 [  4 175]]
Voting classifier confusion matrix (test):
 [[106   5]
 [  1  59]]

3. Soft Voting

logit_model = LogisticRegression(random_state=42)
rnf_model = RandomForestClassifier(random_state=42)
# probability=True makes SVC expose predict_proba, which soft voting requires
svm_model = SVC(probability=True, random_state=42)

voting_soft = VotingClassifier(
    estimators=[('lr', logit_model), ('rf', rnf_model), ('svc', svm_model)], voting='soft')
voting_soft.fit(X_scaled_train, y_train)

VotingClassifier(estimators=[('lr', LogisticRegression(random_state=42)),
                             ('rf', RandomForestClassifier(random_state=42)),
                             ('svc', SVC(probability=True, random_state=42))],
                 voting='soft')
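With voting='soft', the ensemble averages the class probabilities from each model's predict_proba and predicts the class with the highest mean probability. A sketch with made-up probability rows for one sample (not output from the models above):

```python
import numpy as np

# Hypothetical [P(class 2), P(class 4)] from the three models for one sample
proba_lr  = np.array([0.70, 0.30])
proba_rf  = np.array([0.40, 0.60])
proba_svc = np.array([0.45, 0.55])

# Soft voting: average the probabilities, then take the argmax
mean_proba = np.mean([proba_lr, proba_rf, proba_svc], axis=0)
print(mean_proba)             # → [0.5167, 0.4833] (approximately)
print(np.argmax(mean_proba))  # → 0
```

In this toy case the first class wins even though it loses two of three hard votes: soft voting rewards confident predictions, which is why it can differ from (and sometimes beat) hard voting.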
                 
from sklearn.metrics import accuracy_score

for clf in (logit_model, rnf_model, svm_model, voting_soft):
    clf.fit(X_scaled_train, y_train)
    y_pred = clf.predict(X_scaled_test)
    print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
    
LogisticRegression 0.9590643274853801
RandomForestClassifier 0.9649122807017544
SVC 0.9649122807017544
VotingClassifier 0.9649122807017544

voting_pred_train=voting_soft.predict(X_scaled_train)
voting_confusion_train=confusion_matrix(y_train, voting_pred_train)
print("Voting classifier confusion matrix (train):\n", voting_confusion_train)

voting_pred_test=voting_soft.predict(X_scaled_test)
voting_confusion_test=confusion_matrix(y_test, voting_pred_test)
print("Voting classifier confusion matrix (test):\n", voting_confusion_test)

Voting classifier confusion matrix (train):
 [[330   3]
 [  3 176]]
Voting classifier confusion matrix (test):
 [[106   5]
 [  1  59]]

Soft voting fits the training data slightly better than hard voting (6 errors vs. 8), while the test confusion matrix is unchanged.

Part 2. Regression

1. Preparing the Data

# House price data: columns 1-4 are the features, "house_value" is the target
data2=pd.read_csv('house_price.csv', encoding='utf-8')
X=data2[data2.columns[1:5]]
y=data2[["house_value"]]

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test=train_test_split(X, y, random_state=42)

# Refit the scaler on the new training data
from sklearn.preprocessing import MinMaxScaler
scaler=MinMaxScaler()
scaler.fit(X_train)
X_scaled_train=scaler.transform(X_train)
X_scaled_test=scaler.transform(X_test)

2. Fitting the Model

from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import VotingRegressor

linear_model= LinearRegression()
rnf_model = RandomForestRegressor(random_state=42)

voting_regressor = VotingRegressor(estimators=[('lr', linear_model), ('rf', rnf_model)])
voting_regressor.fit(X_scaled_train, y_train)

VotingRegressor(estimators=[('lr', LinearRegression()),
                            ('rf', RandomForestRegressor(random_state=42))])
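VotingRegressor has no hard/soft distinction: it simply averages the member models' predictions (optionally with weights). A sketch with hypothetical predictions from the two regressors:

```python
import numpy as np

# Hypothetical house-value predictions from the two regressors for three samples
pred_linear = np.array([210000.0, 180000.0, 250000.0])
pred_forest = np.array([190000.0, 200000.0, 240000.0])

# Unweighted VotingRegressor output: the element-wise mean of the predictions
pred_ensemble = np.mean([pred_linear, pred_forest], axis=0)
print(pred_ensemble)  # → [200000, 190000, 245000]
```

Averaging tends to cancel out the individual models' uncorrelated errors, which is the main reason the ensemble can outperform either member alone.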
                            
pred_train=voting_regressor.predict(X_scaled_train)
voting_regressor.score(X_scaled_train, y_train)  # R² on the training data

0.7962532705428835

pred_test=voting_regressor.predict(X_scaled_test)
voting_regressor.score(X_scaled_test, y_test)  # R² on the test data

0.5936371957936408

# RMSE (Root Mean Squared Error)
import numpy as np
from sklearn.metrics import mean_squared_error
MSE_train = mean_squared_error(y_train, pred_train)
MSE_test = mean_squared_error(y_test, pred_test)
print("Train RMSE:", np.sqrt(MSE_train))
print("Test  RMSE:", np.sqrt(MSE_test))

Train RMSE: 43082.050654857834
Test  RMSE: 60942.385243534896
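As a sanity check, RMSE can also be computed from the residuals with NumPy alone: RMSE = sqrt(mean((y − ŷ)²)). A toy example with made-up values, not the house-price data:

```python
import numpy as np

# Toy targets and predictions
y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.0, 5.0, 9.0])

# Squaring the residuals before averaging penalizes large errors more heavily,
# which is why RMSE is sensitive to outliers
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(rmse)  # → sqrt((1 + 0 + 4) / 3) ≈ 1.291
```

Because RMSE is in the same units as the target, the values above mean the ensemble's predictions are off by roughly 43,000 on the training data and 61,000 on the test data, on average.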
