Scikit-learn

Question

What is scikit-learn's core API, and how do you build a machine learning Pipeline?

Answer

Unified API

All models follow the same interface: fit(), predict(), and score().

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Split the data (X, y are assumed to be features and labels, already loaded)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Train
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate
y_pred = model.predict(X_test)
print(classification_report(y_test, y_pred))
print(model.score(X_test, y_test))  # score() returns mean accuracy for classifiers

Pipeline

Combine preprocessing and the model into a single pipeline to avoid data leakage:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

# Numeric features: impute missing values, then standardize
numeric_transformer = Pipeline([
    ("imputer", SimpleImputer(strategy="median")),
    ("scaler", StandardScaler()),
])

# Categorical features: impute, then one-hot encode
categorical_transformer = Pipeline([
    ("imputer", SimpleImputer(strategy="most_frequent")),
    ("encoder", OneHotEncoder(handle_unknown="ignore")),
])

# Route each column group to its transformer
preprocessor = ColumnTransformer([
    ("num", numeric_transformer, ["age", "salary"]),
    ("cat", categorical_transformer, ["city", "department"]),
])

# Full Pipeline: preprocessing + model
pipe = Pipeline([
    ("preprocess", preprocessor),
    ("classifier", RandomForestClassifier()),
])

pipe.fit(X_train, y_train)
pipe.predict(X_test)

Cross-Validation and Hyperparameter Tuning

from sklearn.model_selection import GridSearchCV, cross_val_score

# 5-fold cross-validation over the entire pipeline
scores = cross_val_score(pipe, X, y, cv=5, scoring="f1_macro")
print(f"Mean F1: {scores.mean():.3f} ± {scores.std():.3f}")

# Grid search; "step__param" names address parameters inside the pipeline
param_grid = {
    "classifier__n_estimators": [100, 200],
    "classifier__max_depth": [10, 20, None],
}
grid = GridSearchCV(pipe, param_grid, cv=5, scoring="f1_macro", n_jobs=-1)
grid.fit(X_train, y_train)
print(grid.best_params_)

Common Interview Questions

Q1: How do you handle overfitting and underfitting?

Answer

| Problem | Symptom | Remedies |
|---|---|---|
| Overfitting | good on training set, poor on test set | more data, regularization, fewer features, Dropout |
| Underfitting | poor on both training and test sets | more features, a more complex model, less regularization |
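The overfitting row above can be observed directly by comparing train/test scores. A minimal sketch on synthetic data (make_classification is just a stand-in dataset), using a decision tree's max_depth as the regularizer:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in dataset, for illustration only
X, y = make_classification(n_samples=500, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Unconstrained tree: memorizes the training set (overfitting symptom)
deep = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

# Limiting max_depth is one regularization knob: it trades training fit
# for better generalization
shallow = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)

print(f"deep    train={deep.score(X_train, y_train):.2f} test={deep.score(X_test, y_test):.2f}")
print(f"shallow train={shallow.score(X_train, y_train):.2f} test={shallow.score(X_test, y_test):.2f}")
```

A large gap between train and test score for the unconstrained tree is the classic overfitting signature.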

Q2: What are the common evaluation metrics?

Answer

  • Classification: Accuracy, Precision, Recall, F1-Score, AUC-ROC
  • Regression: MSE, RMSE, MAE, R²
  • Clustering: Silhouette Score, Calinski-Harabasz
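A quick sketch of computing the classification and regression metrics with sklearn.metrics (the label and prediction arrays below are made up purely for illustration):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, mean_squared_error,
                             mean_absolute_error, r2_score)

# Hypothetical binary classification results
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]
y_prob = [0.2, 0.9, 0.4, 0.1, 0.8, 0.6]  # predicted probability of class 1

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("AUC-ROC  :", roc_auc_score(y_true, y_prob))  # needs scores, not hard labels

# Hypothetical regression results
y_reg_true = [3.0, 5.0, 2.5]
y_reg_pred = [2.5, 5.0, 3.0]
print("MSE:", mean_squared_error(y_reg_true, y_reg_pred))
print("MAE:", mean_absolute_error(y_reg_true, y_reg_pred))
print("R2 :", r2_score(y_reg_true, y_reg_pred))
```

Note that AUC-ROC takes predicted probabilities (or decision scores), while the other classification metrics take hard label predictions.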

Q3: Why use a Pipeline?

Answer

  1. Prevents data leakage: the preprocessing fit runs only on the training set
  2. Cleaner code: a single object encapsulates the whole workflow
  3. Easier tuning: GridSearchCV can search preprocessing and model parameters together
  4. Easier deployment: serialize the entire Pipeline as one object
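Point 1 can be sketched as follows: when the whole Pipeline is passed to cross_val_score, the scaler is refit on each training fold only, so the held-out fold never leaks into its statistics (the dataset here is a synthetic stand-in):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, random_state=0)

# Because the Pipeline is what gets cross-validated, StandardScaler.fit
# runs inside each training split; scaling X once up front would instead
# compute mean/std over data that later serves as the test fold.
pipe = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())
```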

Q4: What is the difference between random forest and GBDT?

Answer

| Property | Random Forest | GBDT (XGBoost/LightGBM) |
|---|---|---|
| Training | independent trees, in parallel | sequential boosting |
| Bias-variance | mainly reduces variance | mainly reduces bias |
| Overfitting risk | lower | higher |
| Speed | parallelizable, faster | sequential, slower |
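A minimal side-by-side sketch, using scikit-learn's built-in GradientBoostingClassifier as the GBDT (the dataset and hyperparameters are illustrative, not a benchmark):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

# Random forest: each tree is trained independently on a bootstrap sample
# (bagging), so n_jobs=-1 can fit trees in parallel
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=7)
rf.fit(X_train, y_train)

# GBDT: trees are built one after another, each fitting the errors of the
# current ensemble, so training cannot be parallelized across trees
gbdt = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, random_state=7)
gbdt.fit(X_train, y_train)

print("RF  :", rf.score(X_test, y_test))
print("GBDT:", gbdt.score(X_test, y_test))
```

XGBoost and LightGBM implement the same boosting idea with faster histogram-based tree building and additional regularization options.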

Related Links