XGBoost Explained
XGBoost is an ensemble learning method that uses decision trees (CART trees) as base learners.
XGBoost's objective:
\[\large{Loss=\sum_{i=1}^{n}l(y_i,\hat{y}_i)+\sum_{k=1}^{K}\Omega(f_k),\quad f_k\in F}
\]
\[\large{\Omega(f_k)=\gamma T+\frac{1}{2}\lambda\sum_{j=1}^{T}w_j^2}
\]
Here \(\hat{y}_i\) is the model's prediction for sample \(i\), \(K\) is the number of trees, and \(\Omega(f_k)\) is the regularization term measuring the complexity of the \(k\)-th tree: \(T\) is its number of leaves and \(w_j\) the weight of leaf \(j\).
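To make the penalty concrete, here is a minimal Python sketch (the function `tree_complexity` and the parameter names `gamma` and `lam` are illustrative assumptions, not any library's API):

```python
import numpy as np

def tree_complexity(leaf_weights, gamma=1.0, lam=1.0):
    """Omega(f) = gamma * T + 0.5 * lambda * sum_j w_j^2,
    where T is the number of leaves of the tree."""
    T = len(leaf_weights)
    return gamma * T + 0.5 * lam * float(np.sum(np.square(leaf_weights)))

# Example: a tree with 3 leaves
print(tree_complexity(np.array([0.4, -0.2, 0.1])))  # 3 + 0.5 * 0.21 = 3.105
```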
Trees are added one at a time: at iteration \(t\) the prediction is \(\hat{y}_i^{(t)}=\hat{y}_i^{(t-1)}+f_t(x_i)\), so
\[\large{\begin{align}
Loss^{(t)} &= \sum_{i=1}^{n}l\left(y_i,\hat{y}_i^{(t)}\right)+\sum_{k=1}^{t}\Omega(f_k) \\
&= \sum_{i=1}^{n}l\left(y_i,\hat{y}_i^{(t-1)}+f_t(x_i)\right)+\Omega(f_t)+const
\end{align}}
\]
The second-order Taylor expansion is
\[\large{f(x+\Delta x)\approx f(x)+f'(x)\Delta x+\frac{1}{2}f''(x)\Delta x^2}
\]
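A quick numeric sanity check of the expansion (illustrative only, using \(f=\exp\) so that the first and second derivatives are known exactly):

```python
import numpy as np

# f = exp is its own first and second derivative.
x, dx = 1.0, 0.1
approx = np.exp(x) + np.exp(x) * dx + 0.5 * np.exp(x) * dx**2
print(approx, np.exp(x + dx))  # ~3.0037 vs ~3.0042: close for small dx
```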
Applying this to the loss, with \(f_t(x_i)\) playing the role of \(\Delta x\), gives
\[\large{Loss\approx\sum_{i=1}^{n}\left[l\left(y_i,\hat{y}_i^{(t-1)}\right)+g_if_t(x_i)+\frac{1}{2}h_if_t^2(x_i)\right]+\Omega(f_t)+const}
\]
where
\[\large{g_i=\partial_{\hat{y}_i^{(t-1)}}l\left(y_i,\hat{y}_i^{(t-1)}\right),\quad h_i=\partial^2_{\hat{y}_i^{(t-1)}}l\left(y_i,\hat{y}_i^{(t-1)}\right)}
\]
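For concreteness, here is a sketch of \(g_i\) and \(h_i\) for two common losses (the function names are illustrative, not xgboost's API). With squared error \(l=\frac{1}{2}(y-\hat{y})^2\) the hessian is constant; with logistic loss on raw scores it is \(p(1-p)\):

```python
import numpy as np

def grad_hess_squared_error(y, y_pred):
    """l(y, yhat) = 0.5 * (y - yhat)^2  =>  g = yhat - y, h = 1."""
    return y_pred - y, np.ones_like(y)

def grad_hess_logistic(y, y_pred):
    """Logistic loss on raw scores y_pred, labels in {0, 1}:
    g = sigmoid(score) - y, h = sigmoid(score) * (1 - sigmoid(score))."""
    p = 1.0 / (1.0 + np.exp(-y_pred))
    return p - y, p * (1.0 - p)
```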
Removing the constant terms, the objective for this iteration is
\[\large{Object=\sum_{i=1}^{n}\left[g_if_t(x_i)+\frac{1}{2}h_if_t^2(x_i)\right]+\Omega(f_t)}
\]
Define \(I_j\) as the set of indices of the samples that land in leaf \(j\), i.e. \(I_j=\{i\mid q(x_i)=j\}\), where \(q\) maps a sample to its leaf so that \(f_t(x_i)=w_{q(x_i)}\). Substituting the regularization term:
\[\large{\begin{align}
Object &= \sum_{i=1}^{n}\left[g_if_t(x_i)+\frac{1}{2}h_if_t^2(x_i)\right]+\Omega(f_t) \\
&= \sum_{i=1}^{n}\left[g_if_t(x_i)+\frac{1}{2}h_if_t^2(x_i)\right]+\gamma T+\frac{1}{2}\lambda\sum_{j=1}^{T}w_j^2 \\
&= \sum_{j=1}^{T}\left[\Big(\sum_{i\in I_j}g_i\Big)w_j+\frac{1}{2}\Big(\sum_{i\in I_j}h_i+\lambda\Big)w_j^2\right]+\gamma T
\end{align}}
\]
For a fixed tree structure \(q\), the objective is a quadratic function of each leaf weight \(w_j\); minimizing it in closed form gives
\[\large{w_j^*=-\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}}
\]
\[\large{Object^{(t)}(q)=-\frac{1}{2}\sum_{j=1}^{T}\frac{\left(\sum_{i\in I_j}g_i\right)^2}{\sum_{i\in I_j}h_i+\lambda}+\gamma T}
\]
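These closed forms translate directly to code; a minimal sketch (all names are illustrative):

```python
import numpy as np

def leaf_weight(g, h, lam=1.0):
    """w_j* = -G_j / (H_j + lambda), with G_j, H_j the sums of
    gradients and hessians of the samples in leaf j."""
    return -np.sum(g) / (np.sum(h) + lam)

def leaf_score(g, h, lam=1.0):
    """This leaf's term -0.5 * G_j^2 / (H_j + lambda) in Object^{(t)}(q),
    excluding the shared gamma * T penalty."""
    return -0.5 * np.sum(g) ** 2 / (np.sum(h) + lam)
```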
Suppose the sample set at the current node is \(I\) and a split partitions it into \(I_L\) and \(I_R\), i.e. \(I=I_L\cup I_R\). The gain from this split is
\[\large{\begin{align}
L_{split} &= Object_{I}-\left(Object_{I_L}+Object_{I_R}\right) \\
&= \frac{1}{2}\left[\frac{\left(\sum_{i\in I_L}g_i\right)^2}{\sum_{i\in I_L}h_i+\lambda}+\frac{\left(\sum_{i\in I_R}g_i\right)^2}{\sum_{i\in I_R}h_i+\lambda}-\frac{\left(\sum_{i\in I}g_i\right)^2}{\sum_{i\in I}h_i+\lambda}\right]-\gamma
\end{align}}
\]
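A sketch of the gain computation, assuming `g` and `h` are arrays over the node's samples and `left_mask` marks \(I_L\) (all names illustrative):

```python
import numpy as np

def split_gain(g, h, left_mask, lam=1.0, gamma=1.0):
    """L_split as derived above; a positive value means the split
    improves the objective by more than the gamma penalty."""
    def term(gs, hs):
        return np.sum(gs) ** 2 / (np.sum(hs) + lam)
    return 0.5 * (term(g[left_mask], h[left_mask])
                  + term(g[~left_mask], h[~left_mask])
                  - term(g, h)) - gamma
```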
Repeatedly taking the split with the largest gain grows the tree greedily, as in the sketch below.
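Putting the pieces together, here is a sketch of exact greedy split finding over a single feature: sort by feature value, sweep prefix sums of \(g\) and \(h\), and keep the threshold with the largest gain. This illustrates the derivation, not xgboost's actual implementation:

```python
import numpy as np

def best_split(x, g, h, lam=1.0, gamma=1.0):
    order = np.argsort(x)          # candidate thresholds in feature order
    G, H = g.sum(), h.sum()        # totals for the parent node
    GL = HL = 0.0                  # running sums for the left child
    best_gain, best_thresh = -np.inf, None
    for i in range(len(x) - 1):
        GL += g[order[i]]
        HL += h[order[i]]
        GR, HR = G - GL, H - HL
        gain = 0.5 * (GL ** 2 / (HL + lam) + GR ** 2 / (HR + lam)
                      - G ** 2 / (H + lam)) - gamma
        if gain > best_gain:
            best_gain = gain
            best_thresh = 0.5 * (x[order[i]] + x[order[i + 1]])
    return best_gain, best_thresh
```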
Source: https://www.cnblogs.com/wa007/p/13833002.html