
Regularization

Posted: 2021-05-13 13:50:36

In mathematics, statistics, finance,[1] and computer science (particularly in machine learning and inverse problems), regularization is the process of adding information in order to solve an ill-posed problem or to prevent overfitting.[2]

Regularization can be applied to objective functions in ill-posed optimization problems. The regularization term, or penalty, imposes a cost on the objective function that makes the optimal solution unique.
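As a minimal sketch of this point (synthetic data, not from the article): when there are more features than samples, the least-squares normal equations are singular and the problem is ill-posed, but adding an L2 penalty term makes the system nonsingular, so the regularized optimum is unique.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 10))   # fewer samples (5) than features (10)
y = rng.normal(size=5)

# With p > n the normal equations X^T X w = X^T y are singular:
# infinitely many w fit the data exactly, so the problem is ill-posed.
print(np.linalg.matrix_rank(X.T @ X))  # at most 5, but the matrix is 10x10

# Adding the penalty alpha * I makes X^T X + alpha*I full rank,
# so the regularized solution is unique.
alpha = 0.5
w = np.linalg.solve(X.T @ X + alpha * np.eye(10), X.T @ y)
print(w.shape)  # (10,)
```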

L2 Ridge Regression

At the time lasso was introduced, ridge regression was the most popular technique for improving prediction accuracy. Ridge regression reduces prediction error by shrinking the sum of the squared regression coefficients to be less than a fixed value, which reduces overfitting; however, it performs no covariate selection and therefore does not help make the model more interpretable.
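A small sketch of this behavior (the data and penalty strength `alpha` are illustrative assumptions, not from the article): the closed-form ridge solution shrinks the coefficient vector but leaves every coefficient nonzero, which is exactly the "no covariate selection" point above.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_w = np.array([2.0, -1.0, 0.0, 0.0, 3.0])  # two irrelevant features
y = X @ true_w + 0.1 * rng.normal(size=50)

alpha = 10.0
# Ordinary least squares vs the ridge closed form
# w_ridge = (X^T X + alpha I)^{-1} X^T y
w_ols = np.linalg.solve(X.T @ X, X.T @ y)
w_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(5), X.T @ y)

# The penalty shrinks the coefficient norm...
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))  # True
# ...but performs no selection: every coefficient stays nonzero.
print(np.all(w_ridge != 0))  # True
```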

L1 Lasso Regression

Lasso achieves both of these goals by forcing the sum of the absolute value of the regression coefficients to be less than a fixed value, which forces certain coefficients to zero, effectively excluding them. This idea is similar to ridge regression, which only shrinks the size of the coefficients, without setting any of them to zero.
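To make the contrast concrete, here is a hedged sketch of lasso fitted by coordinate descent with soft-thresholding (a standard solver for the L1 penalty; the dataset and `alpha` are illustrative assumptions, not from the article). The soft-threshold update drives the coefficients of irrelevant features to exactly zero, which is the selection behavior described above.

```python
import numpy as np

def soft_threshold(z, t):
    # S(z, t) = sign(z) * max(|z| - t, 0): the proximal map of the L1 penalty
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xw||^2 + alpha * ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j's current contribution removed
            r = y - X @ w + X[:, j] * w[j]
            w[j] = soft_threshold(X[:, j] @ r, n * alpha) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_w = np.array([2.0, -1.0, 0.0, 0.0, 3.0])  # two irrelevant features
y = X @ true_w + 0.1 * rng.normal(size=50)

w = lasso_cd(X, y, alpha=0.1)
print(w)  # the two irrelevant coefficients are set to exactly 0
```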

Regularization can address two types of problems:

  • Ill-posed problems (Lasso Regression)
  • Preventing overfitting (Ridge Regression)


Source: https://www.cnblogs.com/qianxinn/p/14763871.html
