
[Machine Learning] Gradient Descent in Practice I - Feature Scaling

Feature scaling: it makes gradient descent run much faster and converge in far fewer iterations.
 
Bad case: (figure: cost-function contours with unscaled features)
Good case: (figure: cost-function contours after feature scaling)

We can speed up gradient descent by having each of our input values in roughly the same range. This is because θ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven.
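A small numerical sketch of why this happens (all values below are made up, and the intercept term is omitted for brevity): with one feature spanning roughly 500–2000 and another spanning 1–5, the gradient components of the squared-error cost differ by a factor of hundreds, so any single learning rate that is safe for the large-range direction barely moves θ along the small-range one.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 50
size = rng.uniform(500, 2000, m)   # feature with a large range
beds = rng.uniform(1, 5, m)        # feature with a small range
y = 0.3 * size + 40 * beds         # made-up target values

X = np.column_stack([size, beds])
theta = np.zeros(2)

# Gradient of the squared-error cost at theta = 0:
grad = X.T @ (X @ theta - y) / m
print(grad)   # the component for `size` is hundreds of times larger than the one for `beds`
```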

The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same. Ideally:

−1 ≤ x_i ≤ 1

or

−0.5 ≤ x_i ≤ 0.5

 

These aren't exact requirements; we are only trying to speed things up. The goal is to get all input variables into roughly one of these ranges, give or take a few.

Two techniques to help with this are feature scaling and mean normalization. Feature scaling involves dividing the input values by the range (i.e., the maximum value minus the minimum value) of the input variable, resulting in a new range of just 1. Mean normalization involves subtracting the average value of an input variable from each of its values, resulting in a new average of zero for that variable. To implement both of these techniques, adjust your input values as shown in this formula:

x_i := (x_i − μ_i) / s_i

where μ_i is the average of all the values for feature i and s_i is the range of values (max − min) for that feature.
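The formula above takes only a few lines of NumPy. The sketch below is a minimal, hypothetical helper (the function name `scale_features` and the sample size/bedroom values are made up for illustration); it returns the per-feature means and ranges so the same transform can be reapplied to new examples at prediction time.

```python
import numpy as np

def scale_features(X):
    """Apply mean normalization and feature scaling to each column of X."""
    mu = X.mean(axis=0)                   # average value of each feature
    s = X.max(axis=0) - X.min(axis=0)     # range (max - min) of each feature
    s = np.where(s == 0, 1.0, s)          # avoid dividing by zero for constant features
    return (X - mu) / s, mu, s

# Made-up data: each row is one example, columns are size (sq ft) and bedrooms.
X = np.array([[2104.0, 3.0],
              [1600.0, 3.0],
              [2400.0, 3.0],
              [1416.0, 2.0]])

X_scaled, mu, s = scale_features(X)
print(X_scaled)   # each column now has mean 0 and spans a total width of 1
```

Keep the `mu` and `s` computed on the training set and apply the same (x − μ) / s adjustment to any new example before making a prediction.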

 


Source: https://www.cnblogs.com/Answer1215/p/13546179.html
