
01 Linear Regression with One Variable

Date: 2019-06-02 19:58:18

Symbols:

  • m = Number of training examples
  • x’s = “input” variables / features
  • y’s = “output” variables / “target” variables
  • (x, y) = one training example
  • \((x^{(i)}, y^{(i)})\) = the \(i^{th}\) training example
  • h(x) = hypothesis function
  • \(h_\theta(x) = \theta_0 + \theta_1x\), shorthand: \(h(x)\)
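As a minimal sketch of the hypothesis \(h_\theta(x) = \theta_0 + \theta_1x\) (the parameter values below are illustrative, not from the lecture):

```python
def h(x, theta0=1.0, theta1=2.0):
    """Hypothesis for one-variable linear regression: h(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

print(h(3.0))  # → 7.0  (since 1 + 2 * 3 = 7)
```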

Cost Function

  • squared cost function
    \[J(\theta_0, \theta_1) = \dfrac {1}{2m} \displaystyle \sum _{i=1}^m \left ( \hat{y}_{i}- y_{i} \right)^2 = \dfrac {1}{2m} \displaystyle \sum _{i=1}^m \left (h_\theta (x_{i}) - y_{i} \right)^2\]
  • Goal: \(\displaystyle\min_{\theta_0, \theta_1} J(\theta_0, \theta_1)\)
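The squared-error cost above can be sketched directly in NumPy (the toy data below is illustrative; it lies exactly on \(y = 1 + 2x\), so the cost at \(\theta_0 = 1, \theta_1 = 2\) is zero):

```python
import numpy as np

def compute_cost(theta0, theta1, x, y):
    """Squared-error cost J(theta0, theta1) = (1/2m) * sum((h(x_i) - y_i)^2)."""
    m = len(y)                           # number of training examples
    predictions = theta0 + theta1 * x    # h_theta(x) for every example
    return np.sum((predictions - y) ** 2) / (2 * m)

# Toy data on the line y = 1 + 2x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
print(compute_cost(1.0, 2.0, x, y))  # → 0.0 (perfect fit)
```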

Gradient descent

repeat until convergence {

\(\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)\) (for j = 0 and j = 1)
}

\(\theta_0\) and \(\theta_1\) must be updated simultaneously; if \(\theta_0\) is updated first, its new value would affect the update of the remaining parameter.

\(\begin{align*} \text{repeat until convergence: } \lbrace & \newline \theta_0 := & \theta_0 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}(h_\theta(x_{i}) - y_{i}) \newline \theta_1 := & \theta_1 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}\left((h_\theta(x_{i}) - y_{i}) x_{i}\right) \newline \rbrace& \end{align*}\)
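The update rule above can be sketched as follows, computing both partial derivatives before touching either parameter so that the update is simultaneous (the toy data, learning rate, and iteration count are illustrative choices, not from the lecture):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, iterations=1000):
    """Batch gradient descent for h(x) = theta0 + theta1 * x."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iterations):
        error = theta0 + theta1 * x - y          # h_theta(x_i) - y_i for all i
        # Compute BOTH gradients from the current parameters first,
        # then update: this is the simultaneous update the notes require.
        grad0 = np.sum(error) / m
        grad1 = np.sum(error * x) / m
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# Toy data on the line y = 1 + 2x; gradient descent should recover (1, 2).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
print(gradient_descent(x, y))
```

Because \(J\) is convex, any sufficiently small learning rate \(\alpha\) converges to the global minimum; too large an \(\alpha\) makes the iterates diverge.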


Source: https://www.cnblogs.com/QQ-1615160629/p/01-Linear-Regression-with-One-Variable.html
