
MachineLearning


Notes on Coursera Machine Learning by Andrew Ng

Week1-2014/03/07-hphp
Comments, discussion, and reposts are welcome; please credit the original URL when reposting.

Machine Learning Introduction

  • Many applications
    • Amazon and Netflix recommendation systems
    • Arthur Samuel wrote a checkers program that played many games against itself and learned which board positions tend to lead to wins.
  • Popular
    • There is currently a large demand for machine learning talent; it is counted among the top 12 computer skills.
  • Different types of learning algorithms
    • The best-known categories: supervised learning and unsupervised learning.
  • Main goal
    • How to develop the best machine learning systems and get better performance.

Supervised learning

How do we pick a model? A straight line or a polynomial?

  • Regression: predict a continuous-valued output
  • Classification problem: predict a discrete-valued output
    • tumor size vs. malignant (yes/no)
    • tumor size and age vs. malignant or benign
    • more features could be used for prediction (or for regression): uniformity of cell shape, cell size, ...
  • Vocabulary notes: statistically; compromise(d) -- to make concessions

Unsupervised Learning


  • Clustering problems
    • Google News: for a single news story, links to several different articles (URLs) are grouped together.
    • astronomical data analysis
  • Cocktail party problem
    • separate the overlapping voice sources from mixed recordings
    • Using Octave, the problem can be solved quickly and with very little code (a rough sketch in Python follows this list).
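The course notes only that Octave solves this quickly; as a rough stand-in, here is a minimal blind-source-separation sketch in Python using scikit-learn's FastICA (the synthetic "voices", the mixing matrix, and the choice of FastICA are my assumptions for illustration, not the course's code):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic "voices" standing in for the cocktail-party speakers
rng = np.random.RandomState(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                       # speaker 1
s2 = np.sign(np.sin(3 * t))              # speaker 2 (square wave)
S = np.c_[s1, s2] + 0.05 * rng.normal(size=(2000, 2))

# Each "microphone" records a different mixture of both speakers
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
X = S @ A.T

# Independent component analysis recovers the sources (up to scale and order)
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)
print(S_est.shape)   # (2000, 2): two separated signals
```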

Linear Regression with one variable

  • Model representation


Training set: m = number of training examples, x = input variable (features), y = output (target) variable

y = h(x) , h : hypothesis

How do we represent h?

h_theta(x) = theta0 + theta1 * x

This is univariate linear regression -- a fancy name for linear regression with one variable.
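For example, a minimal sketch of this hypothesis in Python (the function name `hypothesis` and the sample parameter values are mine, for illustration only):

```python
# Hypothesis for univariate linear regression: h_theta(x) = theta0 + theta1 * x
def hypothesis(x, theta0, theta1):
    return theta0 + theta1 * x

# Example: with theta0 = 0.5 and theta1 = 2.0, the prediction for x = 3 is 6.5
print(hypothesis(3.0, 0.5, 2.0))  # 6.5
```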

  • Cost Function

h_theta(x) = theta0 + theta1 * x

How do we choose the two parameters theta0 and theta1?


Choose theta0 and theta1 so that h(x) is close to y for the given training examples.

minimize over theta0, theta1:   J(theta0, theta1) = 1/(2m) * Sum[i=1..m] ( h_theta(x^(i)) - y^(i) )^2


The squared error function -- the most common cost function for regression problems.
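As a minimal sketch, this cost can be computed in Python like so (the name `compute_cost` and the use of NumPy are my choices, not the course's Octave code):

```python
import numpy as np

def compute_cost(x, y, theta0, theta1):
    """Squared error cost: J = 1/(2m) * sum((h_theta(x_i) - y_i)^2)."""
    m = len(x)
    predictions = theta0 + theta1 * x      # h_theta(x) for every training example
    return np.sum((predictions - y) ** 2) / (2 * m)

# Tiny hypothetical training set where the line y = x fits perfectly
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(compute_cost(x, y, 0.0, 1.0))  # 0.0
```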

  • Cost function intuition I - lecture 7

Goal: get a better intuition of what the cost function is doing, and why we want to use it.

Vocabulary note: recap -- to restate the main points briefly.


Simplified case: theta0 = 0

h(x) = theta1 * x

J(theta1) = 1/(2m) * Sum[i=1..m] ( theta1 * x^(i) - y^(i) )^2

For the example training set (1,1), (2,2), (3,3): when theta1 = 1, J(theta1) = 0;

when theta1 = 0.5, J(0.5) = 3.5/6 ≈ 0.58; when theta1 = 0, J(0) = 14/6 ≈ 2.33
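A quick, self-contained check of these numbers in Python (the toy data set (1,1), (2,2), (3,3) is the lecture's example):

```python
# Cost for the simplified model h(x) = theta1 * x on the toy training set
x = [1.0, 2.0, 3.0]
y = [1.0, 2.0, 3.0]

def J(theta1):
    m = len(x)
    return sum((theta1 * xi - yi) ** 2 for xi, yi in zip(x, y)) / (2 * m)

print(J(1.0))   # 0.0
print(J(0.5))   # 0.5833... = 3.5/6
print(J(0.0))   # 2.3333... = 14/6
```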


  • Cost function intuition II - lecture 8

Now the general case, with both theta0 and theta1 free.

    • contour plots: plots whose curves (outlines) connect points of equal cost
    • when neither theta0 nor theta1 is fixed at zero, the cost function J(theta0, theta1) forms a bowl-shaped 3D surface, shown below

    • Instead of the 3D surface, we can use contour plots (also called contour figures).

Question: with this data and this model, the contour plot shows a ring of "similar" (theta0, theta1) pairs that all produce nearly the same cost; how can we tell which of these pairs is actually better?
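To reproduce the bowl and its contours, here is a minimal plotting sketch (NumPy/Matplotlib on the same toy data as above; all names and values are illustrative assumptions, not course code):

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy training set
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])

# Evaluate J(theta0, theta1) on a grid of parameter values
t0 = np.linspace(-2, 2, 100)
t1 = np.linspace(-1, 3, 100)
T0, T1 = np.meshgrid(t0, t1)
J = np.zeros_like(T0)
for xi, yi in zip(x, y):
    J += (T0 + T1 * xi - yi) ** 2
J /= 2 * len(x)

plt.contour(T0, T1, J, levels=30)   # curves of equal cost
plt.xlabel("theta0")
plt.ylabel("theta1")
plt.show()
```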

  • Gradient descent algorithm
    • It is used all over machine learning.
    • It can minimize arbitrary functions, not only this cost function.
    • Basic idea

(figures: the cost function J(theta0, theta1) plotted as a 3D surface)
    • E.g., start at some point on the surface, look around, and repeatedly take a small step in the direction that goes downhill most steeply.
    • (figures: two descent paths from different starting points on the surface)
    • Starting from different points can end at different local minima -- this is a property of the gradient descent algorithm.
    • Detailed description: repeat until convergence { theta_j := theta_j - alpha * d/d(theta_j) J(theta0, theta1) } for j = 0 and j = 1, updating both simultaneously (a runnable sketch follows this list).

      • alpha is the learning rate; if alpha is large, gradient descent takes aggressive (large) steps.
      • The update term comes from calculus: it is the partial derivative of J.
      • Keep in mind: update theta0 and theta1 simultaneously, i.e., at the same time.
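A minimal sketch of batch gradient descent for univariate linear regression (the function name, alpha = 0.1, and iterations = 1000 are my own illustrative choices, not the course's Octave code):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, iterations=1000):
    """Fit h_theta(x) = theta0 + theta1 * x by minimizing the squared error cost."""
    m = len(x)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iterations):
        error = (theta0 + theta1 * x) - y       # h_theta(x^(i)) - y^(i) for all i
        grad0 = np.sum(error) / m               # dJ/dtheta0
        grad1 = np.sum(error * x) / m           # dJ/dtheta1
        # Simultaneous update: both gradients are computed before either theta changes
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(gradient_descent(x, y))   # approaches (0.0, 1.0), i.e. the line y = x
```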






Original post: http://www.cnblogs.com/hphp/p/3587255.html
