
Machine Learning Notes (Washington University) - Clustering Specialization - Week Six


1. Hierarchical clustering

  • Avoids choosing the number of clusters beforehand
  • Dendrograms help visualize different clustering granularities (no need to rerun the algorithm)
  • Most algorithms allow the user to choose any distance metric (k-means restricted us to Euclidean distance)
  • Can often find more complex cluster shapes than k-means or Gaussian mixture models

Divisive (top-down):

Start with all data in one big cluster and recursively split (e.g., recursive k-means). Design choices (see the sketch after this list):

  • Which algorithm to use for each recursive split
  • How many clusters per split
  • When to split vs. stop: max cluster size, max cluster radius, or a specified number of clusters
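
A minimal sketch of the recursive-k-means idea under these design choices, assuming NumPy and scikit-learn are available; the function name, the max_size/max_depth stopping rules, and the toy data are illustrative assumptions, not part of the course notes:

    import numpy as np
    from sklearn.cluster import KMeans

    def divisive_clustering(X, max_size=40, depth=0, max_depth=5):
        """Recursively bisect the data with 2-means until each cluster
        is small enough (max_size) or the depth limit is reached.
        Returns a list of index arrays, one per leaf cluster."""
        indices = np.arange(len(X))
        # Stopping rule: one of the design choices listed above.
        if len(X) <= max_size or depth >= max_depth:
            return [indices]
        # One split = k-means with 2 clusters (another design choice).
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        clusters = []
        for k in (0, 1):
            mask = labels == k
            # Recurse on each child and map child-local indices back.
            for child in divisive_clustering(X[mask], max_size, depth + 1, max_depth):
                clusters.append(indices[mask][child])
        return clusters

    # Toy data: three well-separated blobs in 2-D.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in ([0, 0], [3, 0], [0, 3])])
    leaves = divisive_clustering(X)
    print([len(c) for c in leaves])  # sizes of the leaf clusters

Each recursion level answers the three questions above: k-means as the splitting algorithm, two clusters per split, and a size/depth rule for when to stop.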

 

Agglomerative (bottom-up):

Start with each data point as its own cluster and merge clusters until all points are in one big cluster (single linkage; see the sketch after the steps below).

Single linkage

  • Initialize each point to be its own cluster
  • Define the distance between two clusters to be the minimum distance between any point in cluster one and any point in cluster two
  • Merge the two closest clusters
  • Repeat the merge step until all points are in one cluster
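
These steps can be sketched with SciPy's hierarchical-clustering routines, assuming the same toy data as above; linkage with method="single" performs this closest-pair merging, and fcluster (with an illustrative distance cutoff) recovers flat clusters:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in ([0, 0], [3, 0], [0, 3])])

    # Single linkage: the distance between two clusters is the minimum
    # distance between any point in one and any point in the other.
    Z = linkage(X, method="single", metric="euclidean")

    # Each row of Z records one merge: (cluster i, cluster j, merge
    # distance, size of the new cluster), in the order in which the two
    # closest clusters were merged.
    print(Z[:5])

    # Cut the hierarchy at distance 1.0 to get flat cluster labels.
    labels = fcluster(Z, t=1.0, criterion="distance")
    print(np.unique(labels))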

 

Dendrogram

The x-axis shows the data points (carefully ordered).

The y-axis shows the distance between pairs of merged clusters.

The path from a leaf to the root shows every cluster to which a point belongs and the order in which those clusters merge.
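
A small sketch of drawing this dendrogram with SciPy and matplotlib, assuming the toy data above; dendrogram() handles the careful leaf ordering itself, and the axis labels are illustrative:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.3, size=(10, 2)) for c in ([0, 0], [3, 0], [0, 3])])
    Z = linkage(X, method="single")

    # Leaves on the x-axis are data points, ordered so branches do not
    # cross; heights on the y-axis are the distances at which pairs of
    # clusters merge.
    dendrogram(Z)
    plt.xlabel("data points")
    plt.ylabel("distance between merged clusters")
    plt.show()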

 


Original: http://www.cnblogs.com/climberclimb/p/6935542.html
