
Spark-Cassandra-Connector: inserting data with saveToCassandra


Saving data to Cassandra from the spark-shell:

import com.datastax.spark.connector._

// normalfill is an RDD[String] of \u0005-delimited records
val data = normalfill.map(line => line.split("\u0005"))

data.map(
  line => (line(0), line(1), line(2), line(3))
).saveToCassandra(
  "cui",
  "oper_ios",
  SomeColumns("user_no", "cust_id", "oper_code", "oper_time")
)
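For comparison, here is a minimal standalone sketch of the same write outside the spark-shell. The input path and application name are assumptions, the contact point is taken from the connector log further down, and saveToCassandra / SomeColumns are the same connector API used above.

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object SaveToCassandraExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("save-to-cassandra-example")                 // assumed app name
      .set("spark.cassandra.connection.host", "172.25.1.158")  // contact point seen in the log below
    val sc = new SparkContext(conf)

    // hypothetical input path; stands in for however normalfill was built
    val normalfill = sc.textFile("/path/to/oper_ios_input")
    val data = normalfill.map(_.split("\u0005"))

    data.map(line => (line(0), line(1), line(2), line(3)))
      .saveToCassandra("cui", "oper_ios",
        SomeColumns("user_no", "cust_id", "oper_code", "oper_time"))

    sc.stop()
  }
}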

 

With saveToCassandra, when the target column is of type counter, the default behavior is to count: each write adds the written value to the existing counter instead of overwriting it.

 

CREATE TABLE cui.incr(
 name text,
 count counter,
 PRIMARY KEY (name)
)

 

scala> var rdd = sc.parallelize(Array(("cui", 100 )))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[820] at parallelize at <console>:42

scala>  rdd.saveToCassandra("cui","incr", SomeColumns("name","count"))
16/01/21 16:55:35 INFO core.Cluster: New Cassandra host /172.25.1.158:9042 added
……

// name   count
// cui    100

scala> var rdd = sc.parallelize(Array(("cui", 100 )))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[821] at parallelize at <console>:42

scala>  rdd.saveToCassandra("cui","incr", SomeColumns("name","count"))

// name   count
// cui    200
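To confirm that the second write really incremented the existing row rather than replacing it, the table can be read back with cassandraTable. This is a sketch; the exact printed form of the rows may vary by connector version.

import com.datastax.spark.connector._

// read cui.incr back as an RDD of CassandraRow and print it
val incr = sc.cassandraTable("cui", "incr")
incr.collect().foreach(println)
// expected output (approximately): CassandraRow{name: cui, count: 200}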


Original: http://www.cnblogs.com/tugeler/p/5148909.html
