
Common Spark transformations: keys, values, and mapValues


1. keys

Function:

  Returns a new RDD consisting of the key of every key-value pair.

Example

val list = List("hadoop","spark","hive","spark")
val rdd = sc.parallelize(list)          // distribute the list as an RDD[String]
val pairRdd = rdd.map(x => (x,1))       // pair each word with the count 1: RDD[(String, Int)]
pairRdd.keys.collect.foreach(println)   // collect and print every key

Result

hadoop
spark
hive
spark
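Since keys simply projects out the first element of each pair, duplicate keys are kept, as the output above shows. A minimal sketch of deduplicating them, assuming the same sc and pairRdd as in the example:

pairRdd.keys.distinct.collect.foreach(println)   // hadoop, spark, hive (order may vary)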

2. values

Function:

  Returns a new RDD consisting of the value of every key-value pair.

Example

val list = List("hadoop","spark","hive","spark")
val rdd = sc.parallelize(list)
val pairRdd = rdd.map(x => (x,1))
pairRdd.values.collect.foreach(println)

Result

1
1
1
1
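Because values here yields an RDD[Int], Spark's implicit numeric RDD helpers (DoubleRDDFunctions) also apply to it. A minimal sketch totaling the values, assuming the same pairRdd as above:

val total = pairRdd.values.sum   // 4.0 (sum always returns a Double)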

3. mapValues(func)

Function:

  Applies func to the value of every key-value pair, leaving the keys unchanged.

Example

val list = List("hadoop","spark","hive","spark")
val rdd = sc.parallelize(list)
val pairRdd = rdd.map(x => (x,1))
pairRdd.mapValues(_+1).collect.foreach(println)   // add 1 to each value

Result

(hadoop,2)
(spark,2)
(hive,2)
(spark,2)
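A practical note: mapValues also preserves the parent RDD's partitioner, whereas rewriting the same logic with a plain map does not. A minimal sketch of the map-based equivalent for comparison, assuming the same pairRdd as above:

pairRdd.map { case (k, v) => (k, v + 1) }.collect.foreach(println)   // same output, but any partitioner is discarded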


Original: http://www.mamicode.com/info-detail-2285651.html


Source: https://www.cnblogs.com/123456www/p/12308247.html
