
Spark wordcount compile error -- reduceByKey is not a member of RDD

Posted: 2014-11-06 19:04:55

Attempting to build the standalone Scala app from http://spark.apache.org/docs/latest/quick-start.html#a-standalone-app-in-scala from source.

This line:

val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_+_)

fails to compile with:

value reduceByKey is not a member of org.apache.spark.rdd.RDD[(String, Int)]

 

Resolution:

Import the implicit conversions from SparkContext:

import org.apache.spark.SparkContext._
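
With that import near the top of the source file, the quick-start's word count compiles. A minimal sketch of the corrected file is below; the object name follows the quick-start, while the app name and input path ("README.md") are assumptions. (In Spark 1.3 and later these implicits are brought into scope automatically, so the import is only required on older versions, but it is harmless either way.) This fragment needs spark-core on the classpath to compile and run:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._ // brings rddToPairRDDFunctions into scope, which adds reduceByKey

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Simple Application") // app name is an assumption
    val sc = new SparkContext(conf)
    val textFile = sc.textFile("README.md") // input path is an assumption
    val wordCounts = textFile
      .flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _) // compiles now that the implicit conversion is in scope
    wordCounts.collect().foreach(println)
    sc.stop()
  }
}
```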

These conversions use the 'pimp my library' pattern to add methods to RDDs of specific element types. If curious, see SparkContext.scala, line 1296.
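
The mechanism can be seen in plain Scala without Spark at all. Below is a small illustrative sketch of the same pattern: an implicit class adds a reduceByKey method to any Seq of key-value pairs, just as SparkContext's implicits add it to RDDs of pairs. PairOps and this reduceByKey are hypothetical names for illustration, not Spark's API:

```scala
object EnrichDemo {
  // Implicit wrapper that "pimps" Seq[(K, V)] with an extra method.
  // If this implicit is not in scope, calling reduceByKey on a Seq
  // fails with the same kind of "is not a member of" compile error.
  implicit class PairOps[K, V](val pairs: Seq[(K, V)]) extends AnyVal {
    def reduceByKey(f: (V, V) => V): Map[K, V] =
      pairs.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2).reduce(f) }
  }

  def main(args: Array[String]): Unit = {
    val words = Seq(("a", 1), ("b", 1), ("a", 1))
    // Works only because PairOps is in scope, mirroring why
    // `import org.apache.spark.SparkContext._` fixes the Spark error.
    println(words.reduceByKey(_ + _))
  }
}
```

The compiler rewrites `words.reduceByKey(...)` into `new PairOps(words).reduceByKey(...)` when it finds the implicit in scope; without the import, no such rewrite is attempted and the method lookup fails.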


Original: http://www.cnblogs.com/abelstronger/p/4079293.html
