
Spark

Posted: 2017-08-23 20:32:56
 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(1,Command exited with code 1)] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:359)

After the Spark cluster was deployed, start-all.sh ran successfully, but launching the shell (spark-shell) failed with the timeout above.

Netty is Spark's underlying RPC transport, so the error indicates that RPC communication timed out.

Possible fixes:
1. IPv6 may be a cause: try commenting out the ::1 entry (did not help here).
2. Increase the timeout (this worked). The same setting can be applied in three places:
   SparkConf:           conf.set("spark.rpc.askTimeout", "600s")
   spark-defaults.conf: spark.rpc.askTimeout 600s
   spark-submit:        --conf spark.rpc.askTimeout=600s
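As a sketch, the programmatic option looks like the following. The app name and the 600s value are illustrative choices, not requirements; spark.network.timeout is the umbrella network timeout that spark.rpc.askTimeout falls back to when it is unset (its 120s default is exactly what the stack trace reports), so raising it as well can help if other RPC paths also time out:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Set the timeouts programmatically, before the SparkContext is created.
val conf = new SparkConf()
  .setAppName("timeout-demo")              // hypothetical app name
  .set("spark.rpc.askTimeout", "600s")     // raise the RPC ask timeout from its 120s default
  .set("spark.network.timeout", "600s")    // umbrella default that askTimeout falls back to
val sc = new SparkContext(conf)
```

The other two options are equivalent: add `spark.rpc.askTimeout 600s` to conf/spark-defaults.conf, or pass `--conf spark.rpc.askTimeout=600s` to spark-submit. Note that a value set explicitly in SparkConf takes precedence over both of those.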

 



Original: http://www.cnblogs.com/Khaleesi-yu/p/7419940.html
