
sqoop job running local and the "Cannot initialize Cluster" problem

Posted: 2015-02-08 23:14:14

Hadoop version: Hadoop 2.3.0-cdh5.0.0

Sqoop version: Sqoop 1.4.4-cdh5.0.0

Configure sqoop-env.sh:

#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/my/hadoop

#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/my/hadoop/share/hadoop/mapreduce1

#set the path to where bin/hbase is available
#export HBASE_HOME=

#Set the path to where bin/hive is available
export HIVE_HOME=/my/hive

#Set the path for where zookeeper config dir is
export ZOOCFGDIR=/my/zookeeper

Then copy ${hadoop_home}/share/hadoop/mapreduce1/hadoop-examples-2.3.0-mr1-cdh5.0.0.jar into ${sqoop_home}/lib.
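As a shell sketch of that copy step (using the placeholder paths from the config above):

cp ${hadoop_home}/share/hadoop/mapreduce1/hadoop-examples-2.3.0-mr1-cdh5.0.0.jar ${sqoop_home}/lib/

The export job itself is then launched roughly like this. The original post never shows the exact command, so the connection string, credentials, table, and export directory below are hypothetical:

sqoop export \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username SCOTT \
  --password tiger \
  --table MY_TABLE \
  --export-dir /user/e3base/my_table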

The HDFS-to-Oracle export then works, but the log shows: mapred.JobClient: Running job: job_local1950934197_0001, Task 'attempt_local1950934197_0001_m_000077_0', and so on. Clearly the MapReduce job is running in local mode. It does run, but this brings two problems: it cannot take advantage of the cluster, and the job cannot be killed by job id, because hadoop job -list does not list job_local jobs.

I then noticed that the hadoop-*core*.jar is an MR1 jar, while the cluster runs MR2, so I suspected that was the cause. I changed export HADOOP_MAPRED_HOME=/my/hadoop/share/hadoop/mapreduce1 to export HADOOP_MAPRED_HOME=/my/hadoop/share/hadoop/mapreduce, since the jars under that path are MR2. At the same time I deleted hadoop-*core*.jar from ${sqoop_home}/lib and copied the hadoop-*core*.jar from the mapreduce directory into lib. Resubmitting the export script then failed with:

15/02/08 19:53:08 WARN security.UserGroupInformation: PriviledgedActionException as:e3base (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
15/02/08 19:53:08 ERROR tool.ExportTool: Encountered IOException running export job: java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
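The message points at mapreduce.framework.name. A quick way to confirm what the cluster expects (the conf path is an assumption; on CDH it is often /etc/hadoop/conf):

grep -A 1 'mapreduce.framework.name' /etc/hadoop/conf/mapred-site.xml
# on an MR2/YARN cluster this should show <value>yarn</value>

If that value is yarn but Sqoop's classpath is missing the MR2 client jars, Cluster initialization fails with exactly this error, which is consistent with the jar mix-up described above.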

Cannot initialize Cluster: so I wondered whether the Sqoop and Hadoop versions were incompatible, but the Sqoop site says 1.4.4 supports Hadoop 2.0. Other Sqoop installation guides confirm that copying hadoop-*core*.jar is the right step. I finally found something useful at http://www.tuicool.com/articles/3YNVvuv, which suggests the problem is missing jars.

Worth a try: I copied hadoop-mapreduce-client-common-2.3.0-cdh5.0.0.jar from the mapreduce directory into lib and ran again, and it still failed with Cannot initialize Cluster. Suspecting more jars were missing, I also copied hadoop-mapreduce-client-jobclient-2.3.0-cdh5.0.0.jar into lib and ran again. This time it worked: Submitting tokens for job: job_1416981396834_2297; sqoop had submitted the MapReduce job to the cluster. Then, to see what error it would report, I deleted hadoop-mapreduce-client-common-2.3.0-cdh5.0.0.jar from lib and ran once more:

java.lang.ClassNotFoundException: org.apache.hadoop.mapreduce.v2.util.MRApps
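That fits: org.apache.hadoop.mapreduce.v2.util.MRApps ships in hadoop-mapreduce-client-common, so both jars are required. The fix that worked, summarized as shell (paths as in the config above; adjust jar versions to your install):

cp /my/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.3.0-cdh5.0.0.jar ${sqoop_home}/lib/
cp /my/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.0.0.jar ${sqoop_home}/lib/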


Original article: http://blog.csdn.net/yonghutwo/article/details/43647719
