
Hadoop 2.3.0 YARN cluster installation on CentOS 64-bit

Posted: 2014-04-02 10:33:25

cluster: n0, n1, n2

n0: NameNode, ResourceManager

n1, n2: DataNode, NodeManager

 

1. Prerequisites

  1.1 Add the user hm

    #useradd hm

    #passwd hm

  1.2 JDK 1.6/1.7

  1.3 Passwordless SSH login

 

  1. On all machines: log in as user hm
      $cd /home/hm
      $mkdir .ssh

   2. On the NameNode, generate a key pair
     $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
     $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
    2.1 The .ssh directory must be mode 700 (it needs the execute bit)
    2.2 authorized_keys must be mode 600, or sshd will reject the key
    2.3 At this point SSH logins still prompt for a password (include the user name if it differs on the remote host):
      $ssh n1
      $ssh n2

   3. Copy the public key to the workers (password required)
      $cd .ssh
      $scp authorized_keys n1:/home/hm/.ssh
      $scp authorized_keys n2:/home/hm/.ssh
   4. Test (no password should be asked!)
      ssh n1
      ssh n2

 

2. Common Hadoop configuration

   2.1 hadoop-env.sh

   2.2 slaves (list of worker nodes)
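As a sketch, the two files under etc/hadoop might look like this (the JAVA_HOME path is an assumption; point it at the JDK installed in step 1.2):

```shell
# etc/hadoop/hadoop-env.sh -- JAVA_HOME is the one setting that must be edited;
# the path below is an assumed install location for a JDK 1.7
export JAVA_HOME=/usr/java/jdk1.7.0_51

# etc/hadoop/slaves -- one worker hostname per line (real file contains
# just the two hostnames, shown here as comments):
#   n1
#   n2
```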

3. The four component configuration files

  3.1 core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://n0:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hm/temp</value>
</property>
<property>
<name>hadoop.proxyuser.hm.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hm.groups</name>
<value>*</value>
</property>
</configuration>

 

  3.2 hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>n0:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hm/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hm/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>

 

  3.3 yarn-site.xml

<?xml version="1.0"?>

<configuration>

<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>n0:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>n0:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>n0:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>n0:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>n0:8088</value>
</property>
</configuration>
                                                 

 

  3.4 mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>n0:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>n0:19888</value>
</property>
</configuration>

 

 

4. Starting and stopping

  4.1 Start

      sbin/start-dfs.sh

      sbin/start-yarn.sh

  4.2 Stop

      sbin/stop-dfs.sh

      sbin/stop-yarn.sh
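Before the very first start-dfs.sh, the NameNode metadata directory (dfs.namenode.name.dir above) has to be formatted once, and the history server behind ports 10020/19888 from mapred-site.xml runs as a separate daemon. A sketch, run from the Hadoop install directory on n0:

```shell
# One-time step before the first start: format the NameNode metadata directory
bin/hdfs namenode -format
# start HDFS, then YARN
sbin/start-dfs.sh
sbin/start-yarn.sh
# the MapReduce JobHistory server (ports 10020/19888 above) starts separately
sbin/mr-jobhistory-daemon.sh start historyserver
```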

5. Testing
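A quick smoke test, assuming the stock examples jar that ships in the 2.3.0 binary tarball:

```shell
# Confirm the daemons are up, then run the bundled pi estimator on YARN.
jps                        # expect NameNode/ResourceManager on n0
bin/hdfs dfsadmin -report  # both DataNodes should be listed as live
bin/hadoop jar \
  share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar pi 2 10
# web UIs: http://n0:8088 (YARN), http://n0:50070 (HDFS)
```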

 

 

 

 

 


Original post: http://www.cnblogs.com/GrantYu/p/3638147.html
