
Hadoop single-node installation

Posted: 2019-08-13 11:31:06

Environment: Tencent Cloud, CentOS 7

1. Download Hadoop

http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.7/hadoop-2.7.7.tar.gz

2. Extract

tar -xvf hadoop-2.7.7.tar.gz -C /usr/java

3. Edit the hadoop-2.7.7/etc/hadoop/hadoop-env.sh file

Point it at the JDK installation:
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8

4. Add Hadoop environment variables (in /etc/profile)

    HADOOP_HOME=/usr/java/hadoop-2.7.7
    MAVEN_HOME=/usr/java/maven3.6
    RABBITMQ_HOME=/usr/java/rabbitmq_server
    TOMCAT_HOME=/usr/java/tomcat8.5
    JAVA_HOME=/usr/java/jdk1.8
    CLASSPATH=$JAVA_HOME/lib/
    PATH=$PATH:$JAVA_HOME/bin:$TOMCAT_HOME/bin:$RABBITMQ_HOME/sbin:$MAVEN_HOME/bin:$HADOOP_HOME/bin
    export PATH JAVA_HOME CLASSPATH TOMCAT_HOME RABBITMQ_HOME MAVEN_HOME HADOOP_HOME

   Apply the variables: source /etc/profile
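Before editing /etc/profile for real, the fragment can be dry-run by sourcing it from a scratch file and confirming PATH picks up the Hadoop binaries. A minimal sketch using the paths this guide assumes (the Tomcat/RabbitMQ/Maven entries are left out for brevity):

```shell
# Sketch: verify the profile fragment from step 4 extends PATH as expected.
# Paths follow this guide's layout; adjust to your own install locations.
cat > /tmp/hadoop-env-check.sh <<'EOF'
HADOOP_HOME=/usr/java/hadoop-2.7.7
JAVA_HOME=/usr/java/jdk1.8
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
export PATH JAVA_HOME CLASSPATH HADOOP_HOME
EOF
. /tmp/hadoop-env-check.sh
# Show that the Hadoop bin directory is now on PATH:
echo "$PATH" | tr ':' '\n' | grep 'hadoop-2.7.7/bin'   # prints /usr/java/hadoop-2.7.7/bin
```

Once the real /etc/profile is edited the same check applies after `source /etc/profile`.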

5. Edit hadoop-2.7.7/etc/hadoop/core-site.xml

  <!-- RPC address of the NameNode (the HDFS master) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <!-- Directory where Hadoop stores its runtime files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/java/hadoop-2.7.7/tmp</value>
    </property>
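Note that both properties must sit inside the file's `<configuration>` root element. A sketch of the complete file, written to /tmp here so it can be checked safely before replacing etc/hadoop/core-site.xml:

```shell
# Sketch: the complete core-site.xml as assumed by this guide
# (written to /tmp for a dry run; the real file lives at
#  /usr/java/hadoop-2.7.7/etc/hadoop/core-site.xml).
cat > /tmp/core-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- RPC address of the NameNode (the HDFS master) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <!-- Directory where Hadoop stores its runtime files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/java/hadoop-2.7.7/tmp</value>
    </property>
</configuration>
EOF
grep -c '<property>' /tmp/core-site.xml   # prints 2
```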

6. Edit hadoop-2.7.7/etc/hadoop/hdfs-site.xml

  <configuration>
        <property>
            <name>dfs.name.dir</name>
            <value>/usr/java/hadoop-2.7.7/hdfs/name</value>
            <description>Where the NameNode stores HDFS namespace metadata</description>
        </property>

        <property>
            <name>dfs.data.dir</name>
            <value>/usr/java/hadoop-2.7.7/hdfs/data</value>
            <description>Physical storage location of data blocks on the DataNode</description>
        </property>
        <!-- HDFS replication factor -->
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    </configuration>

7. Passwordless SSH login

    ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
    cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
    chmod 0600 ~/.ssh/authorized_keys

8. Starting and stopping HDFS

    ./bin/hdfs namenode -format  # initialize; the NameNode must be formatted before first use
        A line such as "19/08/13 09:46:05 INFO common.Storage: Storage directory /usr/java/hadoop-2.7.7/hdfs/name has been successfully formatted." means the format succeeded.
        
      ./sbin/start-dfs.sh  # start HDFS
        (base) [root@medecineit hadoop-2.7.7]# ./sbin/start-dfs.sh 
        Starting namenodes on [localhost]
        The authenticity of host 'localhost (127.0.0.1)' can't be established.
        ECDSA key fingerprint is SHA256:SLOXW/SMogWE3wmK/H310vL74h0dsYohaSF31oEsdBw.
        ECDSA key fingerprint is MD5:fe:a4:15:38:15:e7:32:c3:9f:c3:8e:43:c6:80:6b:ac.
        Are you sure you want to continue connecting (yes/no)? yes
        localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
        localhost: starting namenode, logging to /usr/java/hadoop-2.7.7/logs/hadoop-root-namenode-medecineit.out
        localhost: starting datanode, logging to /usr/java/hadoop-2.7.7/logs/hadoop-root-datanode-medecineit.out
        Starting secondary namenodes [0.0.0.0]
        The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
        ECDSA key fingerprint is SHA256:SLOXW/SMogWE3wmK/H310vL74h0dsYohaSF31oEsdBw.
        ECDSA key fingerprint is MD5:fe:a4:15:38:15:e7:32:c3:9f:c3:8e:43:c6:80:6b:ac.
        Are you sure you want to continue connecting (yes/no)? yes
        0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
        0.0.0.0: starting secondarynamenode, logging to /usr/java/hadoop-2.7.7/logs/hadoop-root-secondarynamenode-medecineit.out

      ./sbin/stop-dfs.sh   # stop HDFS

9. Verify the daemons are running

  Check with the jps command:
        (base) [root@medecineit hadoop-2.7.7]# jps
                    4416 NameNode
                    4916 Jps
                    4740 SecondaryNameNode
                    4553 DataNode
                    975 Bootstrap

    NameNode, SecondaryNameNode, and DataNode started successfully.

10. Web UI

http://ip:50070

11. Configure YARN: mapred-site.xml

        Create it from the template: cp mapred-site.xml.template mapred-site.xml

        <!-- Tell the framework that MapReduce runs on YARN -->
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>    

12. Edit yarn-site.xml

    <!-- Reducers fetch map output via mapreduce_shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

13. Starting and stopping YARN

        ./sbin/start-yarn.sh  # start
            
            (base) [root@medecineit hadoop-2.7.7]# ./sbin/start-yarn.sh 
            starting yarn daemons
            starting resourcemanager, logging to /usr/java/hadoop-2.7.7/logs/yarn-root-resourcemanager-medecineit.out
            localhost: starting nodemanager, logging to /usr/java/hadoop-2.7.7/logs/yarn-root-nodemanager-medecineit.out
        
            (base) [root@medecineit hadoop-2.7.7]# jps
                8469 ResourceManager
                8585 NodeManager
                8812 Jps
                975 Bootstrap
                
        Then start HDFS as well: ./sbin/start-dfs.sh 

            (base) [root@medecineit hadoop-2.7.7]# jps
                8469 ResourceManager
                9208 DataNode

                9401 SecondaryNameNode
                9065 NameNode
                8585 NodeManager
                9550 Jps
                975 Bootstrap


        ./sbin/stop-yarn.sh    # stop

14. YARN web UI

http://ip:8088

Single-node Hadoop and YARN setup is complete!

 

######## ZooKeeper installation ###########

1. Download

https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz

2. Extract

tar -xvf zookeeper-3.4.14.tar.gz -C /usr/java/

3. Edit the configuration

    cp zoo_sample.cfg  zoo.cfg
    Point dataDir at ZooKeeper's own data directory:
    dataDir=/usr/java/zookeeper-3.4.14/data
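The dataDir change can be scripted rather than hand-edited. A sketch using throwaway copies in /tmp (the real files live under zookeeper-3.4.14/conf/, and the sample values here mirror the stock zoo_sample.cfg):

```shell
# Sketch: derive zoo.cfg from zoo_sample.cfg and repoint dataDir.
# Uses /tmp copies; the real files are in /usr/java/zookeeper-3.4.14/conf/.
mkdir -p /tmp/zk-conf
cat > /tmp/zk-conf/zoo_sample.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
EOF
sed 's|^dataDir=.*|dataDir=/usr/java/zookeeper-3.4.14/data|' \
    /tmp/zk-conf/zoo_sample.cfg > /tmp/zk-conf/zoo.cfg
grep '^dataDir=' /tmp/zk-conf/zoo.cfg   # prints dataDir=/usr/java/zookeeper-3.4.14/data
```

Remember to create the target directory (mkdir -p /usr/java/zookeeper-3.4.14/data) before starting the server.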

4. Start ZooKeeper

    ./bin/zkServer.sh start  # start

    ./bin/zkServer.sh status # check status

ZooKeeper done!

 

####### HBase installation ##########

1. Download

https://www.apache.org/dyn/closer.lua/hbase/2.0.5/hbase-2.0.5-bin.tar.gz

2. Extract

tar -xvf hbase-2.0.5-bin.tar.gz -C /usr/java/

3. Edit hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.8/

4. Edit hbase-site.xml

        <property>
          <name>hbase.rootdir</name>
          <value>hdfs://medecineit:9000/hbase</value>
        </property>
        <property>
          <name>hbase.cluster.distributed</name>
          <value>true</value>
        </property>
        <property>
          <name>hbase.zookeeper.quorum</name>
          <value>medecineit</value>
        </property>
        <property>
          <name>dfs.replication</name>
          <value>1</value>
        </property>

5. Edit regionservers

Set it to the hostname: medecineit
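For a single node, the regionservers file holds exactly one hostname per line. A sketch using /tmp (the real file is hbase-2.0.5/conf/regionservers; "medecineit" is the hostname used throughout this guide, so substitute your own, e.g. from $(hostname)):

```shell
# Sketch: a single-node regionservers file.
# Real path: /usr/java/hbase-2.0.5/conf/regionservers. 'medecineit' is
# this guide's hostname -- replace it with your own machine's name.
host=medecineit
printf '%s\n' "$host" > /tmp/regionservers
cat /tmp/regionservers   # prints medecineit
```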

6. Start HBase with ./bin/start-hbase.sh, then check with jps:

(base) [root@medecineit hbase-2.0.5]# jps
                8469 ResourceManager
                16902 Jps
                16823 HRegionServer
                9208 DataNode
                16152 QuorumPeerMain
                9401 SecondaryNameNode
                9065 NameNode
                16681 HMaster
                8585 NodeManager
                975 Bootstrap
                
        This shows HRegionServer and HMaster are running.

7. Web access

http://ip:16010/master-status

(HBase 1.0 and later serve the master web UI on port 16010; 60010 was the pre-1.0 default and does not respond on HBase 2.0.5.)

Done!

 


Original (Chinese): https://www.cnblogs.com/ywjfx/p/11344345.html
