<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://spark00:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/root/soft/hadoop-2.6.0-cdh5.4.0/data/tmp</value>
</property>
</configuration>
1. Note: use the hostname mapping (spark00), not a raw IP.
2. The port is 8020.
3. Cache directory (create the data/tmp directory under the Hadoop installation directory):
/root/soft/hadoop-2.6.0-cdh5.4.0/data/tmp
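The hostname mapping from note 1 lives in /etc/hosts on the node. A minimal example is sketched below; the IP address is a placeholder assumption, substitute your machine's actual address:

```
# /etc/hosts — example entry (the IP here is a placeholder, not from the original post)
192.168.1.100   spark00
```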
3. hdfs-site.xml: set the replica count. Since this is a single-node setup, one replica is enough.
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
4. Configure the DataNode list in the slaves file:
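As a quick sanity check after editing the *-site.xml files, the property values can be read back with a small script. This is a sketch using Python's standard-library XML parser, not a Hadoop tool; the XML string below mirrors the hdfs-site.xml shown above:

```python
# Sketch: parse a Hadoop *-site.xml fragment and read back a property value.
import xml.etree.ElementTree as ET

# Mirrors the hdfs-site.xml configuration from the step above.
HDFS_SITE = """
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
"""

def get_property(xml_text, name):
    """Return the <value> text of the <property> whose <name> matches, else None."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

print(get_property(HDFS_SITE, "dfs.replication"))  # prints: 1
```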
vi slaves
Set the DataNode node as follows:
spark01
5. Format the NameNode:
cd /root/soft/hadoop-2.6.0-cdh5.4.0
bin/hdfs namenode -format
----------------------------
Start the NameNode: sbin/hadoop-daemon.sh start namenode
Start the DataNode: sbin/hadoop-daemon.sh start datanode
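Once both daemons are up, one way to verify the NameNode RPC port is reachable is a plain TCP probe. This is a generic sketch, not a Hadoop-specific API; the host and port match the fs.defaultFS setting (hdfs://spark00:8020) from core-site.xml:

```python
# Sketch: probe whether a daemon is listening on a host:port pair via TCP connect.
import socket

def is_listening(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refused connection, and timeout
        return False

if __name__ == "__main__":
    # spark00:8020 is the NameNode RPC address configured in core-site.xml.
    print("NameNode RPC reachable:", is_listening("spark00", 8020))
```

The Hadoop web UI (port 50070 on this release line) and the jps command are the usual alternatives for checking that the daemons actually started.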
Original source: http://www.cnblogs.com/xiaoxiao5ya/p/7cdd6d16387d78ca3cd7cfc2eaae7fe5.html