For a while now I have been doing big data development, working with platforms and components such as Hadoop, HBase, Spark, Spark Streaming, Kafka, Docker and Kubernetes, and on the server-operations side reinventing a few wheels by imitating others. To develop against these systems you must first stand up a stable server environment. Although there are plenty of articles and tutorials online, I still hit countless pitfalls while learning and burned through more nights and weekends than I can count. Only now can I say I am genuinely up to speed: I can build the platform quickly and respond quickly when problems appear. Of course, what I have accumulated so far is still far from enough, and I need to keep digging into the internals and runtime mechanics of these systems.
The chapters that follow are the results of that study, written down so they are easy to consult later.
Five virtual machines were created locally, each with 2 CPU cores, 2 GB of RAM and a 20 GB disk.
Their IPs are:
192.168.10.90, 192.168.10.91, 192.168.10.93, 192.168.10.95, 192.168.10.96
The installation plan is as follows:
Service | master | master_backup | node1 | node2 | node3 | Notes |
---|---|---|---|---|---|---|
IP | 192.168.10.90 | 192.168.10.91 | 192.168.10.93 | 192.168.10.95 | 192.168.10.96 | server IP addresses |
JDK | √ | √ | √ | √ | √ | Java runtime |
QuorumPeerMain | | | √ | √ | √ | ZooKeeper cluster coordination service |
DFSZKFailoverController | √ | √ | | | | ZooKeeper-based cluster failover controller |
JournalNode | | | √ | √ | √ | edit-log journal nodes for the failover setup |
NameNode | √ | √ | | | | Hadoop NameNode |
ResourceManager | √ | √ | | | | Hadoop resource manager |
SecondaryNameNode | √ | √ | | | | Hadoop SecondaryNameNode |
NodeManager | | | √ | √ | √ | Hadoop node manager |
DataNode | | | √ | √ | √ | Hadoop data storage node |
HMaster | √ | √ | | | | HBase Master |
HRegionServer | | | √ | √ | √ | HBase storage node |
Master | √ | √ | | | | Spark Master |
Worker | | | √ | √ | √ | Spark worker node |
Kafka | | | √ | √ | √ | Kafka distributed message queue |
Turn off the firewall on every server:
systemctl stop firewalld
systemctl disable firewalld
Turn off SELinux as well, first for the current session and then permanently:
setenforce 0
vi /etc/selinux/config
Change the SELINUX value to disabled (note: the valid value is disabled, not disable).
Reboot the server for the change to take effect.
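The same edit can be scripted rather than made by hand in vi. A minimal sketch, assuming GNU sed; the helper name `disable_selinux_in` is invented here, and the file path is passed in so the change can be rehearsed on a scratch copy first:

```shell
# Rewrite the SELINUX= line of an SELinux config file in place.
# disable_selinux_in is a helper name invented for this sketch.
disable_selinux_in() {
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$1"
}

# On a real node you would run (and then reboot):
#   disable_selinux_in /etc/selinux/config
```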
Install the common build and runtime dependencies:
yum install pcre-devel -y
yum install openssl openssl-devel -y
yum install gcc gcc-c++ ncurses-devel perl -y
yum install zlib zlib-devel rsync xinetd -y
yum install wget lrzsz libxml2 -y
yum install kernel-devel libxslt-devel libpqxx-devel libffi-devel python-devel postgresql-devel -y
(Note: libpq-dev is the Debian/Ubuntu package name; on CentOS the PostgreSQL client headers come from postgresql-devel, which is used above.)
Next tune the kernel parameters:
vi /etc/sysctl.conf
Replace its contents with the following configuration:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time=120
# see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter=0
# net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
# see details in https://help.aliyun.com/knowledge_detail/41334.html
# net.ipv4.tcp_max_tw_buckets = 5000
# net.ipv4.tcp_syncookies = 1
# net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
# net.core.somaxconn = 262144
net.core.somaxconn = 2048
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 10
net.ipv4.tcp_keepalive_probes=5
net.ipv4.ip_local_port_range = 1024 65535
Run this command to apply the configuration:
sysctl -p
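After applying, it is worth reading a few of the values back to confirm they actually took effect. A small sketch (the `expect` helper is invented here; where sysctl or a given key is unavailable it reports the value as unreadable):

```shell
# Read a kernel parameter back with `sysctl -n` and compare to the expected value.
expect() {
  actual=$(sysctl -n "$1" 2>/dev/null)
  if [ "$actual" = "$2" ]; then
    echo "$1 = $actual (ok)"
  else
    echo "$1 = ${actual:-unreadable} (expected $2)"
  fi
}

expect vm.swappiness 0
expect net.core.somaxconn 2048
expect net.ipv4.tcp_fin_timeout 1
```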
Finally, bring the installed packages up to date:
yum update
Once all of the above is done, clone the remaining servers from this virtual machine. After cloning, remember to regenerate each clone's virtual NIC MAC address.
On every server, do the following:
vi /etc/hosts
Add the entries below:
192.168.10.90 master
192.168.10.91 master-backup
192.168.10.93 node1
192.168.10.95 node2
192.168.10.96 node3
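Rather than pasting these lines by hand on all five machines, the entries can be appended idempotently, so re-running a provisioning script never duplicates them. A sketch; the `add_host` helper and the HOSTS_FILE override (useful for a dry run against a scratch file) are invented here:

```shell
# Target file; override HOSTS_FILE to rehearse against a scratch copy.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"

# Append "ip name" only if the name is not already mapped in the file.
add_host() {
  grep -qE "[[:space:]]$2(\$|[[:space:]])" "$HOSTS_FILE" || echo "$1 $2" >> "$HOSTS_FILE"
}

add_host 192.168.10.90 master
add_host 192.168.10.91 master-backup
add_host 192.168.10.93 node1
add_host 192.168.10.95 node2
add_host 192.168.10.96 node3
```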
Then set each server's hostname by running the matching command on that machine:
hostnamectl set-hostname master          # on 192.168.10.90
hostnamectl set-hostname master-backup   # on 192.168.10.91
hostnamectl set-hostname node1           # on 192.168.10.93
hostnamectl set-hostname node2           # on 192.168.10.95
hostnamectl set-hostname node3           # on 192.168.10.96
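Running five different one-liners by hand is easy to get wrong; a single script can instead derive the name from the machine's own IP, so the same file works on every node. A sketch assuming one NIC on the 192.168.10.0/24 network (`name_for_ip` is a helper invented here; the mapping follows the plan above):

```shell
# Map a cluster IP to its planned hostname; fails for unknown IPs.
name_for_ip() {
  case "$1" in
    192.168.10.90) echo master ;;
    192.168.10.91) echo master-backup ;;
    192.168.10.93) echo node1 ;;
    192.168.10.95) echo node2 ;;
    192.168.10.96) echo node3 ;;
    *) return 1 ;;
  esac
}

# Take the first address reported by the system (assumes a single NIC).
ip=$(hostname -I 2>/dev/null | awk '{print $1}')
if name=$(name_for_ip "$ip"); then
  echo "setting hostname to $name"
  hostnamectl set-hostname "$name"
else
  echo "IP $ip is not in the cluster plan; hostname left unchanged" >&2
fi
```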
PS: the hostname must not contain underscores; an underscore in a hostname causes errors later when formatting the NameNode during Hadoop cluster configuration.
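That rule is easy to enforce up front. A small guard, sketched here (`valid_hostname` is an invented helper), that checks all planned names before any hostnamectl call:

```shell
# Reject hostnames containing "_", which breaks NameNode formatting later.
valid_hostname() {
  case "$1" in
    *_*) return 1 ;;
    *)   return 0 ;;
  esac
}

for name in master master-backup node1 node2 node3; do
  valid_hostname "$name" || { echo "invalid hostname: $name" >&2; exit 1; }
  echo "ok: $name"
done
```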
Copyright notice: this article was originally published on cnblogs (博客园) by AllEmpty. Reposting is welcome, but unless the author consents otherwise this notice must be retained and a prominent link to the original provided on the page; otherwise the repost is treated as infringement.
Author's blog: http://www.cnblogs.com/EmptyFS/
Original post: https://www.cnblogs.com/EmptyFS/p/12113070.html