The hostnames are set as follows:
k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2
The /etc/hosts entries on each host are set as follows:
172.16.201.3  k8s-master1
172.16.201.4  k8s-master2
172.16.201.5  k8s-master3
172.16.201.6  k8s-node1
172.16.201.7  k8s-node2
172.16.201.10 master.liufeng-k8s.com  # load balancer; note that its firewall must allow the relevant port (6443)
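Appending these entries by hand on five hosts is error-prone. A minimal sketch of an idempotent helper (hypothetical, not from the original post) that can be run on every node — the grep guard makes repeated runs safe:

```shell
#!/bin/sh
# add_host: append "IP NAME" to a hosts file only if NAME is not already
# present, so the script can be re-run on any node without duplicating lines.
add_host() {
    ip=$1; name=$2; file=$3
    grep -qw "$name" "$file" || printf '%s %s\n' "$ip" "$name" >> "$file"
}

# Demonstrated against a temp file; point it at /etc/hosts on the real nodes.
hosts_file=$(mktemp)
add_host 172.16.201.3  k8s-master1 "$hosts_file"
add_host 172.16.201.3  k8s-master1 "$hosts_file"   # duplicate call is a no-op
add_host 172.16.201.10 master.liufeng-k8s.com "$hosts_file"
cat "$hosts_file"
```

On the real hosts, replace `$hosts_file` with /etc/hosts and add one `add_host` call per entry in the list above.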
Add a local hosts entry:
192.168.100.100 nexus3-cicd.apps.test.openshift.com
Delete all files under /etc/yum.repos.d, then create the file nexus.repo with the following content:
[base]
name=CentOS-$releasever - Base
baseurl=http://nexus3-cicd.apps.test.openshift.com/repository/yum/centos/$releasever/os/$basearch/
enabled=1
gpgcheck=0

[updates]
name=CentOS-$releasever - Updates
baseurl=http://nexus3-cicd.apps.test.openshift.com/repository/yum/centos/$releasever/updates/$basearch/
enabled=1
gpgcheck=0

[extras]
name=CentOS-$releasever - Extras
baseurl=http://nexus3-cicd.apps.test.openshift.com/repository/yum/centos/$releasever/extras/$basearch/
enabled=1
gpgcheck=0

[epel]
name=CentOS-$releasever - Epel
baseurl=http://nexus3-cicd.apps.test.openshift.com/repository/yum/epel/$releasever/$basearch/
enabled=1
gpgcheck=0

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=http://nexus3-cicd.apps.test.openshift.com/repository/yum/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=0

[kubernetes]
name=Kubernetes
baseurl=http://nexus3-cicd.apps.test.openshift.com/repository/yum/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
Install the prerequisite packages:
# yum install -y yum-utils device-mapper-persistent-data lvm2
Install the required Kubernetes packages:
# yum install -y docker-ce-18.06.3* kubelet kubeadm kubectl
Docker setup, using the private Docker registry (https://nexus3-docker-cicd.apps.test.openshift.com):
# mkdir /etc/docker
# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
# mkdir -p /etc/systemd/system/docker.service.d
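A stray comma or typo in daemon.json makes dockerd fail to start with an unhelpful error, so it is worth validating the file before restarting the service. A minimal sketch using python's json.tool (shown with python3; on a stock CentOS 7 use `python -m json.tool`):

```shell
#!/bin/sh
# check_daemon_json: exit non-zero if the file is not valid JSON.
check_daemon_json() {
    python3 -m json.tool "$1" > /dev/null
}

# Demonstrated against a copy of the config written above.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF
check_daemon_json "$tmp" && echo "daemon.json OK"
```

On the real host, run `check_daemon_json /etc/docker/daemon.json` before (re)starting docker.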
kubelet-related sysctl settings:
# cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# sysctl --system
Import the private Docker registry's certificate:
# echo -n | openssl s_client -showcerts -connect nexus3-docker-cicd.apps.test.openshift.com:443 2>/dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' >> /etc/pki/tls/certs/ca-bundle.crt
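The sed pipeline above simply cuts PEM certificate blocks out of the `s_client` output. Wrapped as a function and demonstrated on a throwaway self-signed certificate (generated locally, so the registry does not need to be reachable):

```shell
#!/bin/sh
# extract_certs: keep only the -----BEGIN/END CERTIFICATE----- blocks
# from whatever arrives on stdin.
extract_certs() {
    sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p'
}

# Generate a throwaway self-signed cert to demonstrate; the CN here is
# just illustrative.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=nexus3-docker-cicd.apps.test.openshift.com" \
    -keyout "$tmpdir/key.pem" -out "$tmpdir/cert.pem" 2>/dev/null

# Mix the cert with other text, as `openssl s_client -showcerts` would.
{ echo "handshake noise"; cat "$tmpdir/cert.pem"; } \
    | extract_certs > "$tmpdir/extracted.pem"
openssl x509 -noout -subject -in "$tmpdir/extracted.pem"
```

After appending the extracted block to /etc/pki/tls/certs/ca-bundle.crt as above, docker will trust the registry's certificate.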
Reload and enable the services:
# systemctl daemon-reload
# systemctl enable docker
# systemctl start docker
# systemctl enable kubelet
Disable swap:
# swapoff -a                        # disable temporarily
# sed -i '/swap/s/^/#/' /etc/fstab  # disable permanently
kubeadm init
# kubeadm init --kubernetes-version=v1.18.2 --control-plane-endpoint=master.liufeng-k8s.com:6443 \
  --pod-network-cidr=10.244.0.0/16 --image-repository=nexus3-docker-cicd.apps.test.openshift.com \
  --upload-certs
Some helpful commands:
# kubeadm init --help
# kubeadm init --image-repository=nexus3-docker-cicd.apps.test.openshift.com
If init fails, check the logs to troubleshoot, undo the changes with kubeadm reset, and then run kubeadm init again.
Configure the cluster
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
flannel
# kubectl apply -f kube-flannel.yml
Add a new worker node:
# kubeadm join master.liufeng-k8s.com:6443 --token yiqsq2.700kevht80v37oap \
  --discovery-token-ca-cert-hash sha256:f30993403486327114e83047fdd476e521f5f775cc304f30afa10ec18f9a05d7
Add a new master node:
# kubeadm join master.liufeng-k8s.com:6443 --token yiqsq2.700kevht80v37oap \
  --discovery-token-ca-cert-hash sha256:f30993403486327114e83047fdd476e521f5f775cc304f30afa10ec18f9a05d7 \
  --control-plane
Because I did not pass --upload-certs on my first kubeadm init, running the command above directly fails with a certificate-not-found error like this:
failure loading key for service account: couldn't load the private key file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
In that case, copy the certificates from master1 to master2 and master3 with scp, then run kubeadm join again. The certificates to copy are:
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
/etc/kubernetes/pki/etcd/ca.crt
# Quote this line if you are using external etcd
/etc/kubernetes/pki/etcd/ca.key
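Copying eight files to each new master by hand is tedious; a sketch of a loop run on master1 (host names are the ones from this setup; the `echo` makes it a dry run that only prints the commands — remove it to actually copy, and make sure /etc/kubernetes/pki/etcd exists on the targets first):

```shell
#!/bin/sh
# Print (dry run) the scp commands that copy the control-plane certs
# from master1 to the other masters; drop the `echo` to run them for real.
CERTS="
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
"

copy_certs() {
    for host in "$@"; do
        for cert in $CERTS; do
            echo scp "$cert" "root@$host:$cert"
        done
    done
}

copy_certs k8s-master2 k8s-master3
```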
If you did not copy the token, or it has been lost, regenerate it with:
# kubeadm token create --print-join-command
Configure the newly added master nodes the same way as master1:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Basic Kubernetes commands
# kubectl get cs                    # check component status
# kubectl get ns                    # list cluster namespaces
# kubectl get pods -n kube-system   # list pods in kube-system
Images required by kubeadm to build the k8s cluster (v1.18.2, with the flannel network add-on):
kube-proxy:v1.18.2
kube-controller-manager:v1.18.2
kube-scheduler:v1.18.2
kube-apiserver:v1.18.2
etcd:3.4.3-0
coredns:1.6.7
pause:3.2
flannel:v0.12.0-amd64
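These images can be pre-pulled from the private registry on every node so kubeadm does not have to fetch them during init/join. A dry-run sketch (the registry name is the one used throughout this post; drop the `echo` to actually pull):

```shell
#!/bin/sh
# Print (dry run) docker pull commands for every image in the list above,
# prefixed with the private registry; remove `echo` to pull for real.
REGISTRY=nexus3-docker-cicd.apps.test.openshift.com
IMAGES="
kube-proxy:v1.18.2
kube-controller-manager:v1.18.2
kube-scheduler:v1.18.2
kube-apiserver:v1.18.2
etcd:3.4.3-0
coredns:1.6.7
pause:3.2
flannel:v0.12.0-amd64
"

pull_images() {
    for img in $IMAGES; do
        echo docker pull "$REGISTRY/$img"
    done
}

pull_images
```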
Watch the system log during installation (CentOS):
# tail -f /var/log/messages
Enable kubectl command auto-completion:
# yum install -y bash-completion
# echo "source <(kubectl completion bash)" >> ~/.bashrc
# source ~/.bashrc
Original: https://www.cnblogs.com/ooops/p/12935315.html