K8S Series - 1: Deploying a Multi-Node K8S Cluster with KubeSphere

The new KubeSphere release uses the kubekey tool (kk) to deploy a K8S cluster with a single command.

Pre-installation preparation

1. Switch the package repositories

Switch the CentOS YUM repositories to the Aliyun mirrors:

# Install wget
yum install wget -y
# Back up the existing repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# Fetch the Aliyun base repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Fetch the Aliyun EPEL repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# Clean the cache and rebuild it
yum clean all && yum makecache
# Update the system
yum update -y

2. Time synchronization

Enable time synchronization and confirm that it succeeds:

timedatectl
timedatectl set-ntp true
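
To confirm that synchronization actually took effect, check the NTP status. The commands below are a minimal check on CentOS 7; chronyc assumes chrony is the active NTP client:

# "NTP enabled: yes" and "NTP synchronized: yes" indicate a working sync
timedatectl status
# If chronyd is in use, list the time sources being tracked
chronyc sources -v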

3. Install Docker

Docker must be installed on every machine; docker-ce-19.03.4 is used here.

# Install Docker CE
# Set up the repository
# Install the required packages
yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Docker repository; switch to the Aliyun mirror if downloads are slow.
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Aliyun mirror address
# http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install Docker CE.
yum install -y containerd.io-1.2.10 docker-ce-19.03.4 docker-ce-cli-19.03.4

# Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
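
After starting Docker, it is worth confirming the installed version and noting the cgroup driver; the kubeadm output later in this article warns when the driver is "cgroupfs" rather than the recommended "systemd":

# Confirm the client/server versions match the packages installed above
docker version
# Show the cgroup driver that kubeadm will detect during preflight
docker info | grep -i 'cgroup driver'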

Start installing K8S

1. Create the cluster configuration file

[root@localhost ~]# tar xzf kubesphere-all-v3.0.0-offline-linux-amd64.tar.gz 
[root@localhost ~]# cd kubesphere-all-v3.0.0-offline-linux-amd64

[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create config
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ll
total 55156
drwxr-xr-x. 5 root root       76 Sep 21 05:36 charts
-rw-r--r--. 1 root root      759 Sep 26 09:10 config-sample.yaml
drwxr-xr-x. 2 root root      116 Sep 21 06:01 dependencies
-rwxr-xr-x. 1 root root 56469720 Sep 21 01:54 kk
drwxr-xr-x. 6 root root       68 Sep  3 01:45 kubekey
drwxr-xr-x. 2 root root     4096 Sep 21 06:54 kubesphere-images-v3.0.0
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# 
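
./kk create config with no arguments writes config-sample.yaml with default versions. The target Kubernetes and KubeSphere versions can also be requested explicitly; this is a sketch, and the flag names should be verified against ./kk create config --help of the bundled kk build:

# Generate a config that pins the Kubernetes and KubeSphere versions (flags assumed from kk's help)
./kk create config --with-kubernetes v1.17.9 --with-kubesphere v3.0.0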

2. Modify the configuration file

[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# cat  config-sample.yaml 
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.56.108, internalAddress: 192.168.56.108, user: root, password: kkroot}
  - {name: node2, address: 192.168.56.109, internalAddress: 192.168.56.109, user: root, password: kkroot}
  - {name: node3, address: 192.168.56.110, internalAddress: 192.168.56.110, user: root, password: kkroot}
  roleGroups:
    etcd:
    - node1
    master: 
    - node1
    worker:
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: dockerhub.kubekey.local
  addons: []

Modify the host and IP settings for node1, node2, and node3 to match your environment, and add privateRegistry: dockerhub.kubekey.local under the registry section.
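
kk logs in to every host over SSH with the user and password given in the hosts list, so it is worth confirming that password login works from the machine running kk before continuing. A quick manual check using the addresses from the sample config above:

# Each command should print the remote hostname after the password prompt
ssh root@192.168.56.109 hostname
ssh root@192.168.56.110 hostname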

3. Install node dependencies

[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk init os -f config-sample.yaml -s ./dependencies/
INFO[07:23:15 EDT] Init operating system 
INFO[07:19:58 EDT] Start initializing node2 [192.168.56.109]     node=192.168.56.109
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms.tar.gz to 192.168.56.109:/tmp   Done
INFO[07:21:12 EDT] Complete initialization node2 [192.168.56.109]  node=192.168.56.109

INFO[07:23:20 EDT] Start initializing node3 [192.168.56.110]     node=192.168.56.110
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms.tar.gz to 192.168.56.110:/tmp   Done
INFO[07:24:27 EDT] Complete initialization node3 [192.168.56.110]  node=192.168.56.110
INFO[07:24:27 EDT] Init operating system successful.  

4. Create the image registry

Use kk to create a self-signed image registry by running:

[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk init os -f config-sample.yaml -s ./dependencies/ --add-images-repo

INFO[07:26:32 EDT] Init operating system                        

Local images repository created successfully. Address: dockerhub.kubekey.local

INFO[07:27:03 EDT] Init operating system successful.            

If there is no response, load the registry image manually, make sure Docker is running, and run the command again:

[root@localhost kubesphere-images-v3.0.0]# docker load < registry.tar 
3e207b409db3: Loading layer  5.879MB/5.879MB
f5b9430e0e42: Loading layer  817.2kB/817.2kB
239a096513b5: Loading layer  20.08MB/20.08MB
a5f27630cdd9: Loading layer  3.584kB/3.584kB
b3f465d7c4d1: Loading layer  2.048kB/2.048kB
Loaded image: registry:2

[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# systemctl start docker
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk init os -f config-sample.yaml -s ./dependencies/ --add-images-repo
INFO[10:45:37 EDT] Init operating system                        

Local images repository created successfully. Address: dockerhub.kubekey.local

INFO[10:45:39 EDT] Init operating system successful.          

5. Load and push the images

Use push-images.sh to push the images into the registry prepared earlier:

./push-images.sh  dockerhub.kubekey.local

The script loads the required images, re-tags them, and pushes them to the private registry dockerhub.kubekey.local. The catalog can then be queried to confirm the push:

[root@localhost ~]# curl -XGET https://dockerhub.kubekey.local/v2/_catalog --cacert /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt 
{"repositories":["calico/cni","calico/kube-controllers","calico/node","calico/pod2daemon-flexvol","coredns/coredns","csiplugin/csi-attacher","csiplugin/csi-neonsan","csiplugin/csi-neonsan-centos","csiplugin/csi-neonsan-ubuntu","csiplugin/csi-node-driver-registrar","csiplugin/csi-provisioner","csiplugin/csi-qingcloud","csiplugin/csi-resizer","csiplugin/csi-snapshotter","csiplugin/snapshot-controller","fluent/fluentd","istio/citadel","istio/galley","istio/kubectl","istio/mixer","istio/pilot","istio/proxyv2","istio/sidecar_injector","jaegertracing/jaeger-agent","jaegertracing/jaeger-collector","jaegertracing/jaeger-es-index-cleaner","jaegertracing/jaeger-operator","jaegertracing/jaeger-query","jenkins/jenkins","jenkins/jnlp-slave","jimmidyson/configmap-reload","joosthofman/wget","kubesphere/alert-adapter","kubesphere/alerting","kubesphere/alerting-dbinit","kubesphere/builder-base","kubesphere/builder-go","kubesphere/builder-maven","kubesphere/builder-nodejs","kubesphere/elasticsearch-oss","kubesphere/etcd","kubesphere/examples-bookinfo-details-v1","kubesphere/examples-bookinfo-productpage-v1","kubesphere/examples-bookinfo-ratings-v1","kubesphere/examples-bookinfo-reviews-v1","kubesphere/examples-bookinfo-reviews-v2","kubesphere/examples-bookinfo-reviews-v3","kubesphere/fluent-bit","kubesphere/fluentbit-operator","kubesphere/java-11-centos7","kubesphere/java-11-runtime","kubesphere/java-8-centos7","kubesphere/java-8-runtime","kubesphere/jenkins-uc","kubesphere/k8s-dns-node-cache","kubesphere/ks-apiserver","kubesphere/ks-console","kubesphere/ks-controller-manager","kubesphere/ks-devops","kubesphere/ks-installer","kubesphere/ks-upgrade","kubesphere/kube-apiserver","kubesphere/kube-auditing-operator","kubesphere/kube-auditing-webhook","kubesphere/kube-controller-manager","kubesphere/kube-events-exporter","kubesphere/kube-events-operator","kubesphere/kube-events-ruler","kubesphere/kube-proxy","kubesphere/kube-rbac-proxy","kubesphere/kube-scheduler","kubesphere/kube-state-metrics","kubesphere/kubectl","kubesphere/linux-utils","kubesphere/log-sidecar-injector","kubesphere/metrics-server","kubesphere/netshoot","kubesphere/nfs-client-provisioner","kubesphere/nginx-ingress-controller","kubesphere/node-disk-manager","kubesphere/node-disk-operator","kubesphere/node-exporter","kubesphere/nodejs-4-centos7","kubesphere/nodejs-6-centos7","kubesphere/nodejs-8-centos7","kubesphere/notification","kubesphere/notification-manager","kubesphere/notification-manager-operator","kubesphere/pause","kubesphere/prometheus-config-reloader","kubesphere/prometheus-operator","kubesphere/provisioner-localpv","kubesphere/python-27-centos7","kubesphere/python-34-centos7","kubesphere/python-35-centos7","kubesphere/python-36-centos7","kubesphere/s2i-binary","kubesphere/s2ioperator","kubesphere/s2irun","kubesphere/tomcat85-java11-centos7"]}
[root@localhost ~]# 
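
Besides the catalog, the standard registry v2 API can confirm that a specific image and its tags were pushed; the image name below is only an example:

curl -XGET https://dockerhub.kubekey.local/v2/kubesphere/kube-apiserver/tags/list --cacert /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt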

With the preparation complete and the configuration file double-checked, run the installation.

Run the installation

1. Check node dependencies

[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create cluster -f config-sample.yaml
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name  | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node1 | y    | y    | y       | y        |       | y     |           | y      |            |             |                  | EDT 10:13:25 |
| node2 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:13:25 |
| node3 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:13:24 |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

2. Install the missing dependencies

cd /root/kubesphere-all-v3.0.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms
[root@localhost centos-7-amd64-rpms]# yum localinstall socat-1.7.3.2-2.el7.x86_64.rpm 

[root@localhost centos-7-amd64-rpms]# yum localinstall -y conntrack-tools-1.4.4-7.el7.x86_64.rpm 

[root@localhost centos-7-amd64-rpms]# yum localinstall -y nfs-utils-1.3.0-0.66.el7_8.x86_64.rpm 

[root@localhost centos-7-amd64-rpms]# yum localinstall -y ceph-common-10.2.5-4.el7.x86_64.rpm 

[root@localhost centos-7-amd64-rpms]# yum localinstall -y glusterfs-client-xlators-6.0-29.el7.x86_64.rpm 
[root@localhost centos-7-amd64-rpms]# yum localinstall -y glusterfs-6.0-29.el7.x86_64.rpm 
[root@localhost centos-7-amd64-rpms]# yum localinstall -y glusterfs-fuse-6.0-29.el7.x86_64.rpm 
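
Alternatively, everything in the offline dependency directory can be installed in a single pass on each node that reported missing packages. This is a shortcut sketch; it simply feeds all bundled rpms to yum, which skips the ones already installed:

cd /root/kubesphere-all-v3.0.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms
yum localinstall -y ./*.rpm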

3. Run the installation again


[root@node1 kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create cluster -f config-sample.yaml
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name  | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node3 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:53:47 |
| node1 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:53:47 |
| node2 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:53:47 |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[10:53:49 EDT] Downloading Installation Files               
INFO[10:53:49 EDT] Downloading kubeadm ...                      
INFO[10:53:49 EDT] Downloading kubelet ...                      
INFO[10:53:50 EDT] Downloading kubectl ...                      
INFO[10:53:50 EDT] Downloading kubecni ...                      
INFO[10:53:50 EDT] Downloading helm ...                         
INFO[10:53:51 EDT] Configurating operating system ...           
[node2 192.168.56.109] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node1 192.168.56.108] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node3 192.168.56.110] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
INFO[10:53:54 EDT] Installing docker ...                        
INFO[10:53:55 EDT] Start to download images on all nodes        
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/etcd:v3.3.12
[node3] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node2] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node3] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node2] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9
[node3] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node2] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node3] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.17.9
[node2] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.17.9
[node3] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.17.9
[node3] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[node3] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node1] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
INFO[10:53:59 EDT] Generating etcd certs                        
INFO[10:54:01 EDT] Synchronizing etcd certs                     
INFO[10:54:01 EDT] Creating etcd service                        
INFO[10:54:05 EDT] Starting etcd cluster                        
[node1 192.168.56.108] MSG:
Configuration file already exists
Waiting for etcd to start
INFO[10:54:13 EDT] Refreshing etcd configuration                
INFO[10:54:13 EDT] Backup etcd data regularly                   
INFO[10:54:14 EDT] Get cluster status                           
[node1 192.168.56.108] MSG:
Cluster will be created.
INFO[10:54:14 EDT] Installing kube binaries                     
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.108:/tmp/kubekey/kubeadm   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.110:/tmp/kubekey/kubeadm   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.109:/tmp/kubekey/kubeadm   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.108:/tmp/kubekey/kubelet   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.108:/tmp/kubekey/kubectl   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.108:/tmp/kubekey/helm   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.110:/tmp/kubekey/kubelet   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.108:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.109:/tmp/kubekey/kubelet   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.110:/tmp/kubekey/kubectl   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.109:/tmp/kubekey/kubectl   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.110:/tmp/kubekey/helm   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.109:/tmp/kubekey/helm   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.109:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.110:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
INFO[10:54:32 EDT] Initializing kubernetes cluster              
[node1 192.168.56.108] MSG:
W1002 10:54:33.546978    7304 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W1002 10:54:33.547575    7304 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1002 10:54:33.547601    7304 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.9
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local node1 node1.cluster.local node2 node2.cluster.local node3 node3.cluster.local] and IPs [10.233.0.1 10.0.2.15 127.0.0.1 192.168.56.108 192.168.56.109 192.168.56.110 10.233.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1002 10:54:39.078002    7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1002 10:54:39.089428    7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1002 10:54:39.091411    7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.007113 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: rajfez.t9320hox3sddbowz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token rajfez.t9320hox3sddbowz     --discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2     --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token rajfez.t9320hox3sddbowz     --discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2
[node1 192.168.56.108] MSG:
node/node1 untainted
[node1 192.168.56.108] MSG:
node/node1 labeled
[node1 192.168.56.108] MSG:
service "kube-dns" deleted
[node1 192.168.56.108] MSG:
service/coredns created
[node1 192.168.56.108] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[node1 192.168.56.108] MSG:
configmap/nodelocaldns created
[node1 192.168.56.108] MSG:
I1002 10:55:34.720063    9901 version.go:251] remote version is much newer: v1.19.2; falling back to: stable-1.17
W1002 10:55:36.884062    9901 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1002 10:55:36.884090    9901 validation.go:28] Cannot validate kubelet config - no validator is available
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a9a0daeedbefb4b9a014f4b258b9916403f7136bea20d28ec03aa926c41fcb3e
[node1 192.168.56.108] MSG:
secret/kubeadm-certs patched
[node1 192.168.56.108] MSG:
secret/kubeadm-certs patched
[node1 192.168.56.108] MSG:
secret/kubeadm-certs patched
[node1 192.168.56.108] MSG:
W1002 10:55:37.738867   10303 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1002 10:55:37.738964   10303 validation.go:28] Cannot validate kubelet config - no validator is available
kubeadm join lb.kubesphere.local:6443 --token 025byf.2t2mvldlr9wm1ycx     --discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2
[node1 192.168.56.108] MSG:
NAME    STATUS     ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
node1   NotReady   master,worker   34s   v1.17.9   192.168.56.108   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.4
INFO[10:55:38 EDT] Deploying network plugin ...                 
[node1 192.168.56.108] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
INFO[10:55:40 EDT] Joining nodes to cluster                     
[node3 192.168.56.110] MSG:
W1002 10:55:41.544472   12557 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1002 10:55:43.067290   12557 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node2 192.168.56.109] MSG:
W1002 10:55:41.963749    8533 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1002 10:55:43.520053    8533 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node3 192.168.56.110] MSG:
node/node3 labeled
[node2 192.168.56.109] MSG:
node/node2 labeled
INFO[10:55:54 EDT] Congradulations! Installation is successful. 

At this point the installation has completed successfully.
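
The cluster state can be verified from the master node with the usual kubectl checks:

# All three nodes should eventually report Ready
kubectl get nodes -o wide
# System pods (calico, coredns, kube-proxy, ...) should be Running
kubectl get pods --all-namespaces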

Troubleshooting

1. Worker nodes hang while downloading images during cluster creation

While the cluster creation was running, the worker nodes could not download the images.

First confirm on the master node that the registry container is running, then test from a worker node whether the local registry is reachable.
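
A minimal way to check the registry container on the master, assuming it was started from the registry:2 image loaded earlier:

# The registry container should be listed as Up
docker ps --filter ancestor=registry:2
# Inspect its logs if it is not serving requests (replace CONTAINER_ID with the real ID)
docker logs CONTAINER_ID

In this case the worker-side test failed: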

[root@localhost ~]# curl -XGET https://dockerhub.kubekey.local/v2/_catalog
curl: (7) Failed connect to dockerhub.kubekey.local:443; Connection refused

Since the image registry runs on the master node, disable the firewall on the master and verify the hosts configuration on the worker nodes:

# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# Disable SELinux temporarily
setenforce 0

# Disable SELinux permanently (takes effect after a reboot)
sed -i '7s/enforcing/disabled/' /etc/selinux/config

# Add the registry entry to /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.108 dockerhub.kubekey.local

Re-test connectivity after the changes:

[root@localhost ~]# curl -XGET https://dockerhub.kubekey.local/v2/_catalog --cacert /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt 
{"repositories":[]}

If the returned repository list is empty, check that the registry's container volume is mounted correctly and that the host directory backing it has enough free space; expand the storage if it does not.
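
Two generic checks for that case; the ancestor filter is an assumption and should be adjusted to whatever docker ps shows for the registry container:

# Free space on the host running the registry
df -h
# Show the registry container's volume mounts to find the backing host directory
docker inspect --format '{{json .Mounts}}' $(docker ps -q --filter ancestor=registry:2)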

2. Insufficient CPU count

[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create cluster -f config-sample.yaml
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name  | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node2 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:33:26 |
| node3 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:33:20 |
| node1 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:33:25 |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[10:38:49 EDT] Downloading Installation Files               
INFO[10:38:49 EDT] Downloading kubeadm ...                      
INFO[10:38:49 EDT] Downloading kubelet ...                      
INFO[10:38:50 EDT] Downloading kubectl ...                      
INFO[10:38:50 EDT] Downloading kubecni ...                      
INFO[10:38:50 EDT] Downloading helm ...                         
INFO[10:38:51 EDT] Configurating operating system ...           
[node3 192.168.56.110] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
no crontab for root
[node2 192.168.56.109] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
no crontab for root
[node1 192.168.56.108] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
no crontab for root
INFO[10:39:03 EDT] Installing docker ...                        
INFO[10:39:04 EDT] Start to download images on all nodes        
[node3] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/etcd:v3.3.12
[node2] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node3] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9
[node2] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.17.9
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.17.9
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.17.9
[node1] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node1] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node3] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node1] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node3] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node3] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node3] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[node3] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
INFO[10:39:47 EDT] Generating etcd certs                        
INFO[10:39:49 EDT] Synchronizing etcd certs                     
INFO[10:39:49 EDT] Creating etcd service                        
[node1 192.168.56.108] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
INFO[10:39:52 EDT] Starting etcd cluster                        
[node1 192.168.56.108] MSG:
Configuration file will be created
INFO[10:39:52 EDT] Refreshing etcd configuration                
Waiting for etcd to start
INFO[10:39:59 EDT] Backup etcd data regularly                   
INFO[10:40:00 EDT] Get cluster status                           
[node1 192.168.56.108] MSG:
Cluster will be created.
INFO[10:40:00 EDT] Installing kube binaries                     
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.108:/tmp/kubekey/kubeadm   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.110:/tmp/kubekey/kubeadm   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.109:/tmp/kubekey/kubeadm   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.108:/tmp/kubekey/kubelet   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.108:/tmp/kubekey/kubectl   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.109:/tmp/kubekey/kubelet   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.108:/tmp/kubekey/helm   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.110:/tmp/kubekey/kubelet   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.108:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.109:/tmp/kubekey/kubectl   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.110:/tmp/kubekey/kubectl   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.109:/tmp/kubekey/helm   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.110:/tmp/kubekey/helm   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.109:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.110:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
INFO[10:40:14 EDT] Initializing kubernetes cluster              
[node1 192.168.56.108] MSG:
[preflight] Running pre-flight checks
W1002 10:40:16.450314   18027 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1002 10:40:16.457940   18027 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[node1 192.168.56.108] MSG:
[preflight] Running pre-flight checks
W1002 10:40:17.496840   18149 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1002 10:40:17.511391   18149 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
ERRO[10:40:18 EDT] Failed to init kubernetes cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml" 
W1002 10:40:17.728409   18174 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W1002 10:40:17.729013   18174 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1002 10:40:17.729026   18174 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.9
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1  node=192.168.56.108
WARN[10:40:18 EDT] Task failed ...                              
WARN[10:40:18 EDT] error: interrupted by error                  
Error: Failed to init kubernetes cluster: interrupted by error
Usage:
  kk create cluster [flags]

Flags:
  -f, --filename string          Path to a configuration file
  -h, --help                     help for cluster
      --skip-pull-images         Skip pre pull images
      --with-kubernetes string   Specify a supported version of kubernetes
      --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)
  -y, --yes                      Skip pre-check of the installation

Global Flags:
      --debug   Print detailed information (default true)

Failed to init kubernetes cluster: interrupted by error
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# 

Running sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml" on the master confirmed that the VM's CPU count did not meet the requirement. Increase the number of vCPUs and create the cluster again.
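
The CPU count on each VM can be confirmed with standard commands before re-running kk; kubeadm's preflight message above also names the --ignore-preflight-errors option, but adding vCPUs is the cleaner fix:

# kubeadm requires at least 2 CPUs on the master
nproc
lscpu | grep '^CPU(s):'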

Original: https://www.cnblogs.com/elfcafe/p/13779619.html
