There are two kinds of node-selection mechanisms: the node selector (nodeSelector) and node affinity (nodeAffinity).
```
[root@master ~]# kubectl explain pods.spec.nodeSelector
[root@master schedule]# pwd
/root/manifests/schedule
[root@master schedule]# vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    mageedu.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:        # node selector
    disktype: ssd      # run this Pod on a node carrying the label disktype=ssd
```
```
[root@master schedule]# kubectl apply -f pod-demo.yaml
pod/pod-demo created
[root@master schedule]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
pod-demo   1/1     Running   0          8m13s   10.244.1.6   node01   <none>           <none>
[root@master schedule]# kubectl get nodes --show-labels | grep node01
node01   Ready   <none>   76d   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=node01
```

The newly created Pod is running on node01, because node01 carries the disktype=ssd label.
Next, label node02, change the nodeSelector in the Pod manifest to match node02's new label, and recreate the Pod:
```
[root@master schedule]# kubectl delete -f pod-demo.yaml
[root@master ~]# kubectl label nodes node02 disktype=harddisk
node/node02 labeled
[root@master schedule]# vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    mageedu.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:
    disktype: harddisk
[root@master schedule]# kubectl get nodes --show-labels | grep node02
node02   Ready   <none>   76d   v1.13.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=harddisk,kubernetes.io/hostname=node02
[root@master schedule]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
pod-demo   1/1     Running   0          104s   10.244.2.5   node02   <none>           <none>
```
As shown, the Pod is now running on node02.
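For reference, nodeSelector can list several labels at once, and a node must carry all of them to qualify. A minimal sketch (the gpu=true label here is hypothetical, not a label used in the cluster above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-multi-selector        # hypothetical example name
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:                   # ALL key/value pairs must match the node's labels
    disktype: ssd
    gpu: "true"                   # hypothetical label; values must be strings, so quote "true"
```

If no node carries both labels, the Pod stays Pending, just like an unsatisfied hard affinity.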
```
[root@master scheduler]# kubectl explain pods.spec.affinity
[root@master scheduler]# kubectl explain pods.spec.affinity.nodeAffinity
```

- preferredDuringSchedulingIgnoredDuringExecution: soft affinity, a preference rather than a requirement;
- requiredDuringSchedulingIgnoredDuringExecution: hard affinity, which must be satisfied.

```
[root@master ~]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions
```

Hard affinity example:

```
[root@master schedule]# vim pod-nodeaffinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
[root@master schedule]# kubectl apply -f pod-nodeaffinity-demo.yaml
pod/pod-node-affinity-demo created
[root@master schedule]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
pod-node-affinity-demo   0/1     Pending   0          76s
```

The Pod stays Pending because no node satisfies the requirement (none carries a zone label with value foo or bar).
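Besides In, matchExpressions supports the operators NotIn, Exists, DoesNotExist, Gt, and Lt. As an illustrative sketch (not part of the original walkthrough), Exists matches any node that has the zone key at all, regardless of its value:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-exists   # hypothetical example name
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: Exists       # Exists/DoesNotExist take no values list
```

In the cluster above this Pod would also be Pending, since no node has a zone label; labeling any node with a zone key would let it schedule.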
Next, create a Pod with soft (preferred) affinity:
With soft affinity, even if no node satisfies the preference, the scheduler still finds a node to run the Pod on:

```
[root@master schedule]# kubectl delete -f pod-nodeaffinity-demo.yaml
pod "pod-node-affinity-demo" deleted
[root@master schedule]# vim pod-nodeaffinity-demo2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo2
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
        weight: 60
[root@master schedule]# kubectl apply -f pod-nodeaffinity-demo2.yaml
pod/pod-node-affinity-demo2 created
[root@master schedule]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
pod-node-affinity-demo2   1/1     Running   0          74s
```

pod-node-affinity-demo2 is Running even though no node matches the zone expression, because its affinity is defined as soft: when no node satisfies the preference, the scheduler still picks a node and runs the Pod there.
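Hard and soft affinity can also be combined in one Pod spec: the required term filters the candidate nodes, and the weighted preferences (weight 1-100, summed per node) rank the survivors. An illustrative sketch, not from the original post:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-combined-affinity      # hypothetical example name
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype          # must hold, or the Pod stays Pending
            operator: In
            values: ["ssd", "harddisk"]
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80                 # stronger preference: zone=foo nodes score higher
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values: ["foo"]
      - weight: 20                 # weaker preference
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values: ["bar"]
```

With the labels applied earlier, both node01 (disktype=ssd) and node02 (disktype=harddisk) pass the required filter; since neither has a zone label, the preferences simply do not add any score.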
Source: https://www.cnblogs.com/weiyiming007/p/10573379.html