StatefulSet: replica sets for stateful applications
Stateless applications: what matters is the group as a whole.
Stateful applications: what matters is each individual instance.
[root@master configmap]# cat ../volume/pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: 172.27.1.241
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: 172.27.1.241
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: 172.27.1.241
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: 172.27.1.241
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: 172.27.1.241
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 10Gi
[root@master volume]# kubectl apply -f pv-demo.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volume]# kubectl get pv -o wide
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE   VOLUMEMODE
pv001   5Gi        RWO,RWX        Retain           Available                                   5s    Filesystem
pv002   5Gi        RWO            Retain           Available                                   5s    Filesystem
pv003   5Gi        RWO,RWX        Retain           Available                                   5s    Filesystem
pv004   10Gi       RWO,RWX        Retain           Available                                   5s    Filesystem
pv005   10Gi       RWO,RWX        Retain           Available                                   5s    Filesystem
[root@master manifests]# cat statefulset-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: default
  labels:
    app: myapp               # Service label
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None            # makes this a headless Service
  selector:
    app: myapp-pod           # matches the Pod label
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-svc     # must match the headless Service name
  replicas: 3                # three replicas
  selector:
    matchLabels:
      app: myapp-pod         # matches the Pod label
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata        # PVC name prefix
    spec:
      accessModes: ["ReadWriteOnce"]   # access mode
      resources:
        requests:
          storage: 5Gi       # requested PV size
Create it:
[root@master manifests]# kubectl apply -f statefulset-demo.yaml
service/myapp-svc unchanged
statefulset.apps/myapp created
[root@master manifests]# kubectl get sts
NAME    READY   AGE
myapp   3/3     5s
[root@master manifests]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          92s
myapp-1   1/1     Running   0          91s
myapp-2   1/1     Running   0          32s
The StatefulSet automatically creates one PVC per Pod and binds each of them to a PV that satisfies its request.
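The PVC names are deterministic: each one is the volumeClaimTemplate name, the StatefulSet name, and the Pod ordinal joined with dashes. A minimal Python sketch of the naming scheme (an illustration only, not controller code):

```python
def pvc_names(template_name, sts_name, replicas):
    # A StatefulSet derives each claim's name as
    # <volumeClaimTemplate name>-<StatefulSet name>-<ordinal>.
    return [f"{template_name}-{sts_name}-{i}" for i in range(replicas)]

print(pvc_names("myappdata", "myapp", 3))
# ['myappdata-myapp-0', 'myappdata-myapp-1', 'myappdata-myapp-2']
```

These are exactly the claim names that show up in the `kubectl get pvc` output below.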
[root@master manifests]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           2m42s
myappdata-myapp-1   Bound    pv003    5Gi        RWO,RWX                       66s
myappdata-myapp-2   Bound    pv001    5Gi        RWO,RWX                       7s
[root@master manifests]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                       STORAGECLASS   REASON   AGE
pv001   5Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-2                           20m
pv002   5Gi        RWO            Retain           Bound       default/myappdata-myapp-0                           20m
pv003   5Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-1                           20m
pv004   10Gi       RWO,RWX        Retain           Available                                                       20m
pv005   10Gi       RWO,RWX        Retain           Available                                                       20m
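The bindings above can be understood as a matching rule: a claim binds to an Available PV whose capacity covers the request and whose access modes include every requested mode, and among those the smallest sufficient volume is preferred. The Python sketch below illustrates that rule with one plausible tie-breaker (fewer surplus access modes); it is a simplification, not the actual controller logic, and the dict layout is invented for the example:

```python
def match_pv(pvs, requested_gi, requested_modes):
    """Pick the PV a claim would bind to (simplified illustration)."""
    candidates = [
        pv for pv in pvs
        if pv["status"] == "Available"
        and pv["capacity_gi"] >= requested_gi
        and set(requested_modes) <= set(pv["modes"])
    ]
    # Prefer the smallest sufficient volume; among equals, prefer the one
    # with the fewest surplus access modes (a plausible tie-breaker).
    return min(candidates,
               key=lambda pv: (pv["capacity_gi"], len(pv["modes"])),
               default=None)

pvs = [
    {"name": "pv001", "capacity_gi": 5,  "modes": ["RWO", "RWX"], "status": "Available"},
    {"name": "pv002", "capacity_gi": 5,  "modes": ["RWO"],        "status": "Available"},
    {"name": "pv003", "capacity_gi": 5,  "modes": ["RWO", "RWX"], "status": "Available"},
    {"name": "pv004", "capacity_gi": 10, "modes": ["RWO", "RWX"], "status": "Available"},
    {"name": "pv005", "capacity_gi": 10, "modes": ["RWO", "RWX"], "status": "Available"},
]

# A 5Gi ReadWriteOnce claim lands on pv002: same size as pv001/pv003,
# but without the unneeded ReadWriteMany mode.
print(match_pv(pvs, 5, ["RWO"])["name"])   # -> pv002
```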
OrderedReady: the ordered start/stop behavior described above; this is the default.
spec:
  podManagementPolicy: OrderedReady
Parallel: tells the StatefulSet controller to launch or terminate all Pods in parallel, and not to wait for a Pod to become Running and Ready, or to terminate completely, before starting or stopping another one.
spec:
  podManagementPolicy: Parallel
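The difference between the two policies boils down to how Pod ordinals are batched during startup and teardown. A small Python sketch of that batching (an illustration of the behavior, not the real controller):

```python
def startup_batches(replicas, policy):
    # OrderedReady brings Pods up one at a time, in ordinal order, waiting
    # for each to become Running and Ready before the next batch starts;
    # Parallel launches every Pod in a single batch.
    ordinals = list(range(replicas))
    if policy == "OrderedReady":
        return [[i] for i in ordinals]
    return [ordinals]

print(startup_batches(3, "OrderedReady"))  # [[0], [1], [2]]
print(startup_batches(3, "Parallel"))      # [[0, 1, 2]]
```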
[root@master manifests]# kubectl scale sts myapp --replicas=5   # scale up to 5 Pods
[root@master manifests]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           14m
myappdata-myapp-1   Bound    pv003    5Gi        RWO,RWX                       14m
myappdata-myapp-2   Bound    pv001    5Gi        RWO,RWX                       14m
myappdata-myapp-3   Bound    pv004    10Gi       RWO,RWX                       7s
myappdata-myapp-4   Bound    pv005    10Gi       RWO,RWX                       7s
[root@master manifests]# kubectl scale sts myapp --replicas=2   # scale down to 2 Pods
[root@master manifests]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           14m
myappdata-myapp-1   Bound    pv003    5Gi        RWO,RWX                       14m
kubectl explain sts.spec.updateStrategy.rollingUpdate
partition: suppose the StatefulSet currently has 5 replicas, so the Pods are named pod-0 through pod-4. Setting partition=4 means "update only Pods whose ordinal is greater than or equal to 4". Only pod-4 qualifies, so only pod-4 is updated and the others are untouched; this is a canary update. If pod-4 works correctly after the update, change partition to 0, and every Pod with an ordinal >= 0, i.e. all of them, will be updated.
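The partition rule described above is easy to state in code: a Pod is updated only when its ordinal is greater than or equal to the partition. A Python sketch (illustration only, not controller code):

```python
def pods_to_update(sts_name, replicas, partition):
    # RollingUpdate with a partition touches only Pods whose ordinal is
    # >= partition; lower ordinals keep running the old revision.
    return [f"{sts_name}-{i}" for i in range(replicas) if i >= partition]

print(pods_to_update("myapp", 5, 4))  # ['myapp-4']  (the canary)
print(pods_to_update("myapp", 5, 0))  # all five Pods
```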
Modify the rolling update strategy and watch the effect:
kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
# change the StatefulSet's image
kubectl set image sts/myapp myapp=ikubernetes/myapp:v2
# confirm the change took effect
[root@master statfulset]# kubectl get sts myapp -o wide
NAME    READY   AGE   CONTAINERS   IMAGES
myapp   4/4     16h   myapp        ikubernetes/myapp:v2
Because the partition says to start updating from ordinal 2:
kubectl get pod myapp-1 -o yaml
shows that Pods with an ordinal below 2 still run the old image, while
kubectl get pod myapp-2 -o yaml
shows that Pods with an ordinal of 2 or above already run the new image.
# set partition back to 0, and the image change is rolled out to the remaining Pods as well
kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
[root@master statfulset]# kubectl get pods -l app=myapp-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image
NAME      IMAGE
myapp-0   ikubernetes/myapp:v2
myapp-1   ikubernetes/myapp:v2
myapp-2   ikubernetes/myapp:v2
myapp-3   ikubernetes/myapp:v2
Partitioned updates
Setting spec.updateStrategy.rollingUpdate.partition equal to the number of Pod replicas means no Pod falls within the directly updatable partition; the update only begins once partition drops below the Pod count.
Until then, even if a Pod is deleted it is recreated from the old revision; a staged update like this has no effect on any existing Pod.
go-template
Customize the resource output: kubectl get pod myapp-1 -o go-template --template='{{.status.podIP}}'
[root@master statfulset]# kubectl get pod myapp-1 -o go-template --template='{{range .spec.containers}}{{.image}}{{end}}'
ikubernetes/myapp:v2
Because the containers field being inspected here is a list, range is needed to iterate over it when printing each image.
For more go-template usage, see this blog post: https://www.bbsmax.com/A/gAJGgjX3JZ/
Access test: pod_name.service.ns_name.svc.cluster.local (myapp-2.myapp-svc.default.svc.cluster.local)
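That stable per-Pod DNS name can be assembled mechanically. A Python sketch of the pattern, assuming the StatefulSet's serviceName references the headless Service myapp-svc and the default cluster domain cluster.local:

```python
def pod_fqdn(pod, service, namespace, cluster_domain="cluster.local"):
    # Through the headless Service, each StatefulSet Pod gets a stable
    # record of the form <pod>.<service>.<namespace>.svc.<cluster-domain>.
    return f"{pod}.{service}.{namespace}.svc.{cluster_domain}"

print(pod_fqdn("myapp-2", "myapp-svc", "default"))
# myapp-2.myapp-svc.default.svc.cluster.local
```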
Source: https://www.cnblogs.com/peng-zone/p/11671413.html