
Ingress


I. Introduction to Ingress

Ways to expose a service in Kubernetes: LoadBalancer Service, ExternalName, NodePort Service, and Ingress.


This article uses nginx-ingress-controller as the example. Official reference: https://kubernetes.io/docs/concepts/services-networking/ingress/
How it works: the Ingress controller talks to the Kubernetes API to dynamically detect changes to Ingress rules in the cluster, reads them, renders an Nginx configuration from its own template, writes that configuration into the nginx-ingress-controller Pod, and finally reloads Nginx.
A layer-4 load balancer does not take over the session (depending on the forwarding mode: NAT, DR, full-NAT, tunnel); the client still establishes the session with the backend itself.
A layer-7 load balancer: the client only establishes a connection with the load balancer, and the load balancer manages the session to the backend.
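Once the controller is running, you can inspect the Nginx configuration it has rendered by dumping nginx.conf from inside the controller Pod. A minimal sketch (the Pod name below is just an example from later in this article; substitute the one reported by your own cluster):

kubectl get pods -n ingress-nginx
kubectl exec -n ingress-nginx nginx-ingress-controller-ffc9559bd-kml76 -- cat /etc/nginx/nginx.conf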


 

Ingress resources come in the following types:
1. Single-Service Ingress  # only spec.backend is set, with no other rules
2. URL-path-based fanout  # spec.rules.http.paths distinguishes requests to different URLs of the same site and forwards them to different backends
3. Name-based virtual hosting  # spec.rules.host distinguishes different sites by host name
4. TLS Ingress  # the TLS private key and certificate come from a Secret (keys named tls.crt and tls.key); a sketch combining types 3 and 4 follows below
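A minimal sketch combining types 3 and 4, assuming a Secret named tls-secret (holding tls.crt and tls.key) and a backend Service named web-svc already exist; all names here are illustrative, and the API version matches the extensions/v1beta1 manifests used later in this article:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-vhost-example          # illustrative name
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: tls-secret         # assumed Secret containing tls.crt / tls.key
  rules:
  - host: www.example.com          # name-based virtual host
    http:
      paths:
      - path: /
        backend:
          serviceName: web-svc     # assumed existing Service
          servicePort: 80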

Ingress controllers  # HAProxy / Nginx / Traefik / Envoy (service mesh)
There is usually more than one service to route: URLs and host names distinguish different virtual hosts (server blocks), and each server block is directed at a different group of Pods.


A Service uses its label selector to continuously watch its own Pods; when the Pods change, the Service adjusts accordingly.
The Ingress controller relies on a (headless) Service to track Pod state changes, and the Service feeds those changes back to the Ingress in time.
The Service classifies the backend Pods (headless); when the Pods selected by the Service change, the Ingress reacts promptly.
Based on the Service's classification, the Ingress obtains the list of Pod IPs for that group and injects that IP list into the Ingress configuration (a minimal headless Service sketch follows below).
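For reference, a headless Service is simply a Service whose clusterIP is set to None, so DNS resolves directly to the Pod IPs instead of a virtual IP. A minimal sketch with illustrative names:

apiVersion: v1
kind: Service
metadata:
  name: myapp-headless      # illustrative name
spec:
  clusterIP: None           # headless: no cluster VIP, DNS returns the Pod IPs
  selector:
    app: myapp              # assumed Pod label
  ports:
  - port: 80
    targetPort: 80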

Steps needed to create an Ingress:
1. Deploy an Ingress controller.
2. Configure the frontend: server blocks (virtual hosts).
3. From the Pod information collected via the Service, generate the upstream servers, reflect them in the Ingress, and register them with the Ingress controller.

 

II. Installation

Installation steps: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md
Introduction: https://github.com/kubernetes/ingress-nginx
By default the controller watches all namespaces; to watch only a specific one, pass --watch-namespace (a sketch follows below).
If a single host is defined with different paths across rules, the controller merges them into one configuration.
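A sketch of that flag, added to the controller container's args in the Deployment shown below (limiting it to the default namespace here is purely an example):

          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --watch-namespace=default   # only watch Ingress resources in the default namespace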


1. Deploy the nginx-ingress-controller
[root@master1 ingress]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
The manifest creates:
1) Namespace [ingress-nginx]
2) ConfigMap [nginx-configuration], ConfigMap [tcp-services], ConfigMap [udp-services]
3) RoleBinding [nginx-ingress-role-nisa-binding] = Role [nginx-ingress-role] + ServiceAccount [nginx-ingress-serviceaccount]
4) ClusterRoleBinding [nginx-ingress-clusterrole-nisa-binding] = ClusterRole [nginx-ingress-clusterrole] + ServiceAccount [nginx-ingress-serviceaccount]
5) Deployment [nginx-ingress-controller], which consumes ConfigMap [nginx-configuration], ConfigMap [tcp-services] and ConfigMap [udp-services] as its configuration.
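To preview these objects without creating anything, a client-side dry run can be used (assuming the file has been downloaded locally as mandatory.yaml; on newer kubectl versions the flag is spelled --dry-run=client):

kubectl apply -f mandatory.yaml --dry-run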
Step-by-step breakdown of the nginx-ingress-controller manifest (mandatory.yaml):

 

[root@master01 ingress]# cat mandatory.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx   # create a dedicated namespace

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount   # create a ServiceAccount and bind it to RBAC roles for API server access
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true  # use the host network
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: 10.192.27.111/library/nginx-ingress-controller:0.20.0 # image pulled from a private registry
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

---
[root@master01 ingress]# 
[root@localhost ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
[root@master01 ingress]# kubectl apply -f mandatory.yaml 
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.extensions/nginx-ingress-controller created

[root@master01 ingress]# kubectl get all -n ingress-nginx
NAME                                           READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-controller-ffc9559bd-27nh7   0/1     Running   1          44s  # the Pod never reaches the READY state

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-ingress-controller   0/1     1            0           44s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-ingress-controller-ffc9559bd   1         1         0       44s
# Describe the Pod to find the cause: the health-check probes cannot connect: Readiness probe failed: Get http://10.192.27.115:10254/healthz: dial tcp 10.192.27.115:10254: connect: connection refused
[root@master01 ingress]# kubectl describe pod/nginx-ingress-controller-ffc9559bd-27nh7
Error from server (NotFound): pods "nginx-ingress-controller-ffc9559bd-27nh7" not found
[root@master01 ingress]# kubectl describe pod/nginx-ingress-controller-ffc9559bd-27nh7 -n ingress-nginx
Name:               nginx-ingress-controller-ffc9559bd-27nh7
Namespace:          ingress-nginx
Priority:           0
PriorityClassName:  <none>
Node:               10.192.27.115/10.192.27.115
Start Time:         Fri, 29 Nov 2019 09:12:50 +0800
Labels:             app.kubernetes.io/name=ingress-nginx
                    app.kubernetes.io/part-of=ingress-nginx
                    pod-template-hash=ffc9559bd
Annotations:        prometheus.io/port: 10254
                    prometheus.io/scrape: true
Status:             Running
IP:                 10.192.27.115
Controlled By:      ReplicaSet/nginx-ingress-controller-ffc9559bd
Containers:
  nginx-ingress-controller:
    Container ID:  docker://13a9b010ae5df5fbc4146c26a0b8132e1e6a2c5411648a104826ac70727196c4
    Image:         10.192.27.111/library/nginx-ingress-controller:0.20.0
    Image ID:      docker-pullable://10.192.27.111/library/nginx-ingress-controller@sha256:97bbe36d965aedce82f744669d2f78d2e4564c43809d43e80111cecebcb952d0
    Ports:         80/TCP, 443/TCP
    Host Ports:    80/TCP, 443/TCP
    Args:
      /nginx-ingress-controller
      --configmap=$(POD_NAMESPACE)/nginx-configuration
      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      --publish-service=$(POD_NAMESPACE)/ingress-nginx
      --annotations-prefix=nginx.ingress.kubernetes.io
    State:          Running
      Started:      Fri, 29 Nov 2019 09:13:57 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Fri, 29 Nov 2019 09:13:27 +0800
      Finished:     Fri, 29 Nov 2019 09:13:57 +0800
    Ready:          False
    Restart Count:  2
    Liveness:       http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-controller-ffc9559bd-27nh7 (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-serviceaccount-token-scfz7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  nginx-ingress-serviceaccount-token-scfz7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nginx-ingress-serviceaccount-token-scfz7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From                    Message
  ----     ------     ----               ----                    -------
  Normal   Scheduled  94s                default-scheduler       Successfully assigned ingress-nginx/nginx-ingress-controller-ffc9559bd-27nh7 to 10.192.27.115
  Normal   Pulled     27s (x3 over 93s)  kubelet, 10.192.27.115  Container image "10.192.27.111/library/nginx-ingress-controller:0.20.0" already present on machine
  Normal   Created    27s (x3 over 93s)  kubelet, 10.192.27.115  Created container
  Normal   Started    27s (x3 over 93s)  kubelet, 10.192.27.115  Started container
  Warning  Unhealthy  27s (x6 over 77s)  kubelet, 10.192.27.115  Liveness probe failed: Get http://10.192.27.115:10254/healthz: dial tcp 10.192.27.115:10254: connect: connection refused
  Normal   Killing    27s (x2 over 57s)  kubelet, 10.192.27.115  Killing container with id docker://nginx-ingress-controller:Container failed liveness probe.. Container will be killed and recreated.
  Warning  Unhealthy  19s (x8 over 89s)  kubelet, 10.192.27.115  Readiness probe failed: Get http://10.192.27.115:10254/healthz: dial tcp 10.192.27.115:10254: connect: connection refused  # fix for the port-10254 health-check failure: https://www.cnblogs.com/xingyunfashi/p/11493270.html
[root@master01 ingress]# 
[root@master01 ingress]# kubectl delete -f mandatory.yaml 
namespace "ingress-nginx" deleted
configmap "nginx-configuration" deleted
configmap "tcp-services" deleted
configmap "udp-services" deleted
serviceaccount "nginx-ingress-serviceaccount" deleted
clusterrole.rbac.authorization.k8s.io "nginx-ingress-clusterrole" deleted
role.rbac.authorization.k8s.io "nginx-ingress-role" deleted
rolebinding.rbac.authorization.k8s.io "nginx-ingress-role-nisa-binding" deleted
clusterrolebinding.rbac.authorization.k8s.io "nginx-ingress-clusterrole-nisa-binding" deleted
deployment.extensions "nginx-ingress-controller" deleted
[root@master01 ingress]# 

# Solution: add a kube-proxy parameter on every node
[root@node01 image]# vim /opt/kubernetes/cfg/kube-proxy
[root@node01 image]# cat /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 --hostname-override=10.192.27.115 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --masquerade-all=true \  # add this line on every node
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

[root@node01 image]# systemctl restart kube-proxy
[root@node01 image]# ps -ef | grep kube-proxy
root       3243      1  1 09:35 ?        00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=10.192.27.115 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --masquerade-all=true --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
root       3393  18416  0 09:35 pts/0    00:00:00 grep --color=auto kube-proxy


[root@node02 cfg]# vim /opt/kubernetes/cfg/kube-proxy
[root@node02 cfg]# cat /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 --hostname-override=10.192.27.116 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --masquerade-all=true \  # add this line on every node
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

[root@node02 cfg]# systemctl restart kube-proxy
[root@node02 cfg]# ps -ef | grep kube-proxy
root      2950     1  1 09:34 ?        00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=10.192.27.116 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --masquerade-all=true --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
root      3117 13951  0 09:34 pts/0    00:00:00 grep --color=auto kube-proxy
[root@node02 cfg]# 
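To confirm that kube-proxy really switched to IPVS after the restart, the rules can be inspected on each node (assuming the ipvsadm tool is installed there):

lsmod | grep ip_vs          # IPVS kernel modules are loaded
ipvsadm -Ln                 # list IPVS virtual servers and their backends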
[root@master01 ingress]# kubectl apply -f mandatory.yaml 
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.extensions/nginx-ingress-controller created

[root@master01 ingress]# kubectl get all -n ingress-nginx   #创建成功
NAME                                           READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-controller-ffc9559bd-kml76   1/1     Running   0          7s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-ingress-controller   1/1     1            1           7s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-ingress-controller-ffc9559bd   1         1         1       7s
[root@master01 ingress]# 

 

# The controller Pod was scheduled to node02 (10.192.27.116)
[root@master01 ingress]# kubectl get pods -o wide -n ingress-nginx 
NAME                                       READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
nginx-ingress-controller-ffc9559bd-5k6fd   1/1     Running   0          10m   10.192.27.116   10.192.27.116   <none>           <none>
[root@master01 ingress]# 
# Check the controller process on node02
[root@node02 cfg]# ps -ef | grep nginx-ingress-controller
root      1547 13951  0 11:43 pts/0    00:00:00 grep --color=auto nginx-ingress-controller
33       11410 11392  0 10:08 ?        00:00:00 /usr/bin/dumb-init /bin/bash /entrypoint.sh /nginx-ingress-controller --configmap=ingress-nginx/nginx-configuration --tcp-services-configmap=ingress-nginx/tcp-services --udp-services-configmap=ingress-nginx/udp-services --publish-service=ingress-nginx/ingress-nginx --annotations-prefix=nginx.ingress.kubernetes.io
33       11515 11410  0 10:08 ?        00:00:00 /bin/bash /entrypoint.sh /nginx-ingress-controller --configmap=ingress-nginx/nginx-configuration --tcp-services-configmap=ingress-nginx/tcp-services --udp-services-configmap=ingress-nginx/udp-services --publish-service=ingress-nginx/ingress-nginx --annotations-prefix=nginx.ingress.kubernetes.io
33       11520 11515  1 10:08 ?        00:01:04 /nginx-ingress-controller --configmap=ingress-nginx/nginx-configuration --tcp-services-configmap=ingress-nginx/tcp-services --udp-services-configmap=ingress-nginx/udp-services --publish-service=ingress-nginx/ingress-nginx --annotations-prefix=nginx.ingress.kubernetes.io

[root@node02 cfg]# netstat -anput | grep :80 | grep LISTEN  # port 80 is now listening
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      11828/nginx: master 
tcp6       0      0 :::80                   :::*                    LISTEN      11828/nginx: master 
[root@node02 cfg]# 
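At this point no Ingress rule exists yet, so a plain request to the controller should come back from its default backend with a 404. A quick sanity check against node02 (only the status code matters; the exact response body may differ by controller version):

curl -I http://10.192.27.116/
# expected: HTTP/1.1 404 Not Found  (served by the controller's default backend)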

 

 

2. Create a test Deployment and Service
[root@master01 ingress]# cat deploy-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
  labels:
    app: nginx

spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deploy
  template:
    metadata:
      labels:
        app: nginx-deploy

    spec:
      containers:
      - name: nginx
        image: 10.192.27.111/library/nginx:1.14
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80


---

apiVersion: v1
kind: Service
metadata:
  name: nginx-service-mxxl
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
  selector:
    app: nginx-deploy

[root@master01 ingress]# 
[root@master01 ingress]# kubectl apply -f deploy-nginx.yaml 
deployment.apps/nginx-deployment created
service/nginx-service-mxxl created
[root@master01 ingress]# kubectl get all 
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-5dbdf48958-29dbd   1/1     Running   0          11s
pod/nginx-deployment-5dbdf48958-mz2xh   1/1     Running   0          11s
pod/nginx-deployment-5dbdf48958-q9lx4   1/1     Running   0          11s

NAME                         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes           ClusterIP   10.0.0.1     <none>        443/TCP        17d
service/nginx-service-mxxl   NodePort    10.0.0.209   <none>        80:30080/TCP   11s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3/3     3            3           11s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-5dbdf48958   3         3         3       11s
[root@master01 ingress]# 
[root@master01 ingress]# kubectl get ep
NAME                 ENDPOINTS                                      AGE
kubernetes           10.192.27.100:6443,10.192.27.114:6443          17d
nginx-service-mxxl   172.17.43.4:80,172.17.43.5:80,172.17.46.4:80   31s
[root@master01 ingress]# 
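Before wiring up the Ingress, the Service itself can be verified through its NodePort on either node; a 200 from the nginx welcome page is expected:

curl -I http://10.192.27.116:30080/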
3. Create an HTTP (port 80) Ingress for testing
[root@master01 ingress]# cat ingress-test.yaml 
apiVersion: extensions/v1beta1                    # the Ingress must be in the same namespace as the Deployment/Service published above
kind: Ingress
metadata:
  name: simple-fanout-example   
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # rewrite the matched path to / before proxying to the backend Service
spec:
  rules:
  - host: foo.bar.com  # site host name
    http:
      paths:
      - path: /     # URL path to match
        backend:
          serviceName: nginx-service-mxxl   # name of the backend Service
          servicePort: 80     #service port
#      - path: /foo
#        backend:
#          serviceName: service1
#          servicePort: 4200
#      - path: /bar
#        backend:
#          serviceName: service2
#          servicePort: 8080
[root@master01 ingress]# 

[root@master01 ingress]# kubectl create -f ingress-test.yaml 
ingress.extensions/simple-fanout-example created
[root@master01 ingress]# kubectl get ingress 
NAME                    HOSTS         ADDRESS   PORTS   AGE
simple-fanout-example   foo.bar.com             80      21s
[root@master01 ingress]# 

Add the following entry to the system hosts file:
10.192.27.116 foo.bar.com  

This IP is used because the ingress-controller Pod was scheduled to node02 (10.192.27.116).
Note: in production you can label a group of nodes, run the ingress-controller on them as a DaemonSet, and put an external load balancer in front; a sketch of that layout follows below.
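A rough sketch of that approach (the label key/value is illustrative): label the chosen nodes, then change the controller workload in mandatory.yaml from a Deployment to a DaemonSet with a matching nodeSelector, so each labelled node runs one controller on its host network:

# 1) label the nodes that should run the controller
kubectl label nodes 10.192.27.115 10.192.27.116 node-role=ingress

# 2) in mandatory.yaml: change "kind: Deployment" to "kind: DaemonSet", remove "replicas",
#    and add to the Pod spec (at the same level as hostNetwork):
#      nodeSelector:
#        node-role: ingress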

Visit the site: http://foo.bar.com
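Alternatively, for a one-off test without editing the hosts file, curl can pin the host name to the node IP:

curl --resolve foo.bar.com:80:10.192.27.116 http://foo.bar.com/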


 


Original article: https://www.cnblogs.com/linux985/p/11957371.html
