I recently attended an OpenShift training, and the product team's sessions refreshed some of my understanding of OpenShift. Today let's start with my weakest area: networking.
The diagram below gives a fairly intuitive picture.
Let's find an environment and take a look:
[root@node1 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:dc:99:1a brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.104/24 brd 192.168.56.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fedc:991a/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:1e:32:6a:69 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 42:ea:54:f3:41:32 brd ff:ff:ff:ff:ff:ff
9: br0: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether 8e:be:50:76:b7:45 brd ff:ff:ff:ff:ff:ff
10: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether 56:e0:7e:45:e9:5b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::54e0:7eff:fe45:e95b/64 scope link
       valid_lft forever preferred_lft forever
11: tun0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 1a:57:ef:7d:84:2a brd ff:ff:ff:ff:ff:ff
    inet 10.130.0.1/23 brd 10.130.1.255 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::1857:efff:fe7d:842a/64 scope link
       valid_lft forever preferred_lft forever
12: veth4da99f3a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
    link/ether 4e:7c:96:3b:db:3c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::4c7c:96ff:fe3b:db3c/64 scope link
       valid_lft forever preferred_lft forever
13: veth4596fc96@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
    link/ether 2a:83:b7:1b:e0:81 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::2883:b7ff:fe1b:e081/64 scope link
       valid_lft forever preferred_lft forever
14: veth04caa6b2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
    link/ether 3a:5b:9a:62:9c:f4 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::385b:9aff:fe62:9cf4/64 scope link
       valid_lft forever preferred_lft forever
15: veth14f14b18@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
    link/ether ca:d2:96:48:84:be brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::c8d2:96ff:fe48:84be/64 scope link
       valid_lft forever preferred_lft forever
16: veth31713a78@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
    link/ether ce:c3:8f:3e:b9:41 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::ccc3:8fff:fe3e:b941/64 scope link
       valid_lft forever preferred_lft forever
17: veth9ff2f96a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
    link/ether 16:f8:11:d2:28:7a brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::14f8:11ff:fed2:287a/64 scope link
       valid_lft forever preferred_lft forever
18: veth86a4a302@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
    link/ether 56:75:ab:7a:17:28 brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::5475:abff:fe7a:1728/64 scope link
       valid_lft forever preferred_lft forever
19: vethb4141622@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
    link/ether 46:d1:df:02:5c:d3 brd ff:ff:ff:ff:ff:ff link-netnsid 7
    inet6 fe80::44d1:dfff:fe02:5cd3/64 scope link
       valid_lft forever preferred_lft forever
20: vethae772509@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
    link/ether c6:12:af:03:f0:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 8
    inet6 fe80::c412:afff:fe03:f0b7/64 scope link
       valid_lft forever preferred_lft forever
25: veth4fbfae38@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
    link/ether 82:f3:e8:87:9b:e9 brd ff:ff:ff:ff:ff:ff link-netnsid 9
    inet6 fe80::80f3:e8ff:fe87:9be9/64 scope link
       valid_lft forever preferred_lft forever
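That listing is long. As a quick sketch (the sample below is a hypothetical abbreviation of the output above, not live data), the interface names and MTUs can be pulled out with a few lines of Python:

```python
import re

# Hypothetical abbreviated sample in the same shape as the `ip addr show` output.
sample = """\
9: br0: <BROADCAST,MULTICAST> mtu 1450
10: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000
11: tun0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450
12: veth4da99f3a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450
"""

# Capture the name (dropping any '@ifN' peer suffix) and the mtu value.
links = re.findall(r"^\d+: ([^:@]+)\S*: <[^>]+> mtu (\d+)", sample, re.M)
for name, mtu in links:
    print(name, mtu)
```

Filtering like this makes it easy to spot that every SDN-side device carries an MTU of 1450 while the VXLAN device itself advertises 65000.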
You can see a pile of interfaces. On a node, OpenShift SDN typically creates the following types of interfaces:
br0: The OVS bridge device that containers are attached to. OpenShift SDN also configures a set of non-subnet-specific flow rules on this bridge.

tun0: The gateway for reaching external networks. An OVS internal port (port 2 on br0). It gets assigned the cluster subnet gateway address and is used for external network access. OpenShift SDN configures netfilter and routing rules to enable access from the cluster subnet to the external network via NAT.

vxlan_sys_4789: The device for reaching pods on other nodes. The OVS VXLAN device (port 1 on br0), which provides access to containers on remote nodes. Referred to as vxlan0 in the OVS rules.

vethX (in the main netns): The virtual NIC paired with a pod. A Linux virtual ethernet peer of eth0 in the container's netns. It is attached to the OVS bridge on one of the other ports.
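One detail worth noticing in the ip addr output: br0, tun0 and the veths report mtu 1450 while the physical enp0s3 reports 1500. That 50-byte gap is exactly the VXLAN encapsulation overhead. A sketch of the arithmetic (not OpenShift code, just the standard VXLAN header sizes):

```python
# VXLAN wraps each pod frame in outer IPv4 + UDP + VXLAN headers, and the
# inner Ethernet header now rides inside the UDP payload as well.
PHYS_MTU = 1500   # MTU of the underlay NIC (enp0s3)
OUTER_IPV4 = 20   # outer IPv4 header
OUTER_UDP = 8     # outer UDP header (destination port 4789, hence vxlan_sys_4789)
VXLAN_HDR = 8     # VXLAN header carrying the VNID
INNER_ETH = 14    # inner Ethernet header of the encapsulated pod frame

pod_mtu = PHYS_MTU - (OUTER_IPV4 + OUTER_UDP + VXLAN_HDR + INNER_ETH)
print(pod_mtu)  # 1450, matching br0/tun0/vethX above
```

If the pod-side MTU were left at 1500, encapsulated frames would exceed the physical MTU and be fragmented or dropped.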
This description becomes much clearer when read together with the diagram above.
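The addressing is also worth pinning down: tun0 carries 10.130.0.1/23, this node's slice of the cluster-wide pod network 10.128.0.0/14 (the CIDRs here are the ones from this environment). A quick sanity check with Python's ipaddress module:

```python
import ipaddress

cluster_net = ipaddress.ip_network("10.128.0.0/14")   # cluster-wide pod network
node_subnet = ipaddress.ip_network("10.130.0.0/23")   # this node's pod subnet
tun0_gw = ipaddress.ip_address("10.130.0.1")          # tun0's address

# The node subnet is carved out of the cluster network, and tun0 holds its
# gateway address, so pods on this node route external traffic through tun0.
print(node_subnet.subnet_of(cluster_net))   # True
print(tun0_gw in node_subnet)               # True
# A destination outside 10.128.0.0/14 (e.g. 8.8.8.8) leaves via tun0 and is NATed.
print(ipaddress.ip_address("8.8.8.8") in cluster_net)  # False
```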
OVS bridges are not visible to the brctl command; it only shows docker0:
[root@node1 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02421e326a69       no
To see the br0 bridge and the IP information attached to it, you need to install openvswitch. According to https://access.redhat.com/solutions/3710321, OpenShift 3.10 and later no longer installs the openvswitch package on the host by default; if you need it, install it with the following steps:
# subscription-manager register
# subscription-manager list --available
# subscription-manager attach --pool=&lt;pool_id&gt;
# subscription-manager repos --enable=rhel-7-server-extras-rpms
# subscription-manager repos --enable=rhel-7-server-optional-rpms
# yum install -y openvswitch
List the bridges:
[root@node1 network-scripts]# ovs-vsctl list-br
br0
List the ports on the bridge:
[root@node1 network-scripts]# ovs-ofctl -O OpenFlow13 dump-ports-desc br0
OFPST_PORT_DESC reply (OF1.3) (xid=0x2):
 1(vxlan0): addr:a2:01:b9:7d:a6:c3
     config:     0
     state:      LIVE
     speed: 0 Mbps now, 0 Mbps max
 2(tun0): addr:de:48:d6:e7:a9:91
     config:     0
     state:      LIVE
     speed: 0 Mbps now, 0 Mbps max
 3(veth04fb5821): addr:66:20:89:fa:e0:9f
     config:     0
     state:      LIVE
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 4(vethbdebc4f7): addr:d6:f8:92:06:3e:da
     config:     0
     state:      LIVE
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 5(veth9fd3926f): addr:5e:60:13:c8:30:9e
     config:     0
     state:      LIVE
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 6(veth3b466fa9): addr:ee:3f:1b:cb:cf:9b
     config:     0
     state:      LIVE
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 7(veth866c42b5): addr:be:7e:e6:d2:2f:f1
     config:     0
     state:      LIVE
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 8(veth2446bc66): addr:96:5b:fe:87:10:30
     config:     0
     state:      LIVE
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 9(veth1afcb012): addr:36:3b:de:9d:82:8b
     config:     0
     state:      LIVE
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 10(vethd31bb8c7): addr:06:8f:41:50:ba:72
     config:     0
     state:      LIVE
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 11(vethe02a7907): addr:36:c4:af:c3:c8:26
     config:     0
     state:      LIVE
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 12(veth06b13117): addr:0a:53:73:5e:21:d7
     config:     0
     state:      LIVE
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 LOCAL(br0): addr:b2:a3:b4:47:9d:4e
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
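Note how the port numbers line up with the interface roles described earlier: vxlan0 is port 1, tun0 is port 2, and every pod veth takes one of the remaining ports. A small sketch of extracting that port-to-interface mapping from the dump-ports-desc text (the sample is abbreviated from the output above):

```python
import re

# Abbreviated sample of `ovs-ofctl dump-ports-desc br0` output.
sample = """
 1(vxlan0): addr:a2:01:b9:7d:a6:c3
 2(tun0): addr:de:48:d6:e7:a9:91
 3(veth04fb5821): addr:66:20:89:fa:e0:9f
 LOCAL(br0): addr:b2:a3:b4:47:9d:4e
"""

# Each port line starts with "<port>(<name>):"; collect port -> name pairs.
ports = dict(re.findall(r"(\w+)\(([\w-]+)\):", sample))
print(ports)
```

A mapping like this is handy when reading the flow rules, which refer to ports only by number (in_port=1, in_port=2, ...).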
Dump the flow rules:
[root@node1 network-scripts]# ovs-ofctl -O OpenFlow13 dump-flows br0
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x0, duration=788.192s, table=0, n_packets=0, n_bytes=0, priority=250,ip,in_port=2,nw_dst=224.0.0.0/4 actions=drop
 cookie=0x0, duration=788.192s, table=0, n_packets=0, n_bytes=0, priority=200,arp,in_port=1,arp_spa=10.128.0.0/14,arp_tpa=10.130.0.0/23 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
 cookie=0x0, duration=788.192s, table=0, n_packets=0, n_bytes=0, priority=200,ip,in_port=1,nw_src=10.128.0.0/14 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
 cookie=0x0, duration=788.192s, table=0, n_packets=0, n_bytes=0, priority=200,ip,in_port=1,nw_dst=10.128.0.0/14 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
 cookie=0x0, duration=788.192s, table=0, n_packets=98, n_bytes=4116, priority=200,arp,in_port=2,arp_spa=10.130.0.1,arp_tpa=10.128.0.0/14 actions=goto_table:30
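Each entry in that dump has the same shape: bookkeeping fields (cookie, duration, packet counters), then table= and priority=, the match criteria, and finally actions=. A rough, hypothetical parse of one of the lines above makes that structure explicit:

```python
# One flow entry copied from the dump above.
line = ("cookie=0x0, duration=788.192s, table=0, n_packets=98, n_bytes=4116, "
        "priority=200,arp,in_port=2,arp_spa=10.130.0.1,arp_tpa=10.128.0.0/14 "
        "actions=goto_table:30")

# Split off the bookkeeping prefix, then the actions suffix.
meta, rest = line.split(" priority=", 1)
match_part, actions = rest.split(" actions=", 1)

table = meta.split("table=")[1].split(",")[0]
priority, *criteria = match_part.split(",")
print(table, priority, criteria, actions)
```

Read this way, the entry says: in table 0, with priority 200, match ARP traffic arriving on port 2 (tun0) from the gateway 10.130.0.1 towards the cluster network, and continue processing in table 30.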
This was just an opening look; I'll stop here for now. For more on network diagnostics, see
https://docs.openshift.com/container-platform/3.11/admin_guide/sdn_troubleshooting.html
Original post: https://www.cnblogs.com/ericnie/p/11374298.html