8.1 Kubernetes Networking Fundamentals
8.1.1 Network Model Overview
Kubernetes uses a flat network model that follows these principles:
- Direct Pod-to-Pod communication: every Pod can reach every other Pod without NAT
- Node-to-Pod communication: nodes can reach all Pods directly
- Consistent IP addresses: the IP a Pod sees for itself is the same IP other Pods see
# Network model summary
apiVersion: v1
kind: ConfigMap
metadata:
name: network-model
data:
principles: |
    1. Every Pod has a unique IP address
    2. Containers in a Pod share a network namespace
    3. Pods can reach each other directly by IP
    4. Services provide stable network endpoints
    5. Ingress provides an entry point for external access
8.1.2 Network Component Architecture
apiVersion: v1
kind: ConfigMap
metadata:
name: network-components
data:
cni: |
    Container Network Interface
    - Configures Pod networking
    - Supports many network plugins
    - Standardizes the container network interface
kube-proxy: |
    The Kubernetes network proxy
    - Implements Service load balancing
    - Maintains network rules on every node
    - Supports multiple proxy modes
dns: |
    The cluster DNS service
    - Provides service discovery
    - Resolves Service names
    - Supports custom DNS policies
8.2 CNI Network Plugins
8.2.1 The Flannel Plugin
# Flannel configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-flannel-cfg
namespace: kube-system
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
app: flannel
spec:
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.15.1
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
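In net-conf.json above, flannel is given the cluster CIDR 10.244.0.0/16, from which each node leases a smaller Pod subnet (commonly a /24). A quick sketch of that arithmetic; the one-subnet-per-node mapping shown in the comments is an assumption for illustration, since flannel's subnet manager assigns leases dynamically:

```python
import ipaddress

# How a flannel-style Network CIDR (10.244.0.0/16) splits into
# per-node /24 Pod subnets.
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")
node_subnets = list(cluster_cidr.subnets(new_prefix=24))

print(len(node_subnets))    # 256 possible node subnets
print(node_subnets[0])      # 10.244.0.0/24 -> e.g. first node's lease
print(node_subnets[1])      # 10.244.1.0/24 -> e.g. second node's lease
```

With /24 leases, each node can host up to 254 Pod IPs before the cluster CIDR must grow.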
8.2.2 The Calico Plugin
# Calico configuration
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
name: default
spec:
calicoNetwork:
ipPools:
- blockSize: 26
cidr: 192.168.0.0/16
encapsulation: VXLANCrossSubnet
natOutgoing: Enabled
nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
name: default
spec: {}
---
# Example Calico network policies
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
name: deny-all
namespace: production
spec:
selector: all()
types:
- Ingress
- Egress
---
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
name: allow-frontend-to-backend
namespace: production
spec:
selector: app == 'backend'
types:
- Ingress
ingress:
- action: Allow
source:
selector: app == 'frontend'
destination:
ports:
- 8080
8.2.3 The Cilium Plugin
# Cilium configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: cilium-config
namespace: kube-system
data:
cluster-name: "kubernetes"
cluster-id: "1"
enable-ipv4: "true"
enable-ipv6: "false"
enable-bpf-masquerade: "true"
enable-host-reachable-services: "true"
enable-local-redirect-policy: "true"
kube-proxy-replacement: "strict"
operator-api-serve-addr: "127.0.0.1:9234"
ipam: "kubernetes"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: cilium
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: cilium
template:
metadata:
labels:
k8s-app: cilium
spec:
hostNetwork: true
containers:
- name: cilium-agent
image: quay.io/cilium/cilium:v1.12.0
command:
- cilium-agent
args:
- --config-dir=/tmp/cilium/config-map
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: cilium-run
mountPath: /var/run/cilium
- name: config-map
mountPath: /tmp/cilium/config-map
readOnly: true
volumes:
- name: cilium-run
hostPath:
path: /var/run/cilium
type: DirectoryOrCreate
- name: config-map
configMap:
name: cilium-config
8.3 Network Policies (NetworkPolicy)
8.3.1 Basic Network Policies
# Deny all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-all-ingress
namespace: production
spec:
podSelector: {}
policyTypes:
- Ingress
---
# Deny all egress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-all-egress
namespace: production
spec:
podSelector: {}
policyTypes:
- Egress
---
# Allow communication between Pods with specific labels
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-frontend-to-backend
namespace: production
spec:
podSelector:
matchLabels:
app: backend
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
ports:
- protocol: TCP
port: 8080
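The two deny-all policies earlier in this section can also be merged into a single default-deny baseline covering both directions, which is a common starting point before adding allow rules. An illustrative fragment for the same namespace:

```yaml
# Default-deny for both directions in one policy (illustrative)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}        # empty selector matches every Pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```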
8.3.2 Advanced Network Policies
# Namespace-based network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-monitoring
namespace: production
spec:
podSelector:
matchLabels:
app: web
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
name: monitoring
ports:
- protocol: TCP
port: 9090
---
# IP-block-based network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-external-access
namespace: production
spec:
podSelector:
matchLabels:
app: api
policyTypes:
- Ingress
- Egress
ingress:
- from:
- ipBlock:
cidr: 10.0.0.0/8
except:
- 10.0.1.0/24
ports:
- protocol: TCP
port: 443
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
        - 169.254.169.254/32 # block access to the cloud metadata service
ports:
- protocol: TCP
port: 443
- protocol: TCP
port: 80
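The ipBlock semantics above read as "inside cidr, minus every except range". A small sketch of that matching logic; the sample IPs are illustrative:

```python
import ipaddress

# ipBlock matching: a peer IP matches when it is inside `cidr`
# and NOT inside any `except` range.
cidr = ipaddress.ip_network("10.0.0.0/8")
excepts = [ipaddress.ip_network("10.0.1.0/24")]

def ipblock_matches(ip):
    addr = ipaddress.ip_address(ip)
    if addr not in cidr:
        return False
    return not any(addr in ex for ex in excepts)

print(ipblock_matches("10.2.3.4"))    # True: inside 10.0.0.0/8
print(ipblock_matches("10.0.1.7"))    # False: carved out by the except block
print(ipblock_matches("192.0.2.1"))   # False: outside the cidr entirely
```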
8.3.3 Network Policies for Microservices
# Network policies for a three-tier architecture
# Frontend tier
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: frontend-netpol
namespace: ecommerce
spec:
podSelector:
matchLabels:
tier: frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- ipBlock:
cidr: 0.0.0.0/0
ports:
- protocol: TCP
port: 80
- protocol: TCP
port: 443
egress:
- to:
- podSelector:
matchLabels:
tier: backend
ports:
- protocol: TCP
port: 8080
  - to: [] # allow DNS lookups
ports:
- protocol: UDP
port: 53
---
# Backend tier
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: backend-netpol
namespace: ecommerce
spec:
podSelector:
matchLabels:
tier: backend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
tier: frontend
ports:
- protocol: TCP
port: 8080
egress:
- to:
- podSelector:
matchLabels:
tier: database
ports:
- protocol: TCP
port: 5432
  - to: [] # allow DNS lookups
ports:
- protocol: UDP
port: 53
---
# Database tier
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: database-netpol
namespace: ecommerce
spec:
podSelector:
matchLabels:
tier: database
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
tier: backend
ports:
- protocol: TCP
port: 5432
8.4 Service Networking
8.4.1 Service Types in Detail
# ClusterIP Service
apiVersion: v1
kind: Service
metadata:
name: backend-service
spec:
type: ClusterIP
selector:
app: backend
ports:
- port: 80
targetPort: 8080
protocol: TCP
sessionAffinity: ClientIP
sessionAffinityConfig:
clientIP:
timeoutSeconds: 10800
---
# NodePort Service
apiVersion: v1
kind: Service
metadata:
name: frontend-nodeport
spec:
type: NodePort
selector:
app: frontend
ports:
- port: 80
targetPort: 3000
nodePort: 30080
protocol: TCP
---
# LoadBalancer Service
apiVersion: v1
kind: Service
metadata:
name: api-loadbalancer
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
type: LoadBalancer
selector:
app: api
ports:
- port: 443
targetPort: 8443
protocol: TCP
loadBalancerSourceRanges:
- 10.0.0.0/8
- 172.16.0.0/12
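`sessionAffinity: ClientIP` in the first Service pins each client IP to a single backend for up to `timeoutSeconds`. kube-proxy implements this with connection-tracking state, not an application-level map; the sketch below only illustrates the "same client, same backend" contract, with made-up endpoint addresses:

```python
import hashlib

# Illustration of ClientIP affinity: the first request from an IP
# selects a backend; later requests from that IP reuse it.
endpoints = ["10.244.1.5:8080", "10.244.2.9:8080", "10.244.3.3:8080"]
affinity = {}

def pick_backend(client_ip):
    if client_ip not in affinity:
        idx = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16) % len(endpoints)
        affinity[client_ip] = endpoints[idx]
    return affinity[client_ip]

print(pick_backend("203.0.113.7") == pick_backend("203.0.113.7"))  # True: sticky
```

In the real Service, the pinned entry expires after the configured 10800 seconds (3 hours) of inactivity.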
8.4.2 Headless Services
# Headless Service for StatefulSet
apiVersion: v1
kind: Service
metadata:
name: mysql-headless
spec:
clusterIP: None
selector:
app: mysql
ports:
- port: 3306
targetPort: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
serviceName: mysql-headless
replicas: 3
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:8.0
env:
- name: MYSQL_ROOT_PASSWORD
value: "password"
ports:
- containerPort: 3306
volumeMounts:
- name: mysql-data
mountPath: /var/lib/mysql
volumeClaimTemplates:
- metadata:
name: mysql-data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
8.4.3 ExternalName Services
# ExternalName Service
apiVersion: v1
kind: Service
metadata:
name: external-database
spec:
type: ExternalName
externalName: database.example.com
ports:
- port: 5432
targetPort: 5432
---
# Pod consuming the ExternalName Service
apiVersion: v1
kind: Pod
metadata:
name: app-pod
spec:
containers:
- name: app
image: myapp:latest
env:
- name: DATABASE_HOST
value: "external-database.default.svc.cluster.local"
- name: DATABASE_PORT
value: "5432"
8.5 Ingress Controllers
8.5.1 The Nginx Ingress Controller
# Nginx Ingress Controller deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx-ingress-controller
template:
metadata:
labels:
app: nginx-ingress-controller
spec:
containers:
- name: nginx-ingress-controller
image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
httpGet:
path: /healthz
port: 10254
initialDelaySeconds: 10
periodSeconds: 10
readinessProbe:
httpGet:
path: /healthz
port: 10254
initialDelaySeconds: 10
periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
spec:
type: LoadBalancer
selector:
app: nginx-ingress-controller
ports:
- name: http
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 443
8.5.2 Advanced Ingress Configuration
# Multi-domain Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: multi-domain-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
tls:
- hosts:
- api.example.com
- web.example.com
secretName: example-tls
rules:
- host: api.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: api-service
port:
number: 80
- host: web.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web-service
port:
number: 80
---
# Path-based routing
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: path-based-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/configuration-snippet: |
more_set_headers "X-Frame-Options: DENY";
more_set_headers "X-Content-Type-Options: nosniff";
spec:
rules:
- host: app.example.com
http:
paths:
- path: /api(/|$)(.*)
pathType: Prefix
backend:
service:
name: api-service
port:
number: 8080
- path: /web(/|$)(.*)
pathType: Prefix
backend:
service:
name: web-service
port:
number: 3000
- path: /static(/|$)(.*)
pathType: Prefix
backend:
service:
name: static-service
port:
number: 80
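The `rewrite-target: /$2` annotation refers to the second capture group of each path pattern, so the upstream service receives the path with the routing prefix stripped. A sketch of that rewrite for the `/api(/|$)(.*)` rule:

```python
import re

# Group 1 captures the separator ("/" or end of string), group 2
# captures everything after the prefix; rewrite-target /$2 keeps
# only group 2.
pattern = re.compile(r"/api(/|$)(.*)")

def rewrite(path):
    m = pattern.match(path)
    if not m:
        return None  # handled by a different rule (or 404)
    return "/" + m.group(2)

print(rewrite("/api/users/1"))  # /users/1
print(rewrite("/api"))          # /
print(rewrite("/web/index"))    # None
```

The same mechanics apply to the `/web` and `/static` rules, each stripping its own prefix.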
8.5.3 Ingress Middleware and Authentication
# Ingress with basic authentication
apiVersion: v1
kind: Secret
metadata:
name: basic-auth
type: Opaque
data:
auth: YWRtaW46JGFwcjEkSDY1dnBkJE8vVjFIeGZIRnlRZkVOVVBOUUJQZjE= # admin:admin
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: auth-ingress
annotations:
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required'
spec:
rules:
- host: admin.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: admin-service
port:
number: 80
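The `auth` key of the Secret above must hold a base64-encoded htpasswd entry of the form `user:hash`. Decoding the sample value shows that shape:

```python
import base64

# Decode the `auth` field from the basic-auth Secret: the part before
# ":" is the username; the rest is an apr1 (Apache MD5) password hash,
# which is not reversible.
encoded = "YWRtaW46JGFwcjEkSDY1dnBkJE8vVjFIeGZIRnlRZkVOVVBOUUJQZjE="
decoded = base64.b64decode(encoded).decode()
user, _, pw_hash = decoded.partition(":")
print(user)                          # admin
print(pw_hash.startswith("$apr1$"))  # True
```

In practice such entries are generated with `htpasswd` and the file is stored via `kubectl create secret generic basic-auth --from-file=auth`.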
---
# Ingress with OAuth authentication
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: oauth-ingress
annotations:
nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$escaped_request_uri"
nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-Request-User,X-Auth-Request-Email"
spec:
rules:
- host: secure.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: secure-service
port:
number: 80
8.6 DNS and Service Discovery
8.6.1 CoreDNS Configuration
# CoreDNS ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
example.com:53 {
errors
cache 30
forward . 8.8.8.8 8.8.4.4
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
spec:
replicas: 2
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
spec:
containers:
- name: coredns
image: k8s.gcr.io/coredns/coredns:v1.8.6
args:
- -conf
- /etc/coredns/Corefile
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 60
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
initialDelaySeconds: 10
timeoutSeconds: 5
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
8.6.2 Custom DNS Policies
# Pod with a custom DNS configuration
apiVersion: v1
kind: Pod
metadata:
name: custom-dns-pod
spec:
dnsPolicy: "None"
dnsConfig:
nameservers:
- 8.8.8.8
- 8.8.4.4
searches:
- example.com
- internal.example.com
options:
- name: ndots
value: "2"
- name: edns0
containers:
- name: app
image: nginx
---
# Pod using the ClusterFirst policy
apiVersion: v1
kind: Pod
metadata:
name: cluster-dns-pod
spec:
dnsPolicy: ClusterFirst
containers:
- name: app
image: nginx
env:
- name: SERVICE_HOST
value: "backend-service.default.svc.cluster.local"
8.7 Load Balancing
8.7.1 kube-proxy Modes
# kube-proxy configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-proxy
namespace: kube-system
data:
config.conf: |
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
scheduler: "rr" # rr, lc, dh, sh, sed, nq
strictARP: true
iptables:
masqueradeAll: false
masqueradeBit: 14
minSyncPeriod: 0s
syncPeriod: 30s
conntrack:
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-proxy
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: kube-proxy
template:
metadata:
labels:
k8s-app: kube-proxy
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: k8s.gcr.io/kube-proxy:v1.25.0
command:
- /usr/local/bin/kube-proxy
- --config=/var/lib/kube-proxy/config.conf
- --hostname-override=$(NODE_NAME)
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: kube-proxy
mountPath: /var/lib/kube-proxy
- name: xtables-lock
mountPath: /run/xtables.lock
volumes:
- name: kube-proxy
configMap:
name: kube-proxy
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
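The `scheduler: "rr"` setting above makes IPVS distribute new connections to backends in strict rotation. A minimal sketch of round-robin selection; the endpoint IPs are illustrative:

```python
from itertools import cycle

# IPVS rr scheduling: each new connection goes to the next backend
# in rotation, wrapping around at the end of the list.
endpoints = ["10.244.1.5", "10.244.2.9", "10.244.3.3"]
scheduler = cycle(endpoints)

picks = [next(scheduler) for _ in range(5)]
print(picks)
# ['10.244.1.5', '10.244.2.9', '10.244.3.3', '10.244.1.5', '10.244.2.9']
```

The other schedulers listed in the config comment (lc, sh, sed, ...) replace this rotation with least-connections, source-hashing, and similar strategies.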
8.7.2 External Load Balancers
# MetalLB configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: config
namespace: metallb-system
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 192.168.1.240-192.168.1.250
- name: production
protocol: bgp
addresses:
- 10.0.0.0/24
bgp-advertisements:
- aggregation-length: 32
localpref: 100
communities:
- 65535:65282
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: speaker
namespace: metallb-system
spec:
selector:
matchLabels:
app: metallb
component: speaker
template:
metadata:
labels:
app: metallb
component: speaker
spec:
hostNetwork: true
containers:
- name: speaker
image: metallb/speaker:v0.13.5
args:
- --port=7472
- --config=config
env:
- name: METALLB_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: METALLB_HOST
valueFrom:
fieldRef:
fieldPath: status.hostIP
ports:
- name: monitoring
containerPort: 7472
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
add:
- NET_RAW
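The default layer2 pool declared in the ConfigMap spans 192.168.1.240-192.168.1.250; each LoadBalancer Service consumes one address from a pool. Quick arithmetic on the pool size:

```python
import ipaddress

# Number of assignable IPs in the layer2 range 192.168.1.240-192.168.1.250
# (both endpoints inclusive).
start = ipaddress.ip_address("192.168.1.240")
end = ipaddress.ip_address("192.168.1.250")
pool_size = int(end) - int(start) + 1
print(pool_size)  # 11
```

With 11 addresses, at most 11 LoadBalancer Services can draw from this pool at once.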
8.8 Network Monitoring and Troubleshooting
8.8.1 Network Monitoring Script
#!/bin/bash
# network-monitor.sh
echo "=== Kubernetes network monitoring ==="

# Check network plugin status
echo -e "\n1. Network plugin status:"
kubectl get pods -n kube-system | grep -E "flannel|calico|cilium|weave"

# Check kube-proxy status
echo -e "\n2. kube-proxy status:"
kubectl get pods -n kube-system | grep kube-proxy
kubectl get ds -n kube-system kube-proxy

# Check CoreDNS status
echo -e "\n3. CoreDNS status:"
kubectl get pods -n kube-system | grep coredns
kubectl get svc -n kube-system kube-dns

# Check Services
echo -e "\n4. Services:"
kubectl get svc --all-namespaces

# Check Ingresses
echo -e "\n5. Ingresses:"
kubectl get ingress --all-namespaces

# Check network policies
echo -e "\n6. Network policies:"
kubectl get networkpolicy --all-namespaces

# Check node network status
echo -e "\n7. Node network status:"
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  echo -e "\nNode: $node"
  kubectl describe node "$node" | grep -A 10 "Addresses:"
done

# Check Pod network connectivity
echo -e "\n8. Pod network connectivity test:"
kubectl run network-test --image=busybox --rm -it --restart=Never -- nslookup kubernetes.default.svc.cluster.local
8.8.2 Network Troubleshooting Script
#!/bin/bash
# network-troubleshoot.sh
POD_NAME=$1
NAMESPACE=${2:-default}

if [ -z "$POD_NAME" ]; then
  echo "Usage: $0 <pod-name> [namespace]"
  exit 1
fi

echo "=== Network troubleshooting: $POD_NAME ==="

# Check Pod status
echo -e "\n1. Pod basics:"
kubectl get pod "$POD_NAME" -n "$NAMESPACE" -o wide
kubectl describe pod "$POD_NAME" -n "$NAMESPACE" | grep -A 10 "IP:"

# Check the Pod's network configuration
echo -e "\n2. Pod network configuration:"
kubectl exec "$POD_NAME" -n "$NAMESPACE" -- ip addr show
kubectl exec "$POD_NAME" -n "$NAMESPACE" -- ip route show

# Check DNS resolution
echo -e "\n3. DNS resolution test:"
kubectl exec "$POD_NAME" -n "$NAMESPACE" -- nslookup kubernetes.default.svc.cluster.local
kubectl exec "$POD_NAME" -n "$NAMESPACE" -- cat /etc/resolv.conf

# Check Service connectivity
echo -e "\n4. Service connectivity test:"
for svc in $(kubectl get svc -n "$NAMESPACE" -o jsonpath='{.items[*].metadata.name}'); do
  echo "Testing Service: $svc"
  kubectl exec "$POD_NAME" -n "$NAMESPACE" -- nc -zv "$svc" 80 2>&1 || true
done

# Check network policies that may affect the Pod
echo -e "\n5. Network policy check:"
kubectl get networkpolicy -n "$NAMESPACE"
kubectl describe pod "$POD_NAME" -n "$NAMESPACE" | grep -A 5 "Labels:"

# Check node networking
echo -e "\n6. Node network check:"
NODE=$(kubectl get pod "$POD_NAME" -n "$NAMESPACE" -o jsonpath='{.spec.nodeName}')
echo "Pod is running on node: $NODE"
kubectl describe node "$NODE" | grep -A 10 "Addresses:"

# Check related events
echo -e "\n7. Related network events:"
kubectl get events -n "$NAMESPACE" --field-selector involvedObject.name="$POD_NAME"
8.8.3 Network Performance Testing
# Network performance test Pods
apiVersion: v1
kind: Pod
metadata:
  name: network-perf-server
  labels:
    app: iperf3-server # matched by the iperf3-server Service selector below
spec:
containers:
- name: iperf3-server
image: networkstatic/iperf3
command: ["iperf3", "-s"]
ports:
- containerPort: 5201
---
apiVersion: v1
kind: Service
metadata:
name: iperf3-server
spec:
selector:
app: iperf3-server
ports:
- port: 5201
targetPort: 5201
---
apiVersion: v1
kind: Pod
metadata:
name: network-perf-client
spec:
containers:
- name: iperf3-client
image: networkstatic/iperf3
command: ["sleep", "3600"]
restartPolicy: Never
#!/bin/bash
# network-performance-test.sh
echo "=== Network performance test ==="

# Start the iperf3 server
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: iperf3-server
  labels:
    app: iperf3-server
spec:
  containers:
  - name: iperf3
    image: networkstatic/iperf3
    command: ["iperf3", "-s"]
    ports:
    - containerPort: 5201
EOF

# Wait for the server to start
sleep 10

# Get the server IP
SERVER_IP=$(kubectl get pod iperf3-server -o jsonpath='{.status.podIP}')
echo "Server IP: $SERVER_IP"

# Run the client test
echo -e "\nStarting network performance test..."
kubectl run iperf3-client --image=networkstatic/iperf3 --rm -it --restart=Never -- iperf3 -c "$SERVER_IP" -t 30

# Clean up
kubectl delete pod iperf3-server
echo -e "\nNetwork performance test finished"
8.9 Network Security
8.9.1 Pod Security Policies
# PodSecurityPolicy (deprecated in v1.21, removed in v1.25; applies to legacy clusters only)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: restricted-network
spec:
privileged: false
allowPrivilegeEscalation: false
requiredDropCapabilities:
- ALL
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAsNonRoot'
supplementalGroups:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
readOnlyRootFilesystem: false
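Since PodSecurityPolicy was removed in Kubernetes 1.25, the equivalent baseline on current clusters is typically enforced with Pod Security admission namespace labels. An illustrative fragment (the namespace name is assumed):

```yaml
# Pod Security admission labels, the built-in replacement for PSP
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject non-compliant Pods
    pod-security.kubernetes.io/warn: restricted      # warn on kubectl apply
```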
8.9.2 Network Encryption
# Istio mTLS configuration
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
name: default
namespace: production
spec:
mtls:
mode: STRICT
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: frontend-authz
namespace: production
spec:
selector:
matchLabels:
app: frontend
rules:
- from:
- source:
principals: ["cluster.local/ns/production/sa/backend"]
to:
- operation:
methods: ["GET", "POST"]
8.10 Chapter Summary
This chapter covered network management in Kubernetes. The main topics were:
Networking fundamentals
- Network model: the flat network architecture and its communication principles
- CNI plugins: Flannel, Calico, Cilium, and other network plugins
- Network components: core components such as kube-proxy and CoreDNS
Network policies
- Basic policies: controlling ingress and egress traffic
- Advanced policies: policies based on labels, namespaces, and IP blocks
- Microservice policies: network isolation for multi-tier architectures
Service discovery
- Service types: ClusterIP, NodePort, LoadBalancer, ExternalName
- Ingress controllers: Nginx, Traefik, and other ingress controllers
- DNS configuration: CoreDNS and custom DNS policies
Load balancing
- kube-proxy modes: iptables, IPVS, and other proxy modes
- External load balancing: MetalLB and cloud-provider load balancers
- Session persistence: IP-based session affinity
Monitoring and troubleshooting
- Network monitoring: watching network components and connectivity
- Troubleshooting: methods and tools for diagnosing network problems
- Performance testing: network performance benchmarking
Network security
- Network isolation: micro-segmentation with network policies
- Traffic encryption: mTLS and service mesh security
- Access control: identity-based network access control
Best practices
- Plan network CIDRs and IP address allocation carefully
- Use network policies to enforce least privilege
- Monitor network performance and security posture regularly
- Establish a network troubleshooting runbook
In the next chapter we will look at monitoring and logging in Kubernetes, including Prometheus, Grafana, and the ELK stack.