4.1 ReplicaSet Basics
1. ReplicaSet Concepts
A ReplicaSet is the Kubernetes controller that ensures a specified number of Pod replicas is always running. It is the next-generation replacement for the ReplicationController and offers more flexible label selectors.
ReplicaSet characteristics:
- Maintains a specified number of Pod replicas
- Supports set-based label selectors
- Automatically replaces failed Pods
- Supports horizontal scaling
2. Basic ReplicaSet Example
simple-replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: nginx
    tier: frontend
spec:
  # Desired number of replicas
  replicas: 3
  # Selector used to match Pods
  selector:
    matchLabels:
      app: nginx
      tier: frontend
  # Pod template
  template:
    metadata:
      labels:
        app: nginx
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
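A minimal sketch of applying and verifying this ReplicaSet, assuming the manifest is saved as simple-replicaset.yaml (the file name shown above) and kubectl points at a working cluster:
# Create the ReplicaSet and check that three Pods come up
kubectl apply -f simple-replicaset.yaml
kubectl get replicaset nginx-replicaset
kubectl get pods -l app=nginx,tier=frontend
# Delete one Pod and watch the controller recreate it
kubectl delete pod <pod-name>
kubectl get pods -l app=nginx,tier=frontend -w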
3. Advanced Selector Example
advanced-selector-replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: advanced-replicaset
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
    matchExpressions:
    - key: tier
      operator: In
      values:
      - frontend
      - backend
    - key: environment
      operator: NotIn
      values:
      - debug
  template:
    metadata:
      labels:
        app: web
        tier: frontend
        environment: production
    spec:
      containers:
      - name: web
        image: nginx:1.20
        ports:
        - containerPort: 80
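The same set-based requirements can be expressed on the command line, which is a quick way to see which Pods a selector like this would match (label values here are just the ones used in the example above):
# List Pods using the equivalent set-based selector syntax
kubectl get pods -l 'app=web,tier in (frontend,backend),environment notin (debug)'
# Inspect the selector the ReplicaSet actually stores
kubectl get replicaset advanced-replicaset -o jsonpath='{.spec.selector}'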
4.2 Deployment in Detail
1. Deployment Concepts
A Deployment is a higher-level Kubernetes controller for managing Pods and ReplicaSets, providing declarative updates.
Deployment advantages:
- Declarative updates
- Rolling updates and rollbacks
- Revision history management
- Pausing and resuming rollouts
- Scaling management
2. Basic Deployment Example
basic-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  # Number of replicas
  replicas: 3
  # Selector
  selector:
    matchLabels:
      app: nginx
  # Pod template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        # Health checks
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
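A short apply-and-verify sequence for this manifest, assuming it is saved as basic-deployment.yaml (the file name shown above):
kubectl apply -f basic-deployment.yaml
# Wait for all replicas to become available
kubectl rollout status deployment/nginx-deployment
# The Deployment creates a ReplicaSet, which in turn creates the Pods
kubectl get deployment,replicaset,pods -l app=nginx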
3. Complete Web Application Deployment
web-app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-deployment
  namespace: default
  labels:
    app: web-app
    version: v1.0
  annotations:
    deployment.kubernetes.io/revision: "1"
spec:
  replicas: 5
  # Update strategy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  # Progress deadline
  progressDeadlineSeconds: 600
  # Revision history limit
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
        version: v1.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      # Graceful termination period
      terminationGracePeriodSeconds: 30
      # Init containers
      initContainers:
      - name: init-db
        image: busybox:1.35
        command: ['sh', '-c']
        args:
        - |
          until nc -z mysql-service 3306; do
            echo "Waiting for MySQL..."
            sleep 2
          done
          echo "MySQL is ready!"
      containers:
      - name: web-app
        image: myapp:1.0
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 8081
          name: metrics
        # Environment variables
        env:
        - name: DB_HOST
          value: "mysql-service"
        - name: DB_PORT
          value: "3306"
        - name: DB_NAME
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: database.name
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: database.password
        # Resource requests and limits
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        # Health checks
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 3
          successThreshold: 1
          failureThreshold: 3
        # Volume mounts
        volumeMounts:
        - name: app-config
          mountPath: /etc/config
        - name: app-logs
          mountPath: /var/log/app
      volumes:
      - name: app-config
        configMap:
          name: app-config
      - name: app-logs
        emptyDir: {}
      # Node selection
      nodeSelector:
        node-type: worker
      # Tolerations
      tolerations:
      - key: "node-type"
        operator: "Equal"
        value: "spot"
        effect: "NoSchedule"
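Because this example depends on external resources (mysql-service, the app-config ConfigMap, and the app-secret Secret), its Pods will stay in an Init or pending state until those exist. A few commands for checking the init container and probe behaviour, using a placeholder Pod name:
# Watch the init container wait for MySQL
kubectl get pods -l app=web-app
kubectl logs <pod-name> -c init-db
# Inspect probe failures and scheduling problems
kubectl describe pod <pod-name>
kubectl rollout status deployment/web-app-deployment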
4.3 Update Strategies
1. Rolling Update (RollingUpdate)
rolling-update-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-update-deployment
spec:
  replicas: 6
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # Maximum number of Pods that may be unavailable during the update
      maxUnavailable: 2
      # Maximum number of Pods that may be created above the desired count
      maxSurge: 2
  selector:
    matchLabels:
      app: rolling-app
  template:
    metadata:
      labels:
        app: rolling-app
    spec:
      containers:
      - name: app
        image: nginx:1.20
        ports:
        - containerPort: 80
        # Graceful shutdown
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "sleep 15"]
        # Readiness probe ensures new Pods are ready before receiving traffic
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
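One way to observe the maxSurge/maxUnavailable behaviour is to trigger an image change and watch the old and new ReplicaSets trade replicas (the new image tag below is only an example):
# Trigger a rolling update and watch the ReplicaSets scale up and down
kubectl set image deployment/rolling-update-deployment app=nginx:1.21
kubectl get replicaset -l app=rolling-app -w
kubectl rollout status deployment/rolling-update-deployment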
2. Recreate
The Recreate strategy terminates all existing Pods before creating new ones; this causes brief downtime but guarantees that two versions never run side by side.
recreate-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-deployment
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: recreate-app
  template:
    metadata:
      labels:
        app: recreate-app
    spec:
      containers:
      - name: app
        image: nginx:1.20
        ports:
        - containerPort: 80
3. Blue-Green Deployment Example
blue-green-deployment.yaml
# Blue version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-blue
  labels:
    app: myapp
    version: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: blue
  template:
    metadata:
      labels:
        app: myapp
        version: blue
    spec:
      containers:
      - name: app
        image: myapp:1.0
        ports:
        - containerPort: 8080
---
# Green version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-green
  labels:
    app: myapp
    version: green
spec:
  replicas: 0  # starts at 0; scaled up when the new version is rolled out
  selector:
    matchLabels:
      app: myapp
      version: green
  template:
    metadata:
      labels:
        app: myapp
        version: green
    spec:
      containers:
      - name: app
        image: myapp:2.0
        ports:
        - containerPort: 8080
---
# Service (traffic is switched by changing the selector)
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: myapp
    version: blue  # change to green to complete the blue-green switch
  ports:
  - port: 80
    targetPort: 8080
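The actual cutover is done by scaling up the green Deployment and then repointing the Service selector. A sketch of the switch (and of the instant rollback) with kubectl, using the names from the manifests above:
# Bring up the green version alongside blue
kubectl scale deployment app-green --replicas=3
kubectl rollout status deployment/app-green
# Switch traffic: point the Service at the green Pods
kubectl patch service app-service -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'
# Roll back instantly by pointing the selector back at blue
kubectl patch service app-service -p '{"spec":{"selector":{"app":"myapp","version":"blue"}}}'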
4.4 Scaling Management
1. Manual Scaling
scaling-commands.sh
#!/bin/bash
echo "=== Deployment scaling operations ==="
DEPLOYMENT_NAME="nginx-deployment"
# Show current replica count
echo "1. Current replica count:"
kubectl get deployment $DEPLOYMENT_NAME
# Scale up to 5 replicas
echo -e "\n2. Scaling up to 5 replicas:"
kubectl scale deployment $DEPLOYMENT_NAME --replicas=5
# Wait for the scale-up to complete
echo -e "\n3. Waiting for the scale-up to complete:"
kubectl rollout status deployment $DEPLOYMENT_NAME
# Show Pod status
echo -e "\n4. Pod status:"
kubectl get pods -l app=nginx
# Scale down to 2 replicas
echo -e "\n5. Scaling down to 2 replicas:"
kubectl scale deployment $DEPLOYMENT_NAME --replicas=2
# Wait for the scale-down to complete
echo -e "\n6. Waiting for the scale-down to complete:"
kubectl rollout status deployment $DEPLOYMENT_NAME
# Show final state
echo -e "\n7. Final state:"
kubectl get deployment $DEPLOYMENT_NAME
kubectl get pods -l app=nginx
echo -e "\n=== Scaling operations complete ==="
2. Horizontal Pod Autoscaling (HPA)
hpa-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hpa-app
  template:
    metadata:
      labels:
        app: hpa-app
    spec:
      containers:
      - name: app
        image: nginx:1.20
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 200m
            memory: 256Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-deployment-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  # CPU utilization
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  # Memory utilization
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  # Custom metric (requires a metrics adapter that exposes it)
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "100"
  # Scaling behavior
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 10
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
      - type: Percent
        value: 50
        periodSeconds: 60
      - type: Pods
        value: 2
        periodSeconds: 60
      selectPolicy: Max
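The resource metrics above require the metrics server to be installed, and the Pods-type metric additionally requires a custom metrics adapter. Assuming the metrics server is present, the HPA can be inspected, or a CPU-only equivalent created imperatively:
# Inspect the HPA created from the manifest above
kubectl get hpa hpa-deployment-hpa
kubectl describe hpa hpa-deployment-hpa
# Imperative alternative that covers only the CPU target
# (this creates a separate HPA object, so use one approach or the other)
kubectl autoscale deployment hpa-deployment --cpu-percent=70 --min=2 --max=10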
3. Vertical Pod Autoscaling (VPA)
vpa-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vpa-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: vpa-app
  template:
    metadata:
      labels:
        app: vpa-app
    spec:
      containers:
      - name: app
        image: nginx:1.20
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 200m
            memory: 256Mi
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: vpa-deployment-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vpa-deployment
  updatePolicy:
    updateMode: "Auto"  # Auto, Recreate, Initial, Off
  resourcePolicy:
    containerPolicies:
    - containerName: app
      minAllowed:
        cpu: 50m
        memory: 64Mi
      maxAllowed:
        cpu: 500m
        memory: 512Mi
      controlledResources: ["cpu", "memory"]
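Note that VPA is not part of core Kubernetes; the CRD and its controllers must be installed separately. Assuming they are, the recommendations it produces can be inspected like this:
# View the VPA object and its current resource recommendations
kubectl get vpa vpa-deployment-vpa
kubectl describe vpa vpa-deployment-vpa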
4.5 Updates and Rollbacks
1. Updating a Deployment
update-deployment.sh
#!/bin/bash
echo "=== Deployment update operations ==="
DEPLOYMENT_NAME="nginx-deployment"
# Show current state
echo "1. Current state:"
kubectl get deployment $DEPLOYMENT_NAME
kubectl describe deployment $DEPLOYMENT_NAME | grep Image
# Update the image
echo -e "\n2. Updating image to nginx:1.21:"
kubectl set image deployment/$DEPLOYMENT_NAME nginx=nginx:1.21
# Alternatively, use a patch:
# kubectl patch deployment $DEPLOYMENT_NAME -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"nginx:1.21"}]}}}}'
# Watch the rollout
echo -e "\n3. Rollout status:"
kubectl rollout status deployment $DEPLOYMENT_NAME
# Show rollout history
echo -e "\n4. Rollout history:"
kubectl rollout history deployment $DEPLOYMENT_NAME
# Show details of a specific revision
echo -e "\n5. Revision details:"
kubectl rollout history deployment $DEPLOYMENT_NAME --revision=2
# Verify the update
echo -e "\n6. Verifying the update:"
kubectl describe deployment $DEPLOYMENT_NAME | grep Image
kubectl get pods -l app=nginx
echo -e "\n=== Update operations complete ==="
2. Rollback Operations
rollback-deployment.sh
#!/bin/bash
echo "=== Deployment rollback operations ==="
DEPLOYMENT_NAME="nginx-deployment"
# Show rollout history
echo "1. Rollout history:"
kubectl rollout history deployment $DEPLOYMENT_NAME
# Roll back to the previous revision
echo -e "\n2. Rolling back to the previous revision:"
kubectl rollout undo deployment $DEPLOYMENT_NAME
# Wait for the rollback to complete
echo -e "\n3. Waiting for the rollback to complete:"
kubectl rollout status deployment $DEPLOYMENT_NAME
# Roll back to a specific revision
echo -e "\n4. Rolling back to a specific revision (revision 1):"
kubectl rollout undo deployment $DEPLOYMENT_NAME --to-revision=1
# Wait for the rollback to complete
echo -e "\n5. Waiting for the rollback to complete:"
kubectl rollout status deployment $DEPLOYMENT_NAME
# Verify the rollback
echo -e "\n6. Verifying the rollback:"
kubectl describe deployment $DEPLOYMENT_NAME | grep Image
kubectl get pods -l app=nginx
# Show the latest history
echo -e "\n7. Latest history:"
kubectl rollout history deployment $DEPLOYMENT_NAME
echo -e "\n=== Rollback operations complete ==="
3. Pausing and Resuming a Rollout
pause-resume-deployment.sh
#!/bin/bash
echo "=== Deployment pause and resume operations ==="
DEPLOYMENT_NAME="nginx-deployment"
# Pause the rollout
echo "1. Pausing the rollout:"
kubectl rollout pause deployment $DEPLOYMENT_NAME
# Make several changes (no rolling update is triggered while paused)
echo -e "\n2. Applying multiple changes:"
kubectl set image deployment/$DEPLOYMENT_NAME nginx=nginx:1.21
kubectl set resources deployment $DEPLOYMENT_NAME -c nginx --limits=cpu=200m,memory=256Mi
# Check the state (the rollout should be reported as paused)
echo -e "\n3. Current state:"
kubectl get deployment $DEPLOYMENT_NAME
kubectl rollout status deployment $DEPLOYMENT_NAME
# Resume the rollout
echo -e "\n4. Resuming the rollout:"
kubectl rollout resume deployment $DEPLOYMENT_NAME
# Wait for the update to complete
echo -e "\n5. Waiting for the update to complete:"
kubectl rollout status deployment $DEPLOYMENT_NAME
# Show the final state
echo -e "\n6. Final state:"
kubectl get deployment $DEPLOYMENT_NAME
kubectl describe deployment $DEPLOYMENT_NAME
echo -e "\n=== Pause and resume operations complete ==="
4.6 Advanced Configuration
1. Multi-Environment Deployments
multi-env-deployment.yaml
# Development environment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-dev
  namespace: development
  labels:
    app: myapp
    env: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      env: dev
  template:
    metadata:
      labels:
        app: myapp
        env: dev
    spec:
      containers:
      - name: app
        image: myapp:dev
        env:
        - name: ENV
          value: "development"
        - name: LOG_LEVEL
          value: "debug"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 200m
            memory: 256Mi
---
# Testing environment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-test
  namespace: testing
  labels:
    app: myapp
    env: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      env: test
  template:
    metadata:
      labels:
        app: myapp
        env: test
    spec:
      containers:
      - name: app
        image: myapp:test
        env:
        - name: ENV
          value: "testing"
        - name: LOG_LEVEL
          value: "info"
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            cpu: 400m
            memory: 512Mi
---
# Production environment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-prod
  namespace: production
  labels:
    app: myapp
    env: prod
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
      env: prod
  template:
    metadata:
      labels:
        app: myapp
        env: prod
    spec:
      containers:
      - name: app
        image: myapp:1.0
        env:
        - name: ENV
          value: "production"
        - name: LOG_LEVEL
          value: "warn"
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: 1000m
            memory: 1Gi
        # Additional health checks for production
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
2. Canary Deployment
canary-deployment.yaml
# Stable version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-stable
  labels:
    app: myapp
    version: stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: myapp
      version: stable
  template:
    metadata:
      labels:
        app: myapp
        version: stable
    spec:
      containers:
      - name: app
        image: myapp:1.0
        ports:
        - containerPort: 8080
---
# Canary version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-canary
  labels:
    app: myapp
    version: canary
spec:
  replicas: 1  # roughly 10% of traffic
  selector:
    matchLabels:
      app: myapp
      version: canary
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
      - name: app
        image: myapp:2.0
        ports:
        - containerPort: 8080
---
# Service (routes to both versions)
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: myapp  # version is not specified, so all versions receive traffic
  ports:
  - port: 80
    targetPort: 8080
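With this setup the split is only approximate and is controlled by the replica counts behind the shared Service. Shifting more traffic to the canary, or promoting it, is just a matter of scaling and updating images, for example:
# Move from ~10% to ~30% canary traffic (3 canary vs 7 stable Pods)
kubectl scale deployment app-canary --replicas=3
kubectl scale deployment app-stable --replicas=7
# Promote: move the stable Deployment to the new image and retire the canary
kubectl set image deployment/app-stable app=myapp:2.0
kubectl scale deployment app-canary --replicas=0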
3. A/B Testing Deployment
ab-testing-deployment.yaml
# Version A
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-version-a
  labels:
    app: myapp
    version: a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: a
  template:
    metadata:
      labels:
        app: myapp
        version: a
    spec:
      containers:
      - name: app
        image: myapp:version-a
        env:
        - name: FEATURE_FLAG
          value: "false"
        ports:
        - containerPort: 8080
---
# Version B
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-version-b
  labels:
    app: myapp
    version: b
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: b
  template:
    metadata:
      labels:
        app: myapp
        version: b
    spec:
      containers:
      - name: app
        image: myapp:version-b
        env:
        - name: FEATURE_FLAG
          value: "true"
        ports:
        - containerPort: 8080
---
# Service for version A
apiVersion: v1
kind: Service
metadata:
  name: app-service-a
spec:
  selector:
    app: myapp
    version: a
  ports:
  - port: 80
    targetPort: 8080
---
# Service for version B
apiVersion: v1
kind: Service
metadata:
  name: app-service-b
spec:
  selector:
    app: myapp
    version: b
  ports:
  - port: 80
    targetPort: 8080
---
# Traffic splitting with Ingress (ingress-nginx canary pattern requires two Ingresses)
# Primary Ingress: routes to version A by default
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ab-testing-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service-a
            port:
              number: 80
---
# Canary Ingress: sends 50% of the traffic for the same host to version B
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ab-testing-ingress-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service-b
            port:
              number: 80
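A quick way to sanity-check the 50/50 split, assuming myapp.example.com resolves to the ingress-nginx controller and that the application exposes a /version endpoint reporting which build served the request (that endpoint is an assumption for illustration):
# Send a batch of requests; roughly half should be served by each version
for i in $(seq 1 20); do
  curl -s http://myapp.example.com/version
done | sort | uniq -c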
4.7 Monitoring and Troubleshooting
1. Monitoring Script
monitor-deployment.sh
#!/bin/bash
DEPLOYMENT_NAME=$1
if [ -z "$DEPLOYMENT_NAME" ]; then
  echo "Usage: $0 <deployment-name>"
  exit 1
fi
echo "=== Deployment monitoring: $DEPLOYMENT_NAME ==="
# Basic information
echo "1. Deployment basic information:"
kubectl get deployment $DEPLOYMENT_NAME -o wide
# ReplicaSet information
echo -e "\n2. ReplicaSet information:"
kubectl get replicaset -l app=$(kubectl get deployment $DEPLOYMENT_NAME -o jsonpath='{.spec.selector.matchLabels.app}')
# Pod status
echo -e "\n3. Pod status:"
kubectl get pods -l app=$(kubectl get deployment $DEPLOYMENT_NAME -o jsonpath='{.spec.selector.matchLabels.app}') -o wide
# Events
echo -e "\n4. Related events:"
kubectl get events --field-selector involvedObject.name=$DEPLOYMENT_NAME --sort-by='.lastTimestamp'
# Resource usage
echo -e "\n5. Resource usage:"
kubectl top pods -l app=$(kubectl get deployment $DEPLOYMENT_NAME -o jsonpath='{.spec.selector.matchLabels.app}') 2>/dev/null || echo "Metrics server is not installed"
# HPA status (if any)
echo -e "\n6. HPA status:"
kubectl get hpa | grep $DEPLOYMENT_NAME || echo "No HPA configured"
# Rollout history
echo -e "\n7. Rollout history:"
kubectl rollout history deployment $DEPLOYMENT_NAME
# Current rollout status
echo -e "\n8. Current rollout status:"
kubectl rollout status deployment $DEPLOYMENT_NAME
echo -e "\n=== Monitoring complete ==="
2. Troubleshooting Script
troubleshoot-deployment.sh
#!/bin/bash
DEPLOYMENT_NAME=$1
if [ -z "$DEPLOYMENT_NAME" ]; then
  echo "Usage: $0 <deployment-name>"
  exit 1
fi
echo "=== Deployment troubleshooting: $DEPLOYMENT_NAME ==="
# Resolve the app label once and reuse it below
APP_LABEL=$(kubectl get deployment $DEPLOYMENT_NAME -o jsonpath='{.spec.selector.matchLabels.app}')
# Check Deployment status
echo "1. Deployment status:"
kubectl describe deployment $DEPLOYMENT_NAME
# Check ReplicaSet status
echo -e "\n2. ReplicaSet status:"
for rs in $(kubectl get replicaset -l app=$APP_LABEL -o jsonpath='{.items[*].metadata.name}'); do
  echo -e "\nReplicaSet: $rs"
  kubectl describe replicaset $rs
done
# Check Pod status
echo -e "\n3. Pod status:"
for pod in $(kubectl get pods -l app=$APP_LABEL -o jsonpath='{.items[*].metadata.name}'); do
  echo -e "\nPod: $pod"
  kubectl describe pod $pod
  echo -e "\nPod logs:"
  kubectl logs $pod --tail=20
done
# Check the Service (if any)
echo -e "\n4. Service check:"
SERVICE_NAME=$(kubectl get service -l app=$APP_LABEL -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ ! -z "$SERVICE_NAME" ]; then
  kubectl describe service $SERVICE_NAME
else
  echo "No related Service found"
fi
# Check network connectivity
echo -e "\n5. Network connectivity check:"
POD_NAME=$(kubectl get pods -l app=$APP_LABEL -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ ! -z "$POD_NAME" ]; then
  echo "Testing Pod network connectivity:"
  kubectl exec $POD_NAME -- nslookup kubernetes.default.svc.cluster.local || echo "DNS resolution failed"
  kubectl exec $POD_NAME -- ping -c 3 8.8.8.8 || echo "External connectivity failed"
fi
# Check node resource allocation
echo -e "\n6. Node resource check:"
kubectl describe nodes | grep -A 5 "Allocated resources" || echo "Unable to get node resource information"
echo -e "\n=== Troubleshooting complete ==="
3. Performance Testing
performance-test.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: performance-test
spec:
  template:
    spec:
      containers:
      - name: performance-test
        image: busybox:1.35
        command: ['sh', '-c']
        args:
        - |
          echo "Starting performance test..."
          # Download the load-testing tool
          # (assumes a prebuilt wrk binary is reachable at this URL; if it is not,
          # use an image that already bundles a load-testing tool instead)
          wget -O /tmp/wrk https://github.com/wg/wrk/releases/download/4.2.0/wrk
          chmod +x /tmp/wrk
          # Resolve the Service IP
          SERVICE_IP=$(nslookup app-service | grep Address | tail -1 | awk '{print $2}')
          # Run the performance test
          echo "Test target: http://$SERVICE_IP"
          /tmp/wrk -t12 -c400 -d30s http://$SERVICE_IP/
          echo "Performance test finished"
      restartPolicy: Never
  backoffLimit: 4
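The Job can be run and inspected with standard commands; as noted in the comments above, the prebuilt wrk binary URL inside the Job is an assumption and may need to be replaced:
kubectl apply -f performance-test.yaml
# Follow the load-test output
kubectl logs -f job/performance-test
# Clean up when finished
kubectl delete job performance-test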
Summary
This chapter covered the core concepts and usage of Deployments and ReplicaSets, including:
Core Concepts
- ReplicaSet - controller that maintains the desired number of Pod replicas
- Deployment - higher-level controller that manages ReplicaSets and Pods
- Update strategies - rolling update, recreate, and other rollout strategies
Deployment Strategies
- Rolling update - gradual updates with zero downtime
- Blue-green deployment - fast switching between versions
- Canary deployment - gradual traffic shifting
- A/B testing - running versions in parallel for comparison
Scaling Management
- Manual scaling - adjusting replica counts with kubectl
- HPA - metric-based horizontal Pod autoscaling
- VPA - vertical Pod autoscaling
Version Management
- Updates - image and configuration updates
- Rollbacks - reverting versions and managing revision history
- Pause and resume - controlling the rollout process
Advanced Features
- Multi-environment deployments - development, testing, and production configurations
- Monitoring and alerting - tracking rollout status
- Troubleshooting - diagnosing and resolving problems
Best Practices
- Resource planning - set sensible resource requests and limits
- Health checks - configure liveness and readiness probes
- Update strategy - choose a rollout strategy that fits the workload
- Monitoring and operations - build a solid monitoring and alerting setup
Caveats
- Version compatibility - ensure application versions are compatible
- Resource management - avoid scheduling failures caused by insufficient resources
- Network configuration - ensure network connectivity
- Storage management - handle the storage needs of stateful applications
In the next chapter we will look at Services and Ingress to see how applications are exposed and load-balanced.