
Deploying Single-Node Redis on K8S with RDB Persistence: Failure Drill and Recovery

2021-10-27 23:03 · 运维小弟 · Server Knowledge

k8s is short for Kubernetes, a name most people will recognize by now: it is a portable orchestration and management tool built for containerized services, and more and more companies are embracing it.


Environment: (screenshot omitted)

Background: persist the files Redis needs to keep by storing them on an NFS volume.

1. Deploy the NFS server

  # Install the NFS service on the storage server to provide NFS storage
  # 1. Install nfs-utils
  yum install nfs-utils                # CentOS
  apt-get install nfs-kernel-server    # or, on Ubuntu
  # 2. Start the service
  systemctl enable nfs-server
  systemctl start nfs-server
  # 3. Create the shared directory
  mkdir /home/nfs
  # 4. Edit the export configuration
  vim /etc/exports
  # Syntax: <shared path> <client>(options)
  # <client> can be an IP, a subnet, a domain name, or * for any host
  /home/nfs *(rw,async,no_root_squash)
  # Re-export and self-check
  exportfs -arv
  # 5. Restart the service
  systemctl restart nfs-server
  # 6. List the NFS exports locally
  # showmount -e <server IP>  (install showmount if the command is missing)
  showmount -e 127.0.0.1
  /home/nfs *
  # 7. Test-mount from a client (every k8s node needs the NFS client installed)
  [root@master-1 ~]# yum install nfs-utils       # CentOS
  [root@master-1 ~]# apt-get install nfs-common  # or, on Ubuntu
  [root@master-1 ~]# mkdir /test
  [root@master-1 ~]# mount -t nfs 172.16.201.209:/home/nfs /test
  # Unmount
  [root@master-1 ~]# umount /test
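The /etc/exports line format above (path, then one or more client(option,option) specs) is easy to get wrong. A small parser sketch, purely illustrative and not part of nfs-utils, shows how such a line breaks down:

```python
import re

def parse_exports_line(line: str):
    """Split an /etc/exports line into (path, [(client, [options]), ...]).

    Handles the common form used above: one path followed by one or more
    client(option,option) specs. Comments and blank lines return None.
    """
    line = line.split("#", 1)[0].strip()
    if not line:
        return None
    path, *clients = line.split()
    parsed = []
    for spec in clients:
        m = re.fullmatch(r"([^()]*)\(([^)]*)\)", spec)
        if m:
            client = m.group(1) or "*"   # a bare "(rw)" applies to every host
            opts = m.group(2).split(",")
        else:
            client, opts = spec, []      # client listed with default options
        parsed.append((client, opts))
    return path, parsed

print(parse_exports_line("/home/nfs *(rw,async,no_root_squash)"))
# -> ('/home/nfs', [('*', ['rw', 'async', 'no_root_squash'])])
```

Here `no_root_squash` is what lets root in the Redis pods write to the share as root; dropping it is safer when the clients do not need root ownership.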

2. Configure dynamic PV provisioning (NFS StorageClass) and create the PVC

# Deploying the NFS auto-PV-provisioner plugin involves four YAML files in total; the official documentation describes them in detail.

https://github.com/kubernetes-incubator/external-storage

  root@k8s-master1:~# mkdir /root/pvc
  root@k8s-master1:~# cd /root/pvc

Create the rbac.yaml file

  root@k8s-master1:pvc# cat rbac.yaml
  kind: ServiceAccount
  apiVersion: v1
  metadata:
    name: nfs-client-provisioner
  ---
  kind: ClusterRole
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: nfs-client-provisioner-runner
  rules:
    - apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
    - apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
    - apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
    - apiGroups: [""]
      resources: ["events"]
      verbs: ["create", "update", "patch"]
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: run-nfs-client-provisioner
  subjects:
    - kind: ServiceAccount
      name: nfs-client-provisioner
      namespace: default
  roleRef:
    kind: ClusterRole
    name: nfs-client-provisioner-runner
    apiGroup: rbac.authorization.k8s.io
  ---
  kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: leader-locking-nfs-client-provisioner
  rules:
    - apiGroups: [""]
      resources: ["endpoints"]
      verbs: ["get", "list", "watch", "create", "update", "patch"]
  ---
  kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: leader-locking-nfs-client-provisioner
  subjects:
    - kind: ServiceAccount
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
  roleRef:
    kind: Role
    name: leader-locking-nfs-client-provisioner
    apiGroup: rbac.authorization.k8s.io

Create the deployment.yaml file

# The official default image may not be downloadable inside China; you can use image: fxkjnj/nfs-client-provisioner:latest instead.

# It defines the NFS server address and the name of the shared directory.

  root@k8s-master1:pvc# cat deployment.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nfs-client-provisioner
  ---
  kind: Deployment
  apiVersion: apps/v1
  metadata:
    name: nfs-client-provisioner
  spec:
    replicas: 1
    strategy:
      type: Recreate
    selector:
      matchLabels:
        app: nfs-client-provisioner
    template:
      metadata:
        labels:
          app: nfs-client-provisioner
      spec:
        serviceAccountName: nfs-client-provisioner
        containers:
          - name: nfs-client-provisioner
            image: fxkjnj/nfs-client-provisioner:latest
            volumeMounts:
              - name: nfs-client-root
                mountPath: /persistentvolumes
            env:
              - name: PROVISIONER_NAME
                value: fuseim.pri/ifs
              - name: NFS_SERVER
                value: 172.16.201.209
              - name: NFS_PATH
                value: /home/nfs
        volumes:
          - name: nfs-client-root
            nfs:
              server: 172.16.201.209
              path: /home/nfs

Create class.yaml

# archiveOnDelete: "true" means that when the PVC is deleted, the backend data is archived instead of being deleted outright.

  root@k8s-master1:pvc# cat class.yaml
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: managed-nfs-storage
  provisioner: fuseim.pri/ifs  # or choose another name, must match deployment's env PROVISIONER_NAME
  parameters:
    archiveOnDelete: "true"

Create pvc.yaml

# storageClassName selects the storage class by name.

# requests.storage: 100Gi specifies how much storage to request.

# Note that the PVC is created in the redis namespace here; if that namespace does not exist yet, create it first with kubectl create namespace redis.

  root@k8s-master1:pvc# cat pvc.yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: nfs-redis
    namespace: redis
  spec:
    storageClassName: "managed-nfs-storage"
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 100Gi

  # Deploy
  root@k8s-master1:pvc# kubectl apply -f .
  # Check the storage class
  root@k8s-master1:pvc# kubectl get sc
  NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
  managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  25h
  # Check the PVC
  root@k8s-master1:pvc# kubectl get pvc -n redis
  NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
  nfs-redis   Bound    pvc-8eacbe25-3875-4f78-91ca-ba83b6967a8a   100Gi      RWX            managed-nfs-storage   21h
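The 100Gi in the PVC is a Kubernetes quantity with a binary suffix (Gi = 2^30), distinct from the decimal G (10^9). A quick sketch of how the common suffixes translate to bytes (this is a simplified stand-in, not the real Kubernetes parser):

```python
# Binary (Ki, Mi, Gi, Ti) and decimal (k, M, G, T) suffixes as Kubernetes defines them
SUFFIXES = {
    "Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40,
    "k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12,
}

def quantity_to_bytes(q: str) -> int:
    """Convert a quantity like '100Gi' or '500M' to bytes (subset of k8s syntax)."""
    for suffix, factor in SUFFIXES.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain integer, already bytes

print(quantity_to_bytes("100Gi"))  # -> 107374182400
```

So the claim requests 100 × 2^30 bytes; since this provisioner just hands out directories on one NFS export, the size is recorded on the PV but not actually enforced.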

3. Write the Redis YAML files

  root@k8s-master1:~# mkdir /root/redis
  root@k8s-master1:~# cd /root/redis

Write the redis.conf configuration file and mount it into the container as a ConfigMap.

# requirepass sets the Redis password.

# save 5 1 means: if at least 1 key has changed within 5 seconds, write a snapshot to the dump.rdb file.

# appendonly no disables AOF, so the dump.rdb snapshot is what will be used to restore the Redis data later.

# Note that the namespace is redis.

  root@k8s-master1:redis# cat redis-configmap-rdb.yml
  kind: ConfigMap
  apiVersion: v1
  metadata:
    name: redis-config
    namespace: redis
    labels:
      app: redis
  data:
    redis.conf: |-
      protected-mode no
      port 6379
      tcp-backlog 511
      timeout 0
      tcp-keepalive 300
      daemonize no
      supervised no
      pidfile /data/redis_6379.pid
      loglevel notice
      logfile ""
      databases 16
      always-show-logo yes
      save 5 1
      save 300 10
      save 60 10000
      stop-writes-on-bgsave-error yes
      rdbcompression yes
      rdbchecksum yes
      dbfilename dump.rdb
      dir /data
      replica-serve-stale-data yes
      replica-read-only yes
      repl-diskless-sync no
      repl-diskless-sync-delay 5
      repl-disable-tcp-nodelay no
      replica-priority 100
      requirepass 123
      lazyfree-lazy-eviction no
      lazyfree-lazy-expire no
      lazyfree-lazy-server-del no
      replica-lazy-flush no
      appendonly no
      appendfilename "appendonly.aof"
      appendfsync everysec
      no-appendfsync-on-rewrite no
      auto-aof-rewrite-percentage 100
      auto-aof-rewrite-min-size 64mb
      aof-load-truncated yes
      aof-use-rdb-preamble yes
      lua-time-limit 5000
      slowlog-log-slower-than 10000
      slowlog-max-len 128
      latency-monitor-threshold 0
      notify-keyspace-events ""
      hash-max-ziplist-entries 512
      hash-max-ziplist-value 64
      list-max-ziplist-size -2
      list-compress-depth 0
      set-max-intset-entries 512
      zset-max-ziplist-entries 128
      zset-max-ziplist-value 64
      hll-sparse-max-bytes 3000
      stream-node-max-bytes 4096
      stream-node-max-entries 100
      activerehashing yes
      client-output-buffer-limit normal 0 0 0
      client-output-buffer-limit replica 256mb 64mb 60
      client-output-buffer-limit pubsub 32mb 8mb 60
      hz 10
      dynamic-hz yes
      aof-rewrite-incremental-fsync yes
      rdb-save-incremental-fsync yes
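The three save lines in this config (save 5 1, save 300 10, save 60 10000) each mean "snapshot if at least N changes have accumulated and S seconds have passed since the last save"; a snapshot fires as soon as any one rule is satisfied. A minimal simulation of that trigger logic:

```python
# Save points from the redis.conf above: (seconds, minimum number of changes)
SAVE_POINTS = [(5, 1), (300, 10), (60, 10000)]

def should_snapshot(seconds_since_last_save: int, dirty_changes: int) -> bool:
    """True if any save point is satisfied -- a sketch of the periodic
    check Redis runs to decide whether to background-save dump.rdb."""
    return any(
        seconds_since_last_save >= secs and dirty_changes >= changes
        for secs, changes in SAVE_POINTS
    )

print(should_snapshot(6, 1))    # True: "save 5 1" fires
print(should_snapshot(3, 500))  # False: no time window has elapsed yet
```

The aggressive save 5 1 rule is what makes the drill below work: almost every write lands in dump.rdb within seconds, at the cost of frequent background saves.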

Write redis-deployment.yml

# Note that the namespace is redis.

  root@k8s-master1:redis# cat redis-deployment.yml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: redis
    namespace: redis
    labels:
      app: redis
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: redis
    template:
      metadata:
        labels:
          app: redis
      spec:
        # Init container: adjust system settings to clear the warnings Redis prints at startup
        initContainers:
          - name: system-init
            image: busybox:1.32
            imagePullPolicy: IfNotPresent
            command:
              - "sh"
              - "-c"
              - "echo 2048 > /proc/sys/net/core/somaxconn && echo never > /sys/kernel/mm/transparent_hugepage/enabled"
            securityContext:
              privileged: true
              runAsUser: 0
            volumeMounts:
              - name: sys
                mountPath: /sys
        containers:
          - name: redis
            image: redis:5.0.8
            command:
              - "sh"
              - "-c"
              - "redis-server /usr/local/etc/redis/redis.conf"
            ports:
              - containerPort: 6379
            resources:
              limits:
                cpu: 1000m
                memory: 1024Mi
              requests:
                cpu: 1000m
                memory: 1024Mi
            livenessProbe:
              tcpSocket:
                port: 6379
              initialDelaySeconds: 300
              timeoutSeconds: 1
              periodSeconds: 10
              successThreshold: 1
              failureThreshold: 3
            readinessProbe:
              tcpSocket:
                port: 6379
              initialDelaySeconds: 5
              timeoutSeconds: 1
              periodSeconds: 10
              successThreshold: 1
              failureThreshold: 3
            volumeMounts:
              - name: data
                mountPath: /data
              - name: config
                mountPath: /usr/local/etc/redis/redis.conf
                subPath: redis.conf
        volumes:
          - name: data
            persistentVolumeClaim:
              claimName: nfs-redis
          - name: config
            configMap:
              name: redis-config
          - name: sys
            hostPath:
              path: /sys
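The liveness and readiness probes in this Deployment are tcpSocket probes: the kubelet simply tries to open a TCP connection to port 6379 and counts success or failure. That check can be approximated like this (a sketch, not kubelet code):

```python
import socket

def tcp_probe(host: str, port: int, timeout_seconds: float = 1.0) -> bool:
    """Return True if a TCP connection succeeds within the timeout,
    mirroring a tcpSocket probe configured with timeoutSeconds: 1."""
    try:
        with socket.create_connection((host, port), timeout=timeout_seconds):
            return True
    except OSError:
        return False
```

With periodSeconds: 10 and failureThreshold: 3, the pod is only restarted (liveness) or pulled from Service endpoints (readiness) after three consecutive failed connects, so a single slow save does not kill the pod.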

Write redis-service.yml

# Note that the namespace is redis.

  # Deploy
  root@k8s-master1:~/kubernetes/redis# kubectl get pod -n redis
  NAME                     READY   STATUS    RESTARTS   AGE
  redis-65f75db6bc-5skgr   1/1     Running   0          21h
  redis-65f75db6bc-75m8m   1/1     Running   0          21h
  redis-65f75db6bc-cp6cx   1/1     Running   0          21h
  root@k8s-master1:~/kubernetes/redis# kubectl get svc -n redis
  NAME          TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
  redis-front   NodePort   10.0.0.169   <none>        6379:36379/TCP   22h

4. Test and access

Use a Redis client tool to write a few test keys. (screenshots omitted)

Delete a pod; after a replacement pod is created automatically, check whether the keys still exist.

  root@k8s-master1:~# kubectl get pods -n redis
  NAME                     READY   STATUS    RESTARTS   AGE
  redis-65f75db6bc-5skgr   1/1     Running   0          5d20h
  redis-65f75db6bc-75m8m   1/1     Running   0          5d20h
  redis-65f75db6bc-cp6cx   1/1     Running   0          5d20h
  root@k8s-master1:~# kubectl delete -n redis pod redis-65f75db6bc-5skgr
  pod "redis-65f75db6bc-5skgr" deleted
  # After the pod is deleted, the Deployment pulls up a new pod to restore the replica count
  root@k8s-master1:~# kubectl get pods -n redis
  NAME                     READY   STATUS    RESTARTS   AGE
  redis-65f75db6bc-tnnxp   1/1     Running   0          54s
  redis-65f75db6bc-75m8m   1/1     Running   0          5d20h
  redis-65f75db6bc-cp6cx   1/1     Running   0          5d20h

Check whether dump.rdb now exists in the NFS shared directory. (screenshot omitted)

5. Failure drill and recovery

(1) Back up the data

If the source Redis has persistence configured, copy dump.rdb straight from the persistence directory:

Go to the persistence directory and copy out the dump.rdb file.

If the source Redis does not have persistence enabled, generate dump.rdb inside the container and copy it out:

Enter the container: kubectl exec -it redis-xxx /bin/bash -n redis

Open the Redis CLI: redis-cli

Authenticate: auth 123

Save the data, generating the dump.rdb file: save

Leave the Redis CLI: quit

Leave the container: exit

Copy the file out of the container to the local machine: kubectl cp -n redis Pod_Name:/data/dump.rdb ./

Transfer it to a remote host: scp dump.rdb root@<target server>:/<directory>
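The no-persistence backup path above can be scripted. This sketch only assembles the command lines (the pod name, remote host, and backup directory are placeholder assumptions), so the plan can be reviewed before anything runs:

```python
def build_backup_commands(pod: str, namespace: str, remote: str, remote_dir: str):
    """Return the command lines for the backup steps above: force a SAVE
    inside the pod, copy dump.rdb out, then scp it off-box.
    All arguments are illustrative placeholders."""
    return [
        # Trigger an RDB snapshot (-a 123 matches requirepass in redis.conf)
        ["kubectl", "exec", "-n", namespace, pod, "--",
         "redis-cli", "-a", "123", "save"],
        # Copy the snapshot out of the container
        ["kubectl", "cp", "-n", namespace, f"{pod}:/data/dump.rdb", "./dump.rdb"],
        # Ship it to the remote host
        ["scp", "./dump.rdb", f"root@{remote}:{remote_dir}"],
    ]

for cmd in build_backup_commands("redis-65f75db6bc-75m8m", "redis",
                                 "172.16.201.210", "/backup"):
    print(" ".join(cmd))
```

Passing the password with redis-cli -a collapses the interactive auth/save/quit steps into one exec; each list could then be handed to subprocess.run.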

(2) Restore the data

  • Stop Redis by deleting the Deployment you created.
  • Copy dump.rdb into the target Redis's persistence directory (note: this overwrites the target Redis's data).
  • Restart the pods: kubectl apply -f redis-deployment.yml
  # Copy the dump.rdb file from the persistence directory to /root
  cp dump.rdb /root
  # Stop Redis, i.e. delete the Deployment
  root@k8s-master1:~/kubernetes/redis# kubectl delete -f redis-deployment.yml
  deployment.apps "redis" deleted
  root@k8s-master1:~/kubernetes/redis# kubectl get pods -n redis
  No resources found in redis namespace.
  # Copy dump.rdb into the target Redis's persistence directory
  cp /root/dump.rdb /home/nfs/redis-nfs-redis-pvc-8eacbe25-3875-4f78-91ca-ba83b6967a8a
  # Restart the pods
  root@k8s-master1:~/kubernetes/redis# kubectl apply -f redis-deployment.yml
  deployment.apps/redis created
  root@k8s-master1:~/kubernetes/redis# kubectl get pods -n redis
  NAME                     READY   STATUS     RESTARTS   AGE
  redis-65f75db6bc-5jx4m   0/1     Init:0/1   0          3s
  redis-65f75db6bc-68jf5   0/1     Init:0/1   0          3s
  redis-65f75db6bc-b9gvk   0/1     Init:0/1   0          3s
  root@k8s-master1:~/kubernetes/redis# kubectl get pods -n redis
  NAME                     READY   STATUS    RESTARTS   AGE
  redis-65f75db6bc-5jx4m   1/1     Running   0          20s
  redis-65f75db6bc-68jf5   1/1     Running   0          20s
  redis-65f75db6bc-b9gvk   1/1     Running   0          20s
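The long directory name in the cp step above is not arbitrary: the nfs-client provisioner creates one subdirectory per PV under the export, named ${namespace}-${pvcName}-${pvName} (assuming the external-storage nfs-client behavior; it matches the path observed here). A sketch of that naming:

```python
def provisioned_dir(nfs_root: str, namespace: str, pvc_name: str, pv_name: str) -> str:
    """Reproduce the nfs-client provisioner's per-PV directory name:
    ${namespace}-${pvcName}-${pvName} under the NFS export root."""
    return f"{nfs_root}/{namespace}-{pvc_name}-{pv_name}"

print(provisioned_dir("/home/nfs", "redis", "nfs-redis",
                      "pvc-8eacbe25-3875-4f78-91ca-ba83b6967a8a"))
# -> /home/nfs/redis-nfs-redis-pvc-8eacbe25-3875-4f78-91ca-ba83b6967a8a
```

So as long as the PVC (and hence the PV) survives the Deployment delete/apply, the restored dump.rdb lands exactly where the new pods mount /data.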

(3) Verify the data: all of the source Redis's data has reappeared. (screenshot omitted)

Original article: https://www.toutiao.com/a7023273935886205476/
