Preface
Using local disks as PVs
Kubernetes has supported local volumes since version 1.10. Workloads (not only StatefulSets) can take full advantage of local disks and get better performance than with remote volumes (such as NAS, NFS, CephFS, or RBD).
Before local volumes existed, StatefulSets could already use local disks by configuring a hostPath volume and pinning pods to specific nodes with nodeSelector or nodeAffinity. The problem with hostPath is that administrators have to manage the directories on every node by hand, which is inconvenient.
Neither hostPath nor local volumes support dynamic provisioning or expansion, and migrating applications to them requires significant changes.
Our project, however, needs PVs/PVCs to be created and expanded dynamically.
This article draws on the following two open-source projects:
https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner
https://github.com/rancher/local-path-provisioner
Test findings:
- The kubernetes-sigs provisioner does not support dynamic provisioning or expansion, and it requires mount points to be created and mounted manually on each node in advance.
- Rancher's local-path-provisioner creates mount points and PVs dynamically.
Both approaches are described below, covering installation and usage; in the end, the local-path-provisioner covered in Chapter 2 is the recommended way to provision PVs dynamically.
Chapter 1: Using sig-storage-local-static-provisioner
1.1 Install from the official repository
git clone https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner.git
cd sig-storage-local-static-provisioner/
git checkout tags/v2.6.0 -b v2.6.0
helm template ./helm/provisioner -f ./helm/provisioner/values.yaml > local-volume-provisioner.generated.yaml
kubectl create -f local-volume-provisioner.generated.yaml
1.2 Create the StorageClass
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-disks
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
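Once the manifest is applied, a quick check that the class registered (a minimal sketch; fast-disks-sc.yaml is an assumed file name for the manifest above):

kubectl apply -f fast-disks-sc.yaml
# The class should show the no-provisioner / WaitForFirstConsumer combination
kubectl get storageclass fast-disks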
1.3 Mount the disks
The provisioner itself does not provide local volumes. Instead, a provisioner pod on each node dynamically "discovers" mount points in a discovery directory: when it finds a mount point under /mnt/fast-disks on its node, it creates a PV whose local.path is that mount point and whose nodeAffinity pins it to that node.
The following script creates the backing directories and exposes them as mount points via mount --bind:
#!/bin/bash
for i in $(seq 1 5); do
  mkdir -p /mnt/fast-disks-bind/vol${i}
  mkdir -p /mnt/fast-disks/vol${i}
  mount --bind /mnt/fast-disks-bind/vol${i} /mnt/fast-disks/vol${i}
done
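To confirm the bind mounts took effect on a node (a minimal sketch):

# vol1 through vol5 should each appear as a mount under /mnt/fast-disks
mount | grep /mnt/fast-disks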
Run the script above on each node to create the mount points.
Shortly after the script has run, querying the PVs shows that they were created automatically.
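For example (a sketch; PV names are generated, so yours will differ):

# Each discovered mount point becomes one PV in the fast-disks StorageClass
kubectl get pv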
1.4 Test that pods can run
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: local-test
spec:
  serviceName: "local-service"
  replicas: 3
  selector:
    matchLabels:
      app: local-test
  template:
    metadata:
      labels:
        app: local-test
    spec:
      containers:
      - name: test-container
        image: busybox
        command:
        - "/bin/sh"
        args:
        - "-c"
        - "sleep 100000"
        volumeMounts:
        - name: local-vol
          mountPath: /tmp
  volumeClaimTemplates:
  - metadata:
      name: local-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "fast-disks"
      resources:
        requests:
          storage: 2Gi
You can verify that all three pods are running normally:
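A minimal check (a sketch; pod names follow the StatefulSet ordinal convention local-test-0 through local-test-2):

# All three replicas should reach Running, each with a Bound PVC
kubectl get pods -l app=local-test
kubectl get pvc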
Chapter 2: Using local-path-provisioner
2.1 Download the YAML file
wget https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.25/deploy/local-path-storage.yaml
2.2 Modify the file
A few places need changes:
2.2.1 Remove debug mode
Remove the --debug flag from the provisioner's startup command.
2.2.2 Change the reclaimPolicy
The default is Delete; changing it to Retain avoids losing data if a PV is deleted by mistake.
2.2.3 Modify the StorageClass
Add an annotation that makes the local-path StorageClass the cluster's default StorageClass, as shown in the snippet below.
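The annotation in question (it also appears in the full manifest below):

metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"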
2.2.4 Modify the ConfigMap at the end
Change the paths value to your own local disk storage directory.
2.2.5 The complete modified local-path-storage.yaml
Note: change the image to your own registry address.
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: local-path-provisioner-role
  namespace: local-path-storage
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "configmaps", "pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: local-path-provisioner-bind
  namespace: local-path-storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: rancher/local-path-provisioner:v0.0.25
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # default StorageClass
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/data/local-path-provisioner"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      priorityClassName: system-node-critical
      tolerations:
        - key: node.kubernetes.io/disk-pressure
          operator: Exists
          effect: NoSchedule
      containers:
        - name: helper-pod
          image: busybox
          imagePullPolicy: IfNotPresent
2.3 Install
kubectl apply -f local-path-storage.yaml
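A sanity check after applying (a sketch; the pod name suffix is generated):

# The provisioner Deployment should have one Running pod
kubectl -n local-path-storage get pods
# The local-path class should be flagged "(default)"
kubectl get storageclass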
2.4 Usage
To use dynamic provisioning, just set storageClassName to local-path in the claim. Here is a test manifest, local-test-pod.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: local-test
spec:
  serviceName: "local-service"
  replicas: 3
  selector:
    matchLabels:
      app: local-test
  template:
    metadata:
      labels:
        app: local-test
    spec:
      containers:
      - name: test-container
        image: busybox
        command:
        - "/bin/sh"
        args:
        - "-c"
        - "sleep 100000"
        volumeMounts:
        - name: local-vol
          mountPath: /tmp
  volumeClaimTemplates:
  - metadata:
      name: local-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-path"
      resources:
        requests:
          storage: 2Gi
Three PVs and three PVCs are created automatically, which you can confirm:
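For example (a sketch; PV names embed generated UIDs, so yours will differ):

# One PV/PVC pair per replica: local-vol-local-test-0 through local-vol-local-test-2
kubectl get pvc
kubectl get pv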
Check where the pods landed:
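For example (the NODE column shows where each replica was scheduled):

kubectl get pods -l app=local-test -o wide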
The three pods were scheduled to 172.20.32.69 (two pods) and 172.20.32.171 (one pod).
Looking at the corresponding directories on the nodes confirms this: two directories were created on 172.20.32.69, and one on 172.20.32.171.
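A hedged sketch of what to expect on a node (the exact directory names embed the generated PV name, namespace, and PVC name, so yours will differ):

# Run on the node itself
ls /data/local-path-provisioner
# e.g. pvc-<uid>_default_local-vol-local-test-0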
2.5 Custom configuration
If needed, the configuration can be customized further, for example with per-node storage paths:
"nodePathMap":[
  {
    "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
    "paths":["/data/local-path-provisioner"]
  },
  {
    "node":"172.20.32.69",
    "paths":["/data/local-path-provisioner", "/data1"]
  },
  {
    "node":"172.20.32.171",
    "paths":[]
  }
]
DEFAULT_PATH_FOR_NON_LISTED_NODES is the default storage location; any node without an explicit entry uses this path.
Node 172.20.32.69 specifies two directories; when a PV is provisioned there, one of the two is picked at random.
Node 172.20.32.171 specifies no paths, so no PVs will be provisioned on that node.
2.6 Automatic configuration reload
You can modify the ConfigMap above directly with kubectl edit, or edit local-path-storage.yaml and run kubectl apply again; local-path-provisioner picks up configuration changes automatically.
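For example (a minimal sketch):

# Edit the live ConfigMap; the provisioner reloads it without a restart
kubectl -n local-path-storage edit configmap local-path-config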