misterli's Blog.

The Most Complete Guide: Five Ways to Use NFS as a Storage Volume in Kubernetes

2021/08/12

We can mount an NFS (Network File System) share into a Pod. Unlike emptyDir, which is wiped when the Pod is deleted, the contents of an nfs volume are preserved when the Pod is removed; the volume is merely unmounted. This means an nfs volume can be pre-populated with data, that data can be shared between Pods, and an NFS volume can be mounted by multiple Pods at the same time.

Note: before you can use an NFS volume, you must run your own NFS server and export the target share.

Although upstream does not particularly recommend NFS-backed PVs, in practice we sometimes have to use nfs-type volumes to store data for one reason or another.

Below are the ways to use NFS as a storage volume in Kubernetes:

  1. Use NFS directly in a Deployment/StatefulSet
  2. Create an nfs-type PersistentVolume and bind a PersistentVolumeClaim to it
  3. Provide a StorageClass with csi-driver-nfs
  4. Provide a StorageClass with NFS Subdir External Provisioner
  5. Provide a StorageClass with nfs-ganesha-server-and-external-provisioner

We have already set up an NFS server on 172.26.204.144 and exported the following directories.

[root@node-02 ~]# showmount -e 172.26.204.144
Export list for 172.26.204.144:
/opt/nfs-deployment 172.26.0.0/16
/opt/kubernetes-nfs 172.26.0.0/16
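
For reference, a minimal sketch of how such a share might be exported on the server side (assuming a CentOS-style host with nfs-utils; adjust the network range to match your cluster):

# on 172.26.204.144 (assumption: CentOS/RHEL with nfs-utils)
yum install -y nfs-utils
mkdir -p /opt/nfs-deployment /opt/kubernetes-nfs
cat >> /etc/exports <<'EOF'
/opt/nfs-deployment 172.26.0.0/16(rw,sync,no_root_squash)
/opt/kubernetes-nfs 172.26.0.0/16(rw,sync,no_root_squash)
EOF
systemctl enable --now nfs-server
exportfs -rav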

Using NFS directly in a Deployment/StatefulSet

In the example below, nginx uses an nfs volume to persist its /usr/share/nginx/html directory:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        nfs:
          path: /opt/nfs-deployment
          server: 172.26.204.144

Inside the Pod we can see that 172.26.204.144:/opt/nfs-deployment is simply mounted onto /usr/share/nginx/html:

[root@master-01 test]# kubectl exec -it nginx-deployment-6dfb66cbd9-lv5c7  bash
root@nginx-deployment-6dfb66cbd9-lv5c7:/usr/share/nginx/html# mount |grep 172
172.26.204.144:/opt/nfs-deployment on /usr/share/nginx/html type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.26.204.144,local_lock=none,addr=172.26.204.144)
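
The mount is performed by the kubelet on the node where the Pod is scheduled, so the NFS client utilities must be installed on every node or the Pod will fail with a mount error. A minimal sketch for CentOS-family nodes (package name assumed; use nfs-common on Debian/Ubuntu):

# run on every Kubernetes node
yum install -y nfs-utils
# sanity check that the export is reachable from the node
showmount -e 172.26.204.144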

If we now create a date.html file under /opt/nfs-deployment on 172.26.204.144, the Pod can read it immediately:

# on 172.26.204.144
[root@node-02 ~]# date > /opt/nfs-deployment/date.html
[root@node-02 ~]# cat /opt/nfs-deployment/date.html
Sun Aug 8 01:36:15 CST 2021
# inside the pod
root@nginx-deployment-6dfb66cbd9-lv5c7:/usr/share/nginx/html# cd /usr/share/nginx/html/
root@nginx-deployment-6dfb66cbd9-lv5c7:/usr/share/nginx/html# ls
date.html

Creating an nfs-type PersistentVolume

[root@master-01 test]# cat pv-nfs.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /opt/nfs-deployment
    server: 172.26.204.144
[root@master-01 test]# kubectl apply -f pv-nfs.yaml
persistentvolume/pv-nfs created
[root@master-01 test]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs 10Gi RWX Retain Available 4s

A freshly created PV that no PVC has claimed yet stays in the Available state, for example:

[root@master-01 test]# kubectl get pv|grep nfs-pv1
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv1 5Gi RWO Recycle Available 6s

Once we create a PVC that claims the PV, the PV's status changes to Bound.

# The PVC is matched to the PV above automatically, based on size and access mode
[root@master-01 test]# cat pvc-nfs.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
[root@master-01 test]# kubectl apply -f pvc-nfs.yaml
persistentvolumeclaim/pvc-nfs created
[root@master-01 test]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs Bound pv-nfs 10Gi RWX 2s
[root@master-01 test]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs 10Gi RWX Retain Bound default/pvc-nfs 70s

Now we can create a workload that uses this PVC.

[root@master-01 test]# cat dp-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-nfs
[root@master-01 test]# kubectl apply -f dp-pvc.yaml
deployment.apps/busybox created
# This still points at the same nfs directory, so the file created earlier is visible here too
[root@master-01 test]# kubectl exec -it busybox-7cdd999d7d-dwcbq -- sh
/ # cat /data/date.html
Sun Aug 8 01:36:15 CST 2021

NFS CSI Driver

The NFS CSI Driver is the sample CSI driver provided by the Kubernetes community and implements only the minimal CSI feature set. The driver itself is just the communication layer between cluster resources and the NFS server; it requires Kubernetes 1.14 or later and a pre-existing NFS server.

Installation

# RBAC rules
[root@master-01 deploy]# cat rbac-csi-nfs-controller.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-nfs-controller-sa
namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-external-provisioner-role
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-csi-provisioner-binding
subjects:
- kind: ServiceAccount
name: csi-nfs-controller-sa
namespace: kube-system
roleRef:
kind: ClusterRole
name: nfs-external-provisioner-role
apiGroup: rbac.authorization.k8s.io
# CSIDriver object
---
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
name: nfs.csi.k8s.io
spec:
attachRequired: false
volumeLifecycleModes:
- Persistent

# The controller consists of the CSI plugin + csi-provisioner + livenessprobe
[root@master-01 deploy]# cat csi-nfs-controller.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: csi-nfs-controller
namespace: kube-system
spec:
replicas: 2
selector:
matchLabels:
app: csi-nfs-controller
template:
metadata:
labels:
app: csi-nfs-controller
spec:
hostNetwork: true # controller also needs to mount nfs to create dir
dnsPolicy: ClusterFirstWithHostNet
serviceAccountName: csi-nfs-controller-sa
nodeSelector:
kubernetes.io/os: linux # add "kubernetes.io/role: master" to run controller on master node
priorityClassName: system-cluster-critical
tolerations:
- key: "node-role.kubernetes.io/master"
operator: "Exists"
effect: "NoSchedule"
- key: "node-role.kubernetes.io/controlplane"
operator: "Exists"
effect: "NoSchedule"
containers:
- name: csi-provisioner
image: k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
args:
- "-v=2"
- "--csi-address=$(ADDRESS)"
- "--leader-election"
env:
- name: ADDRESS
value: /csi/csi.sock
volumeMounts:
- mountPath: /csi
name: socket-dir
resources:
limits:
cpu: 100m
memory: 400Mi
requests:
cpu: 10m
memory: 20Mi
- name: liveness-probe
image: k8s.gcr.io/sig-storage/livenessprobe:v2.3.0
args:
- --csi-address=/csi/csi.sock
- --probe-timeout=3s
- --health-port=29652
- --v=2
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 10m
memory: 20Mi
- name: nfs
image: mcr.microsoft.com/k8s/csi/nfs-csi:latest
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
imagePullPolicy: IfNotPresent
args:
- "-v=5"
- "--nodeid=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"
env:
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
ports:
- containerPort: 29652
name: healthz
protocol: TCP
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: healthz
initialDelaySeconds: 30
timeoutSeconds: 10
periodSeconds: 30
volumeMounts:
- name: pods-mount-dir
mountPath: /var/lib/kubelet/pods
mountPropagation: "Bidirectional"
- mountPath: /csi
name: socket-dir
resources:
limits:
cpu: 200m
memory: 200Mi
requests:
cpu: 10m
memory: 20Mi
volumes:
- name: pods-mount-dir
hostPath:
path: /var/lib/kubelet/pods
type: Directory
- name: socket-dir
emptyDir: {}

### The node server consists of the CSI plugin + liveness-probe + node-driver-registrar
[root@master-01 deploy]# cat csi-nfs-node.yaml
---
# This YAML file contains driver-registrar & csi driver nodeplugin API objects
# that are necessary to run CSI nodeplugin for nfs
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: csi-nfs-node
namespace: kube-system
spec:
selector:
matchLabels:
app: csi-nfs-node
template:
metadata:
labels:
app: csi-nfs-node
spec:
hostNetwork: true # original nfs connection would be broken without hostNetwork setting
dnsPolicy: ClusterFirstWithHostNet
nodeSelector:
kubernetes.io/os: linux
tolerations:
- operator: "Exists"
containers:
- name: liveness-probe
image: k8s.gcr.io/sig-storage/livenessprobe:v2.3.0
args:
- --csi-address=/csi/csi.sock
- --probe-timeout=3s
- --health-port=29653
- --v=2
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 10m
memory: 20Mi
- name: node-driver-registrar
image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "rm -rf /registration/csi-nfsplugin /registration/csi-nfsplugin-reg.sock"]
args:
- --v=2
- --csi-address=/csi/csi.sock
- --kubelet-registration-path=/var/lib/kubelet/plugins/csi-nfsplugin/csi.sock
env:
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: registration-dir
mountPath: /registration
- name: nfs
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
image: mcr.microsoft.com/k8s/csi/nfs-csi:latest
args:
- "-v=5"
- "--nodeid=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"
env:
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
ports:
- containerPort: 29653
name: healthz
protocol: TCP
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: healthz
initialDelaySeconds: 30
timeoutSeconds: 10
periodSeconds: 30
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: pods-mount-dir
mountPath: /var/lib/kubelet/pods
mountPropagation: "Bidirectional"
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/csi-nfsplugin
type: DirectoryOrCreate
- name: pods-mount-dir
hostPath:
path: /var/lib/kubelet/pods
type: Directory
- hostPath:
path: /var/lib/kubelet/plugins_registry
type: Directory
name: registration-dir

Since some of these images may be hard to pull, they can be replaced with the following mirrors:

k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0 ---> misterli/sig-storage-csi-provisioner:v2.1.0
k8s.gcr.io/sig-storage/livenessprobe:v2.3.0 ---> misterli/sig-storage-livenessprobe:v2.3.0
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0 ---> misterli/sig-storage-csi-node-driver-registrar:v2.2.0
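
After adjusting the images, apply the three manifests (a minimal sketch; the file names are the ones shown above from the csi-driver-nfs deploy directory):

kubectl apply -f rbac-csi-nfs-controller.yaml
kubectl apply -f csi-nfs-controller.yaml
kubectl apply -f csi-nfs-node.yaml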

After a successful deployment it looks like this:

[root@master-01 deploy]# kubectl -n kube-system  get pod|grep csi
csi-nfs-controller-5d74c65b76-wb7qt 3/3 Running 0 8m13s
csi-nfs-controller-5d74c65b76-xhqfx 3/3 Running 0 8m13s
csi-nfs-node-bgtf7 3/3 Running 0 6m19s
csi-nfs-node-q8xvs 3/3 Running 0 6m19s

fsGroupPolicy is a beta feature in Kubernetes 1.20 and is disabled by default. To enable it, run:

kubectl delete CSIDriver nfs.csi.k8s.io
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: nfs.csi.k8s.io
spec:
  attachRequired: false
  volumeLifecycleModes:
  - Persistent
  fsGroupPolicy: File
EOF
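
To confirm the change took effect, you can read the field back (a quick check, not part of the original article):

kubectl get csidriver nfs.csi.k8s.io -o jsonpath='{.spec.fsGroupPolicy}'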

Usage

Using PV/PVC (static provisioning)

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-csi
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
  - hard
  - nfsvers=4.1
  csi:
    driver: nfs.csi.k8s.io
    readOnly: false
    volumeHandle: unique-volumeid  # make sure this is a unique ID in the cluster
    volumeAttributes:
      server: 172.26.204.144
      share: /opt/nfs-deployment
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs-csi-static
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-nfs-csi
  storageClassName: ""

Parameters:

  • volumeAttributes.server — NFS server address; a domain name such as nfs-server.default.svc.cluster.local or an IP address such as 127.0.0.1. Required.
  • volumeAttributes.share — NFS share path, e.g. /. Required.

For the meaning of more parameters, see https://kubernetes.io/zh/docs/concepts/storage/volumes/#out-of-tree-volume-plugins

After applying these, we can see that pvc-nfs-csi-static has bound to the pv-nfs-csi PV we created:

[root@master-01 example]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE

pv-nfs-csi 10Gi RWX Retain Bound default/pvc-nfs-csi-static 48

Let's create a workload to verify that this PVC is usable:

[root@master-01 test]# cat dp-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-nfs-csi-static
[root@master-01 test]# kubectl apply -f dp-pvc.yaml 
deployment.apps/busybox created
[root@master-01 test]# kubectl exec -it busybox-cd6d67ddc-zdrfp sh
/ # ls /data
date.html

Using a StorageClass (dynamic provisioning)

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 172.26.204.144
  share: /opt/nfs-deployment
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-csi-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-csi

Parameters:

  • parameters.server — NFS server address; a domain name such as nfs-server.default.svc.cluster.local or an IP address such as 127.0.0.1. Required.
  • parameters.share — NFS share path, e.g. /. Required.

Here we create a StatefulSet to verify that the volume can be used normally:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-nfs
  labels:
    app: nginx
spec:
  serviceName: statefulset-nfs
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: statefulset-nfs
        image: mcr.microsoft.com/oss/nginx/nginx:1.19.5
        command:
        - "/bin/bash"
        - "-c"
        - set -euo pipefail; while true; do echo $(date) >> /mnt/nfs/outfile; sleep 1; done
        volumeMounts:
        - name: persistent-storage
          mountPath: /mnt/nfs
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  volumeClaimTemplates:
  - metadata:
      name: persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: nfs-csi
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
[root@master-01 example]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-csi nfs.csi.k8s.io Delete Immediate false 4s
[root@master-01 example]# kubectl apply -f statefulset.yaml
statefulset.apps/statefulset-nfs created
[root@master-01 example]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistent-storage-statefulset-nfs-0 Bound pvc-5269b7f4-33ec-48d1-85fb-9d869d611e94 10Gi RWO nfs-csi 4s
[root@master-01 example]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-5269b7f4-33ec-48d1-85fb-9d869d611e94 10Gi RWO Delete Bound default/persistent-storage-statefulset-nfs-0 nfs-csi 14s

Entering the pod, we can see that it created a file named outfile under /mnt/nfs as scripted:

## inside the pod
[root@master-01 example]# kubectl exec -it statefulset-nfs-0 -- bash
root@statefulset-nfs-0:/# cd /mnt/nfs
root@statefulset-nfs-0:/mnt/nfs# ls
outfile

And on the NFS server:

## on the nfs server
[root@node-02 ~]# ls /opt/nfs-deployment/
date.html pvc-5269b7f4-33ec-48d1-85fb-9d869d611e94
[root@node-02 ~]# ls /opt/nfs-deployment/pvc-5269b7f4-33ec-48d1-85fb-9d869d611e94/
outfile

NFS Subdir External Provisioner

NFS subdir external provisioner uses an existing NFS server to dynamically provision Kubernetes Persistent Volumes through Persistent Volume Claims. Provisioned volumes are named ${namespace}-${pvcName}-${pvName} by default. To use it you must already have an NFS server.
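
Besides the raw manifests below, the project also publishes a Helm chart; a minimal sketch of that route (chart and repo names per the upstream README, values assumed to match our NFS server):

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  -n devops --create-namespace \
  --set nfs.server=172.26.204.144 \
  --set nfs.path=/opt/kubernetes-nfs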

Installation

class-delete.yaml  class-nfs.yaml  class.yaml  deployment.yaml  rbac.yaml  test-claim.yaml  test-pod.yaml  test.yaml
[root@master-01 deploy]# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: devops
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: devops
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: devops
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: devops
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: devops
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

[root@master-01 deploy]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: devops
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.26.204.144
            - name: NFS_PATH
              value: /opt/kubernetes-nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.26.204.144
            path: /opt/kubernetes-nfs

Note:

You need to change NFS_SERVER and NFS_PATH under env, as well as server and path under volumes, to point at your own NFS server.

If the image cannot be pulled, replace it with misterli/k8s.gcr.io_sig-storage_nfs-subdir-external-provisioner:v4.0.2.
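
With the namespace, NFS server and path adjusted, apply the manifests (a sketch; file names follow the listing above, and the devops namespace is assumed):

kubectl create namespace devops   # skip if it already exists
kubectl apply -f rbac.yaml -f deployment.yaml
kubectl -n devops get pod -l app=nfs-client-provisioner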

Create a StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME
allowVolumeExpansion: true
parameters:
  pathPattern: "${.PVC.namespace}-${.PVC.name}"
  onDelete: delete

Parameters:

  • onDelete — if set to "delete", the provisioned directory is deleted; if set to "retain", it is kept. Default: the directory is archived on the share as archived-<volume.Name>.
  • archiveOnDelete — if present and set to "false", the directory is deleted; it is ignored when onDelete is set. Default: the directory is archived on the share as archived-<volume.Name>.
  • pathPattern — a template for building the directory path from PVC metadata (labels, annotations, name or namespace), referenced as ${.PVC.<metadata>}. For example, to name the folder <pvc-namespace>-<pvc-name>, use ${.PVC.namespace}-${.PVC.name} as the pathPattern. Default: ${namespace}-${pvcName}-${pvName}.
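
For instance, a class that keeps an archived copy of the data instead of deleting it could look like the following (a sketch built from the parameters above, not part of the original deploy files):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage-archive
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"   # data is kept on the share as archived-<volume.Name> when the PVC is deleted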

Verification

[root@master-01 deploy]# cat test-claim.yaml  
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteOnce
    #- ReadWriteMany
  resources:
    requests:
      storage: 1024Mi
[root@master-01 deploy]# cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: devops
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && sleep 300 && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

[root@master-01 deploy]# kubectl apply -f test-claim.yaml  -f test-pod.yaml 
persistentvolumeclaim/test-claim created
pod/test-pod created

[root@master-01 deploy]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bound pvc-6bb052e0-d57d-4de6-855c-22070ff56931 1Gi RWO managed-nfs-storage 5s

[root@master-01 deploy]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-6bb052e0-d57d-4de6-855c-22070ff56931 1Gi RWO Delete Bound default/test-claim managed-nfs-storage 12s

On the NFS server we can see that a directory was created following the naming pattern we defined:

[root@node-02 ~]# ls /opt/kubernetes-nfs/
default-test-claim

Cluster mode

Enabling cluster mode is straightforward: set the replica count to three and set the environment variable ENABLE_LEADER_ELECTION to true.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: devops
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: misterli/k8s.gcr.io_sig-storage_nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.26.204.144
            - name: NFS_PATH
              value: /opt/kubernetes-nfs
            - name: ENABLE_LEADER_ELECTION
              value: "true"
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.26.204.144
            path: /opt/kubernetes-nfs

After deployment, the logs show that one of the pods has been elected leader:

## logs of the first pod
[root@master-01 deploy]# kubectl -n devops logs -f nfs-client-provisioner-7bb7fb9945-zcc6w
I0808 09:53:10.674682 1 leaderelection.go:242] attempting to acquire leader lease devops/k8s-sigs.io-nfs-subdir-external-provisioner...

## logs of the second pod
[root@master-01 deploy]# kubectl -n devops logs -f nfs-client-provisioner-7bb7fb9945-h7xb6
I0808 09:53:10.671051 1 leaderelection.go:242] attempting to acquire leader lease devops/k8s-sigs.io-nfs-subdir-external-provisioner...

### logs of the third pod: the line "successfully acquired lease" confirms it was elected leader
[root@master-01 deploy]# kubectl -n devops logs -f nfs-client-provisioner-7bb7fb9945-rs97c
I0808 09:53:10.531170 1 leaderelection.go:242] attempting to acquire leader lease devops/k8s-sigs.io-nfs-subdir-external-provisioner...
I0808 09:53:28.143466 1 leaderelection.go:252] successfully acquired lease devops/k8s-sigs.io-nfs-subdir-external-provisioner
I0808 09:53:28.143742 1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"devops", Name:"k8s-sigs.io-nfs-subdir-external-provisioner", UID:"a5a7a644-c682-4ce6-8e05-7ca4e5257776", APIVersion:"v1", ResourceVersion:"109115588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-client-provisioner-7bb7fb9945-rs97c_24635026-51c7-4e48-8521-938c7ed83593 became leader
I0808 09:53:28.144326 1 controller.go:820] Starting provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-7bb7fb9945-rs97c_24635026-51c7-4e48-8521-938c7ed83593!
I0808 09:53:28.244537 1 controller.go:869] Started provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-7bb7fb9945-rs97c_24635026-51c7-4e48-8521-938c7ed83593!

If we delete the pod that is currently the leader, the logs show another pod being elected leader:

[root@master-01 deploy]# kubectl -n devops logs -f nfs-client-provisioner-7bb7fb9945-zcc6w 
I0808 09:53:10.674682 1 leaderelection.go:242] attempting to acquire leader lease devops/k8s-sigs.io-nfs-subdir-external-provisioner...
I0808 09:59:04.948561 1 leaderelection.go:252] successfully acquired lease devops/k8s-sigs.io-nfs-subdir-external-provisioner
I0808 09:59:04.948766 1 controller.go:820] Starting provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-7bb7fb9945-zcc6w_6bd10d15-ab04-46c3-bafe-566ccc32f71c!
I0808 09:59:04.948812 1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"devops", Name:"k8s-sigs.io-nfs-subdir-external-provisioner", UID:"a5a7a644-c682-4ce6-8e05-7ca4e5257776", APIVersion:"v1", ResourceVersion:"109117083", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-client-provisioner-7bb7fb9945-zcc6w_6bd10d15-ab04-46c3-bafe-566ccc32f71c became leader
I0808 09:59:05.049034 1 controller.go:869] Started provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-7bb7fb9945-zcc6w_6bd10d15-ab04-46c3-bafe-566ccc32f71c

nfs-ganesha-server-and-external-provisioner

nfs-ganesha-server-and-external-provisioner is an out-of-tree dynamic provisioner for Kubernetes 1.14+. It can be deployed quickly and easily and provides shared storage that works almost anywhere. It does not need an external NFS server: it embeds its own NFS (ganesha) server, creates a directory for every PVC, and exports that directory.

Installation

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
- apiGroups: [""]
resources: ["services", "endpoints"]
verbs: ["get"]
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
resourceNames: ["nfs-provisioner"]
verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
name: nfs-provisioner
# replace with namespace where provisioner is deployed
namespace: devops
roleRef:
kind: ClusterRole
name: nfs-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-provisioner
namespace: devops
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-provisioner
namespace: devops
subjects:
- kind: ServiceAccount
name: nfs-provisioner
# replace with namespace where provisioner is deployed
namespace: devops
roleRef:
kind: Role
name: leader-locking-nfs-provisioner
apiGroup: rbac.authorization.k8s.io
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: nfs-provisioner
namespace: devops
spec:
selector:
matchLabels:
app: nfs-provisioner
serviceName: "nfs-provisioner"
replicas: 1
template:
metadata:
labels:
app: nfs-provisioner
spec:
serviceAccount: nfs-provisioner
terminationGracePeriodSeconds: 10
containers:
- name: nfs-provisioner
image: k8s.gcr.io/sig-storage/nfs-provisioner:v3.0.0 # if this image cannot be pulled, use misterli/sig-storage-nfs-provisioner:v3.0.0
ports:
- name: nfs
containerPort: 2049
- name: nfs-udp
containerPort: 2049
protocol: UDP
- name: nlockmgr
containerPort: 32803
- name: nlockmgr-udp
containerPort: 32803
protocol: UDP
- name: mountd
containerPort: 20048
- name: mountd-udp
containerPort: 20048
protocol: UDP
- name: rquotad
containerPort: 875
- name: rquotad-udp
containerPort: 875
protocol: UDP
- name: rpcbind
containerPort: 111
- name: rpcbind-udp
containerPort: 111
protocol: UDP
- name: statd
containerPort: 662
- name: statd-udp
containerPort: 662
protocol: UDP
securityContext:
capabilities:
add:
- DAC_READ_SEARCH
- SYS_RESOURCE
args:
- "-provisioner=example.com/nfs"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: SERVICE_NAME
value: nfs-provisioner
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: export-volume
mountPath: /export
volumes:
- name: export-volume
hostPath:
path: /opt/nfs-ganesha-server-and-external-provisioner

Create a StorageClass

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-ganesha
provisioner: example.com/nfs
mountOptions:
  - vers=4.1

Note:

  • The provisioner in the StorageClass must match the provisioner defined under args in the deployment manifest above.
  • The volumes in the deployment manifest can be backed by a PVC or a hostPath, but not by nfs-type storage; when using a hostPath it is recommended to pin the pod with a nodeSelector.
  • The deployed pod places the Service IP, not its own pod IP, into the PersistentVolumes it provisions as the NFS server address; the service name is passed in through the SERVICE_NAME env variable. Every pod must always have a Service or it will fail, which means the workload cannot be scaled beyond one replica. To scale out (multiple instances with leader election), create additional deployments/statefulsets and services with new names, matching labels + selectors, and their own SERVICE_NAME values (a sketch of such a Service follows this list).
  • The workload can be either a Deployment or a StatefulSet; a StatefulSet gives it a stable hostname, but note that a headless Service cannot be used.
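
The single-instance manifest above does not include the Service itself. A minimal sketch of what it needs to look like, reconstructed from SERVICE_NAME=nfs-provisioner and the container ports above (treat it as an assumption, not part of the upstream deploy files):

kind: Service
apiVersion: v1
metadata:
  name: nfs-provisioner
  namespace: devops
  labels:
    app: nfs-provisioner
spec:
  selector:
    app: nfs-provisioner
  ports:
    - name: nfs
      port: 2049
    - name: nfs-udp
      port: 2049
      protocol: UDP
    - name: nlockmgr
      port: 32803
    - name: nlockmgr-udp
      port: 32803
      protocol: UDP
    - name: mountd
      port: 20048
    - name: mountd-udp
      port: 20048
      protocol: UDP
    - name: rquotad
      port: 875
    - name: rquotad-udp
      port: 875
      protocol: UDP
    - name: rpcbind
      port: 111
    - name: rpcbind-udp
      port: 111
      protocol: UDP
    - name: statd
      port: 662
    - name: statd-udp
      port: 662
      protocol: UDP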

Usage

Let's create a PVC using the StorageClass above:

[root@master-01 nfs-ganesha-server-and-external-provisioner]# cat pvc.yaml 

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-ganesha
spec:
  storageClassName: nfs-ganesha
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 30Mi
[root@master-01 nfs-ganesha-server-and-external-provisioner]# kubectl apply -f pvc.yaml
persistentvolumeclaim/nfs-ganesha created
[root@master-01 nfs-ganesha-server-and-external-provisioner]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-ganesha Bound pvc-b81ea9a5-ccce-4df8-b562-2eb40c7118bb 30Mi RWX nfs-ganesha 4s
[root@master-01 nfs-ganesha-server-and-external-provisioner]# kubectl get pv pvc-b81ea9a5-ccce-4df8-b562-2eb40c7118bb -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-b81ea9a5-ccce-4df8-b562-2eb40c7118bb
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 30Mi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: nfs-ganesha
    namespace: default
    resourceVersion: "110594411"
    uid: b81ea9a5-ccce-4df8-b562-2eb40c7118bb
  mountOptions:
  - vers=4.1
  nfs:
    path: /export/pvc-b81ea9a5-ccce-4df8-b562-2eb40c7118bb
    server: 10.111.171.27
  persistentVolumeReclaimPolicy: Delete
  storageClassName: nfs-ganesha
  volumeMode: Filesystem
status:
  phase: Bound
[root@master-01 nfs-ganesha-server-and-external-provisioner]# kubectl get svc -n devops
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nfs-provisioner ClusterIP 10.111.171.27 <none> 2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,875/TCP,875/UDP,111/TCP,111/UDP,662/TCP,662/UDP 25m

Note that the NFS server address in the provisioned PV is the ClusterIP of the Service we created earlier, and the provisioned directory shows up under the mounted hostPath:

[root@node-02 nfs-ganesha-server-and-external-provisioner]# ls /opt/nfs-ganesha-server-and-external-provisioner/
ganesha.log nfs-provisioner.identity pvc-b81ea9a5-ccce-4df8-b562-2eb40c7118bb v4old v4recov vfs.conf

Cluster mode

apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-provisioner
namespace: devops
---
kind: Service
apiVersion: v1
metadata:
name: nfs-provisioner-node-01
namespace: devops
labels:
app: nfs-provisioner-node-01
spec:
ports:
- name: nfs
port: 2049
- name: nfs-udp
port: 2049
protocol: UDP
- name: nlockmgr
port: 32803
- name: nlockmgr-udp
port: 32803
protocol: UDP
- name: mountd
port: 20048
- name: mountd-udp
port: 20048
protocol: UDP
- name: rquotad
port: 875
- name: rquotad-udp
port: 875
protocol: UDP
- name: rpcbind
port: 111
- name: rpcbind-udp
port: 111
protocol: UDP
- name: statd
port: 662
- name: statd-udp
port: 662
protocol: UDP
selector:
app: nfs-provisioner-node-01
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: nfs-provisioner-node-01
namespace: devops
spec:
selector:
matchLabels:
app: nfs-provisioner-node-01
serviceName: "nfs-provisioner-node-01"
replicas: 1
template:
metadata:
labels:
app: nfs-provisioner-node-01
spec:
serviceAccount: nfs-provisioner
terminationGracePeriodSeconds: 10
containers:
- name: nfs-provisioner-node-01
image: misterli/sig-storage-nfs-provisioner:v3.0.0
ports:
- name: nfs
containerPort: 2049
- name: nfs-udp
containerPort: 2049
protocol: UDP
- name: nlockmgr
containerPort: 32803
- name: nlockmgr-udp
containerPort: 32803
protocol: UDP
- name: mountd
containerPort: 20048
- name: mountd-udp
containerPort: 20048
protocol: UDP
- name: rquotad
containerPort: 875
- name: rquotad-udp
containerPort: 875
protocol: UDP
- name: rpcbind
containerPort: 111
- name: rpcbind-udp
containerPort: 111
protocol: UDP
- name: statd
containerPort: 662
- name: statd-udp
containerPort: 662
protocol: UDP
securityContext:
capabilities:
add:
- DAC_READ_SEARCH
- SYS_RESOURCE
args:
- "-provisioner=example.com/nfs"
- "-leader-elect=true"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: SERVICE_NAME
value: nfs-provisioner-node-01
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: export-volume-node-01
mountPath: /export
volumeClaimTemplates:
- metadata:
name: export-volume-node-01
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
storageClassName: longhorn
---
kind: Service
apiVersion: v1
metadata:
name: nfs-provisioner-node-02
namespace: devops
labels:
app: nfs-provisioner-node-02
spec:
ports:
- name: nfs
port: 2049
- name: nfs-udp
port: 2049
protocol: UDP
- name: nlockmgr
port: 32803
- name: nlockmgr-udp
port: 32803
protocol: UDP
- name: mountd
port: 20048
- name: mountd-udp
port: 20048
protocol: UDP
- name: rquotad
port: 875
- name: rquotad-udp
port: 875
protocol: UDP
- name: rpcbind
port: 111
- name: rpcbind-udp
port: 111
protocol: UDP
- name: statd
port: 662
- name: statd-udp
port: 662
protocol: UDP
selector:
app: nfs-provisioner-node-02
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: nfs-provisioner-node-02
namespace: devops
spec:
selector:
matchLabels:
app: nfs-provisioner-node-02
serviceName: "nfs-provisioner-node-02"
replicas: 1
template:
metadata:
labels:
app: nfs-provisioner-node-02
spec:
serviceAccount: nfs-provisioner
terminationGracePeriodSeconds: 10
containers:
- name: nfs-provisioner-node-02
image: misterli/sig-storage-nfs-provisioner:v3.0.0
ports:
- name: nfs
containerPort: 2049
- name: nfs-udp
containerPort: 2049
protocol: UDP
- name: nlockmgr
containerPort: 32803
- name: nlockmgr-udp
containerPort: 32803
protocol: UDP
- name: mountd
containerPort: 20048
- name: mountd-udp
containerPort: 20048
protocol: UDP
- name: rquotad
containerPort: 875
- name: rquotad-udp
containerPort: 875
protocol: UDP
- name: rpcbind
containerPort: 111
- name: rpcbind-udp
containerPort: 111
protocol: UDP
- name: statd
containerPort: 662
- name: statd-udp
containerPort: 662
protocol: UDP
securityContext:
capabilities:
add:
- DAC_READ_SEARCH
- SYS_RESOURCE
args:
- "-provisioner=example.com/nfs"
- "-leader-elect=true"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: SERVICE_NAME
value: nfs-provisioner-node-02
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: export-volume-node-02
mountPath: /export
volumeClaimTemplates:
- metadata:
name: export-volume-node-02
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
storageClassName: longhorn

---
kind: Service
apiVersion: v1
metadata:
name: nfs-provisioner-node-03
namespace: devops
labels:
app: nfs-provisioner-node-03
spec:
ports:
- name: nfs
port: 2049
- name: nfs-udp
port: 2049
protocol: UDP
- name: nlockmgr
port: 32803
- name: nlockmgr-udp
port: 32803
protocol: UDP
- name: mountd
port: 20048
- name: mountd-udp
port: 20048
protocol: UDP
- name: rquotad
port: 875
- name: rquotad-udp
port: 875
protocol: UDP
- name: rpcbind
port: 111
- name: rpcbind-udp
port: 111
protocol: UDP
- name: statd
port: 662
- name: statd-udp
port: 662
protocol: UDP
selector:
app: nfs-provisioner-node-03
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: nfs-provisioner-node-03
namespace: devops
spec:
selector:
matchLabels:
app: nfs-provisioner-node-03
serviceName: "nfs-provisioner-node-03"
replicas: 1
template:
metadata:
labels:
app: nfs-provisioner-node-03
spec:
serviceAccount: nfs-provisioner
terminationGracePeriodSeconds: 10
containers:
- name: nfs-provisioner-node-03
image: misterli/sig-storage-nfs-provisioner:v3.0.0
ports:
- name: nfs
containerPort: 2049
- name: nfs-udp
containerPort: 2049
protocol: UDP
- name: nlockmgr
containerPort: 32803
- name: nlockmgr-udp
containerPort: 32803
protocol: UDP
- name: mountd
containerPort: 20048
- name: mountd-udp
containerPort: 20048
protocol: UDP
- name: rquotad
containerPort: 875
- name: rquotad-udp
containerPort: 875
protocol: UDP
- name: rpcbind
containerPort: 111
- name: rpcbind-udp
containerPort: 111
protocol: UDP
- name: statd
containerPort: 662
- name: statd-udp
containerPort: 662
protocol: UDP
securityContext:
capabilities:
add:
- DAC_READ_SEARCH
- SYS_RESOURCE
args:
- "-provisioner=example.com/nfs"
- "-leader-elect=true"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: SERVICE_NAME
value: nfs-provisioner-node-03
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: export-volume-node-03
mountPath: /export
volumeClaimTemplates:
- metadata:
name: export-volume-node-03
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
storageClassName: longhorn

In cluster mode, when a PVC dynamically provisions a volume through the StorageClass, one of the three pods is picked to serve as the NFS server for that volume; if that pod goes down, PVCs created afterwards are served by one of the other pods.

[root@master-01 nfs-ganesha-server-and-external-provisioner]# kubectl apply -f demo/pvc.yaml 
persistentvolumeclaim/nfs-ganesha created
[root@master-01 nfs-ganesha-server-and-external-provisioner]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-ganesha Bound pvc-ed76f5ad-3f47-4a99-a1c8-97e8257b5700 30Mi RWX nfs-ganesha 1m24s
[root@master-01 nfs-ganesha-server-and-external-provisioner]# kubectl get pv pvc-ed76f5ad-3f47-4a99-a1c8-97e8257b5700 -o=go-template={{.spec.nfs}}
map[path:/export/pvc-ed76f5ad-3f47-4a99-a1c8-97e8257b5700 server:10.103.133.251]
[root@master-01 demo]# kubectl -n devops get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nfs-provisioner-node-01 ClusterIP 10.100.139.17 <none> 2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,875/TCP,875/UDP,111/TCP,111/UDP,662/TCP,662/UDP 11m
nfs-provisioner-node-02 ClusterIP 10.103.133.251 <none> 2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,875/TCP,875/UDP,111/TCP,111/UDP,662/TCP,662/UDP 11m
nfs-provisioner-node-03 ClusterIP 10.100.137.250 <none> 2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,875/TCP,875/UDP,111/TCP,111/UDP,662/TCP,662/UDP 11m
[root@master-01 demo]# kubectl get pod -n devops
NAME READY STATUS RESTARTS AGE
nfs-provisioner-node-01-0 1/1 Running 0 11m
nfs-provisioner-node-02-0 1/1 Running 0 11m
nfs-provisioner-node-03-0 1/1 Running 0 11m

We can see that the PVC nfs-ganesha uses the NFS server address 10.103.133.251, which corresponds to the pod nfs-provisioner-node-02-0. Now let's delete that pod and then create another PVC:

[root@master-01 demo]# kubectl -n devops  delete pod nfs-provisioner-node-02-0 
pod "nfs-provisioner-node-02-0" deleted
[root@master-01 demo]# kubectl apply -f pvc-1.yaml
persistentvolumeclaim/nfs-ganesha-1 created
[root@master-01 demo]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-ganesha Bound pvc-ed76f5ad-3f47-4a99-a1c8-97e8257b5700 30Mi RWX nfs-ganesha 12m
nfs-ganesha-1 Bound pvc-7cc30bf4-287b-4fc9-a0a3-91b4e0603675 30Mi RWX nfs-ganesha 8s
[root@master-01 demo]# kubectl get pv pvc-7cc30bf4-287b-4fc9-a0a3-91b4e0603675 -o=go-template={{.spec.nfs}}
map[path:/export/pvc-7cc30bf4-287b-4fc9-a0a3-91b4e0603675 server:10.100.139.17]

The newly created PVC nfs-ganesha-1 uses the NFS server address 10.100.139.17, which corresponds to the pod nfs-provisioner-node-01-0.

⚠️ Note: when the pod providing the NFS server is deleted, the PVs provisioned through it, and the pods using them, will break. Keep this in mind.

Summary

In practice NFS is not a great choice for storage volumes; you will run into all kinds of odd problems, and upstream Kubernetes does not really recommend it for production. But sometimes there is no way around it. For services with important data, prefer distributed storage such as Ceph, or mount cloud disks through a cloud provider's CSI driver; failing that, use local PVs. Of course, if you buy a managed NFS service from a cloud provider, mounting it usually works fine, and at least the blame can be passed on when something goes wrong.

Known issues with NFS-backed volumes:

  • Provisioned storage is not guaranteed. You can allocate more than the total size of the NFS share, and the share may not have enough free space to actually hold a request.
  • Provisioned storage limits are not enforced. An application can grow to use all available storage regardless of the provisioned size.
  • Storage resize/expansion operations are not currently supported in any form. You will end up in an error state: Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.

The cluster mode of nfs-ganesha-server-and-external-provisioner described above only guarantees that new volumes can still be provisioned; it does not guarantee that previously created volumes remain usable, so use it with caution. In the cluster mode of NFS Subdir External Provisioner, deleting the current leader pod simply triggers a new election and does not affect the PVs already mounted by other pods.

To sum up the options:

Name                                          Pros / cons
nfs-ganesha-server-and-external-provisioner   If the pod serving NFS fails, its PVCs become unusable; not recommended
NFS Subdir External Provisioner               Only one NFS server can be configured per deployment; recommended
NFS CSI Driver                                Functionally minimal, but the NFS server address is set per StorageClass, which is more flexible; recommended