I am trying to use a gcePersistentDisk as ReadOnlyMany so that my pods on multiple nodes can read the data on this disk, following the documentation here.
To create and later format the GCE persistent disk, I followed the instructions in the documentation here: I SSHed into one of the nodes and formatted the disk. See below the complete error and also the other YAML files.
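For reference, the create-and-format steps from those docs look roughly like this (a sketch only; the zone and device name below are placeholders, not my exact values):
# create the disk in the cluster's zone (GCE persistent disks must be at least 10 GB)
gcloud compute disks create mydisk0 --size=10GB --zone=asia-south1-a
# attach it to one of the nodes, SSH in, and format it as ext4
gcloud compute instances attach-disk gke-mycluster-default-pool-b1c1d316-d016 --disk=mydisk0 --device-name=mydisk0 --zone=asia-south1-a
gcloud compute ssh gke-mycluster-default-pool-b1c1d316-d016 --zone=asia-south1-a
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/disk/by-id/google-mydisk0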
kubectl describe pods -l podName
Name: punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-mycluster-default-pool-b1c1d316-d016/10.160.0.12
Start Time: Thu, 25 Apr 2019 23:55:38 +0530
Labels: app.kubernetes.io/instance=punk-fly
app.kubernetes.io/name=nodejs
pod-template-hash=1866836461
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container nodejs
Status: Pending
IP:
Controlled By: ReplicaSet/punk-fly-nodejs-deployment-5dbbd7b8b5
Containers:
nodejs:
Container ID:
Image: rajesh12/smartserver:server
Image ID:
Port: 3002/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment:
MYSQL_HOST: mysqlservice
MYSQL_DATABASE: app
MYSQL_ROOT_PASSWORD: password
Mounts:
/usr/src/ from helm-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jpkzg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
helm-vol:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-readonly-pvc
ReadOnly: true
default-token-jpkzg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jpkzg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m default-scheduler Successfully assigned default/punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs to gke-mycluster-default-pool-b1c1d316-d016
Normal SuccessfulAttachVolume 1m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-9c796180-677e-11e9-ad35-42010aa0000f"
Warning FailedMount 10s (x8 over 1m) kubelet, gke-mycluster-default-pool-b1c1d316-d016 MountVolume.MountDevice failed for volume "pvc-9c796180-677e-11e9-ad35-42010aa0000f" : failed to mount unformatted volume as read only
Warning FailedMount 0s kubelet, gke-mycluster-default-pool-b1c1d316-d016 Unable to mount volumes for pod "punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs_default(86293044-6787-11e9-ad35-42010aa0000f)": timeout expired waiting for volumes to attach or mount for pod "default"/"punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs". list of unmounted volumes=[helm-vol]. list of unattached volumes=[helm-vol default-token-jpkzg]
readonly_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-readonly-pv
spec:
storageClassName: ""
capacity:
storage: 1G
accessModes:
- ReadOnlyMany
gcePersistentDisk:
pdName: mydisk0
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-readonly-pvc
spec:
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 1G
deployment.yaml
volumes:
- name: helm-vol
persistentVolumeClaim:
claimName: my-readonly-pvc
readOnly: true
containers:
- name: {{ .Values.app.backendName }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tagServer }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: MYSQL_HOST
value: mysqlservice
- name: MYSQL_DATABASE
value: app
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- name: http-backend
containerPort: 3002
volumeMounts:
- name: helm-vol
mountPath: /usr/src/
It sounds like your PVC is dynamically provisioning a new, unformatted volume through the default StorageClass.
It could also be that your Pod is being created in a different availability zone from the one where you have the PV provisioned. The gotcha with having multiple Pod readers for a GCE volume is that the Pods always have to be in the same availability zone as the disk.
Some options:
Create and format the disk in the same availability zone as your nodes.
When you define your PV, you can specify node affinity to make sure it always gets assigned to a specific node.
Define a StorageClass that specifies the filesystem:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: mysc
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
fsType: ext4
And then use it in your PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 1G
storageClassName: mysc
The volume will be automatically provisioned and formatted.
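Note that the provisioner in the StorageClass above is the AWS EBS one; since this question is about GKE, the equivalent would look roughly like this (a sketch; pd-standard is an assumption about the disk type you want):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mysc
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard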
I received the same error message when trying to provision a persistent disk with an access mode of ReadWriteOnce. What fixed the issue for me was removing the property readOnly: true from the volumes declaration of the Deployment spec. In the case of your deployment.yaml file, that would be this block:
volumes:
- name: helm-vol
persistentVolumeClaim:
claimName: my-readonly-pvc
readOnly: true
Try removing that line and see if the error goes away.
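For comparison, the same block with the property removed would read:
volumes:
  - name: helm-vol
    persistentVolumeClaim:
      claimName: my-readonly-pvc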
I had the same error and managed to fix it using a few lines from a related article about using preexisting disks:
https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd
You need to add storageClassName and volumeName to your persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-claim-demo
spec:
# It's necessary to specify "" as the storageClassName
# so that the default storage class won't be used, see
# https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
storageClassName: ""
volumeName: pv-demo
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 500G
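The PV that volumeName: pv-demo refers to would then look roughly like this (a sketch; the disk name mydisk0 is an assumption, and readOnly is set because the claim asks for ReadOnlyMany):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  storageClassName: ""
  capacity:
    storage: 500G
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: mydisk0   # assumption: the name of the pre-formatted disk
    fsType: ext4
    readOnly: true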
I am trying to deploy this app here on Kubernetes (GKE).
annotations:
kompose.cmd: kompose convert -f docker-compose.yml
kompose.version: 1.26.1 (a9d05d509)
creationTimestamp: null
labels:
io.kompose.network/taiga: "true"
io.kompose.service: taiga-gateway
spec:
containers:
- image: nginx:1.19-alpine
name: taiga-gateway
ports:
- containerPort: 80
resources: {}
volumeMounts:
- mountPath: /etc/nginx/conf.d/default.conf
name: taiga-gateway-claim0
- mountPath: /taiga/static
name: taiga-static-data
- mountPath: /taiga/media
name: taiga-media-data
restartPolicy: Always
volumes:
- name: taiga-gateway-claim0
persistentVolumeClaim:
claimName: taiga-gateway-claim0
- name: taiga-static-data
persistentVolumeClaim:
claimName: taiga-static-data
- name: taiga-media-data
persistentVolumeClaim:
claimName: taiga-media-data
status: {}
In Kubernetes, I am creating the volumes this way:
requests:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: taiga-static-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: taiga-media-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: taiga-db-data
..etc
So, I end up with the following error
Normal   Scheduled               4m39s  default-scheduler        Successfully assigned default/taiga-gateway-77976dc77-ppbvb to gke-taiga-cluster-default-pool-cccc58aa-jwfr
Warning  FailedAttachVolume      4m39s  attachdetach-controller  Multi-Attach error for volume "pvc-a28ae890-fe9d-4985-8183-ce54d7ed57d8" Volume is already used by pod(s) taiga-async-6c7d9dbd7b-vl79s
Warning  FailedAttachVolume      4m39s  attachdetach-controller  Multi-Attach error for volume "pvc-3cab3ec0-88d9-4c70-96c8-97d7b48e755c" Volume is already used by pod(s) taiga-async-6c7d9dbd7b-vl79s
Normal   SuccessfulAttachVolume  4m32s  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-d2e39951-094f-447c-86ff-c36639786111"
Warning  FailedMount             2m36s  kubelet                  Unable to attach or mount volumes: unmounted volumes=[taiga-static-data taiga-media-data], unattached volumes=[taiga-static-data taiga-media-data kube-api-access-6lw4n taiga-gateway-claim0]: timed out waiting for the condition
Warning  FailedMount             19s    kubelet                  Unable to attach or mount volumes: unmounted volumes=[taiga-media-data taiga-static-data], unattached volumes=[taiga-media-data kube-api-access-6lw4n taiga-gateway-claim0 taiga-static-data]: timed out waiting for the condition
PVC status
testuser#cloudshell:~/kube-deploy (taiga-pursuit)$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
taiga-async-rabbitmq-data Bound pvc-3c8eb896-1bca-4047-915f-01e9a2ca5911 1Gi RWO standard 23m
taiga-db-data Bound pvc-ab03a878-1783-4f03-92bc-c9ef96a7d36d 5Gi RWO standard 23m
taiga-events-rabbitmq-data Bound pvc-0c4da3b5-bfb6-4cd9-b45d-e4e7b44a83e5 1Gi RWO standard 23m
taiga-gateway-claim0 Bound pvc-d2e39951-094f-447c-86ff-c36639786111 5Gi RWO standard 24m
taiga-media-data Bound pvc-a28ae890-fe9d-4985-8183-ce54d7ed57d8 5Gi RWO standard 23m
taiga-static-data Bound pvc-3cab3ec0-88d9-4c70-96c8-97d7b48e755c 5Gi RWO standard 23m
Is the warning that these volumes are being shared what is causing the mount to fail, or is it something else? I am not able to figure it out.
I'm stuck with an annoying issue, where my pod can't access the mounted persistent volume.
Kubeadm: v1.19.2
Docker: 19.03.13
Zookeeper image: library/zookeeper:3.6
Cluster info: Locally hosted, no cloud provider
K8s configuration:
apiVersion: v1
kind: Service
metadata:
name: zk-hs
labels:
app: zk
spec:
selector:
app: zk
ports:
- port: 2888
targetPort: 2888
name: server
protocol: TCP
- port: 3888
targetPort: 3888
name: leader-election
protocol: TCP
clusterIP: ""
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: zk-cs
labels:
app: zk
spec:
selector:
app: zk
ports:
- name: client
protocol: TCP
port: 2181
targetPort: 2181
type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: zk-pdb
spec:
selector:
matchLabels:
app: zk
maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: zk
spec:
selector:
matchLabels:
app: zk
serviceName: zk-hs
replicas: 1
updateStrategy:
type: RollingUpdate
podManagementPolicy: OrderedReady
template:
metadata:
labels:
app: zk
spec:
volumes:
- name: zoo-config
configMap:
name: zoo-config
- name: datadir
persistentVolumeClaim:
claimName: zoo-pvc
containers:
- name: zookeeper
imagePullPolicy: Always
image: "library/zookeeper:3.6"
resources:
requests:
memory: "1Gi"
cpu: "0.5"
ports:
- containerPort: 2181
name: client
- containerPort: 2888
name: server
- containerPort: 3888
name: leader-election
volumeMounts:
- name: datadir
mountPath: /var/lib/zookeeper/data
- name: zoo-config
mountPath: /conf
securityContext:
fsGroup: 2000
runAsUser: 1000
runAsNonRoot: true
volumeClaimTemplates:
- metadata:
name: datadir
annotations:
volume.beta.kubernetes.io/storage-class: local-storage
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: local-storage
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
name: zoo-config
namespace: default
data:
zoo.cfg: |
tickTime=10000
dataDir=/var/lib/zookeeper/data
clientPort=2181
initLimit=10
syncLimit=4
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: zoo-pv
labels:
type: local
spec:
storageClassName: local-storage
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/mnt/data"
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <node-name>
I've tried running the pod as root with the following security context, which I know is a terrible idea, purely as a test. This however caused a bunch of other issues.
securityContext:
fsGroup: 0
runAsUser: 0
Once the pod starts up, the logs contain the following:
Zookeeper JMX enabled by default
Using config: /conf/zoo.cfg
<log4j Warnings>
Unable to access datadir, exiting abnormally
Inspecting the pod provides me with the following information:
~$ kubectl describe pod/zk-0
Name: zk-0
Namespace: default
Priority: 0
Node: <node>
Start Time: Sat, 26 Sep 2020 15:48:00 +0200
Labels: app=zk
controller-revision-hash=zk-6c68989bd
statefulset.kubernetes.io/pod-name=zk-0
Annotations: <none>
Status: Running
IP: <IP>
IPs:
IP: <IP>
Controlled By: StatefulSet/zk
Containers:
zookeeper:
Container ID: docker://281e177d677394604785542c231d21b71f1666a22e74c1c10ef88491dad7a522
Image: library/zookeeper:3.6
Image ID: docker-pullable://zookeeper#sha256:6c051390cfae7958ff427834937c353fc6c34484f6a84b3e4bc8c512b53a16f6
Ports: 2181/TCP, 2888/TCP, 3888/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 26 Sep 2020 16:04:26 +0200
Finished: Sat, 26 Sep 2020 16:04:27 +0200
Ready: False
Restart Count: 8
Requests:
cpu: 500m
memory: 1Gi
Environment: <none>
Mounts:
/conf from zoo-config (rw)
/var/lib/zookeeper/data from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-88x56 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-zk-0
ReadOnly: false
zoo-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: zoo-config
Optional: false
default-token-88x56:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-88x56
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned default/zk-0 to <node>
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.932381527s
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.960610662s
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.959935633s
Normal Created 16m (x4 over 17m) kubelet Created container zookeeper
Normal Pulled 16m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.92551645s
Normal Started 16m (x4 over 17m) kubelet Started container zookeeper
Normal Pulling 15m (x5 over 17m) kubelet Pulling image "library/zookeeper:3.6"
Warning BackOff 2m35s (x71 over 17m) kubelet Back-off restarting failed container
To me, it seems like the pod has full rw access to the volume, so I'm unsure why it's still refusing to access the directory. Any help will be appreciated!
After quite some digging, I finally figured out why it wasn't working. The logs were actually telling me all I needed to know in the end: the mounted PersistentVolumeClaim simply did not have the correct file permissions to read from the mounted hostPath directory /mnt/data.
To fix this, in a somewhat hacky way, I gave read, write & execute permissions to all.
chmod 777 /mnt/data
An overview can be found here.
This is definitely not the most secure way of fixing the issue, and I would strongly advise against using it in any production-like environment.
Probably a better approach would be to give ownership of the host directory to the UID/GID the pod runs as, for example:
sudo chown -R 1000:2000 /mnt/data
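Another option that avoids touching the host manually (a sketch, not part of my original fix; it assumes the same mount path and the runAsUser/fsGroup of 1000/2000 from the StatefulSet) is an initContainer that fixes ownership before ZooKeeper starts:
# goes under the StatefulSet's spec.template.spec, alongside "containers:"
initContainers:
  - name: fix-permissions
    image: busybox:1.32
    # chown the mounted data dir to the uid/gid the zookeeper container runs as
    command: ["sh", "-c", "chown -R 1000:2000 /var/lib/zookeeper/data"]
    securityContext:
      runAsUser: 0         # must run as root to change ownership
      runAsNonRoot: false  # override the pod-level runAsNonRoot for this init container only
    volumeMounts:
      - name: datadir
        mountPath: /var/lib/zookeeper/data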
I am working with Docker Desktop on Windows 10 as well as Windows Subsystem for Linux (WSL). I have a containerized app that I deploy to the local K8s cluster (courtesy of Docker Desktop). Typical story: all was working fine until one day a Docker Desktop update came and ruined everything. The DD version I have now is 2.3.0.2 stable.
I have a MySQL pod with a defined PV, PVC and storage class. When I deploy my app to the cluster, I can see that the PV and PVC are bound, but the pod is stuck in the ContainerCreating state:
$ kubectl describe pod mysql-6779d8fb8b-d25wz
Name: mysql-6779d8fb8b-d25wz
Namespace: typo3-connector
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Wed, 13 May 2020 14:21:43 +0200
Labels: app=mysql
pod-template-hash=6779d8fb8b
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-6779d8fb8b
Containers:
mysql:
Container ID:
Image: lw-mysql
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables from:
mysql-credentials Secret Optional: false
Environment: <none>
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wr6g9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-wr6g9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wr6g9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
Warning FailedScheduling <unknown> default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
Normal Scheduled <unknown> default-scheduler Successfully assigned typo3-connector/mysql-6779d8fb8b-d25wz to docker-desktop
Warning FailedMount 35s (x8 over 99s) kubelet, docker-desktop MountVolume.NewMounter initialization failed for volume "mysql-pv" : path "/c/kubernetes/typo3-8/mysql-storage/" does not exist
The error is
MountVolume.NewMounter initialization failed for volume "mysql-pv" : path "/c/kubernetes/typo3-8/mysql-storage/" does not exist
But the path actually exists on disk (in WSL that is).
The pv:
$ kubectl describe pv mysql-pv
Name: mysql-pv
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"mysql-pv"},"spec":{"accessModes":["ReadWriteOnce"],"capa...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Bound
Claim: typo3-connector/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [docker-desktop]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /c/kubernetes/typo3-8/mysql-storage/
Events: <none>
The pvc:
$ kubectl describe pvc mysql-pv-claim
Name: mysql-pv-claim
Namespace: typo3-connector
StorageClass: local-storage
Status: Bound
Volume: mysql-pv
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mysql-pv-claim","namespace":"typo3-connector"},"spe...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 2Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: mysql-6779d8fb8b-d25wz
Events: <none>
I tried running it from PowerShell, but no luck; I get the same result.
But it was all working fine before the update.
Is it a configuration-based issue?
EDIT
Here's the manifest file:
apiVersion: v1
kind: Namespace
metadata:
name: typo3-connector
---
apiVersion: v1
data:
MYSQL_PASSWORD: ZHVtbXk=
MYSQL_ROOT_PASSWORD: ZHVtbXk=
MYSQL_USER: ZHVtbXk=
kind: Secret
metadata:
name: mysql-credentials
namespace: typo3-connector
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
labels:
app: mysql
name: mysql
namespace: typo3-connector
spec:
ports:
- name: mysql-backend
port: 3306
protocol: TCP
selector:
app: mysql
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
namespace: typo3-connector
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- envFrom:
- secretRef:
name: mysql-credentials
image: mysql:5.6
imagePullPolicy: IfNotPresent
name: mysql
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql-persistent-storage
imagePullSecrets:
- name: lwdockerregistry
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 2Gi
local:
path: /c/kubernetes/typo3-8/mysql-storage/
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- docker-desktop
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
namespace: typo3-connector
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: local-storage
You must use this format for the path:
/run/desktop/mnt/host/c/someDir/volumeDir
So in your case it is:
/run/desktop/mnt/host/c/kubernetes/typo3-8/mysql-storage/
source for solution
UPDATE: You really don't want to use Windows host directories inside the WSL2 distro, because the performance is really poor. Use the WSL2 distro's own directories instead; you can access them just as easily from the Windows host at \\wsl$
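Applied to the PV from the question, only the path changes; a sketch:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 2Gi
  local:
    path: /run/desktop/mnt/host/c/kubernetes/typo3-8/mysql-storage/
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage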
As described in the following example taken from here, the path shouldn't end with /.
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-local-pv
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /mnt/disks/vol1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- my-node
Changing your path to path: /c/kubernetes/typo3-8/mysql-storage does the trick, and your PVC works as designed.
I would like to run a local FTP server (ProFTPD) on minikube, using the Docker image vipconsult/proftpd.
Here is my deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
reloader.stakater.com/auto: "true"
name: ftp-local
namespace: influx
labels:
app: ftp-local
component: core
spec:
replicas: 1
template:
metadata:
labels:
app: ftp-local
component: core
spec:
initContainers:
containers:
- image: vipconsult/proftpd
name: ftp-local
imagePullPolicy: IfNotPresent
# env:
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
env:
- name: USERNAME
value: test
- name: PASSWORD
value: test
volumeMounts:
- name: ftp-persistent-storage
mountPath: /var/lib/ftp
volumes:
- name: ftp-persistent-storage
persistentVolumeClaim:
claimName: ftp-storage
and volume file:
apiVersion: v1
kind: PersistentVolume
metadata:
name: ftp-storage
namespace: influx
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
claimRef:
namespace: influx
name: ftp-storage
hostPath:
path: /data/ftp/storage
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ftp-storage
namespace: influx
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
When I deploy it, I have:
➜ proFTP git:(master) ✗ kubectl logs -n influx ftp-local-58d5d774d-zdztf
chown: cannot access '/etc/proftpd/ftpd.passwd': No such file or directory
The documentation states:
to make the users persistent share the passwords file with the host or a data -v /home/docker/proftpd/ftpd.passwd:/etc/proftpd/ftpd.passwd
So, I tried to mount this file.
Deployment (just showing the volume section):
volumeMounts:
- name: ftp-persistent-storage
mountPath: /var/lib/ftp
- name: ftp-persistent-users
mountPath: /etc/proftpd/ftpd.passwd
subPath: ftpd.passwd
volumes:
- name: ftp-persistent-storage
persistentVolumeClaim:
claimName: ftp-storage
- name: ftp-persistent-users
persistentVolumeClaim:
claimName: ftp-storage-users
and volumes:
apiVersion: v1
kind: PersistentVolume
metadata:
name: ftp-storage
namespace: influx
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
claimRef:
namespace: influx
name: ftp-storage
hostPath:
path: /data/ftp/storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: ftp-storage-user
namespace: influx
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
claimRef:
namespace: influx
name: ftp-storage-user
hostPath:
path: /data/ftp/config
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ftp-storage
namespace: influx
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ftp-storage-user
namespace: influx
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
First, I am a bit frustrated at having to use a 1Gi volume for a single file, and then I get:
➜ proFTP git:(master) ✗ kubectl logs -n influx ftp-local-67cfd77497-7rt5m
2019-10-03 07:35:23,853 ftp-local-67cfd77497-7rt5m proftpd[1]: mod_auth_file/1.0: unable to use AuthUserFile '/etc/proftpd/ftpd.passwd': Is a directory
2019-10-03 07:35:23,853 ftp-local-67cfd77497-7rt5m proftpd[1]: fatal: AuthUserFile: unable to use /etc/proftpd/ftpd.passwd: Is a directory on line 193 of '/etc/proftpd/proftpd.conf'
Here are my volumes:
git:(devops) ✗ kubectl get pv,pvc -n influx
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/ftp-storage 1Gi RWO Retain Bound influx/ftp-storage 22s
persistentvolume/ftp-storage-user 1Gi RWO Retain Bound influx/ftp-storage-user 22s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/ftp-storage Bound ftp-storage 1Gi RWO standard 22s
persistentvolumeclaim/ftp-storage-user Bound ftp-storage-user 1Gi RWO standard 22s
Now describing PVs:
➜ git:(devops) ✗ kubectl describe pv -n influx ftp-storage
Name: ftp-storage
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"ftp-storage"},"spec":{"accessModes":["ReadWriteOnce"],"c...
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Bound
Claim: influx/ftp-storage
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /data/ftp/storage
HostPathType:
Events: <none>
➜ git:(devops) ✗ kubectl describe pv -n influx ftp-storage-user
Name: ftp-storage-user
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"ftp-storage-user"},"spec":{"accessModes":["ReadWriteOnce...
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Bound
Claim: influx/ftp-storage-user
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /data/ftp/config
HostPathType:
Events: <none>
What am I doing wrong?
I have a failing service in Kubernetes; it seems that the service doesn't want to mount its volume.
Unable to mount volumes for pod "metadata-api-local": timeout expired waiting for volumes to attach or mount for pod "metadata"/"metadata-api-local". list of unmounted volumes=[metadata-api-claim]. list of unattached volumes=[metadata-api-claim default-token-8lqmp]
Here is the log:
➜ metadata_api git:(develop) ✗ kubectl describe pod -n metadata metadata-api-local-f5bddb8f7-clmwq
Name: metadata-api-local-f5bddb8f7-clmwq
Namespace: metadata
Priority: 0
Node: minikube/192.168.0.85
Start Time: Wed, 18 Sep 2019 16:59:02 +0200
Labels: app=metadata-api-local
pod-template-hash=f5bddb8f7
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/metadata-api-local-f5bddb8f7
Containers:
metadata-api-local:
Container ID:
Image: metadata_api:local
Image ID:
Port: 18000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables from:
metadata-env Secret Optional: false
Environment: <none>
Mounts:
/var/lib/nodered-peer from metadata-api-claim (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8lqmp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
metadata-api-claim:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: metadata-api-claim
ReadOnly: false
default-token-8lqmp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8lqmp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14m default-scheduler Successfully assigned metadata/metadata-api-local-f5bddb8f7-clmwq to minikube
Warning FailedMount 47s (x6 over 12m) kubelet, minikube Unable to mount volumes for pod "metadata-api-local-f5bddb8f7-clmwq_metadata(94cbb26c-4907-4512-950a-29a25ad1ef20)": timeout expired waiting for volumes to attach or mount for pod "metadata"/"metadata-api-local-f5bddb8f7-clmwq". list of unmounted volumes=[metadata-api-claim]. list of unattached volumes=[metadata-api-claim default-token-8lqmp]
Here is my metadata_pvc.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
name: metadata-api-pv
namespace: metadata
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
claimRef:
namespace: metadata
name: metadata-api-claim
hostPath:
path: /data/metadata-api
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: metadata-api-claim
namespace: metadata
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: metadata-postgres-volume
namespace: metadata
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
claimRef:
namespace: metadata
name: metadata-postgres-claim
hostPath:
path: /data/metadata-postgres
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: metadata-postgres-claim
namespace: metadata
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
When I list pv, I get:
➜ metadata_api git:(develop) ✗ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
metadata-api-pv 1Gi RWO Retain Available metadata/metadata-api-claim 12m
metadata-postgres-volume 1Gi RWO Retain Available metadata/metadata-postgres-claim 12m
➜ metadata_api git:(develop) ✗ kubectl get pvc
No resources found.
What is failing?
You shouldn't specify claimRef; that field is normally populated automatically by the Kubernetes controllers. Instead, you should use storage classes for both your PersistentVolumes and PersistentVolumeClaims, as that is the mechanism used to match them. Adding a storageClassName field with the same name to both your PersistentVolumes and PersistentVolumeClaims should fix your issue.
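For illustration, a minimal sketch of what that could look like for the metadata-api pair (the class name manual is an arbitrary assumption; any non-empty name works as long as it matches on both sides):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: metadata-api-pv
spec:
  storageClassName: manual        # must match the claim below
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/metadata-api
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: metadata-api-claim
  namespace: metadata
spec:
  storageClassName: manual        # matches the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi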