Read-only file system with hostPath, so unable to mount volume - Docker

When I specify a hostPath volume, it fails because the node's file system is read-only. Since I am new to Kubernetes I haven't found any other way, so please let me know whether there is another way to implement volumes. I am doing this on GKE.
Here is my YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "10"
creationTimestamp: "2019-11-22T10:52:16Z"
generation: 17
labels:
app: dataset
name: dataset
namespace: default
resourceVersion: "283767"
selfLink: /apis/apps/v1/namespaces/default/deployments/dataset
uid: 26111fe8-0d16-11ea-a66e-42010aa00042
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
app: dataset
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: dataset
spec:
containers:
- env:
- name: RABBIT_MQ_HOST
valueFrom:
configMapKeyRef:
key: RABBIT_MQ_HOST
name: dataset-config
- name: RABBIT_MQ_USER
valueFrom:
configMapKeyRef:
key: RABBIT_MQ_USER
name: dataset-config
- name: RABBIT_MQ_PASSWORD
valueFrom:
configMapKeyRef:
key: RABBIT_MQ_PASSWORD
name: dataset-config
- name: DATASET_DB_HOST
valueFrom:
configMapKeyRef:
key: DATASET_DB_HOST
name: dataset-config
- name: DATASET_DB_NAME
valueFrom:
configMapKeyRef:
key: DATASET_DB_NAME
name: dataset-config
- name: LICENSE_SERVER
valueFrom:
configMapKeyRef:
key: LICENSE_SERVER
name: dataset-config
- name: DATASET_THUMBNAIL_SIZE
valueFrom:
configMapKeyRef:
key: DATASET_THUMBNAIL_SIZE
name: dataset-config
- name: GATEWAY_URL
valueFrom:
configMapKeyRef:
key: GATEWAY_URL
name: dataset-config
- name: DEFAULT_DATASOURCE_ID
valueFrom:
configMapKeyRef:
key: DEFAULT_DATASOURCE_ID
name: dataset-config
- name: RABBIT_MQ_QUEUE_NAME
valueFrom:
configMapKeyRef:
key: RABBIT_MQ_QUEUE_NAME
name: dataset-config
- name: RABBIT_MQ_PATTERN
valueFrom:
configMapKeyRef:
key: RABBIT_MQ_PATTERN
name: dataset-config
image: gcr.io/gcr-testing-258008/dataset@sha256:8416ec9b023d4a4587a511b855c2735b25a16dbb1a15531d8974d0ef89ad3d73
imagePullPolicy: IfNotPresent
name: dataset-sha256
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: ./data/uploads
name: dataset-volume-uploads
- mountPath: ./data/thumbnails
name: dataset-volume-thumbnails
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /build/uploads
type: ""
name: dataset-volume-uploads
- hostPath:
path: /build/thumbnails
type: ""
name: dataset-volume-thumbnails
status:
availableReplicas: 2
conditions:
- lastTransitionTime: "2019-11-23T07:19:13Z"
lastUpdateTime: "2019-11-23T07:19:13Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2019-11-23T06:31:03Z"
lastUpdateTime: "2019-11-23T07:24:42Z"
message: ReplicaSet "dataset-75b46f868f" is progressing.
reason: ReplicaSetUpdated
status: "True"
type: Progressing
observedGeneration: 17
readyReplicas: 2
replicas: 3
unavailableReplicas: 1
updatedReplicas: 1
Here is the description of my pod (truncated):
Path: /build/uploads
HostPathType:
dataset-volume-thumbnails:
Type: HostPath (bare host directory volume)
Path: /build/thumbnails
HostPathType:
default-token-x2wmw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-x2wmw
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 96s default-scheduler Successfully assigned default/dataset-75b46f868f-wffm7 to gke-teric-ai-default-pool-41929025-fxnx
Warning BackOff 15s (x6 over 93s) kubelet, gke-teric-ai-default-pool-41929025-fxnx Back-off restarting failed container
Normal Pulled 2s (x5 over 95s) kubelet, gke-teric-ai-default-pool-41929025-fxnx Container image "gcr.io/gcr-testing-258008/dataset@sha256:8416ec9b023d4a4587a511b855c2735b25a16dbb1a15531d8974d0ef89ad3d73" already present on machine
Normal Created 2s (x5 over 95s) kubelet, gke-teric-ai-default-pool-41929025-fxnx Created container
Warning Failed 1s (x5 over 95s) kubelet, gke-teric-ai-default-pool-41929025-fxnx Error: failed to start container "dataset-sha256": Error response from daemon: error while creating mount source path '/build/uploads': mkdir /build/uploads: read-only file system
So here is the problem: even though I set chmod permissions dynamically, it is not allowing write operations. I have also tried persistent volumes and that did not work either, so please tell me how I should mount the volumes.

First of all, do not use hostPath; go for persistent storage instead.
I would use a StatefulSet instead of a Deployment if you need to store data.
I was able to create both hostPath directories on my GKE node manually as the root user.
I guess you have to specify a type for the hostPath so that the requested directory is created if it doesn't exist: type: DirectoryOrCreate. You can read more about hostPath and the available type values in the documentation.
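For illustration, a minimal sketch of one of your volumes with an explicit type (the path is taken from your manifest; note that this will still fail if that location on the node's filesystem is read-only, which is the case for the root filesystem on GKE's Container-Optimized OS nodes):
volumes:
- hostPath:
    # create /build/uploads on the node if it does not already exist
    path: /build/uploads
    type: DirectoryOrCreate
  name: dataset-volume-uploads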
Moreover, if you are using hostPath, the permissions of the user inside the container must match the ownership on the node, which makes things more complicated. Of course, you could run the container as root, but that is not the recommended way.
To sum it up, just use persistent storage provisioned by Google. If you encounter problems with permissions, you probably need an init container to change the permissions, or you have to set a proper fsGroup for your container.
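As a hedged sketch of that recommendation (the claim name, size, and fsGroup value are illustrative assumptions; GKE's default storage class is used, so no storageClassName is set):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dataset-uploads-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
# In the Deployment's pod template: mount the claim and set fsGroup so that a
# non-root container user can write to the provisioned disk.
spec:
  securityContext:
    fsGroup: 1000
  containers:
    - name: dataset-sha256
      volumeMounts:
        - mountPath: /data/uploads
          name: dataset-volume-uploads
  volumes:
    - name: dataset-volume-uploads
      persistentVolumeClaim:
        claimName: dataset-uploads-pvc
Keep in mind that a GCE persistent disk with ReadWriteOnce can only be attached to one node at a time, so with replicas: 2 the pods must land on the same node; for shared writable storage you would need something like Filestore (NFS).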

Related

Service/pod starts up without any issues but cannot connect to service

So, I have a service (a Python Flask/uWSGI app) which I deploy via Helm to the Kubernetes cluster that runs locally on my Docker Desktop. I have successfully deployed the application: the pods are running fine, there are no issues with the application, and the app logs also look good. But when I hit the URL in the browser or via curl, I get a connection refused error.
Here's what my kubectl describe pod looks like:
Name: rhs-servicebase-6f676c458c-s6b8j
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Mon, 29 Aug 2022 07:54:30 -0700
Labels: app.kubernetes.io/instance=rhs
app.kubernetes.io/name=servicebase
pod-template-hash=6f676c458c
Annotations: <none>
Status: Running
IP: 10.1.0.44
IPs:
IP: 10.1.0.44
Controlled By: ReplicaSet/rhs-servicebase-6f676c458c
Containers:
servicebase:
Container ID: docker://7e7c3dca5a465deeba3312e54b789233a5c652b92dd9017cb2e422bd202130f6
Image: localhost:5000/referral-health-signal:latest
Image ID: docker-pullable://localhost:5000/referral-health-signal#sha256:72af6911f60def6af987d8b5f611ab8553f5771278b3821e911f437f93c73743
Port: 9000/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 29 Aug 2022 07:54:36 -0700
Ready: True
Restart Count: 0
Environment:
TZ: UTC
LOG_FILE: application.log
S3_BUCKET: livongo-int-healthsignal
AWS_ACCESS_KEY_ID: <set to the key 'aws-access-key-id' in secret 'referral-health-signal'> Optional: false
AWS_SECRET_ACCESS_KEY: <set to the key 'aws-secret-access-key' in secret 'referral-health-signal'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5vtmd (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
secret-referral-health-signal:
Type: Secret (a volume populated by a Secret)
SecretName: referral-health-signal
Optional: false
kube-api-access-5vtmd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned default/rhs-servicebase-6f676c458c-s6b8j to docker-desktop
Normal Pulling 15m kubelet Pulling image "localhost:5000/referral-health-signal:latest"
Normal Pulled 15m kubelet Successfully pulled image "localhost:5000/referral-health-signal:latest" in 347.31179ms
Normal Created 15m kubelet Created container servicebase
Normal Started 15m kubelet Started container servicebase
My deployment status:
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
rhs-servicebase 1/1 1 1 24m
My service status:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d22h
rhs-servicebase NodePort 10.97.229.137 <none> 9000:30001/TCP 29m
But when I do nc -zv localhost 30001 I see:
nc: connectx to localhost port 30001 (tcp) failed: Connection refused
nc: connectx to localhost port 30001 (tcp) failed: Connection refused
An lsof -i tcp:30001 also produces an empty result.
Pod spec:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2022-08-29T14:54:30Z"
generateName: rhs-servicebase-6f676c458c-
labels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/name: servicebase
pod-template-hash: 6f676c458c
name: rhs-servicebase-6f676c458c-s6b8j
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: rhs-servicebase-6f676c458c
uid: 0d076b86-44e8-41fa-b6ee-b73a066f20e0
resourceVersion: "23554"
uid: 6d251ab1-c240-4241-9041-7c03b4d0d9c8
spec:
containers:
- env:
- name: TZ
value: UTC
- name: LOG_FILE
value: application.log
- name: S3_BUCKET
value: livongo-int-healthsignal
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
key: aws-access-key-id
name: referral-health-signal
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
key: aws-secret-access-key
name: referral-health-signal
image: localhost:5000/referral-health-signal:latest
imagePullPolicy: Always
name: servicebase
ports:
- containerPort: 9000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-5vtmd
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: docker-desktop
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: secret-referral-health-signal
secret:
defaultMode: 420
items:
- key: aws-access-key-id
path: AWS_ACCESS_KEY_ID
- key: aws-secret-access-key
path: AWS_SECRET_ACCESS_KEY
secretName: referral-health-signal
- name: kube-api-access-5vtmd
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-08-29T14:54:30Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-08-29T14:54:36Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-08-29T14:54:36Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-08-29T14:54:30Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://7e7c3dca5a465deeba3312e54b789233a5c652b92dd9017cb2e422bd202130f6
image: referral-health-signal:latest
imageID: docker-pullable://localhost:5000/referral-health-signal#sha256:72af6911f60def6af987d8b5f611ab8553f5771278b3821e911f437f93c73743
lastState: {}
name: servicebase
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-08-29T14:54:36Z"
hostIP: 192.168.65.4
phase: Running
podIP: 10.1.0.44
podIPs:
- ip: 10.1.0.44
qosClass: BestEffort
startTime: "2022-08-29T14:54:30Z"
Deployment spec:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2022-08-29T14:54:30Z"
generation: 1
labels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: servicebase
helm.sh/chart: servicebase-0.1.5
name: rhs-servicebase
namespace: default
resourceVersion: "23558"
uid: aae0086a-ecfd-4c80-b633-92307e2f059d
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/name: servicebase
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/name: servicebase
spec:
containers:
- env:
- name: TZ
value: UTC
- name: LOG_FILE
value: application.log
- name: S3_BUCKET
value: livongo-int-healthsignal
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
key: aws-access-key-id
name: referral-health-signal
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
key: aws-secret-access-key
name: referral-health-signal
image: localhost:5000/referral-health-signal:latest
imagePullPolicy: Always
name: servicebase
ports:
- containerPort: 9000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: secret-referral-health-signal
secret:
defaultMode: 420
items:
- key: aws-access-key-id
path: AWS_ACCESS_KEY_ID
- key: aws-secret-access-key
path: AWS_SECRET_ACCESS_KEY
secretName: referral-health-signal
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2022-08-29T14:54:36Z"
lastUpdateTime: "2022-08-29T14:54:36Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2022-08-29T14:54:30Z"
lastUpdateTime: "2022-08-29T14:54:36Z"
message: ReplicaSet "rhs-servicebase-6f676c458c" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
I'm so lost, I don't even know where to start debugging. I'm on macOS (Big Sur).

Zookeeper pod can't access mounted persistent volume claim

I'm stuck with an annoying issue, where my pod can't access the mounted persistent volume.
Kubeadm: v1.19.2
Docker: 19.03.13
Zookeeper image: library/zookeeper:3.6
Cluster info: Locally hosted, no Cloud Provider
K8s configuration:
apiVersion: v1
kind: Service
metadata:
name: zk-hs
labels:
app: zk
spec:
selector:
app: zk
ports:
- port: 2888
targetPort: 2888
name: server
protocol: TCP
- port: 3888
targetPort: 3888
name: leader-election
protocol: TCP
clusterIP: ""
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: zk-cs
labels:
app: zk
spec:
selector:
app: zk
ports:
- name: client
protocol: TCP
port: 2181
targetPort: 2181
type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: zk-pdb
spec:
selector:
matchLabels:
app: zk
maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: zk
spec:
selector:
matchLabels:
app: zk
serviceName: zk-hs
replicas: 1
updateStrategy:
type: RollingUpdate
podManagementPolicy: OrderedReady
template:
metadata:
labels:
app: zk
spec:
volumes:
- name: zoo-config
configMap:
name: zoo-config
- name: datadir
persistentVolumeClaim:
claimName: zoo-pvc
containers:
- name: zookeeper
imagePullPolicy: Always
image: "library/zookeeper:3.6"
resources:
requests:
memory: "1Gi"
cpu: "0.5"
ports:
- containerPort: 2181
name: client
- containerPort: 2888
name: server
- containerPort: 3888
name: leader-election
volumeMounts:
- name: datadir
mountPath: /var/lib/zookeeper/data
- name: zoo-config
mountPath: /conf
securityContext:
fsGroup: 2000
runAsUser: 1000
runAsNonRoot: true
volumeClaimTemplates:
- metadata:
name: datadir
annotations:
volume.beta.kubernetes.io/storage-class: local-storage
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: local-storage
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
name: zoo-config
namespace: default
data:
zoo.cfg: |
tickTime=10000
dataDir=/var/lib/zookeeper/data
clientPort=2181
initLimit=10
syncLimit=4
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: zoo-pv
labels:
type: local
spec:
storageClassName: local-storage
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/mnt/data"
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <node-name>
I've tried running the pod as root with the following security context, which I know is a terrible idea, purely as a test. This however caused a bunch of other issues.
securityContext:
fsGroup: 0
runAsUser: 0
Once the pod starts up, the logs contain the following:
Zookeeper JMX enabled by default
Using config: /conf/zoo.cfg
<log4j Warnings>
Unable to access datadir, exiting abnormally
Inspecting the pod provides me with the following information:
~$ kubectl describe pod/zk-0
Name: zk-0
Namespace: default
Priority: 0
Node: <node>
Start Time: Sat, 26 Sep 2020 15:48:00 +0200
Labels: app=zk
controller-revision-hash=zk-6c68989bd
statefulset.kubernetes.io/pod-name=zk-0
Annotations: <none>
Status: Running
IP: <IP>
IPs:
IP: <IP>
Controlled By: StatefulSet/zk
Containers:
zookeeper:
Container ID: docker://281e177d677394604785542c231d21b71f1666a22e74c1c10ef88491dad7a522
Image: library/zookeeper:3.6
Image ID: docker-pullable://zookeeper#sha256:6c051390cfae7958ff427834937c353fc6c34484f6a84b3e4bc8c512b53a16f6
Ports: 2181/TCP, 2888/TCP, 3888/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 26 Sep 2020 16:04:26 +0200
Finished: Sat, 26 Sep 2020 16:04:27 +0200
Ready: False
Restart Count: 8
Requests:
cpu: 500m
memory: 1Gi
Environment: <none>
Mounts:
/conf from zoo-config (rw)
/var/lib/zookeeper/data from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-88x56 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-zk-0
ReadOnly: false
zoo-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: zoo-config
Optional: false
default-token-88x56:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-88x56
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned default/zk-0 to <node>
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.932381527s
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.960610662s
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.959935633s
Normal Created 16m (x4 over 17m) kubelet Created container zookeeper
Normal Pulled 16m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.92551645s
Normal Started 16m (x4 over 17m) kubelet Started container zookeeper
Normal Pulling 15m (x5 over 17m) kubelet Pulling image "library/zookeeper:3.6"
Warning BackOff 2m35s (x71 over 17m) kubelet Back-off restarting failed container
To me, it seems like the pod has full rw access to the volume, so I'm unsure why it's still refusing to access the directory. Any help will be appreciated!
After quite some digging, I finally figured out why it wasn't working. The logs were actually telling me all I needed to know in the end: the mounted persistentVolumeClaim simply did not have the correct file permissions to read from the mounted hostPath /mnt/data directory.
To fix this, in a somewhat hacky way, I gave read, write & execute permissions to all.
chmod 777 /mnt/data
An overview can be found here.
This is definitely not the most secure way of fixing the issue, and I would strongly advise against using it in any production-like environment.
Probably a better approach would be the following:
sudo usermod -a -G 1000 1000
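If you would rather not open the directory up to everyone, a sketch of an alternative (my own suggestion, not part of the original answer) is to have an init container chown the data directory to ZooKeeper's UID/GID before the main container starts:
initContainers:
- name: fix-permissions
  # busybox is an arbitrary small image used here for illustration
  image: busybox:1.36
  command: ["sh", "-c", "chown -R 1000:2000 /var/lib/zookeeper/data"]
  securityContext:
    # needs root to change ownership, so override the pod-level runAsNonRoot
    runAsUser: 0
    runAsNonRoot: false
  volumeMounts:
  - name: datadir
    mountPath: /var/lib/zookeeper/data
The UID/GID pair 1000:2000 matches the runAsUser and fsGroup already set in the StatefulSet's securityContext.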

WSL Kubernetes Pod stuck in ContainerCreating state due to volume mount

I am working with Docker Desktop on Windows 10 as well as the Windows Subsystem for Linux (WSL). I have a containerized app that I deploy to the local K8s cluster (courtesy of Docker Desktop). Typical story: all was working fine until one day a Docker Desktop update came and ruined everything. The DD version I have now is 2.3.0.2 stable.
I have a MySQL pod with a PV, PVC and storage class defined. When I deploy my app to the cluster, I can see that the PV and PVC are bound, but the pod is stuck in the ContainerCreating state:
$ kubectl describe pod mysql-6779d8fb8b-d25wz
Name: mysql-6779d8fb8b-d25wz
Namespace: typo3-connector
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Wed, 13 May 2020 14:21:43 +0200
Labels: app=mysql
pod-template-hash=6779d8fb8b
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-6779d8fb8b
Containers:
mysql:
Container ID:
Image: lw-mysql
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables from:
mysql-credentials Secret Optional: false
Environment: <none>
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wr6g9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-wr6g9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wr6g9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
Warning FailedScheduling <unknown> default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
Normal Scheduled <unknown> default-scheduler Successfully assigned typo3-connector/mysql-6779d8fb8b-d25wz to docker-desktop
Warning FailedMount 35s (x8 over 99s) kubelet, docker-desktop MountVolume.NewMounter initialization failed for volume "mysql-pv" : path "/c/kubernetes/typo3-8/mysql-storage/" does not exist
The error is
MountVolume.NewMounter initialization failed for volume "mysql-pv" : path "/c/kubernetes/typo3-8/mysql-storage/" does not exist
But the path actually exists on disk (in WSL that is).
The pv:
$ kubectl describe pv mysql-pv
Name: mysql-pv
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"mysql-pv"},"spec":{"accessModes":["ReadWriteOnce"],"capa...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Bound
Claim: typo3-connector/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [docker-desktop]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /c/kubernetes/typo3-8/mysql-storage/
Events: <none>
The pvc:
$ kubectl describe pvc mysql-pv-claim
Name: mysql-pv-claim
Namespace: typo3-connector
StorageClass: local-storage
Status: Bound
Volume: mysql-pv
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mysql-pv-claim","namespace":"typo3-connector"},"spe...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 2Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: mysql-6779d8fb8b-d25wz
Events: <none>
I tried running it from PowerShell, but no luck; I get the same result.
It was all working fine before the update.
Is it a configuration-based issue?
EDIT
Here's the manifest file:
apiVersion: v1
kind: Namespace
metadata:
name: typo3-connector
---
apiVersion: v1
data:
MYSQL_PASSWORD: ZHVtbXk=
MYSQL_ROOT_PASSWORD: ZHVtbXk=
MYSQL_USER: ZHVtbXk=
kind: Secret
metadata:
name: mysql-credentials
namespace: typo3-connector
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
labels:
app: mysql
name: mysql
namespace: typo3-connector
spec:
ports:
- name: mysql-backend
port: 3306
protocol: TCP
selector:
app: mysql
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
namespace: typo3-connector
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- envFrom:
- secretRef:
name: mysql-credentials
image: mysql:5.6
imagePullPolicy: IfNotPresent
name: mysql
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql-persistent-storage
imagePullSecrets:
- name: lwdockerregistry
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 2Gi
local:
path: /c/kubernetes/typo3-8/mysql-storage/
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- docker-desktop
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
namespace: typo3-connector
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: local-storage
You must use this format for the path:
/run/desktop/mnt/host/c/someDir/volumeDir
So in your case it is:
/run/desktop/mnt/host/c/kubernetes/typo3-8/mysql-storage/
source for solution
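Applied to the PV from the question, only the path needs to change; a minimal sketch with the rest of the spec unchanged:
spec:
  local:
    # Docker Desktop with the WSL2 backend exposes the Windows C: drive under this prefix
    path: /run/desktop/mnt/host/c/kubernetes/typo3-8/mysql-storage/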
UPDATE: You really don't want to use Windows host directories inside the WSL2 distro because the performance is really poor. Use the WSL2 distro's own directories instead; you can access them just as easily from the Windows host at this address: \\wsl$
As described in the following example taken from here, the path shouldn't end with /.
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-local-pv
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /mnt/disks/vol1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- my-node
Changing your path to path: /c/kubernetes/typo3-8/mysql-storage does the trick, and your PVC works as designed.

Kubernetes: failed to mount unformatted volume as read only

I am trying to use a gcePersistentDisk as ReadOnlyMany so that my pods on multiple nodes can read the data on this disk, following the documentation here.
To create and later format the GCE persistent disk, I have followed the instructions in the documentation here. Following that doc, I SSHed into one of the nodes and formatted the disk. See below for the complete error as well as the other YAML files.
kubectl describe pods -l podName
Name: punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-mycluster-default-pool-b1c1d316-d016/10.160.0.12
Start Time: Thu, 25 Apr 2019 23:55:38 +0530
Labels: app.kubernetes.io/instance=punk-fly
app.kubernetes.io/name=nodejs
pod-template-hash=1866836461
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container nodejs
Status: Pending
IP:
Controlled By: ReplicaSet/punk-fly-nodejs-deployment-5dbbd7b8b5
Containers:
nodejs:
Container ID:
Image: rajesh12/smartserver:server
Image ID:
Port: 3002/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment:
MYSQL_HOST: mysqlservice
MYSQL_DATABASE: app
MYSQL_ROOT_PASSWORD: password
Mounts:
/usr/src/ from helm-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jpkzg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
helm-vol:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-readonly-pvc
ReadOnly: true
default-token-jpkzg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jpkzg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m default-scheduler Successfully assigned default/punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs to gke-mycluster-default-pool-b1c1d316-d016
Normal SuccessfulAttachVolume 1m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-9c796180-677e-11e9-ad35-42010aa0000f"
Warning FailedMount 10s (x8 over 1m) kubelet, gke-mycluster-default-pool-b1c1d316-d016 MountVolume.MountDevice failed for volume "pvc-9c796180-677e-11e9-ad35-42010aa0000f" : failed to mount unformatted volume as read only
Warning FailedMount 0s kubelet, gke-mycluster-default-pool-b1c1d316-d016 Unable to mount volumes for pod "punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs_default(86293044-6787-11e9-ad35-42010aa0000f)": timeout expired waiting for volumes to attach or mount for pod "default"/"punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs". list of unmounted volumes=[helm-vol]. list of unattached volumes=[helm-vol default-token-jpkzg]
readonly_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-readonly-pv
spec:
storageClassName: ""
capacity:
storage: 1G
accessModes:
- ReadOnlyMany
gcePersistentDisk:
pdName: mydisk0
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-readonly-pvc
spec:
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 1G
deployment.yaml
volumes:
- name: helm-vol
persistentVolumeClaim:
claimName: my-readonly-pvc
readOnly: true
containers:
- name: {{ .Values.app.backendName }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tagServer }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: MYSQL_HOST
value: mysqlservice
- name: MYSQL_DATABASE
value: app
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- name: http-backend
containerPort: 3002
volumeMounts:
- name: helm-vol
mountPath: /usr/src/
It sounds like your PVC is dynamically provisioning a new volume through the default StorageClass, and that volume is not formatted.
It could also be that your Pod is being created in a different availability zone from where you have the PV provisioned. The gotcha with having multiple Pod readers for the GCE volume is that the Pods always have to be in the same availability zone.
Some options:
- Simply create and format the PV in the same availability zone as your node.
- When you define your PV, you could specify node affinity to make sure it always gets assigned to a specific node.
- Define a StorageClass that specifies the filesystem:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: mysc
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
fsType: ext4
And then use it in your PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 1G
storageClassName: mysc
The volume will be automatically provisioned and formatted.
I received the same error message when trying to provision a persistent disk with an access mode of ReadWriteOnce. What fixed the issue for me was removing the property readOnly: true from the volumes declaration of the Deployment spec. In the case of your deployment.yaml file, that would be this block:
volumes:
- name: helm-vol
persistentVolumeClaim:
claimName: my-readonly-pvc
readOnly: true
Try removing that line and see if the error goes away.
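That is, a sketch of the volumes block with the flag dropped:
volumes:
- name: helm-vol
  persistentVolumeClaim:
    claimName: my-readonly-pvc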
I had the same error and managed to fix it using a few lines from a related article about using preexisting disks:
https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd
You need to add storageClassName and volumeName to your persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-claim-demo
spec:
# It's necessary to specify "" as the storageClassName
# so that the default storage class won't be used, see
# https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
storageClassName: ""
volumeName: pv-demo
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 500G

Does kubectl replace not update environment variables?

I am adding an environment variable to a Kubernetes replication controller spec, but when I update the running RC from the spec, the environment variable isn't added to it. How come?
I update the RC according to the following spec (via kubectl replace -f docker/podspecs/web-controller.yaml), where the environment variable IRON_PASSWORD has been added since the previous revision, but the running RC isn't updated correspondingly:
apiVersion: v1
kind: ReplicationController
metadata:
name: web
labels:
app: web
spec:
replicas: 1
selector:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- name: web
image: quay.io/aknuds1/muzhack
# Always pull latest version of image
imagePullPolicy: Always
env:
- name: APP_URI
value: https://staging.muzhack.com
- name: IRON_PASSWORD
value: password
ports:
- name: http-server
containerPort: 80
imagePullSecrets:
- name: quay.io
After updating the RC according to the spec, the pod looks like this (kubectl get pod web-scpc3 -o yaml); notice that IRON_PASSWORD is missing:
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/created-by: |
{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"web","uid":"c1c4185f-0867-11e6-b557-42010af000f7","apiVersion":"v1","resourceVersion":"17714"}}
kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
web'
creationTimestamp: 2016-04-22T08:54:00Z
generateName: web-
labels:
app: web
name: web-scpc3
namespace: default
resourceVersion: "17844"
selfLink: /api/v1/namespaces/default/pods/web-scpc3
uid: c1c5035f-0867-11e6-b557-42010af000f7
spec:
containers:
- env:
- name: APP_URI
value: https://staging.muzhack.com
image: quay.io/aknuds1/muzhack
imagePullPolicy: Always
name: web
ports:
- containerPort: 80
name: http-server
protocol: TCP
resources:
requests:
cpu: 100m
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-vfutp
readOnly: true
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: quay.io
nodeName: gke-staging-default-pool-f98acf11-ba7d
restartPolicy: Always
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
volumes:
- name: default-token-vfutp
secret:
secretName: default-token-vfutp
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2016-04-22T09:00:49Z
message: 'containers with unready status: [web]'
reason: ContainersNotReady
status: "False"
type: Ready
containerStatuses:
- containerID: docker://dae22acb9f236433389ac0c51b730423ef9159d0c0e12770a322c70201fb7e2a
image: quay.io/aknuds1/muzhack
imageID: docker://8fef42c3eba5abe59c853e9ba811b3e9f10617a257396f48e564e3206e0e1103
lastState:
terminated:
containerID: docker://dae22acb9f236433389ac0c51b730423ef9159d0c0e12770a322c70201fb7e2a
exitCode: 1
finishedAt: 2016-04-22T09:00:48Z
reason: Error
startedAt: 2016-04-22T09:00:46Z
name: web
ready: false
restartCount: 6
state:
waiting:
message: Back-off 5m0s restarting failed container=web pod=web-scpc3_default(c1c5035f-0867-11e6-b557-42010af000f7)
reason: CrashLoopBackOff
hostIP: 10.132.0.3
phase: Running
podIP: 10.32.0.3
startTime: 2016-04-22T08:54:00Z
Replacing the ReplicationController object does not actually recreate the underlying pods, so the pods keep the spec from the previous configuration of the RC until they need to be recreated. If you delete the running pod, the new one that gets created to replace it will have the new environment variable.
This is what the kubectl rolling-update command is for, and part of the reason why the Deployment type was added in Kubernetes 1.2.
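For illustration, a hedged sketch of the same spec rewritten as a Deployment (using the current apps/v1 API, which is my assumption rather than something from the original answer), so that editing an environment variable and running kubectl apply -f rolls the pods out automatically:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: quay.io/aknuds1/muzhack
        # Always pull latest version of image
        imagePullPolicy: Always
        env:
        - name: APP_URI
          value: https://staging.muzhack.com
        - name: IRON_PASSWORD
          value: password
        ports:
        - name: http-server
          containerPort: 80
      imagePullSecrets:
      - name: quay.io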
