kubernetes: can't deploy jenkins images with persistent volume with RW access - docker

With Kubernetes, I'm trying to deploy a Jenkins image with a persistent volume mapped to an NFS share (which is mounted on all my workers).
This is the share on my workers:
[root@pp-tmp-test24 /opt]# df -Th /opt/jenkins.persistent
Filesystem Type Size Used Avail Use% Mounted on
xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP nfs4 10G 9.5M 10G 1% /opt/jenkins.persistent
And my data on this share:
[root@pp-tmp-test24 /opt/jenkins.persistent]# ls -l
total 0
-rwxr-xr-x. 1 root root 0 Oct 2 11:53 newfile
[root@pp-tmp-test24 /opt/jenkins.persistent]# cat newfile
hello
Here are my yaml files to deploy it.
My PersistentVolume yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-nfs
  labels:
    type: type-nfs
spec:
  storageClassName: class-nfs
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /opt/jenkins.persistent
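For reference, the same PV defined directly against the NFS server (instead of going through the hostPath mount on the workers) would look like this; the server address is elided here as in the df output above:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-nfs
  labels:
    type: type-nfs
spec:
  storageClassName: class-nfs
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: xxx.xxx.xxx.xxx   # NFS server, elided as above
    path: /VR_C_CS003_NFS_KUBERNETESPV_TMP_PP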
My PersistentVolumeClaim yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc-nfs
  namespace: ns-jenkins
spec:
  storageClassName: class-nfs
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: type-nfs
And my deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: ns-jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - image: jenkins
        #- image: httpd:latest
          name: jenkins
          ports:
            - containerPort: 8080
              protocol: TCP
              name: jenkins-web
          volumeMounts:
            - name: jenkins-persistent-storage
              mountPath: /var/foo
      volumes:
        - name: jenkins-persistent-storage
          persistentVolumeClaim:
            claimName: jenkins-pvc-nfs
After the kubectl create -f command, everything looks good:
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
jenkins-pv-nfs 10Gi RWX Recycle Bound ns-jenkins/jenkins-pvc-nfs class-nfs 37s
# kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ns-jenkins jenkins-pvc-nfs Bound jenkins-pv-nfs 10Gi RWX class-nfs 35s
# kubectl get pods -A |grep jenkins
ns-jenkins jenkins-5bdb8678c-x6vht 1/1 Running 0 14s
# kubectl describe pod jenkins-5bdb8678c-x6vht -n ns-jenkins
Name: jenkins-5bdb8678c-x6vht
Namespace: ns-jenkins
Priority: 0
Node: pp-tmp-test25.mydomain/172.31.68.225
Start Time: Wed, 02 Oct 2019 11:48:23 +0200
Labels: app=jenkins
pod-template-hash=5bdb8678c
Annotations: <none>
Status: Running
IP: 10.244.5.47
Controlled By: ReplicaSet/jenkins-5bdb8678c
Containers:
jenkins:
Container ID: docker://8a3e4871ed64b371818bac59e24d6912e5d2b13c8962c1639d36797fbce8082e
Image: jenkins
Image ID: docker-pullable://docker.io/jenkins@sha256:eeb4850eb65f2d92500e421b430ed1ec58a7ac909e91f518926e02473904f668
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 02 Oct 2019 11:48:26 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/foo from jenkins-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dz6cd (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
jenkins-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: jenkins-pvc-nfs
ReadOnly: false
default-token-dz6cd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dz6cd
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 39s default-scheduler Successfully assigned ns-jenkins/jenkins-5bdb8678c-x6vht to pp-tmp-test25.mydomain
Normal Pulling 38s kubelet, pp-tmp-test25.mydomain Pulling image "jenkins"
Normal Pulled 36s kubelet, pp-tmp-test25.mydomain Successfully pulled image "jenkins"
Normal Created 36s kubelet, pp-tmp-test25.mydomain Created container jenkins
Normal Started 36s kubelet, pp-tmp-test25.mydomain Started container jenkins
On my worker, this is my container
# docker ps |grep jenkins
8a3e4871ed64 docker.io/jenkins@sha256:eeb4850eb65f2d92500e421b430ed1ec58a7ac909e91f518926e02473904f668 "/bin/tini -- /usr..." 2 minutes ago Up 2 minutes k8s_jenkins_jenkins-5bdb8678c-x6vht_ns-jenkins_64b66dae-a1da-4d90-83fd-ff433638dc9c_0
So I launch a shell in my container, and I can see my data in /var/foo:
# docker exec -t -i 8a3e4871ed64 /bin/bash
jenkins@jenkins-5bdb8678c-x6vht:/$ df -h /var/foo
Filesystem Size Used Avail Use% Mounted on
xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP 10G 9.5M 10G 1% /var/foo
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ ls -lZ /var/foo -d
drwxr-xr-x. 2 root root system_u:object_r:nfs_t:s0 4096 Oct 2 10:06 /var/foo
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ ls -lZ /var/foo
-rwxr-xr-x. 1 root root system_u:object_r:nfs_t:s0 12 Oct 2 10:05 newfile
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ cat newfile
hello
When I try to write data to /var/foo/newfile, permission is denied:
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ echo "world" >> newfile
bash: newfile: Permission denied
Same thing in the /var/foo/ directory itself, I can't write data:
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ touch newfile2
touch: cannot touch 'newfile2': Permission denied
So I tried another image, httpd:latest, in my deployment yaml (keeping the same name in my yaml definition):
[...]
      containers:
        #- image: jenkins
        - image: httpd:latest
[...]
# docker ps |grep jenkins
fa562400405d docker.io/httpd@sha256:39d7d9a3ab93c0ad68ee7ea237722ed1b0016ff6974d80581022a53ec1e58797 "httpd-foreground" 50 seconds ago Up 48 seconds k8s_jenkins_jenkins-7894877f96-6dj85_ns-jenkins_540b12bd-69df-44d8-b3df-20a0a96cc851_0
In my new container, this time I can read and write data:
root@jenkins-7894877f96-6dj85:/usr/local/apache2# df -h /var/foo
Filesystem Size Used Avail Use% Mounted on
xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP 10G 9.6M 10G 1% /var/foo
root@jenkins-7894877f96-6dj85:/var/foo# ls -lZ
total 0
-rwxr-xr-x. 1 root root system_u:object_r:nfs_t:s0 12 Oct 2 10:05 newfile
-rw-r--r--. 1 root root system_u:object_r:nfs_t:s0 0 Oct 2 10:06 newfile2
root@jenkins-7894877f96-6dj85:/var/foo# ls -lZ /var/foo -d
drwxr-xr-x. 2 root root system_u:object_r:nfs_t:s0 4096 Oct 2 10:06 /var/foo
root@jenkins-7894877f96-6dj85:/var/foo# ls -l
total 0
-rwxr-xr-x. 1 root root 6 Oct 2 09:55 newfile
root@jenkins-7894877f96-6dj85:/var/foo# echo "world" >> newfile
root@jenkins-7894877f96-6dj85:/var/foo# touch newfile2
root@jenkins-7894877f96-6dj85:/var/foo# ls -l
total 0
-rwxr-xr-x. 1 root root 12 Oct 2 10:05 newfile
-rw-r--r--. 1 root root 0 Oct 2 10:06 newfile2
What am I doing wrong? Is the problem due to the jenkins image not allowing RW access? I have the same problem with local storage (on my worker) used as a persistent volume.
One other thing, perhaps a stupid question: with my jenkins image, I would like to mount the /var/jenkins_home dir on a persistent volume in order to keep Jenkins's configuration files. But if I try to mount /var/jenkins_home instead of /var/foo, the pod goes into CrashLoopBackOff (because there is already data stored in /var/jenkins_home).
Thank you all for your help!

I noticed you are trying to write as the jenkins user on jenkins-5bdb8678c-x6vht, which might not have write permissions in that root:root directory.
You might want to change that directory's permissions to match the jenkins user's privileges.
Try to verify that this is causing the issue by using sudo before writing to the file.
If sudo is not installed, exec in as the root user with the --user flag, so it's just like the other cases where writing worked:
docker exec -t -i -u root 8a3e4871ed64 /bin/bash
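Alternatively, the ownership can be fixed from inside the pod with an init container. A minimal sketch, assuming the official jenkins image runs as UID/GID 1000 (and that the NFS export does not squash root; otherwise the chown has to be done on the NFS server itself):

spec:
  initContainers:
    - name: fix-volume-perms
      image: busybox
      # runs as root before the jenkins container starts
      command: ["sh", "-c", "chown -R 1000:1000 /var/foo"]
      volumeMounts:
        - name: jenkins-persistent-storage
          mountPath: /var/foo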

@Piotr Malec Thank you. Yes, I realized that: jenkins is the default user when I connect to my container:
docker exec -t -i 46d2497d440d /bin/bash
jenkins@jenkins-7bcdd5db57-8qgth:/$
So, as a test, I changed the permissions on /opt/jenkins.persistent to 777 on my worker, and now I have RW permission on this mount:
xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP 10G 9.5M 10G 1% /var/foo
jenkins@jenkins-7bcdd5db57-8qgth:/$ cd /var
jenkins@jenkins-7bcdd5db57-8qgth:/var$ ls -l
[...]
drwxrwxrwx. 2 root root 4096 Oct 4 13:41 foo
[...]
jenkins@jenkins-7bcdd5db57-8qgth:/var$ cd /var/foo
jenkins@jenkins-7bcdd5db57-8qgth:/var/foo$ touch newfile
jenkins@jenkins-7bcdd5db57-8qgth:/var/foo$ ls
newfile
So I added a jenkins user account on my worker and ran chown jenkins:jenkins on my /opt/jenkins.persistent directory. Now, inside my container, I have RW permission:
jenkins@jenkins-7bcdd5db57-8qgth:/var$ ls -l
[...]
drwxr-xr-x. 2 jenkins jenkins 4096 Oct 4 13:53 foo
[...]
jenkins@jenkins-7bcdd5db57-8qgth:/var$ cd foo
jenkins@jenkins-7bcdd5db57-8qgth:/var/foo$ touch newfile2
jenkins@jenkins-7bcdd5db57-8qgth:/var/foo$ ls -l
-rw-r--r--. 1 jenkins jenkins 0 Oct 4 13:53 newfile2
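For completeness, the UID/GID the container runs as can also be checked through kubectl alone (pod name from above):
kubectl exec -n ns-jenkins jenkins-7bcdd5db57-8qgth -- id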

Related

Openshift missing permissions to create a file

The Spring Boot application is deployed on OpenShift 4. This application needs to create a file on the NFS share.
The OpenShift container has a volume mount configured of type NFS.
The container on OpenShift runs with a random user id:
sh-4.2$ id
uid=1031290500(1031290500) gid=0(root) groups=0(root),1031290500
The mount point is /nfs/abc
sh-4.2$ ls -la /nfs/
ls: cannot access /nfs/abc: Permission denied
total 0
drwxr-xr-x. 1 root root 29 Nov 25 09:34 .
drwxr-xr-x. 1 root root 50 Nov 25 10:09 ..
d?????????? ? ? ? ? ? abc
On the Docker image I created a user "technical" with uid= gid=48760, as shown below.
FROM quay.repository
MAINTAINER developer
LABEL description="abc image" \
name="abc" \
version="1.0"
ARG APP_HOME=/opt/app
ARG PORT=8080
ENV JAR=app.jar \
SPRING_PROFILES_ACTIVE=default \
JAVA_OPTS=""
RUN mkdir $APP_HOME
ADD $JAR $APP_HOME/
WORKDIR $APP_HOME
EXPOSE $PORT
ENTRYPOINT java $JAVA_OPTS -Dspring.profiles.active=$SPRING_PROFILES_ACTIVE -jar $JAR
My deployment config file is shown below:
spec:
  volumes:
    - name: bad-import-file
      persistentVolumeClaim:
        claimName: nfs-test-pvc
  containers:
    - resources:
        limits:
          cpu: '1'
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 512Mi
      terminationMessagePath: /dev/termination-log
      name: abc
      env:
        - name: SPRING_PROFILES_ACTIVE
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: spring.profiles.active
        - name: DB_URL
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: db.url
        - name: DB_USERNAME
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: db.username
        - name: BAD_IMPORT_PATH
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: bad.import.path
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: abc-secret
              key: db.password
      ports:
        - containerPort: 8080
          protocol: TCP
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: bad-import-file
          mountPath: /nfs/abc
  dnsPolicy: ClusterFirst
  securityContext:
    runAsGroup: 44337
    runAsNonRoot: true
    supplementalGroups:
      - 44337
The PV request is as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: abc-tuc-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: classic-nfs
  mountOptions:
    - hard
    - nfsvers=3
  nfs:
    path: /tm03v06_vol3014
    server: tm03v06cl02.jit.abc.com
    readOnly: false
Now the OpenShift user has id:
sh-4.2$ id
uid=1031290500(1031290500) gid=44337(technical) groups=44337(technical),1031290500
RECENT UPDATE
Just to be clear about the problem, below are two commands from the same pod terminal.
sh-4.2$ cd /nfs/
sh-4.2$ ls -la (The first command I tried immediately after pod creation.)
total 8
drwxr-xr-x. 1 root root 29 Nov 29 08:20 .
drwxr-xr-x. 1 root root 50 Nov 30 08:19 ..
drwxrwx---. 14 technical technical 8192 Nov 28 19:06 abc
sh-4.2$ ls -la    (a few seconds later on the same pod terminal)
ls: cannot access abc: Permission denied
total 0
drwxr-xr-x. 1 root root 29 Nov 29 08:20 .
drwxr-xr-x. 1 root root 50 Nov 30 08:19 ..
d?????????? ? ? ? ? ? abc
So the problem is that I see these question marks (???) on the mount point.
The mount itself is working, but I cannot access the /nfs/abc directory, and I see these question marks for some reason.
UPDATE
sh-4.2$ ls -la /nfs/abc/
ls: cannot open directory /nfs/abc/: Stale file handle
sh-4.2$ ls -la /nfs/abc/    (after a few seconds on the same pod terminal)
ls: cannot access /nfs/abc/: Permission denied
Could this STALE FILE HANDLE be the reason for this issue?
TL;DR
You can use the anyuid security context to run the pod to avoid having OpenShift assign an arbitrary UID, and set the permissions on the volume to the known UID of the user.
OpenShift will override the user ID that the image itself specifies it should run as:
The user ID isn't actually entirely random, but is an assigned user ID which is unique to your project. In fact, your project is assigned a range of user IDs that applications can be run as. The set of user IDs will not overlap with other projects. You can see what range is assigned to a project by running oc describe on the project.
The purpose of assigning each project a distinct range of user IDs is so that in a multitenant environment, applications from different projects never run as the same user ID. When using persistent storage, any files created by applications will also have different ownership in the file system.
... this is a blessing and a curse, when using shared persistent volume claims for example (e.g. PVC's mounted in ReadWriteMany with multiple pods that read / write data - files created by one pod won't be accessible by the other pod because of the incorrect file ownership and permissions).
One way to get around this issue is using the anyuid security context which "provides all features of the restricted SCC, but allows users to run with any UID and any GID".
When using the anyuid security context, we know the user and group IDs the pod(s) are going to run as, and we can set the permissions on the shared volume in advance. (By default, all pods run with the restricted security context.)
When running the pod with the anyuid security context, OpenShift doesn't assign an arbitrary UID from the range of UIDs allocated for the namespace.
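Granting the anyuid SCC is done per service account; a sketch, assuming the pod runs under the default service account of the current project:
oc adm policy add-scc-to-user anyuid -z default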
This is just an example, but an image built with a non-root user with a fixed UID and GID (e.g. 1000:1000) would run in OpenShift as that user, files would be created with that user's ownership (e.g. 1000:1000), and permissions can be set on the PVC to the known UID and GID of the user the service runs as. For example, we can create a new PVC:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: k8s
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
  storageClassName: portworx-shared-sc
EOF
... then mount it in a pod:
kubectl run -i --rm --tty ansible --image=lazybit/ansible:v4.0.0 --restart=Never -n k8s --overrides='
{
  "apiVersion": "v1",
  "kind": "Pod",
  "spec": {
    "serviceAccountName": "default",
    "containers": [
      {
        "name": "nginx",
        "imagePullPolicy": "Always",
        "image": "lazybit/ansible:v4.0.0",
        "command": ["ash"],
        "stdin": true,
        "stdinOnce": true,
        "tty": true,
        "env": [
          {
            "name": "POD_NAME",
            "valueFrom": {
              "fieldRef": {
                "apiVersion": "v1",
                "fieldPath": "metadata.name"
              }
            }
          }
        ],
        "volumeMounts": [
          {
            "mountPath": "/data",
            "name": "data"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "data",
        "persistentVolumeClaim": {
          "claimName": "data"
        }
      }
    ]
  }
}'
... and create files in the PVC as the USER set in the Dockerfile.
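Once inside that shell, a quick sanity check confirms the effective IDs and that new files land with the expected ownership (paths from the example above):

# inside the pod
id                    # effective UID/GID the container runs as
touch /data/test-file # write to the shared PVC
ls -ln /data          # numeric ownership of the new file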

Kubernetes VolumeMount Path contains Timestamp

I'm using the following tech:
helm
argocd
k8s
I created a secret:
$ kubectl create secret generic my-secret --from-file=my-secret=/Users/superduper/project/src/main/resources/config-file.json --dry-run=client -o yaml
apiVersion: v1
data:
  my-secret: <content>
kind: Secret
metadata:
  creationTimestamp: null
  name: my-secret
I then added the secret to my pod via a volume mount:
volumeMounts:
  - mountPath: "/etc/config"
    name: config
    readOnly: true
volumes:
  - name: config
    secret:
      secretName: my-secret
But the problem is that when I view the /etc/config directory, the contents show my-secret under a timestamp directory:
directory:/etc/config/..2021_07_10_20_14_55.980073047
file:/etc/config/..2021_07_10_20_14_55.980073047/my-secret
Is this normal? Is there any way I can get rid of that timestamp so I can programmatically grab the config secret?
This is the way Kubernetes mounts Secrets and ConfigMaps by default, in order to propagate changes down to those volume mounts when an upstream change occurs. If you would rather not use a symlink and want to forfeit that ability, use the subPath directive and your mount will appear as you wish. (Note that a subPath volume mount will not receive updates when the Secret changes later.)
volumeMounts:
  - mountPath: /etc/config/my-secret
    name: config
    subPath: my-secret
    readOnly: true
volumes:
  - name: config
    secret:
      secretName: my-secret
$ k exec alpine -it -- /bin/ash
/ # ls -lah /etc/config/
total 12K
drwxr-xr-x 2 root root 4.0K Jul 10 22:58 .
drwxr-xr-x 1 root root 4.0K Jul 10 22:58 ..
-rw-r--r-- 1 root root 9 Jul 10 22:58 my-secret
/ # cat /etc/config/my-secret
hi there

minikube shared volume does not show files after some time

I have to share my local .ssh directory contents with a pod. I searched for this and, from one of the posts, got the answer to start minikube with --mount-string:
$ minikube start --mount-string="$HOME/.ssh/:/ssh-directory" --mount
πŸ˜„ minikube v1.9.2 on Darwin 10.14.6
✨ Using the docker driver based on existing profile
πŸ‘ Starting control plane node m01 in cluster minikube
🚜 Pulling base image ...
πŸ”„ Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
β–ͺ kubeadm.pod-network-cidr=10.244.0.0/16
E0426 23:44:18.447396 80170 kubeadm.go:331] Overriding stale ClientConfig host https://127.0.0.1:32810 with https://127.0.0.1:32813
πŸ“ Creating mount /Users/myhome/.ssh/:/ssh-directory ...
🌟 Enabling addons: default-storageclass, storage-provisioner
πŸ„ Done! kubectl is now configured to use "minikube"
❗ /usr/local/bin/kubectl is v1.15.5, which may be incompatible with Kubernetes v1.18.0.
πŸ’‘ You can also use 'minikube kubectl -- get pods' to invoke a matching version
When I check docker for the given minikube, it returns:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad64f642b63 gcr.io/k8s-minikube/kicbase:v0.0.8 "/usr/local/bin/entr…" 3 weeks ago Up 45 seconds 127.0.0.1:32815->22/tcp, 127.0.0.1:32814->2376/tcp, 127.0.0.1:32813->8443/tcp minikube
And I check whether the .ssh directory contents are there:
$ docker exec -it 5ad64f642b63 ls /ssh-directory
id_rsa id_rsa.pub known_hosts
I have a deployment yml as follows:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    stack: api
    app: api-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-web
  template:
    metadata:
      labels:
        app: api-web
    spec:
      containers:
        - name: api-web-pod
          image: tiangolo/uwsgi-nginx-flask
          ports:
            - name: api-web-port
              containerPort: 80
          envFrom:
            - secretRef:
                name: api-secrets
          volumeMounts:
            - name: ssh-directory
              mountPath: /app/.ssh
      volumes:
        - name: ssh-directory
          hostPath:
            path: /ssh-directory/
            type: Directory
When it ran, it gave an error for /ssh-directory:
$ kubectl describe pod/api-deployment-f65db9c6c-cwtvt
Name: api-deployment-f65db9c6c-cwtvt
Namespace: default
Priority: 0
Node: minikube/172.17.0.2
Start Time: Sat, 02 May 2020 23:07:51 -0500
Labels: app=api-web
pod-template-hash=f65db9c6c
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/api-deployment-f65db9c6c
Containers:
api-web-pod:
Container ID:
Image: tiangolo/uwsgi-nginx-flask
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables from:
api-secrets Secret Optional: false
Environment: <none>
Mounts:
/app/.ssh from ssh-directory (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9shz5 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
ssh-directory:
Type: HostPath (bare host directory volume)
Path: /ssh-directory/
HostPathType: Directory
default-token-9shz5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9shz5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/api-deployment-f65db9c6c-cwtvt to minikube
Warning FailedMount 11m kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[ssh-directory], unattached volumes=[default-token-9shz5 ssh-directory]: timed out waiting for the condition
Warning FailedMount 2m13s (x4 over 9m) kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[ssh-directory], unattached volumes=[ssh-directory default-token-9shz5]: timed out waiting for the condition
Warning FailedMount 62s (x14 over 13m) kubelet, minikube MountVolume.SetUp failed for volume "ssh-directory" : hostPath type check failed: /ssh-directory/ is not a directory
When I check the content of /ssh-directory in docker, it gives an IO error:
$ docker exec -it 5ad64f642b63 ls /ssh-directory
ls: cannot access '/ssh-directory': Input/output error
I know there are default mount points for Minikube. As mentioned in https://minikube.sigs.k8s.io/docs/handbook/mount/,
+---------------+---------+--------------+------------------+
| Driver        | OS      | HostFolder   | VM               |
+---------------+---------+--------------+------------------+
| VirtualBox    | Linux   | /home        | /hosthome        |
+---------------+---------+--------------+------------------+
| VirtualBox    | macOS   | /Users       | /Users           |
+---------------+---------+--------------+------------------+
| VirtualBox    | Windows | C://Users    | /c/Users         |
+---------------+---------+--------------+------------------+
| VMware Fusion | macOS   | /Users       | /Users           |
+---------------+---------+--------------+------------------+
| KVM           | Linux   | Unsupported. |                  |
+---------------+---------+--------------+------------------+
| HyperKit      | Linux   | Unsupported  | (see NFS mounts) |
+---------------+---------+--------------+------------------+
But I installed minikube with brew install minikube, and it set the driver to docker:
$ cat ~/.minikube/config/config.json
{
    "driver": "docker"
}
There is no mapping for the docker driver in the mount points table.
Initially, this directory had the files, but somehow, when I try to create the pod, they get deleted or something goes wrong.
While reproducing this on Ubuntu I encountered the exact same issue.
The directory indeed looked like it was mounted, but the files were missing, which led me to think that this is a general issue with mounting directories with the docker driver.
There is an open issue on GitHub about the same problem (mount directory empty) and an open feature request to mount host volumes into the docker driver.
Inspecting the minikube container shows no record of that mounted volume, and confirms the information mentioned in the GitHub request that the only volume shared with the host as of now is the one mounted by default (that is, /var/lib/docker/volumes/minikube/_data mounted into minikube's /var directory):
$ docker inspect minikube
"Mounts": [
{
"Type": "volume",
"Name": "minikube",
"Source": "/var/lib/docker/volumes/minikube/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
As a workaround, you could copy your .ssh directory into the running minikube docker container with the following command:
docker cp $HOME/.ssh minikube:<DESIRED_DIRECTORY>
and then mount this desired directory into the pod.
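For example, reusing the /ssh-directory path from the deployment above (the exact target directory is an assumption):

# create the target directory inside the minikube node container, then copy
docker exec minikube mkdir -p /ssh-directory
docker cp $HOME/.ssh/. minikube:/ssh-directory

After that, the hostPath volume in the deployment above resolves to a real directory on the minikube node.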

Kubernetes openshift : Permission denied during deployment

I am using the following snippet to create the deployment
oc create -f nginx-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        openshift.io/scc: privileged
    spec:
      securityContext:
        privileged: false
        runAsUser: 0
      volumes:
        - name: static-web-volume
          hostPath:
            path: /home/testFolder
            type: Directory
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: static-web-volume
I am getting a permission denied issue when I try to go inside the html folder:
$ cd /usr/share/nginx/html
$ ls
ls: cannot open directory .: Permission denied
This is the simplest sample code, as I have a similar requirement where I have to read files from the mounted drives, but that one is failing as well.
I am using Kubernetes 1.5, as this is the only version available. I am not sure whether the volumes have been mounted or not.
All my dir permissions are set to root as well.
Content of /home/testFolder:
0 drwxrwxrwx. 3 root root 52 Apr 15 23:06 .
4 dr-xr-x---. 11 root root 4096 Apr 15 22:58 ..
0 drwxrwxrwx. 2 root root 6 Apr 15 19:56 ind
4 -rwxrwxrwx. 1 root root 14 Apr 15 19:22 index.html
4 -rwxrwxrwx. 1 root root 694 Apr 15 23:06 ordr.yam
I remember hitting this one in OpenShift some time back. It has something to do with the SELinux configuration on the host.
Try this on the host server, for the directory you mount into the container as /usr/share/nginx/html:
sudo chcon -Rt svirt_sandbox_file_t /home/testFolder
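If relabeling files on the host is not desirable, an SELinux label can also be requested from the pod spec instead; a hedged sketch (the level value is the placeholder used in the Kubernetes docs and must match the labels on your volume):

spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"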

Unable to mount MySQL data volume to Kubernetes Minikube pod

I'm trying to set up a dev environment with Kubernetes via Minikube. I successfully mounted the same volume to the same data dir on the same image with Docker for Mac, but I'm having trouble with Minikube.
Relevant files and logs:
db-pod.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    name: msyql
  name: db
  namespace: default
spec:
  containers:
    - name: mysqldev
      image: mysql/mysql-server:5.6.32
      ports:
        - containerPort: 3306
          hostPort: 3306
      volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: volumesnew
  volumes:
    - name: volumesnew
      hostPath:
        path: "/Users/eric/Volumes/mysql"
kubectl get pods:
NAME READY STATUS RESTARTS AGE
db 0/1 Error 1 3s
kubectl logs db:
2016-08-29 20:05:55 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-08-29 20:05:55 0 [Note] mysqld (mysqld 5.6.32) starting as process 1 ...
2016-08-29 20:05:55 1 [Warning] Setting lower_case_table_names=2 because file system for /var/lib/mysql/ is case insensitive
kubectl describe pods db:
Name: db
Namespace: default
Node: minikubevm/10.0.2.15
Start Time: Wed, 31 Aug 2016 07:48:39 -0700
Labels: name=msyql
Status: Running
IP: 172.17.0.3
Controllers: <none>
Containers:
mysqldev:
Container ID: docker://af0937edcd9aa00ebc278bc8be00bc37d60cbaa403c69f71bc1b378182569d3d
Image: mysql/mysql-server:5.6.32
Image ID: docker://sha256:0fb418d5a10c9632b7ace0f6e7f00ec2b8eb58a451ee77377954fedf6344abc5
Port: 3306/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 31 Aug 2016 07:48:42 -0700
Finished: Wed, 31 Aug 2016 07:48:43 -0700
Ready: False
Restart Count: 1
Environment Variables:
MYSQL_ROOT_PASSWORD: test
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
volumesnew:
Type: HostPath (bare host directory volume)
Path: /Users/eric/Volumes/newmysql
default-token-il74e:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-il74e
QoS Tier: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
7s 7s 1 {default-scheduler } Normal Scheduled Successfully assigned db to minikubevm
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id 568f9112dce0
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id 568f9112dce0
6s 4s 2 {kubelet minikubevm} spec.containers{mysqldev} Normal Pulled Container image "mysql/mysql-server:5.6.32" already present on machine
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id af0937edcd9a
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id af0937edcd9a
3s 2s 2 {kubelet minikubevm} spec.containers{mysqldev} Warning BackOff Back-off restarting failed docker container
3s 2s 2 {kubelet minikubevm} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysqldev" with CrashLoopBackOff: "Back-off 10s restarting failed container=mysqldev pod=db_default(012d5178-6f8a-11e6-97e8-c2daf2e2520c)"
I was able to mount the data directory from the host to the container in a test directory, but I'm having trouble mounting to the MySQL data directory. Also, I tried to mount an empty directory to the container's data dir with the appropriate MySQL environment variables set, which in Docker for Mac allowed me to perform a SQL dump in the new dir, but I'm seeing the same errors in Minikube.
Any thoughts on what might be the cause? Or, if I'm not setting up my dev environment the preferred Kubernetes/Minikube way, please share your thoughts.
I was able to resolve this with the following:
echo "/Users -network 192.168.99.0 -mask 255.255.255.0 -alldirs -maproot=root:wheel" | sudo tee -a /etc/exports
sudo nfsd restart
minikube start
minikube ssh -- sudo umount /Users
minikube ssh -- sudo /usr/local/etc/init.d/nfs-client start
minikube ssh -- sudo mount 192.168.99.1:/Users /Users -o rw,async,noatime,rsize=32768,wsize=32768,proto=tcp
I am running Minikube in VirtualBox. I don't know if this will work with other VM drivers - xhyve, etc.
Reference: https://github.com/kubernetes/minikube/issues/2
EDIT: I should mention that this works for minikube v0.14.0.
1. Mount the folder you want to share on your host, in minikube:
minikube mount ./path/to/mySharedData:/mnt1/shared1
Don't close the terminal. That process needs to be running all the time for the folder to be accessible.
2. Use that folder with hostPath:
spec:
  containers:
    - name: mysqldev
      image: mysql/mysql-server:5.6.32
      ports:
        - containerPort: 3306
          hostPort: 3306
      volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: my-volume
  volumes:
    - name: my-volume
      hostPath:
        path: "/mnt1/shared1"
3. Write access issue?
In case you have a write access issue, you might want to mount the volume with:
minikube mount ./path/to/mySharedData:/mnt1/shared1 --uid 10001 --gid 10001
Here, the volume mounted in minikube will have group id and user id 10001. That is the user id of the Azure SQL Edge server inside the container.
I don't know the user id of mysql in your case. If you want to find out, log into your container and type id; it will tell you the user id.
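If the pod won't stay up long enough to exec into (it is crash-looping above), the same information can be read from the image itself; a sketch, assuming the image ships the coreutils id binary:
docker run --rm --entrypoint id mysql/mysql-server:5.6.32 mysql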
