I am trying to get Docker running inside Jenkins, which is itself running as a container. Below is part of the Pod spec.
cyrilpanicker/jenkins is an image with Jenkins and the docker CLI installed.
For the Docker daemon, I am running another container with the docker:dind image (the nodes are running on a k8s cluster).
And to share docker.sock between them, I am using volume mounts.
spec:
  containers:
  - name: jenkins
    image: cyrilpanicker/jenkins
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-socket
  - name: docker
    image: docker:dind
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-socket
  volumes:
  - name: docker-socket
    hostPath:
      path: /docker.sock
      type: FileOrCreate
But this is not working. Below are the logs from the docker container.
time="2021-06-04T20:47:26.059792967Z" level=info msg="Starting up"
time="2021-06-04T20:47:26.061956820Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
failed to load listeners: can't create unix socket /var/run/docker.sock: device or resource busy
Can anyone suggest another way to get this working?
According to the Kubernetes docs, hostPath mounts a path from the node's filesystem, so if I understand correctly, this is not what you want to achieve.
I'm afraid it isn't possible to mount a single file as a volume, so even if you remove hostPath from volumes, docker.sock will be mounted as a directory:
jenkins@static-web:/$ ls -la /var/run/
total 20
drwxr-xr-x 1 root root 4096 Jun 5 14:44 .
drwxr-xr-x 1 root root 4096 Jun 5 14:44 ..
drwxrwxrwx 2 root root 4096 Jun 5 14:44 docker.sock
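For reference, that listing comes from replacing the hostPath with a pod-level volume shared between the two containers, presumably an emptyDir:

volumes:
- name: docker-socket
  emptyDir: {}

Kubernetes then creates the mount point as an empty directory, which is why docker.sock shows up as a directory above.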
I would try running the Docker daemon in the dind container with a TCP listener instead of a socket file:
spec:
  containers:
  - name: jenkins
    image: cyrilpanicker/jenkins
  - name: docker
    image: docker:dind
    command: ["dockerd"]
    args: ["-H", "tcp://127.0.0.1:2376"]
    ports:
    - containerPort: 2376
    securityContext:
      privileged: true
jenkins@static-web:/$ docker -H tcp://127.0.0.1:2376 ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
And then configure jenkins to use tcp://127.0.0.1:2376 as a remote docker daemon.
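One way to wire that up (a sketch, assuming the docker CLI inside the cyrilpanicker/jenkins image honors the standard DOCKER_HOST variable) is to set the daemon address via the environment on the jenkins container:

- name: jenkins
  image: cyrilpanicker/jenkins
  env:
  - name: DOCKER_HOST          # standard variable read by the docker CLI
    value: tcp://127.0.0.1:2376

With that set, plain docker commands in the Jenkins container reach the dind sidecar without needing the -H flag.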
I'm trying to build a Docker image using DinD with Atlassian Bamboo.
I've created the deployment/StatefulSet as follows:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: bamboo
  name: bamboo
  namespace: csf
spec:
  replicas: 1
  serviceName: bamboo
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: bamboo
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: bamboo
    spec:
      containers:
      - image: atlassian/bamboo-server:latest
        imagePullPolicy: IfNotPresent
        name: bamboo-server
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        securityContext:
          privileged: true
        volumeMounts:
        - name: bamboo-home
          mountPath: /var/atlassian/application-data/bamboo
        - mountPath: /opt/atlassian/bamboo/conf/server.xml
          name: bamboo-server-xml
          subPath: bamboo-server.xml
        - mountPath: /var/run
          name: docker-sock
      volumes:
      - name: bamboo-home
        persistentVolumeClaim:
          claimName: bamboo-home
      - configMap:
          defaultMode: 511
          name: bamboo-server-xml
        name: bamboo-server-xml
      - name: docker-sock
        hostPath:
          path: /var/run
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Note that I've set privileged: true in securityContext to enable this.
However, when trying to run docker images, I get a permission error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See '/var/atlassian/application-data/bamboo/appexecs/docker run --help'
Am I missing something wrt setting up DIND?
The /var/run/docker.sock file on the host system is owned by a different user than the user that is running the bamboo-server container process.
Without knowing any details about your cluster, I would assume docker runs as 'root' (UID=0). The bamboo-server runs as 'bamboo', as can be seen from its Dockerfile, which will normally map to a UID in the 1XXX range on the host system. As these users are different and the container process did not receive any specific permissions over the (host) socket, the error is given.
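For example, you can check what UID the bamboo user actually has inside the container (the pod name follows from the StatefulSet above; the IDs shown are illustrative, not taken from your cluster):

$ kubectl exec bamboo-0 -- id bamboo
uid=2005(bamboo) gid=2005(bamboo) groups=2005(bamboo)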
So I think there are two approaches possible:
Either the container process continues to run as the 'bamboo' user but is given sufficient permissions on the host system to access /var/run/docker.sock. This would normally mean adding the UID the bamboo user maps to on the host system to the docker group on the host system. However, making changes to the host system might or might not be an option depending on the context of your cluster, and it is tricky in a cluster context because the pod could migrate to a different node where the changes were not applied and/or the UID changes.
Or the container is changed so as to run as a sufficiently privileged user to begin with, i.e. the root user. There are two ways to accomplish this: 1. you extend and customize the Atlassian-provided base image to change the user, or 2. you override the user the container runs as at run time by means of the 'runAsUser' and 'runAsGroup' securityContext instructions as specified here; both should be '0'. A sketch of the second variant follows below.
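A minimal sketch of that run-time override in the StatefulSet's container spec (both IDs set to '0' as described; an illustration, not the author's exact manifest):

securityContext:
  privileged: true
  runAsUser: 0    # run the container process as root
  runAsGroup: 0   # with root as the primary group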
As mentioned in the documentation here, if you want to run docker as a non-root user, you need to add that user to the docker group:
Create the docker group if it does not exist:
$ sudo groupadd docker
Add your user to the docker group:
$ sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated:
$ newgrp docker
Verify that you can run docker commands without sudo:
$ docker run hello-world
If that doesn't help, you can change the permissions of the docker socket to be able to connect to the docker daemon at /var/run/docker.sock:
sudo chmod 666 /var/run/docker.sock
A better way to handle this is to run a sidecar container - docker:dind - and export DOCKER_HOST=tcp://dind:2375 in the main Bamboo container. This way you will invoke Docker in the dind container and won't need to mount /var/run/docker.sock at all.
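A sketch of that sidecar approach, assuming the dind container runs in the same pod (so the daemon is reachable on localhost; with a separate Service named dind, the tcp://dind:2375 address above applies instead). DOCKER_TLS_CERTDIR is set to an empty value so the docker:dind image serves plain TCP on 2375 rather than TLS on 2376:

containers:
- name: bamboo-server
  image: atlassian/bamboo-server:latest
  env:
  - name: DOCKER_HOST
    value: tcp://localhost:2375
- name: dind
  image: docker:dind
  securityContext:
    privileged: true
  env:
  - name: DOCKER_TLS_CERTDIR   # empty disables TLS so dockerd listens on tcp://:2375
    value: ""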
With Kubernetes, I'm trying to deploy a Jenkins image and a persistent volume mapped to an NFS share (which is mounted on all my workers).
So, this is my share on my workers:
[root@pp-tmp-test24 /opt]# df -Th /opt/jenkins.persistent
Filesystem Type Size Used Avail Use% Mounted on
xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP nfs4 10G 9.5M 10G 1% /opt/jenkins.persistent
And my data on this share:
[root@pp-tmp-test24 /opt/jenkins.persistent]# ls -l
total 0
-rwxr-xr-x. 1 root root 0 Oct 2 11:53 newfile
[root@pp-tmp-test24 /opt/jenkins.persistent]# cat newfile
hello
Here are my yaml files to deploy it.
My PersistentVolume yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-nfs
  labels:
    type: type-nfs
spec:
  storageClassName: class-nfs
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /opt/jenkins.persistent
My PersistentVolumeClaim yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc-nfs
  namespace: ns-jenkins
spec:
  storageClassName: class-nfs
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: type-nfs
And my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: ns-jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - image: jenkins
        #- image: httpd:latest
        name: jenkins
        ports:
        - containerPort: 8080
          protocol: TCP
          name: jenkins-web
        volumeMounts:
        - name: jenkins-persistent-storage
          mountPath: /var/foo
      volumes:
      - name: jenkins-persistent-storage
        persistentVolumeClaim:
          claimName: jenkins-pvc-nfs
After the kubectl create -f command, all looks good:
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
jenkins-pv-nfs 10Gi RWX Recycle Bound ns-jenkins/jenkins-pvc-nfs class-nfs 37s
# kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ns-jenkins jenkins-pvc-nfs Bound jenkins-pv-nfs 10Gi RWX class-nfs 35s
# kubectl get pods -A |grep jenkins
ns-jenkins jenkins-5bdb8678c-x6vht 1/1 Running 0 14s
# kubectl describe pod jenkins-5bdb8678c-x6vht -n ns-jenkins
Name: jenkins-5bdb8678c-x6vht
Namespace: ns-jenkins
Priority: 0
Node: pp-tmp-test25.mydomain/172.31.68.225
Start Time: Wed, 02 Oct 2019 11:48:23 +0200
Labels: app=jenkins
pod-template-hash=5bdb8678c
Annotations: <none>
Status: Running
IP: 10.244.5.47
Controlled By: ReplicaSet/jenkins-5bdb8678c
Containers:
jenkins:
Container ID: docker://8a3e4871ed64b371818bac59e24d6912e5d2b13c8962c1639d36797fbce8082e
Image: jenkins
Image ID: docker-pullable://docker.io/jenkins@sha256:eeb4850eb65f2d92500e421b430ed1ec58a7ac909e91f518926e02473904f668
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 02 Oct 2019 11:48:26 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/foo from jenkins-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dz6cd (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
jenkins-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: jenkins-pvc-nfs
ReadOnly: false
default-token-dz6cd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dz6cd
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 39s default-scheduler Successfully assigned ns-jenkins/jenkins-5bdb8678c-x6vht to pp-tmp-test25.mydomain
Normal Pulling 38s kubelet, pp-tmp-test25.mydomain Pulling image "jenkins"
Normal Pulled 36s kubelet, pp-tmp-test25.mydomain Successfully pulled image "jenkins"
Normal Created 36s kubelet, pp-tmp-test25.mydomain Created container jenkins
Normal Started 36s kubelet, pp-tmp-test25.mydomain Started container jenkins
On my worker, this is my container
# docker ps |grep jenkins
8a3e4871ed64 docker.io/jenkins@sha256:eeb4850eb65f2d92500e421b430ed1ec58a7ac909e91f518926e02473904f668 "/bin/tini -- /usr..." 2 minutes ago Up 2 minutes k8s_jenkins_jenkins-5bdb8678c-x6vht_ns-jenkins_64b66dae-a1da-4d90-83fd-ff433638dc9c_0
So I launch a shell in my container, and I can see my data in /var/foo:
# docker exec -t -i 8a3e4871ed64 /bin/bash
jenkins@jenkins-5bdb8678c-x6vht:/$ df -h /var/foo
Filesystem Size Used Avail Use% Mounted on
xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP 10G 9.5M 10G 1% /var/foo
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ ls -lZ /var/foo -d
drwxr-xr-x. 2 root root system_u:object_r:nfs_t:s0 4096 Oct 2 10:06 /var/foo
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ ls -lZ /var/foo
-rwxr-xr-x. 1 root root system_u:object_r:nfs_t:s0 12 Oct 2 10:05 newfile
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ cat newfile
hello
I'm trying to write data to my /var/foo/newfile, but permission is denied:
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ echo "world" >> newfile
bash: newfile: Permission denied
Same thing in my /var/foo/ directory; I can't write data:
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ touch newfile2
touch: cannot touch 'newfile2': Permission denied
So I tried another image, httpd:latest, in my deployment yaml (keeping the same names in my yaml definition):
[...]
      containers:
      #- image: jenkins
      - image: httpd:latest
[...]
# docker ps |grep jenkins
fa562400405d docker.io/httpd@sha256:39d7d9a3ab93c0ad68ee7ea237722ed1b0016ff6974d80581022a53ec1e58797 "httpd-foreground" 50 seconds ago Up 48 seconds k8s_jenkins_jenkins-7894877f96-6dj85_ns-jenkins_540b12bd-69df-44d8-b3df-20a0a96cc851_0
In my new container, this time I can read and write data:
root@jenkins-7894877f96-6dj85:/usr/local/apache2# df -h /var/foo
Filesystem Size Used Avail Use% Mounted on
xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP 10G 9.6M 10G 1% /var/foo
root@jenkins-7894877f96-6dj85:/var/foo# ls -lZ
total 0
-rwxr-xr-x. 1 root root system_u:object_r:nfs_t:s0 12 Oct 2 10:05 newfile
-rw-r--r--. 1 root root system_u:object_r:nfs_t:s0 0 Oct 2 10:06 newfile2
root@jenkins-7894877f96-6dj85:/var/foo# ls -lZ /var/foo -d
drwxr-xr-x. 2 root root system_u:object_r:nfs_t:s0 4096 Oct 2 10:06 /var/foo
root@jenkins-7894877f96-6dj85:/var/foo# ls -l
total 0
-rwxr-xr-x. 1 root root 6 Oct 2 09:55 newfile
root@jenkins-7894877f96-6dj85:/var/foo# echo "world" >> newfile
root@jenkins-7894877f96-6dj85:/var/foo# touch newfile2
root@jenkins-7894877f96-6dj85:/var/foo# ls -l
total 0
-rwxr-xr-x. 1 root root 12 Oct 2 10:05 newfile
-rw-r--r--. 1 root root 0 Oct 2 10:06 newfile2
What am I doing wrong? Is the problem due to the jenkins image not allowing RW access? I have the same problem with local storage (on my worker) as a persistent volume.
One other thing, perhaps it is stupid: with my jenkins image, I would like to mount the /var/jenkins_home dir on a persistent volume in order to keep Jenkins's configuration files. But if I try to mount /var/jenkins_home instead of /var/foo, the pod goes into CrashLoopBackOff (because there is already data stored in /var/jenkins_home).
Thank you all for your help!
I noticed you are trying to write as the jenkins user on jenkins-5bdb8678c-x6vht, which might not have write permissions in that root:root directory.
You might want to change that directory's permissions to match the jenkins user's privileges.
Try to verify that this is what is causing the issue by using sudo before writing to the file.
If sudo is not installed, exec in as the root user with the --user flag, so it's just like the other cases where writing worked:
docker exec -t -i -u root 8a3e4871ed64 /bin/bash
@Piotr Malec Thank you. Yes, I realized that: jenkins is the default user when I connect to my container:
docker exec -t -i 46d2497d440d /bin/bash
jenkins@jenkins-7bcdd5db57-8qgth:/$
So, to test, I changed the permissions on /opt/jenkins.persistent to 777 on my worker, and now I have RW permission on this mount:
xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP 10G 9.5M 10G 1% /var/foo
jenkins@jenkins-7bcdd5db57-8qgth:/$ cd /var
jenkins@jenkins-7bcdd5db57-8qgth:/var$ ls -l
[...]
drwxrwxrwx. 2 root root 4096 Oct 4 13:41 foo
[...]
jenkins@jenkins-7bcdd5db57-8qgth:/var$ cd foo
jenkins@jenkins-7bcdd5db57-8qgth:/var/foo$ touch newfile
jenkins@jenkins-7bcdd5db57-8qgth:/var/foo$ ls
newfile
So I added a jenkins user account on my worker and ran chown jenkins:jenkins on my /opt/jenkins.persistent directory. Now, inside my container, I have RW permission:
jenkins@jenkins-7bcdd5db57-8qgth:/var$ ls -l
[...]
drwxr-xr-x. 2 jenkins jenkins 4096 Oct 4 13:53 foo
[...]
jenkins@jenkins-7bcdd5db57-8qgth:/var$ cd foo
jenkins@jenkins-7bcdd5db57-8qgth:/var/foo$ touch newfile2
jenkins@jenkins-7bcdd5db57-8qgth:/var/foo$ ls -l
-rw-r--r--. 1 jenkins jenkins 0 Oct 4 13:53 newfile2
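For reference, the host-side steps were along these lines (a sketch; the UID passed to useradd must match the jenkins user's UID inside the container, which is 1000 in the official Jenkins images, so check it with id in the container first):

sudo useradd -u 1000 jenkins                           # host user with the container's UID
sudo chown -R jenkins:jenkins /opt/jenkins.persistent  # hand the NFS-backed dir to that UID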
I am using the following snippet to create the deployment
oc create -f nginx-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        openshift.io/scc: privileged
    spec:
      securityContext:
        priviledged: false
        runAsUser: 0
      volumes:
      - name: static-web-volume
        hostPath:
          path: /home/testFolder
          type: Directory
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: static-web-volume
I am getting a permission denied issue when I try to go inside the html folder:
$ cd /usr/share/nginx/html
$ ls
ls: cannot open directory .: Permission denied
This is the simplest sample code, as I have a similar requirement where I have to read files from mounted drives, but that one is failing as well.
I am using Kubernetes 1.5, as it is the only version available to me. I am not sure whether the volumes have been mounted or not.
All my dir permissions are set to root as well.
Content of /home/testFolder:
0 drwxrwxrwx. 3 root root 52 Apr 15 23:06 .
4 dr-xr-x---. 11 root root 4096 Apr 15 22:58 ..
0 drwxrwxrwx. 2 root root 6 Apr 15 19:56 ind
4 -rwxrwxrwx. 1 root root 14 Apr 15 19:22 index.html
4 -rwxrwxrwx. 1 root root 694 Apr 15 23:06 ordr.yam
I remember hitting this one in OpenShift some time back. It has something to do with the SELinux configuration on the host.
Try this on the host server, on the directory you mount into your container as /usr/share/nginx/html:
sudo chcon -Rt svirt_sandbox_file_t /home/testFolder
I have a problem when I want to deploy a cAdvisor container (which works well elsewhere) in an OpenShift project.
I have a dedicated serviceaccount in the OpenShift project and I added the privileged scc to it. Then I made some changes in cAdvisor.yaml and tried to deploy. The container is working, but I get an error when I go to the "Docker Containers" section on the web page.
Changes I made in cAdvisor.yaml:
metadata:
  annotations:
    openshift.io/scc: privileged
securityContext:
  runAsUser: 0
serviceAccount: cadvisor-sa
serviceAccountName: cadvisor-sa
volumes:
- hostPath:
    path: /
  name: rootfs
- hostPath:
    path: /var/run
  name: var-run
- hostPath:
    path: /sys/fs/cgroup/cpu
  name: sys
- hostPath:
    path: /var/lib/docker
  name: docker
Error on web:
failed to get docker info: Got permission denied while trying to
connect to the Docker daemon socket at unix:///var/run/docker.sock:
Get http://%2Fvar%2Frun%2Fdocker.sock/info: dial unix
/var/run/docker.sock: connect: permission denied"
I'm trying to set up a dev environment with Kubernetes via Minikube. I successfully mounted the same volume to the same data dir on the same image with Docker for Mac, but I'm having trouble with Minikube.
Relevant files and logs:
db-pod.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    name: msyql
  name: db
  namespace: default
spec:
  containers:
  - name: mysqldev
    image: mysql/mysql-server:5.6.32
    ports:
    - containerPort: 3306
      hostPort: 3306
    volumeMounts:
    - mountPath: "/var/lib/mysql"
      name: volumesnew
  volumes:
  - name: volumesnew
    hostPath:
      path: "/Users/eric/Volumes/mysql"
kubectl get pods:
NAME READY STATUS RESTARTS AGE
db 0/1 Error 1 3s
kubectl logs db:
2016-08-29 20:05:55 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-08-29 20:05:55 0 [Note] mysqld (mysqld 5.6.32) starting as process 1 ...
2016-08-29 20:05:55 1 [Warning] Setting lower_case_table_names=2 because file system for /var/lib/mysql/ is case insensitive
kubectl describe pods db:
Name: db
Namespace: default
Node: minikubevm/10.0.2.15
Start Time: Wed, 31 Aug 2016 07:48:39 -0700
Labels: name=msyql
Status: Running
IP: 172.17.0.3
Controllers: <none>
Containers:
mysqldev:
Container ID: docker://af0937edcd9aa00ebc278bc8be00bc37d60cbaa403c69f71bc1b378182569d3d
Image: mysql/mysql-server:5.6.32
Image ID: docker://sha256:0fb418d5a10c9632b7ace0f6e7f00ec2b8eb58a451ee77377954fedf6344abc5
Port: 3306/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 31 Aug 2016 07:48:42 -0700
Finished: Wed, 31 Aug 2016 07:48:43 -0700
Ready: False
Restart Count: 1
Environment Variables:
MYSQL_ROOT_PASSWORD: test
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
volumesnew:
Type: HostPath (bare host directory volume)
Path: /Users/eric/Volumes/newmysql
default-token-il74e:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-il74e
QoS Tier: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
7s 7s 1 {default-scheduler } Normal Scheduled Successfully assigned db to minikubevm
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id 568f9112dce0
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id 568f9112dce0
6s 4s 2 {kubelet minikubevm} spec.containers{mysqldev} Normal Pulled Container image "mysql/mysql-server:5.6.32" already present on machine
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id af0937edcd9a
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id af0937edcd9a
3s 2s 2 {kubelet minikubevm} spec.containers{mysqldev} Warning BackOff Back-off restarting failed docker container
3s 2s 2 {kubelet minikubevm} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysqldev" with CrashLoopBackOff: "Back-off 10s restarting failed container=mysqldev pod=db_default(012d5178-6f8a-11e6-97e8-c2daf2e2520c)"
I was able to mount the data directory from the host to the container in a test directory, but I'm having trouble mounting to the MySQL data directory. Also, I tried to mount an empty directory to the container's data dir with the appropriate MySQL environment variables set, which in Docker for Mac allowed me to perform a SQL dump in the new dir, but I'm seeing the same errors in Minikube.
Any thoughts on what might be the cause? Or, if I'm not setting up my dev environment the preferred Kubernetes/Minikube way, please share your thoughts.
I was able to resolve this with the following:
echo "/Users -network 192.168.99.0 -mask 255.255.255.0 -alldirs -maproot=root:wheel" | sudo tee -a /etc/exports
sudo nfsd restart
minikube start
minikube ssh -- sudo umount /Users
minikube ssh -- sudo /usr/local/etc/init.d/nfs-client start
minikube ssh -- sudo mount 192.168.99.1:/Users /Users -o rw,async,noatime,rsize=32768,wsize=32768,proto=tcp
I am running Minikube in VirtualBox. I don't know if this will work with other VM drivers - xhyve, etc.
Reference: https://github.com/kubernetes/minikube/issues/2
EDIT: I should mention that this works for minikube v0.14.0.
1. Mount the folder you want to share on your host, in minikube:
minikube mount ./path/to/mySharedData:/mnt1/shared1
Don't close the terminal. That process needs to be running all the time for the folder to be accessible.
2. Use that folder with hostPath:
spec:
  containers:
  - name: mysqldev
    image: mysql/mysql-server:5.6.32
    ports:
    - containerPort: 3306
      hostPort: 3306
    volumeMounts:
    - mountPath: "/var/lib/mysql"
      name: my-volume
  volumes:
  - name: my-volume
    hostPath:
      path: "/mnt1/shared1"
3. Write access issue?
In case you have a write access issue, you might want to mount the volume with:
minikube mount ./path/to/mySharedData:/mnt1/shared1 --uid 10001 --gid 10001
Here, the volume mounted in minikube will have group id and user id 10001, which is the user id of the Azure SQL Edge server inside the container.
I don't know what the user id of mysql is in your case. If you want to find out, log into your container and type id; it will tell you the user id.
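For example (illustrative output, not taken from your image):

$ kubectl exec -it db -- id mysql
uid=999(mysql) gid=999(mysql) groups=999(mysql)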