I am new to kubernetes and experimenting with volumes. I have a docker image which declares 2 volumes, as in:
VOLUME ["/db/mongo/data" , "/db/mongo/log"]
I am using a StatefulSet, wherein I have 2 volume mounts, as in:
volumeMounts:
  - name: mongo-vol
    mountPath: << path1 >>
    subPath: data
  - name: mongo-vol
    mountPath: << path2 >>
    subPath: log
My questions: i) Should path1 and path2 be specified as "/db/mongo/data" and "/db/mongo/log" respectively?
ii) Or can they be any paths at which the volumes are mounted inside the container, with the "/db/mongo/data" and "/db/mongo/log" container paths automatically mapped to those mount points?
I tried reading the documentation and tried both options, but some confusion remains. I would appreciate some help here.
Both of your volume mounts reference the same volume, mongo-vol. That tells me this is a single volume containing the data and log directories. You should use /db/mongo/data and /db/mongo/log as your mountPaths, and specify the subPath as data and log respectively. That mounts the volume referenced by mongo-vol and maps its data and log sub-directories onto those container paths.
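Filled in, the mounts from your snippet would look like this, using the paths your image declares:

volumeMounts:
  - name: mongo-vol
    mountPath: /db/mongo/data
    subPath: data
  - name: mongo-vol
    mountPath: /db/mongo/log
    subPath: log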
If you had two separate volumes, say mongo-data and mongo-log, then you would mount them the same way but without the subPath, because you would not be referencing sub-directories of the volume.
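For illustration, with two separate volumes the mounts would be just (a sketch; the volume names mongo-data and mongo-log are hypothetical):

volumeMounts:
  - name: mongo-data
    mountPath: /db/mongo/data
  - name: mongo-log
    mountPath: /db/mongo/log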
I have a docker image that contains data in the directory /opt/myfiles. Let's say it contains the following:
/opt/myfiles/file1.txt
/opt/myfiles/file2.dat
I want to deploy that image to kubernetes and mount an NFS volume to that directory so that the changes to these files are persisted when I delete the pod.
When I do this in docker swarm, I simply mount an empty NFS volume to /opt/myfiles/. When my docker swarm service starts, the volume is populated with the files from the image, and I can work with my service. When I delete the service, I still have the files on my NFS server, so on the next start of the service I have my previous state back.
In kubernetes, when I mount an empty NFS volume to /opt/myfiles/, the pod is started and /opt/myfiles/ is overwritten with an empty directory, so my pod does not see the files from the image anymore.
My volume mount and volume definition:
[...]
volumeMounts:
  - name: myvol
    mountPath: /opt/myfiles
[...]
volumes:
  - name: myvol
    nfs:
      server: nfs-server.mydomain.org
      path: /srv/shares/myfiles
I read some threads about similar problems (for example K8s doesn't mount files on Persistent Volume) and tried some things with subPath and subPathExpr as described in the documentation (https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath), but none of my changes does what docker swarm does by default.
The behaviour of kubernetes seems strange to me, as I have worked with docker swarm for quite a while and am familiar with the way docker swarm handles this. But I am sure there is a reason why kubernetes handles it differently, and that there is some way to get what I need.
So please, can someone have a look at my problem and help me find a way to get the following behaviour?
I have files in my image in some directory
I want to mount an NFS volume to that directory, because I need to have the data persisted, when for example my pod crashes and moves to another host or when my pod is temporarily shut down for whatever reason.
When my pod starts, I want the volume to be populated with the files from the image
And of course I would be really happy if someone could explain to me why kubernetes and docker swarm behave so differently.
Thanks a lot in advance
This can usually be achieved in Kubernetes with init containers.
Let's say we have an image with files stored in the /mypath folder. If you are able to reconfigure the container to use a different path (like /persistent), you can use an init container to copy the files from /mypath to /persistent on the pod's startup.
containers:
  - name: myapp-container
    image: myimage
    env:
      - name: path
        value: /persistent
    volumeMounts:
      - name: myvolume
        mountPath: /persistent
initContainers:
  - name: copy-files
    image: myimage
    volumeMounts:
      - name: myvolume
        mountPath: /persistent
    # copy everything from the image's /mypath, including files without an
    # extension and sub-directories, which /mypath/*.* would miss
    command: ['sh', '-c', 'cp -r /mypath/. /persistent/']
In this case the main container myapp-container uses the files from the /persistent folder on the NFS volume, and each time the pod starts, the files from /mypath are copied into that folder by the copy-files init container.
I have a docker image A that contains a folder I need to share with another container B in the same K8s pod.
At first I decided to use a shared volume (emptyDir) and launched A as an init container to copy all the content of the folder into the shared volume. This works fine.
Then, looking at the k8s docs, I realised I could use mountPropagation between the containers.
So I changed the initContainer to a plain container (sidecar) in the same pod and performed a mount of the folder in container A that I want to share with container B. This works fine, but I need to keep container A up with a wait loop. Or not...
Then I decided to come back to the initContainer pattern and do the same thing: mount the folder in A inside the shared volume, let the container finish (since it is an initContainer), and then use the newly mounted folder in container B. And it works!
So my question is: can someone explain whether this is expected on all Kubernetes clusters, and why the folder mounted from A, which is no longer running as a container, can still be seen by my other container?
Here is a simple manifest to demonstrate it.
apiVersion: v1
kind: Pod
metadata:
  name: testvol
spec:
  initContainers:
    - name: busybox-init
      image: busybox
      securityContext:
        privileged: true
      command: ["/bin/sh"]
      args: ["-c", "mkdir -p /opt/connectors; echo \"bar\" > /opt/connectors/foo.txt; mkdir -p /opt/connectors_new; mount --bind /opt/connectors /opt/connectors_new; echo connectors mount is ok"]
      volumeMounts:
        - name: connectors
          mountPath: /opt/connectors_new
          mountPropagation: Bidirectional
  containers:
    - name: busybox
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "cat /opt/connectors/foo.txt; trap : TERM INT; (while true; do sleep 1000; done) & wait"]
      volumeMounts:
        - name: connectors
          mountPath: /opt/connectors
          mountPropagation: HostToContainer
  volumes:
    - name: connectors
      emptyDir: {}
This works because your containers run in a pod, and the pod is where your volume is defined, not the container. You are creating a volume in your pod that is an empty directory, then mounting it in your init container and making changes; those changes are made to the volume at the pod level.
When your init container finishes, the files at the pod level don't go away; they are still there, so your second container picks them up when it mounts the same volume from the pod.
This is expected behaviour and doesn't need the mountPropagation fields at all. The mountPropagation fields may have some effect on emptyDir volumes, but that is not related to preserving the files:
https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
All containers in the Pod can read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted permanently.
Note: A container crashing does not remove a Pod from a node. The data in an emptyDir volume is safe across container crashes.
The note doesn't explicitly state it, but this implies the data is also safe across the initContainer-to-container transition. As long as your pod exists on the node, the data will be there in the volume.
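For comparison, here is a stripped-down sketch of the same pod without the bind mount and without any mountPropagation fields; the main container still sees the file written by the init container:

apiVersion: v1
kind: Pod
metadata:
  name: testvol-simple
spec:
  initContainers:
    - name: busybox-init
      image: busybox
      # write directly into the shared emptyDir volume
      command: ["/bin/sh", "-c", "echo bar > /opt/connectors/foo.txt"]
      volumeMounts:
        - name: connectors
          mountPath: /opt/connectors
  containers:
    - name: busybox
      image: busybox
      # the file written by the init container is still there
      command: ["/bin/sh", "-c", "cat /opt/connectors/foo.txt; sleep 3600"]
      volumeMounts:
        - name: connectors
          mountPath: /opt/connectors
  volumes:
    - name: connectors
      emptyDir: {}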
I am using this config to mount a read-only drive into the docker image:
volumes:
  - name: drive-c
    hostPath:
      path: /media/sf_C_DRIVE
containers:
  ...
  volumeMounts:
    - name: drive-c
      mountPath: /c
When I try to write (mkdir) to this folder (a VirtualBox read-only shared folder) inside the docker image, I get:
mkdir: can't create directory 'somedir': Read-only file system
This is expected, since it is a read-only filesystem (C_DRIVE on /c type vboxsf (rw,nodev,relatime)).
Is it possible to somehow make this folder writable inside the docker container? Changes do not need to be propagated back to the hostPath. I read something about overlayfs, but I'm not sure how to specify this in the Kubernetes yaml.
You can't do this via Kubernetes volume mounts, as they simply expose the volume as it is. What you need to do is create a writable overlay filesystem on your host machine, then map that into Kubernetes at a different path, which will be writable.
You want to create something like /mnt/writable_sf_C_drive by mounting an overlay on top of your /media/sf_C_DRIVE; your Kubernetes volume would then mount /mnt/writable_sf_C_drive.
There are some instructions here
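As a rough sketch of the idea (these commands run on the host, outside Kubernetes; the overlay paths are hypothetical):

# on the host: an overlay with the read-only share as the lower layer
mkdir -p /mnt/overlay-upper /mnt/overlay-work /mnt/writable_sf_C_drive
mount -t overlay overlay -o lowerdir=/media/sf_C_DRIVE,upperdir=/mnt/overlay-upper,workdir=/mnt/overlay-work /mnt/writable_sf_C_drive

and in the pod spec, point the hostPath at the overlay instead:

volumes:
  - name: drive-c
    hostPath:
      path: /mnt/writable_sf_C_drive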
In the Docker Compose documentation, here, you have the following example related to the volumes section of docker-compose.yml files:
volumes:
  # (1) Just specify a path and let the Engine create a volume
  - /var/lib/mysql
  # (2) Specify an absolute path mapping
  - /opt/data:/var/lib/mysql
  # (3) Path on the host, relative to the Compose file
  - ./cache:/tmp/cache
  # (4) User-relative path
  - ~/configs:/etc/configs/:ro
  # (5) Named volume
  - datavolume:/var/lib/mysql
Which syntaxes produce a bind mount and which produce a docker volume?
In some places the documentation strictly differentiates the two concepts, but here they are mixed together, so it is not clear to me.
Whenever you see "volume" in the comment, that syntax creates a volume: so (1) and (5).
If "volume" is not in the comment, it is a bind mount: so (2), (3) and (4).
The documentation regarding volumes in docker-compose is here:
Mount host paths or named volumes, specified as sub-options to a service.
You can mount a host path as part of a definition for a single service, and there is no need to define it in the top level volumes key.
But, if you want to reuse a volume across multiple services, then define a named volume in the top-level volumes key.
The top-level volumes key defines a named volume and references it from each service’s volumes list. This replaces volumes_from in earlier versions of the Compose file format. See Use volumes and Volume Plugins for general information on volumes.
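For instance, a minimal sketch of the named-volume case (5), assuming Compose file format v2+ where the volume is declared under the top-level volumes key:

version: "2"
services:
  db:
    image: mysql
    volumes:
      - datavolume:/var/lib/mysql   # (5) named volume, reusable across services
volumes:
  datavolume: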
Those are two completely different concepts. A volume means that a given directory will be persisted between container runs; imagine a MySQL database, where you don't want to lose your data. A bind mount, on the other hand, attaches a local directory to a directory in the container: if the container writes something there, it appears in your file system, and vice versa (synchronization).
As a side note, a volume is essentially nothing more than a directory on your machine that Docker manages for you :) (under /var/lib/docker/volumes/... by default)
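You can verify where a given volume lives with docker volume inspect, whose JSON output includes a Mountpoint field (the volume name here is illustrative):

docker volume inspect datavolume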
I am trying to create a Jenkins and Nexus integration using a docker compose file, where my Jenkins image is extended with a few plugins via a Dockerfile and declares a volume under /var/lib/jenkins/:
VOLUME ["/var/lib/jenkins/"]
In the compose file I am trying to map the volume to the local directory /opt/jenkins/:
jenkins:
  build: ./jenkins
  ports:
    - 9090:8080
  volumes:
    - /opt/jenkins/:/var/lib/jenkins/
But nothing is copied to my persistence directory (/opt/jenkins/).
I can see all my Jenkins jobs created under a _data/jobs/ directory of some volume, not in the /opt/jenkins/ directory I mapped to /var/lib/jenkins/.
Can anyone help me understand why this is happening?
From the documentation:
Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization. (Note that this does not apply when mounting a host directory.)
And in the section on mounting a host directory as a data volume:
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
So basically you are overlaying (hiding) anything that was in /var/lib/jenkins. Can your image function if those things are hidden?
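If you need the image's content to survive, one option is a named volume instead of a host directory, since, as quoted above, named volumes are initialized from the image's data. A sketch, assuming Compose file format v2+ (the name jenkins-data is hypothetical):

version: "2"
services:
  jenkins:
    build: ./jenkins
    ports:
      - "9090:8080"
    volumes:
      # a named volume is initialized with the image's /var/lib/jenkins content,
      # unlike a host-directory bind mount, which hides it
      - jenkins-data:/var/lib/jenkins/
volumes:
  jenkins-data: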