I am using this config to mount a read-only drive into a Docker container:
volumes:
  - name: drive-c
    hostPath:
      path: /media/sf_C_DRIVE
containers:
  - ...
    volumeMounts:
      - name: drive-c
        mountPath: /c
When I try to write (mkdir) to this folder (which is a VirtualBox read-only shared folder) inside the Docker container, I get:
mkdir: can't create directory 'somedir': Read-only file system
which is expected, since this is a read-only filesystem (C_DRIVE on /c type vboxsf (rw,nodev,relatime)).
Is it possible to somehow make this folder writable inside the Docker container? Changes do not need to be propagated back to the hostPath. I read something about overlayfs, but I'm not sure how to specify this in the Kubernetes YAML.
You can't do this via Kubernetes volume mounts alone, as they simply expose what is already on the host. What you need to do is create a writable overlay filesystem on your host machine, then map that into Kubernetes as a different, writable hostPath.
You want to create something like /mnt/writable_sf_C_DRIVE by mounting an overlay on top of your /media/sf_C_DRIVE; your volume mount in Kubernetes would then use /mnt/writable_sf_C_DRIVE.
There are some instructions here
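As a rough sketch, assuming you can create the overlay mount on the host yourself (the overlay paths below are placeholders, and Kubernetes will not set up the overlay for you):

# On the host, before the pod starts (assumed commands and paths):
#   mkdir -p /mnt/overlay-upper /mnt/overlay-work /mnt/writable_sf_C_DRIVE
#   mount -t overlay overlay \
#     -o lowerdir=/media/sf_C_DRIVE,upperdir=/mnt/overlay-upper,workdir=/mnt/overlay-work \
#     /mnt/writable_sf_C_DRIVE
volumes:
  - name: drive-c
    hostPath:
      path: /mnt/writable_sf_C_DRIVE
containers:
  - ...
    volumeMounts:
      - name: drive-c
        mountPath: /c

Writes inside the container land in the overlay's upper directory on the host, while /media/sf_C_DRIVE itself stays untouched.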
I have a Docker image that contains data in the directory /opt/myfiles. Let's say the following:
/opt/myfiles/file1.txt
/opt/myfiles/file2.dat
I want to deploy that image to Kubernetes and mount an NFS volume to that directory, so that changes to these files are persisted when I delete the pod.
When I do this in Docker Swarm, I simply mount an empty NFS volume to /opt/myfiles/. When my service starts, the volume is populated with the files from the image and I can work with my service; when I delete the service, I still have the files on my NFS server, so on the next start of the service I have my previous state back.
In Kubernetes, when I mount an empty NFS volume to /opt/myfiles/, the pod starts and /opt/myfiles/ is overwritten with an empty directory, so my pod no longer sees the files from the image.
My volume mount and volume definition:
[...]
volumeMounts:
  - name: myvol
    mountPath: /opt/myfiles
[...]
volumes:
  - name: myvol
    nfs:
      server: nfs-server.mydomain.org
      path: /srv/shares/myfiles
I read some threads about similar problems (for example K8s doesn't mount files on Persistent Volume) and tried some things using subPath and subPathExpr as in the documentation (https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath), but none of my changes does what Docker Swarm does by default.
The behaviour of Kubernetes seems strange to me, as I have worked with Docker Swarm for quite a while now and am familiar with the way Docker Swarm handles this. But I am sure there is a reason why Kubernetes handles it differently, and that there are ways to get what I need.
So please, can someone have a look at my problem and help me find a way to get the following behaviour?
I have files in my image in some directory
I want to mount an NFS volume to that directory, because I need the data persisted when, for example, my pod crashes and moves to another host, or when my pod is temporarily shut down for whatever reason.
When my pod starts, I want the volume to be populated with the files from the image
And of course I would be really happy if someone could explain to me why Kubernetes and Docker Swarm behave so differently.
Thanks a lot in advance
This can usually be achieved in Kubernetes with init containers.
Let's say we have an image with files stored in the /mypath folder. If you are able to reconfigure the container to use a different path (like /persistent), you can use an init container to copy the files from /mypath to /persistent on the pod's startup.
containers:
  - name: myapp-container
    image: myimage
    env:
      - name: path
        value: /persistent
    volumeMounts:
      - name: myvolume
        mountPath: /persistent
initContainers:
  - name: copy-files
    image: myimage
    volumeMounts:
      - name: myvolume
        mountPath: /persistent
    command: ['sh', '-c', 'cp -r /mypath/. /persistent/']
In this case you have a main container myapp-container using files from the /persistent folder on the NFS volume, and each time the pod starts, the files from /mypath are copied into that folder by the copy-files init container.
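For completeness, a sketch of the volumes section this assumes, reusing the NFS server and path from the question:

volumes:
  - name: myvolume
    nfs:
      server: nfs-server.mydomain.org
      path: /srv/shares/myfiles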
I am new to Kubernetes and experimenting with volumes. I have a Docker image which declares 2 volumes, as in:
VOLUME ["/db/mongo/data" , "/db/mongo/log"]
I am using a StatefulSet, in which I have 2 volume mounts, as in:
volumeMounts:
  - name: mongo-vol
    mountPath: << path1 >>
    subPath: data
  - name: mongo-vol
    mountPath: << path2 >>
    subPath: log
My questions are: i) should path1 and path2 be "/db/mongo/data" and "/db/mongo/log" respectively?
ii) Or can they be any paths where the volumes are mounted inside the container, with the "/db/mongo/data" and "/db/mongo/log" container paths automatically mapped to those mount points?
I tried reading the documentation and tried both options, but some confusion still remains. I'd appreciate some help here.
Both of your volume mounts reference the same volume, mongo-vol. That tells me this is a single volume containing the data and log directories. You should use /db/mongo/data and /db/mongo/log as your mountPaths, and specify the subPath as data and log respectively. That will mount the volume referenced by mongo-vol and map the data and log directories within that volume onto those paths.
If you had two separate volumes, say mongo-data and mongo-log, you would mount them the same way but without the subPath, because you would no longer be referencing sub-directories within a single volume.
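A sketch of both layouts (the mongo-data and mongo-log volume names are assumptions for illustration):

# One volume, sub-directories selected with subPath:
volumeMounts:
  - name: mongo-vol
    mountPath: /db/mongo/data
    subPath: data
  - name: mongo-vol
    mountPath: /db/mongo/log
    subPath: log

# Two separate volumes, no subPath needed:
volumeMounts:
  - name: mongo-data
    mountPath: /db/mongo/data
  - name: mongo-log
    mountPath: /db/mongo/log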
Example (many options omitted for brevity):
version: "3"
volumes:
traefik:
driver: local
driver_opts:
type: nfs
o: "addr=192.168.1.100,soft,rw,nfsvers=4,async"
device: ":/volume/docker/traefik"
services:
traefik:
volumes:
- traefik/traefik.toml:/traefik.toml
This errors out, as there is no volume with the name traefik/traefik.toml, which suggests that the volume name must be the full path to the file (i.e. you can't append a path to a named volume)?
Trying to set device: ":/volume/docker/traefik/traefik.toml" just returns a not a directory error.
Is there a way to take a single file and mount it into a container?
You cannot mount a file or sub-directory within a named volume; the source is either the named volume or a host path. NFS itself, along with most filesystems you'd mount in Linux, requires you to mount an entire filesystem, not a single file, and when you get down to the inode level this is often a really good thing.
The remaining options I can think of are to mount the entire directory somewhere else inside your container and symlink to the file you want, or to NFS-mount the directory on the host and do a host mount (bind mount) of the specific file.
However, considering the example you presented, using a docker config would be my ideal solution: it removes the NFS mount entirely and gives you a read-only copy of the file that's automatically distributed to whichever node is running the container.
More details on configs: https://docs.docker.com/engine/swarm/configs/
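A minimal sketch of the config approach, assuming traefik.toml sits next to the compose file (the config name and file location are assumptions):

version: "3.3"
configs:
  traefik_toml:
    file: ./traefik.toml
services:
  traefik:
    image: traefik
    configs:
      - source: traefik_toml
        target: /traefik.toml

Note that configs only take effect when the stack is deployed with docker stack deploy (swarm mode).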
I believe I found the issue!
Wrong:
volumes:
  - traefik/traefik.toml:/traefik.toml
Correct:
volumes:
  - /traefik/traefik.toml:/traefik.toml
Start the source path with "/": a leading slash makes Compose treat it as a host path (bind mount) rather than a named volume.
I understand that in Kubernetes you don't want to "tie" a pod to a host, but in certain cases you might need to.
In my particular case I have a DB that lives on blockstorage which is mounted to a specific host.
What I am trying to accomplish with Kubernetes is the equivalent of a bind-mount in Docker. I want to specify the directory on the host that I need mounted in the pod, similar to this:
/mnt/BTC_2:/root/.bitcoin:rw
How do I specify the location on the node/host where I want my persistent storage to be? Would this be a hostPath volume like the following:
volumeMounts:
  - mountPath: /root/.bitcoin
    name: test-volume
volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /mnt/BTC_2
I want to specify the directory on the host that I need mounted in the pod
That should be documented here
A hostPath volume mounts a file or directory from the host node’s filesystem into your pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
Warning:
The files or directories created on the underlying hosts are only writable by root. You either need to run your process as root in a privileged container or modify the file permissions on the host to be able to write to a hostPath volume
volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
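Adapted to the paths in your question, a sketch might look like this (the container name, image, and the node label used to pin the pod to the host that actually has /mnt/BTC_2 are assumptions):

containers:
  - name: bitcoind              # assumed container name
    image: my-bitcoin-image     # assumed image
    volumeMounts:
      - mountPath: /root/.bitcoin
        name: test-volume
volumes:
  - name: test-volume
    hostPath:
      path: /mnt/BTC_2
      type: Directory
nodeSelector:
  kubernetes.io/hostname: node-with-btc-storage   # assumed node name

The nodeSelector (or an equivalent node affinity) is what ties the pod to the host that has the block storage mounted, since hostPath alone does not schedule the pod onto any particular node.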
I am trying to set up a Jenkins and Nexus integration using a Docker Compose file. My Jenkins image is extended with a few plugins via a Dockerfile and declares a volume under /var/lib/jenkins/:
VOLUME ["/var/lib/jenkins/"]
In the Compose file I am trying to map that volume to the local directory /opt/jenkins/:
jenkins:
  build: ./jenkins
  ports:
    - 9090:8080
  volumes:
    - /opt/jenkins/:/var/lib/jenkins/
But nothing is copied to my persistent directory (/opt/jenkins/).
I can see all my Jenkins jobs created under a _data/jobs/ directory inside some anonymous volume, not in the host directory I mapped to /var/lib/jenkins/.
Can anyone help me understand why this is happening?
From the documentation:
Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization. (Note that this does not apply when mounting a host directory.)
And in the mount a host directory as data volume:
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
So basically you are overlaying (hiding) anything that was in /var/lib/jenkins. Can your image function if those things are hidden?
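If you want the copy-on-first-use behavior from the first quote, a named volume instead of a host directory would trigger it; a sketch in the version 2 Compose format (the jenkins_home volume name is an assumption):

version: "2"
services:
  jenkins:
    build: ./jenkins
    ports:
      - 9090:8080
    volumes:
      - jenkins_home:/var/lib/jenkins/
volumes:
  jenkins_home:

On the first run, Docker populates the empty jenkins_home volume with the image's contents at /var/lib/jenkins/, and the data then persists across container restarts; the trade-off is that it lives under Docker's volume storage rather than at /opt/jenkins/.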