We have created a Deployment in an EKS cluster. The underlying pods are supposed to contain pre-existing content in a particular directory, placed there by a COPY instruction in the Dockerfile. But when a pod is created, an external EFS volume is mounted on that same directory path due to application requirements. When we log in to the pod and check the contents, we find that the existing files have been hidden by the EFS volume's contents. We would like both sets of files to be in place once the EFS volume is mounted on the pod. Please help us achieve this.
Related
I have a Docker container image that requires me to mount a volume containing a specific configuration file in order for the container to start properly (this image is not one I control; it is vendor-supplied). If that volume is not mounted, the container exits because the file is not found. So I need to put a configuration file in /host/folder/, and then:
docker run --name my_app -v /host/folder:/container/folder image_id
The application will then look in /container/folder/ for the file it needs to start.
I want to create/commit a new image with that file inside /container/folder/, but when that folder is mounted as a volume from the host, docker cp will not help me do this, as far as I have tried. I think that, as far as Docker is concerned, the file copied there is no different from the files in the mounted volume, and it will disappear when the container is stopped.
Part of the reason I want to do this is that the file will not change and should be there by default. The other reason is that I want to run this container in Kubernetes and avoid using persistent volumes there to mount these directories. I have looked into ConfigMaps, but I'm not seeing how I can use them for this purpose.
If you can store the file in a ConfigMap, you can mount it into a volume and use it inside Kubernetes.
I am not sure what type of file you have, but plain-text configuration works well with ConfigMaps.
The ConfigMap will inject this file into a volume of the pod so the application can access and use it.
In this case no PVC is required.
The example below shows how to mount the file into a volume inside a pod.
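A minimal sketch, assuming the file is called app.conf and the application reads it from /container/folder (the ConfigMap name, pod name, and file contents are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config             # assumed name
data:
  app.conf: |
    key=value                  # stand-in for the real file contents
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: image_id          # the vendor image from the question
      volumeMounts:
        - name: config
          mountPath: /container/folder
  volumes:
    - name: config
      configMap:
        name: app-config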
OR
Also, I am not sure about the Docker image, but if you can build on top of it, you can add the file at that path, something like:
FROM <docker image>
COPY file /container/folder/
In this case, you will have to check whether you are allowed to use the vendor's Docker image as a base and add the file into it.
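If that is allowed, building and running the extended image would look something like this (the image tag is a placeholder):

docker build -t my_app_with_conf .
docker run --name my_app my_app_with_conf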
Is there any way to mount a volume in a pod without writing back to the host? For example, I have the folder "users" on the host, and I want it mounted in the pod at "/data", but writes the pod makes to "/data" should not reach the host; the pod should only read and write "/data".
To deploy the pods, I'm using kubectl apply -f deployment.yaml.
I understand that files / folders can be copied into a container using the command:
kubectl cp /tmp/foo_dir <some-pod>:/tmp/bar_dir
However, I am looking to do this in a YAML file.
How would I go about doing this? (Assume the container is managed by a Deployment.)
You are going in the wrong direction; Kubernetes offers several ways to do this.
First, think about a ConfigMap:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap
You can easily define the configuration files for your application running in a container.
If you know the files or folders already exist on a worker node, you can use hostPath to mount them into the container, pinning the pod to that node with nodeName: node01 in the YAML (a sketch follows below):
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
If the files or folders are generated temporarily, you can use emptyDir:
https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
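A minimal sketch of the hostPath option above, assuming the folder /users exists on node01 and should appear in the pod at /data (the image and all names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-example
spec:
  nodeName: node01              # pin the pod to the node that holds the files
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: host-files
          mountPath: /data
  volumes:
    - name: host-files
      hostPath:
        path: /users            # existing folder on that worker node
        type: Directory

Note that hostPath ties the pod to one specific node, which is why nodeName is set explicitly.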
You cannot; mapping local files from your workstation is not a feature of Kubernetes.
I am using Kubernetes 1.11 with CephFS as storage.
I am trying to mount a directory created on CephFS into a pod. To achieve this, I have written the following volume and volume-mount config in the deployment configuration.
Volume
{
  "name": "cephfs-0",
  "cephfs": {
    "monitors": [
      "10.0.1.165:6789",
      "10.0.1.103:6789",
      "10.0.1.222:6789"
    ],
    "user": "cfs",
    "secretRef": {
      "name": "ceph-secret"
    },
    "readOnly": false,
    "path": "/cfs/data/conf"
  }
}
volumeMounts
{
  "mountPath": "/opt/myapplication/conf",
  "name": "cephfs-0",
  "readOnly": false
}
The mount works properly. I can see the Ceph directory, i.e. /cfs/data/conf, getting mounted on /opt/myapplication/conf, but the following is my issue.
Configuration files are already present as part of the Docker image at /opt/myapplication/conf. When the deployment tries to mount the Ceph volume, all the files at /opt/myapplication/conf disappear. I know this is the behavior of the mount operation, but is there any way to persist the already-existing files in the container onto the volume I am mounting, so that other pods mounting the same volume can access the configuration files? That is, the files already inside the pod at /opt/myapplication/conf should become accessible on CephFS at /cfs/data/conf.
Is it possible?
I went through the Docker documentation, and it mentions:
Populate a volume using a container
If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents are copied into the volume. The container then mounts and uses the volume, and other containers which use the volume also have access to the pre-populated content.
This matches my requirement, but how do I achieve it with Kubernetes volumes?
Unfortunately, Kubernetes' volume system is very different from Docker's, so this is not possible directly. If there is a single file (or a small number of them), you can use subPath projection like this:
volumeMounts:
- name: cephfs-0
mountPath: /opt/myapplication/conf/foo.conf
subPath: foo.conf
Repeat that for each file. But if you have a lot of files, or if they can vary, then you have to handle this at runtime or use templating tools. Usually that means mounting it somewhere else and setting up symlinks before your main process starts.
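For context, a fuller sketch of the subPath projection above, reusing the question's CephFS volume definition (the image name is a placeholder); note that foo.conf must already exist on the CephFS share:

spec:
  containers:
    - name: myapplication
      image: myapplication:latest           # placeholder
      volumeMounts:
        - name: cephfs-0
          mountPath: /opt/myapplication/conf/foo.conf
          subPath: foo.conf                 # mounts only this file, leaving the
                                            # image's other conf files visible
  volumes:
    - name: cephfs-0
      cephfs:
        monitors:
          - 10.0.1.165:6789
          - 10.0.1.103:6789
          - 10.0.1.222:6789
        user: cfs
        secretRef:
          name: ceph-secret
        path: /cfs/data/conf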
Very easy! You can use an init container here, and have the init container use the same image as your application deployment.
Assuming your container path is /opt/myapplication/conf:
Your init container will share the CephFS PVC with the main container.
In the init container, define the volume mount at /opt/data.
In the init container's command, run mv (or cp) to move your existing data to the mounted volume path /opt/data.
In the main application container, mount the volume at the correct location, i.e. /opt/myapplication/conf.
Now when you deploy your application, your init container mounts the CephFS PV and moves the image's data onto the volume. Then your main application starts and mounts the volume on the correct path; by that point the volume already holds the data. A sketch follows below.
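A minimal sketch of that sequence, assuming a PVC named cephfs-pvc and an application image myapplication:latest (both placeholders); it uses cp -a instead of mv so the image layer is copied rather than emptied:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapplication
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapplication
  template:
    metadata:
      labels:
        app: myapplication
    spec:
      initContainers:
        - name: seed-conf
          image: myapplication:latest               # same image as the app
          command: ["sh", "-c", "cp -a /opt/myapplication/conf/. /opt/data/"]
          volumeMounts:
            - name: cephfs-0
              mountPath: /opt/data                  # volume on a scratch path
      containers:
        - name: myapplication
          image: myapplication:latest
          volumeMounts:
            - name: cephfs-0
              mountPath: /opt/myapplication/conf    # same volume, final path
      volumes:
        - name: cephfs-0
          persistentVolumeClaim:
            claimName: cephfs-pvc                   # assumed PVC name

One caveat: the init container reruns on every pod start, so the copy overwrites whatever is on the volume; add a guard (e.g. copy only when the target is empty) if the files may be edited later.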
I was able to fix this by making my ENTRYPOINT a bash script that moves the config files I wanted mounted to their correct location. It seems the "device or resource busy" errors were happening because the files were not mounted yet.
I also encountered this very niche issue of not being able to mount a folder to a specific path with content from my built image; the target ends up empty.
However, my workaround is to use ENTRYPOINT in the Dockerfile, referring to a shell script that runs the commands to initialize the DB or otherwise work with files in the mounted target folder.
So it seems that the entrypoint runs after the volume is mounted by Kubernetes.
I did try to symlink the path in the entrypoint script, but that didn't work out.
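A sketch of that entrypoint workaround, assuming the Dockerfile copies the default files to a /defaults staging directory and sets ENTRYPOINT ["/entrypoint.sh"] (the paths and script name are assumptions):

#!/bin/sh
# entrypoint.sh - runs after Kubernetes has mounted the volume, so the copy
# lands on the volume instead of being shadowed by it.
cp -an /defaults/. /opt/myapplication/conf/   # -n: don't clobber files already there
exec "$@"                                     # hand off to the container's real command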
I am trying to set up AKS, where I have used an Azure disk to mount the application's source code. When I run the kubectl describe pods command, it shows the volume as mounted, but I don't know how to copy the code onto it.
I got recommendations to use the kubectl cp command, but my pod name changes every time I deploy, so please let me know what I should do.
You'd need to copy the files to the disk directly (not to the pod). You can use your pod or a worker node to do that: either use kubectl cp to copy files into the pod and then move them onto the mounted disk as you normally would, or SSH to the worker node, copy the files over SSH, and put them on the mounted disk.
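If the pod name keeps changing, one common pattern is to look it up by label first (the app=myapp label and the /mnt/azure mount path are assumptions; substitute your deployment's values):

POD=$(kubectl get pods -l app=myapp -o jsonpath='{.items[0].metadata.name}')
kubectl cp ./source-code "$POD":/mnt/azure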