I have a Docker image that creates a few folders and extracts files into them, like below:
RUN mkdir -p /home/myapp/myappv4 \
    /home/myapp/myappv4/files \
    /home/myapp/myappv4/files/logs \
    /home/myapp/myappv4/myappentries
WORKDIR /home/myapp
RUN chown -R myapp:myapp /home/myapp
ADD /myapp-v4-files/*.zip /home/myapp/myappv4/files/
ADD /myapp-v4-files/init.txt /home/myapp/myappv4/myappentries/
ADD /myapp-v4-files/pro.json /home/myapp/myappv4/myappentries/
These folders and files need to be accessed by other containers in a pod in Kubernetes. Should I create a PersistentVolume in Kubernetes, have these locations in it, and copy the content from this container into that volume? That way they would not get deleted, right? Since I am new to Kubernetes, I am not sure how to achieve this. The transition from a Docker container to a Kubernetes deployment is the confusing part for me; any help on this would be appreciated.
If you want to share a set of directories between multiple containers in a single pod, an emptyDir volume will suffice. You don't need to use PersistentVolumes (unless you want persistence, meaning you want the data to survive pod restarts).
However, note that mounting a volume (a Kubernetes construct) will hide the files already present in your container at the mount path, much like an upper layer shadowing files in the layered filesystem that Docker uses.
For your use case, I think you can move the file-fetching logic from the Dockerfile to a script that the pod runs; that will fix the issue mentioned above (see the sketch after the example below).
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - image: k8s.gcr.io/test-webserver
    name: test-container-2
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
Read more about volumes here.
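If you do populate the shared volume from inside the pod, a minimal sketch could look like the one below (assuming your files are baked into a myapp image; the image name and paths here are placeholders). An init container copies the content into the emptyDir before the other containers start:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-shared-files
spec:
  volumes:
  - name: app-files
    emptyDir: {}
  initContainers:
  # Copies the content baked into the image into the shared volume
  - name: populate-files
    image: myapp-image   # hypothetical image with /home/myapp/myappv4 baked in
    command: ["sh", "-c", "cp -r /home/myapp/myappv4/. /shared/"]
    volumeMounts:
    - name: app-files
      mountPath: /shared
  containers:
  # Any other container in the pod can now read the copied files
  - name: consumer
    image: busybox
    command: ["sh", "-c", "ls /home/myapp/myappv4 && sleep 3600"]
    volumeMounts:
    - name: app-files
      mountPath: /home/myapp/myappv4
The data lives for the lifetime of the pod; if you need it to survive pod deletion, swap the emptyDir for a PersistentVolumeClaim.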
Related
In my k8s pod, I want to give a container access to an S3 bucket, mounted with rclone.
Now, the container running rclone needs to run with --privileged, which is a problem for me, since my main-container will run user code which I have no control over and which could potentially be harmful to my pod.
The solution I’m trying now is to have a sidecar-container just for the task of running rclone, mounting S3 in a /shared_storage folder, and sharing this folder with the main-container through a Volume shared-storage. This is a simplified pod.yml file:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-storage
    emptyDir: {}
  containers:
  - name: main-container
    image: busybox
    command: ["sh", "-c", "sleep 1h"]
    volumeMounts:
    - name: shared-storage
      mountPath: /shared_storage
      # mountPropagation: HostToContainer
  - name: sidecar-container
    image: mycustomsidecarimage
    securityContext:
      privileged: true
    command: ["/bin/bash"]
    args: ["-c", "python mount_source.py"]
    env:
    - name: credentials
      value: XXXXXXXXXXX
    volumeMounts:
    - name: shared-storage
      mountPath: /shared_storage
      mountPropagation: Bidirectional
The pod runs fine, and from sidecar-container I can read, create and delete files in my S3 bucket.
But from main-container no files are listed inside shared_storage. I can create files (if I set readOnly: false), but those do not appear in sidecar-container.
If I don't run the rclone mount into that folder, the containers are able to share files again. So that tells me it is something about the rclone process not letting main-container read from it.
In mount_source.py I am running rclone with --allow-other, and I have edited /etc/fuse.conf as suggested here.
Does anyone have an idea on how to solve this problem?
I've managed to make it work by using:
mountPropagation: HostToContainer on main-container
mountPropagation: Bidirectional on sidecar-container
I can control read/write permissions on specific mounts using readOnly: true/false on main-container. This is of course also possible to set within the rclone mount command.
Now the main-container does not need to run in privileged mode, and my users' code can access their S3 buckets through those mount points!
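For reference, a sketch of the relevant volumeMounts with the settings described above (only the mountPropagation and optional readOnly lines differ from the pod spec in the question):
containers:
- name: main-container
  image: busybox
  volumeMounts:
  - name: shared-storage
    mountPath: /shared_storage
    mountPropagation: HostToContainer   # sees mounts created by the sidecar
    readOnly: true                      # optional: restrict user code to read-only
- name: sidecar-container
  image: mycustomsidecarimage
  securityContext:
    privileged: true                    # only the sidecar needs privileges for FUSE
  volumeMounts:
  - name: shared-storage
    mountPath: /shared_storage
    mountPropagation: Bidirectional     # propagates the rclone mount out of the sidecar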
Interestingly, it doesn't seem to work if I set the volumeMount's mountPath to be a sub-folder of the rclone-mounted path. So if I want to grant main-container different read/write permissions to different subpaths, I have to create a separate rclone mount for each sub-folder.
I'm not 100% sure whether there are any extra security concerns with that approach, though.
I've got a fluentd.conf file and I'm trying to use (copy) it in my minikube cluster (works fine in a docker container).
The steps are the following:
I'll build (in my minikube docker-env) the Docker image with the following
Dockerfile:
FROM fluent/fluentd:v1.11-1
# Use root account to use apk
USER root
# below RUN includes plugin as examples elasticsearch is not required
# you may customize including plugins as you wish
RUN apk add --no-cache --update --virtual .build-deps \
        sudo build-base ruby-dev \
    && sudo gem install fluent-plugin-elasticsearch \
    && sudo gem sources --clear-all \
    && apk del .build-deps \
    && rm -rf /tmp/* /var/tmp/* /usr/lib/ruby/gems/*/cache/*.gem
COPY conf/fluent.conf /fluentd/etc/
USER fluent
This executes successfully (so the copy works) and results in a runnable image; the next step is to launch the pods.
Part of the deployment file:
spec:
  containers:
  - image: fluentd
    imagePullPolicy: ""
    name: fluentd
    ports:
    - containerPort: 24224
    - containerPort: 24224
      protocol: UDP
    resources: {}
    volumeMounts:
    - mountPath: /fluentd/etc
      name: fluentd-claim0
  restartPolicy: Always
  serviceAccountName: ""
  volumes:
  - name: fluentd-claim0
    persistentVolumeClaim:
      claimName: fluentd-claim0
Everything launches fine, but the fluentd pod returns the following error:
No such file or directory # rb_sysopen - /fluentd/etc/fluent.conf
Anybody got an idea what I'm missing or what I'm doing wrong?
Thanks in advance,
This happens because you are mounting a volume onto the same folder.
Inside your image, the config file is placed at /fluentd/etc/fluent.conf.
In your pod, you mount the fluentd-claim0 volume onto /fluentd/etc/:
volumeMounts:
- mountPath: /fluentd/etc
  name: fluentd-claim0
Just as when you mount a volume over a non-empty directory in Linux, all files present in the mount-point directory are hidden by the mount itself.
In order to "fix" this, you could use the subPath option of the volumeMounts entry, as described in the documentation. For example:
volumeMounts:
- mountPath: /fluentd/etc/myfileordirectory
  name: fluentd-claim0
  subPath: myfileordirectory
This way only myfileordirectory will be mounted into /fluentd/etc/ and the rest of the files will still be visible.
The limitation of this approach is that you need to know the list of files in your volume in advance. If that is not possible, the only alternative is to use a different directory for either your config file or your mountPath.
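Applied to this particular setup, a minimal sketch (assuming the PVC really contains a file named fluent.conf at its root) would be:
volumeMounts:
- mountPath: /fluentd/etc/fluent.conf   # only this path is taken over by the volume
  name: fluentd-claim0
  subPath: fluent.conf                  # file expected to exist inside the PVC
The rest of /fluentd/etc/ from the image stays visible; only fluent.conf is served from the volume.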
I see two problems in this setup.
First, your image: fluentd is running the Docker Hub fluentd image and not your customized image. You need to give it a name including a registry address or your Docker Hub user name to use something else. (You say you're using minikube, so you could run docker build -t my/fluentd in the minikube Docker context to get this image; try to avoid using names that could conflict with Docker Hub image names, even for local builds.)
Second, your volume setup is creating a new, empty persistent volume and then mounting that empty volume over the configuration file in the image. Nothing copies anything into the persistent volume, so you're getting an empty directory inside the container. (This is different from Docker named volumes; even in plain Docker, I don't recommend relying on the copy-on-first-use volume behavior.) You can delete everything that mentions the volumes.
That leaves you with a simpler pod spec, especially if you also delete fields that are being left at default values:
spec:
  containers:
  - image: my/fluentd # not plain "fluentd"
    name: fluentd
    ports:
    - containerPort: 24224
    - containerPort: 24224
      protocol: UDP
  restartPolicy: Always
Another option is to provide the configuration file in a ConfigMap. It wouldn't necessarily be in your image in this case, and if you use a tool like Helm to deploy, there's an opportunity to set up the configuration at deployment time. Here you would have most of the volume machinery, but the volumes: block would reference the configMap: and not a PVC.
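A rough sketch of that variant (the ConfigMap name and the sample source block below are placeholders, not your real fluent.conf):
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    <source>
      @type forward
      port 24224
    </source>
In the pod spec, the volume then points at the ConfigMap rather than a PVC:
volumeMounts:
- mountPath: /fluentd/etc
  name: fluentd-config
volumes:
- name: fluentd-config
  configMap:
    name: fluentd-config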
I have a Docker image and I'd like to share an entire directory on a volume (a Persistent Volume) on Kubernetes.
Dockerfile
FROM node:carbon
WORKDIR /node-test
COPY hello.md /node-test/hello.md
VOLUME /node-test
CMD ["tail", "-f", "/dev/null"]
Basically it copies a file hello.md and makes it part of the image (let's call it my-image).
In the Kubernetes deployment config I create a container from my-image and mount a volume at a specific directory.
Kubernetes deployment
# ...
spec:
  containers:
  - image: my-user/my-image:v0.0.1
    name: node
    volumeMounts:
    - name: node-volume
      mountPath: /node-test
  volumes:
  - name: node-volume
    persistentVolumeClaim:
      claimName: node-volume-claim
I'd expect to see the hello.md file in the directory of the persistent volume, but nothing shows up.
If I don't bind the container to a volume I can see the hello.md file (with kubectl exec -it my-container bash).
I'm not doing anything different from what this official Kubernetes example does. As a matter of fact, I can change mountPath, switch to the official Wordpress image, and it works as expected.
How can the Wordpress image copy all its files into the volume directory?
What's in the Wordpress Dockerfile that is missing from mine?
In order not to hide the existing files/content, you can use subPath to mount the testdir directory (in the example below) into the existing container filesystem.
volumeMounts:
- name: node-volume
  mountPath: /node-test/testdir
  subPath: testdir
volumes:
- name: node-volume
  persistentVolumeClaim:
    claimName: node-volume-claim
You can find more information here: using-subpath
I have a builder image / container which is supposed to run tests on a directory with test sources.
The container is run in a Kubernetes pod, in AWS EKS, through helm test. I.e. not plain Docker, so I can't simply use a -v volume mount.
I am struggling to find a simple way to bring this directory into the container. This is the Helm template I have; everything works except for the volume.
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-gatling-test"
  annotations:
    "helm.sh/hook": test-success
spec:
  restartPolicy: Never
  containers:
  - name: {{ .Release.Name }}-gatling-test
    image: {{ .Values.builderImage }}
    command: ["sh", "-c", 'mvn -B gatling:test -pl csa-testing -DCSA_SERVER={{ template "project.fullname" . }} -DCSA_PORT={{ .Values.service.appPort }}']
    ## TODO: The builder image also counts with having /tmp/build, so it needs a mount: -v '${job.WORKDIR}:/tmp/build'
    volumeMounts:
    - name: mavenRepoToBuild
      mountPath: /tmp/build
  volumes:
  - name: mavenRepoToBuild
    hostPath:
      path: {{.Values.fromJenkins.WORKDIR}}
I've read in a few places that it can't be done directly. So what's the easy way to do it indirectly? Zip it, upload it to S3, and download it? Or add it to the image as a layer? Or should I create a Kubernetes volume resource?
The hostPath directory or file must exist on all your cluster nodes.
You can set a type on the hostPath to determine whether it is a file or a directory.
The list of types you can use for hostPath can be found in the Kubernetes documentation:
https://kubernetes.io/docs/concepts/storage/volumes/
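For instance, a sketch with an explicit type (assuming the Jenkins workspace path actually exists on the node where the pod is scheduled):
volumes:
- name: mavenRepoToBuild
  hostPath:
    path: {{.Values.fromJenkins.WORKDIR}}
    type: Directory   # fail fast if the directory does not exist on the node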
Btw, what error do you get? Permission denied? You can do a Helm dry run (e.g. helm install --dry-run --debug) to see the rendered template.
UPDATE:
I connected to the minikube VM and I can see my host directory mounted, but there are no files in it. Also, when I create a file there, it does not show up on my host machine. Is there any link between them?
I am trying to mount a host directory for developing my app with Kubernetes.
As the docs recommend, I am using minikube to run my Kubernetes cluster on my PC. The goal is to create a development environment with Docker and Kubernetes for developing my app. I want to mount a local directory so my Docker container will read the app code from there, but it is not working. Any help would be really appreciated.
my test app (server.js):
var http = require('http');
var handleRequest = function(request, response) {
response.writeHead(200);
response.end("Hello World!");
}
var www = http.createServer(handleRequest);
www.listen(8080);
my Dockerfile:
FROM node:latest
WORKDIR /code
ADD code/ /code
EXPOSE 8080
CMD ["node", "server.js"]
my Kubernetes pod configuration (pod-configuration.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: apiserver
spec:
  containers:
  - name: node
    image: myusername/nodetest:v1
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: api-server-code-files
      mountPath: /code
  volumes:
  - name: api-server-code-files
    hostPath:
      path: /home/<myuser>/Projects/nodetest/api-server/code
my folders are:
/home/<myuser>/Projects/nodetest/
  - pod-configuration.yaml
  - api-server/
    - Dockerfile
    - code/
      - server.js
When I run my Docker image without the hostPath volume it of course works, but the problem is that on each change I would have to recreate my image, which is really impractical for development; that's why I need the hostPath volume.
Any idea why I don't succeed in mounting my local directory?
Thanks for the help.
EDIT: Looks like the solution is to either use a privileged container, or to manually mount your home folder to allow the minikube VM to read from your hostPath -- https://github.com/boot2docker/boot2docker#virtualbox-guest-additions. (Credit to Eliel for figuring this out.)
It is absolutely possible to configure a hostPath volume with minikube - but there are a lot of quirks and there isn't very good support for this particular issue.
Try removing ADD code/ /code from your Dockerfile. Docker's "ADD" instruction is copying the code from your host machine into your container's /code directory. This is why rebuilding the image successfully updates your code.
When Kubernetes tries to mount the container's /code directory to the host path, it finds that this directory is already full of the code that was baked into the image. If you take this out of the build step, Kubernetes should be able to successfully mount the host path at runtime.
Also be sure to check the permissions of the code/ directory on your host machine.
My only other thought is related to mounting in the root directory. I had issues when mounting Kubernetes hostPath volumes to/from directories in the root directory (I assume this was permissions related). So, something else to try would be a mountPath like /var/www/html.
Here's an example of a functional hostPath volume:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  volumes:
  - name: example-volume
    hostPath:
      path: '/Users/example-user/code'
  containers:
  - name: example-container
    image: example-image
    volumeMounts:
    - mountPath: '/var/www/html'
      name: example-volume
They have now added minikube mount, which works in all environments:
https://github.com/kubernetes/minikube/blob/master/docs/host_folder_mount.md
Tried on Mac:
$ minikube mount ~/stuff/out:/mnt1/out
Mounting /Users/macuser/stuff/out into /mnt1/out on the minikube VM
This daemon process needs to stay alive for the mount to still be accessible...
ufs starting
And in the pod:
apiVersion: v1
kind: Pod
metadata:
  name: myServer
spec:
  containers:
  - name: myServer
    image: myImage
    volumeMounts:
    - mountPath: /mnt1/out
      name: volume
    # Just spin & wait forever
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
  volumes:
  - name: volume
    hostPath:
      path: /mnt1/out
Best practice would be to build the code into your image; you should not run an image whose code just comes from the disk. Your Dockerfile should look more like:
FROM node:latest
COPY code/server.js /code/server.js
EXPOSE 8080
CMD ["node", "/code/server.js"]
Then you run the image on Kubernetes without any volumes. You need to rebuild the image and update the pod every time you update the code.
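A sketch of the pod without any volume, reusing the image name from the question:
apiVersion: v1
kind: Pod
metadata:
  name: apiserver
spec:
  containers:
  - name: node
    image: myusername/nodetest:v1
    ports:
    - containerPort: 8080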
Also, I'm currently not aware that minikube allows for mounts between the VM it creates and the host you are running it on.
If you really want the extremely fast feedback cycle of changing code while the container is running, you might be able to use just Docker by itself with -v /path/to/host/code:/code, without Kubernetes, and then once you are ready, build the image and deploy and test it on minikube. However, I'm not sure that would work if you're changing the main .js file of your node app.