I would like to share a directory from my Windows host with my Kubernetes container, achieving the same as what I used to have in docker-compose:
volumes:
  - ./data:/mnt/data
I tried with hostPath the following way:
volumes:
- name: data-vol
  hostPath:
    path: ./data
    type: Directory
It failed with "MountVolume.SetUp failed for volume "data-vol" : hostPath type check failed: ./data is not a directory"
I tried different path formats (e.g. /C/data or /host_mnt/c/data) but with no success. Any idea how to overcome this?
Can you try using an absolute path instead of a relative one:
FROM:
  path: ./data
  type: Directory
TO:
  path: /my/absolute/path/data
  type: Directory
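Note that what counts as an "absolute path" depends on where the kubelet actually runs. On Docker Desktop for Windows with the WSL 2 backend, for example, the C: drive is commonly reported to be visible to the node under /run/desktop/mnt/host/c. A sketch under that assumption (the exact path is not verified for your setup):

```yaml
volumes:
- name: data-vol
  hostPath:
    # Assumption: Docker Desktop (WSL 2 backend) exposes C:\data here.
    path: /run/desktop/mnt/host/c/data
    type: Directory
```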
Related
Need some help with Cloud Run and Secret Manager: we need to mount 2 secrets as volumes (file only). The following is the YAML from Cloud Run.
volumeMounts:
- name: secret-2f1d5ec9-d681-4b0f-8a77-204c5f853330
  readOnly: true
  mountPath: /root/key/mtls/client_auth.p12
- name: secret-29c1417a-d9fe-4c37-8cb0-562c97f3c827
  readOnly: true
  mountPath: /root/key/firebase/myapp-d2a0f-firebase-adminsdk-irfes-a699971a4d.json
volumes:
- name: secret-2f1d5ec9-d681-4b0f-8a77-204c5f853330
  secret:
    secretName: myapp_mtls_key
    items:
    - key: latest
      path: myapp_mtls_key
- name: secret-29c1417a-d9fe-4c37-8cb0-562c97f3c827
  secret:
    secretName: myapp_firebase_token
    items:
    - key: latest
      path: myapp_firebase_token
The mTLS secret (the .p12 file) is mounted properly as a file, but the firebase secret (the .json file) is mounted as a directory instead.
java.io.FileNotFoundException: /root/key/firebase/myapp-d2a0f-firebase-adminsdk-irfes-a699971a4d.json (Is a directory)
at java.base/java.io.FileInputStream.open0(Native Method)
at java.base/java.io.FileInputStream.open(FileInputStream.java:216)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:111)
at com.myapp.gcp.GCPInit.init(GCPInit.java:39)
Based on Docker convention, if a file is not found on the host then it is mounted as a directory; but in this case we do not have control over the host path or file availability, so could it be a bug?
When testing our deployment in a Docker container with volume mounts, everything works fine, so we are sure our application is not at fault.
Appreciate any guidance on this issue.
Thanks
I am learning about Volumes in Kubernetes.
I understand the concept of a Volume and the types of volumes.
But I am confused about the mountPath property. Here is my YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-alpine-volume
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    resources: {}
  - name: html-updater
    image: alpine
    command: ["/bin/sh", "-c"]
    args:
    - while true; do date >> /mohit/index.html; sleep 10; done
    resources: {}
    volumeMounts:
    - name: html
      mountPath: /mohit
  volumes:
  - name: html
    emptyDir: {}
Question: What is the use of the mountPath property? Here I am using one pod with two containers, and the two containers have different mountPath values.
Update:
Consider the mountPath as the directory where the volume is attached (mounted) inside the container's file system, while the actual volume is the emptyDir.
The idea behind the two containers having different mount paths is that each container needs the shared data in a different folder of its own file system.
Since both mounts refer to the same single volume (html), the two containers are pointing at the same underlying directory, each through its own mount point, and manage their files there.
So the mountPath is simply the point (directory) where your container will be managing the volume's files.
emptyDir: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
Read more at: https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
If you look at this example: https://github.com/harsh4870/Kubernetes-wordpress-php-fpm-nginx/blob/master/wordpress-deployment.yaml
it has the same two containers with mount paths and an emptyDir volume.
What I am doing there is attaching an Nginx configuration file to the container, so Nginx will use the configuration file that I mount into the container from outside.
The config file is stored inside a ConfigMap or Secret.
How should Mutagen be configured to synchronise local source code files with a Docker volume on a Kubernetes cluster?
I used to mount my project directory onto a container, using hostPath:
kind: Deployment
spec:
  volumes:
  - name: "myapp-src"
    hostPath:
      path: "path/to/project/files/on/localhost"
      type: "Directory"
  ...
  containers:
  - name: myapp
    volumeMounts:
    - name: "myapp-src"
      mountPath: "/app/"
but this has permission and symlink problems that I need to solve using Mutagen.
At the moment, it works correctly when relying on docker-compose (run via mutagen project start -f path/to/mutagen.yml):
sync:
  defaults:
    symlink:
      mode: ignore
    permissions:
      defaultFileMode: 0664
      defaultDirectoryMode: 0775
  myapp-src:
    alpha: "../../"
    beta: "docker://myapp-mutagen/myapp-src"
    mode: "two-way-safe"
But it isn't clear to me how to configure the K8S Deployment in order to use Mutagen to keep the myapp-src volume in sync with localhost.
I have a Docker image and I'd like to share an entire directory on a volume (a Persistent Volume) on Kubernetes.
Dockerfile
FROM node:carbon
WORKDIR /node-test
COPY hello.md /node-test/hello.md
VOLUME /node-test
CMD ["tail", "-f", "/dev/null"]
Basically it copies a file hello.md and makes it part of the image (let's call it my-image).
On the Kubernetes deployment config I create a container from my-image and I share a specific directory to a volume.
Kubernetes deployment
# ...
spec:
  containers:
  - image: my-user/my-image:v0.0.1
    name: node
    volumeMounts:
    - name: node-volume
      mountPath: /node-test
  volumes:
  - name: node-volume
    persistentVolumeClaim:
      claimName: node-volume-claim
I'd expect to see the hello.md file in the directory of the persistent volume, but nothing shows up.
If I don't bind the container to a volume I can see the hello.md file (with kubectl exec -it my-container bash).
I'm not doing anything different from what this official Kubernetes example does. As a matter of fact, I can change mountPath, switch to the official Wordpress image, and it works as expected.
How can Wordpress image copy all files into the volume directory?
What's in the Wordpress Dockerfile that is missing on mine?
In order not to overwrite the existing files/content, you can use subPath to mount the testdir directory (in the example below) into the existing container file system.
volumeMounts:
- name: node-volume
  mountPath: /node-test/testdir
  subPath: testdir
volumes:
- name: node-volume
  persistentVolumeClaim:
    claimName: node-volume-claim
You can find more information here: using-subpath
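Alternatively, if the goal is to pre-populate the volume with the files baked into the image (the Wordpress image appears to do this itself: its entrypoint script copies the application files into the mounted directory at startup), a common pattern is an initContainer that performs the copy. A minimal sketch, reusing the image and claim names from the question (the initContainer name is made up for the example):

```yaml
spec:
  initContainers:
  # Copies the files baked into the image onto the (initially empty) volume.
  - name: copy-baked-files
    image: my-user/my-image:v0.0.1
    command: ["sh", "-c", "cp -r /node-test/. /data/"]
    volumeMounts:
    - name: node-volume
      mountPath: /data   # mounted elsewhere so /node-test keeps the image's files
  containers:
  - name: node
    image: my-user/my-image:v0.0.1
    volumeMounts:
    - name: node-volume
      mountPath: /node-test
  volumes:
  - name: node-volume
    persistentVolumeClaim:
      claimName: node-volume-claim
```

Note the initContainer mounts the volume at a different path (/data) precisely so the image's original /node-test content is still visible to copy from.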
UPDATE:
I connected to the minikube VM and I can see my host directory mounted, but there are no files there. Also, when I create a file there, it does not appear on my host machine. Is there any link between them?
I am trying to mount a host directory for developing my app with Kubernetes.
As the docs recommend, I am using minikube to run my Kubernetes cluster on my PC. The goal is to create a development environment with Docker and Kubernetes for my app. I want to mount a local directory so my container will read the app code from there, but it does not work. Any help would be really appreciated.
my test app (server.js):
var http = require('http');

var handleRequest = function(request, response) {
  response.writeHead(200);
  response.end("Hello World!");
};

var www = http.createServer(handleRequest);
www.listen(8080);
my Dockerfile:
FROM node:latest
WORKDIR /code
ADD code/ /code
EXPOSE 8080
CMD ["node", "server.js"]
my pod kubernetes configuration: (pod-configuration.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: apiserver
spec:
  containers:
  - name: node
    image: myusername/nodetest:v1
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: api-server-code-files
      mountPath: /code
  volumes:
  - name: api-server-code-files
    hostPath:
      path: /home/<myuser>/Projects/nodetest/api-server/code
my folder are:
/home/<myuser>/Projects/nodetest/
- pod-configuration.yaml
- api-server/
  - Dockerfile
  - code/
    - server.js
When I run my Docker image without the hostPath volume, it of course works; but the problem is that on each change I must recreate my image, which is really not practical for development. That's why I need the hostPath volume.
Any idea why I don't succeed in mounting my local directory?
Thanks for the help.
EDIT: Looks like the solution is to either use a privileged container, or to manually mount your home folder to allow the minikube VM to read from your hostPath -- https://github.com/boot2docker/boot2docker#virtualbox-guest-additions. (Credit to Eliel for figuring this out.)
It is absolutely possible to configure a hostPath volume with minikube - but there are a lot of quirks and there isn't very good support for this particular issue.
Try removing ADD code/ /code from your Dockerfile. Docker's "ADD" instruction is copying the code from your host machine into your container's /code directory. This is why rebuilding the image successfully updates your code.
When Kubernetes tries to mount the container's /code directory to the host path, it finds that this directory is already full of the code that was baked into the image. If you take this out of the build step, Kubernetes should be able to successfully mount the host path at runtime.
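Following that suggestion, a sketch of the trimmed-down Dockerfile (the code then arrives only through the hostPath mount at runtime):

```dockerfile
FROM node:latest
WORKDIR /code
# No ADD/COPY here: /code stays empty in the image and is
# populated by the hostPath volume when the pod starts.
EXPOSE 8080
CMD ["node", "server.js"]
```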
Also be sure to check the permissions of the code/ directory on your host machine.
My only other thought is related to mounting in the root directory. I had issues when mounting Kubernetes hostPath volumes to/from directories in the root directory (I assume this was permissions related). So, something else to try would be a mountPath like /var/www/html.
Here's an example of a functional hostPath volume:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  volumes:
  - name: example-volume
    hostPath:
      path: '/Users/example-user/code'
  containers:
  - name: example-container
    image: example-image
    volumeMounts:
    - mountPath: '/var/www/html'
      name: example-volume
They have now added minikube mount, which works on all environments:
https://github.com/kubernetes/minikube/blob/master/docs/host_folder_mount.md
Tried on Mac:
$ minikube mount ~/stuff/out:/mnt1/out
Mounting /Users/macuser/stuff/out into /mnt1/out on the minikube VM
This daemon process needs to stay alive for the mount to still be accessible...
ufs starting
And in pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-server   # pod/container names must be lowercase DNS labels ("myServer" would be rejected)
spec:
  containers:
  - name: my-server
    image: myImage
    volumeMounts:
    - mountPath: /mnt1/out
      name: volume
    # Just spin & wait forever
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
  volumes:
  - name: volume
    hostPath:
      path: /mnt1/out
Best practice would be to build the code into your image; you should not run an image whose code comes straight from disk. Your Dockerfile should look more like:
FROM node:latest
COPY code/server.js /code/server.js
EXPOSE 8080
CMD ["node", "/code/server.js"]
Then you run the image on Kubernetes without any volumes. You need to rebuild the image and update the pod every time you update the code.
Also, I'm currently not aware that minikube allows for mounts between the VM it creates and the host you are running it on.
If you really want the extremely fast feedback cycle of changing code while the container is running, you might be able to use Docker by itself with -v /path/to/host/code:/code, without Kubernetes, and then build the image and deploy and test it on minikube once you are ready. However, I'm not sure that would work if you're changing the main .js file of your node app.