Configure Mutagen to sync host files with a container inside Kubernetes?

How should Mutagen be configured to synchronise local source code files with a Docker volume on a Kubernetes cluster?
I used to mount my project directory into a container using hostPath:
kind: Deployment
spec:
  volumes:
    - name: "myapp-src"
      hostPath:
        path: "path/to/project/files/on/localhost"
        type: "Directory"
  ...
  containers:
    - name: myapp
      volumeMounts:
        - name: "myapp-src"
          mountPath: "/app/"
but this has permission and symlink problems that I need to solve using Mutagen.
At the moment, it works correctly when relying on docker-compose (run via mutagen project start -f path/to/mutagen.yml):
sync:
  defaults:
    symlink:
      mode: ignore
    permissions:
      defaultFileMode: 0664
      defaultDirectoryMode: 0775
  myapp-src:
    alpha: "../../"
    beta: "docker://myapp-mutagen/myapp-src"
    mode: "two-way-safe"
But it isn't clear to me how to configure the K8s Deployment so that Mutagen keeps the myapp-src volume in sync with the local host.

Related

Rclone mount shared between containers in the same K8s pod

In my k8s pod, I want to give a container access to an S3 bucket, mounted with rclone.
Now, the container running rclone needs to run with --privileged, which is a problem for me, since my main-container will run user code that I have no control over and that could potentially harm my Pod.
The solution I'm trying now is to have a sidecar-container just for the task of running rclone, mounting S3 in a /shared_storage folder and sharing this folder with the main-container through a volume named shared-storage. This is a simplified pod.yml file:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
    - name: shared-storage
      emptyDir: {}
  containers:
    - name: main-container
      image: busybox
      command: ["sh", "-c", "sleep 1h"]
      volumeMounts:
        - name: shared-storage
          mountPath: /shared_storage
          # mountPropagation: HostToContainer
    - name: sidecar-container
      image: mycustomsidecarimage
      securityContext:
        privileged: true
      command: ["/bin/bash"]
      args: ["-c", "python mount_source.py"]
      env:
        - name: credentials
          value: XXXXXXXXXXX
      volumeMounts:
        - name: shared-storage
          mountPath: /shared_storage
          mountPropagation: Bidirectional
The pod runs fine and from sidecar-container I can read, create and delete files from my S3 bucket.
But from main-container no files are listed inside /shared_storage. I can create files (if I set readOnly: false), but those do not appear in sidecar-container.
If I don't run the rclone mount to that folder, the containers are able to share files again. So that tells me it is something about the rclone process not letting main-container read from it.
In mount_source.py I am running rclone with --allow-other and I have edited /etc/fuse.conf as suggested here.
Does anyone have an idea on how to solve this problem?
I've managed to make it work by using:
mountPropagation: HostToContainer on main-container
mountPropagation: Bidirectional on sidecar-container
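For reference, a minimal sketch of what the corrected volumeMounts could look like under this approach (image names and paths are the ones from the pod.yml above; treat it as an illustration rather than a verified manifest):
  containers:
    - name: main-container
      image: busybox
      command: ["sh", "-c", "sleep 1h"]
      volumeMounts:
        - name: shared-storage
          mountPath: /shared_storage
          # propagate the FUSE mount created by the sidecar into this container
          mountPropagation: HostToContainer
          # optionally make the mount read-only for untrusted user code
          readOnly: true
    - name: sidecar-container
      image: mycustomsidecarimage
      securityContext:
        privileged: true   # still required for the rclone/FUSE mount itself
      volumeMounts:
        - name: shared-storage
          mountPath: /shared_storage
          # propagate the mount back out so the other container can see it
          mountPropagation: Bidirectional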
I can control read/write permissions to specific mounts using readOnly: true/false on main-container. This is of course also possible to set within rclone mount command.
Now the main-container does not need to run in privileged mode and my users' code can have access to their S3 buckets through those mount points!
Interestingly, it doesn't seem to work if I set the volumeMount's mountPath to a sub-folder of the rclone-mounted path. So if I want to grant main-container different read/write permissions for different subpaths, I have to create a separate rclone mount for each sub-folder.
I'm not 100% sure if there are any extra security concerns with that approach, though.

How to mount a directory in Windows on a container with Kubernetes on Docker for Windows?

I am using Docker for Windows 2.3.0.4 (stable) backed by WSL2 on Windows 10 version 2004 and with Kubernetes support enabled.
I am trying to create the following pod:
apiVersion: v1
kind: Pod
metadata:
  name: api0
spec:
  volumes:
    - name: "mongo-data"
      hostPath:
        path: "/c/wr/volumes/mongo/data"
  containers:
    - name: db
      image: mongo:3.6.19-xenial
      volumeMounts:
        - mountPath: "/data/db"
          name: "mongo-data"
      resources:
        limits:
          memory: "512Mi"
          cpu: "1"
      ports:
        - containerPort: 27017
I have an issue with the mongo-data volume; when the pod is created via kubectl apply -f api0.yml the pod runs properly and the MongoDB collections are persisted after deleting and re-applying the pod.
But the C:\wr\volumes\mongo\data path that is mounted into the mongo db container does not contain the data files and is always empty.
As I mentioned before, the state is persisted somewhere but not in the specified path.
What am I missing?
I tried specifying the path with the following formats:
/c/wr/volumes/mongo/data
//c/wr/volumes/mongo/data
//////c/wr/volumes/mongo/data
/mnt/c/wr/volumes/mongo/data
And I even tried referencing the /opt/data path in the WSL filesystem, but the data files are never there.
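One further shape that is sometimes suggested for Docker Desktop's WSL2 backend is shown below; this is a hedged sketch only, and the /run/desktop/mnt/host prefix is an assumption about that particular setup, not something verified here:
  volumes:
    - name: "mongo-data"
      hostPath:
        # assumption: Docker Desktop (WSL2 backend) exposes the Windows drive
        # to its Kubernetes node under /run/desktop/mnt/host/<drive-letter>
        path: "/run/desktop/mnt/host/c/wr/volumes/mongo/data"
        type: DirectoryOrCreate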

Expose folder from a container as a volume in a Kubernetes Pod

I have a pod which contains two containers. One container is a web application and the other stores some static data for this web application.
The data here is a set of files stored in the container's /data folder; the only function of this container is to store the data and expose it to the web application.
I'm looking for a way to share the content of this folder with the web application container in this pod.
If I use the YAML spec below, the folder in both containers is empty. Is there a way to share the data from the container's folder without cleaning it up?
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
    version: 1.2.3
spec:
  volumes:
    - name: my-app-data-volume
  containers:
    - name: my-app-server
      image: my-app-server-container-name
      volumeMounts:
        - name: my-app-data-volume
          mountPath: /data
      ports:
        - containerPort: 8080
    - name: my-app-data
      image: my-app-data-container-name
      volumeMounts:
        - name: my-app-data-volume
          mountPath: /data
You can use an emptyDir volume for this. Specify the container that contains the files as an initContainer, then copy the files into the emptyDir volume. Finally, mount that volume in the web app container.
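A minimal sketch of that approach, assuming the data image keeps its files under /data and has a shell with cp available (the copy command and the /shared-data mount path are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  volumes:
    - name: my-app-data-volume
      emptyDir: {}
  initContainers:
    - name: my-app-data
      image: my-app-data-container-name
      # copy the static files baked into the data image onto the shared volume
      command: ["sh", "-c", "cp -r /data/. /shared-data/"]
      volumeMounts:
        - name: my-app-data-volume
          mountPath: /shared-data
  containers:
    - name: my-app-server
      image: my-app-server-container-name
      volumeMounts:
        - name: my-app-data-volume
          mountPath: /data
      ports:
        - containerPort: 8080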

kubernetes: how to write a pod yaml file for mapping a volume from host to container

I have a command to run docker:
docker run --name pre-core -itdp 8086:80 -v /opt/docker/datalook-pre-core:/usr/application app
In the above command, /opt/docker/datalook-pre-core is a host directory and /usr/application is a container directory. The purpose is to map the container directory to the host directory, so that when the container crashes, the directory acts as storage and the data on it is preserved.
When I use Kubernetes to create a pod for this container, how do I write the pod.yaml file?
I guess it is something like the following:
apiVersion: v1
kind: Pod
metadata:
  name: app-ykt
  labels:
    app: app-ykt
    purpose: ykt_production
spec:
  containers:
    - name: app-ykt
      image: app
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
      volumeMounts:
        - name: volumn-app-ykt
          mountPath: /usr/application
  volumes:
    - name: volumn-app-ykt
      ????
I do not know what exact properties I should write in the YAML in my case.
This would be a hostPath volume: https://kubernetes.io/docs/concepts/storage/volumes/
volumes:
  - name: volumn-app-ykt
    hostPath:
      # directory location on host
      path: /opt/docker/datalook-pre-core
      # this field is optional
      type: Directory
However, remember that while a container crash won't move things, other events can cause a pod to move to a different host, so you need to be prepared both to deal with cold caches and to clean up orphaned caches.
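Putting the question's container spec and this volumes block together, the full Pod spec would look roughly like this (names, image, and paths taken from the question; treat it as a sketch):
apiVersion: v1
kind: Pod
metadata:
  name: app-ykt
  labels:
    app: app-ykt
    purpose: ykt_production
spec:
  containers:
    - name: app-ykt
      image: app
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
      volumeMounts:
        - name: volumn-app-ykt
          mountPath: /usr/application
  volumes:
    - name: volumn-app-ykt
      hostPath:
        # directory location on the host node
        path: /opt/docker/datalook-pre-core
        type: Directory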

HostPath with minikube - Kubernetes

UPDATE:
I connected to the minikube VM and I can see my host directory mounted, but there are no files in it. Also, when I create a file there, it does not appear on my host machine. Is there any link between them?
I am trying to mount a host directory for developing my app with Kubernetes.
As the docs recommend, I am using minikube to run my Kubernetes cluster on my PC. The goal is to create a development environment with Docker and Kubernetes for developing my app. I want to mount a local directory so that my container reads the app code from there. But it does not work. Any help would be really appreciated.
my test app (server.js):
var http = require('http');
var handleRequest = function(request, response) {
  response.writeHead(200);
  response.end("Hello World!");
};
var www = http.createServer(handleRequest);
www.listen(8080);
my Dockerfile:
FROM node:latest
WORKDIR /code
ADD code/ /code
EXPOSE 8080
CMD server.js
my Kubernetes pod configuration (pod-configuration.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: apiserver
spec:
  containers:
    - name: node
      image: myusername/nodetest:v1
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: api-server-code-files
          mountPath: /code
  volumes:
    - name: api-server-code-files
      hostPath:
        path: /home/<myuser>/Projects/nodetest/api-server/code
my folders are:
/home/<myuser>/Projects/nodetest/
  - pod-configuration.yaml
  - api-server/
    - Dockerfile
    - code/
      - server.js
When I run my Docker image without the hostPath volume it of course works, but the problem is that on each change I have to recreate my image, which is really impractical for development; that's why I need the hostPath volume.
Any idea why I am not able to mount my local directory?
Thanks for the help.
EDIT: Looks like the solution is to either use a privileged container, or to manually mount your home folder to allow the minikube VM to read from your hostPath -- https://github.com/boot2docker/boot2docker#virtualbox-guest-additions. (Credit to Eliel for figuring this out.)
It is absolutely possible to configure a hostPath volume with minikube - but there are a lot of quirks and there isn't very good support for this particular issue.
Try removing ADD code/ /code from your Dockerfile. Docker's "ADD" instruction is copying the code from your host machine into your container's /code directory. This is why rebuilding the image successfully updates your code.
When Kubernetes tries to mount the container's /code directory to the host path, it finds that this directory is already full of the code that was baked into the image. If you take this out of the build step, Kubernetes should be able to successfully mount the host path at runtime.
Also be sure to check the permissions of the code/ directory on your host machine.
My only other thought is related to mounting in the root directory. I had issues when mounting Kubernetes hostPath volumes to/from directories in the root directory (I assume this was permissions related). So, something else to try would be a mountPath like /var/www/html.
Here's an example of a functional hostPath volume:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  volumes:
    - name: example-volume
      hostPath:
        path: '/Users/example-user/code'
  containers:
    - name: example-container
      image: example-image
      volumeMounts:
        - mountPath: '/var/www/html'
          name: example-volume
Minikube now provides the minikube mount command, which works in all environments:
https://github.com/kubernetes/minikube/blob/master/docs/host_folder_mount.md
Tried on Mac:
$ minikube mount ~/stuff/out:/mnt1/out
Mounting /Users/macuser/stuff/out into /mnt1/out on the minikube VM
This daemon process needs to stay alive for the mount to still be accessible...
ufs starting
And in pod:
apiVersion: v1
kind: Pod
metadata:
  name: myServer
spec:
  containers:
    - name: myServer
      image: myImage
      volumeMounts:
        - mountPath: /mnt1/out
          name: volume
      # Just spin & wait forever
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
  volumes:
    - name: volume
      hostPath:
        path: /mnt1/out
Best practice would be to build the code into your image; you should not run an image with code just coming from the disk. Your Dockerfile should look more like:
FROM node:latest
COPY /code/server.js /code/server.js
EXPOSE 8080
CMD ["node", "/code/server.js"]
Then you run the image on Kubernetes without any volumes. You need to rebuild the image and update the pod every time you update the code.
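For illustration, that is essentially the question's pod-configuration.yaml with the hostPath volume removed (a sketch, reusing the image tag from the question):
apiVersion: v1
kind: Pod
metadata:
  name: apiserver
spec:
  containers:
    - name: node
      image: myusername/nodetest:v1   # rebuild and retag on every code change
      ports:
        - containerPort: 8080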
Also, I'm currently not aware that minikube allows for mounts between the VM it creates and the host you are running it on.
If you really want the extreme fast feedback cycle of changing code while the container is running, you might be able to use just Docker by itself with -v /path/to/host/code:/code without Kubernetes and then once you are ready build the image and deploy and test it on minikube. However, I'm not sure that would work if you're changing the main .js file of your node app.
