I'm trying to create a new Kubernetes deployment that will allow me to persist a pod's state when it is restarted or shut down. Just for some background, the Kubernetes instance is a managed Amazon EKS cluster, and I am trying to incorporate an Amazon EFS-backed Persistent Volume that is mounted into the pod.
Unfortunately, as I have it now, the PV mounts to /etc/ as desired, but the contents are nearly empty, except for some files that were modified during boot.
The deployment YAML looks like this:
kind: Deployment
apiVersion: apps/v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testpod
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: testpod
    spec:
      volumes:
        - name: efs
          persistentVolumeClaim:
            claimName: efs
      containers:
        - name: testpod
          image: 'xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/testpod:latest'
          args:
            - /bin/init
          ports:
            - containerPort: 443
              protocol: TCP
          resources: {}
          volumeMounts:
            - name: efs
              mountPath: /etc
              subPath: etc
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              add:
                - ALL
      restartPolicy: Always
      terminationGracePeriodSeconds: 60
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
Any ideas of what could be going wrong? I would expect /etc/ to be populated with the contents of the image.
Edit:
This seems to be working fine in Docker by using the same image, creating a volume with docker volume create <name> and then mounting it as -v <name>:/etc.
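Concretely, the Docker test was along these lines (same image as the deployment above; the volume name is just a placeholder):
docker volume create etc-data
# Docker pre-populates the new, empty named volume with the image's /etc
docker run -v etc-data:/etc xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/testpod:latest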
Kubernetes does not have the Docker feature that populates volumes based on the contents of the image. If you create a new volume (whether an emptyDir volume or something based on cloud storage like AWS EBS or EFS) it will start off empty, and hide whatever was in the container.
As such, you can’t mount a volume over large parts of the container; it won’t work to mount a volume over your application’s source tree, or over /etc as you show. For files in /etc in particular, a better approach would be to use a Kubernetes ConfigMap to hold specific files you want to add to that directory. (Store your config files in source control and add them as part of the deployment sequence; don’t try to persist untracked modifications to deployed files.)
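A minimal sketch of that approach (the ConfigMap name and file name here are just examples; mounting a single key with subPath adds that one file without hiding the rest of /etc):
apiVersion: v1
kind: ConfigMap
metadata:
  name: testpod-etc
data:
  app.conf: |
    # contents of the file you want to appear in /etc
---
# in the Deployment's pod spec:
      volumes:
        - name: testpod-etc
          configMap:
            name: testpod-etc
      containers:
        - name: testpod
          volumeMounts:
            - name: testpod-etc
              mountPath: /etc/app.conf
              subPath: app.conf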
My guess is that mounts in containers work exactly the same way as mounts in an operating system: if you mount something at /etc you simply overwrite (a better word is 'cover') whatever was there before, so if you mount an empty EFS volume you get an empty folder.
I tried what you tried in Docker and, to my surprise, it works the way you describe. That's likely because Docker volumes are technologically something different from Kubernetes volume claims (especially ones backed by EFS). This explains it: Docker mount to folder overriding content
tl;dr: if the Docker volume is empty, the image's files are mirrored into it.
I personally don't think you can achieve what you're trying to do with k8s and EFS.
I think you might be interested in "nsfdsuds": it establishes an overlayfs for a Kubernetes container in which the writable, top layer of the overlayfs can be on a PersistentVolume of your choice.
https://github.com/Sha0/nsfdsuds
In my k8s pod, I want to give a container access to an S3 bucket, mounted with rclone.
Now, the container running rclone needs to run with --privileged, which is a problem for me, since my main-container will run user code which I have no control over and which could potentially be harmful to my Pod.
The solution I’m trying now is to have a sidecar-container just for the task of running rclone, mounting S3 in a /shared_storage folder, and sharing this folder with the main-container through a Volume shared-storage. This is a simplified pod.yml file:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
    - name: shared-storage
      emptyDir: {}
  containers:
    - name: main-container
      image: busybox
      command: ["sh", "-c", "sleep 1h"]
      volumeMounts:
        - name: shared-storage
          mountPath: /shared_storage
          # mountPropagation: HostToContainer
    - name: sidecar-container
      image: mycustomsidecarimage
      securityContext:
        privileged: true
      command: ["/bin/bash"]
      args: ["-c", "python mount_source.py"]
      env:
        - name: credentials
          value: XXXXXXXXXXX
      volumeMounts:
        - name: shared-storage
          mountPath: /shared_storage
          mountPropagation: Bidirectional
The pod runs fine and from sidecar-container I can read, create and delete files from my S3 bucket.
But from main-container no files are listed inside shared_storage. I can create files (if I set readOnly: false), but those do not appear in sidecar-container.
If I don't run the rclone mount to that folder, the containers are able to share files again. So that tells me it is something about the rclone process not letting main-container read from it.
In mount_source.py I am running rclone with --allow-other and I have edited /etc/fuse.conf as suggested here.
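Roughly, the mount command it runs looks like this (the remote and bucket names are placeholders, not my exact configuration):
rclone mount my-remote:my-bucket /shared_storage --allow-other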
Does anyone have an idea on how to solve this problem?
I've managed to make it work by using:
mountPropagation: HostToContainer on main-container
mountPropagation: Bidirectional on sidecar-container
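In the pod.yml above, the relevant parts end up looking like this (names match the earlier manifest):
  containers:
    - name: main-container
      volumeMounts:
        - name: shared-storage
          mountPath: /shared_storage
          mountPropagation: HostToContainer
          readOnly: true              # optional: keep the user code read-only
    - name: sidecar-container
      securityContext:
        privileged: true              # only the sidecar needs privileges for the FUSE mount
      volumeMounts:
        - name: shared-storage
          mountPath: /shared_storage
          mountPropagation: Bidirectional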
I can control read/write permissions to specific mounts using readOnly: true/false on main-container. This is of course also possible to set within rclone mount command.
Now the main-container does not need to run in privileged mode and my users code can have access to their s3 buckets through those mount points!
Interestingly, it doesn't seem to work if I set volumeMount:mountPath to be a sub-folder of the rclone-mounted path. So if I want to grant main-container different read/write permissions to different subpaths, I have to create a separate rclone mount for each sub-folder.
I'm not 100% sure whether there are any extra security concerns with that approach though.
I am learning about Volumes in Kubernetes.
I understand the concept of a Volume and the types of volumes.
But I am confused about the mountPath property. Here is my YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-alpine-volume
spec:
  containers:
    - name: nginx
      image: nginx:alpine
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
          readOnly: true
      resources: {}
    - name: html-updater
      image: alpine
      command: ["/bin/sh", "-c"]
      args:
        - while true; do date >> /mohit/index.html; sleep 10; done
      resources: {}
      volumeMounts:
        - name: html
          mountPath: /mohit
  volumes:
    - name: html
      emptyDir: {}
Question: What is the use of the mountPath property? Here, I am using one pod with two containers. Both containers have different mountPath values.
Update:
Consider the mount path as the directory where you are attaching or mounting the volume, while your actual volume is the emptyDir.
The basic idea is that the two containers have different mount paths because each container needs to use a different folder.
The volume itself is a single one, named html, so from that one volume the two containers are pointing at (using) different folders, and each container manages its own files at its mount point (folder).
So the mount path is the point, or directory, where your container will be managing files.
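Assuming the pod above is running, you can see the sharing in action by reading the file the html-updater writes through the nginx container's mount:
kubectl exec nginx-alpine-volume -c nginx -- cat /usr/share/nginx/html/index.html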
Empty dir: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
Read more at: https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
If you look at this example: https://github.com/harsh4870/Kubernetes-wordpress-php-fpm-nginx/blob/master/wordpress-deployment.yaml
it has the same two containers with a mount path and an emptyDir volume.
What I am doing there is attaching the Nginx configuration file to the container, so Nginx will use that configuration file, which I am mounting into the container from outside.
My config file is stored inside a ConfigMap or Secret.
I'm creating an application that is using helm (v3.3.0) + k3s. A program in a container uses different configuration files. As of now there are just a few config files (that I added manually before building the image), but I'd like to add the possibility to add them dynamically while the container is running and not lose them once the container/pod is dead. In Docker I'd do that by exposing a folder like this:
docker run [image] -v /host/path:/container/path
Is there an equivalent for helm?
If not, how would you suggest solving this issue without giving up helm/k3s?
In Kubernetes (Helm is just a tool on top of it) you need to do two things to mount a host path inside a container:
spec:
  volumes:
    # 1. Declare a 'hostPath' volume under pod's 'volumes' key:
    - name: name-me
      hostPath:
        path: /path/on/host
  containers:
    - name: foo
      image: bar
      # 2. Mount the declared volume inside container using volume name
      volumeMounts:
        - name: name-me
          mountPath: /path/in/container
There are lots of other volume types and examples in the Kubernetes documentation.
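Since the question mentions Helm specifically: in a chart you would typically put the host path behind a value so it can be set per environment. A minimal sketch (the value name hostConfigPath is just an example):
# values.yaml
hostConfigPath: /path/on/host

# templates/deployment.yaml (inside the pod spec)
      volumes:
        - name: name-me
          hostPath:
            path: {{ .Values.hostConfigPath }}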
Kubernetes has a dedicated construct for holding configuration files, ConfigMaps. Helm in turn has support for Accessing Files Inside Templates which can help you copy them into ConfigMap objects. A minimal setup here would look like:
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  config.ini: |
{{ .Files.Get "config.ini" | indent 4 }}

# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata: { ... }
spec:
  template:
    spec:
      volumes:
        - name: config-data
          configMap:
            name: my-config # matches ConfigMap metadata: { name: }
      containers:
        - volumeMounts:
            - name: config-data # matches volume name: in this file
              mountPath: /container/path
You can use Helm's templating constructs in various ways here: to dynamically construct the contents of the ConfigMap, to set an environment variable saying which file to use, and so on.
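For instance, a sketch that pulls every file from a config/ directory in the chart into the ConfigMap (the directory name is an assumption):
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
{{ (.Files.Glob "config/*").AsConfig | indent 2 }}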
Do not use hostPath volumes here. Since Kubernetes is designed as a clustered environment, you do not have much control over which node a given pod will run on; you would have to copy these config files to every node in the cluster and try to update them all when a file changed. That's a huge maintenance problem, especially if you don't have direct filesystem access to the nodes.
I am trying to run a Factorio game server on Kubernetes (hosted on GKE).
I have set up a StatefulSet with a Persistent Volume Claim and mounted it in the game server's save directory.
I would like to upload a save file from my local computer to this Persistent Volume Claim so I can access the save on the game server.
What would be the best way to upload a file to this Persistent Volume Claim?
I have thought of two ways, but I'm not sure which is best or if either is a good idea:
Restore a disk snapshot with the files I want to the GCP disk which backs this Persistent Volume Claim
Mount the Persistent Volume Claim on an FTP container, FTP the files up, and then mount it on the game container
It turns out there is a much simpler way: The kubectl cp command.
This command lets you copy data from your computer to a container running on your cluster.
In my case I ran:
kubectl cp ~/.factorio/saves/k8s-test.zip factorio/factorio-0:/factorio/saves/
This copied the k8s-test.zip file on my computer to /factorio/saves/k8s-test.zip in a container running on my cluster.
See kubectl cp -h for more detailed usage information and examples.
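The same command works in the other direction as well; for example, to pull a save back down from the pod (paths mirror the example above):
kubectl cp factorio/factorio-0:/factorio/saves/k8s-test.zip ~/.factorio/saves/k8s-test.zip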
You can create a data folder on your Google Cloud VM:
gcloud compute ssh <your-instance> --zone=<your-zone>
mkdir data
Then create PersistentVolume:
kubectl create -f hostpath-pv.yml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-local
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/<user-name>/data"
Create PersistentVolumeClaim:
kubectl create -f hostpath-pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hostpath-pvc
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      type: local
Then copy your file to the VM:
gcloud compute scp <your-file> <your-instance>:~ --zone=<your-zone>
Finally, mount this PersistentVolumeClaim in your pod:
...
    volumeMounts:
      - name: hostpath-pvc
        mountPath: <your-path>
        subPath: hostpath-pvc
  volumes:
    - name: hostpath-pvc
      persistentVolumeClaim:
        claimName: hostpath-pvc
And copy the file into the data folder on the VM (the host directory behind the subPath):
gcloud compute scp <your-file> <your-instance>:/home/<user-name>/data/hostpath-pvc --zone=<your-zone>
You can just use Google Cloud Storage (https://cloud.google.com/storage/) since you're looking at serving a few files.
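For example (the bucket name is just a placeholder):
gsutil mb gs://my-factorio-saves
gsutil cp ~/.factorio/saves/k8s-test.zip gs://my-factorio-saves/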
The other option is to use PersistentVolumeClaims. This will work better if you're not updating the files frequently, because you will need to detach the disk from the Pods (so you need to delete the Pods) while doing this.
You can create a GCE persistent disk, attach it to a GCE VM, put files on it, then delete the VM and bring the PD to Kubernetes as a PersistentVolumeClaim. There's documentation on how to do that: https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#using_preexsiting_persistent_disks_as_persistentvolumes
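A rough sketch of the gcloud side (disk, VM, and zone names are placeholders):
gcloud compute disks create my-data-disk --size=10GB --zone=us-central1-a
gcloud compute instances attach-disk my-vm --disk=my-data-disk --zone=us-central1-a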
I've gone through the steps to create a persistent disk in Google Compute Engine and attach it to a running VM instance. I've also created a Docker image with a VOLUME directive. It runs fine locally; in the docker run command, I can pass a -v option to mount a host directory as the volume. I thought there would be a similar option in kubectl, but I don't see one. How can I mount my persistent disk as the Docker volume?
In your pod spec, you may specify a Kubernetes gcePersistentDisk volume (the spec.volumes field) and where to mount that volume into containers (the spec.containers.volumeMounts field). Here's an example:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: gcr.io/google_containers/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      # This GCE PD must already exist.
      gcePersistentDisk:
        pdName: my-data-disk
        fsType: ext4
Read more about Kubernetes volumes: http://kubernetes.io/docs/user-guide/volumes