I am trying to set up a volume mount on Kubernetes.
Currently I have a Docker image which I run like:
docker run --mount type=bind,source="$(pwd)/<host_dir>",target=<docker_dir> container
To get this running on a Google Kubernetes Engine cluster, I have:
Created a Google Compute Engine disk
Created a PersistentVolume which refers to the disk:
kind: PersistentVolume
...
  namespace: default
  name: pvc
spec:
  claimRef:
    namespace: default
    name: pvc
  gcePersistentDisk:
    pdName: disk-name
    fsType: ext4
---
...
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: "storage"
  ...
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
Created a Pod with the volume mount:
kind: Pod
apiVersion: v1
metadata:
  name: k8s-pod
spec:
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: pvc
  containers:
    - name: image_name
      image: eu.gcr.io/container:latest
      volumeMounts:
        - mountPath: <docker_dir>
          name: dir
What I'm missing is where the binding between the host and container/pod directories takes place, and where I should declare that binding in my YAML files.
I would appreciate any help :)
You are on the right path here. In your Pod spec, the name under volumeMounts should match the name of one of the entries under volumes. So in your case,
volumes:
  - name: pvc
    persistentVolumeClaim:
      claimName: pvc
the volume name is pvc, so your volumeMount should be
volumeMounts:
  - mountPath: "/path/in/container"
    name: pvc
So, for example, to mount this volume at /mydata in your container, your Pod spec would look like
kind: Pod
apiVersion: v1
metadata:
  name: k8s-pod
spec:
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: pvc
  containers:
    - name: image_name
      image: eu.gcr.io/container:latest
      volumeMounts:
        - mountPath: "/mydata"
          name: pvc
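As a quick sanity check once the Pod is running (a sketch, using the Pod name and mount path above):
kubectl exec -it k8s-pod -- ls /mydata
Note that, unlike the docker run bind mount, no host directory appears anywhere in the YAML: the GCE persistent disk is attached to whichever node the Pod is scheduled on and mounted into the container at mountPath, so there is no host-to-container path pair to declare.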
I want to copy a text file to a pod on minikube, but I get a timeout error.
scp -r /Users/joe/Downloads/Archive/data.txt docker@192.168.49.2:/home/docker
I got the IP address (192.168.49.2) with:
minikube ip
Eventually I would like the file to end up on the PersistentVolume backing the PersistentVolumeClaim (that would be great!!)
The yaml for the PersistentVolume is:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: my-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The yaml for the PersistentVolumeClaim is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 512Mi
The yaml for the pod is:
kind: Pod
apiVersion: v1
metadata:
  name: my-pvc-pod
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
      volumeMounts:
        - mountPath: "/mnt/storage"
          name: my-storage
  volumes:
    - name: my-storage
      persistentVolumeClaim:
        claimName: my-pvc
As mentioned above, I would eventually like the file to end up on the PersistentVolume backing the PersistentVolumeClaim.
You can achieve that by mounting the host directory into the guest with the minikube mount command:
minikube mount <source directory>:<target directory>
where <source directory> is the host directory and <target directory> is the guest (minikube) directory.
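For example, with the host folder from the question and a made-up guest path:
minikube mount /Users/joe/Downloads/Archive:/mnt/archive
The command stays in the foreground, so keep that terminal open while the mount is needed.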
Then use that <target directory> to create a PV with hostPath:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "<target directory>"
Depending on the driver, some of them also have built-in host folder sharing; you can check them here.
If you need to mount only part of the volume, in your case a single file, you can use subPath to specify the part that must be mounted. This answer explains it well.
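As a rough sketch (assuming the data.txt from the question ends up at the root of the volume), the busybox Pod above would mount just that file like this:
volumeMounts:
  - mountPath: "/mnt/storage/data.txt"
    name: my-storage
    subPath: data.txt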
I am trying to deploy a Redis Sentinel deployment in Kubernetes. I have accomplished that, but I want to use ConfigMaps so we can change the IP address of the master in the sentinel.conf file. I started this, but Redis can't write to the config file because the mount point for ConfigMaps is read-only.
I was hoping to run an init container and copy the Redis conf to a different dir just in the pod, but the init container couldn't find the conf file.
What are my options? Init container? Something other than ConfigMap?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis-sentinel
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      hostNetwork: true
      containers:
        - name: redis-sentinel
          image: IP/redis-sentinel
          ports:
            - containerPort: 63790
            - containerPort: 26379
          volumeMounts:
            - mountPath: /redis-master-data
              name: data
            - mountPath: /usr/local/etc/redis/conf
              name: config
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: sentinel-redis-config
            items:
              - key: redis-config-sentinel
                path: sentinel.conf
Following @P Ekambaram's proposal, you can try this one:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis-sentinel
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      hostNetwork: true
      containers:
        - name: redis-sentinel
          image: redis:5.0.4
          ports:
            - containerPort: 63790
            - containerPort: 26379
          volumeMounts:
            - mountPath: /redis-master-data
              name: data
            - mountPath: /usr/local/etc/redis/conf
              name: config
      initContainers:
        - name: copy
          image: redis:5.0.4
          command: ["bash", "-c", "cp /redis-master/redis.conf /redis-master-data/"]
          volumeMounts:
            - mountPath: /redis-master
              name: config
            - mountPath: /redis-master-data
              name: data
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: example-redis-config
            items:
              - key: redis-config
                path: redis.conf
In this example, the initContainer copies the file from the ConfigMap into a writable emptyDir volume.
Note:
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the Pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each Container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
Create a startup script. In it, copy the ConfigMap file that is mounted in a volume to a writable location, then start the container process.
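A minimal sketch of that idea, reusing the ConfigMap from the question plus an emptyDir (the /redis-writable path and the writable volume name are made up for illustration):
containers:
  - name: redis-sentinel
    image: redis:5.0.4
    # copy the read-only ConfigMap file to a writable dir, then start Sentinel from there
    command: ["bash", "-c", "cp /usr/local/etc/redis/conf/sentinel.conf /redis-writable/ && exec redis-sentinel /redis-writable/sentinel.conf"]
    volumeMounts:
      - mountPath: /usr/local/etc/redis/conf
        name: config
      - mountPath: /redis-writable
        name: writable
volumes:
  - name: config
    configMap:
      name: sentinel-redis-config
      items:
        - key: redis-config-sentinel
          path: sentinel.conf
  - name: writable
    emptyDir: {}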
I'm working with Kubernetes in GCP and I'm having problems with volumes and persistent disks.
I'm using Directus 7 (a headless CMS), which saves most of its information in the database except for uploaded files; those go to the /var/www/html/public/uploads folder (tested locally with docker-compose and it works fine), and that folder is the one I'm trying to save on the persistent disk.
No error occurs, but when the Kubernetes Pod restarts I lose the uploaded images (they are not being saved on the disk).
This is my configuration:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: directus-pv
  namespace: default
spec:
  storageClassName: ""
  capacity:
    storage: 100G
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: directus-disk
    fsType: ext4
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: directus-pvc
  namespace: default
  labels:
    app: .....
spec:
  storageClassName: ""
  volumeName: directus-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100G
And in the deploy.yaml:
volumeMounts:
  - name: api-disk
    mountPath: /var/www/html/public/uploads
    readOnly: false
volumes:
  - name: api-disk
    persistentVolumeClaim:
      claimName: directus-pvc
Thanks for the help
Remove the namespace property from the pv and pvc manifests: PersistentVolumes are cluster-scoped resources, and for the PVC the default namespace is used anyway.
Remove the storageClassName property as well.
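For illustration, the manifests from the question with those properties removed would look roughly like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: directus-pv
spec:
  capacity:
    storage: 100G
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: directus-disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: directus-pvc
spec:
  volumeName: directus-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100G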
I presume that your manually provisioned PersistentVolume directus-pv is somehow being created with persistentVolumeReclaimPolicy=Recycle. That's the only reason I can see that could cause the data to be erased on each Pod restart.
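If that is the case, you can check the policy and switch it to Retain (using the PV name from the question):
kubectl get pv directus-pv -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
kubectl patch pv directus-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'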
I'm not able to reproduce your case with the provided manifest files,
but I tried the following test:
Create gcePersistentDisk
Create PersistentVolume
Create PersistentVolumeClaim
Create ReplicaSet (replicas=1) like this one
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: busybox-list-uploads
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: busybox-list-uploads
        version: "2"
    spec:
      containers:
        - image: busybox
          args: [/bin/sh, -c, 'sleep 9999']
          volumeMounts:
            - mountPath: /var/www/html/public/uploads
              name: api-disk
          name: busybox
      volumes:
        - name: api-disk
          persistentVolumeClaim:
            claimName: directus-pvc
Write some file into the mounted folder /var/www/html/public/uploads
Restart the Pod (= kill the Pod) by resizing the replicas to 0 and then back to 1
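For reference, that resize can be done with kubectl scale (using the ReplicaSet name from above):
kubectl scale replicaset busybox-list-uploads --replicas=0
kubectl scale replicaset busybox-list-uploads --replicas=1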
List the content of /var/www/html/public/uploads on the newly created Pod:
for i in busybox-list-uploads-dgfbc; do kubectl exec -it $i -- ls /var/www/html/public/uploads; done;
lost+found picture_from_busybox-list-uploads-ng4t6.png
As you can see, the output clearly shows that the data survives the Pod restart.
(You can verify the PV's reclaim policy with: kubectl get pv/directus-pv -o yaml)
I want to be able to mount an unknown number of config files in /etc/configs.
I have added some files to the ConfigMap using:
kubectl create configmap etc-configs --from-file=/tmp/etc-config
The number of files and the file names will never be known in advance, and I would like to be able to recreate the ConfigMap and have the folder in the Kubernetes container updated after the sync interval.
I have tried to mount this, but I'm not able to: the folder is always empty even though I have data in the ConfigMap.
bofh$ kubectl describe configmap etc-configs
Name: etc-configs
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
file1.conf:
----
{
... truncated ...
}
file2.conf:
----
{
... truncated ...
}
file3.conf:
----
{
... truncated ...
}
Events: <none>
This is what I'm using in the container's volumeMounts:
- name: etc-configs
  mountPath: /etc/configs
And this is in volumes:
- name: etc-configs
  configMap:
    name: etc-configs
I can mount individual items but not an entire directory.
Any suggestions about how to solve this?
You can mount the ConfigMap as a special volume into your container.
In this case, each key in the ConfigMap shows up as a file in the mount folder, and each file's content is the corresponding value.
From the Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      ...
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
I'm feeling really stupid now.
Sorry, my fault.
The Docker container did not start, so I was manually starting it using docker run -it --entrypoint='/bin/bash' and I could not see any files from the ConfigMap.
That does not work, since Docker doesn't know anything about my deployment until Kubernetes starts it.
The Docker image was failing and the Kubernetes config was correct all the time.
I was debugging it wrong.
With your config, you're going to mount each file listed in your ConfigMap.
If you need to mount all the files in a folder, you shouldn't use a ConfigMap but a PersistentVolume and PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume-jenkins
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/pv-jenkins"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-jenkins
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
In your deployment.yml:
volumeMounts:
  - name: jenkins-persistent-storage
    mountPath: /data
volumes:
  - name: jenkins-persistent-storage
    persistentVolumeClaim:
      claimName: pv-claim-jenkins
You can also use the following:
kubectl create configmap my-config --from-file=/etc/configs
to create the ConfigMap with all the files in that folder.
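To recreate it later with updated files, without deleting it first, a common pattern is this one (the flag is --dry-run on older kubectl versions, --dry-run=client on newer ones):
kubectl create configmap my-config --from-file=/etc/configs --dry-run=client -o yaml | kubectl apply -f -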
Hope this helps.
I found this example for the Kubernetes emptyDir volume:
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: /srv/www
          name: www-data
          readOnly: true
    - name: git-monitor
      image: kubernetes/git-monitor
      env:
        - name: GIT_REPO
          value: http://github.com/some/repo.git
      volumeMounts:
        - mountPath: /data
          name: www-data
  volumes:
    - name: www-data
      emptyDir: {}
I want to share a volume mount between 2 pods. I am creating these pods using 2 different Replication Controllers. The replication controllers look like this:
Replication Controller 1:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-worker
  labels:
    name: node-worker
spec:
  replicas: 1
  selector:
    name: node-worker
  template:
    metadata:
      labels:
        name: node-worker
    spec:
      containers:
        - name: node-worker
          image: image/node-worker
          volumeMounts:
            - mountPath: /mnt/test
              name: deployment-volume
      volumes:
        - name: deployment-volume
          emptyDir: {}
Replication Controller 2:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-manager
  labels:
    name: node-manager
spec:
  replicas: 1
  selector:
    name: node-manager
  template:
    metadata:
      labels:
        name: node-manager
    spec:
      containers:
        - name: node-manager
          image: image/node-manager
          volumeMounts:
            - mountPath: /mnt/test
              name: deployment-volume
      volumes:
        - name: deployment-volume
          emptyDir: {}
Can a Kubernetes emptyDir volume be used for this scenario?
EmptyDir volumes are inherently bound to the lifecycle of a single pod and can't be shared amongst pods in replication controllers or otherwise. If you want to share volumes amongst pods, the best choices right now are NFS or gluster, in a persistent volume. See an example here: https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/README.md
Why do you want to share the volume mount between pods? This will not work reliably because you aren't guaranteed to have a 1:1 mapping between where pods in replication controller 1 and replication controller 2 are scheduled in your cluster.
If you want to share local storage between containers, you should put both of the containers into the same pod, and have each container mount the emptyDir volume.
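A minimal sketch of that approach, with both containers from the question in a single Pod sharing one emptyDir (the Pod name and volume name here are made up):
apiVersion: v1
kind: Pod
metadata:
  name: node-combined
spec:
  containers:
    - name: node-worker
      image: image/node-worker
      volumeMounts:
        - mountPath: /mnt/test
          name: shared-data
    - name: node-manager
      image: image/node-manager
      volumeMounts:
        - mountPath: /mnt/test
          name: shared-data
  volumes:
    - name: shared-data
      emptyDir: {}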
You require three things to get this working. More info can be found here and some documentation here, but it's a little confusing at first.
This example mounts an NFS volume.
1. Create a PersistentVolume pointing to your NFS server
file: mynfssharename-pv.yaml
(update server to point to your server)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfssharename
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: yourservernotmine.yourcompany.com
    path: "/yournfspath"
kubectl create -f mynfssharename-pv.yaml
2. Create a PersistentVolumeClaim that points to the PersistentVolume mynfssharename
file: mynfssharename-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mynfssharename
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
kubectl create -f mynfssharename-pvc.yaml
3. Add the claim to your ReplicationController or Deployment
spec:
  containers:
    - name: sample-pipeline
      image: yourimage
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
          name: http
      volumeMounts:
        # name must match the volume name below
        - name: mynfssharename
          mountPath: "/mnt"
  volumes:
    - name: mynfssharename
      persistentVolumeClaim:
        claimName: mynfssharename