I have a pv like below
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  azureFile:
    secretName: azure-secret
    shareName: cloudshare
    readOnly: false
and a pvc like below
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 2Gi
In the deployment I have the following:
volumes:
- name: my-storage
  persistentVolumeClaim:
    claimName: azurefile
volumeMounts:
- name: my-storage
  mountPath: "/home/myapp/newapp/"
My understanding is that under the path /home/myapp/newapp/ in the containers, the Azure file share cloudshare's content will be accessible, so whatever I have in cloudshare will be visible there. Does the PVC or PV create folders under cloudshare? The reason I ask is that I have a WORKDIR in my Docker image which sits inside the same mount path:
WORKDIR /home/myapp/newapp/reta-app/
For some reason the reta-app folder is getting created inside cloudshare. Is this normal behaviour, or am I doing something wrong?
Does pvc or pv create folders under the cloudshare?
No. A Kubernetes PersistentVolume is just some storage somewhere, and a PersistentVolumeClaim is a way of referencing a PV (which may not immediately exist). Kubernetes does absolutely no management of the content in a persistent volume; it will not create directories on startup, copy content from the image into a volume, or anything else. The reta-app folder you are seeing is most likely created by the container runtime rather than by Kubernetes: when a container starts with a WORKDIR that does not exist, the runtime creates it, and because the file share is mounted over /home/myapp/newapp/, that directory ends up inside cloudshare.
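If you do want the image's content to show up in the share, you have to copy it there yourself, for example from an initContainer. A minimal sketch, not part of the original manifests: the image name my-app-image and the /seed mount path are assumptions, while the claim name azurefile is reused from the PVC above.

spec:
  initContainers:
  - name: seed-share
    image: my-app-image                 # hypothetical image that already contains the files
    command: ["sh", "-c", "cp -a /home/myapp/newapp/. /seed/"]
    volumeMounts:
    - name: my-storage
      mountPath: /seed                  # mount the share at a path the image content does not shadow
  containers:
  - name: app
    image: my-app-image                 # hypothetical
    volumeMounts:
    - name: my-storage
      mountPath: /home/myapp/newapp/
  volumes:
  - name: my-storage
    persistentVolumeClaim:
      claimName: azurefile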
Related
I want to copy a text file to a pod on minikube, but I get a timeout error:
scp -r /Users/joe/Downloads/Archive/data.txt docker@192.168.49.2:/home/docker
I got the IP address (192.168.49.2) with:
minikube ip
Eventually I would like the file to appear on the PersistentVolumeClaim/PersistentVolume (that would be great!).
The yaml for the PersistentVolume is:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: my-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The yaml for the PersistentVolumeClaim is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 512Mi
The yaml for the pod is:
kind: Pod
apiVersion: v1
metadata:
  name: my-pvc-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
    volumeMounts:
    - mountPath: "/mnt/storage"
      name: my-storage
  volumes:
  - name: my-storage
    persistentVolumeClaim:
      claimName: my-pvc
Eventually I would like the file to appear on the PersistentVolumeClaim/PersistentVolume.
You can achieve that by mounting the host directory into the guest using the minikube mount command:
minikube mount <source directory>:<target directory>
where <source directory> is the host directory and <target directory> is the guest/minikube directory.
Then use that <target directory> to create a PV with hostPath:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "<target directory>"
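For example, reusing the host folder from the question (the /mnt/shared target directory is an arbitrary choice, not something minikube creates for you):

minikube mount /Users/joe/Downloads/Archive:/mnt/shared

Note that minikube mount keeps running in the foreground for as long as the share is needed. With the mount active, point the PV's hostPath at /mnt/shared and data.txt becomes visible in any pod that mounts the bound PVC.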
Depending on the driver, some of them also have built-in host folder sharing; you can check them here.
If you need to mount only part of the volume, in your case a single file, you can use subPath to specify the part that must be mounted. This answer explains it well, and a sketch follows below.
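A minimal sketch of the subPath approach, reusing the container, volume and claim names from the question; it assumes data.txt has already been copied into the directory backing the PV:

  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
    volumeMounts:
    - name: my-storage
      mountPath: "/mnt/storage/data.txt"   # only this file appears in the container
      subPath: data.txt                    # path of the file relative to the volume root
  volumes:
  - name: my-storage
    persistentVolumeClaim:
      claimName: my-pvc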
I am running macOS Catalina using the Docker application with the Kubernetes option turned on. I create a PersistentVolume with the following yaml and command.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.250
    path: "/volume1/docker"
kubectl apply -f pv.yml
This creates a PersistentVolume named pv-nfs-data. Next, I create a PersistentVolumeClaim with the following yaml and command.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
kubectl apply -f pvc.yml
This creates a PersistentVolumeClaim named pvc-nfs-data, however it doesn't bind to the available PersistentVolume (pv-nfs-data). Instead it creates a new one and binds to that. How do I make the PersistentVolumeClaim bind to the available PersistentVolume?
The Docker for Mac default storage class is the dynamic provisioning type, like you would get on AKS/GKE, where it allocates the physical storage as well.
→ kubectl get StorageClass
NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   191d
For a PVC to use an existing PV, you can disable the storage class and specify in the PV which PVC can use it with a claimRef.
Claim Ref
The PV includes a claimRef for the PVC you will create
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  claimRef:
    namespace: insert-your-namespace-here
    name: pv-nfs-data-claim
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.250
    path: "/volume1/docker"
The PVC sets storageClassName to ''
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-nfs-data-claim
  namespace: insert-your-namespace-here
spec:
  storageClassName: ''
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
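After applying both manifests you can confirm the binding (the namespace placeholder still needs to be filled in):

kubectl get pv pv-nfs-data
kubectl get pvc pv-nfs-data-claim -n insert-your-namespace-here

Both should report STATUS Bound, and the PVC's VOLUME column should show pv-nfs-data rather than a dynamically provisioned volume.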
Dynamic
You can go the dynamic route with NFS by adding an NFS dynamic provisioner, creating a storage class for it, and letting Kubernetes work the rest out. More recent versions of Kubernetes (1.13+) can use the CSI NFS driver.
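As a rough sketch of the dynamic route, assuming the kubernetes-csi/csi-driver-nfs is installed in the cluster; the class name nfs-csi is arbitrary, while server and share reuse the values from the question:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io        # provisioner name used by csi-driver-nfs
parameters:
  server: 192.168.1.250
  share: /volume1/docker
reclaimPolicy: Retain
volumeBindingMode: Immediate

PVCs that reference storageClassName: nfs-csi should then get storage on that share provisioned for them automatically.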
I am running a docker image that has certain configuration files within it. I need to persist/mount the same folder to disk, as new files will get added later on. When I use a standard volume mount in Kubernetes, it mounts an empty directory without the initial configuration files. How do I make sure my initial files are copied to the volume while mounting?
        - mountPath: /tmp
          name: my-vol
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: my-vol
        persistentVolumeClaim:
          claimName: wso2-disk2
A possible solution could be to use the node's storage mounted into the containers (the easiest way), or a DFS solution like NFS, GlusterFS, and so on.
Another, recommended way to achieve what you need is to use persistent volumes to share the same files between your containers.
Assuming you have a Kubernetes cluster that has only one Node, and you want to share the path /mnt/data of your node with your pods (Source):
Create a PersistentVolume:
A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Create a PersistentVolumeClaim:
Pods use PersistentVolumeClaims to request physical storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Look at the PersistentVolumeClaim:
kubectl get pvc task-pv-claim
The output shows that the PersistentVolumeClaim is bound to your PersistentVolume, task-pv-volume.
NAME            STATUS   VOLUME           CAPACITY   ACCESSMODES   STORAGECLASS   AGE
task-pv-claim   Bound    task-pv-volume   10Gi       RWO           manual         30s
Create a deployment with 2 replicas for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: task-pv-claim
      containers:
      - name: task-pv-container
        image: nginx
        ports:
        - containerPort: 80
          name: "http-server"
        volumeMounts:
        - mountPath: "/mnt/data"
          name: task-pv-storage
Now you can check that inside both containers the path /mnt/data contains the same files, for example with the quick check below.
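A minimal sketch of that check; the app=nginx label comes from the Deployment above, and the pod names are generated, so we loop over whatever exists:

for pod in $(kubectl get pods -l app=nginx -o name); do
  echo "== $pod =="
  kubectl exec "$pod" -- ls /mnt/data
done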
If you have a cluster with more than one node, I recommend looking at the other types of persistent volumes or using a DFS.
References:
Configure persistent volumes
Persistent volumes
Volume Types
The suggested way to provide configuration to your pod is by creating a ConfigMap for your configuration files and mounting it in your pod using volumes. This guide (https://kubernetes.io/docs/concepts/storage/volumes/#configmap) describes how to do that.
Another way is to create a persistent volume and persistent volume claim in your cluster, copy your configuration files into that path, and mount the persistent volume in your pod.
You can also copy your configuration onto one of the nodes in your cluster and mount that path using hostPath, but this requires your pod to run on the same node, since it looks for the path on that node (not a recommended approach); a rough sketch of what that looks like is shown below.
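A minimal sketch of that hostPath variant; everything except the pattern itself is an assumption (the node name, image and paths are placeholders for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: config-on-node
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1       # hypothetical node that holds the copied files
  containers:
  - name: app
    image: your-image                    # hypothetical
    volumeMounts:
    - name: host-config
      mountPath: /opt/app/conf           # hypothetical mount path inside the container
  volumes:
  - name: host-config
    hostPath:
      path: /data/configs                # hypothetical folder on the node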
Create a ConfigMap of the folder you would like to mount; the following creates a ConfigMap consisting of all the files in your-folder:
kubectl create configmap your-config --from-file=your-folder/
Then mount this as a volume and you will have the initial files in your folder. Note that you will need to mount it using subPath, since you don't want it to overwrite everything else in the directory.
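A minimal sketch of that mount; the ConfigMap name your-config comes from the command above, while the file name app.properties, the image and the mount path are assumptions for illustration:

  containers:
  - name: app
    image: your-image                           # hypothetical
    volumeMounts:
    - name: initial-config
      mountPath: /opt/app/conf/app.properties   # hypothetical target path
      subPath: app.properties                   # one key of the ConfigMap, mounted as a single file
  volumes:
  - name: initial-config
    configMap:
      name: your-config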
I'm working in Kubernetes in GCP and I'm having problems with volumes and persistent disks.
I'm using Directus 7 (a headless CMS), which saves most of its information in the database except the uploaded files; those live in the /var/www/html/public/uploads folder (tested locally with docker-compose and it works fine), and that folder is the one I'm trying to save on the persistent disk.
No error occurs, but when the Kubernetes Pod restarts I lose the uploaded images (they are not being saved on the disk).
This is my configuration:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: directus-pv
  namespace: default
spec:
  storageClassName: ""
  capacity:
    storage: 100G
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: directus-disk
    fsType: ext4
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: directus-pvc
  namespace: default
  labels:
    app: .....
spec:
  storageClassName: ""
  volumeName: directus-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100G
And in the deploy.yaml:
volumeMounts:
- name: api-disk
  mountPath: /var/www/html/public/uploads
  readOnly: false
volumes:
- name: api-disk
  persistentVolumeClaim:
    claimName: directus-pvc
Thanks for the help
Remove the namespace property from the pv and pvc manifests; they are shared resources in the cluster.
Remove the storage class property as well.
I presume that your manually provisioned persistent volume directus-pv is somehow being created with persistentVolumeReclaimPolicy=Recycle. That's the only possible reason that could cause the data to be erased on each POD restart.
I'm not able to reproduce your case with the provided manifest files, but I tried the following test:
Create gcePersistentDisk
Create PersistentVolume
Create PersistentVolumeClaim
Create ReplicaSet (replicas=1) like this one
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: busybox-list-uploads
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: busybox-list-uploads
        version: "2"
    spec:
      containers:
      - name: busybox
        image: busybox
        args: [/bin/sh, -c, 'sleep 9999']
        volumeMounts:
        - mountPath: /var/www/html/public/uploads
          name: api-disk
      volumes:
      - name: api-disk
        persistentVolumeClaim:
          claimName: directus-pvc
Write some file into the mounted folder /var/www/html/public/uploads
Restart the POD (i.e. kill it) by resizing the replicas to 0 and then back to 1
List the content of /var/www/html/public/uploads on the newly created POD
for i in busybox-list-uploads-dgfbc; do kubectl exec -it $i -- ls /var/www/html/public/uploads; done;
lost+found picture_from_busybox-list-uploads-ng4t6.png
As you can see, the output clearly shows that the data survives the POD restart.
* You can verify the reclaim policy with: kubectl get pv/directus-pv -o yaml
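If the reclaim policy does turn out to be the problem, it can be changed in place without recreating the PV (the PV name comes from the question):

kubectl patch pv directus-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'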
I want to be able to mount an unknown number of config files in /etc/configs
I have added some files to the configmap using:
kubectl create configmap etc-configs --from-file=/tmp/etc-config
The number of files and the file names will never be known in advance, and I would like to be able to recreate the ConfigMap so that the folder in the Kubernetes container is updated after the sync interval.
I have tried to mount this but I'm not able to do so; the folder is always empty, even though I have data in the ConfigMap.
bofh$ kubectl describe configmap etc-configs
Name: etc-configs
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
file1.conf:
----
{
... trunkated ...
}
file2.conf:
----
{
... trunkated ...
}
file3.conf:
----
{
... trunkated ...
}
Events: <none>
I'm using this one in the container volumeMounts:
- name: etc-configs
  mountPath: /etc/configs
And this is the volumes:
- name: etc-configs
  configMap:
    name: etc-configs
I can mount individual items but not an entire directory.
Any suggestions about how to solve this?
You can mount the ConfigMap as a special volume into your container.
In this case, the mount folder will show each of the keys as a file, and each file will have the corresponding map value as its content.
From the Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    ...
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: special-config
I'm feeling really stupid now. Sorry, my fault.
The Docker container did not start, so I was manually starting it using docker run -it --entrypoint='/bin/bash' and could not see any files from the ConfigMap.
That does not work, since Docker doesn't know anything about my deployment until Kubernetes starts it.
The Docker image was failing and the Kubernetes config was correct all the time.
I was debugging it the wrong way.
With your config, you're going to mount each file listed in your ConfigMap.
If you need to mount all the files in a folder, you shouldn't use a ConfigMap but a PersistentVolume and PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume-jenkins
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/pv-jenkins"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-jenkins
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
In your deployment.yml:
volumeMounts:
- name: jenkins-persistent-storage
  mountPath: /data
volumes:
- name: jenkins-persistent-storage
  persistentVolumeClaim:
    claimName: pv-claim-jenkins
You can also use the following:
kubectl create configmap my-config --from-file=/etc/configs
to create the config map with all files in that folder.
Hope this helps.