Adding hostPath to a Kubernetes StatefulSet

In Kubernetes, is it possible to add hostPath storage to a StatefulSet? If so, can someone help me with an example?

Yes, but it is really only suitable for testing purposes.
First you need to create as many PersistentVolumes as you need:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: hp-pv-001
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/tmp/data01"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: hp-pv-002
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/tmp/data02"
...
Afterwards, add this volumeClaimTemplates section to your StatefulSet:
volumeClaimTemplates:
- metadata:
    name: my-hostpath-volume
  spec:
    storageClassName: manual
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 5Gi
    selector:
      matchLabels:
        type: local
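For context, a minimal sketch of how that section sits inside a StatefulSet; the name web, the nginx image and the /data mount path are placeholders, not part of the original answer:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                    # placeholder name
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx           # placeholder image
        volumeMounts:
        - name: my-hostpath-volume
          mountPath: /data     # placeholder mount path
  volumeClaimTemplates:
  - metadata:
      name: my-hostpath-volume
    spec:
      storageClassName: manual
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          type: local
Each replica gets its own claim (my-hostpath-volume-web-0, my-hostpath-volume-web-1, ...), which is why you need one pre-created PV per replica.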
Another solution is to use a hostPath dynamic provisioner. You do not have to create the PVs in advance, but this remains a "proof-of-concept solution" as well, and you will have to build and deploy the provisioner in your cluster.

A hostPath volume for a StatefulSet should only be used in a single-node cluster, e.g. for development. Rescheduling of the pod will not work.
Instead, consider using a Local Persistent Volume for this kind of use case.
The biggest difference is that the Kubernetes scheduler understands which node a Local Persistent Volume belongs to. With hostPath volumes, a pod referencing a hostPath volume may be moved by the scheduler to a different node, resulting in data loss. With Local Persistent Volumes, the Kubernetes scheduler ensures that a pod using a Local Persistent Volume is always scheduled to the same node.
Consider using the local static provisioner for this; its Getting Started guide has instructions for how to use it in different environments.
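For reference, a Local Persistent Volume looks roughly like this; the node name my-node, the path /mnt/disks/ssd1 and the class name local-storage are placeholders (the nodeAffinity section is what lets the scheduler pin pods to the right node):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1          # placeholder path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node                # placeholder node name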

Related

Persistent volume claim - mount path

I have a pv like below
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  storageClassName: azurefile
  azureFile:
    secretName: azure-secret
    shareName: cloudshare
    readOnly: false
and a pvc like below
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 2Gi
On the deployment I have the following:
volumes:
- name: my-storage
  persistentVolumeClaim:
    claimName: azurefile
volumeMounts:
- name: my-storage
  mountPath: "/home/myapp/newapp/"
My understanding is that under the path /home/myapp/newapp/ in the containers, the Azure file share cloudshare's content will be accessible, so whatever I have in cloudshare will be visible there. Does the PVC or PV create folders under the cloudshare? The reason I ask is that I have a WORKDIR in my Docker image which is actually under the same mount path, like below:
WORKDIR /home/myapp/newapp/reta-app/
For some reason the reta-app folder is getting created inside the cloudshare. Is this normal behaviour or am I doing something wrong?
Does pvc or pv create folders under the cloudshare?
No. A Kubernetes PersistentVolume is just some storage somewhere, and a PersistentVolumeClaim is a way of referencing a PV (that may not immediately exist). Kubernetes does absolutely no management of any of the content in a persistent volume; it will not create directories on startup, copy content from the image into a volume, or anything else.
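If you do want the volume seeded with files from the image, that copy has to be done explicitly, for example with an initContainer. A rough sketch; the image name is hypothetical, while the paths and volume name are taken from the question:
      initContainers:
      - name: seed-volume
        image: your-registry/your-image:latest   # hypothetical: the image that contains the files
        # copy the image's original files into the mounted share
        command: ["sh", "-c", "cp -r /home/myapp/newapp/. /mnt/seed/"]
        volumeMounts:
        - name: my-storage
          mountPath: /mnt/seed
Note this runs on every pod start, so files in the share with the same names as files in the image will be overwritten.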

Kubernetes Persistent Volume Claims creates a new Persistent Volume instead of binding to the available Persistent Volume

I am running macOS Catalina using the Docker application with the Kubernetes option turned on. I create a PersistentVolume with the following YAML and command.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.250
    path: "/volume1/docker"
kubectl apply -f pv.yml
This creates a PersistentVolume with the name pv-nfs-data. Next I create a PersistentVolumeClaim with the following YAML and command.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
kubectl apply -f pvc.yml
This creates a PersistentVolumeClaim with the name pvc-nfs-data; however, it doesn't bind to the available PersistentVolume (pv-nfs-data). Instead it creates a new one and binds to that. How do I make the PersistentVolumeClaim bind to the available PersistentVolume?
The Docker for Mac default storage class is the dynamic provisioning type, like you would get on AKS/GKE, where it allocates the physical storage as well.
→ kubectl get StorageClass
NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   191d
For a PVC to use an existing PV, you can disable the storage class and specify in the PV which PVC can use it with a claimRef.
Claim Ref
The PV includes a claimRef for the PVC you will create
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Gi
  claimRef:
    namespace: insert-your-namespace-here
    name: pv-nfs-data-claim
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.250
    path: "/volume1/docker"
The PVC sets storageClassName to ''
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-nfs-data-claim
  namespace: insert-your-namespace-here
spec:
  storageClassName: ''
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
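After applying both, you can confirm the claim bound to the pre-created volume rather than to a freshly provisioned one (names as in the manifests above):
kubectl get pv pv-nfs-data
kubectl get pvc pv-nfs-data-claim -n insert-your-namespace-here
The PV's STATUS should show Bound, with the CLAIM column pointing at pv-nfs-data-claim.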
Dynamic
You can go the dynamic route with NFS by adding an NFS dynamic provisioner, creating a storage class for it, and letting Kubernetes work the rest out. More recent versions of Kubernetes (1.13+) can use the CSI NFS driver.
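For example, assuming the csi-driver-nfs provisioner is installed, a StorageClass along these lines would let claims be provisioned dynamically against the same NFS server (the class name and parameter values here are illustrative; adjust them to the provisioner you actually deploy):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io      # assumes csi-driver-nfs; other NFS provisioners use different names
parameters:
  server: 192.168.1.250
  share: /volume1/docker
reclaimPolicy: Retain
volumeBindingMode: Immediate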

How to mount folder with files in kubernetes

I am running a Docker image that has certain configuration files within it. I need to persist/mount the same folder to disk, as new files will get added later on. When I use a standard volume mount in Kubernetes, it mounts an empty directory without the initial configuration files. How do I make sure my initial files are copied to the volume while mounting?
        - mountPath: /tmp
          name: my-vol
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: my-vol
        persistentVolumeClaim:
          claimName: wso2-disk2
A possible solution could be to use the node's storage mounted on the containers (the easiest way) or to use a DFS solution like NFS, GlusterFS, and so on.
Another, recommended way to achieve what you need is to use persistent volumes to share the same files between your containers.
Assuming you have a Kubernetes cluster that has only one node, and you want to share the path /mnt/data of your node with your pods (Source):
Create a PersistentVolume:
A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Create a PersistentVolumeClaim:
Pods use PersistentVolumeClaims to request physical storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Look at the PersistentVolumeClaim:
kubectl get pvc task-pv-claim
The output shows that the PersistentVolumeClaim is bound to your PersistentVolume, task-pv-volume.
NAME            STATUS   VOLUME           CAPACITY   ACCESSMODES   STORAGECLASS   AGE
task-pv-claim   Bound    task-pv-volume   10Gi       RWO           manual         30s
Create a deployment with 2 replicas for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: task-pv-claim
      containers:
      - name: task-pv-container
        image: nginx
        ports:
        - containerPort: 80
          name: "http-server"
        volumeMounts:
        - mountPath: "/mnt/data"
          name: task-pv-storage
Now you can check that inside both containers the path /mnt/data has the same files.
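For instance, something along these lines (the pod names will differ in your cluster):
kubectl get pods -l app=nginx
kubectl exec <one-of-the-pod-names> -- ls /mnt/data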
If you have a cluster with more than one node, I recommend you consider the other types of persistent volumes or using a DFS.
References:
Configure persistent volumes
Persistent volumes
Volume Types
The suggested way to provide configuration to your pod is by creating a ConfigMap for your configuration and mounting it in your pod using volumes. This guide (https://kubernetes.io/docs/concepts/storage/volumes/#configmap) describes how to do that.
Another way is to create a persistent volume and persistent volume claim in your cluster, copy your configuration files into that path, and mount the persistent volume in your pod.
You can also copy your configuration onto one of the nodes in your cluster and mount that path using hostPath, but this requires that your pod also runs on that same node, since it looks for the path on that node. (Not a recommended approach.)
Create a ConfigMap of the folder you would like to mount; the following creates a ConfigMap consisting of all the files in your-folder:
kubectl create configmap your-config --from-file=your-folder/
Then mount this as a volume and you will have the initial files in your folder. Note that you will need to mount it with subPath, since you don't want it to overwrite everything else in the directory.
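A minimal sketch of that mount, assuming the ConfigMap above holds a single file named app.conf (the pod name, image, file name and mount path are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: config-demo                    # placeholder pod name
spec:
  containers:
  - name: app
    image: nginx                       # placeholder image
    volumeMounts:
    - name: config
      mountPath: /etc/myapp/app.conf   # placeholder target path
      subPath: app.conf                # only this key is mounted, so the rest of /etc/myapp is untouched
  volumes:
  - name: config
    configMap:
      name: your-config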

How do I use OpenStack Cinder volumes as Kubernetes persistent volumes?

I have Kubernetes running in OpenStack, and I want to use volumes provided by OpenStack rather than using NFS to manage volumes. I'm not sure where to start or if it's even possible. I've tried a bunch of stuff, no luck :(
Here are some methods I've tried so far.
I modified the /etc/kubernetes/manifests/kube-controller YAML file. I mounted the cloud.conf file and added these lines:
- --cloud-provider=openstack
- --cloud-config=/etc/kubernetes/cloud.conf
Then I ran this to create my storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openstack-test
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  namespace: mongo-dump
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/cinder
parameters:
  type: fast
  availability: nova
Then I created my PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cinder-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
After this step it's just stuck at Pending.
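When a claim stays Pending, the events on it usually say why (no matching PV, provisioner errors, and so on); a first step is:
kubectl describe pvc cinder-claim
kubectl get storageclass
One thing worth checking in the manifests above: the claim's annotation requests a storage class named "standard", while the StorageClass created is named openstack-test, which by itself can leave the claim Pending.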

Kubernetes Persistent Volume and hostpath

I was experimenting with Kubernetes Persistent Volumes. I can't find a clear explanation in the Kubernetes documentation, and the behaviour is not the one I am expecting, so I'd like to ask here.
I configured the following Persistent Volume and Persistent Volume Claim:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: store-persistent-volume
  namespace: test
spec:
  storageClassName: hostpath
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: store-persistent-volume-claim
  namespace: test
spec:
  storageClassName: hostpath
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
and the following Deployment and Service configuration.
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: store-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: store
  template:
    metadata:
      labels:
        k8s-app: store
    spec:
      volumes:
      - name: store-volume
        persistentVolumeClaim:
          claimName: store-persistent-volume-claim
      containers:
      - name: store
        image: localhost:5000/store
        ports:
        - containerPort: 8383
          protocol: TCP
        volumeMounts:
        - name: store-volume
          mountPath: /data
---
#------------ Service ----------------#
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: store
  name: store
  namespace: test
spec:
  type: LoadBalancer
  ports:
  - port: 8383
    targetPort: 8383
  selector:
    k8s-app: store
As you can see, I defined '/Volumes/Data/data' as the Persistent Volume and am expecting it to be mounted at '/data' in the container.
So I am assuming whatever is in '/Volumes/Data/data' on the host should be visible in the '/data' directory in the container. Is this assumption correct? Because this is definitely not happening at the moment.
My second assumption is that whatever I save at '/data' should be visible on the host, which is also not happening.
I can see from the Kubernetes console that everything started correctly (Persistent Volume, Claim, Deployment, Pod, Service...).
Am I understanding the persistent volume concept correctly at all?
PS: I am trying this on a Mac with Docker (18.05.0-ce-mac67(25042), edge channel); maybe it is not supposed to work on a Mac?
Thanks for any answers.
Assuming you are using a multi-node Kubernetes cluster, you should be able to see the data mounted locally at /Volumes/Data/data on the specific worker node the pod is running on.
You can check on which worker your pod is scheduled by using the command kubectl get pods -o wide -n test.
Please note, as per the Kubernetes docs: HostPath (single-node testing only; local storage is not supported in any way and WILL NOT WORK in a multi-node cluster) PersistentVolume.
It does work in my case.
As you are using hostPath, you should check this '/data' on the worker node on which the pod is running.
Like the answer above said, you need to run 'kubectl get po -n test -o wide' and you will see the node the pod is hosted on. Then if you SSH into that worker you can see the volume.
