I want to use data from files present on my local system in my pod in Kubernetes.
How is PersistentLocalVolumes used for this, and is it safe to use PersistentLocalVolumes given that it is an alpha feature?
Thanks
For a cluster created with kubeadm, edit:
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Add a line for KUBE_FEATURE_GATES:
Environment="KUBE_FEATURE_GATES=--feature-gates PersistentLocalVolumes=true,VolumeScheduling=true,MountPropagation=true"
Add $KUBE_FEATURE_GATES to the ExecStart line:
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS $KUBE_FEATURE_GATES
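Then reload systemd and restart the kubelet so the new flags are picked up (standard systemd commands):
sudo systemctl daemon-reload
sudo systemctl restart kubelet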
Manifest
$ cat local_pvc.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
  annotations:
    # change "my-node" below to your node's name
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          { "matchExpressions": [
            { "key": "kubernetes.io/hostname",
              "operator": "In",
              "values": ["my-node"]
            }
          ]}
        ]}
      }'
spec:
  capacity:
    storage: 5Gi                # change the size to your need
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1       # change the path to yours
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi              # change the size to your need
  storageClassName: local-storage
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Execution
Restart the kubelet so the new feature gates take effect (see the systemctl commands above).
Create the PV and PVC (kubectl create -f local_pvc.yaml).
Use the PVC in the pod, as sketched below.
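For the last step, a minimal pod sketch that consumes the claim (the pod name, image, and mount path are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: example-local-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /usr/share/data     # where the volume appears in the container
          name: local-vol
  volumes:
    - name: local-vol
      persistentVolumeClaim:
        claimName: example-local-claim   # the PVC created above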
References
doc: update with Kubernetes localhost persistent storage example
Feature Gates
Kubelet (see the feature gate parameter, such as PersistentLocalVolumes=true)
Local Persistent Storage User Guide
You can use the hostPath Volume, which will allow you to mount a directory from the host filesystem onto the Pod.
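For example, a minimal sketch of a pod using a hostPath volume (the pod name, image, and paths are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-example
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /data             # where the host directory appears in the container
          name: host-vol
  volumes:
    - name: host-vol
      hostPath:
        path: /path/on/the/host        # directory on the node's filesystem
        type: Directory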
Related
I am trying to install Jenkins on my Kubernetes cluster under the jenkins namespace. When I deploy my PV and PVC, the PV remains Available and does not bind to my PVC.
Here are my YAMLs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: jenkins
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Below is my StorageClass for manual. The standard class has not been changed; it should be the same as the default standard class on Kubernetes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"manual"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
  creationTimestamp: "2021-06-14T14:41:39Z"
  name: manual
  resourceVersion: "3643100822"
  uid: 8254d900-58e5-49e1-a07e-1830096aac87
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Based on the storage class spec, I think the problem is volumeBindingMode being set to WaitForFirstConsumer, which means the PV will remain unbound until there is a Pod to consume it.
You can change it to Immediate to allow the PV to be bound immediately without requiring a Pod to be created.
Note also that your PV uses storageClassName: standard while the PVC asks for manual; the class names must match for the two to bind.
You can read about the different volume binding modes in detail in the docs.
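A minimal sketch of the manual StorageClass with immediate binding (note that StorageClass fields are immutable, so you would delete and recreate the object):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate   # bind PVs as soon as the PVC is created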
I want to copy a text file to a pod on minikube, but I get a timeout error:
scp -r /Users/joe/Downloads/Archive/data.txt docker@192.168.49.2:/home/docker
I got the IP address (192.168.49.2) with:
minikube ip
Eventually I would like the file to appear on the persistentVolumeClaim/persistentVolume (that would be great!!).
The YAML for the PersistentVolume is:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: my-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The YAML for the PersistentVolumeClaim is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 512Mi
The YAML for the pod is:
kind: Pod
apiVersion: v1
metadata:
  name: my-pvc-pod
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
      volumeMounts:
        - mountPath: "/mnt/storage"
          name: my-storage
  volumes:
    - name: my-storage
      persistentVolumeClaim:
        claimName: my-pvc
Eventually I would like the file to appear on the persistentVolumeClaim/persistentVolume.
You can achieve that by mounting the host directory into the guest using the minikube mount command:
minikube mount <source directory>:<target directory>
where <source directory> is the host directory and <target directory> is the guest/minikube directory.
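For example, with the asker's host directory (the guest path /mnt/shared here is an assumption):
minikube mount /Users/joe/Downloads/Archive:/mnt/shared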
Then use that <target directory> to create a PV with hostPath:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "<target directory>"
Depending on the driver, some of them also have built-in host folder sharing. You can check them here.
If you need to mount only part of the volume, in your case a single file, you can use subPath to specify the part that must be mounted. This answer explains it well.
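For example, in the pod above, the volumeMounts section could mount just the single file (assuming data.txt sits at the root of the volume):
      volumeMounts:
        - mountPath: /mnt/storage/data.txt   # mounts only this file, not the whole volume
          name: my-storage
          subPath: data.txt                  # path of the file inside the volume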
I am running macOS Catalina using the Docker application with the Kubernetes option turned on. I create a PersistentVolume with the following YAML and command.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.250
    path: "/volume1/docker"
kubectl apply -f pv.yml
This creates a PersistentVolume named pv-nfs-data. Next I create a PersistentVolumeClaim with the following YAML and command.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
kubectl apply -f pvc.yml
This creates a PersistentVolumeClaim named pvc-nfs-data; however, it doesn't bind to the available PersistentVolume (pv-nfs-data). Instead it creates a new one and binds to that. How do I make the PersistentVolumeClaim bind to the available PersistentVolume?
The Docker for Mac default storage class is the dynamic provisioning type, like you would get on AKS/GKE, where it allocates the physical storage as well.
→ kubectl get StorageClass
NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   191d
For a PVC to use an existing PV, you can disable the storage class and specify in the PV which PVC can use it with a claimRef.
Claim Ref
The PV includes a claimRef for the PVC you will create:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  claimRef:
    namespace: insert-your-namespace-here
    name: pv-nfs-data-claim
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.250
    path: "/volume1/docker"
The PVC sets storageClassName to '':
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-nfs-data-claim
  namespace: insert-your-namespace-here
spec:
  storageClassName: ''
  accessModes:
    - ReadWriteMany   # must match the access modes offered by the PV
  resources:
    requests:
      storage: 10Gi
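After applying both objects, the claim should bind straight to the existing PV; a quick sketch of how to verify (file names follow the asker's):
kubectl apply -f pv.yml -f pvc.yml
kubectl get pv pv-nfs-data        # STATUS should show Bound
kubectl get pvc pv-nfs-data-claim -n insert-your-namespace-here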
Dynamic
You can go the dynamic route with NFS by adding an NFS dynamic provisioner, creating a storage class for it, and letting Kubernetes work out the rest. More recent versions of Kubernetes (1.13+) can use the CSI NFS driver.
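As a sketch, a StorageClass for the CSI NFS driver might look like this (assuming the csi-driver-nfs addon is installed and registers the provisioner name nfs.csi.k8s.io; server and share reuse the asker's values):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io      # assumption: provisioner name registered by csi-driver-nfs
parameters:
  server: 192.168.1.250          # the asker's NFS server
  share: /volume1/docker         # the asker's NFS export
reclaimPolicy: Retain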
I have Kubernetes running on OpenStack, and I want to use volumes provided by OpenStack rather than NFS to manage volumes. I'm not sure where to start or if it's even possible. I've tried a bunch of things with no luck. :(
Here are the methods I've tried so far.
I modified the /etc/kubernetes/manifests/kube-controller-manager YAML file. I mounted the cloud.conf file and added these lines:
- --cloud-provider=openstack
- --cloud-config=/etc/kubernetes/cloud.conf
Then I ran this to create my storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openstack-test
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  namespace: mongo-dump
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/cinder
parameters:
  type: fast
  availability: nova
Then I created my PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cinder-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
After this step it's just stuck at Pending. :(
In Kubernetes, is it possible to add hostPath storage in a StatefulSet? If so, can someone help me with an example?
Yes, but it is definitely for testing purposes only.
First you need to create as many PersistentVolumes as you need:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: hp-pv-001
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data01"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: hp-pv-002
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data02"
...
Afterwards, add this volumeClaimTemplates section to your StatefulSet:
volumeClaimTemplates:
  - metadata:
      name: my-hostpath-volume
    spec:
      storageClassName: manual
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          type: local
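Each replica then gets its own claim, named after the template and the pod ordinal; a quick way to check (assuming the StatefulSet is named web):
kubectl get pvc
# expect one Bound claim per replica: my-hostpath-volume-web-0, my-hostpath-volume-web-1, ...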
Another solution is using the hostpath dynamic provisioner. You do not have to create the PVs in advance, but this remains a proof-of-concept solution as well, and you will have to build and deploy the provisioner in your cluster.
A hostPath volume for a StatefulSet should only be used in a single-node cluster, e.g. for development. Rescheduling of the pod will not work.
Instead, consider using a Local Persistent Volume for this kind of use cases.
The biggest difference is that the Kubernetes scheduler understands which node a Local Persistent Volume belongs to. With HostPath volumes, a pod referencing a HostPath volume may be moved by the scheduler to a different node resulting in data loss. But with Local Persistent Volumes, the Kubernetes scheduler ensures that a pod using a Local Persistent Volume is always scheduled to the same node.
Consider using the local static provisioner for this; the Getting Started guide has instructions for how to use it in different environments.