I have Kubernetes running on OpenStack, and I want to use volumes provided by OpenStack rather than NFS to manage volumes. I'm not sure where to start or whether it's even possible. I've tried a bunch of things with no luck.
Here are some of the methods I've tried so far.
I modified the /etc/kubernetes/manifests/kube-controller-manager.yaml file: I mounted the cloud.conf file and added these flags:
- --cloud-provider=openstack
- --cloud-config=/etc/kubernetes/cloud.conf
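For reference, a minimal sketch of the corresponding mount in that static pod manifest, assuming cloud.conf sits at /etc/kubernetes/cloud.conf on the control-plane host (the paths are assumptions; adjust to your setup):
# fragment of /etc/kubernetes/manifests/kube-controller-manager.yaml (sketch only)
spec:
  containers:
  - name: kube-controller-manager
    volumeMounts:
    - name: cloud-config
      mountPath: /etc/kubernetes/cloud.conf   # where the flag above expects the file
      readOnly: true
  volumes:
  - name: cloud-config
    hostPath:
      path: /etc/kubernetes/cloud.conf         # file on the control-plane node
      type: File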
Then I ran this to create my storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openstack-test
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  namespace: mongo-dump
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/cinder
parameters:
  type: fast
  availability: nova
Then I created my PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cinder-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
After this step it's just stuck at Pending.
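To see why it is stuck, the events on the claim and the controller-manager logs usually name the cause (the namespace and pod name below are assumptions; adjust them to your cluster):
kubectl describe pvc cinder-claim -n mongo-dump
kubectl logs -n kube-system kube-controller-manager-<control-plane-node>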
Related
I am trying to install Jenkins on my Kubernetes cluster under the jenkins namespace. When I deploy my PV and PVC, the PV remains Available and does not bind to my PVC.
Here are my YAMLs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: jenkins
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Below is my StorageClass for manual. The standard class has not been changed; it should be the same as the default standard class on Kubernetes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"manual"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
  creationTimestamp: "2021-06-14T14:41:39Z"
  name: manual
  resourceVersion: "3643100822"
  uid: 8254d900-58e5-49e1-a07e-1830096aac87
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Based on the StorageClass spec, I think the problem is the volumeBindingMode being set to WaitForFirstConsumer, which means binding is delayed until there is a Pod that consumes the claim.
You can change it to Immediate to allow the claim to be bound immediately, without requiring a Pod to be created first.
You can read about the different volume binding modes in detail in the docs.
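For illustration, a sketch of the same no-provisioner class with the binding mode switched (volumeBindingMode generally cannot be edited in place, so you would likely need to delete and recreate the class):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
Note also that a claim only binds to a PV whose storageClassName matches: in the YAML above the PV uses standard while the PVC asks for manual, so they would not pair up regardless of the binding mode.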
I am trying to set up a persistent volume for K8s running in Docker Desktop for Windows. The end goal is to run Jenkins and not lose any work if Docker/K8s spins down.
I have tried a couple of things but I'm either misunderstanding the ability to do this or I am setting something up wrong. Currently I have the environment setup like so:
I have set up a volume in Docker for Jenkins. All I did was create the volume; I'm not sure if I need more configuration here.
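For reference, that amounted to nothing more than a plain create with no driver options:
docker volume create jenkins-pv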
docker volume inspect jenkins-pv
[
    {
        "CreatedAt": "2020-05-20T16:02:42Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/jenkins-pv/_data",
        "Name": "jenkins-pv",
        "Options": {},
        "Scope": "local"
    }
]
I have also created a persistent volume in K8s pointing to the mount point in the Docker volume and deployed it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: hostPath
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: "/var/lib/docker/volumes/jenkins-pv/_data"
I have also created a pv claim and deployed that.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Lastly I have created a deployment for Jenkins. I have confirmed it works and I am able to access the app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-app
  template:
    metadata:
      labels:
        app: jenkins-app
    spec:
      containers:
      - name: jenkins-pod
        image: jenkins/jenkins:2.237-alpine
        ports:
        - containerPort: 50000
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-pv-volume
          mountPath: /var/lib/docker/volumes/jenkins-pv/_data
      volumes:
      - name: jenkins-pv-volume
        persistentVolumeClaim:
          claimName: jenkins-pv-claim
However, the data does not persist after quitting Docker, and I have to reconfigure Jenkins every time I start. Did I miss something, or is what I am trying to do not possible? Is there a better or easier way to do this?
Thanks!
I figured out my issue; it was twofold.
1. I was trying to save data from the wrong location within the pod that was running Jenkins.
2. I was never writing the data back to the Docker shared folder.
To get this working I created a shared folder in Docker (C:\DockerShare).
Then I updated the host path in my Persistent Volume.
The format is /host_mnt/path_to_docker_shared_folder_location
Since I used C:\DockerShare my path is: /host_mnt/c/DockerShare
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: hostPath
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /host_mnt/c/DockerShare/jenkins
I also had to update the Jenkins deployment because I was not actually saving any of the config.
I should have been saving data from /var/jenkins_home.
Deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-app
  template:
    metadata:
      labels:
        app: jenkins-app
    spec:
      containers:
      - name: jenkins-pod
        image: jenkins/jenkins:2.237-alpine
        ports:
        - containerPort: 50000
        - containerPort: 8080
        volumeMounts:
        - name: jenkins
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins
        persistentVolumeClaim:
          claimName: jenkins
Anyway, it's working now, and I hope this helps someone else when it comes to setting up a PV.
I am running macOS Catalina using the Docker application with the Kubernetes option turned on. I create a PersistentVolume with the following YAML and command.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.250
    path: "/volume1/docker"
kubectl apply -f pv.yml
This creates a PersistentVolume named pv-nfs-data. Next I create a PersistentVolumeClaim with the following YAML and command.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
kubectl apply -f pvc.yml
This creates a PersistentVolumeClaim named pvc-nfs-data; however, it doesn't bind to the available PersistentVolume (pv-nfs-data). Instead it creates a new one and binds to that. How do I make the PersistentVolumeClaim bind to the available PersistentVolume?
The Docker for Mac default storage class is the dynamic provisioning type, like you would get on AKS/GKE, where it allocates the physical storage as well.
→ kubectl get StorageClass
NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   191d
For a PVC to use an existing PV, you can disable the storage class and specify in the PV which PVC can use it with a claimRef.
Claim Ref
The PV includes a claimRef for the PVC you will create
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Gi
  claimRef:
    namespace: insert-your-namespace-here
    name: pv-nfs-data-claim
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.250
    path: "/volume1/docker"
The PVC sets storageClassName to ''
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-nfs-data-claim
  namespace: insert-your-namespace-here
spec:
  storageClassName: ''
  accessModes:
  - ReadWriteMany   # must be offered by the PV; the PV above lists ReadWriteMany
  resources:
    requests:
      storage: 10Gi
Dynamic
You can go the dynamic route with NFS by adding an NFS dynamic provisioner, creating a StorageClass for it, and letting Kubernetes work the rest out. More recent versions of Kubernetes (1.13+) can use the CSI NFS driver.
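For illustration, a sketch of such a StorageClass for the CSI NFS driver, assuming the driver is installed in the cluster and registers the provisioner name nfs.csi.k8s.io, and reusing the server and export from the question:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 192.168.1.250    # NFS server from the question
  share: /volume1/docker   # export under which volumes are provisioned
reclaimPolicy: Delete
volumeBindingMode: Immediate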
I was experimenting with Kubernetes Persistent Volumes. I can't find a clear explanation in the Kubernetes documentation, and the behaviour is not the one I am expecting, so I'd like to ask here.
I configured the following Persistent Volume and Persistent Volume Claim.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: store-persistent-volume
  namespace: test
spec:
  storageClassName: hostpath
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: store-persistent-volume-claim
  namespace: test
spec:
  storageClassName: hostpath
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
and the following Deployment and Service configuration.
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: store-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: store
  template:
    metadata:
      labels:
        k8s-app: store
    spec:
      volumes:
      - name: store-volume
        persistentVolumeClaim:
          claimName: store-persistent-volume-claim
      containers:
      - name: store
        image: localhost:5000/store
        ports:
        - containerPort: 8383
          protocol: TCP
        volumeMounts:
        - name: store-volume
          mountPath: /data
---
#------------ Service ----------------#
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: store
  name: store
  namespace: test
spec:
  type: LoadBalancer
  ports:
  - port: 8383
    targetPort: 8383
  selector:
    k8s-app: store
As you can see, I defined '/Volumes/Data/data' as the Persistent Volume and I am expecting it to be mounted at '/data' in the container.
So I am assuming that whatever is in '/Volumes/Data/data' on the host should be visible in the '/data' directory in the container. Is this assumption correct? Because this is definitely not happening at the moment.
My second assumption is that whatever I save at '/data' should be visible on the host, which is also not happening.
I can see from the Kubernetes console that everything started correctly (Persistent Volume, Claim, Deployment, Pod, Service...).
Am I understanding the persistent volume concept correctly at all?
PS: I am trying this on a Mac with Docker (18.05.0-ce-mac67(25042), edge channel); maybe it is not supposed to work on a Mac?
Thanks for any answers.
Assuming you are using a multi-node Kubernetes cluster, you should be able to see the data at /Volumes/Data/data on the specific worker node that the pod is running on.
You can check on which worker your pod is scheduled by using the command kubectl get pods -o wide -n test
Please note, as per the Kubernetes docs, a hostPath PersistentVolume is for single-node testing only; local storage is not supported in any way and WILL NOT WORK in a multi-node cluster.
It does work in my case.
As you are using a hostPath volume, you should check that path on the worker node on which the pod is running.
Like the answer above said, you need to run 'kubectl get po -n test -o wide' and you will see the node the pod is hosted on. Then, if you SSH into that worker, you can see the volume.
In Kubernetes, is it possible to use hostPath storage in a StatefulSet? If so, can someone help me with an example?
Yes, but it is definitely only for testing purposes.
First you need to create as many Persistent Volumes as you need:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: hp-pv-001
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/tmp/data01"
kind: PersistentVolume
apiVersion: v1
metadata:
  name: hp-pv-002
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/tmp/data02"
...
Afterwards, add this volumeClaimTemplates section to your StatefulSet:
volumeClaimTemplates:
- metadata:
    name: my-hostpath-volume
  spec:
    storageClassName: manual
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 5Gi
    selector:
      matchLabels:
        type: local
Another solution is using a hostPath dynamic provisioner. You do not have to create the PVs in advance, but this remains a "proof-of-concept" solution as well, and you will have to build and deploy the provisioner in your cluster.
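Once such a provisioner is deployed, the only extra piece is a StorageClass pointing at it; a sketch follows, where the provisioner name example.com/hostpath is a placeholder and must match whatever name your deployed provisioner actually registers:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-dynamic
provisioner: example.com/hostpath   # placeholder; use your provisioner's registered name
reclaimPolicy: Delete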
A hostPath volume for StatefulSet should only be used in a single-node cluster, e.g. for development. Rescheduling of the pod will not work.
Instead, consider using a Local Persistent Volume for this kind of use case.
The biggest difference is that the Kubernetes scheduler understands which node a Local Persistent Volume belongs to. With HostPath volumes, a pod referencing a HostPath volume may be moved by the scheduler to a different node resulting in data loss. But with Local Persistent Volumes, the Kubernetes scheduler ensures that a pod using a Local Persistent Volume is always scheduled to the same node.
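For illustration, a minimal sketch of a Local Persistent Volume and its StorageClass, assuming the data sits at /mnt/disks/vol1 on a node named node-1 (both placeholders):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-001
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1        # placeholder path on the node
  nodeAffinity:                  # required for local volumes; pins the PV to its node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1               # placeholder node name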
Consider using the local static provisioner for this; its Getting Started guide has instructions for how to use it in different environments.