Cannot bind PersistentVolumeClaim to PersistentVolume in namespace - jenkins

I am trying to install Jenkins on my Kubernetes cluster under the jenkins namespace. When I deploy my PV and PVC, the PV remains Available and does not bind to my PVC.
Here are my YAMLs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: jenkins
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Below is my StorageClass for manual. The standard class has not been changed and should be the same as the default standard class on Kubernetes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"manual"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
  creationTimestamp: "2021-06-14T14:41:39Z"
  name: manual
  resourceVersion: "3643100822"
  uid: 8254d900-58e5-49e1-a07e-1830096aac87
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

Based on the StorageClass spec, I think the problem is the volumeBindingMode being set to WaitForFirstConsumer, which means the PV will remain unbound until there is a Pod to consume it.
You can change it to Immediate to allow the PV to be bound immediately without having to create a Pod first. Note also that the PV declares storageClassName: standard while the PVC requests manual; the two have to name the same StorageClass before any binding can happen.
You can read about the different volume binding modes in detail in the docs.
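A minimal sketch of a matching set (assuming the PVC lives in the jenkins namespace and should claim this exact hostPath PV):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate    # bind as soon as a matching PV exists
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: jenkins
spec:
  storageClassName: manual      # must match the PVC's storageClassName
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: jenkins
spec:
  storageClassName: manual      # same class as the PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi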

Related

AWS EKS EFS mounted volume: despite 21Gi in the claimed volume, the pod shows 8E (the full possible size of EFS)

Despite 21Gi being set in the claimed volume, the pod shows 8E (the full possible size of EFS).
Is this OK and is the storage size actually limited? Or did I make a mistake in the configuration that needs to be changed, or is it something else?
I would appreciate your help.
Volume:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM
monitoring-eks-falcon-victoriametrics 21Gi RWX Retain Bound victoriametrics/victoriametrics-data
Pod:
Filesystem Size Used Available Use% Mounted on
fs-efs.us-....s.com:/ 8.0E 0 8.0E 0% /data
Persistent Volumes
kind: PersistentVolume
apiVersion: v1
metadata:
  name: monitoring-eks-falcon-victoriametrics
  uid: f43e12d0-77ab-4530-8c9e-cfbd3c641467
  resourceVersion: '28847'
  labels:
    Name: victoriametrics
    purpose: victoriametrics
  annotations:
    pv.kubernetes.io/bound-by-controller: 'yes'
  finalizers:
    - kubernetes.io/pv-protection
spec:
  capacity:
    storage: 21Gi
  nfs:
    server: fs-.efs.us-east-1.amazonaws.com
    path: /
  accessModes:
    - ReadWriteMany
  claimRef:
    kind: PersistentVolumeClaim
    namespace: victoriametrics
    name: victoriametrics-data
    uid: 8972e897-4e16-a64f-4afd8f90fa89
    apiVersion: v1
    resourceVersion: '28842'
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  volumeMode: Filesystem
Persistent Volume Claims
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: victoriametrics-data
  namespace: victoriametrics
  uid: 8972e897-4e16-a64f-4afd8f90fa89
  resourceVersion: '28849'
  labels:
    Name: victoriametrics
    purpose: victoriametrics
  annotations:
    Description: Volume for Victoriametrics DB
    pv.kubernetes.io/bind-completed: 'yes'
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteMany
  selector:
    matchLabels:
      k8s-app: victoriametrics
      purpose: victoriametrics
    matchExpressions:
      - key: k8s-app
        operator: In
        values:
          - victoriametrics
  resources:
    limits:
      storage: 21Gi
    requests:
      storage: 21Gi
  volumeName: monitoring-eks-falcon-victoriametrics
  storageClassName: efs-sc
  volumeMode: Filesystem
status:
  phase: Bound
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 21Gi
Pod deployment
kind: Deployment
...
spec:
  ...
  spec:
    volumes:
      - name: victoriametrics-data
        persistentVolumeClaim:
          claimName: victoriametrics-data
    containers:
      - name: victoriametrics
        ...
        volumeMounts:
          - name: victoriametrics-data
            mountPath: /data
            mountPropagation: None
...
The number "8E" serves as an indicator, it is not a real quota. AWS EFS does not support quota (eg. FATTR4_QUOTA_AVAIL_HARD). It generally means you have "unlimited" space on this mount. There's nothing wrong with your spec; the number specified in the PVC's resources.requests.storage is used to match PV's capacity.storage. It doesn't mean you can only write 21GB on the EFS mount.
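If you want to double-check what the objects and the mount actually report, something like the following works (the deployment name victoriametrics is an assumption here; substitute your own pod or deployment name):

# capacity recorded on the claim and the volume (a matching label, not a quota)
kubectl -n victoriametrics get pvc victoriametrics-data
kubectl get pv monitoring-eks-falcon-victoriametrics

# what the EFS/NFS mount reports inside the pod ("8.0E" means effectively unlimited)
kubectl -n victoriametrics exec deploy/victoriametrics -- df -h /data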

Kubernetes Persistent Volume Claims creates a new Persistent Volume instead of binding to the available Persistent Volume

I am running macOS Catalina using the Docker application with the Kubernetes option turned on. I create a PersistentVolume with the following YAML and command.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.250
    path: "/volume1/docker"
kubectl apply -f pv.yml
This creates a PersistentVolume named pv-nfs-data. Next I create a PersistentVolumeClaim with the following YAML and command.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
kubectl apply -f pvc.yml
This creates a PersistentVolumeClaim named pvc-nfs-data, however it doesn't bind to the available PersistentVolume (pv-nfs-data). Instead it creates a new one and binds to that. How do I make the PersistentVolumeClaim bind to the available PersistentVolume?
The Docker for Mac default StorageClass uses dynamic provisioning, like you would get on AKS/GKE, where it allocates the physical storage as well.
→ kubectl get StorageClass
NAME PROVISIONER AGE
hostpath (default) docker.io/hostpath 191d
For a PVC to use an existing PV, you can opt out of the default StorageClass (by setting storageClassName to '') and specify in the PV which PVC can use it with a claimRef.
Claim Ref
The PV includes a claimRef for the PVC you will create
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  claimRef:
    namespace: insert-your-namespace-here
    name: pv-nfs-data-claim
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.250
    path: "/volume1/docker"
The PVC sets storageClassName to ''
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-nfs-data-claim
  namespace: insert-your-namespace-here
spec:
  storageClassName: ''
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
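After applying both manifests (assuming they are still in pv.yml and pvc.yml as in the question), the pre-bound pair should report Bound instead of triggering the dynamic hostpath provisioner:

kubectl apply -f pv.yml -f pvc.yml
# STATUS should be Bound and CLAIM should show <your-namespace>/pv-nfs-data-claim
kubectl get pv pv-nfs-data
kubectl get pvc pv-nfs-data-claim -n insert-your-namespace-here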
Dynamic
You can go the dynamic route with NFS by adding an NFS dynamic provisioner, creating a StorageClass for it, and letting Kubernetes work the rest out. More recent versions of Kubernetes (1.13+) can use the CSI NFS driver.
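As one commonly used example of that route (named here as an illustration, not the only option), the nfs-subdir-external-provisioner chart can be pointed at the same NFS export and will create PVs on demand:

helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.250 \
  --set nfs.path=/volume1/docker
# PVCs that reference the chart's StorageClass (nfs-client by default) are then
# provisioned automatically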

Pod stuck in Init state while installing Jenkins with Helm in k8s

Here is the PVC YAML:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 200Gi
  storageClassName: standard
Here is the StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
Here is the PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
spec:
  capacity:
    storage: 200Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /data/shared/jenkins
Here is the command I am using:
helm install --set persistence.existingClaim=jenkins-pvc --set master.serviceType=NodePort stable/jenkins --generate-name
Error: the Pod stays in the Init state. Here is the init container status:
[map[containerID:docker://1e2d565bfde2a84410a63d028b73215fd4c81fd552cc246e0c517e4c76c69c67 image:jenkins/jenkins:lts imageID:docker-pullable://jenkins/jenkins@sha256:d5069c543e80454279caacd13457d012fb32c5229b5037a163d8bf61ffa6b80b lastState:map[terminated:map[containerID:docker://1e2d565bfde2a84410a63d028b73215fd4c81fd552cc246e0c517e4c76c69c67 exitCode:1 finishedAt:2020-01-07T07:13:13Z reason:Error startedAt:2020-01-07T07:06:18Z]] name:copy-default-config ready:false restartCount:4 state:map[waiting:map[message:back-off 1m20s restarting failed container=copy-default-config pod=jenkins-1578379025-ccf77dfc-wtnww_default(ca6e0e22-7cf6-487d-b6e4-0223a1dc46a0) reason:CrashLoopBackOff]]]]
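The status above only says CrashLoopBackOff; the actual failure reason of the copy-default-config init container can be read from its log (pod name and namespace taken from the error output above):

kubectl logs jenkins-1578379025-ccf77dfc-wtnww -c copy-default-config -n default
kubectl describe pod jenkins-1578379025-ccf77dfc-wtnww -n default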
Can anyone help?

How do I use OpenStack Cinder volumes as Kubernetes persistent volumes?

I have Kubernetes running in OpenStack, and I want to use volumes provided by OpenStack rather than NFS to manage volumes. I'm not sure where to start or if it's even possible; I've tried a bunch of things with no luck.
Here are some of the methods I've tried so far.
I modified the /etc/kubernetes/manifests/kube-controller-manager YAML file. I mounted the cloud.conf file and added these lines:
- --cloud-provider=openstack
- --cloud-config=/etc/kubernetes/cloud.conf
Then I ran this to create my storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openstack-test
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  namespace: mongo-dump
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/cinder
parameters:
  type: fast
  availability: nova
Then I created my PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cinder-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
After this step it's just stuck at Pending.
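When a claim sits at Pending, the provisioner's reason is usually visible in the claim's events; a quick check looks like:

# the events at the end of the describe output show why provisioning has not happened
kubectl describe pvc cinder-claim
# confirm which StorageClass names actually exist (the PVC annotation above asks for "standard")
kubectl get storageclass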

Use data from local system in my kubernetes Pod

I want to use data from files present on my local system in my pod in Kubernetes.
How is PersistentLocalVolumes used for this, and is it safe to use PersistentLocalVolumes given that it is an alpha feature?
Thanks
For a cluster created with kubeadm, edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and add a line for KUBE_FEATURE_GATES:
Environment="KUBE_FEATURE_GATES=--feature-gates PersistentLocalVolumes=true,VolumeScheduling=true,MountPropagation=true"
Add $KUBE_FEATURE_GATES to the ExecStart line.
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS $KUBE_FEATURE_GATES
Manifest
$ cat local_pvc.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          { "matchExpressions": [
            { "key": "kubernetes.io/hostname",
              "operator": "In",
              "values": ["my-node"]     <--- change the node name to yours
            }
          ]}
        ]}
      }'
spec:
  capacity:
    storage: 5Gi                        <--- change the size to your need
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1               <--- change the path to yours
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                      <--- change the size to your need
  storageClassName: local-storage
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Execution
Restart the cluster (at minimum the kubelet) so the new feature gates take effect; a command sketch follows after this list.
Create the PV & PVC (kubectl create -f local_pvc.yaml).
Use the PVC in the pod.
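A minimal command sketch for those steps (assuming a systemd-managed kubelet, as in the kubeadm setup above):

# pick up the edited drop-in and restart the kubelet with the new feature gates
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# create the PV, PVC and StorageClass, then check the binding
kubectl create -f local_pvc.yaml
kubectl get pv example-local-pv
kubectl get pvc example-local-claim
# with volumeBindingMode: WaitForFirstConsumer the PVC stays Pending
# until a pod that uses it is scheduled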
References
doc: update with Kubernetes localhost persistent storage example
Feature Gates
Kubelet (see the feature gate parameter such as PersistentLocalVolumes=true)
Local Persistent Storage User Guide
You can use the hostPath volume type, which allows you to mount a directory from the host filesystem into the Pod.
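A minimal sketch of that approach (the pod name, image and host path are hypothetical placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: local-data-pod            # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: local-data
          mountPath: /data        # where the host directory appears in the container
  volumes:
    - name: local-data
      hostPath:
        path: /path/on/the/host   # hypothetical host directory
        type: Directory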
