Here is the PVC YAML:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 200Gi
  storageClassName: standard
Here is the storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
--
Here is the PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
spec:
  capacity:
    storage: 200Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /data/shared/jenkins
Here is the command I am using:
helm install --set persistence.existingClaim=jenkins-pvc --set master.serviceType=NodePort stable/jenkins --generate-name
Error: the pod is stuck in the Init state.
Here is the init container error:
[map[containerID:docker://1e2d565bfde2a84410a63d028b73215fd4c81fd552cc246e0c517e4c76c69c67 image:jenkins/jenkins:lts imageID:docker-pullable://jenkins/jenkins@sha256:d5069c543e80454279caacd13457d012fb32c5229b5037a163d8bf61ffa6b80b lastState:map[terminated:map[containerID:docker://1e2d565bfde2a84410a63d028b73215fd4c81fd552cc246e0c517e4c76c69c67 exitCode:1 finishedAt:2020-01-07T07:13:13Z reason:Error startedAt:2020-01-07T07:06:18Z]] name:copy-default-config ready:false restartCount:4 state:map[waiting:map[message:back-off 1m20s restarting failed container=copy-default-config pod=jenkins-1578379025-ccf77dfc-wtnww_default(ca6e0e22-7cf6-487d-b6e4-0223a1dc46a0) reason:CrashLoopBackOff]]]]
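The status dump above only shows the exit code; the actual error from the copy-default-config init container should be visible in its own logs, e.g. (pod and container names taken from the status above):
kubectl logs jenkins-1578379025-ccf77dfc-wtnww -c copy-default-config
kubectl describe pod jenkins-1578379025-ccf77dfc-wtnww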
Can anyone help?
In spite of 21Gi being set on the claimed volume, the pod sees 8E (the full possible size of EFS).
Is this OK and is the storage size still limited? Or did I make a mistake in the configuration that needs to be changed, or is it something else?
I would appreciate your help.
Volume:
NAME                                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM
monitoring-eks-falcon-victoriametrics   21Gi       RWX            Retain           Bound    victoriametrics/victoriametrics-data
Pod:
Filesystem              Size   Used   Available   Use%   Mounted on
fs-efs.us-....s.com:/   8.0E   0      8.0E        0%     /data
Persistent Volumes
kind: PersistentVolume
apiVersion: v1
metadata:
  name: monitoring-eks-falcon-victoriametrics
  uid: f43e12d0-77ab-4530-8c9e-cfbd3c641467
  resourceVersion: '28847'
  labels:
    Name: victoriametrics
    purpose: victoriametrics
  annotations:
    pv.kubernetes.io/bound-by-controller: 'yes'
  finalizers:
    - kubernetes.io/pv-protection
spec:
  capacity:
    storage: 21Gi
  nfs:
    server: fs-.efs.us-east-1.amazonaws.com
    path: /
  accessModes:
    - ReadWriteMany
  claimRef:
    kind: PersistentVolumeClaim
    namespace: victoriametrics
    name: victoriametrics-data
    uid: 8972e897-4e16-a64f-4afd8f90fa89
    apiVersion: v1
    resourceVersion: '28842'
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  volumeMode: Filesystem
Persistent Volume Claims
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: victoriametrics-data
  namespace: victoriametrics
  uid: 8972e897-4e16-a64f-4afd8f90fa89
  resourceVersion: '28849'
  labels:
    Name: victoriametrics
    purpose: victoriametrics
  annotations:
    Description: Volume for Victoriametrics DB
    pv.kubernetes.io/bind-completed: 'yes'
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteMany
  selector:
    matchLabels:
      k8s-app: victoriametrics
      purpose: victoriametrics
    matchExpressions:
      - key: k8s-app
        operator: In
        values:
          - victoriametrics
  resources:
    limits:
      storage: 21Gi
    requests:
      storage: 21Gi
  volumeName: monitoring-eks-falcon-victoriametrics
  storageClassName: efs-sc
  volumeMode: Filesystem
status:
  phase: Bound
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 21Gi
Pod deployment
kind: Deployment
...
spec:
  ...
  spec:
    volumes:
      - name: victoriametrics-data
        persistentVolumeClaim:
          claimName: victoriametrics-data
    containers:
      - name: victoriametrics
        ...
        volumeMounts:
          - name: victoriametrics-data
            mountPath: /data
            mountPropagation: None
...
The number "8E" serves as an indicator, it is not a real quota. AWS EFS does not support quota (eg. FATTR4_QUOTA_AVAIL_HARD). It generally means you have "unlimited" space on this mount. There's nothing wrong with your spec; the number specified in the PVC's resources.requests.storage is used to match PV's capacity.storage. It doesn't mean you can only write 21GB on the EFS mount.
I am trying to install Jenkins on my Kubernetes cluster under the jenkins namespace. When I deploy my PV and PVC, the PV remains Available and does not bind to my PVC.
Here are my YAMLs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: jenkins
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Below is my StorageClass for manual. The standard class has not been changed; it should be the same as the default standard class on Kubernetes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"manual"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
  creationTimestamp: "2021-06-14T14:41:39Z"
  name: manual
  resourceVersion: "3643100822"
  uid: 8254d900-58e5-49e1-a07e-1830096aac87
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Based on the StorageClass spec, I think the problem is the volumeBindingMode being set to WaitForFirstConsumer, which means the PV will remain unbound until there is a Pod to consume it.
You can change it to Immediate to allow the PV to be bound immediately, without requiring a Pod to be created. Note also that in your manifests the PV uses storageClassName: standard while the PVC asks for manual; those have to match before the PV and PVC can bind at all.
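As far as I know, volumeBindingMode cannot be edited on an existing StorageClass, so you would likely have to delete and recreate it. A minimal sketch of the recreated class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate   # was WaitForFirstConsumer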
You can read about the different volume binding modes in detail in the docs.
I want to copy a text file to a pod on minikube, but I get a timeout error.
scp -r /Users/joe/Downloads/Archive/data.txt docker@192.168.49.2:/home/docker
I got the IP address (192.168.49.2) with:
minikube ip
Eventually I would like the file to appear on the PersistentVolumeClaim/PersistentVolume (that would be great!!)
The yaml for the PersistentVolume is:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: my-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The yaml for the PersistentVolumeClaim is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 512Mi
The yaml for the pod is:
kind: Pod
apiVersion: v1
metadata:
  name: my-pvc-pod
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
      volumeMounts:
        - mountPath: "/mnt/storage"
          name: my-storage
  volumes:
    - name: my-storage
      persistentVolumeClaim:
        claimName: my-pvc
Eventually I would like the file to appear on the PersistentVolumeClaim/PersistentVolume.
You can achieve that by mounting the host directory into the guest using the minikube mount command:
minikube mount <source directory>:<target directory>
where <source directory> is the host directory and <target directory> is the guest/minikube directory.
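For example, assuming you want the Archive folder from your machine visible at /mnt/data inside the minikube VM:
minikube mount /Users/joe/Downloads/Archive:/mnt/data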
And then use that <target directory> to create a PV with hostPath:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "<target directory>"
Depending on the driver, some of them also have built-in host folder sharing. You can check them here.
If you need to mount only part of the volume, in your case a single file, you can use subPath to specify the part that must be mounted. This answer explains it well.
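For completeness, a minimal sketch of such a single-file mount for your busybox pod (hypothetical target path; it assumes data.txt already exists at the root of the volume):
volumeMounts:
  - name: my-storage
    mountPath: /home/docker/data.txt   # where the single file appears in the container
    subPath: data.txt                  # mount only this file from the volume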
I would like to run a local FTP server (ProFTPD) on minikube with the Docker image vipconsult/proftpd.
Here is my deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
  name: ftp-local
  namespace: influx
  labels:
    app: ftp-local
    component: core
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ftp-local
        component: core
    spec:
      initContainers:
      containers:
        - image: vipconsult/proftpd
          name: ftp-local
          imagePullPolicy: IfNotPresent
          # env:
          resources:
            # keep request = limit to keep this container in guaranteed class
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: USERNAME
              value: test
            - name: PASSWORD
              value: test
          volumeMounts:
            - name: ftp-persistent-storage
              mountPath: /var/lib/ftp
      volumes:
        - name: ftp-persistent-storage
          persistentVolumeClaim:
            claimName: ftp-storage
and the volume file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ftp-storage
  namespace: influx
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: influx
    name: ftp-storage
  hostPath:
    path: /data/ftp/storage
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ftp-storage
  namespace: influx
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
When I deploy it, I get:
➜ proFTP git:(master) ✗ kubectl logs -n influx ftp-local-58d5d774d-zdztf
chown: cannot access '/etc/proftpd/ftpd.passwd': No such file or directory
The documentation states:
to make the users persistent, share the passwords file with the host or a data volume: -v /home/docker/proftpd/ftpd.passwd:/etc/proftpd/ftpd.passwd
So I tried to mount this file.
Deployment (just showing the volumes section):
volumeMounts:
  - name: ftp-persistent-storage
    mountPath: /var/lib/ftp
  - name: ftp-persistent-users
    mountPath: /etc/proftpd/ftpd.passwd
    subPath: ftpd.passwd
volumes:
  - name: ftp-persistent-storage
    persistentVolumeClaim:
      claimName: ftp-storage
  - name: ftp-persistent-users
    persistentVolumeClaim:
      claimName: ftp-storage-users
and volumes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ftp-storage
  namespace: influx
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: influx
    name: ftp-storage
  hostPath:
    path: /data/ftp/storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ftp-storage-user
  namespace: influx
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: influx
    name: ftp-storage-user
  hostPath:
    path: /data/ftp/config
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ftp-storage
  namespace: influx
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ftp-storage-user
  namespace: influx
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
First, I am a bit frustrated to have to use a 1Gi volume for a single file, and then I get:
➜ proFTP git:(master) ✗ kubectl logs -n influx ftp-local-67cfd77497-7rt5m
2019-10-03 07:35:23,853 ftp-local-67cfd77497-7rt5m proftpd[1]: mod_auth_file/1.0: unable to use AuthUserFile '/etc/proftpd/ftpd.passwd': Is a directory
2019-10-03 07:35:23,853 ftp-local-67cfd77497-7rt5m proftpd[1]: fatal: AuthUserFile: unable to use /etc/proftpd/ftpd.passwd: Is a directory on line 193 of '/etc/proftpd/proftpd.conf'
Here are my volumes:
git:(devops) ✗ kubectl get pv,pvc -n influx
NAME                                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
persistentvolume/ftp-storage        1Gi        RWO            Retain           Bound    influx/ftp-storage                                 22s
persistentvolume/ftp-storage-user   1Gi        RWO            Retain           Bound    influx/ftp-storage-user                            22s

NAME                                     STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/ftp-storage        Bound    ftp-storage        1Gi        RWO            standard       22s
persistentvolumeclaim/ftp-storage-user   Bound    ftp-storage-user   1Gi        RWO            standard       22s
Now, describing the PVs:
➜ git:(devops) ✗ kubectl describe pv -n influx ftp-storage
Name:            ftp-storage
Labels:          <none>
Annotations:     kubectl.kubernetes.io/last-applied-configuration:
                   {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"ftp-storage"},"spec":{"accessModes":["ReadWriteOnce"],"c...
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:
Status:          Bound
Claim:           influx/ftp-storage
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /data/ftp/storage
    HostPathType:
Events:          <none>
➜ git:(devops) ✗ kubectl describe pv -n influx ftp-storage-user
Name:            ftp-storage-user
Labels:          <none>
Annotations:     kubectl.kubernetes.io/last-applied-configuration:
                   {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"ftp-storage-user"},"spec":{"accessModes":["ReadWriteOnce...
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:
Status:          Bound
Claim:           influx/ftp-storage-user
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /data/ftp/config
    HostPathType:
Events:          <none>
What am I doing wrong?
I'm trying to create two deployments, one for WordPress and the other for MySQL, which refer to two different persistent volumes.
Sometimes, while deleting and recreating the volumes and deployments, the MySQL deployment populates the WordPress volume (ending up with a database in the wordpress-volume directory).
This is clearer when you run kubectl get pv --namespace my-namespace:
mysql-volume       2Gi   RWO   Retain   Bound   flashart-it/wordpress-volume-claim   manual   1h
wordpress-volume   2Gi   RWO   Retain   Bound   flashart-it/mysql-volume-claim       manual
I'm pretty sure the settings are OK. Please find the YAML files below.
Persistent Volume Claims + Persistent Volumes
kind: PersistentVolume
apiVersion: v1
metadata:
  namespace: my-namespace
  name: mysql-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /path/to/mount/mysql-volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: my-namespace
  name: mysql-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  namespace: my-namespace
  name: wordpress-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /path/to/mount/wordpress-volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: my-namespace
  name: wordpress-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Deployments
kind: Deployment
apiVersion: apps/v1
metadata:
  name: wordpress
  namespace: my-namespace
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      namespace: my-namespace
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:5.0-php7.1-apache
          name: wordpress
          env:
            # ...
          ports:
            # ...
          volumeMounts:
            - name: wordpress-volume
              mountPath: /var/www/html
      volumes:
        - name: wordpress-volume
          persistentVolumeClaim:
            claimName: wordpress-volume-claim
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: my-namespace
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      namespace: my-namespace
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            # ...
          ports:
            # ...
          volumeMounts:
            - name: mysql-volume
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-volume
          persistentVolumeClaim:
            claimName: mysql-volume-claim
This is expected behavior in Kubernetes. A PVC can bind to any available PV as long as the storage class matches, the access mode matches, and the storage size is sufficient; names are not used to match a PVC to a PV.
A possible solution for your scenario is to use a label selector on the PVC to filter qualified PVs.
First, add a label to the PV (in this case app: mysql):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-volume
  labels:
    app: mysql
Then, add a label selector to the PVC to filter PVs:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: my-namespace
  name: mysql-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      app: mysql
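The wordpress pair can get the analogous treatment so that neither claim can grab the other's volume; a sketch using the same label/selector pattern and the names from your manifests:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: wordpress-volume
  labels:
    app: wordpress
with a matching selector.matchLabels (app: wordpress) added to wordpress-volume-claim.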