I would like to run a local FTP server (ProFTPD) on minikube with the Docker image vipconsult/proftpd.
Here is my deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
reloader.stakater.com/auto: "true"
name: ftp-local
namespace: influx
labels:
app: ftp-local
component: core
spec:
replicas: 1
template:
metadata:
labels:
app: ftp-local
component: core
spec:
initContainers:
containers:
- image: vipconsult/proftpd
name: ftp-local
imagePullPolicy: IfNotPresent
# env:
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
env:
- name: USERNAME
value: test
- name: PASSWORD
value: test
volumeMounts:
- name: ftp-persistent-storage
mountPath: /var/lib/ftp
volumes:
- name: ftp-persistent-storage
persistentVolumeClaim:
claimName: ftp-storage
and volume file:
apiVersion: v1
kind: PersistentVolume
metadata:
name: ftp-storage
namespace: influx
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
claimRef:
namespace: influx
name: ftp-storage
hostPath:
path: /data/ftp/storage
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ftp-storage
namespace: influx
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
When I deploy it, I get:
➜ proFTP git:(master) ✗ kubectl logs -n influx ftp-local-58d5d774d-zdztf
chown: cannot access '/etc/proftpd/ftpd.passwd': No such file or directory
Documentation states:
to make the users persistent share the passwords file with the host or a data volume: -v /home/docker/proftpd/ftpd.passwd:/etc/proftpd/ftpd.passwd
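For comparison, the plain-Docker setup from the documentation would look roughly like this (the FTP port mapping and the host-side data path are my assumptions; only the ftpd.passwd mapping and the USERNAME/PASSWORD variables come from the image documentation and my deployment):
# hypothetical docker run equivalent of the documented setup
docker run -d \
  -p 21:21 \
  -e USERNAME=test \
  -e PASSWORD=test \
  -v /home/docker/proftpd/ftpd.passwd:/etc/proftpd/ftpd.passwd \
  -v /home/docker/proftpd/data:/var/lib/ftp \
  vipconsult/proftpd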
So I tried to mount this file.
Deployment (just showing the volume section):
volumeMounts:
- name: ftp-persistent-storage
mountPath: /var/lib/ftp
- name: ftp-persistent-users
mountPath: /etc/proftpd/ftpd.passwd
subPath: ftpd.passwd
volumes:
- name: ftp-persistent-storage
persistentVolumeClaim:
claimName: ftp-storage
- name: ftp-persistent-users
persistentVolumeClaim:
claimName: ftp-storage-users
and volumes:
apiVersion: v1
kind: PersistentVolume
metadata:
name: ftp-storage
namespace: influx
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
claimRef:
namespace: influx
name: ftp-storage
hostPath:
path: /data/ftp/storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: ftp-storage-user
namespace: influx
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
claimRef:
namespace: influx
name: ftp-storage-user
hostPath:
path: /data/ftp/config
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ftp-storage
namespace: influx
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ftp-storage-user
namespace: influx
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
First, I'm a bit frustrated at having to use a 1 Gi volume for a single file, and then I get:
➜ proFTP git:(master) ✗ kubectl logs -n influx ftp-local-67cfd77497-7rt5m
2019-10-03 07:35:23,853 ftp-local-67cfd77497-7rt5m proftpd[1]: mod_auth_file/1.0: unable to use AuthUserFile '/etc/proftpd/ftpd.passwd': Is a directory
2019-10-03 07:35:23,853 ftp-local-67cfd77497-7rt5m proftpd[1]: fatal: AuthUserFile: unable to use /etc/proftpd/ftpd.passwd: Is a directory on line 193 of '/etc/proftpd/proftpd.conf'
Here are my volumes:
git:(devops) ✗ kubectl get pv,pvc -n influx
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/ftp-storage 1Gi RWO Retain Bound influx/ftp-storage 22s
persistentvolume/ftp-storage-user 1Gi RWO Retain Bound influx/ftp-storage-user 22s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/ftp-storage Bound ftp-storage 1Gi RWO standard 22s
persistentvolumeclaim/ftp-storage-user Bound ftp-storage-user 1Gi RWO standard 22s
Now describing PVs:
➜ git:(devops) ✗ kubectl describe pv -n influx ftp-storage
Name: ftp-storage
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"ftp-storage"},"spec":{"accessModes":["ReadWriteOnce"],"c...
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Bound
Claim: influx/ftp-storage
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /data/ftp/storage
HostPathType:
Events: <none>
➜ git:(devops) ✗ kubectl describe pv -n influx ftp-storage-user
Name: ftp-storage-user
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"ftp-storage-user"},"spec":{"accessModes":["ReadWriteOnce...
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Bound
Claim: influx/ftp-storage-user
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /data/ftp/config
HostPathType:
Events: <none>
What am I doing wrong?
Related
I am trying to deploy this app on Kubernetes (GKE). Here is the relevant part of the kompose-generated deployment manifest:
annotations:
kompose.cmd: kompose convert -f docker-compose.yml
kompose.version: 1.26.1 (a9d05d509)
creationTimestamp: null
labels:
io.kompose.network/taiga: "true"
io.kompose.service: taiga-gateway
spec:
containers:
- image: nginx:1.19-alpine
name: taiga-gateway
ports:
- containerPort: 80
resources: {}
volumeMounts:
- mountPath: /etc/nginx/conf.d/default.conf
name: taiga-gateway-claim0
- mountPath: /taiga/static
name: taiga-static-data
- mountPath: /taiga/media
name: taiga-media-data
restartPolicy: Always
volumes:
- name: taiga-gateway-claim0
persistentVolumeClaim:
claimName: taiga-gateway-claim0
- name: taiga-static-data
persistentVolumeClaim:
claimName: taiga-static-data
- name: taiga-media-data
persistentVolumeClaim:
claimName: taiga-media-data
status: {}
In Kubernetes I am creating the volumes this way:
requests:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: taiga-static-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: taiga-media-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: taiga-db-data
..etc
So I end up with the following error:
Normal   Scheduled               4m39s  default-scheduler        Successfully assigned default/taiga-gateway-77976dc77-ppbvb to gke-taiga-cluster-default-pool-cccc58aa-jwfr
Warning  FailedAttachVolume      4m39s  attachdetach-controller  Multi-Attach error for volume "pvc-a28ae890-fe9d-4985-8183-ce54d7ed57d8" Volume is already used by pod(s) taiga-async-6c7d9dbd7b-vl79s
Warning  FailedAttachVolume      4m39s  attachdetach-controller  Multi-Attach error for volume "pvc-3cab3ec0-88d9-4c70-96c8-97d7b48e755c" Volume is already used by pod(s) taiga-async-6c7d9dbd7b-vl79s
Normal   SuccessfulAttachVolume  4m32s  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-d2e39951-094f-447c-86ff-c36639786111"
Warning  FailedMount             2m36s  kubelet                  Unable to attach or mount volumes: unmounted volumes=[taiga-static-data taiga-media-data], unattached volumes=[taiga-static-data taiga-media-data kube-api-access-6lw4n taiga-gateway-claim0]: timed out waiting for the condition
Warning  FailedMount             19s    kubelet                  Unable to attach or mount volumes: unmounted volumes=[taiga-media-data taiga-static-data], unattached volumes=[taiga-media-data kube-api-access-6lw4n taiga-gateway-claim0 taiga-static-data]: timed out waiting for the condition
PVC status
testuser#cloudshell:~/kube-deploy (taiga-pursuit)$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
taiga-async-rabbitmq-data Bound pvc-3c8eb896-1bca-4047-915f-01e9a2ca5911 1Gi RWO standard 23m
taiga-db-data Bound pvc-ab03a878-1783-4f03-92bc-c9ef96a7d36d 5Gi RWO standard 23m
taiga-events-rabbitmq-data Bound pvc-0c4da3b5-bfb6-4cd9-b45d-e4e7b44a83e5 1Gi RWO standard 23m
taiga-gateway-claim0 Bound pvc-d2e39951-094f-447c-86ff-c36639786111 5Gi RWO standard 24m
taiga-media-data Bound pvc-a28ae890-fe9d-4985-8183-ce54d7ed57d8 5Gi RWO standard 23m
taiga-static-data Bound pvc-3cab3ec0-88d9-4c70-96c8-97d7b48e755c 5Gi RWO standard 23m
Is the warning that these volumes are being shared what is causing the mount to fail, or is it something else? I am not able to figure it out.
Despite 21Gi being set on the claimed volume, the pod sees 8E (the full possible size of EFS).
Is this OK, and is the storage size actually limited? Or did I make a mistake in the configuration that needs to be changed, or is it something else?
I would appreciate your help.
Volume:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM
monitoring-eks-falcon-victoriametrics 21Gi RWX Retain Bound victoriametrics/victoriametrics-data
Pod:
Filesystem Size Used Available Use% Mounted on
fs-efs.us-....s.com:/ 8.0E 0 8.0E 0% /data
Persistent Volumes
kind: PersistentVolume
apiVersion: v1
metadata:
name: monitoring-eks-falcon-victoriametrics
uid: f43e12d0-77ab-4530-8c9e-cfbd3c641467
resourceVersion: '28847'
labels:
Name: victoriametrics
purpose: victoriametrics
annotations:
pv.kubernetes.io/bound-by-controller: 'yes'
finalizers:
- kubernetes.io/pv-protection
spec:
capacity:
storage: 21Gi
nfs:
server: fs-.efs.us-east-1.amazonaws.com
path: /
accessModes:
- ReadWriteMany
claimRef:
kind: PersistentVolumeClaim
namespace: victoriametrics
name: victoriametrics-data
uid: 8972e897-4e16-a64f-4afd8f90fa89
apiVersion: v1
resourceVersion: '28842'
persistentVolumeReclaimPolicy: Retain
storageClassName: efs-sc
volumeMode: Filesystem
Persistent Volume Claims
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: victoriametrics-data
namespace: victoriametrics
uid: 8972e897-4e16-a64f-4afd8f90fa89
resourceVersion: '28849'
labels:
Name: victoriametrics
purpose: victoriametrics
annotations:
Description: Volume for Victoriametrics DB
pv.kubernetes.io/bind-completed: 'yes'
finalizers:
- kubernetes.io/pvc-protection
spec:
accessModes:
- ReadWriteMany
selector:
matchLabels:
k8s-app: victoriametrics
purpose: victoriametrics
matchExpressions:
- key: k8s-app
operator: In
values:
- victoriametrics
resources:
limits:
storage: 21Gi
requests:
storage: 21Gi
volumeName: monitoring-eks-falcon-victoriametrics
storageClassName: efs-sc
volumeMode: Filesystem
status:
phase: Bound
accessModes:
- ReadWriteMany
capacity:
storage: 21Gi
Pod deployment
kind: Deployment
...
spec:
...
spec:
volumes:
- name: victoriametrics-data
persistentVolumeClaim:
claimName: victoriametrics-data
containers:
- name: victoriametrics
...
volumeMounts:
- name: victoriametrics-data
mountPath: /data
mountPropagation: None
...
The number "8E" is just an indicator; it is not a real quota. AWS EFS does not support quotas (e.g. FATTR4_QUOTA_AVAIL_HARD), so it generally means you have "unlimited" space on this mount. There is nothing wrong with your spec: the number specified in the PVC's resources.requests.storage is only used to match the PV's capacity.storage. It doesn't mean you can only write 21GB on the EFS mount.
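If you want to double-check this, you can compare what the API reports for the claim with what the mount reports inside the pod (the pod name below is a placeholder):
# 21Gi, taken from the PV's capacity.storage
kubectl get pvc victoriametrics-data -n victoriametrics
# the nominal 8.0E reported by the EFS/NFS mount itself
kubectl exec -n victoriametrics <victoriametrics-pod> -- df -h /data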
I am working with Docker Desktop on Windows 10, as well as the Windows Subsystem for Linux (WSL). I have a containerized app that I deploy to the local K8s cluster (courtesy of Docker Desktop). Typical story: all was working fine until one day a Docker Desktop update came along and ruined everything. The DD version I have now is 2.3.0.2 stable.
I have a MySQL pod with a PV, a PVC, and a storage class defined. When I deploy my app to the cluster I can see that the PV and PVC are bound, but the pod is stuck in the ContainerCreating state:
$ kubectl describe pod mysql-6779d8fb8b-d25wz
Name: mysql-6779d8fb8b-d25wz
Namespace: typo3-connector
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Wed, 13 May 2020 14:21:43 +0200
Labels: app=mysql
pod-template-hash=6779d8fb8b
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-6779d8fb8b
Containers:
mysql:
Container ID:
Image: lw-mysql
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables from:
mysql-credentials Secret Optional: false
Environment: <none>
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wr6g9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-wr6g9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wr6g9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
Warning FailedScheduling <unknown> default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
Normal Scheduled <unknown> default-scheduler Successfully assigned typo3-connector/mysql-6779d8fb8b-d25wz to docker-desktop
Warning FailedMount 35s (x8 over 99s) kubelet, docker-desktop MountVolume.NewMounter initialization failed for volume "mysql-pv" : path "/c/kubernetes/typo3-8/mysql-storage/" does not exist
The error is
MountVolume.NewMounter initialization failed for volume "mysql-pv" : path "/c/kubernetes/typo3-8/mysql-storage/" does not exist
But the path actually exists on disk (in WSL that is).
The pv:
$ kubectl describe pv mysql-pv
Name: mysql-pv
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"mysql-pv"},"spec":{"accessModes":["ReadWriteOnce"],"capa...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Bound
Claim: typo3-connector/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [docker-desktop]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /c/kubernetes/typo3-8/mysql-storage/
Events: <none>
The pvc:
$ kubectl describe pvc mysql-pv-claim
Name: mysql-pv-claim
Namespace: typo3-connector
StorageClass: local-storage
Status: Bound
Volume: mysql-pv
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mysql-pv-claim","namespace":"typo3-connector"},"spe...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 2Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: mysql-6779d8fb8b-d25wz
Events: <none>
I tried running it from PowerShell but no luck. I get the same result.
But it was all working fine before the update.
Is it a configuration-based issue?
EDIT
Here's the manifest file:
apiVersion: v1
kind: Namespace
metadata:
name: typo3-connector
---
apiVersion: v1
data:
MYSQL_PASSWORD: ZHVtbXk=
MYSQL_ROOT_PASSWORD: ZHVtbXk=
MYSQL_USER: ZHVtbXk=
kind: Secret
metadata:
name: mysql-credentials
namespace: typo3-connector
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
labels:
app: mysql
name: mysql
namespace: typo3-connector
spec:
ports:
- name: mysql-backend
port: 3306
protocol: TCP
selector:
app: mysql
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
namespace: typo3-connector
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- envFrom:
- secretRef:
name: mysql-credentials
image: mysql:5.6
imagePullPolicy: IfNotPresent
name: mysql
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql-persistent-storage
imagePullSecrets:
- name: lwdockerregistry
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 2Gi
local:
path: /c/kubernetes/typo3-8/mysql-storage/
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- docker-desktop
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
namespace: typo3-connector
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: local-storage
You must use this format for the path:
/run/desktop/mnt/host/c/someDir/volumeDir
So in your case it is:
/run/desktop/mnt/host/c/kubernetes/typo3-8/mysql-storage/
source for solution
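In other words, the PV from the manifest above becomes something like this (only the path changes; everything else stays as posted):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 2Gi
  local:
    # Docker Desktop's VM exposes the Windows C: drive under /run/desktop/mnt/host/c
    path: /run/desktop/mnt/host/c/kubernetes/typo3-8/mysql-storage/
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage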
UPDATE: You really don't want to use Windows host directories inside a WSL2 distro, because the performance is really poor. Use the WSL2 distro's own directories instead; you can access them just as easily from the Windows host at \\wsl$.
As described in the following example taken from here, the path shouldn't end with /.
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-local-pv
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /mnt/disks/vol1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- my-node
Changing your path to path: /c/kubernetes/typo3-8/mysql-storage does the trick, and your PVC works as designed.
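After recreating the PV with the corrected path (the volume source of an existing PV generally can't be edited in place), you can confirm that the claim is still bound and that the pod leaves the ContainerCreating state:
kubectl get pv mysql-pv
kubectl get pvc mysql-pv-claim -n typo3-connector
kubectl get pods -n typo3-connector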
I'm trying to persist my Jenkins jobs onto vSphere storage when I delete the deployments/services.
I've tried the standard approach: I used a StorageClass, then made a PersistentVolumeClaim which is referenced in the .yml file that creates the deployments.
storage-class.yml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: mystorage
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: zeroedthick
persistent-volume-claim.yml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc0003
spec:
storageClassName: mystorage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 15Gi
jenkins.yml:
---
apiVersion: v1
kind: Service
metadata:
name: jenkins-auto-ci
labels:
app: jenkins-auto-ci
spec:
type: NodePort
ports:
- port: 80
targetPort: 8080
selector:
app: jenkins-auto-ci
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jenkins-auto-ci
spec:
replicas: 1
template:
metadata:
labels:
app: jenkins-auto-ci
spec:
containers:
- name: jenkins-auto-ci
image: jenkins
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
ports:
- name: http-port
containerPort: 80
- name: jnlp-port
containerPort: 50000
volumeMounts:
- name: jenkins-home
mountPath: "/var"
volumes:
- name: jenkins-home
persistentVolumeClaim:
claimName: pvc0003
I expect the jenkins jobs to persist when I delete and recreate the deployments.
You should create a VMDK, which is a Virtual Machine Disk.
You can do that using govc, which is the vSphere CLI:
govc datastore.disk.create -ds datastore1 -size 2G volumes/myDisk.vmdk
Or using the ESXi CLI, by SSHing into the host as root and executing:
vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
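To confirm the disk exists before wiring it into Kubernetes, you can list the datastore folder with govc (this assumes your GOVC_URL and credential environment variables are already set):
govc datastore.ls -ds datastore1 volumes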
Once this is done you should create your PV; let's call it vsphere_pv.yaml. It might look like the following:
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
vsphereVolume:
volumePath: "[datastore1] volumes/myDisk"
fsType: ext4
The datastore1 in this example was created in the root folder of vCenter; if you have it in a different location, you need to change the volumePath. If it's located in a DatastoreCluster, then set volumePath to "[DatastoreCluster/datastore1] volumes/myDisk", as sketched below.
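A sketch of the same PV with the DatastoreCluster-style path (only the volumePath string changes):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    # same disk, referenced through the DatastoreCluster
    volumePath: "[DatastoreCluster/datastore1] volumes/myDisk"
    fsType: ext4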
Apply the YAML to Kubernetes with kubectl apply -f vsphere_pv.yaml.
You can check that it was created by describing it: kubectl describe pv pv0001.
Now you need a PVC, let's call it vsphere_pvc.yaml, to consume the PV:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc0001
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
Apply the YAML to Kubernetes with kubectl apply -f vsphere_pvc.yaml.
You can check that it was created by describing it: kubectl describe pvc pvc0001.
Once this is done, your deployment YAML might look like the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jenkins-auto-ci
spec:
replicas: 1
template:
metadata:
labels:
app: jenkins-auto-ci
spec:
containers:
- name: jenkins-auto-ci
image: jenkins
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
ports:
- name: http-port
containerPort: 80
- name: jnlp-port
containerPort: 50000
volumeMounts:
- name: jenkins-home
mountPath: "/var"
volumes:
- name: jenkins-home
persistentVolumeClaim:
claimName: pvc0001
All of this is nicely explained in the VMware GitHub project vsphere-storage-for-kubernetes.
I'm trying to create two deployments, one for WordPress and the other for MySQL, which refer to two different Persistent Volumes.
Sometimes, while deleting and recreating the volumes and deployments, the MySQL deployment populates the WordPress volume (ending up with a database in the wordpress-volume directory).
This is clearer when you do kubectl get pv --namespace my-namespace:
mysql-volume 2Gi RWO Retain Bound flashart-it/wordpress-volume-claim manual 1h
wordpress-volume 2Gi RWO Retain Bound flashart-it/mysql-volume-claim manual
I'm pretty sure the settings are OK. Please find the YAML files below.
Persistent Volume Claims + Persistent Volumes
kind: PersistentVolume
apiVersion: v1
metadata:
namespace: my-namespace
name: mysql-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /path/to/mount/mysql-volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
namespace: my-namespace
name: mysql-volume-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
namespace: my-namespace
name: wordpress-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /path/to/mount/wordpress-volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
namespace: my-namespace
name: wordpress-volume-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
Deployments
kind: Deployment
apiVersion: apps/v1
metadata:
name: wordpress
namespace: my-namespace
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: frontend
strategy:
type: Recreate
template:
metadata:
namespace: my-namespace
labels:
app: wordpress
tier: frontend
spec:
containers:
- image: wordpress:5.0-php7.1-apache
name: wordpress
env:
# ...
ports:
# ...
volumeMounts:
- name: wordpress-volume
mountPath: /var/www/html
volumes:
- name: wordpress-volume
persistentVolumeClaim:
claimName: wordpress-volume-claim
---
kind: Deployment
apiVersion: apps/v1
metadata:
namespace: my-namespace
name: wordpress-mysql
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: mysql
strategy:
type: Recreate
template:
metadata:
namespace: my-namespace
labels:
app: wordpress
tier: mysql
spec:
containers:
- image: mysql:5.7
name: mysql
env:
# ...
ports:
# ...
volumeMounts:
- name: mysql-volume
mountPath: /var/lib/mysql
volumes:
- name: mysql-volume
persistentVolumeClaim:
claimName: mysql-volume-claim
This is expected behavior in Kubernetes. A PVC can bind to any available PV, as long as the storage class matches, the access mode matches, and the storage size is sufficient. Names are not used to match a PVC to a PV.
A possible solution for your scenario is to use a label selector on the PVC to filter qualified PVs.
First, add a label to the PV (in this case, app: mysql):
kind: PersistentVolume
apiVersion: v1
metadata:
name: mysql-volume
labels:
app: mysql
Then, add a label selector in the PVC to filter PVs:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
namespace: my-namespace
name: mysql-volume-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
selector:
matchLabels:
app: mysql
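For completeness, the same pattern applied to the WordPress pair would look like this (the app: wordpress label is illustrative, analogous to the mysql one above):
kind: PersistentVolume
apiVersion: v1
metadata:
  namespace: my-namespace
  name: wordpress-volume
  labels:
    app: wordpress
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /path/to/mount/wordpress-volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: my-namespace
  name: wordpress-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      app: wordpress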