Why can't Kubernetes find my ConfigMap? - docker

I have defined a K8s configuration which deploys a Metricbeat container. Below is the configuration file. But I get an error when I run kubectl describe pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m24s default-scheduler Successfully assigned default/metricbeat-6467cc777b-jrx9s to ip-192-168-44-226.ap-southeast-2.compute.internal
Warning FailedMount 3m21s kubelet Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[default-token-4dxhl config modules]: timed out waiting for the condition
Warning FailedMount 74s (x10 over 5m24s) kubelet MountVolume.SetUp failed for volume "config" : configmap "metricbeat-daemonset-config" not found
Warning FailedMount 67s kubelet Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[config modules default-token-4dxhl]: timed out waiting for the condition
Based on the error message, it says configmap "metricbeat-daemonset-config" not found, but it does exist in the configuration file below. Why does it report this error?
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false
    processors:
      - add_cloud_metadata:
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  labels:
    k8s-app: metricbeat
data:
  aws.yml: |-
    - module: aws
      access_key_id: 'xxxx'
      secret_access_key: 'xxxx'
      period: 600s
      regions:
        - ap-southeast-2
      metricsets:
        - elb
        - natgateway
        - rds
        - transitgateway
        - usage
        - vpn
        - cloudwatch
      metrics:
        - namespace: "*"
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricbeat
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      containers:
      - name: metricbeat
        image: elastic/metricbeat:7.11.1
        env:
        - name: ELASTICSEARCH_HOST
          value: es-entrypoint
        - name: ELASTICSEARCH_PORT
          value: '9200'
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: metricbeat-daemonset-config
      - name: modules
        configMap:
          defaultMode: 0640
          name: metricbeat-daemonset-modules

There's a good chance that the other resources are ending up in namespace default because they do not specify a namespace, but the config does (kube-system). You should probably put all of this in its own metricbeat namespace.
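To confirm the mismatch, kubectl get configmap metricbeat-daemonset-config -n kube-system will find the ConfigMap, while the same command with -n default will not; the Deployment (and therefore the Pod) lands in default. A minimal sketch of the fix along the lines suggested above, assuming a dedicated metricbeat namespace; only the metadata of each object changes, the data and spec sections stay as in the question:
apiVersion: v1
kind: Namespace
metadata:
  name: metricbeat
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: metricbeat   # same namespace for every object in the file
  labels:
    k8s-app: metricbeat
# data: ... unchanged from the question
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricbeat
  namespace: metricbeat   # a Pod can only mount ConfigMaps from its own namespace
  labels:
    k8s-app: metricbeat
# spec: ... unchanged from the question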

Related

Syslog from AKS cluster nodes - Using App Armor & Seccomp

How can I retrieve /var/log/syslog logs from AKS cluster nodes?
Azure recommends the use of the Linux security features AppArmor and Seccomp. Both of them produce log entries in /var/log/syslog on each cluster node where a running workload has a profile attached.
I've run a test with an nginx container using both, and I can see the corresponding log entries on my node:
Apr 29 10:37:45 aks-agentpool-31529777-vmss000000 kernel: [ 6248.505152] audit: type=1326 audit(1651228665.090:23650): auid=4294967295 uid=1001 gid=2000 ses=4294967295 pid=2016 comm="docker-entrypoi" exe="/bin/dash" sig=0 arch=c000003e syscall=3 compat=0 ip=0x7f90c03fda67 code=0x7ffc0000
Apr 29 10:37:45 aks-agentpool-31529777-vmss000000 kernel: [ 6248.505154] audit: type=1326 audit(1651228665.090:23651): auid=4294967295 uid=1001 gid=2000 ses=4294967295 pid=2016 comm="docker-entrypoi" exe="/bin/dash" sig=0 arch=c000003e syscall=257 compat=0 ip=0x7f90c03fdb84 code=0x7ffc0000
Apr 29 10:42:46 aks-agentpool-31529777-vmss000000 kernel: [ 6550.189592] audit: type=1326 audit(1651228966.775:25032): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=9028 comm="runc:[2:INIT]" exe="/" sig=0 arch=c000003e syscall=257 compat=0 ip=0x561a3e78fa2a code=0x7ffc0000
Syslogs are supposed to be retrievable by configuring the monitoring agent at the destination Log Analytics workspace and enabling the syslog facility (Agents Configuration).
I've also enabled Container Insights in the cluster and Diagnostic Settings for every log category available. Still, I have no way to see those logs without connecting directly to the node, and the node VM appears as a not-connected source for the Log Analytics workspace (Workspace Data Sources).
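As a stop-gap, the entries can at least be inspected without SSHing to the node by running a small pod that mounts the node's /var/log over a hostPath; this is only a local check and ships nothing to the workspace. A minimal sketch (the pod name and namespace are my own choices):
apiVersion: v1
kind: Pod
metadata:
  name: syslog-reader          # hypothetical helper pod, not part of the setup below
  namespace: testing
spec:
  nodeSelector:
    kubernetes.io/hostname: aks-agentpool-31529777-vmss000000
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "tail -f /host/var/log/syslog"]
    volumeMounts:
    - name: varlog
      mountPath: /host/var/log
      readOnly: true
  volumes:
  - name: varlog
    hostPath:
      path: /var/log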
Env:
Kubernetes v1.21.9
Node: Linux aks-agentpool-31529777-vmss000000 5.4.0-1074-azure #77~18.04.1-Ubuntu SMP
1 pod with one nginx container + two daemonsets and a configMap to deploy the profiles.
apiVersion: v1
kind: Namespace
metadata:
  name: testing
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: seccomp-profiles-map
  namespace: testing
data:
  mynginx: |-
    {
      "defaultAction": "SCMP_ACT_LOG"
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: apparmor-profiles
  namespace: testing
data:
  mynginx: |-
    # vim:syntax=apparmor
    #include <tunables/global>
    profile mynginx flags=(complain) {
      #include <abstractions/base>
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: seccomp-profiles-loader
  # Namespace must match that of the ConfigMap.
  namespace: testing
  labels:
    daemon: seccomp-profiles-loader
spec:
  selector:
    matchLabels:
      daemon: seccomp-profiles-loader
  template:
    metadata:
      name: seccomp-profiles-loader
      labels:
        daemon: seccomp-profiles-loader
    spec:
      automountServiceAccountToken: false
      initContainers:
      - name: seccomp-profile-loader
        image: busybox
        command: ["/bin/sh"]
        # args: ["/profiles/*", "/var/lib/kubelet/seccomp/profiles/"]
        args: ["-c", "cp /profiles/* /var/lib/kubelet/seccomp/profiles/"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: node-profiles-folder
          mountPath: /var/lib/kubelet/seccomp/profiles
          readOnly: false
        - name: profiles
          mountPath: /profiles
          readOnly: true
      containers:
      - name: seccomp-profiles-pause
        # https://github.com/kubernetes/kubernetes/tree/master/build/pause
        image: gcr.io/google_containers/pause
        securityContext:
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
      volumes:
      - name: node-profiles-folder
        hostPath:
          path: /var/lib/kubelet/seccomp/profiles
      - name: profiles
        configMap:
          name: seccomp-profiles-map
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: apparmor-loader
  # Namespace must match that of the ConfigMap.
  namespace: testing
  labels:
    daemon: apparmor-loader
spec:
  selector:
    matchLabels:
      daemon: apparmor-loader
  template:
    metadata:
      name: apparmor-loader
      labels:
        daemon: apparmor-loader
    spec:
      containers:
      - name: apparmor-loader
        image: google/apparmor-loader:latest
        args:
        # Tell the loader to poll the /profiles directory every 60 seconds.
        - -poll
        - 60s
        - /profiles
        securityContext:
          # The loader requires root permissions to actually load the profiles.
          privileged: true
        volumeMounts:
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: apparmor-includes
          mountPath: /etc/apparmor.d
          readOnly: true
        - name: profiles
          mountPath: /profiles
          readOnly: true
      volumes:
      # The /sys directory must be mounted to interact with the AppArmor module.
      - name: sys
        hostPath:
          path: /sys
      # The /etc/apparmor.d directory is required for most apparmor include templates.
      - name: apparmor-includes
        hostPath:
          path: /etc/apparmor.d
      # Map in the profile data.
      - name: profiles
        configMap:
          name: apparmor-profiles
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: testing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        container.apparmor.security.beta.kubernetes.io/nginx: localhost/mynginx
    spec:
      automountServiceAccountToken: false
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: nginx
        securityContext:
          seccompProfile:
            type: Localhost
            localhostProfile: profiles/mynginx
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        env:
        - name: PORT
          value: '3000'
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 3000
  selector:
    app: nginx

Fluent Bit cannot reach DNS for Azure Log Analytics

I'm running Fluent Bit as a DaemonSet on Azure Kubernetes Service, with output going to Azure Log Analytics. When Fluent Bit tries to send logs, it fails to reach the DNS server:
getaddrinfo(host='XXX.ods.opinsights.azure.com', err=12): Timeout while contacting DNS servers
If I manually add the IP and XXX.ods.opinsights.azure.com to /etc/hosts, everything works.
If I ssh into the fluent-bit pod and run wget XXX.ods.opinsights.azure.com, host resolution works as well.
What could be wrong?
My configs:
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: airflow
  labels:
    k8s-app: fluent-bit
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush          1
        Log_Level      info
        Daemon         off
        HTTP_Server    On
        HTTP_Listen    0.0.0.0
        HTTP_Port      2020
    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-azure.conf
  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Merge_Log           On
        K8S-Logging.Parser  Off
        K8S-Logging.Exclude Off
  output-azure.conf: |
    [OUTPUT]
        Name        azure
        Match       *
        Customer_ID xxx
        Shared_Key  yyy
DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentbit
  namespace: airflow
  labels:
    app.kubernetes.io/name: fluentbit
spec:
  selector:
    matchLabels:
      name: fluentbit
  template:
    metadata:
      labels:
        name: fluentbit
    spec:
      serviceAccountName: fluent-bit
      containers:
      - name: fluent-bit
        imagePullPolicy: Always
        image: fluent/fluent-bit:1.8-debug
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/
        resources:
          limits:
            memory: 1500Mi
          requests:
            cpu: 500m
            memory: 500Mi
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluent-bit-config
        configMap:
          name: fluent-bit-config
I ran into a similar issue today; downgrading Fluent Bit to 1.8.4 resolved it (https://github.com/fluent/fluent-bit/issues/4050).
In your case you would need to change the container image tag to "1.8.4-debug" - hope this helps!
I can confirm the findings above: same result with 1.8.5; I had to go back to 1.8.4.
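For reference, applying that workaround to the DaemonSet above only means pinning the image tag, roughly:
containers:
- name: fluent-bit
  imagePullPolicy: Always
  image: fluent/fluent-bit:1.8.4-debug   # pinned per the answers above, instead of the floating 1.8-debug tag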

Zookeeper pod can't access mounted persistent volume claim

I'm stuck on an annoying issue where my pod can't access the mounted persistent volume.
Kubeadm: v1.19.2
Docker: 19.03.13
Zookeeper image: library/zookeeper:3.6
Cluster info: Locally hosted, no Cloud Provider
K8s configuration:
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - port: 2888
    targetPort: 2888
    name: server
    protocol: TCP
  - port: 3888
    targetPort: 3888
    name: leader-election
    protocol: TCP
  clusterIP: ""
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - name: client
    protocol: TCP
    port: 2181
    targetPort: 2181
  type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      volumes:
      - name: zoo-config
        configMap:
          name: zoo-config
      - name: datadir
        persistentVolumeClaim:
          claimName: zoo-pvc
      containers:
      - name: zookeeper
        imagePullPolicy: Always
        image: "library/zookeeper:3.6"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper/data
        - name: zoo-config
          mountPath: /conf
      securityContext:
        fsGroup: 2000
        runAsUser: 1000
        runAsNonRoot: true
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: local-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: local-storage
      resources:
        requests:
          storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zoo-config
  namespace: default
data:
  zoo.cfg: |
    tickTime=10000
    dataDir=/var/lib/zookeeper/data
    clientPort=2181
    initLimit=10
    syncLimit=4
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: zoo-pv
  labels:
    type: local
spec:
  storageClassName: local-storage
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data"
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node-name>
Purely as a test, I've tried running the pod as root with the following security context, which I know is a terrible idea. However, this caused a bunch of other issues.
securityContext:
  fsGroup: 0
  runAsUser: 0
Once the pod starts up, the logs contain the following:
Zookeeper JMX enabled by default
Using config: /conf/zoo.cfg
<log4j Warnings>
Unable to access datadir, exiting abnormally
Inspecting the pod provides me with the following information:
~$ kubectl describe pod/zk-0
Name: zk-0
Namespace: default
Priority: 0
Node: <node>
Start Time: Sat, 26 Sep 2020 15:48:00 +0200
Labels: app=zk
controller-revision-hash=zk-6c68989bd
statefulset.kubernetes.io/pod-name=zk-0
Annotations: <none>
Status: Running
IP: <IP>
IPs:
IP: <IP>
Controlled By: StatefulSet/zk
Containers:
zookeeper:
Container ID: docker://281e177d677394604785542c231d21b71f1666a22e74c1c10ef88491dad7a522
Image: library/zookeeper:3.6
Image ID: docker-pullable://zookeeper#sha256:6c051390cfae7958ff427834937c353fc6c34484f6a84b3e4bc8c512b53a16f6
Ports: 2181/TCP, 2888/TCP, 3888/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 26 Sep 2020 16:04:26 +0200
Finished: Sat, 26 Sep 2020 16:04:27 +0200
Ready: False
Restart Count: 8
Requests:
cpu: 500m
memory: 1Gi
Environment: <none>
Mounts:
/conf from zoo-config (rw)
/var/lib/zookeeper/data from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-88x56 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-zk-0
ReadOnly: false
zoo-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: zoo-config
Optional: false
default-token-88x56:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-88x56
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned default/zk-0 to <node>
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.932381527s
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.960610662s
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.959935633s
Normal Created 16m (x4 over 17m) kubelet Created container zookeeper
Normal Pulled 16m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.92551645s
Normal Started 16m (x4 over 17m) kubelet Started container zookeeper
Normal Pulling 15m (x5 over 17m) kubelet Pulling image "library/zookeeper:3.6"
Warning BackOff 2m35s (x71 over 17m) kubelet Back-off restarting failed container
To me, it seems like the pod has full rw access to the volume, so I'm unsure why it's still refusing to access the directory. Any help will be appreciated!
After quite some digging, I finally figured out why it wasn't working. The logs were actually telling me all I needed to know in the end: the mounted persistentVolumeClaim simply did not have the file permissions needed to read from the mounted hostPath directory /mnt/data.
To fix this, in a somewhat hacky way, I gave read, write and execute permissions to all:
chmod 777 /mnt/data
An overview can be found here.
This is definitely not the most secure way of fixing the issue, and I would strongly advise against using it in any production-like environment.
A better approach would probably be the following:
sudo usermod -a -G 1000 1000
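If changing permissions on the node by hand is not an option, an alternative sketch (assuming the runAsUser 1000 / fsGroup 2000 from the StatefulSet above are what need access) is an initContainer that fixes ownership of the data directory before ZooKeeper starts:
initContainers:
- name: fix-datadir-permissions
  image: busybox
  command: ["sh", "-c", "chown -R 1000:2000 /var/lib/zookeeper/data"]
  securityContext:
    runAsUser: 0          # must run as root to chown the hostPath-backed volume
    runAsNonRoot: false   # overrides the pod-level runAsNonRoot: true for this container only
  volumeMounts:
  - name: datadir
    mountPath: /var/lib/zookeeper/data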

WSL Kubernetes Pod stuck in ContainerCreating state due to volume mount

I am working with Docker Desktop on Windows 10 as well as the Windows Subsystem for Linux (WSL). I have a containerized app that I deploy to the local K8s cluster (courtesy of Docker Desktop). Typical story: all was working fine until one day a Docker Desktop update came along and ruined everything. The DD version I have now is 2.3.0.2 stable.
I have a MySQL pod with a defined PV, PVC and storage class. When I deploy my app to the cluster, I can see that the PV and PVC are bound, but the pod is stuck in the ContainerCreating state:
$ kubectl describe pod mysql-6779d8fb8b-d25wz
Name: mysql-6779d8fb8b-d25wz
Namespace: typo3-connector
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Wed, 13 May 2020 14:21:43 +0200
Labels: app=mysql
pod-template-hash=6779d8fb8b
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-6779d8fb8b
Containers:
mysql:
Container ID:
Image: lw-mysql
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables from:
mysql-credentials Secret Optional: false
Environment: <none>
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wr6g9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-wr6g9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wr6g9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
Warning FailedScheduling <unknown> default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
Normal Scheduled <unknown> default-scheduler Successfully assigned typo3-connector/mysql-6779d8fb8b-d25wz to docker-desktop
Warning FailedMount 35s (x8 over 99s) kubelet, docker-desktop MountVolume.NewMounter initialization failed for volume "mysql-pv" : path "/c/kubernetes/typo3-8/mysql-storage/" does not exist
The error is
MountVolume.NewMounter initialization failed for volume "mysql-pv" : path "/c/kubernetes/typo3-8/mysql-storage/" does not exist
But the path actually exists on disk (in WSL that is).
The pv:
$ kubectl describe pv mysql-pv
Name: mysql-pv
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"mysql-pv"},"spec":{"accessModes":["ReadWriteOnce"],"capa...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Bound
Claim: typo3-connector/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [docker-desktop]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /c/kubernetes/typo3-8/mysql-storage/
Events: <none>
The pvc:
$ kubectl describe pvc mysql-pv-claim
Name: mysql-pv-claim
Namespace: typo3-connector
StorageClass: local-storage
Status: Bound
Volume: mysql-pv
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mysql-pv-claim","namespace":"typo3-connector"},"spe...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 2Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: mysql-6779d8fb8b-d25wz
Events: <none>
I tried running it from PowerShell, but no luck; I get the same result.
It was all working fine before the update.
Is it a configuration-based issue?
EDIT
Here's the manifest file:
apiVersion: v1
kind: Namespace
metadata:
  name: typo3-connector
---
apiVersion: v1
data:
  MYSQL_PASSWORD: ZHVtbXk=
  MYSQL_ROOT_PASSWORD: ZHVtbXk=
  MYSQL_USER: ZHVtbXk=
kind: Secret
metadata:
  name: mysql-credentials
  namespace: typo3-connector
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: typo3-connector
spec:
  ports:
  - name: mysql-backend
    port: 3306
    protocol: TCP
  selector:
    app: mysql
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: typo3-connector
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: mysql-credentials
        image: mysql:5.6
        imagePullPolicy: IfNotPresent
        name: mysql
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-persistent-storage
      imagePullSecrets:
      - name: lwdockerregistry
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  local:
    path: /c/kubernetes/typo3-8/mysql-storage/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - docker-desktop
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: typo3-connector
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local-storage
You must use this format for the path:
/run/desktop/mnt/host/c/someDir/volumeDir
So in your case it is:
/run/desktop/mnt/host/c/kubernetes/typo3-8/mysql-storage/
source for solution
UPDATE: You really don't want to use Windows host directories inside the WSL2 distro, because the performance is really poor. Use the WSL2 distro directories instead; you can access them just as easily from the Windows host at \\wsl$.
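Applied to the PersistentVolume from the question, roughly only the path line changes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  storageClassName: local-storage
  persistentVolumeReclaimPolicy: Retain
  local:
    # the Windows directory C:\kubernetes\typo3-8\mysql-storage as seen from the Docker Desktop VM
    path: /run/desktop/mnt/host/c/kubernetes/typo3-8/mysql-storage/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - docker-desktop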
As described in the following example taken from here, the path shouldn't end with /.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node
Changing your path to path: /c/kubernetes/typo3-8/mysql-storage does the trick and your PVC works as designed.

Launching ProFtpd with Kubernetes: Issue mounting volumes

I would like to run a local FTP server, ProFTPD, on minikube using the Docker image vipconsult/proftpd.
Here is my deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
  name: ftp-local
  namespace: influx
  labels:
    app: ftp-local
    component: core
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ftp-local
        component: core
    spec:
      initContainers:
      containers:
      - image: vipconsult/proftpd
        name: ftp-local
        imagePullPolicy: IfNotPresent
        # env:
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: USERNAME
          value: test
        - name: PASSWORD
          value: test
        volumeMounts:
        - name: ftp-persistent-storage
          mountPath: /var/lib/ftp
      volumes:
      - name: ftp-persistent-storage
        persistentVolumeClaim:
          claimName: ftp-storage
and volume file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ftp-storage
  namespace: influx
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  claimRef:
    namespace: influx
    name: ftp-storage
  hostPath:
    path: /data/ftp/storage
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ftp-storage
  namespace: influx
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
When I deploy it, I have:
➜ proFTP git:(master) ✗ kubectl logs -n influx ftp-local-58d5d774d-zdztf
chown: cannot access '/etc/proftpd/ftpd.passwd': No such file or directory
Documentation states:
to make the users persistent share the passwords file with the host or a data -v /home/docker/proftpd/ftpd.passwd:/etc/proftpd/ftpd.passwd
So, I tried to mount this file.
Deployment (just showing the volumes section):
volumeMounts:
- name: ftp-persistent-storage
  mountPath: /var/lib/ftp
- name: ftp-persistent-users
  mountPath: /etc/proftpd/ftpd.passwd
  subPath: ftpd.passwd
volumes:
- name: ftp-persistent-storage
  persistentVolumeClaim:
    claimName: ftp-storage
- name: ftp-persistent-users
  persistentVolumeClaim:
    claimName: ftp-storage-users
and volumes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ftp-storage
  namespace: influx
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  claimRef:
    namespace: influx
    name: ftp-storage
  hostPath:
    path: /data/ftp/storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ftp-storage-user
  namespace: influx
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  claimRef:
    namespace: influx
    name: ftp-storage-user
  hostPath:
    path: /data/ftp/config
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ftp-storage
  namespace: influx
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ftp-storage-user
  namespace: influx
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
First, I'm a bit frustrated to need a 1Gi volume for a single file, and then I get:
➜ proFTP git:(master) ✗ kubectl logs -n influx ftp-local-67cfd77497-7rt5m
2019-10-03 07:35:23,853 ftp-local-67cfd77497-7rt5m proftpd[1]: mod_auth_file/1.0: unable to use AuthUserFile '/etc/proftpd/ftpd.passwd': Is a directory
2019-10-03 07:35:23,853 ftp-local-67cfd77497-7rt5m proftpd[1]: fatal: AuthUserFile: unable to use /etc/proftpd/ftpd.passwd: Is a directory on line 193 of '/etc/proftpd/proftpd.conf'
Here are my volumes:
git:(devops) ✗ kubectl get pv,pvc -n influx
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/ftp-storage 1Gi RWO Retain Bound influx/ftp-storage 22s
persistentvolume/ftp-storage-user 1Gi RWO Retain Bound influx/ftp-storage-user 22s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/ftp-storage Bound ftp-storage 1Gi RWO standard 22s
persistentvolumeclaim/ftp-storage-user Bound ftp-storage-user 1Gi RWO standard 22s
Now describing PVs:
➜ git:(devops) ✗ kubectl describe pv -n influx ftp-storage
Name: ftp-storage
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"ftp-storage"},"spec":{"accessModes":["ReadWriteOnce"],"c...
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Bound
Claim: influx/ftp-storage
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /data/ftp/storage
HostPathType:
Events: <none>
➜ git:(devops) ✗ kubectl describe pv -n influx ftp-storage-user
Name: ftp-storage-user
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"ftp-storage-user"},"spec":{"accessModes":["ReadWriteOnce...
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Bound
Claim: influx/ftp-storage-user
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /data/ftp/config
HostPathType:
Events: <none>
What am I doing wrong?
