Kubernetes Volume Mount with Replication Controllers - docker

I found this example of a Kubernetes emptyDir volume:
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: /srv/www
          name: www-data
          readOnly: true
    - name: git-monitor
      image: kubernetes/git-monitor
      env:
        - name: GIT_REPO
          value: http://github.com/some/repo.git
      volumeMounts:
        - mountPath: /data
          name: www-data
  volumes:
    - name: www-data
      emptyDir: {}
I want to share a volume mount between two pods. I am creating these pods using two different replication controllers, which look like this:
Replication Controller 1:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-worker
  labels:
    name: node-worker
spec:
  replicas: 1
  selector:
    name: node-worker
  template:
    metadata:
      labels:
        name: node-worker
    spec:
      containers:
        - name: node-worker
          image: image/node-worker
          volumeMounts:
            - mountPath: /mnt/test
              name: deployment-volume
      volumes:
        - name: deployment-volume
          emptyDir: {}
Replication Controller 2:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-manager
  labels:
    name: node-manager
spec:
  replicas: 1
  selector:
    name: node-manager
  template:
    metadata:
      labels:
        name: node-manager
    spec:
      containers:
        - name: node-manager
          image: image/node-manager
          volumeMounts:
            - mountPath: /mnt/test
              name: deployment-volume
      volumes:
        - name: deployment-volume
          emptyDir: {}
Can a Kubernetes emptyDir volume be used for this scenario?

EmptyDir volumes are inherently bound to the lifecycle of a single pod and can't be shared amongst pods in replication controllers or otherwise. If you want to share volumes amongst pods, the best choices right now are NFS or gluster, in a persistent volume. See an example here: https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/README.md

Why do you want to share the volume mount between pods? This will not work reliably because you aren't guaranteed to have a 1:1 mapping between where pods in replication controller 1 and replication controller 2 are scheduled in your cluster.
If you want to share local storage between containers, you should put both of the containers into the same pod, and have each container mount the emptyDir volume.
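As a minimal sketch of that approach (reusing the images from your controllers; the merged controller name node-combined is just a placeholder), a single replication controller could run both containers against one emptyDir:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-combined            # placeholder name for the merged controller
spec:
  replicas: 1
  selector:
    name: node-combined
  template:
    metadata:
      labels:
        name: node-combined
    spec:
      containers:
        - name: node-manager
          image: image/node-manager
          volumeMounts:
            - mountPath: /mnt/test       # both containers see the same files here
              name: deployment-volume
        - name: node-worker
          image: image/node-worker
          volumeMounts:
            - mountPath: /mnt/test
              name: deployment-volume
      volumes:
        - name: deployment-volume
          emptyDir: {}                   # shared scratch space, lives as long as the pod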

You need three things to get this working. There is more info and documentation upstream, but it's a little confusing at first.
This example mounts an NFS volume.
1. Create a PersistentVolume pointing to your NFS server
file: mynfssharename-pv.yaml (update server to point to your server)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfssharename
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: yourservernotmine.yourcompany.com
    path: "/yournfspath"
kubectl create -f mynfssharename-pv.yaml
2. Create a PersistentVolumeClaim that points to the PersistentVolume mynfssharename
file: mynfssharename-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mynfssharename
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
kubectl create -f mynfssharename-pvc.yaml
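Note that the claim does not name the PersistentVolume explicitly; Kubernetes binds it to a matching PV based on access modes and requested capacity. If you want to pin the claim to mynfssharename deterministically, you can optionally set volumeName (a sketch of the same claim with that one extra field):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mynfssharename
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: mynfssharename   # optional: bind explicitly to the PV created in step 1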
3. Add the claim to your ReplicationController or Deployment
spec:
  containers:
    - name: sample-pipeline
      image: yourimage
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
          name: http
      volumeMounts:
        # name must match the volume name below
        - name: mynfssharename
          mountPath: "/mnt"
  volumes:
    - name: mynfssharename
      persistentVolumeClaim:
        claimName: mynfssharename
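Applied to the original question, both replication controllers can then reference the same ReadWriteMany claim instead of their per-pod emptyDir. A sketch of the relevant part of either pod template (node-worker shown; node-manager is analogous):
      containers:
        - name: node-worker
          image: image/node-worker
          volumeMounts:
            - mountPath: /mnt/test
              name: deployment-volume
      volumes:
        - name: deployment-volume
          persistentVolumeClaim:
            claimName: mynfssharename   # same claim shared by both controllers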

Related

Docker volume mount to kubernetes volume

I am trying out to have a volume mount on Kubernetes.
Currently I have a Docker image which I run like:
docker run --mount type=bind,source="$(pwd)"<host_dir>,target=<docker_dir> container
To have this run on a Google Kubernetes cluster, I have:
Created a Google Compute Disk
Created a persistent volume which refers to the disk:
kind: PersistentVolume
...
  namespace: default
  name: pvc
spec:
  claimRef:
    namespace: default
    name: pvc
  gcePersistentDisk:
    pdName: disk-name
    fsType: ext4
---
...
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: "storage"
  ...
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
Created a pod with the mount:
kind: Pod
apiVersion: v1
metadata:
  name: k8s-pod
spec:
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: pvc
  containers:
    - name: image_name
      image: eu.gcr.io/container:latest
      volumeMounts:
        - mountPath: <docker_dir>
          name: dir
I am missing where the binding between the host and container/pod directories takes place, and where I should declare that binding in my YAML files.
I would appreciate any help :)
You are on the right path here. In your Pod spec, the name in volumeMounts should match the name of the volume. So in your case,
volumes:
  - name: pvc
    persistentVolumeClaim:
      claimName: pvc
the volume name is pvc, so your volumeMount should be:
volumeMounts:
  - mountPath: "/path/in/container"
    name: pvc
So, for example, to mount this volume at /mydata in your container, your Pod spec would look like
kind: Pod
apiVersion: v1
metadata:
  name: k8s-pod
spec:
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: pvc
  containers:
    - name: image_name
      image: eu.gcr.io/container:latest
      volumeMounts:
        - mountPath: "/mydata"
          name: pvc
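If what you are really after is the direct equivalent of the docker run bind mount (a host directory mapped into the container), the closest construct is a hostPath volume rather than a PVC. A rough sketch, where /host/dir is a placeholder for your host directory:
kind: Pod
apiVersion: v1
metadata:
  name: k8s-pod
spec:
  volumes:
    - name: host-dir
      hostPath:
        path: /host/dir            # directory on the node, like the bind-mount source
  containers:
    - name: image_name
      image: eu.gcr.io/container:latest
      volumeMounts:
        - mountPath: "/mydata"     # path inside the container, like the bind-mount target
          name: host-dir
Keep in mind that hostPath ties the data to whichever node the pod lands on, so on a multi-node cluster the PVC-backed approach above is usually the better choice.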

Deploy Dind and secure Docker Registry on Kubernetes (colon issues)

I have an issue with one of my projects. Here is what I want to do:
Have a private Docker registry on my Kubernetes cluster
Have a Docker daemon running so that I can pull/push and build images directly inside the cluster
For this project I'm using certificates to secure all those interactions.
1. How to reproduce:
Note: I'm working on a Linux-based system
Here are the files that I'm using:
Deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker
  template:
    metadata:
      labels:
        app: docker
    spec:
      containers:
        - name: docker
          image: docker:dind
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              memory: "128Mi"
          securityContext:
            privileged: true
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
            - name: docker-graph-storage
              mountPath: /var/lib/docker
            - name: dind-registry-cert
              mountPath: >-
                /etc/docker/certs.d/registry:5000/ca.crt
          ports:
            - containerPort: 2376
      volumes:
        - name: docker-graph-storage
          emptyDir: {}
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: ca.crt
        - name: init-reg-vol
          secret:
            secretName: init-reg
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          env:
            - name: DOCKER_TLS_CERTDIR
              value: /certs
            - name: REGISTRY_HTTP_TLS_KEY
              value: /certs/registry.pem
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: /certs/registry.crt
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
            - name: dind-registry-cert
              mountPath: /certs/
            - name: registry-data
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
      volumes:
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: registry
        - name: registry-data
          persistentVolumeClaim:
            claimName: registry-data
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: docker
          command: ['sleep','200']
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              memory: "128Mi"
          env:
            - name: DOCKER_HOST
              value: tcp://docker:2376
            - name: DOCKER_TLS_VERIFY
              value: '1'
            - name: DOCKER_TLS_CERTDIR
              value: /certs
            - name: DOCKER_CERT_PATH
              value: /certs/client
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: /certs/registry.crt
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
              readOnly: true
            - name: dind-registry-cert
              mountPath: /usr/local/share/ca-certificate/ca.crt
              readOnly: true
      volumes:
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: ca.crt
Services.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: docker
spec:
  selector:
    app: docker
  ports:
    - name: docker
      protocol: TCP
      port: 2376
      targetPort: 2376
---
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  selector:
    app: registry
  ports:
    - name: registry
      protocol: TCP
      port: 5000
      targetPort: 5000
Pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: certs-client
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
status: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-data
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    limits:
      storage: 50Gi
    requests:
      storage: 2Gi
status: {}
For the cert files I have the following folders: certs/, certs/client/ and certs.d/registry:5000/, and I use these commands to generate the certs:
openssl req -newkey rsa:4096 -nodes -keyout ./certs/registry.pem -x509 -days 365 -out ./certs/registry.crt -subj "/C=''/ST=''/L=''/O=''/OU=''/CN=registry"
cp ./certs/registry.crt ./certs.d/registry\:5000/ca.crt
Then I use secrets to pass those certs into the pods:
kubectl create secret generic registry --from-file=certs/registry.crt --from-file=certs/registry.pem
kubectl create secret generic ca.crt --from-file=certs/registry.crt
Then, to launch the project, the following line is used:
kubectl apply -f pvc.yaml,deployment.yaml,service.yaml
2. My issues
I have a problem with my docker pods, which show this error:
Error: Error response from daemon: invalid volume specification: '/var/lib/kubelet/pods/727d0f2a-bef6-4217-a292-427c5d76e071/volumes/kubernetes.io~secret/dind-registry-cert:/etc/docker/certs.d/registry:5000/ca.crt:ro
So the problem seems to come from the colon in the path name. I then tried to escape the colon and got this sublime error:
error: error parsing deployment.yaml: error converting YAML to JSON: yaml: line 34: found unknown escape character
The real problem here is that if the folder is not named 'registry:5000', the certificate is not recognised as valid and I get an x509 error when trying to push an image from the client.
For the overall project, I know this setup can work since I already managed to deploy it locally with docker-compose (here is the link to the GitHub project if any of you are curious).
So I looked into it a bit and found out that this is a recurring problem with Docker (I mean with Docker Desktop, for mounting volumes into containers), but I can't find anything about the same issue on Kubernetes.
Do any of you have any lead / suggestion / workaround on this matter?
As always, thanks for your time :)
------------------------------- EDIT following #HelloWorld answer -------------------------------
Thanks to the symlink workaround, the ca.crt is now correctly mounted inside the pod. However, since I was mounting it in the deployment that runs the Docker daemon, the entrypoint of the docker:dind container was overridden by the command. For future readers, here is the solution I found: fetching dockerd-entrypoint.sh and running it manually.
Here is the deployment as I write these lines:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker
  template:
    metadata:
      labels:
        app: docker
    spec:
      containers:
        - name: docker
          image: docker:dind
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              memory: "128Mi"
          securityContext:
            privileged: true
          command: ['sh', '-c', 'mkdir -p /etc/docker/certs.d/registry:5000 && ln -s /random/registry.crt /etc/docker/certs.d/registry:5000/ca.crt && wget https://raw.githubusercontent.com/docker-library/docker/a73d96e731e2dd5d6822c99a9af4dcbfbbedb2be/19.03/dind/dockerd-entrypoint.sh && chmod +x dockerd-entrypoint.sh && ./dockerd-entrypoint.sh']
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
              readOnly: false
            - name: dind-registry-cert
              mountPath: /random/
              readOnly: false
          ports:
            - containerPort: 2376
      volumes:
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: ca.crt
I hope it will be useful for someone in the future :)
The only thing I came up with is using symlinks. I tested it and it works. I also tried searching for a better solution but didn't find anything satisfying.
Have a look at this example:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp-container
      image: centos:7
      command: ['sh', '-c', 'mkdir -p /etc/docker/certs.d/registry:5000 && ln -s /some/random/path/ca.crt /etc/docker/certs.d/registry:5000/ca.crt && exec sleep 10000']
      volumeMounts:
        - mountPath: '/some/random/path'
          name: registry-cert
  volumes:
    - name: registry-cert
      secret:
        secretName: my-secret
And here is a template secret I used:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: default
type: Opaque
data:
  ca.crt: <<< some_random_Data >>>
I mounted this secret at /some/random/path (a location without a colon, so it doesn't throw errors) and created a symlink at /etc/docker/certs.d/registry:5000/ca.crt pointing to /some/random/path/ca.crt.
Of course you also need to create a dir structure before running ln -s ..., that is why I run mkdir -p ....
Let me know if you have any further questions. I'd be happy to answer them.

Setting up a persistent volume with Kubernetes and Docker Desktop for Windows

I am trying to set up a persistent volume for K8s running in Docker Desktop for Windows. The end goal is to run Jenkins and not lose any work if Docker/K8s spins down.
I have tried a couple of things, but I'm either misunderstanding what's possible or I am setting something up wrong. Currently I have the environment set up like so:
I have set up a volume in Docker for Jenkins. All I did was create the volume; I'm not sure if I need more configuration here.
docker volume inspect jenkins-pv
[
    {
        "CreatedAt": "2020-05-20T16:02:42Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/jenkins-pv/_data",
        "Name": "jenkins-pv",
        "Options": {},
        "Scope": "local"
    }
]
I have also created a persistent volume in K8s pointing to the mount point in the Docker volume and deployed it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: hostPath
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: "/var/lib/docker/volumes/jenkins-pv/_data"
I have also created a pv claim and deployed that.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Lastly I have created a deployment for Jenkins. I have confirmed it works and I am able to access the app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-app
  template:
    metadata:
      labels:
        app: jenkins-app
    spec:
      containers:
        - name: jenkins-pod
          image: jenkins/jenkins:2.237-alpine
          ports:
            - containerPort: 50000
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-pv-volume
              mountPath: /var/lib/docker/volumes/jenkins-pv/_data
      volumes:
        - name: jenkins-pv-volume
          persistentVolumeClaim:
            claimName: jenkins-pv-claim
However, the data does not persist when I quit Docker, and I have to reconfigure Jenkins every time I start. Did I miss something, or is what I am trying to do simply not possible? Is there a better or easier way to do this?
Thanks!
I figured out my issue; it was twofold:
I was trying to save data from the wrong location within the pod that was running Jenkins.
I was never writing the data back to the Docker shared folder.
To get this working I created a shared folder in Docker (C:\DockerShare).
Then I updated the host path in my Persistent Volume.
The format is /host_mnt/path_to_docker_shared_folder_location
Since I used C:\DockerShare my path is: /host_mnt/c/DockerShare
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: hostPath
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /host_mnt/c/DockerShare/jenkins
I also had to update the Jenkins deployment because I was not actually saving any of the config.
I should have been saving data from /var/jenkins_home.
Deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-app
  template:
    metadata:
      labels:
        app: jenkins-app
    spec:
      containers:
        - name: jenkins-pod
          image: jenkins/jenkins:2.237-alpine
          ports:
            - containerPort: 50000
            - containerPort: 8080
          volumeMounts:
            - name: jenkins
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins
          persistentVolumeClaim:
            claimName: jenkins
Anyway, it's working now, and I hope this helps someone else when it comes to setting up a PV.

Jenkins container persistence on Kubernetes cluster - PersistentVolumeClaim (VMware/Vsphere)

I am trying to persist my Jenkins jobs onto vSphere storage for when I delete the deployments/services.
I've tried the standard approach: I used a StorageClass, then made a PersistentVolumeClaim which is referenced in the .yaml file that creates the deployments.
storage-class.yml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mystorage
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
persistent-volume-claim.yml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc0003
spec:
  storageClassName: mystorage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
jenkins.yml:
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-auto-ci
  labels:
    app: jenkins-auto-ci
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: jenkins-auto-ci
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-auto-ci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-auto-ci
    spec:
      containers:
        - name: jenkins-auto-ci
          image: jenkins
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
            - name: http-port
              containerPort: 80
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: "/var"
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: pvc0003
I expect the jenkins jobs to persist when I delete and recreate the deployments.
You should create a VMDK, which is a virtual machine disk.
You can do that using govc, the vSphere CLI:
govc datastore.disk.create -ds datastore1 -size 2G volumes/myDisk.vmdk
Or using the ESXi CLI, by SSHing into the host as root and executing:
vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
Once this is done, you should create your PV; let's call it vsphere_pv.yaml. It might look like the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[datastore1] volumes/myDisk"
    fsType: ext4
The datastore1 in this example was created in the root folder of vCenter; if you have it in a different location, you need to change the volumePath. If it's located in a DatastoreCluster, then set volumePath to "[DatastoreCluster/datastore1] volumes/myDisk".
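For example (a sketch reusing the PV above), the vsphereVolume section would then read:
  vsphereVolume:
    volumePath: "[DatastoreCluster/datastore1] volumes/myDisk"   # datastore inside a DatastoreCluster
    fsType: ext4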
Apply the YAML to Kubernetes with kubectl apply -f vsphere_pv.yaml.
You can check that it was created by describing it: kubectl describe pv pv0001.
Now you need a PVC, let's call it vsphere_pvc.yaml, to consume the PV:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc0001
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Apply the YAML to Kubernetes with kubectl apply -f vsphere_pvc.yaml.
You can check that it was created by describing it: kubectl describe pvc pvc0001.
Once this is done, your deployment YAML might look like the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-auto-ci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-auto-ci
    spec:
      containers:
        - name: jenkins-auto-ci
          image: jenkins
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
            - name: http-port
              containerPort: 80
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: "/var"
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: pvc0001
All this is nicely explained in the VMware GitHub project vsphere-storage-for-kubernetes.

Can't enable image deletion from a private Docker registry

all!!
I'm deploying a private registry within a K8s cluster with the following YAML file:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: registry
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/registry/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: registry-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  labels:
    app: registry
spec:
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 30400
      name: registry
  selector:
    app: registry
    tier: registry
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ui
  labels:
    app: registry
spec:
  ports:
    - port: 8080
      targetPort: 8080
      name: registry
  selector:
    app: registry
    tier: registry
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: registry
        tier: registry
    spec:
      containers:
        - image: registry:2
          name: registry
          volumeMounts:
            - name: docker
              mountPath: /var/run/docker.sock
            - name: registry-persistent-storage
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
        - name: registryui
          image: hyper/docker-registry-web:latest
          ports:
            - containerPort: 8080
          env:
            - name: REGISTRY_URL
              value: http://localhost:5000/v2
            - name: REGISTRY_NAME
              value: cluster-registry
      volumes:
        - name: docker
          hostPath:
            path: /var/run/docker.sock
        - name: registry-persistent-storage
          persistentVolumeClaim:
            claimName: registry-claim
The problem is that there is no option to delete Docker images after pushing them to the local registry. I found how it is supposed to work here: https://github.com/byrnedo/docker-reg-tool. I can list Docker images inside the local repository and see all tags via the command line, but I am unable to delete them. After reading the Docker registry documentation, I found that the registry container needs to be run with the following env: REGISTRY_STORAGE_DELETE_ENABLED=true.
I tried to add this variable to the YAML file:
.........
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: registry
        tier: registry
    spec:
      containers:
        - image: registry:2
          name: registry
          volumeMounts:
            - name: docker
              mountPath: /var/run/docker.sock
            - name: registry-persistent-storage
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
          env:
            - name: REGISTRY_STORAGE_DELETE_ENABLED
              value: true
But applying this YAML file with the command kubectl apply -f manifests/registry.yaml returns the following error message:
Deployment in version "v1beta1" cannot be handled as a Deployment: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true}],"ima|..., bigger context ...|"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":true}],"image":"registry:2","name":"registry","port|...
Then I found another suggestion:
The registry accepts configuration settings either via a file or via environment variables. So the environment variable REGISTRY_STORAGE_DELETE_ENABLED=true is equivalent to this in your config file:
storage:
  delete:
    enabled: true
I've tried this option as well in my YAML file, but still no luck...
Any suggestions on how to enable Docker image deletion in my YAML file are highly appreciated.
The value true in YAML is parsed as a boolean data type, while the env var field calls for a string. You'll need to explicitly quote it:
value: "true"
