Kubernetes-Help Needed-FailedMount 3m38s - docker

I have created a Jenkins cluster on Kubernetes (master + 2 workers) with local volumes on the master node.
I created a persistent volume of 2 GB and the claim is 1 GB.
I created a deployment with the image jenkins/jenkins:lts and a volume mount from /var/jenkins_home to the PVC (claimName).
I have already copied the data into the local folder that backs the persistent volume, but I am not able to see my jobs on the Jenkins server.
kubectl describe pod dep-jenkins-8648454f65-4v8tb
Events:
Type     Reason       Age                      From                     Message
----     ------       ----                     ----                     -------
Warning  FailedMount  3m38s (x149 over 4h50m)  kubelet, kube-worker001  MountVolume.SetUp failed for volume "default-token-424m4" : secret "default-token-424m4" not found
What is the correct way to mount a local directory in a pod so that I can transfer my Jenkins data to the newly created Jenkins server on Kubernetes?

Looks like the Warning in your pod description is related to mounting a secret and not mounting any PV. To set up your JENKINS_HOME as a persistent volume you would do something like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: my-jenkins-image
          env:
            - name: JAVA_OPTS
              value: -Djenkins.install.runSetupWizard=false
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-home
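The claim named above still has to exist and bind to a volume. Since your copied data lives in a local folder on one node, a hostPath-backed PV is one way to wire it up; here is a minimal sketch (the PV name, sizes and the /data/jenkins_home path are assumptions, and the pod must be scheduled onto the node that actually holds that folder):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-home-pv          # assumed name
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/jenkins_home     # assumed path holding the copied JENKINS_HOME
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home             # must match claimName in the deployment above
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi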

Related

Accessing CIFS files from pods

We have a docker image that processes some files on a Samba share.
For this we created a CIFS share which is mounted to /mnt/dfs, and files can be accessed in the container with:
docker run -v /mnt/dfs/project1:/workspace image
Now what I was asked to do is get the container into k8s, and to access a CIFS share from a pod a CIFS volume driver using FlexVolume can be used. That's where some questions pop up.
I installed this repo as a DaemonSet
https://k8scifsvol.juliohm.com.br/
and it's up and running.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cifs-volumedriver-installer
spec:
  selector:
    matchLabels:
      app: cifs-volumedriver-installer
  template:
    metadata:
      name: cifs-volumedriver-installer
      labels:
        app: cifs-volumedriver-installer
    spec:
      containers:
        - image: juliohm/kubernetes-cifs-volumedriver-installer:2.4
          name: flex-deploy
          imagePullPolicy: Always
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /flexmnt
              name: flexvolume-mount
      volumes:
        - name: flexvolume-mount
          hostPath:
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
The next thing to do is add a PersistentVolume, but that needs a capacity, 1Gi in the example. Does this mean that we lose all data on the SMB server? Why should there be a capacity for an already existing server?
Also, how can we access a subdirectory of the mount /mnt/dfs from within the pod? So how do we access the data from /mnt/dfs/project1 in the pod?
Do we even need a PV? Could the pod just read from the host's mounted share?
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mycifspv
spec:
  capacity:
    storage: 1Gi
  flexVolume:
    driver: juliohm/cifs
    options:
      opts: sec=ntlm,uid=1000
      server: my-cifs-host
      share: /MySharedDirectory
    secretRef:
      name: my-secret
  accessModes:
    - ReadWriteMany
No, that field has no effect on the FlexVol plugin you linked. It doesn't even bother parsing out the size you pass in :)
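As for reaching /mnt/dfs/project1 specifically, one generic option (a sketch, not specific to this driver) is to mount whichever CIFS-backed volume you defined and pick the subdirectory with subPath on the volumeMount:
volumeMounts:
  - name: cifs          # whichever CIFS-backed volume you defined in the pod spec
    mountPath: /workspace
    subPath: project1   # selects the project1 subdirectory inside the share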
Managed to get it working with the fstab/cifs plugin.
Copy its cifs script to /usr/libexec/kubernetes/kubelet-plugins/volume/exec and give it execute permissions. Also restart kubelet on all nodes.
https://github.com/fstab/cifs
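A sketch of those install steps on each node (the fstab~cifs directory name is derived from the driver name; adjust the path if your kubelet uses a non-default --volume-plugin-dir):
VOLUME_PLUGIN_DIR=/usr/libexec/kubernetes/kubelet-plugins/volume/exec
mkdir -p "$VOLUME_PLUGIN_DIR/fstab~cifs"
cp cifs "$VOLUME_PLUGIN_DIR/fstab~cifs/cifs"     # the script from the repo above
chmod +x "$VOLUME_PLUGIN_DIR/fstab~cifs/cifs"
systemctl restart kubelet                        # repeat on every node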
Then added
containers:
  - name: pablo
    image: "10.203.32.80:5000/pablo"
    volumeMounts:
      - name: dfs
        mountPath: /data
volumes:
  - name: dfs
    flexVolume:
      driver: "fstab/cifs"
      fsType: "cifs"
      secretRef:
        name: "cifs-secret"
      options:
        networkPath: "//dfs/dir"
        mountOptions: "dir_mode=0755,file_mode=0644,noperm"
Now there is the /data mount inside the container pointing to //dfs/dir
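The cifs-secret referenced above holds the share credentials. The plugin expects a Secret with base64-encoded username and password; a sketch with placeholder values (double-check the fstab/cifs README for the exact format):
apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
type: fstab/cifs
data:
  username: bXl1c2VybmFtZQ==   # echo -n myusername | base64
  password: bXlwYXNzd29yZA==   # echo -n mypassword | base64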

Convert docker-compose.yml file to Kubernetes

I am converting a docker-compose file to Kubernetes using kompose, running the following command:
$ kompose convert -f docker-compose.yml -o kubernetes_image.yaml
After the command finishes, the output is the following:
WARN Volume mount on the host "/usr/docker/adpater/dbdata" isn't supported - ignoring path on the host
INFO Network integration is detected at Source, shall be converted to equivalent NetworkPolicy at Destination
WARN Volume mount on the host "/usr/docker/adpater/license.json" isn't supported - ignoring path on the host
WARN Volume mount on the host "/usr/docker/adpater/certificates/ssl.crt" isn't supported - ignoring path on the host
WARN Volume mount on the host "/usr/docker/adpater/certificates/ssl.key" isn't supported - ignoring path on the host
WARN Volume mount on the host "/usr/docker/adpater/server.xml" isn't supported - ignoring path on the host
INFO Network integration is detected at Source, shall be converted to equivalent NetworkPolicy at Destination
To push the converted file to Kubernetes I run the following command:
$ kubectl apply -f kubernetes_image.yaml
NAME READY STATUS RESTARTS AGE
mysql-557dd849c8-bsdq7 1/1 Running 1 17h
tomcat-7cd65d4556-spjbl 0/1 CrashLoopBackOff 76 18h
If I run:
$ kubectl describe pod tomcat-7cd65d4556-spjbl
I get the following message:
Last State: Terminated
Reason: ContainerCannotRun
Message: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/usr/docker/adapter/server.xml\\\" to rootfs \\\"/var/lib/docker/overlay2/a6df90a0ef4cbe8b2a3fa5352be5f304cd7b648fb1381492308f0a7fceb931cc/merged\\\" at \\\"/var/lib/docker/overlay2/a6df90a0ef4cbe8b2a3fa5352be5f304cd7b648fb1381492308f0a7fceb931cc/merged/usr/local/tomcat/conf/server.xml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Exit Code: 127
Started: Sun, 31 May 2020 13:35:00 +0100
Finished: Sun, 31 May 2020 13:35:00 +0100
Ready: False
Restart Count: 75
Environment: <none>
Mounts:
/run/secrets/rji_license.json from tomcat-hostpath0 (rw)
/usr/local/tomcat/conf/server.xml from tomcat-hostpath3 (rw)
/usr/local/tomcat/conf/ssl.crt from tomcat-hostpath1 (rw)
/usr/local/tomcat/conf/ssl.key from tomcat-hostpath2 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8dhnk (ro)
This is my docker-compose.yml file:
version: '3.6'
networks:
  integration:
services:
  mysql:
    environment:
      MYSQL_USER: 'integrationdb'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    image: db:poc
    networks:
      - integration
    ports:
      - '3306:3306'
    restart: always
    volumes:
      - ./dbdata:/var/lib/mysql
  tomcat:
    image: adapter:poc
    networks:
      - integration
    ports:
      - '8080:8080'
      - '8443:8443'
    restart: always
    volumes:
      - ./license.json:/run/secrets/rji_license.json
      - ./certificates/ssl.crt:/usr/local/tomcat/conf/ssl.crt
      - ./certificates/ssl.key:/usr/local/tomcat/conf/ssl.key
      - ./server.xml:/usr/local/tomcat/conf/server.xml
Versions of the tools:
kompose: 1.21.0 (992df58d8)
docker: 19.03.9
kubectl: Major: "1", Minor: "18"
I think my challenge here is with this type of volumes/files; I don't know how I can migrate or convert them to Kubernetes and get the tomcat pod running fine.
Could someone give me a hand?
volumes:
  - ./license.json:/run/secrets/rji_license.json
  - ./certificates/ssl.crt:/usr/local/tomcat/conf/ssl.crt
  - ./certificates/ssl.key:/usr/local/tomcat/conf/ssl.key
  - ./server.xml:/usr/local/tomcat/conf/server.xml
thanks in advance.
When Kompose warns you:
WARN Volume mount on the host "/usr/docker/adpater/license.json" isn't supported - ignoring path on the host
It means that it can't translate this fragment of the docker-compose.yml file into Kubernetes syntax:
volumes:
  - ./license.json:/run/secrets/rji_license.json
In native Kubernetes, you'd need to provide this content in ConfigMap or Secret objects, and then mount the file into the pod. You can't directly access content on the system from which you're launching the containers.
You can't really get around directly working with the Kubernetes YAML files here. You could run kompose convert to generate the skeleton files, but then you'll need to edit those to add the ConfigMaps, PersistentVolumeClaims (for the database storage), and relevant volume and mount declarations, and then run kubectl apply -f to actually run them. I'd check the Kubernetes YAML files into source control, and maintain them in parallel with your Docker Compose setup.
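For example, one of those host-path file mounts could become a ConfigMap plus a subPath mount; here is a rough sketch for server.xml (the ConfigMap and volume names are made up, and secret material such as ssl.key would belong in a Secret instead):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tomcat-server-xml
data:
  server.xml: |
    <!-- paste the contents of ./server.xml here -->
and in the tomcat Deployment's pod spec:
        volumeMounts:
          - name: tomcat-server-xml
            mountPath: /usr/local/tomcat/conf/server.xml
            subPath: server.xml
      volumes:
        - name: tomcat-server-xml
          configMap:
            name: tomcat-server-xml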
Move2Kube (which does support docker-compose translation) can handle this case and tries to convert the volumes by interacting with you.
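For reference, the interactive session below comes from running something along these lines (a sketch; the exact subcommand and flags vary between Move2Kube versions):
move2kube transform -s <path-to-the-docker-compose-project>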
? 6. [] What type of container registry login do you want to use?
Hints:
[Docker login from config mode, will use the default config from your local machine.]
No authentication
? 7. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/dbdata]?:
Hints:
[Use PVC for persistent storage wherever applicable]
Yes
? 8. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/license.json]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 9. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.crt]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 10. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.key]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 11. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/server.xml]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 12. Which storage class to use for persistent volume claim [vol17655897939759777588] used by [mysql]
Hints:
[If you have a custom cluster, you can use collect to get storage classes from it.]
default
? 13. Provide the ingress host domain
Hints:
[Ingress host domain is part of service URL]
myproject.com
? 14. Provide the TLS secret for ingress
Hints:
[Enter TLS secret name]
If the above choices were made, Move2Kube creates the following artifacts:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    move2kube.konveyor.io/service.expose: "true"
  creationTimestamp: null
  labels:
    move2kube.konveyor.io/network/integration: "true"
    move2kube.konveyor.io/service: tomcat
  name: tomcat
spec:
  replicas: 2
  selector:
    matchLabels:
      move2kube.konveyor.io/service: tomcat
  strategy: {}
  template:
    metadata:
      annotations:
        move2kube.konveyor.io/service.expose: "true"
      creationTimestamp: null
      labels:
        move2kube.konveyor.io/network/integration: "true"
        move2kube.konveyor.io/service: tomcat
      name: tomcat
    spec:
      containers:
        - image: adapter:poc
          imagePullPolicy: Always
          name: tomcat
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8443
              protocol: TCP
          resources: {}
          volumeMounts:
            - mountPath: /run/secrets/rji_license.json
              name: vol16871681589659214643
            - mountPath: /usr/local/tomcat/conf/ssl.crt
              name: vol12635587774184387470
            - mountPath: /usr/local/tomcat/conf/ssl.key
              name: vol7446232639477381794
            - mountPath: /usr/local/tomcat/conf/server.xml
              name: vol4920239289720818926
      restartPolicy: Always
      volumes:
        - hostPath:
            path: /Users/ashok/wksps/hc/temp/test2/src/license.json
          name: vol16871681589659214643
        - hostPath:
            path: /Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.crt
          name: vol12635587774184387470
        - hostPath:
            path: /Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.key
          name: vol7446232639477381794
        - hostPath:
            path: /Users/ashok/wksps/hc/temp/test2/src/server.xml
          name: vol4920239289720818926
status: {}
and
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    move2kube.konveyor.io/service.expose: "true"
  creationTimestamp: null
  labels:
    move2kube.konveyor.io/network/integration: "true"
    move2kube.konveyor.io/service: mysql
  name: mysql
spec:
  replicas: 2
  selector:
    matchLabels:
      move2kube.konveyor.io/service: mysql
  strategy: {}
  template:
    metadata:
      annotations:
        move2kube.konveyor.io/service.expose: "true"
      creationTimestamp: null
      labels:
        move2kube.konveyor.io/network/integration: "true"
        move2kube.konveyor.io/service: mysql
      name: mysql
    spec:
      containers:
        - env:
            - name: MYSQL_USER
              value: integrationdb
            - name: MYSQL_PASSWORD
              value: password
            - name: MYSQL_ROOT_PASSWORD
              value: password
          image: db:poc
          imagePullPolicy: Always
          name: mysql
          ports:
            - containerPort: 3306
              protocol: TCP
          resources: {}
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: vol17655897939759777588
      restartPolicy: Always
      volumes:
        - name: vol17655897939759777588
          persistentVolumeClaim:
            claimName: vol17655897939759777588
status: {}
and
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  name: vol17655897939759777588
spec:
  resources:
    requests:
      storage: 100Mi
  storageClassName: default
  volumeName: vol17655897939759777588
status: {}
Essentially, depending on your choices, Move2Kube will create the appropriate artifacts for you.
You can check out how it works at https://konveyor.github.io/move2kube/tutorials/docker-compose/.

Persistent disk problem on Kubernetes GCP

I'm working with Kubernetes on GCP and I'm having problems with volumes and persistent disks.
I'm using Directus 7 (a headless CMS), which saves most of its information in the database except the files that are uploaded. Those files live in the /var/www/html/public/uploads folder (tested locally with docker-compose and it works fine), and that folder is the one I'm trying to save on the persistent disk.
No error occurs, but when I restart the Kubernetes pod I lose the uploaded images (they are not being saved on the disk).
This is my configuration:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: directus-pv
  namespace: default
spec:
  storageClassName: ""
  capacity:
    storage: 100G
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: directus-disk
    fsType: ext4

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: directus-pvc
  namespace: default
  labels:
    app: .....
spec:
  storageClassName: ""
  volumeName: directus-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100G
And in the deploy.yaml:
volumeMounts:
  - name: api-disk
    mountPath: /var/www/html/public/uploads
    readOnly: false
volumes:
  - name: api-disk
    persistentVolumeClaim:
      claimName: directus-pvc
Thanks for the help
Remove the namespace property from the PV manifest; PersistentVolumes are cluster-scoped resources (the PVC, by contrast, is namespaced).
Remove the storage class property as well.
I presume that your manually provisioned persistent volume directus-pv is somehow being created with persistentVolumeReclaimPolicy: Recycle. That's the only plausible reason that could cause the data to be erased on each pod restart.
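A quick way to check that, and to switch the volume to Retain if needed:
kubectl get pv directus-pv -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
kubectl patch pv directus-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'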
I'm not able to reproduce your case with the provided manifest files, but I tried the following test:
Create gcePersistentDisk
Create PersistentVolume
Create PersistentVolumeClaim
Create ReplicaSet (replicas=1) like this one
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: busybox-list-uploads
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: busybox-list-uploads
        version: "2"
    spec:
      containers:
        - image: busybox
          args: [/bin/sh, -c, 'sleep 9999']
          volumeMounts:
            - mountPath: /var/www/html/public/uploads
              name: api-disk
          name: busybox
      volumes:
        - name: api-disk
          persistentVolumeClaim:
            claimName: directus-pvc
Write some file into the mounted folder /var/www/html/public/uploads
Restart the pod (i.e. kill it) by resizing the replicas to 0 and then back to 1
List the content of /var/www/html/public/uploads on the newly created pod
for i in busybox-list-uploads-dgfbc; do kubectl exec -it $i -- ls /var/www/html/public/uploads; done;
lost+found picture_from_busybox-list-uploads-ng4t6.png
As you can see, the output clearly shows that the data survives a pod restart.
(You can verify the reclaim policy with: kubectl get pv/directus-pv -o yaml)

Configure Kubernetes bitnami/mariadb container to mount minikube volume

I've been hitting errors when trying to set up a dev platform in Kubernetes & minikube. The config creates a Service, PersistentVolume, PersistentVolumeClaim & Deployment.
The deployment creates a single pod with a single container based on bitnami/mariadb:latest.
I am mounting a local volume into the minikube vm via:
minikube mount <source-path>:/data
This local volume is mounting correctly and can be inspected when I ssh into the minikube vm via: minikube ssh
I now run:
kubectl create -f mariadb-deployment.yaml
to fire up the platform. The YAML config:
kind: Service
apiVersion: v1
metadata:
  name: mariadb-deployment
  labels:
    app: supertubes
spec:
  ports:
    - port: 3306
  selector:
    app: supertubes
    tier: mariadb
  type: LoadBalancer
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: local-db-pv
  labels:
    type: local
    tier: mariadb
    app: supertubes
spec:
  storageClassName: slow
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/staging/sumatra/mariadb-data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-db-pv-claim
  labels:
    app: supertubes
spec:
  storageClassName: slow
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: local
      tier: mariadb
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mariadb-deployment
  labels:
    app: supertubes
spec:
  selector:
    matchLabels:
      app: supertubes
      tier: mariadb
  template:
    metadata:
      labels:
        app: supertubes
        tier: mariadb
    spec:
      securityContext:
        fsGroup: 1001
      containers:
        - image: bitnami/mariadb:latest
          name: mariadb
          env:
            - name: MARIADB_ROOT_PASSWORD
              value: <db-password>
            - name: MARIADB_DATABASE
              value: <db-name>
          ports:
            - containerPort: 3306
              name: mariadb
          volumeMounts:
            - name: mariadb-persistent-storage
              mountPath: /bitnami
      volumes:
        - name: mariadb-persistent-storage
          persistentVolumeClaim:
            claimName: local-db-pv-claim
The above config will then fail to boot the pod, and inspecting the pod's logs within the minikube dashboard shows the following:
nami INFO Initializing mariadb
mariadb INFO ==> Cleaning data dir...
mariadb INFO ==> Configuring permissions...
mariadb INFO ==> Validating inputs...
mariadb INFO ==> Initializing database...
mariadb INFO ==> Creating 'root' user with unrestricted access...
mariadb INFO ==> Creating database pw_tbs...
mariadb INFO ==> Enabling remote connections...
Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mariadb'
Looking at the above, I believed the issue was to do with Bitnami using user 1001 to launch their mariadb image:
https://github.com/bitnami/bitnami-docker-mariadb/issues/134
Since reading the above issue I've been playing with securityContext within the container spec. At present you'll see I have it set to:
deployment.template.spec
  securityContext:
    fsGroup: 1001
but this isn't working. I've also tried:
securityContext:
  privileged: true
but didn't get anywhere with that either.
One other check I made was to remove the volumeMount from deployment.template.spec.containers and see if things worked correctly without it, which they do :)
I then opened a shell into the pod to check the permissions on /bitnami.
Reading a bit more on the Bitnami issue posted above, it says user 1001 is a member of the root group, therefore I'd expect it to have the necessary permissions... At this stage I'm a little lost as to what is wrong.
Edit 15/03/18
Following Anton Kostenko's suggestions I added a busybox container as an initContainer which ran a chmod on the bitnami directory:
...
spec:
  initContainers:
    - name: install
      image: busybox
      imagePullPolicy: Always
      command: ["chmod", "-R", "777", "/bitnami"]
      volumeMounts:
        - name: mariadb-persistent-storage
          mountPath: /bitnami
  containers:
    - image: bitnami/mariadb:latest
      name: mariadb
...
however, even with global rwx permissions (777) set, the mount still failed, as the MariaDB container doesn't allow user 1001 to do so:
nami INFO Initializing mariadb
Error executing 'postInstallation': EPERM: operation not permitted, utime '/bitnami/mariadb/.restored'
Another edit, 15/03/18
I have now tried setting the user:group on my local machine (MacBook) so that when passed to the minikube VM they should already be correct:
Now mariadb-data has rwx permissions for everyone, with user: 1001, group: 1001.
I then removed the initContainer as I wasn't really sure what it would be adding.
SSHing onto the minikube VM I can see the permissions and user:group have been carried across:
the user & group are now shown as docker.
Firing up this container results in the same sort of error:
nami INFO Initializing mariadb
Error executing 'postInstallation': EIO: i/o error, utime '/bitnami/mariadb/.restored'
I've tried removing the securityContext, and also setting it to runAsUser: 1001, fsGroup: 1001, but neither made any difference.
Looks like that is an issue in Minikube.
You can try to use an init container which will fix the permissions before the main container is started, like this:
...........
spec:
  initContainers:
    - name: "fix-non-root-permissions"
      image: "busybox"
      imagePullPolicy: "Always"
      command: [ "chmod", "-R", "g+rwX", "/bitnami" ]
      volumeMounts:
        - name: datadir
          mountPath: /bitnami
  containers:
.........
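Once the pod is running, you can confirm the permissions the init container set (the pod name below is a placeholder):
kubectl exec -it <mariadb-pod-name> -- ls -ld /bitnami /bitnami/mariadb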

Volume mounting in Jenkins on Kubernetes

I'm trying to set up Jenkins to run in a container on Kubernetes, but I'm having trouble persisting the volume for the Jenkins home directory.
Here's my deployment.yml file. The image is based on jenkins/jenkins.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
          imagePullPolicy: "Always"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}
However, if I then push a new container to my image repository and update the pods using the commands below, Jenkins comes back online but asks me to start from scratch (enter the admin password, none of my Jenkins jobs are there, no plugins, etc.):
kubectl apply -f kubernetes (where my manifests are stored)
kubectl set image deployment/jenkins-deployment jenkins=1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins:$VERSION
Am I misunderstanding how this volume mount is meant to work?
As an aside, I also have backup and restore scripts which backup the Jenkins home directory to s3, and download it again, but that's somewhat outside the scope of this issue.
You should use PersistentVolumes along with a StatefulSet instead of a Deployment resource if you wish your data to survive re-deployments/restarts of your pod.
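A minimal sketch of that approach, assuming a dynamically provisioning StorageClass such as gp2 exists in the cluster (image, names and size are illustrative, and a headless Service named jenkins is assumed to exist for serviceName):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
spec:
  serviceName: jenkins
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
  volumeClaimTemplates:
    - metadata:
        name: jenkins-home
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2
        resources:
          requests:
            storage: 30Gi
The volumeClaimTemplates entry gives each replica its own PVC (e.g. jenkins-home-jenkins-0), which is reattached whenever the pod is recreated.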
You have specified the volume type emptyDir. This essentially mounts an empty directory on the kube node that runs your pod, and that directory lives only as long as the pod does. Every time you redeploy, the old pod is deleted (and the replacement may even land on a different node), so the empty dir and its data aren't persisted across restarts.
I see you're pulling your image from an ECR repository, so I'm assuming you're running k8s in AWS.
You'll need to configure a StorageClass for AWS. If you've provisioned k8s using something like kops, this will already be configured. You can confirm this by doing kubectl get storageclass - the provisioner should be configured as EBS:
NAME PROVISIONER
gp2 (default) kubernetes.io/aws-ebs
Then, you need to specify a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2 # must match your storageclass from above
  resources:
    requests:
      storage: 30Gi
You can now use the PV claim in your deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
          imagePullPolicy: "Always"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-data # must match the claim name from above
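To roll it out, something along these lines (the file names are placeholders for wherever you keep the manifests):
kubectl apply -f jenkins-pvc.yaml -f jenkins-deployment.yaml
kubectl get pvc jenkins-data   # should show STATUS Bound once the EBS volume is provisioned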
