Jenkins build job B uses build job A's Docker image configuration - jenkins

Jenkins is running in an AWS EKS cluster under the jenkins-ci namespace. When the multibranch pipeline job "Branch-A" starts a build, it picks up the correct configuration (KubernetesPod.yaml) and runs successfully, but when job "Branch-B" starts a build it uses job A's configuration, including the Docker image and buildUrl.
GitLab configuration:

Branch-A -- KubernetesPod.yaml

apiVersion: v1
kind: Pod
spec:
  serviceAccount: jenkins
  nodeSelector:
    env: jenkins-build
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: env
            operator: In
            values:
            - jenkins-build
  tolerations:
  - key: "highcpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  volumes:
  - name: dev
    hostPath:
      path: /dev
  imagePullSecrets:
  - name: gitlab
  containers:
  - name: build
    image: registry.gitlab.com/mycompany/sw-group/docker/ycp:docker-buildtest-1
    imagePullPolicy: IfNotPresent
    command:
    - cat
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /dev
      name: dev
    tty: true
    resources:
      requests:
        memory: "4000Mi"
        cpu: "3500m"
      limits:
        memory: "4000Mi"
        cpu: "3500m"
Branch-B -- KubernetesPod.yaml

apiVersion: v1
kind: Pod
spec:
  serviceAccount: jenkins
  nodeSelector:
    env: jenkins-build
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: env
            operator: In
            values:
            - jenkins-build
  tolerations:
  - key: "highcpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  volumes:
  - name: dev
    hostPath:
      path: /dev
  imagePullSecrets:
  - name: gitlab
  containers:
  - name: build
    image: registry.gitlab.com/mycompany/sw-group/docker/ycp:docker-buildtest-2
    imagePullPolicy: IfNotPresent
    command:
    - cat
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /dev
      name: dev
    tty: true
    resources:
      requests:
        memory: "4000Mi"
        cpu: "3500m"
      limits:
        memory: "4000Mi"
        cpu: "3500m"
Jenkins Branch-A console output:
Seen branch in repository origin/unknownMishariBranch
Seen branch in repository origin/vikg/base
Seen 471 remote branches
Obtained Jenkinsfile.kubernetes from 85b8ab296342b98be52cbef26acf20b15503c273
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] readTrusted
Obtained KubernetesPod.yaml from 85b8ab296342b98be52cbef26acf20b15503c273
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
Waiting for next available executor
Agent mycompany-pod-8whw9-wxflb is provisioned from template Kubernetes Pod Template
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "https://jenkins.mycompany.com/job/multibranch/job/branch-A/3/"
  labels:
    jenkins: "slave"
    jenkins/mycompany-pod: "true"
  name: "mycompany-pod-8whw9-wxflb"
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: "env"
            operator: "In"
            values:
            - "jenkins-build"
        weight: 1
  containers:
  - command:
    - "cat"
    image: "registry.gitlab.com/mycompany/sw-group/docker/ycp:docker-buildtest-1"
    imagePullPolicy: "IfNotPresent"
    name: "build"
    resources:
      limits:
        memory: "4000Mi"
        cpu: "3500m"
      requests:
        memory: "4000Mi"
        cpu: "3500m"
Jenkins Branch-B console output:
Seen branch in repository origin/unknownMishariBranch
Seen branch in repository origin/viking/base
Seen 479 remote branches
Obtained Jenkinsfile.kubernetes from 38ace636171311ef35dc14245bf7a36f49f24e11
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] readTrusted
Obtained KubernetesPod.yaml from 38ace636171311ef35dc14245bf7a36f49f24e11
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
Waiting for next available executor
Agent mycompany-pod-qddx4-08xtm is provisioned from template Kubernetes Pod Template
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "https://jenkins.mycompany.com/job/multibranch/job/branch-A/3/"
  labels:
    jenkins: "slave"
    jenkins/mycompany-pod: "true"
  name: "mycompany-pod-qddx4-08xtm"
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: "env"
            operator: "In"
            values:
            - "jenkins-build"
        weight: 1
  containers:
  - command:
    - "cat"
    image: "registry.gitlab.com/mycompany/sw-group/docker/ycp:docker-buildtest-1"
    imagePullPolicy: "IfNotPresent"
    name: "build"
    resources:
      limits:
        memory: "4000Mi"
        cpu: "3500m"
      requests:
        memory: "4000Mi"
        cpu: "3500m"

Whenever a build was triggered it was using the same static label in the Jenkinsfile, so the Kubernetes plugin kept reusing the pod template first registered under that label; that is why Branch-B was provisioned with Branch-A's image and buildUrl. I am posting the relevant part of my Jenkinsfile below.
Making the label unique per build solved my problem.
Before:

pipeline {
    agent {
        kubernetes {
            label "sn-optimus"
            defaultContainer "jnlp"
            yamlFile "KubernetesPod.yaml"
        }
    }
After:

pipeline {
    agent {
        kubernetes {
            label "sn-optimus-${currentBuild.startTimeInMillis}"
            defaultContainer "jnlp"
            yamlFile "KubernetesPod.yaml"
        }
    }
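An alternative sketch, assuming a recent version of the kubernetes plugin (which auto-generates a unique label when none is given), is to omit label entirely:

pipeline {
    agent {
        kubernetes {
            // No label: the plugin generates a unique one per build,
            // so pod templates are never shared between branches.
            defaultContainer "jnlp"
            yamlFile "KubernetesPod.yaml"
        }
    }
    // ... stages as before
}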

Related

How to run docker commands inside container containing devops agent

I have to run a DevOps agent inside a Docker container in order to run my DevOps pipeline tasks.
As you can see, after the pipeline is initialized, my agent has to build and publish an image.
This container should also run inside Rancher as a pod.
On my PC I figured out that I have to use
docker run -v /var/run/docker.sock:/var/run/docker.sock
to get it working, but I don't know how to configure the equivalent in Rancher.
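For reference, the Kubernetes equivalent of that docker run flag is a hostPath volume for the socket plus a matching volumeMount — a minimal sketch (container name and image are illustrative, and it assumes the node actually runs a Docker daemon):

apiVersion: v1
kind: Pod
metadata:
  name: agent-with-docker
spec:
  containers:
  - name: agent             # illustrative name
    image: my-agent:latest  # illustrative image
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: dockersock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock   # the node's Docker socket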
Here is the actual YAML configuration of this pod, where '*****' marks sensitive data:

spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: apps.deployment-**************
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2022-10-25T11:22:39Z"
      creationTimestamp: null
      labels:
        workload.user.cattle.io/workloadselector: apps.deployment-**************
    spec:
      affinity: {}
      containers:
      - env:
        - name: AZP_URL
          value: ***********************
        - name: AZP_TOKEN
          valueFrom:
            secretKeyRef:
              key: AZP_TOKEN
              name: pat
              optional: false
        - name: AZP_AGENT_NAME
          value: ********************
        - name: AZP_POOL
          value: *******************
        image: ******************************************
        imagePullPolicy: Always
        name: *********************
        resources:
          limits:
            cpu: "3"
            memory: 6Gi
          requests:
            cpu: 500m
            memory: 512Mi
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: dockersock
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: azure-registry
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /var/run/docker.sock
          type: ""
        name: dockersock
Also, here is the error message I was receiving in the pipeline log:
##[error]Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
##[error]The process '/usr/bin/docker' failed with exit code 1

Jenkins on Kubernetes - working directory not accessible using workspaceVolume dynamicPVC

I'm running Jenkins on an EKS cluster with the k8s plugin, and I'd like to write a declarative pipeline in which I specify the pod template in each stage. A basic example would be the following, in which a file is created in the first stage and printed in the second one:
pipeline {
    agent none
    stages {
        stage('First sample') {
            agent {
                kubernetes {
                    label 'mvn-pod'
                    yaml """
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
"""
                }
            }
            steps {
                container('maven') {
                    sh "echo 'hello' > test.txt"
                }
            }
        }
        stage('Second sample') {
            agent {
                kubernetes {
                    label 'bysbox-pod'
                    yaml """
spec:
  containers:
  - name: busybox
    image: busybox
"""
                }
            }
            steps {
                container('busybox') {
                    sh "cat test.txt"
                }
            }
        }
    }
}
This clearly doesn't work, since the two pods don't share any storage. Reading this doc I realized I can use workspaceVolume dynamicPVC() in the YAML declaration of the pod, so that the plugin creates and manages a persistentVolumeClaim in which, hopefully, I can write the data I need to share between stages.
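For context, the agent block with a dynamic workspace volume looks roughly like this (a minimal sketch; the parameter names follow the kubernetes plugin's workspaceVolume option, and the size is an arbitrary example):

pipeline {
    agent {
        kubernetes {
            label 'mvn-pod'
            // The plugin provisions a PVC for the workspace and mounts it
            // at /home/jenkins/agent in every container of the pod.
            workspaceVolume dynamicPVC(requestsSize: "1Gi")
            yaml """
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
"""
        }
    }
    // ...
}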
Now, with workspaceVolume dynamicPVC(...) both the PV and the PVC are successfully created, but the pod goes into error and terminates. In particular, the pod provisioned is the following:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/psp: eks.privileged
    runUrl: job/test-libraries/job/sample-k8s/12/
  creationTimestamp: "2020-08-07T08:57:09Z"
  deletionGracePeriodSeconds: 30
  deletionTimestamp: "2020-08-07T08:58:09Z"
  labels:
    jenkins: slave
    jenkins/label: bibibu
  name: bibibu-ggb5h-bg68p
  namespace: jenkins-slaves
  resourceVersion: "29184450"
  selfLink: /api/v1/namespaces/jenkins-slaves/pods/bibibu-ggb5h-bg68p
  uid: 1c1e78a5-fcc7-4c86-84b1-8dee43cf3f98
spec:
  containers:
  - image: maven:3.3.9-jdk-8-alpine
    imagePullPolicy: IfNotPresent
    name: maven
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    tty: true
    volumeMounts:
    - mountPath: /home/jenkins/agent
      name: workspace-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-5bt8c
      readOnly: true
  - env:
    - name: JENKINS_SECRET
      value: ...
    - name: JENKINS_AGENT_NAME
      value: bibibu-ggb5h-bg68p
    - name: JENKINS_NAME
      value: bibibu-ggb5h-bg68p
    - name: JENKINS_AGENT_WORKDIR
      value: /home/jenkins/agent
    - name: JENKINS_URL
      value: ...
    image: jenkins/inbound-agent:4.3-4
    imagePullPolicy: IfNotPresent
    name: jnlp
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /home/jenkins/agent
      name: workspace-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-5bt8c
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ...
  nodeSelector:
    kubernetes.io/os: linux
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: workspace-volume
    persistentVolumeClaim:
      claimName: pvc-bibibu-ggb5h-bg68p
  - name: default-token-5bt8c
    secret:
      defaultMode: 420
      secretName: default-token-5bt8c
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-08-07T08:57:16Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-08-07T08:57:16Z"
    message: 'containers with unready status: [jnlp]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-08-07T08:57:16Z"
    message: 'containers with unready status: [jnlp]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-08-07T08:57:16Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://9ed5052e9755ee4f974704fa4b74f2d89702283a4437e60a9945cf4ec7d6da68
    image: jenkins/inbound-agent:4.3-4
    imageID: docker-pullable://jenkins/inbound-agent@sha256:62f48a12d41e02e557ee9f7e4ffa82c77925b817ec791c8da5f431213abc2828
    lastState: {}
    name: jnlp
    ready: false
    restartCount: 0
    state:
      terminated:
        containerID: docker://9ed5052e9755ee4f974704fa4b74f2d89702283a4437e60a9945cf4ec7d6da68
        exitCode: 1
        finishedAt: "2020-08-07T08:57:35Z"
        reason: Error
        startedAt: "2020-08-07T08:57:35Z"
  - containerID: docker://96f747a132ee98f7bf2488bd3cde247380aea5dd6f84bdcd7e6551dbf7c08943
    image: maven:3.3.9-jdk-8-alpine
    imageID: docker-pullable://maven@sha256:3ab854089af4b40cf3f1a12c96a6c84afe07063677073451c2190cdcec30391b
    lastState: {}
    name: maven
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2020-08-07T08:57:35Z"
  hostIP: 10.108.171.224
  phase: Running
  podIP: 10.108.171.158
  qosClass: Burstable
  startTime: "2020-08-07T08:57:16Z"
Retrieving the logs from the jnlp container on the pod with kubectl logs name-of-the-pod -c jnlp -n jenkins-slaves led me to this error:
Exception in thread "main" java.io.IOException: The specified working directory should be fully accessible to the remoting executable (RWX): /home/jenkins/agent
at org.jenkinsci.remoting.engine.WorkDirManager.verifyDirectory(WorkDirManager.java:249)
at org.jenkinsci.remoting.engine.WorkDirManager.initializeWorkDir(WorkDirManager.java:201)
at hudson.remoting.Engine.startEngine(Engine.java:288)
at hudson.remoting.Engine.startEngine(Engine.java:264)
at hudson.remoting.jnlp.Main.main(Main.java:284)
at hudson.remoting.jnlp.Main._main(Main.java:279)
at hudson.remoting.jnlp.Main.main(Main.java:231)
I also tried to specify the accessModes as a parameter of dynamicPVC, but the error is the same.
What am I doing wrong?
Thanks
The Docker image being used is configured to run as the non-root user jenkins. By default, PVCs are created allowing only root-user access.
This can be changed using the pod's security context, e.g.

securityContext:
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000

(The jenkins user in that image has ID 1000.)
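To show where that block goes, here is a minimal sketch of the agent definition with a pod-level securityContext (pod-level so that fsGroup is applied to the dynamically provisioned workspace PVC; the maven container and the user/group IDs are taken from the question and answer above, the size is an arbitrary example):

pipeline {
    agent {
        kubernetes {
            workspaceVolume dynamicPVC(requestsSize: "1Gi")
            yaml """
spec:
  securityContext:
    runAsUser: 1000   # jenkins user in jenkins/inbound-agent
    runAsGroup: 1000
    fsGroup: 1000     # makes the mounted PVC writable for GID 1000
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
"""
        }
    }
    // ...
}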

Create a PV/PVC at runtime for each Jenkins job running on a slave agent

I am looking to create a PV/PVC at runtime for each Jenkins job running on a slave agent.
Basically, what I am trying to achieve is: create a PV, share it between pods, and later delete it when the job is done.
pipeline {
    agent {
        kubernetes {
            label 'scm'
            yaml """
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: fast
  resources:
    requests:
      storage: 1Gi
---
apiVersion: "v1"
kind: "Pod"
spec:
  containers:
  - image: "jenkins/jnlp-slave:3.35-5-alpine"
    name: "jnlp"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - command:
    - "cat"
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/wsp1"
      name: "workspace-volume"
      readOnly: false
  volumes:
  - name: "workspace-volume"
    persistentVolumeClaim:
      claimName: claim1
"""
        }
    }
    stages {
        stage('Checkout code') {
            agent { label 'scm' }
            steps {
                git branch: 'master',
                    credentialsId: 'key',
                    url: 'giturl'
                sh "ls -lat"
            }
        }
        stage('Build') {
            agent {
                kubernetes {
                    label 'Build-pod'
                    yaml """
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
    command:
    - cat
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
"""
                }
            }
            steps {
                sh "echo Workspace dir is ${pwd()}"
                sh "mvn clean install"
            }
        }
    }
}
The above script does not work, for obvious reasons. Are there any other solutions?
Also, how can I generate a name for the PVC at runtime and reference it in the pod, instead of hard-coding it as below?

metadata:
  name: claim1

I am running Jenkins on k8s via the Helm chart.
You can create a PVC per pod using something like workspaceVolume: dynamicPVC(requestsSize: "10Gi"), but it will be tied to the pod and deleted when the pod is deleted.
https://github.com/jenkinsci/kubernetes-plugin/blob/342166c1864e84791f2e94dd823709eb6e672a6e/src/test/resources/org/csanchez/jenkins/plugins/kubernetes/pipeline/dynamicPVC.groovy
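In scripted-pipeline form (mirroring the linked test resource), that option would look roughly like this — a sketch, assuming the requestsSize parameter shown in the answer; the maven container is an illustrative choice:

podTemplate(workspaceVolume: dynamicPVC(requestsSize: "10Gi"), containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', command: 'cat', ttyEnabled: true)
]) {
    node(POD_LABEL) {
        container('maven') {
            // The workspace lives on the dynamically provisioned PVC
            // and disappears together with the pod.
            sh 'echo hello > test.txt && cat test.txt'
        }
    }
}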

kubernetes jenkins plugin does not create 2 containers

I have 2 Jenkins instances, one using Kubernetes plugin version 1.8 and the second version 1.18.
The older version is able to create both containers:
Agent specification [Kubernetes Pod Template] (mo-aio-build-supplier):
* [jnlp] mynexus.services.com/mo-base/jenkins-slave-mo-aio:1.8.2-ca(resourceRequestCpu: 0.25, resourceRequestMemory: 256Mi, resourceLimitCpu: 1, resourceLimitMemory: 1.5Gi)
* [postgres] mynexus.services.com:443/mo-base/mo-base-postgresql-95-openshift
The newer version is not able to create the postgres container:

Container postgres exited with error 1. Logs: mkdir: cannot create directory '/home/jenkins': Permission denied

Both use the same podTemplate:
podTemplate(
    name: label,
    label: label,
    cloud: 'openshift',
    serviceAccount: 'jenkins',
    containers: [
        containerTemplate(
            name: 'jnlp',
            image: 'mynexus.services.theosmo.com/jenkins-slave-mo-aio:v3.11.104-14_jdk8',
            resourceRequestCpu: env.CPU_REQUEST,
            resourceLimitCpu: env.CPU_LIMIT,
            resourceRequestMemory: env.RAM_REQUEST,
            resourceLimitMemory: env.RAM_LIMIT,
            workingDir: '/tmp',
            args: '${computer.jnlpmac} ${computer.name}',
            command: ''
        ),
        containerTemplate(
            name: 'postgres',
            image: 'mynexus.services.theosmo.com:443/mo-base/mo-base-postgresql-95-openshift',
            envVars: [
                envVar(key: "POSTGRESQL_USER", value: "admin"),
                envVar(key: "POSTGRESQL_PASSWORD", value: "admin"),
                envVar(key: "POSTGRESQL_DATABASE", value: "supplier_data"),
            ]
        )
    ],
    volumes: [emptyDirVolume(mountPath: '/dev/shm', memory: true)]
)
Also, I've noticed that the YAML created by the newer version is a bit odd:
apiVersion: "v1"
kind: "Pod"
metadata:
annotations:
buildUrl: "http://jenkins.svc:80/job/build-supplier/473/"
labels:
jenkins: "slave"
jenkins/mo-aio-build-supplier: "true"
name: "mo-aio-build-supplier-xfgmn-qmrdl"
spec:
containers:
- args:
- "********"
- "mo-aio-build-supplier-xfgmn-qmrdl"
env:
- name: "JENKINS_SECRET"
value: "********"
- name: "JENKINS_TUNNEL"
value: "jenkins-jnlp.svc:50000"
- name: "JENKINS_AGENT_NAME"
value: "mo-aio-build-supplier-xfgmn-qmrdl"
- name: "JENKINS_NAME"
value: "mo-aio-build-supplier-xfgmn-qmrdl"
- name: "JENKINS_AGENT_WORKDIR"
value: "/tmp"
- name: "JENKINS_URL"
value: "http://jenkins.svc:80/"
- name: "HOME"
value: "/home/jenkins"
image: "mynexus.services.com/mo-base/jenkins-slave-mo-aio:1.8.2-ca"
imagePullPolicy: "IfNotPresent"
name: "jnlp"
resources:
limits:
memory: "1.5Gi"
cpu: "1"
requests:
memory: "256Mi"
cpu: "0.25"
securityContext:
privileged: false
tty: false
volumeMounts:
- mountPath: "/dev/shm"
name: "volume-0"
readOnly: false
- mountPath: "/tmp"
name: "workspace-volume"
readOnly: false
workingDir: "/tmp"
- env:
- name: "POSTGRESQL_DATABASE"
value: "supplier_data"
- name: "POSTGRESQL_USER"
value: "admin"
- name: "HOME"
value: "/home/jenkins"
- name: "POSTGRESQL_PASSWORD"
value: "admin"
image: "mynexus.services.com:443/mo-base/mo-base-postgresql-95-openshift"
imagePullPolicy: "IfNotPresent"
name: "postgres"
resources:
limits: {}
requests: {}
securityContext:
privileged: false
tty: false
volumeMounts:
- mountPath: "/dev/shm"
name: "volume-0"
readOnly: false
- mountPath: "/home/jenkins/agent"
name: "workspace-volume"
readOnly: false
workingDir: "/home/jenkins/agent"
nodeSelector: {}
restartPolicy: "Never"
serviceAccount: "jenkins"
volumes:
- emptyDir:
medium: "Memory"
name: "volume-0"
- emptyDir: {}
name: "workspace-volume"
As you can see above, the postgres container appears under an env tree.
Any suggestions? Thanks in advance.
As far as I checked, this is a known issue.
The problem
Since Kubernetes plugin version 1.18.0, the default working directory of the pod containers was changed from /home/jenkins to /home/jenkins/agent, but the default HOME environment variable is still forced to /home/jenkins. The impact of this change is that if pod container images do not have a /home/jenkins directory with sufficient permissions for the running user, builds will fail to do anything directly under their HOME directory, /home/jenkins.
Resolution
There are different workarounds to that problem:

Change the default HOME variable
The simplest and preferred workaround is to add the system property -Dorg.csanchez.jenkins.plugins.kubernetes.PodTemplateBuilder.defaultHome=/home/jenkins/agent on Jenkins startup. This requires a restart.
This workaround reflects the behavior of the Kubernetes plugin pre-1.18.0, but on the new working directory /home/jenkins/agent.

Use /home/jenkins as the working directory
A workaround is to change the working directory of pod containers back to /home/jenkins. This is only possible when using YAML to define agent pod templates (see JENKINS-60977); a sketch follows after this list.

Prepare images for Jenkins
A workaround could be to ensure that the images used in agent pods have a /home/jenkins directory that is owned by the root group and writable by the root group, as mentioned in the OpenShift Container Platform-specific guidelines.
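For the second workaround, a minimal sketch of forcing the old working directory via a YAML pod template (the container images are taken from the question; the key point is workingDir on each container):

podTemplate(yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: mynexus.services.theosmo.com/jenkins-slave-mo-aio:v3.11.104-14_jdk8
    workingDir: /home/jenkins
  - name: postgres
    image: mynexus.services.theosmo.com:443/mo-base/mo-base-postgresql-95-openshift
    workingDir: /home/jenkins
""") {
    node(POD_LABEL) {
        // build steps go here
    }
}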
Additionally, there is an issue tracking this on the Jenkins issue tracker.
Hope this helps.

Kubernetes DaemonSet Permission Denied on mounted Volume - Docker in Docker dind

I tried running a simple DaemonSet on a kube cluster. The idea was that other kube pods would connect to that container's Docker daemon (dockerd) and execute commands on it (the other pods are Jenkins slaves and would just have the env var DOCKER_HOST point to 'tcp://localhost:2375'). In short, the config looks like this:
dind.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dind
spec:
  selector:
    matchLabels:
      name: dind
  template:
    metadata:
      labels:
        name: dind
    spec:
      # tolerations:
      # - key: node-role.kubernetes.io/master
      #   effect: NoSchedule
      containers:
      - name: dind
        image: docker:18.05-dind
        resources:
          limits:
            memory: 2000Mi
          requests:
            cpu: 100m
            memory: 500Mi
        volumeMounts:
        - name: dind-storage
          mountPath: /var/lib/docker
      volumes:
      - name: dind-storage
        emptyDir: {}
Error message when running:
mount: mounting none on /sys/kernel/security failed: Permission denied
Could not mount /sys/kernel/security.
AppArmor detection and --privileged mode might break.
mount: mounting none on /tmp failed: Permission denied
I took the idea from a Medium post that didn't describe it fully: https://medium.com/hootsuite-engineering/building-docker-images-inside-kubernetes-42c6af855f25 (it covers Docker-outside-of-Docker, Docker-in-Docker, and Kaniko).
I found the solution: the dind daemon has to run as a privileged sidecar container in the same pod, with clients reaching it via DOCKER_HOST:
apiVersion: v1
kind: Pod
metadata:
  name: dind
spec:
  containers:
  - name: jenkins-slave
    image: gcr.io/<my-project>/myimg # it has docker installed on it
    command: ['docker', 'run', '-p', '80:80', 'httpd:latest']
    resources:
      requests:
        cpu: 10m
        memory: 256Mi
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375
  - name: dind-daemon
    image: docker:18.05-dind
    resources:
      requests:
        cpu: 20m
        memory: 512Mi
    securityContext:
      privileged: true
    volumeMounts:
    - name: docker-graph-storage
      mountPath: /var/lib/docker
  volumes:
  - name: docker-graph-storage
    emptyDir: {}
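One caveat if you reuse this pattern with newer images: since Docker 19.03 the docker:dind images enable TLS by default and listen on 2376, so to keep the plain tcp://localhost:2375 endpoint shown above you would have to disable TLS explicitly. A sketch of the adjusted daemon container (the image tag is an illustrative newer one):

  - name: dind-daemon
    image: docker:24-dind
    env:
    - name: DOCKER_TLS_CERTDIR   # empty value disables TLS; daemon listens on 2375
      value: ""
    securityContext:
      privileged: true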
