I have Jenkins running in K8s, and now I am trying to run docker build as one of the steps in a Jenkins build. Since Jenkins is running inside Docker, I came to the solution of using Docker in Docker, from this post: https://medium.com/hootsuite-engineering/building-docker-images-inside-kubernetes-42c6af855f25
However, after I modified the deployment YAML file, it still does not work.
There are two containers running: jenkins (the Jenkins image) and dind (the Docker-in-Docker image). I can run docker commands inside the dind container, but I cannot run docker commands in the Jenkins container.
Here is the YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "9"
field.cattle.io/publicEndpoints: '[{"addresses":["10.0.0.111"],"port":80,"protocol":"HTTP","serviceName":"jenkins-with-did:jenkins-with-did","ingressName":"jenkins-with-did:jenkins-with-did","hostname":"jenkins.dtl.miproad.ad","allNodes":true}]'
creationTimestamp: "2020-04-30T06:38:40Z"
generation: 11
labels:
app.kubernetes.io/component: jenkins-master
app.kubernetes.io/instance: jenkins-with-did
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: jenkins
helm.sh/chart: jenkins-1.18.0
io.cattle.field/appId: jenkins-with-did
name: jenkins-with-did
namespace: jenkins-with-did
resourceVersion: "29233038"
selfLink: /apis/apps/v1/namespaces/jenkins-with-did/deployments/jenkins-with-did
uid: 6439c48d-c4ce-418c-8553-d06fee13c7d1
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: jenkins-master
app.kubernetes.io/instance: jenkins-with-did
strategy:
type: Recreate
template:
metadata:
annotations:
cattle.io/timestamp: "2020-04-30T18:15:50Z"
checksum/config: fda7089fede91f066c406bbba5e2a1d59f71183eebe9bca3fe7de19d13504058
field.cattle.io/ports: '[[{"containerPort":8080,"dnsName":"jenkins-with-did","hostPort":0,"kind":"ClusterIP","name":"http","protocol":"TCP","sourcePort":0},{"containerPort":50000,"dnsName":"jenkins-with-did","hostPort":0,"kind":"ClusterIP","name":"slavelistener","protocol":"TCP","sourcePort":0}]]'
creationTimestamp: null
labels:
app.kubernetes.io/component: jenkins-master
app.kubernetes.io/instance: jenkins-with-did
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: jenkins
helm.sh/chart: jenkins-1.18.0
spec:
containers:
- args:
- --argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
- --argumentsRealm.roles.$(ADMIN_USER)=admin
- --httpPort=8080
env:
- name: JAVA_OPTS
- name: JENKINS_OPTS
- name: JENKINS_SLAVE_AGENT_PORT
value: "50000"
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins-with-did
optional: false
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins-with-did
optional: false
image: jenkins/jenkins:lts
imagePullPolicy: Always
livenessProbe:
failureThreshold: 5
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 90
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
env:
- name: DOCKER_HOST
value: tcp://localhost:2375
name: jenkins
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 50000
name: slavelistener
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
cpu: "2"
memory: 4Gi
requests:
cpu: 50m
memory: 256Mi
securityContext:
capabilities: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /tmp
name: tmp
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
readOnly: true
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /usr/share/jenkins/ref/plugins/
name: plugin-dir
- image: docker:18.05-dind
imagePullPolicy: IfNotPresent
name: dind
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/docker
name: dind-storage
dnsPolicy: ClusterFirst
initContainers:
- command:
- sh
- /var/jenkins_config/apply_config.sh
env:
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins-with-did
optional: false
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins-with-did
optional: false
image: jenkins/jenkins:lts
imagePullPolicy: Always
name: copy-default-config
resources:
limits:
cpu: "2"
memory: 4Gi
requests:
cpu: 50m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/docker
name: dind-storage
- mountPath: /tmp
name: tmp
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /var/jenkins_plugins
name: plugin-dir
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 0
serviceAccount: jenkins-with-did
serviceAccountName: jenkins-with-did
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: dind-storage
- emptyDir: {}
name: plugins
- emptyDir: {}
name: tmp
- configMap:
defaultMode: 420
name: jenkins-with-did
name: jenkins-config
- emptyDir: {}
name: secrets-dir
- emptyDir: {}
name: plugin-dir
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-with-did
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2020-04-30T18:20:47Z"
lastUpdateTime: "2020-04-30T18:20:47Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2020-04-30T06:38:40Z"
lastUpdateTime: "2020-04-30T18:20:47Z"
message: ReplicaSet "jenkins-with-did-5db85986b6" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 11
readyReplicas: 1
replicas: 1
updatedReplicas: 1
Thank you so much in advance!
Your idea is a valid approach.
The regular Jenkins image does not provide the Docker CLI, so running docker does not work out of the box. You can either build your own Jenkins image that provides the docker command, or use a prebuilt Jenkins image that includes the Docker CLI, for example: https://hub.docker.com/r/trion/jenkins-docker-client
Alternatively, you can use hostPath volumes and mount /usr/bin/docker, /lib64 and /usr/lib64 from the node into your pod. This needs privileged: true in the securityContext; a sketch follows below.
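A minimal sketch of that hostPath variant, assuming the node keeps the Docker binary at /usr/bin/docker (library locations differ per distribution, so the paths may need adjusting); together with the DOCKER_HOST=tcp://localhost:2375 variable already set in the deployment above, the mounted CLI would talk to the dind sidecar:
      containers:
      - name: jenkins
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/bin/docker   # Docker CLI binary from the node
          name: docker-bin
          readOnly: true
        - mountPath: /lib64            # shared libraries the CLI links against
          name: lib64
          readOnly: true
        - mountPath: /usr/lib64
          name: usr-lib64
          readOnly: true
      volumes:
      - name: docker-bin
        hostPath:
          path: /usr/bin/docker
      - name: lib64
        hostPath:
          path: /lib64
      - name: usr-lib64
        hostPath:
          path: /usr/lib64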
Related
I have to run an Azure DevOps agent inside a Docker container in order to run my DevOps pipeline tasks.
After the pipeline is initialized, my agent has to build and publish an image.
This container should also run inside Rancher as a pod.
On my PC I figured out that I have to use
docker run -v /var/run/docker.sock:/var/run/docker.sock
in order to get it working, but I don't know how to configure the equivalent in Rancher.
Here is the actual YAML configuration of this pod, where '*****' masks sensitive data:
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
workload.user.cattle.io/workloadselector: apps.deployment-**************
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
cattle.io/timestamp: "2022-10-25T11:22:39Z"
creationTimestamp: null
labels:
workload.user.cattle.io/workloadselector: apps.deployment-**************
spec:
affinity: {}
containers:
- env:
- name: AZP_URL
value: ***********************
- name: AZP_TOKEN
valueFrom:
secretKeyRef:
key: AZP_TOKEN
name: pat
optional: false
- name: AZP_AGENT_NAME
value: ********************
- name: AZP_POOL
value: *******************
image: ******************************************
imagePullPolicy: Always
name: *********************
resources:
limits:
cpu: "3"
memory: 6Gi
requests:
cpu: 500m
memory: 512Mi
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/docker.sock
name: dockersock
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: azure-registry
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /var/run/docker.sock
type: ""
name: dockersock
Also, here is the error message I was receiving in the pipeline log:
##[error]Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
##[error]The process '/usr/bin/docker' failed with exit code 1
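For reference, the docker run bind mount translates to a hostPath volume much like the one above; one detail worth adding is type: Socket, which makes the kubelet refuse to start the pod when /var/run/docker.sock does not exist on the node (for instance on nodes running containerd instead of Docker, a common cause of exactly this "Cannot connect to the Docker daemon" error). A minimal sketch, with the pod name and image as placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: azp-agent                          # hypothetical name
spec:
  containers:
  - name: agent
    image: my-registry/azp-agent:latest    # placeholder image
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: dockersock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
      type: Socket                         # fail fast if the node exposes no Docker socket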
Since yesterday I have been having problems with my Jenkins pod: it fails to initialize. I haven't updated any configuration in the meantime.
This is what my pod configuration looks like:
apiVersion: v1
kind: Pod
metadata:
annotations:
checksum/config: eed56a3d795865e4432dea721435a777ee100059998724f0d57bf1f9378dbb88
creationTimestamp: 2020-09-17T14:14:12Z
generateName: jenkins-74cc957b47-
labels:
app: jenkins
chart: jenkins-0.35.0
component: jenkins-jenkins-master
heritage: Tiller
pod-template-hash: "3077513603"
release: jenkins
name: jenkins-74cc957b47-zf67f
namespace: infrastructure
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: jenkins-74cc957b47
uid: 77b5d3d7-f0f1-11ea-acd2-02be15828c0e
resourceVersion: "158370354"
selfLink: /api/v1/namespaces/infrastructure/pods/jenkins-74cc957b47-zf67f
uid: 0fcefd0d-f8f0-11ea-acd2-02be15828c0e
spec:
containers:
- args:
- --argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
- --argumentsRealm.roles.$(ADMIN_USER)=admin
env:
- name: JAVA_OPTS
- name: JENKINS_OPTS
- name: JENKINS_SLAVE_AGENT_PORT
value: "50000"
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins
image: jenkins/jenkins:2.247
imagePullPolicy: Always
livenessProbe:
failureThreshold: 12
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 90
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: jenkins
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 50000
name: slavelistener
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: "1280m"
memory: 3Gi
requests:
cpu: 50m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
readOnly: true
- mountPath: /usr/share/jenkins/ref/plugins/
name: plugin-dir
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5tbbb
readOnly: true
dnsPolicy: ClusterFirst
initContainers:
- command:
- sh
- /var/jenkins_config/apply_config.sh
env:
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins
image: jenkins/jenkins:lts
imagePullPolicy: Always
name: copy-default-config
resources:
limits:
cpu: "1280m"
memory: 3Gi
requests:
cpu: 50m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
- mountPath: /var/jenkins_plugins
name: plugin-dir
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5tbbb
readOnly: true
nodeName: ip-172-20-62-226.eu-west-1.compute.internal
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 0
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- configMap:
defaultMode: 420
name: jenkins
name: jenkins-config
- emptyDir: {}
name: plugin-dir
- emptyDir: {}
name: secrets-dir
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins
- name: default-token-5tbbb
secret:
defaultMode: 420
secretName: default-token-5tbbb
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2020-09-17T14:15:03Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2020-09-17T14:17:11Z
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: null
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: 2020-09-17T14:14:12Z
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://688f4ad7dde842c2b5d6a0f1fd3cdd7ca156c8457336ca07f1d11270c2df0779
image: jenkins/jenkins:lts
imageID: docker-pullable://jenkins/jenkins@sha256:a3e7b2b6efbc2c252608b028bb844e419d44ad5e3974770c4543ab7ae6e8eb27
lastState: {}
name: jenkins
ready: true
restartCount: 0
state:
running:
startedAt: 2020-09-17T14:15:05Z
hostIP: 172.20.62.226
initContainerStatuses:
- containerID: docker://6761eab1b990aa42c7ec21ee84d1e2362eeddf9373f595ccb13b0e59c0462505
image: jenkins/jenkins:lts
imageID: docker-pullable://jenkins/jenkins@sha256:a3e7b2b6efbc2c252608b028bb844e419d44ad5e3974770c4543ab7ae6e8eb27
lastState: {}
name: copy-default-config
ready: true
restartCount: 0
state:
terminated:
containerID: docker://6761eab1b990aa42c7ec21ee84d1e2362eeddf9373f595ccb13b0e59c0462505
exitCode: 0
finishedAt: 2020-09-17T14:15:02Z
reason: Completed
startedAt: 2020-09-17T14:14:41Z
phase: Running
podIP: 100.105.185.69
qosClass: Burstable
startTime: 2020-09-17T14:14:12Z
I have tried to edit it and set a specific Jenkins image version, for example image: jenkins/jenkins:2.219, but it is still unable to initialize.
When I run kubectl logs jenkins-df87c46d5-52dtt -c copy-default-config -n infrastructure I can see the following log:
11:21:05 Failed in the last attempt (curl -sSfL --connect-timeout 20 --retry 3 --retry-delay 0 --retry-max-time 60 https://updates.jenkins.io/dynamic-2.248//latest/workflow-cps.hpi -o /usr/share/jenkins/ref/plugins/workflow-cps.jpi)
Downloading plugin: workflow-cps-plugin from https://updates.jenkins.io/dynamic-2.248//latest/workflow-cps-plugin.hpi
curl: (28) Resolving timed out after 20527 milliseconds
11:21:05 Failure (28) Retrying in 1 seconds...
curl: (28) Resolving timed out after 20526 milliseconds
11:21:08 Failure (28) Retrying in 1 seconds...
curl: (22) The requested URL returned error: 404 Not Found
11:21:14 Failure (22) Retrying in 1 seconds...
Full output is available here: https://justpaste.it/8h30t
Try this. I have removed the plugin directory, which might be causing the issue:
apiVersion: v1
kind: Pod
metadata:
annotations:
checksum/config: eed56a3d795865e4432dea721435a777ee100059998724f0d57bf1f9378dbb88
creationTimestamp: 2020-09-17T14:14:12Z
generateName: jenkins-74cc957b47-
labels:
app: jenkins
chart: jenkins-0.35.0
component: jenkins-jenkins-master
heritage: Tiller
pod-template-hash: "3077513603"
release: jenkins
name: jenkins-74cc957b47-zf67f
namespace: infrastructure
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: jenkins-74cc957b47
uid: 77b5d3d7-f0f1-11ea-acd2-02be15828c0e
resourceVersion: "158370354"
selfLink: /api/v1/namespaces/infrastructure/pods/jenkins-74cc957b47-zf67f
uid: 0fcefd0d-f8f0-11ea-acd2-02be15828c0e
spec:
containers:
- args:
- --argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
- --argumentsRealm.roles.$(ADMIN_USER)=admin
env:
- name: JAVA_OPTS
- name: JENKINS_OPTS
- name: JENKINS_SLAVE_AGENT_PORT
value: "50000"
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins
image: jenkins/jenkins:2.247
imagePullPolicy: Always
livenessProbe:
failureThreshold: 12
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 90
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: jenkins
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 50000
name: slavelistener
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: "1280m"
memory: 3Gi
requests:
cpu: 50m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
readOnly: true
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5tbbb
readOnly: true
dnsPolicy: ClusterFirst
initContainers:
- command:
- sh
- /var/jenkins_config/apply_config.sh
env:
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins
image: jenkins/jenkins:lts
imagePullPolicy: Always
name: copy-default-config
resources:
limits:
cpu: "1280m"
memory: 3Gi
requests:
cpu: 50m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5tbbb
readOnly: true
nodeName: ip-172-20-62-226.eu-west-1.compute.internal
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 0
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- configMap:
defaultMode: 420
name: jenkins
name: jenkins-config
- emptyDir: {}
name: secrets-dir
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins
- name: default-token-5tbbb
secret:
defaultMode: 420
secretName: default-token-5tbbb
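For reference, the only delta from the original manifest above is the removal of the plugin-dir volume and its two mounts, i.e. the directory the failing plugin downloads were being written into:
# removed from the jenkins container's volumeMounts:
- mountPath: /usr/share/jenkins/ref/plugins/
  name: plugin-dir
# removed from the copy-default-config init container's volumeMounts:
- mountPath: /var/jenkins_plugins
  name: plugin-dir
# removed from volumes:
- emptyDir: {}
  name: plugin-dir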
My company bought software that we're trying to deploy on IBM Cloud, using Kubernetes and a given private Docker repository. Once deployed, there is always a Kubernetes error: "Back-off restarting failed container". So I read the logs to understand why the container keeps restarting, and here is the error:
Caused by: java.io.FileNotFoundException: /var/yseop-log/yseop-manager.log (Permission denied)
So I deduced that I just had to change some permissions in the Kubernetes manifest. Since I'm using a Deployment, I tried the following initContainer:
initContainers:
- name: permission-fix
image: busybox
command: ['sh', '-c']
args: ['chmod -R 777 /var']
volumeMounts:
- mountPath: /var/yseop-engine
name: yseop-data
- mountPath: /var/yseop-data/yseop-manager
name: yseop-data
- mountPath: /var/yseop-log
name: yseop-data
This didn't work, because I'm not allowed to execute chmod on read-only folders as a non-root user.
So I tried remounting those volumes, but that also failed, because I'm not a root user.
I then found out about running as a specific user and group. To find out which user and group I had to put in my security context, I read the Dockerfile, and here are the user and group:
USER 1001:0
So I thought I could just write this in my deployment file:
securityContext:
  runAsUser: 1001
  runAsGroup: 0
Obviously, that didn't work either, because I'm not allowed to run as group 0.
So I still don't know what to do in order to deploy this image properly. The image works when doing a docker pull and exec on my computer, but it's not working on Kubernetes.
Here is my complete volume file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
ibm.io/auto-create-bucket: "true"
ibm.io/auto-delete-bucket: "false"
ibm.io/bucket: ""
ibm.io/secret-name: "cos-write-access"
ibm.io/endpoint: https://s3.eu-de.cloud-object-storage.appdomain.cloud
name: yseop-pvc
namespace: ns
labels:
app: yseop-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: ibmc
volumeMode: Filesystem
And here is my full deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: yseop-manager
namespace: ns
spec:
selector:
matchLabels:
app: yseop-manager
template:
metadata:
labels:
app: yseop-manager
spec:
securityContext:
runAsUser: 1001
runAsGroup: 0
initContainers:
- name: permission-fix
image: busybox
command: ['sh', '-c']
args: ['chmod -R 777 /var']
volumeMounts:
- mountPath: /var/yseop-engine
name: yseop-data
- mountPath: /var/yseop-data/yseop-manager
name: yseop-data
- mountPath: /var/yseop-log
name: yseop-data
containers:
- name: yseop-manager
image: IMAGE
imagePullPolicy: IfNotPresent
env:
- name: SECURITY_USERS_DEFAULT_ENABLED
value: "true"
ports:
- containerPort: 8080
volumeMounts:
- mountPath: /var/yseop-engine
name: yseop-data
- mountPath: /var/yseop-data/yseop-manager
name: yseop-data
- mountPath: /var/yseop-log
name: yseop-data
imagePullSecrets:
- name: regcred
volumes:
- name: yseop-data
persistentVolumeClaim:
claimName: yseop-pvc
Thanks for helping
Can you please try including a supplementary group ID in the security context, like:
securityContext:
  runAsUser: 1001
  fsGroup: 2000
By default, runAsGroup is 0, which is root. The link below might give more insight about this:
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
Working YAML content:
apiVersion: apps/v1
kind: Deployment
metadata:
name: yseop-manager
namespace: ns
spec:
selector:
matchLabels:
app: yseop-manager
template:
metadata:
labels:
app: yseop-manager
spec:
securityContext:
fsGroup: 2000
initContainers:
- name: permission-fix
image: busybox
command: ['sh', '-c']
args: ['chown -R root:2000 /var']
volumeMounts:
- mountPath: /var/yseop-engine
name: yseop-data
- mountPath: /var/yseop-data/yseop-manager
name: yseop-data
- mountPath: /var/yseop-log
name: yseop-data
containers:
- name: yseop-manager
image: IMAGE
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 1001
runAsGroup: 2000
env:
- name: SECURITY_USERS_DEFAULT_ENABLED
value: "true"
ports:
- containerPort: 8080
volumeMounts:
- mountPath: /var/yseop-engine
name: yseop-data
- mountPath: /var/yseop-data/yseop-manager
name: yseop-data
- mountPath: /var/yseop-log
name: yseop-data
imagePullSecrets:
- name: regcred
volumes:
- name: yseop-data
persistentVolumeClaim:
claimName: yseop-pvc
I had not been told by my company that we have restrictive Pod Security Policies. Because of those, the volumes are read-only, and there was no way I could have written anything to them.
The solution is as follows:
volumes:
- name: yseop-data
emptyDir: {}
Then I have to specify a path in volumeMounts (which was already done) and create a PVC, so my data will be persistent.
I'm new to OpenShift/Kubernetes/Docker, and I was wondering where the Docker registry of OpenShift Origin persists its images, knowing that:
1. In the Docker registry deployment's YAML, there is only an emptyDir volume declaration:
volumes:
- emptyDir: {}
  name: registry-storage
2. On the machine where the pod is deployed, I can't see any volume using:
docker volume ls
3. The images are still persisted even if I restart the pod.
The Docker registry deployment's YAML:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
creationTimestamp: '2020-04-26T18:16:50Z'
generation: 1
labels:
docker-registry: default
name: docker-registry
namespace: default
resourceVersion: '1844231'
selfLink: >-
/apis/apps.openshift.io/v1/namespaces/default/deploymentconfigs/docker-registry
uid: 1983153d-87ea-11ea-a4bc-fa163ee581f7
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
docker-registry: default
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
creationTimestamp: null
labels:
docker-registry: default
spec:
containers:
- env:
- name: REGISTRY_HTTP_ADDR
value: ':5000'
- name: REGISTRY_HTTP_NET
value: tcp
- name: REGISTRY_HTTP_SECRET
value:
- name: REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA
value: 'false'
- name: OPENSHIFT_DEFAULT_REGISTRY
value: 'docker-registry.default.svc:5000'
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: /etc/secrets/registry.crt
- name: REGISTRY_OPENSHIFT_SERVER_ADDR
value: 'docker-registry.default.svc:5000'
- name: REGISTRY_HTTP_TLS_KEY
value: /etc/secrets/registry.key
image: 'docker.io/openshift/origin-docker-registry:v3.11'
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 5000
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: registry
ports:
- containerPort: 5000
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 5000
scheme: HTTPS
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
requests:
cpu: 100m
memory: 256Mi
securityContext:
privileged: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /registry
name: registry-storage
- mountPath: /etc/secrets
name: registry-certificates
dnsPolicy: ClusterFirst
nodeSelector:
node-role.kubernetes.io/infra: 'true'
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: registry
serviceAccountName: registry
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: registry-storage
- name: registry-certificates
secret:
defaultMode: 420
secretName: registry-certificates
test: false
triggers:
- type: ConfigChange
status:
availableReplicas: 1
conditions:
- lastTransitionTime: '2020-04-26T18:17:12Z'
lastUpdateTime: '2020-04-26T18:17:12Z'
message: replication controller "docker-registry-1" successfully rolled out
reason: NewReplicationControllerAvailable
status: 'True'
type: Progressing
- lastTransitionTime: '2020-05-05T09:39:57Z'
lastUpdateTime: '2020-05-05T09:39:57Z'
message: Deployment config has minimum availability.
status: 'True'
type: Available
details:
causes:
- type: ConfigChange
message: config change
latestVersion: 1
observedGeneration: 1
readyReplicas: 1
replicas: 1
unavailableReplicas: 0
updatedReplicas: 1
To restart, I just delete the pod, and a new one is created since I'm using a deployment.
I'm creating the file in /registry.
Restarting does not mean the data is deleted; it still exists in the container's writable top layer. I suggest you get started by reading this.
Persistence means, in Kubernetes for example, that when a pod is deleted and re-created on another node, it still maintains the same state of a volume.
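If genuinely durable storage is wanted for the registry, the usual approach is to replace the emptyDir with a PersistentVolumeClaim; a minimal sketch of the volumes section, with a hypothetical claim name:
volumes:
- name: registry-storage
  persistentVolumeClaim:
    claimName: registry-pvc        # hypothetical PVC, replacing the emptyDir
- name: registry-certificates
  secret:
    defaultMode: 420
    secretName: registry-certificates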
I am trying to modify the configuration of a pod to use local time, but it shows as invalid when saving. Do you know what's wrong?
In the volumeMounts section I added these lines:
- mountPath: /etc/localtime
name: tz-config
In the volumes section I added these lines:
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Asia/Ho_Chi_Minh
Here is my yaml file:
apiVersion: v1
kind: Pod
metadata:
.....
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-jgznd
readOnly: true
- mountPath: /etc/localtime
name: tz-config
dnsPolicy: ClusterFirst
.....
volumes:
- name: default-token-jgznd
secret:
defaultMode: 420
secretName: default-token-jgznd
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Asia/Ho_Chi_Minh
UPDATE: Below are the error details:
# pods "hello-75fdf45c64-w7xm8" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
# core.PodSpec{
# Volumes: []core.Volume{
# {Name: "default-token-wcf8m", VolumeSource: core.VolumeSource{Secret: &core.SecretVolumeSource{SecretName: "default-token-wcf8m", DefaultMode: &420}}},
# - {
# - Name: "tz-config",
# - VolumeSource: core.VolumeSource{
# - HostPath: &core.HostPathVolumeSource{Path: "/usr/share/zoneinfo/Asia/Ho_Chi_Minh", Type: &""},
# - },
# - },
# },
# InitContainers: nil,
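The Forbidden message above is the key: a running Pod's spec is immutable apart from a few fields, so the volume cannot be added to the Pod directly. It has to go into the pod template of the owning Deployment (named hello here), which then rolls out a replacement pod, for example with:
kubectl edit deployment hello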
I resolved my problem by adding the mount to the Deployment YAML instead, as below. Many thanks @Shawlz for the help:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2020-02-14T15:59:50Z"
generation: 1
labels:
run: hello
name: hello
namespace: default
resourceVersion: "523908"
selfLink: /apis/apps/v1/namespaces/default/deployments/hello
uid: 43196302-0176-4ce2-9d10-c8fefcc6c316
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: hello
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: hello
spec:
containers:
- image: hello-microservice
imagePullPolicy: Never
name: hello
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- name: tz-config
mountPath: /etc/localtime
volumes:
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Asia/Ho_Chi_Minh
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}