Jenkins unable to initialize using Kubernetes - jenkins

Since yesterday I have been having problems with the Jenkins pod - it is unable to initialize. I haven't updated any configuration in the meantime.
This is what my pod deployment configuration looks like:
apiVersion: v1
kind: Pod
metadata:
annotations:
checksum/config: eed56a3d795865e4432dea721435a777ee100059998724f0d57bf1f9378dbb88
creationTimestamp: 2020-09-17T14:14:12Z
generateName: jenkins-74cc957b47-
labels:
app: jenkins
chart: jenkins-0.35.0
component: jenkins-jenkins-master
heritage: Tiller
pod-template-hash: "3077513603"
release: jenkins
name: jenkins-74cc957b47-zf67f
namespace: infrastructure
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: jenkins-74cc957b47
uid: 77b5d3d7-f0f1-11ea-acd2-02be15828c0e
resourceVersion: "158370354"
selfLink: /api/v1/namespaces/infrastructure/pods/jenkins-74cc957b47-zf67f
uid: 0fcefd0d-f8f0-11ea-acd2-02be15828c0e
spec:
containers:
- args:
- --argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
- --argumentsRealm.roles.$(ADMIN_USER)=admin
env:
- name: JAVA_OPTS
- name: JENKINS_OPTS
- name: JENKINS_SLAVE_AGENT_PORT
value: "50000"
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins
image: jenkins/jenkins:2.247
imagePullPolicy: Always
livenessProbe:
failureThreshold: 12
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 90
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: jenkins
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 50000
name: slavelistener
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: "1280m"
memory: 3Gi
requests:
cpu: 50m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
readOnly: true
- mountPath: /usr/share/jenkins/ref/plugins/
name: plugin-dir
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5tbbb
readOnly: true
dnsPolicy: ClusterFirst
initContainers:
- command:
- sh
- /var/jenkins_config/apply_config.sh
env:
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins
image: jenkins/jenkins:lts
imagePullPolicy: Always
name: copy-default-config
resources:
limits:
cpu: "1280m"
memory: 3Gi
requests:
cpu: 50m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
- mountPath: /var/jenkins_plugins
name: plugin-dir
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5tbbb
readOnly: true
nodeName: ip-172-20-62-226.eu-west-1.compute.internal
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 0
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- configMap:
defaultMode: 420
name: jenkins
name: jenkins-config
- emptyDir: {}
name: plugin-dir
- emptyDir: {}
name: secrets-dir
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins
- name: default-token-5tbbb
secret:
defaultMode: 420
secretName: default-token-5tbbb
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2020-09-17T14:15:03Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2020-09-17T14:17:11Z
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: null
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: 2020-09-17T14:14:12Z
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://688f4ad7dde842c2b5d6a0f1fd3cdd7ca156c8457336ca07f1d11270c2df0779
image: jenkins/jenkins:lts
imageID: docker-pullable://jenkins/jenkins@sha256:a3e7b2b6efbc2c252608b028bb844e419d44ad5e3974770c4543ab7ae6e8eb27
lastState: {}
name: jenkins
ready: true
restartCount: 0
state:
running:
startedAt: 2020-09-17T14:15:05Z
hostIP: 172.20.62.226
initContainerStatuses:
- containerID: docker://6761eab1b990aa42c7ec21ee84d1e2362eeddf9373f595ccb13b0e59c0462505
image: jenkins/jenkins:lts
imageID: docker-pullable://jenkins/jenkins@sha256:a3e7b2b6efbc2c252608b028bb844e419d44ad5e3974770c4543ab7ae6e8eb27
lastState: {}
name: copy-default-config
ready: true
restartCount: 0
state:
terminated:
containerID: docker://6761eab1b990aa42c7ec21ee84d1e2362eeddf9373f595ccb13b0e59c0462505
exitCode: 0
finishedAt: 2020-09-17T14:15:02Z
reason: Completed
startedAt: 2020-09-17T14:14:41Z
phase: Running
podIP: 100.105.185.69
qosClass: Burstable
startTime: 2020-09-17T14:14:12Z
I have tried editing it to set a specific Jenkins image version, for example image: jenkins/jenkins:2.219, but it is still unable to initialize.
When I run kubectl logs jenkins-df87c46d5-52dtt -c copy-default-config -n infrastructure I see the following log:
11:21:05 Failed in the last attempt (curl -sSfL --connect-timeout 20 --retry 3 --retry-delay 0 --retry-max-time 60 https://updates.jenkins.io/dynamic-2.248//latest/workflow-cps.hpi -o /usr/share/jenkins/ref/plugins/workflow-cps.jpi)
Downloading plugin: workflow-cps-plugin from https://updates.jenkins.io/dynamic-2.248//latest/workflow-cps-plugin.hpi
curl: (28) Resolving timed out after 20527 milliseconds
11:21:05 Failure (28) Retrying in 1 seconds...
curl: (28) Resolving timed out after 20526 milliseconds
11:21:08 Failure (28) Retrying in 1 seconds...
curl: (22) The requested URL returned error: 404 Not Found
11:21:14 Failure (22) Retrying in 1 seconds...
Full output is available here: https://justpaste.it/8h30t
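The curl output points at two separate problems: DNS resolution for updates.jenkins.io times out inside the init container, and the dynamic-2.248 update-center path also returns a 404 for one plugin. A quick way to check whether cluster DNS and outbound HTTPS are the culprit is to run a throwaway curl pod in the same namespace; this is only a sketch, curl-test is a placeholder name and curlimages/curl is simply one image that ships curl:
# does DNS + outbound HTTPS to the Jenkins update site work from inside the cluster?
kubectl run curl-test --rm -it --restart=Never -n infrastructure \
  --image=curlimages/curl --command -- \
  curl -sSf -o /dev/null -w '%{http_code}\n' --connect-timeout 20 https://updates.jenkins.io/update-center.json
If this also times out, the problem is cluster DNS or egress rather than the Jenkins chart itself.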

Try this. I have removed the plugin directory, which might be causing the issue:
apiVersion: v1
kind: Pod
metadata:
annotations:
checksum/config: eed56a3d795865e4432dea721435a777ee100059998724f0d57bf1f9378dbb88
creationTimestamp: 2020-09-17T14:14:12Z
generateName: jenkins-74cc957b47-
labels:
app: jenkins
chart: jenkins-0.35.0
component: jenkins-jenkins-master
heritage: Tiller
pod-template-hash: "3077513603"
release: jenkins
name: jenkins-74cc957b47-zf67f
namespace: infrastructure
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: jenkins-74cc957b47
uid: 77b5d3d7-f0f1-11ea-acd2-02be15828c0e
resourceVersion: "158370354"
selfLink: /api/v1/namespaces/infrastructure/pods/jenkins-74cc957b47-zf67f
uid: 0fcefd0d-f8f0-11ea-acd2-02be15828c0e
spec:
containers:
- args:
- --argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
- --argumentsRealm.roles.$(ADMIN_USER)=admin
env:
- name: JAVA_OPTS
- name: JENKINS_OPTS
- name: JENKINS_SLAVE_AGENT_PORT
value: "50000"
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins
image: jenkins/jenkins:2.247
imagePullPolicy: Always
livenessProbe:
failureThreshold: 12
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 90
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: jenkins
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 50000
name: slavelistener
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: "1280m"
memory: 3Gi
requests:
cpu: 50m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
readOnly: true
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5tbbb
readOnly: true
dnsPolicy: ClusterFirst
initContainers:
- command:
- sh
- /var/jenkins_config/apply_config.sh
env:
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins
image: jenkins/jenkins:lts
imagePullPolicy: Always
name: copy-default-config
resources:
limits:
cpu: "1280m"
memory: 3Gi
requests:
cpu: 50m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5tbbb
readOnly: true
nodeName: ip-172-20-62-226.eu-west-1.compute.internal
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 0
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- configMap:
defaultMode: 420
name: jenkins
name: jenkins-config
- emptyDir: {}
name: secrets-dir
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins
- name: default-token-5tbbb
secret:
defaultMode: 420
secretName: default-token-5tbbb
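Note that a running pod cannot be edited this way directly (most pod spec fields are immutable), so the change has to be made on the Deployment or the Helm values and a new pod rolled out. A rough sequence, assuming the Deployment is simply named jenkins; the pod name below is a placeholder:
kubectl -n infrastructure edit deployment jenkins      # or: helm upgrade with the plugins list emptied
kubectl -n infrastructure get pods -w                  # wait for the replacement pod
kubectl -n infrastructure logs <new-jenkins-pod> -c copy-default-config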

Related

Service/pod starts up without any issues but cannot connect to service

So, I have a service (a Python Flask/uwsgi app) that I deploy via Helm on Kubernetes, running locally on my Docker Desktop. I have successfully deployed the application, the pods are running fine, there are no issues with the application, and the app logs also look good. But when I hit the URL in the browser or via curl, I get a connection refused error.
Here's what my kubectl describe pod looks like:
Name: rhs-servicebase-6f676c458c-s6b8j
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Mon, 29 Aug 2022 07:54:30 -0700
Labels: app.kubernetes.io/instance=rhs
app.kubernetes.io/name=servicebase
pod-template-hash=6f676c458c
Annotations: <none>
Status: Running
IP: 10.1.0.44
IPs:
IP: 10.1.0.44
Controlled By: ReplicaSet/rhs-servicebase-6f676c458c
Containers:
servicebase:
Container ID: docker://7e7c3dca5a465deeba3312e54b789233a5c652b92dd9017cb2e422bd202130f6
Image: localhost:5000/referral-health-signal:latest
Image ID: docker-pullable://localhost:5000/referral-health-signal@sha256:72af6911f60def6af987d8b5f611ab8553f5771278b3821e911f437f93c73743
Port: 9000/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 29 Aug 2022 07:54:36 -0700
Ready: True
Restart Count: 0
Environment:
TZ: UTC
LOG_FILE: application.log
S3_BUCKET: livongo-int-healthsignal
AWS_ACCESS_KEY_ID: <set to the key 'aws-access-key-id' in secret 'referral-health-signal'> Optional: false
AWS_SECRET_ACCESS_KEY: <set to the key 'aws-secret-access-key' in secret 'referral-health-signal'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5vtmd (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
secret-referral-health-signal:
Type: Secret (a volume populated by a Secret)
SecretName: referral-health-signal
Optional: false
kube-api-access-5vtmd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned default/rhs-servicebase-6f676c458c-s6b8j to docker-desktop
Normal Pulling 15m kubelet Pulling image "localhost:5000/referral-health-signal:latest"
Normal Pulled 15m kubelet Successfully pulled image "localhost:5000/referral-health-signal:latest" in 347.31179ms
Normal Created 15m kubelet Created container servicebase
Normal Started 15m kubelet Started container servicebase
My deployment status:
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
rhs-servicebase 1/1 1 1 24m
My service status:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d22h
rhs-servicebase NodePort 10.97.229.137 <none> 9000:30001/TCP 29m
But when I do nc -zv localhost 30001 I see:
nc: connectx to localhost port 30001 (tcp) failed: Connection refused
nc: connectx to localhost port 30001 (tcp) failed: Connection refused
An lsof -i tcp:30001 also produces an empty result.
Pod spec:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2022-08-29T14:54:30Z"
generateName: rhs-servicebase-6f676c458c-
labels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/name: servicebase
pod-template-hash: 6f676c458c
name: rhs-servicebase-6f676c458c-s6b8j
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: rhs-servicebase-6f676c458c
uid: 0d076b86-44e8-41fa-b6ee-b73a066f20e0
resourceVersion: "23554"
uid: 6d251ab1-c240-4241-9041-7c03b4d0d9c8
spec:
containers:
- env:
- name: TZ
value: UTC
- name: LOG_FILE
value: application.log
- name: S3_BUCKET
value: livongo-int-healthsignal
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
key: aws-access-key-id
name: referral-health-signal
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
key: aws-secret-access-key
name: referral-health-signal
image: localhost:5000/referral-health-signal:latest
imagePullPolicy: Always
name: servicebase
ports:
- containerPort: 9000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-5vtmd
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: docker-desktop
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: secret-referral-health-signal
secret:
defaultMode: 420
items:
- key: aws-access-key-id
path: AWS_ACCESS_KEY_ID
- key: aws-secret-access-key
path: AWS_SECRET_ACCESS_KEY
secretName: referral-health-signal
- name: kube-api-access-5vtmd
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-08-29T14:54:30Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-08-29T14:54:36Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-08-29T14:54:36Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-08-29T14:54:30Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://7e7c3dca5a465deeba3312e54b789233a5c652b92dd9017cb2e422bd202130f6
image: referral-health-signal:latest
imageID: docker-pullable://localhost:5000/referral-health-signal@sha256:72af6911f60def6af987d8b5f611ab8553f5771278b3821e911f437f93c73743
lastState: {}
name: servicebase
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-08-29T14:54:36Z"
hostIP: 192.168.65.4
phase: Running
podIP: 10.1.0.44
podIPs:
- ip: 10.1.0.44
qosClass: BestEffort
startTime: "2022-08-29T14:54:30Z"
Deployment spec:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2022-08-29T14:54:30Z"
generation: 1
labels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: servicebase
helm.sh/chart: servicebase-0.1.5
name: rhs-servicebase
namespace: default
resourceVersion: "23558"
uid: aae0086a-ecfd-4c80-b633-92307e2f059d
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/name: servicebase
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/name: servicebase
spec:
containers:
- env:
- name: TZ
value: UTC
- name: LOG_FILE
value: application.log
- name: S3_BUCKET
value: livongo-int-healthsignal
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
key: aws-access-key-id
name: referral-health-signal
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
key: aws-secret-access-key
name: referral-health-signal
image: localhost:5000/referral-health-signal:latest
imagePullPolicy: Always
name: servicebase
ports:
- containerPort: 9000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: secret-referral-health-signal
secret:
defaultMode: 420
items:
- key: aws-access-key-id
path: AWS_ACCESS_KEY_ID
- key: aws-secret-access-key
path: AWS_SECRET_ACCESS_KEY
secretName: referral-health-signal
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2022-08-29T14:54:36Z"
lastUpdateTime: "2022-08-29T14:54:36Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2022-08-29T14:54:30Z"
lastUpdateTime: "2022-08-29T14:54:36Z"
message: ReplicaSet "rhs-servicebase-6f676c458c" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
I'm so lost, I don't even know where to start debugging. I'm on macOS (Big Sur).
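A reasonable first step is to separate the pod from the Service: if a direct port-forward to the pod works but the NodePort does not, the problem is in the Service/NodePort layer; if neither works, the app is probably not listening on 0.0.0.0:9000 inside the container. A sketch using the names from the output above:
# talk to the pod directly, bypassing the Service
kubectl port-forward pod/rhs-servicebase-6f676c458c-s6b8j 9000:9000
# in another terminal:
curl -v http://localhost:9000/

# check that the Service has endpoints behind it and that the ports line up
kubectl get endpoints rhs-servicebase
kubectl describe svc rhs-servicebase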

How does the docker-registry persist images in OpenShift Origin

I'm new to OpenShift/Kubernetes/Docker and I was wondering where the docker registry of OpenShift Origin persists its images, given that:
1. In the docker registry deployment's YAML there is only an emptyDir volume declaration:
volumes:
- emptyDir: {}
name: registry-storage
2. On the machine where the pod is deployed I can't see any volume using
docker volume ls
3. The images are still persisted even if I restart the pod.
The docker registry deployment's YAML:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
creationTimestamp: '2020-04-26T18:16:50Z'
generation: 1
labels:
docker-registry: default
name: docker-registry
namespace: default
resourceVersion: '1844231'
selfLink: >-
/apis/apps.openshift.io/v1/namespaces/default/deploymentconfigs/docker-registry
uid: 1983153d-87ea-11ea-a4bc-fa163ee581f7
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
docker-registry: default
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
creationTimestamp: null
labels:
docker-registry: default
spec:
containers:
- env:
- name: REGISTRY_HTTP_ADDR
value: ':5000'
- name: REGISTRY_HTTP_NET
value: tcp
- name: REGISTRY_HTTP_SECRET
value:
- name: REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA
value: 'false'
- name: OPENSHIFT_DEFAULT_REGISTRY
value: 'docker-registry.default.svc:5000'
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: /etc/secrets/registry.crt
- name: REGISTRY_OPENSHIFT_SERVER_ADDR
value: 'docker-registry.default.svc:5000'
- name: REGISTRY_HTTP_TLS_KEY
value: /etc/secrets/registry.key
image: 'docker.io/openshift/origin-docker-registry:v3.11'
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 5000
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: registry
ports:
- containerPort: 5000
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 5000
scheme: HTTPS
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
requests:
cpu: 100m
memory: 256Mi
securityContext:
privileged: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /registry
name: registry-storage
- mountPath: /etc/secrets
name: registry-certificates
dnsPolicy: ClusterFirst
nodeSelector:
node-role.kubernetes.io/infra: 'true'
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: registry
serviceAccountName: registry
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: registry-storage
- name: registry-certificates
secret:
defaultMode: 420
secretName: registry-certificates
test: false
triggers:
- type: ConfigChange
status:
availableReplicas: 1
conditions:
- lastTransitionTime: '2020-04-26T18:17:12Z'
lastUpdateTime: '2020-04-26T18:17:12Z'
message: replication controller "docker-registry-1" successfully rolled out
reason: NewReplicationControllerAvailable
status: 'True'
type: Progressing
- lastTransitionTime: '2020-05-05T09:39:57Z'
lastUpdateTime: '2020-05-05T09:39:57Z'
message: Deployment config has minimum availability.
status: 'True'
type: Available
details:
causes:
- type: ConfigChange
message: config change
latestVersion: 1
observedGeneration: 1
readyReplicas: 1
replicas: 1
unavailableReplicas: 0
updatedReplicas: 1
To restart, I just delete the pod and a new one is created, since I'm using a deployment.
I'm creating the file in /registry.
Restarting does not mean the data is deleted; it still exists in the container's top layer. I suggest you get started by reading this.
Persistence means, for example in Kubernetes, that when a pod is deleted and re-created on another node it still maintains the same state of a volume.
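One way to see where that data actually lives is to look on the node: an emptyDir is just a directory managed by the kubelet under the pod's UID, and it is removed when the pod object itself is deleted. A sketch, assuming a default kubelet root directory:
# get the registry pod's UID
oc -n default get pod <registry-pod> -o jsonpath='{.metadata.uid}'

# on the node running the pod, the emptyDir contents (the pushed image blobs) are here
ls /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/registry-storage
# on OpenShift 3.x the kubelet root may be /var/lib/origin/openshift.local.volumes instead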

Docker in Docker configuration

I have Jenkins running in K8s and now I am trying to run docker build as one of the steps in a Jenkins build. Since Jenkins is running inside Docker, I came to the solution of using Docker in Docker from this post: https://medium.com/hootsuite-engineering/building-docker-images-inside-kubernetes-42c6af855f25
However, after I modified the deployment YAML file, it still does not work.
There are 2 containers running: Jenkins (Jenkins image) and dind (Docker-in-Docker image). I can run the docker command inside the dind container, but I cannot run the docker command in the Jenkins container or pod.
Here is the YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "9"
field.cattle.io/publicEndpoints: '[{"addresses":["10.0.0.111"],"port":80,"protocol":"HTTP","serviceName":"jenkins-with-did:jenkins-with-did","ingressName":"jenkins-with-did:jenkins-with-did","hostname":"jenkins.dtl.miproad.ad","allNodes":true}]'
creationTimestamp: "2020-04-30T06:38:40Z"
generation: 11
labels:
app.kubernetes.io/component: jenkins-master
app.kubernetes.io/instance: jenkins-with-did
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: jenkins
helm.sh/chart: jenkins-1.18.0
io.cattle.field/appId: jenkins-with-did
name: jenkins-with-did
namespace: jenkins-with-did
resourceVersion: "29233038"
selfLink: /apis/apps/v1/namespaces/jenkins-with-did/deployments/jenkins-with-did
uid: 6439c48d-c4ce-418c-8553-d06fee13c7d1
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: jenkins-master
app.kubernetes.io/instance: jenkins-with-did
strategy:
type: Recreate
template:
metadata:
annotations:
cattle.io/timestamp: "2020-04-30T18:15:50Z"
checksum/config: fda7089fede91f066c406bbba5e2a1d59f71183eebe9bca3fe7de19d13504058
field.cattle.io/ports: '[[{"containerPort":8080,"dnsName":"jenkins-with-did","hostPort":0,"kind":"ClusterIP","name":"http","protocol":"TCP","sourcePort":0},{"containerPort":50000,"dnsName":"jenkins-with-did","hostPort":0,"kind":"ClusterIP","name":"slavelistener","protocol":"TCP","sourcePort":0}]]'
creationTimestamp: null
labels:
app.kubernetes.io/component: jenkins-master
app.kubernetes.io/instance: jenkins-with-did
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: jenkins
helm.sh/chart: jenkins-1.18.0
spec:
containers:
- args:
- --argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
- --argumentsRealm.roles.$(ADMIN_USER)=admin
- --httpPort=8080
env:
- name: JAVA_OPTS
- name: JENKINS_OPTS
- name: JENKINS_SLAVE_AGENT_PORT
value: "50000"
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins-with-did
optional: false
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins-with-did
optional: false
image: jenkins/jenkins:lts
imagePullPolicy: Always
livenessProbe:
failureThreshold: 5
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 90
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
env:
- name: DOCKER_HOST
value: tcp://localhost:2375
name: jenkins
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 50000
name: slavelistener
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
cpu: "2"
memory: 4Gi
requests:
cpu: 50m
memory: 256Mi
securityContext:
capabilities: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /tmp
name: tmp
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
readOnly: true
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /usr/share/jenkins/ref/plugins/
name: plugin-dir
- image: docker:18.05-dind
imagePullPolicy: IfNotPresent
name: dind
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/docker
name: dind-storage
dnsPolicy: ClusterFirst
initContainers:
- command:
- sh
- /var/jenkins_config/apply_config.sh
env:
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins-with-did
optional: false
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins-with-did
optional: false
image: jenkins/jenkins:lts
imagePullPolicy: Always
name: copy-default-config
resources:
limits:
cpu: "2"
memory: 4Gi
requests:
cpu: 50m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/docker
name: dind-storage
- mountPath: /tmp
name: tmp
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /var/jenkins_plugins
name: plugin-dir
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 0
serviceAccount: jenkins-with-did
serviceAccountName: jenkins-with-did
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: dind-storage
- emptyDir: {}
name: plugins
- emptyDir: {}
name: tmp
- configMap:
defaultMode: 420
name: jenkins-with-did
name: jenkins-config
- emptyDir: {}
name: secrets-dir
- emptyDir: {}
name: plugin-dir
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-with-did
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2020-04-30T18:20:47Z"
lastUpdateTime: "2020-04-30T18:20:47Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2020-04-30T06:38:40Z"
lastUpdateTime: "2020-04-30T18:20:47Z"
message: ReplicaSet "jenkins-with-did-5db85986b6" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 11
readyReplicas: 1
replicas: 1
updatedReplicas: 1
Thank you so much in advance!
Your idea is a valid approach.
The regular jenkins image does not provide the docker CLI, so running docker does not work out of the box. You can either build your own Jenkins image that provides the docker command, or use a prebuilt Jenkins image that includes the docker CLI, for example: https://hub.docker.com/r/trion/jenkins-docker-client
Alternatively, you can use hostPath volumes to mount /usr/bin/docker, /lib64 and /usr/lib64 from the node into your pod. This requires securityContext: privileged: true.
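With the sidecar approach from the YAML above, a quick sanity check is whether the Jenkins container can reach the DinD daemon over the DOCKER_HOST it was given; a docker CLI has to be present in the Jenkins container for the build step to work at all. A sketch using the container names from the deployment, with the pod name as a placeholder:
# the daemon side: should print server info
kubectl -n jenkins-with-did exec -it <jenkins-pod> -c dind -- docker info

# the client side: fails if the jenkins image has no docker CLI installed
kubectl -n jenkins-with-did exec -it <jenkins-pod> -c jenkins -- \
  sh -c 'command -v docker && docker -H tcp://localhost:2375 version'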

Cannot change timezone for Kubernetes pod

I am trying to modify the config file of a pod to use local time, but it shows as invalid when saving. Do you know what's wrong?
In the volumeMounts section I added the lines below:
- mountPath: /etc/localtime
name: tz-config
In the volumes section I added the lines below:
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Asia/Ho_Chi_Minh
Here is my yaml file:
apiVersion: v1
kind: Pod
metadata:
.....
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-jgznd
readOnly: true
- mountPath: /etc/localtime
name: tz-config
dnsPolicy: ClusterFirst
.....
volumes:
- name: default-token-jgznd
secret:
defaultMode: 420
secretName: default-token-jgznd
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Asia/Ho_Chi_Minh
UPDATE: Below are the error details:
# pods "hello-75fdf45c64-w7xm8" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
# core.PodSpec{
# Volumes: []core.Volume{
# {Name: "default-token-wcf8m", VolumeSource: core.VolumeSource{Secret: &core.SecretVolumeSource{SecretName: "default-token-wcf8m", DefaultMode: &420}}},
# - {
# - Name: "tz-config",
# - VolumeSource: core.VolumeSource{
# - HostPath: &core.HostPathVolumeSource{Path: "/usr/share/zoneinfo/Asia/Ho_Chi_Minh", Type: &""},
# - },
# - },
# },
# InitContainers: nil,
I resolved my problem by adding the mountPath to the deployment YAML file as below. Many thanks to #Shawlz for the help:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2020-02-14T15:59:50Z"
generation: 1
labels:
run: hello
name: hello
namespace: default
resourceVersion: "523908"
selfLink: /apis/apps/v1/namespaces/default/deployments/hello
uid: 43196302-0176-4ce2-9d10-c8fefcc6c316
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: hello
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: hello
spec:
containers:
- image: hello-microservice
imagePullPolicy: Never
name: hello
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- name: tz-config
mountPath: /etc/localtime
volumes:
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Asia/Ho_Chi_Minh
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
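Once the change goes through the Deployment (a running pod only allows image, activeDeadlineSeconds and toleration updates, which is why editing the pod was rejected), the new pods can be checked directly. A sketch, assuming the manifest is saved as hello-deployment.yaml and the container image provides a date binary:
kubectl apply -f hello-deployment.yaml
kubectl rollout status deployment/hello
# should now print Asia/Ho_Chi_Minh local time instead of UTC
kubectl exec -it "$(kubectl get pod -l run=hello -o jsonpath='{.items[0].metadata.name}')" -- date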

How would I assign a ConfigMap to a pod that is already running?

I cannot get a ConfigMap loaded into a pod that is currently running nginx.
I tried creating a simple pod definition and adding to it a simple ConfigMap read, shown below:
apiVersion: v1
kind: Pod
metadata:
name: testpod
spec:
containers:
- name: testcontainer
image: nginx
env:
- name: MY_VAR
valueFrom:
configMapKeyRef:
name: configmap1
key: data1
This ran successfully and its YAML file was saved and then deleted.
Here's what I got:
apiVersion: v1
kind: Pod
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"testpod","namespace":"default"},"spec":{"containers":[{"env":[{"name":"MY_VAR","valueFrom":{"configMapKeyRef":{"key":"data1","name":"configmap1"}}}],"image":"nginx","name":"testcontainer"}]}}
creationTimestamp: null
name: testpod
selfLink: /api/v1/namespaces/default/pods/testpod
spec:
containers:
- env:
- name: MY_VAR
valueFrom:
configMapKeyRef:
key: data1
name: configmap1
image: nginx
imagePullPolicy: Always
name: testcontainer
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-27x4x
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ip-10-0-1-103
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: default-token-27x4x
secret:
defaultMode: 420
secretName: default-token-27x4x
status:
phase: Pending
qosClass: BestEffort
I then tried copying its syntax into another pod that was already running.
This is what I got using kubectl edit pod po?
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2019-08-17T18:15:22Z"
labels:
run: pod1
name: pod1
namespace: default
resourceVersion: "12167"
selfLink: /api/v1/namespaces/default/pods/pod1
uid: fa297c13-c11a-11e9-9a5f-02ca4f0dcea0
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: pod1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-27x4x
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ip-10-0-1-102
priority: 0
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: default-token-27x4x
secret:
defaultMode: 420
secretName: default-token-27x4x
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-08-17T18:15:22Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-08-17T18:15:27Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-08-17T18:15:27Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2019-08-17T18:15:22Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://99bfded0d69f4ed5ed854e59b458acd8a9197f9bef6d662a03587fe2ff61b128
image: nginx:latest
imageID: docker-pullable://nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9
lastState: {}
name: pod1
ready: true
restartCount: 0
state:
running:
startedAt: "2019-08-17T18:15:27Z"
hostIP: 10.0.1.102
phase: Running
podIP: 10.244.2.2
qosClass: BestEffort
startTime: "2019-08-17T18:15:22Z"
And also the output of k get po pod1 -o yaml --export:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: pod1
name: pod1
selfLink: /api/v1/namespaces/default/pods/pod1
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: pod1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-27x4x
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ip-10-0-1-102
priority: 0
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: default-token-27x4x
secret:
defaultMode: 420
secretName: default-token-27x4x
status:
phase: Pending
qosClass: BestEffort
What am I doing wrong or have I missed something?
You can't add configuration to a running pod; that's something inherent to containers.
To put it simply: a container runs a service, and the state of the service defines the state of the container. As you know, nginx needs to reload its configuration if you change it, but that's not really a good idea in this context, so you need to stop/start the container with the new configuration.
So what you are getting is normal: the service state is still running, so it keeps the old configuration it had from before, even if you change the file.
If you need the service to reload without downtime, set multiple replicas and create a rolling update rule so there is no downtime during the update.
There are some special cases to this, like Grafana, which can check whether files have changed since the last modification.
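In practice that means managing the nginx pod through a Deployment and rolling it whenever the ConfigMap-backed environment changes, since environment variables from a ConfigMap are only read at container start. A sketch with a made-up deployment name, web, reusing configmap1 from the question:
kubectl create deployment web --image=nginx
# import the ConfigMap keys as environment variables on the pod template
kubectl set env deployment/web --from=configmap/configmap1
# after editing the ConfigMap later, roll the pods so they pick up the new values
kubectl rollout restart deployment/web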
