I cannot get a ConfigMap loaded into a pod that is running nginx.
I tried creating a simple pod definition and adding to it an environment variable that reads from a ConfigMap, shown below:
apiVersion: v1
kind: Pod
metadata:
name: testpod
spec:
containers:
- name: testcontainer
image: nginx
env:
- name: MY_VAR
valueFrom:
configMapKeyRef:
name: configmap1
key: data1
This ran successfully; I saved its YAML and then deleted the pod. Here's what I got:
apiVersion: v1
kind: Pod
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"testpod","namespace":"default"},"spec":{"containers":[{"env":[{"name":"MY_VAR","valueFrom":{"configMapKeyRef":{"key":"data1","name":"configmap1"}}}],"image":"nginx","name":"testcontainer"}]}}
creationTimestamp: null
name: testpod
selfLink: /api/v1/namespaces/default/pods/testpod
spec:
containers:
- env:
- name: MY_VAR
valueFrom:
configMapKeyRef:
key: data1
name: configmap1
image: nginx
imagePullPolicy: Always
name: testcontainer
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-27x4x
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ip-10-0-1-103
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: default-token-27x4x
secret:
defaultMode: 420
secretName: default-token-27x4x
status:
phase: Pending
qosClass: BestEffort
I then tried copying that env block into another pod that was already running.
This is what I got using kubectl edit pod pod1:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2019-08-17T18:15:22Z"
labels:
run: pod1
name: pod1
namespace: default
resourceVersion: "12167"
selfLink: /api/v1/namespaces/default/pods/pod1
uid: fa297c13-c11a-11e9-9a5f-02ca4f0dcea0
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: pod1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-27x4x
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ip-10-0-1-102
priority: 0
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: default-token-27x4x
secret:
defaultMode: 420
secretName: default-token-27x4x
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-08-17T18:15:22Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-08-17T18:15:27Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-08-17T18:15:27Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2019-08-17T18:15:22Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://99bfded0d69f4ed5ed854e59b458acd8a9197f9bef6d662a03587fe2ff61b128
image: nginx:latest
imageID: docker-pullable://nginx#sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9
lastState: {}
name: pod1
ready: true
restartCount: 0
state:
running:
startedAt: "2019-08-17T18:15:27Z"
hostIP: 10.0.1.102
phase: Running
podIP: 10.244.2.2
qosClass: BestEffort
startTime: "2019-08-17T18:15:22Z"
And here is the output of kubectl get po pod1 -o yaml --export:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: pod1
name: pod1
selfLink: /api/v1/namespaces/default/pods/pod1
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: pod1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-27x4x
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ip-10-0-1-102
priority: 0
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: default-token-27x4x
secret:
defaultMode: 420
secretName: default-token-27x4x
status:
phase: Pending
qosClass: BestEffort
What am I doing wrong or have I missed something?
You can't add configuration to a running pod; that's inherent to how containers work.
To put it simply: a container runs a service, and the state of that service defines the state of the container. As you know, nginx needs to reload its configuration when you change it, but that's not really an option in this context, so you need to stop and start the container with the new configuration.
So what you are seeing is normal: the service is still running, so it keeps the old configuration it had from before, even if you change the file.
If you need the configuration to be reloaded without downtime, run multiple replicas and use a rolling update so there is no downtime during the update.
There are some special cases, like Grafana, where the application can itself check whether files have changed since the last modification.
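For example, a minimal sketch of that approach (names such as testdeploy are placeholders; it assumes configmap1 with key data1 already exists): manage the container with a Deployment, and after changing the ConfigMap trigger a restart so the replacement pods pick up the new value, since environment variables are only read at container start.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testdeploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: testdeploy
  template:
    metadata:
      labels:
        app: testdeploy
    spec:
      containers:
      - name: testcontainer
        image: nginx
        env:
        - name: MY_VAR
          valueFrom:
            configMapKeyRef:
              name: configmap1
              key: data1
# After editing the ConfigMap, roll the pods so they re-read it (kubectl 1.15+):
kubectl rollout restart deployment/testdeploy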
Related
So, I have a service (a Python Flask/uWSGI app) which I deploy via Helm onto the Kubernetes cluster that runs locally in my Docker Desktop. The application deploys successfully, the pods are running fine, there are no issues with the application, and the app logs look good. But when I hit the URL in the browser or via curl, I get a connection refused error.
Here's what my kubectl describe pod looks like:
Name: rhs-servicebase-6f676c458c-s6b8j
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Mon, 29 Aug 2022 07:54:30 -0700
Labels: app.kubernetes.io/instance=rhs
app.kubernetes.io/name=servicebase
pod-template-hash=6f676c458c
Annotations: <none>
Status: Running
IP: 10.1.0.44
IPs:
IP: 10.1.0.44
Controlled By: ReplicaSet/rhs-servicebase-6f676c458c
Containers:
servicebase:
Container ID: docker://7e7c3dca5a465deeba3312e54b789233a5c652b92dd9017cb2e422bd202130f6
Image: localhost:5000/referral-health-signal:latest
Image ID: docker-pullable://localhost:5000/referral-health-signal#sha256:72af6911f60def6af987d8b5f611ab8553f5771278b3821e911f437f93c73743
Port: 9000/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 29 Aug 2022 07:54:36 -0700
Ready: True
Restart Count: 0
Environment:
TZ: UTC
LOG_FILE: application.log
S3_BUCKET: livongo-int-healthsignal
AWS_ACCESS_KEY_ID: <set to the key 'aws-access-key-id' in secret 'referral-health-signal'> Optional: false
AWS_SECRET_ACCESS_KEY: <set to the key 'aws-secret-access-key' in secret 'referral-health-signal'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5vtmd (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
secret-referral-health-signal:
Type: Secret (a volume populated by a Secret)
SecretName: referral-health-signal
Optional: false
kube-api-access-5vtmd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned default/rhs-servicebase-6f676c458c-s6b8j to docker-desktop
Normal Pulling 15m kubelet Pulling image "localhost:5000/referral-health-signal:latest"
Normal Pulled 15m kubelet Successfully pulled image "localhost:5000/referral-health-signal:latest" in 347.31179ms
Normal Created 15m kubelet Created container servicebase
Normal Started 15m kubelet Started container servicebase
My deployment status:
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
rhs-servicebase 1/1 1 1 24m
My service status:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d22h
rhs-servicebase NodePort 10.97.229.137 <none> 9000:30001/TCP 29m
But when I do nc -zv localhost 30001 I see:
nc: connectx to localhost port 30001 (tcp) failed: Connection refused
nc: connectx to localhost port 30001 (tcp) failed: Connection refused
Running lsof -i tcp:30001 also produces an empty result.
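One check that should show whether the app itself is listening on port 9000, independent of the NodePort, is to port-forward straight to the pod, along these lines:
# Forward a local port directly to the pod, bypassing the Service entirely
kubectl port-forward pod/rhs-servicebase-6f676c458c-s6b8j 9000:9000
# In another terminal:
curl -v http://localhost:9000/
# If this also refuses the connection, the app is not listening on 0.0.0.0:9000
# inside the container; if it works, the problem is in the Service/NodePort layer.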
Pod spec:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2022-08-29T14:54:30Z"
generateName: rhs-servicebase-6f676c458c-
labels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/name: servicebase
pod-template-hash: 6f676c458c
name: rhs-servicebase-6f676c458c-s6b8j
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: rhs-servicebase-6f676c458c
uid: 0d076b86-44e8-41fa-b6ee-b73a066f20e0
resourceVersion: "23554"
uid: 6d251ab1-c240-4241-9041-7c03b4d0d9c8
spec:
containers:
- env:
- name: TZ
value: UTC
- name: LOG_FILE
value: application.log
- name: S3_BUCKET
value: livongo-int-healthsignal
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
key: aws-access-key-id
name: referral-health-signal
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
key: aws-secret-access-key
name: referral-health-signal
image: localhost:5000/referral-health-signal:latest
imagePullPolicy: Always
name: servicebase
ports:
- containerPort: 9000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-5vtmd
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: docker-desktop
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: secret-referral-health-signal
secret:
defaultMode: 420
items:
- key: aws-access-key-id
path: AWS_ACCESS_KEY_ID
- key: aws-secret-access-key
path: AWS_SECRET_ACCESS_KEY
secretName: referral-health-signal
- name: kube-api-access-5vtmd
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-08-29T14:54:30Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-08-29T14:54:36Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-08-29T14:54:36Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-08-29T14:54:30Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://7e7c3dca5a465deeba3312e54b789233a5c652b92dd9017cb2e422bd202130f6
image: referral-health-signal:latest
imageID: docker-pullable://localhost:5000/referral-health-signal#sha256:72af6911f60def6af987d8b5f611ab8553f5771278b3821e911f437f93c73743
lastState: {}
name: servicebase
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-08-29T14:54:36Z"
hostIP: 192.168.65.4
phase: Running
podIP: 10.1.0.44
podIPs:
- ip: 10.1.0.44
qosClass: BestEffort
startTime: "2022-08-29T14:54:30Z"
Deployment spec:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2022-08-29T14:54:30Z"
generation: 1
labels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: servicebase
helm.sh/chart: servicebase-0.1.5
name: rhs-servicebase
namespace: default
resourceVersion: "23558"
uid: aae0086a-ecfd-4c80-b633-92307e2f059d
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/name: servicebase
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/name: servicebase
spec:
containers:
- env:
- name: TZ
value: UTC
- name: LOG_FILE
value: application.log
- name: S3_BUCKET
value: livongo-int-healthsignal
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
key: aws-access-key-id
name: referral-health-signal
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
key: aws-secret-access-key
name: referral-health-signal
image: localhost:5000/referral-health-signal:latest
imagePullPolicy: Always
name: servicebase
ports:
- containerPort: 9000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: secret-referral-health-signal
secret:
defaultMode: 420
items:
- key: aws-access-key-id
path: AWS_ACCESS_KEY_ID
- key: aws-secret-access-key
path: AWS_SECRET_ACCESS_KEY
secretName: referral-health-signal
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2022-08-29T14:54:36Z"
lastUpdateTime: "2022-08-29T14:54:36Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2022-08-29T14:54:30Z"
lastUpdateTime: "2022-08-29T14:54:36Z"
message: ReplicaSet "rhs-servicebase-6f676c458c" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
I'm so lost I don't even know where to start debugging. I'm on macOS (Big Sur).
I'm running Jenkins on an EKS cluster with the Kubernetes plugin, and I'd like to write a declarative pipeline in which I specify the pod template for each stage. A basic example would be the following, in which a file is created in the first stage and printed in the second:
pipeline{
agent none
stages {
stage('First sample') {
agent {
kubernetes {
label 'mvn-pod'
yaml """
spec:
containers:
- name: maven
image: maven:3.3.9-jdk-8-alpine
"""
}
}
steps {
container('maven'){
sh "echo 'hello' > test.txt"
}
}
}
stage('Second sample') {
agent {
kubernetes {
label 'bysbox-pod'
yaml """
spec:
containers:
- name: busybox
image: busybox
"""
}
}
steps {
container('busybox'){
sh "cat test.txt"
}
}
}
}
}
This clearly doesn't work, since the two pods don't share any kind of storage. Reading this doc I realized I can use workspaceVolume dynamicPVC() in the pod declaration so that the plugin creates and manages a persistentVolumeClaim in which, hopefully, I can write the data I need to share between stages.
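Concretely, the agent declaration I'm aiming for looks something like this (accessModes and requestsSize are my reading of the plugin's dynamicPVC parameters; the values are placeholders):
agent {
    kubernetes {
        label 'mvn-pod'
        workspaceVolume dynamicPVC(accessModes: 'ReadWriteOnce', requestsSize: '1Gi')
        yaml """
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
"""
    }
}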
Now, with workspaceVolume dynamicPVC(...) both the PV and the PVC are successfully created, but the pod goes into an error state and terminates. In particular, the pod provisioned is the following:
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/psp: eks.privileged
runUrl: job/test-libraries/job/sample-k8s/12/
creationTimestamp: "2020-08-07T08:57:09Z"
deletionGracePeriodSeconds: 30
deletionTimestamp: "2020-08-07T08:58:09Z"
labels:
jenkins: slave
jenkins/label: bibibu
name: bibibu-ggb5h-bg68p
namespace: jenkins-slaves
resourceVersion: "29184450"
selfLink: /api/v1/namespaces/jenkins-slaves/pods/bibibu-ggb5h-bg68p
uid: 1c1e78a5-fcc7-4c86-84b1-8dee43cf3f98
spec:
containers:
- image: maven:3.3.9-jdk-8-alpine
imagePullPolicy: IfNotPresent
name: maven
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
tty: true
volumeMounts:
- mountPath: /home/jenkins/agent
name: workspace-volume
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5bt8c
readOnly: true
- env:
- name: JENKINS_SECRET
value: ...
- name: JENKINS_AGENT_NAME
value: bibibu-ggb5h-bg68p
- name: JENKINS_NAME
value: bibibu-ggb5h-bg68p
- name: JENKINS_AGENT_WORKDIR
value: /home/jenkins/agent
- name: JENKINS_URL
value: ...
image: jenkins/inbound-agent:4.3-4
imagePullPolicy: IfNotPresent
name: jnlp
resources:
requests:
cpu: 100m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /home/jenkins/agent
name: workspace-volume
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5bt8c
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ...
nodeSelector:
kubernetes.io/os: linux
priority: 0
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: workspace-volume
persistentVolumeClaim:
claimName: pvc-bibibu-ggb5h-bg68p
- name: default-token-5bt8c
secret:
defaultMode: 420
secretName: default-token-5bt8c
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2020-08-07T08:57:16Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2020-08-07T08:57:16Z"
message: 'containers with unready status: [jnlp]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2020-08-07T08:57:16Z"
message: 'containers with unready status: [jnlp]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2020-08-07T08:57:16Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://9ed5052e9755ee4f974704fa4b74f2d89702283a4437e60a9945cf4ec7d6da68
image: jenkins/inbound-agent:4.3-4
imageID: docker-pullable://jenkins/inbound-agent#sha256:62f48a12d41e02e557ee9f7e4ffa82c77925b817ec791c8da5f431213abc2828
lastState: {}
name: jnlp
ready: false
restartCount: 0
state:
terminated:
containerID: docker://9ed5052e9755ee4f974704fa4b74f2d89702283a4437e60a9945cf4ec7d6da68
exitCode: 1
finishedAt: "2020-08-07T08:57:35Z"
reason: Error
startedAt: "2020-08-07T08:57:35Z"
- containerID: docker://96f747a132ee98f7bf2488bd3cde247380aea5dd6f84bdcd7e6551dbf7c08943
image: maven:3.3.9-jdk-8-alpine
imageID: docker-pullable://maven#sha256:3ab854089af4b40cf3f1a12c96a6c84afe07063677073451c2190cdcec30391b
lastState: {}
name: maven
ready: true
restartCount: 0
state:
running:
startedAt: "2020-08-07T08:57:35Z"
hostIP: 10.108.171.224
phase: Running
podIP: 10.108.171.158
qosClass: Burstable
startTime: "2020-08-07T08:57:16Z"
Retrieving the logs from the jnlp container with kubectl logs name-of-the-pod -c jnlp -n jenkins-slaves led me to this error:
Exception in thread "main" java.io.IOException: The specified working directory should be fully accessible to the remoting executable (RWX): /home/jenkins/agent
at org.jenkinsci.remoting.engine.WorkDirManager.verifyDirectory(WorkDirManager.java:249)
at org.jenkinsci.remoting.engine.WorkDirManager.initializeWorkDir(WorkDirManager.java:201)
at hudson.remoting.Engine.startEngine(Engine.java:288)
at hudson.remoting.Engine.startEngine(Engine.java:264)
at hudson.remoting.jnlp.Main.main(Main.java:284)
at hudson.remoting.jnlp.Main._main(Main.java:279)
at hudson.remoting.jnlp.Main.main(Main.java:231)
I also tried specifying accessModes as a parameter of dynamicPVC, but the error is the same.
What am I doing wrong?
Thanks
The Docker image being used is configured to run as the non-root user jenkins. By default, PVCs are created allowing only root-user access.
This can be changed using the pod's security context, e.g.
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
(The jenkins user in that image is ID 1000)
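In the declarative pipeline from the question, that security context goes at the pod spec level of the agent's yaml. A sketch reusing the maven stage (the dynamicPVC parameters are assumptions, as above):
agent {
    kubernetes {
        label 'mvn-pod'
        workspaceVolume dynamicPVC(accessModes: 'ReadWriteOnce', requestsSize: '1Gi')
        yaml """
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
"""
    }
}
With fsGroup set, the kubelet makes the mounted workspace volume group-writable by the jenkins user, which should satisfy the RWX check on /home/jenkins/agent from the error above.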
I'm new to OpenShift/Kubernetes/Docker and I was wondering where the Docker registry of OpenShift Origin persists its images, given that:
1. In the registry deployment's YAML, the only storage volume declared is an emptyDir:
volumes:
- emptyDir: {}
name: registry-storage
2. On the machine where the pod is deployed, I can't see any volume using
docker volume ls
3. The images are still persisted even if I restart the pod.
The docker registry deployment's YAML:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
creationTimestamp: '2020-04-26T18:16:50Z'
generation: 1
labels:
docker-registry: default
name: docker-registry
namespace: default
resourceVersion: '1844231'
selfLink: >-
/apis/apps.openshift.io/v1/namespaces/default/deploymentconfigs/docker-registry
uid: 1983153d-87ea-11ea-a4bc-fa163ee581f7
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
docker-registry: default
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
creationTimestamp: null
labels:
docker-registry: default
spec:
containers:
- env:
- name: REGISTRY_HTTP_ADDR
value: ':5000'
- name: REGISTRY_HTTP_NET
value: tcp
- name: REGISTRY_HTTP_SECRET
value:
- name: REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA
value: 'false'
- name: OPENSHIFT_DEFAULT_REGISTRY
value: 'docker-registry.default.svc:5000'
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: /etc/secrets/registry.crt
- name: REGISTRY_OPENSHIFT_SERVER_ADDR
value: 'docker-registry.default.svc:5000'
- name: REGISTRY_HTTP_TLS_KEY
value: /etc/secrets/registry.key
image: 'docker.io/openshift/origin-docker-registry:v3.11'
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 5000
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: registry
ports:
- containerPort: 5000
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 5000
scheme: HTTPS
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
requests:
cpu: 100m
memory: 256Mi
securityContext:
privileged: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /registry
name: registry-storage
- mountPath: /etc/secrets
name: registry-certificates
dnsPolicy: ClusterFirst
nodeSelector:
node-role.kubernetes.io/infra: 'true'
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: registry
serviceAccountName: registry
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: registry-storage
- name: registry-certificates
secret:
defaultMode: 420
secretName: registry-certificates
test: false
triggers:
- type: ConfigChange
status:
availableReplicas: 1
conditions:
- lastTransitionTime: '2020-04-26T18:17:12Z'
lastUpdateTime: '2020-04-26T18:17:12Z'
message: replication controller "docker-registry-1" successfully rolled out
reason: NewReplicationControllerAvailable
status: 'True'
type: Progressing
- lastTransitionTime: '2020-05-05T09:39:57Z'
lastUpdateTime: '2020-05-05T09:39:57Z'
message: Deployment config has minimum availability.
status: 'True'
type: Available
details:
causes:
- type: ConfigChange
message: config change
latestVersion: 1
observedGeneration: 1
readyReplicas: 1
replicas: 1
unavailableReplicas: 0
updatedReplicas: 1
To restart, I just delete the pod and a new one is created, since I'm using a deployment.
I'm creating the file in /registry.
Restarting does not mean the data is deleted; it still exists in the container's top (writable) layer. I suggest you get started by reading this.
Persistence, in Kubernetes for example, means that a pod can be deleted and re-created on another node while still keeping the same state in a volume.
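To see where that data actually lands on the node, you can inspect the container's writable layer and the emptyDir directory the kubelet creates for the pod; something along these lines (the exact paths depend on the storage driver and kubelet configuration, and the IDs are placeholders):
# Writable (top) layer of the registry container, assuming the overlay2 driver:
docker inspect --format '{{ .GraphDriver.Data.UpperDir }}' <registry-container-id>
# emptyDir volumes live under the kubelet's pod directory on the node:
ls /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/registry-storage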
I have Jenkins running in Kubernetes and now I am trying to run docker build as one of the steps in a Jenkins build. Since Jenkins is running inside Docker, I arrived at the solution of using Docker-in-Docker from this post: https://medium.com/hootsuite-engineering/building-docker-images-inside-kubernetes-42c6af855f25
However, after I modified the deployment YAML file, it still does not work.
There are 2 containers running: jenkins (the Jenkins image) and dind (the Docker-in-Docker image). I can run the docker command inside the dind container, but I cannot run the docker command from the Jenkins container.
Here is the yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "9"
field.cattle.io/publicEndpoints: '[{"addresses":["10.0.0.111"],"port":80,"protocol":"HTTP","serviceName":"jenkins-with-did:jenkins-with-did","ingressName":"jenkins-with-did:jenkins-with-did","hostname":"jenkins.dtl.miproad.ad","allNodes":true}]'
creationTimestamp: "2020-04-30T06:38:40Z"
generation: 11
labels:
app.kubernetes.io/component: jenkins-master
app.kubernetes.io/instance: jenkins-with-did
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: jenkins
helm.sh/chart: jenkins-1.18.0
io.cattle.field/appId: jenkins-with-did
name: jenkins-with-did
namespace: jenkins-with-did
resourceVersion: "29233038"
selfLink: /apis/apps/v1/namespaces/jenkins-with-did/deployments/jenkins-with-did
uid: 6439c48d-c4ce-418c-8553-d06fee13c7d1
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: jenkins-master
app.kubernetes.io/instance: jenkins-with-did
strategy:
type: Recreate
template:
metadata:
annotations:
cattle.io/timestamp: "2020-04-30T18:15:50Z"
checksum/config: fda7089fede91f066c406bbba5e2a1d59f71183eebe9bca3fe7de19d13504058
field.cattle.io/ports: '[[{"containerPort":8080,"dnsName":"jenkins-with-did","hostPort":0,"kind":"ClusterIP","name":"http","protocol":"TCP","sourcePort":0},{"containerPort":50000,"dnsName":"jenkins-with-did","hostPort":0,"kind":"ClusterIP","name":"slavelistener","protocol":"TCP","sourcePort":0}]]'
creationTimestamp: null
labels:
app.kubernetes.io/component: jenkins-master
app.kubernetes.io/instance: jenkins-with-did
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: jenkins
helm.sh/chart: jenkins-1.18.0
spec:
containers:
- args:
- --argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
- --argumentsRealm.roles.$(ADMIN_USER)=admin
- --httpPort=8080
env:
- name: JAVA_OPTS
- name: JENKINS_OPTS
- name: JENKINS_SLAVE_AGENT_PORT
value: "50000"
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins-with-did
optional: false
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins-with-did
optional: false
image: jenkins/jenkins:lts
imagePullPolicy: Always
livenessProbe:
failureThreshold: 5
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 90
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
env:
- name: DOCKER_HOST
value: tcp://localhost:2375
name: jenkins
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 50000
name: slavelistener
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
cpu: "2"
memory: 4Gi
requests:
cpu: 50m
memory: 256Mi
securityContext:
capabilities: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /tmp
name: tmp
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
readOnly: true
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /usr/share/jenkins/ref/plugins/
name: plugin-dir
- image: docker:18.05-dind
imagePullPolicy: IfNotPresent
name: dind
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/docker
name: dind-storage
dnsPolicy: ClusterFirst
initContainers:
- command:
- sh
- /var/jenkins_config/apply_config.sh
env:
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: jenkins-admin-password
name: jenkins-with-did
optional: false
- name: ADMIN_USER
valueFrom:
secretKeyRef:
key: jenkins-admin-user
name: jenkins-with-did
optional: false
image: jenkins/jenkins:lts
imagePullPolicy: Always
name: copy-default-config
resources:
limits:
cpu: "2"
memory: 4Gi
requests:
cpu: 50m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/docker
name: dind-storage
- mountPath: /tmp
name: tmp
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/jenkins_config
name: jenkins-config
- mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
- mountPath: /var/jenkins_plugins
name: plugin-dir
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 0
serviceAccount: jenkins-with-did
serviceAccountName: jenkins-with-did
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: dind-storage
- emptyDir: {}
name: plugins
- emptyDir: {}
name: tmp
- configMap:
defaultMode: 420
name: jenkins-with-did
name: jenkins-config
- emptyDir: {}
name: secrets-dir
- emptyDir: {}
name: plugin-dir
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-with-did
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2020-04-30T18:20:47Z"
lastUpdateTime: "2020-04-30T18:20:47Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2020-04-30T06:38:40Z"
lastUpdateTime: "2020-04-30T18:20:47Z"
message: ReplicaSet "jenkins-with-did-5db85986b6" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 11
readyReplicas: 1
replicas: 1
updatedReplicas: 1
Thank you so much in advance!
Your idea is a valid approach.
The regular jenkins image does not provide the docker CLI, so running docker does not work out of the box. You can either build your own Jenkins image that provides the docker command, or use a prebuilt Jenkins image that includes the docker CLI, for example: https://hub.docker.com/r/trion/jenkins-docker-client
Alternatively, you can use hostPath volumes to mount /usr/bin/docker, /lib64 and /usr/lib64 from the node into your pod. This would require securityContext: privileged: true.
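For the first option, a minimal Dockerfile sketch could look like this (the static-binary URL and version are assumptions; use whatever release matches your environment):
FROM jenkins/jenkins:lts
USER root
# Install only the docker CLI; the daemon runs in the dind sidecar and is
# reached through DOCKER_HOST=tcp://localhost:2375, as set in the deployment.
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz \
    | tar xzf - -C /usr/local/bin --strip-components=1 docker/docker
USER jenkins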
I am adding an environment variable to a Kubernetes replication controller spec, but when I update the running RC from the spec, the environment variable isn't added to it. How come?
I update the RC with kubectl replace -f docker/podspecs/web-controller.yaml according to the following spec, in which the environment variable IRON_PASSWORD has been added since the previous revision, but the running pods aren't updated correspondingly:
apiVersion: v1
kind: ReplicationController
metadata:
name: web
labels:
app: web
spec:
replicas: 1
selector:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- name: web
image: quay.io/aknuds1/muzhack
# Always pull latest version of image
imagePullPolicy: Always
env:
- name: APP_URI
value: https://staging.muzhack.com
- name: IRON_PASSWORD
value: password
ports:
- name: http-server
containerPort: 80
imagePullSecrets:
- name: quay.io
After updating the RC from the spec, the pod looks like this (kubectl get pod web-scpc3 -o yaml); notice that IRON_PASSWORD is missing:
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/created-by: |
{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"web","uid":"c1c4185f-0867-11e6-b557-42010af000f7","apiVersion":"v1","resourceVersion":"17714"}}
kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
web'
creationTimestamp: 2016-04-22T08:54:00Z
generateName: web-
labels:
app: web
name: web-scpc3
namespace: default
resourceVersion: "17844"
selfLink: /api/v1/namespaces/default/pods/web-scpc3
uid: c1c5035f-0867-11e6-b557-42010af000f7
spec:
containers:
- env:
- name: APP_URI
value: https://staging.muzhack.com
image: quay.io/aknuds1/muzhack
imagePullPolicy: Always
name: web
ports:
- containerPort: 80
name: http-server
protocol: TCP
resources:
requests:
cpu: 100m
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-vfutp
readOnly: true
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: quay.io
nodeName: gke-staging-default-pool-f98acf11-ba7d
restartPolicy: Always
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
volumes:
- name: default-token-vfutp
secret:
secretName: default-token-vfutp
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2016-04-22T09:00:49Z
message: 'containers with unready status: [web]'
reason: ContainersNotReady
status: "False"
type: Ready
containerStatuses:
- containerID: docker://dae22acb9f236433389ac0c51b730423ef9159d0c0e12770a322c70201fb7e2a
image: quay.io/aknuds1/muzhack
imageID: docker://8fef42c3eba5abe59c853e9ba811b3e9f10617a257396f48e564e3206e0e1103
lastState:
terminated:
containerID: docker://dae22acb9f236433389ac0c51b730423ef9159d0c0e12770a322c70201fb7e2a
exitCode: 1
finishedAt: 2016-04-22T09:00:48Z
reason: Error
startedAt: 2016-04-22T09:00:46Z
name: web
ready: false
restartCount: 6
state:
waiting:
message: Back-off 5m0s restarting failed container=web pod=web-scpc3_default(c1c5035f-0867-11e6-b557-42010af000f7)
reason: CrashLoopBackOff
hostIP: 10.132.0.3
phase: Running
podIP: 10.32.0.3
startTime: 2016-04-22T08:54:00Z
Replacing the ReplicationController object does not actually recreate the underlying pods, so the pods keep the spec from the previous configuration of the RC until they need to be recreated. If you delete the running pod, the new one that gets created to replace it will have the new environment variable.
This is what the kubectl rolling-update command is for, and part of the reason the Deployment type was added in Kubernetes 1.2.
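For example, a quick way to pick up the change with the existing RC (a sketch using the names from the question) is to replace the spec and then delete the current pod so the RC recreates it from the new template:
kubectl replace -f docker/podspecs/web-controller.yaml
kubectl delete pod web-scpc3
# The RC immediately creates a replacement pod, which should now include IRON_PASSWORD:
kubectl get pods -l app=web -o yaml | grep -A1 IRON_PASSWORD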