I recently enabled RBAC on Kubernetes. Since then, Jenkins (running on Kubernetes, creating agent pods on that very same Kubernetes) is still able to create agent pods, but the agents are unable to connect back via JNLP on port 50000.
I noticed a reference to Connecting to jenkins.example.de:50000, but did not find where this is configured. It must resolve cluster-internally (kube-dns), as the port is not exposed to the outside.
I noticed (and updated) the configuration at Configure System > Jenkins Location > Jenkins URL, which led to failed RBAC logins (Keycloak), as the redirect URL was then set incorrectly. Furthermore, it does not feel right to configure a cluster-internal endpoint there just for JNLP. So I can choose between JNLP working with the cluster-internal URL or being able to log in using RBAC:
Questions
How do I configure the Jenkins URL correctly? (https://jenkins.example.com?)
How do I configure the Jenkins JNLP endpoint correctly (jenkins-svc.jenkins.cluster.local:50000)? Where is this done?
Pod Information
kubectl get all -o wide -n jenkins
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/jenkins-64ff7ff784-nq8jh 2/2 Running 0 22h 192.168.0.35 kubernetes-slave02 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/jenkins-svc ClusterIP 10.105.132.134 <none> 8080/TCP,50000/TCP 68d app=jenkins
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/jenkins 1/1 1 1 68d jenkins jenkins/jenkins:latest app=jenkins
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/jenkins-64ff7ff784 1 1 1 68d jenkins jenkins/jenkins:latest app=jenkins,pod-template-hash=64ff7ff784
kubectl describe -n jenkins pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
Name: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
Namespace: jenkins
Priority: 0
Node: kubernetes-slave/192.168.190.116
Start Time: Fri, 08 Jan 2021 17:16:56 +0100
Labels: istio.io/rev=default
jenkins=jenkins-slave
jenkins/label=worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897
jenkins/label-digest=9f81f8f2dabeba69de7d48422a0fc3cbdbaa8ce0
security.istio.io/tlsMode=istio
service.istio.io/canonical-name=worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
service.istio.io/canonical-revision=latest
Annotations: buildUrl: https://jenkins.example.de/job/APP-Kiali/job/master/63/
cni.projectcalico.org/podIP: 192.168.4.247/32
cni.projectcalico.org/podIPs: 192.168.4.247/32
prometheus.io/path: /stats/prometheus
prometheus.io/port: 15020
prometheus.io/scrape: true
runUrl: job/APP-Kiali/job/master/63/
sidecar.istio.io/status:
{"version":"e2cb9d4837cda9584fd272bfa1f348525bcaacfadb7e9b9efbd21a3bb44ad7a1","initContainers":["istio-init"],"containers":["istio-proxy"]...
Status: Terminating (lasts <invalid>)
Termination Grace Period: 30s
IP: 192.168.4.247
IPs:
IP: 192.168.4.247
Init Containers:
istio-init:
Container ID: docker://182de6a71b33e7350263b0677f510f85bd8da9c7938ee5c6ff43b083efeffed6
Image: docker.io/istio/proxyv2:1.8.1
Image ID: docker-pullable://istio/proxyv2@sha256:0a407ecee363d8d31957162b82738ae3dd09690668a0168d660044ac8fc728f0
Port: <none>
Host Port: <none>
Args:
istio-iptables
-p
15001
-z
15006
-u
1337
-m
REDIRECT
-i
*
-x
-b
*
-d
15090,15021,15020
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 08 Jan 2021 17:17:01 +0100
Finished: Fri, 08 Jan 2021 17:17:02 +0100
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 100m
memory: 128Mi
Environment:
DNS_AGENT:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7htdh (ro)
Containers:
kubectl:
Container ID: docker://fb2b1ce8374799b6cc59db17fec0bb993b62369cd7cb2b71ed9bb01c363649cd
Image: lachlanevenson/k8s-kubectl:latest
Image ID: docker-pullable://lachlanevenson/k8s-kubectl@sha256:47e2096ae077b6fe7fdfc135c53feedb160d3b08001b8c855d897d0d37fa8c7e
Port: <none>
Host Port: <none>
Command:
cat
State: Running
Started: Fri, 08 Jan 2021 17:17:03 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/home/jenkins/agent from workspace-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7htdh (ro)
jnlp:
Container ID: docker://58ee7b399077701f3f0a99ed97eb6f1e400976b7946d209d2bee64be32a94885
Image: jenkins/inbound-agent:4.3-4
Image ID: docker-pullable://jenkins/inbound-agent@sha256:62f48a12d41e02e557ee9f7e4ffa82c77925b817ec791c8da5f431213abc2828
Port: <none>
Host Port: <none>
State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 08 Jan 2021 17:17:04 +0100
Finished: Fri, 08 Jan 2021 17:17:15 +0100
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 256Mi
Environment:
JENKINS_PROTOCOLS: JNLP4-connect
JENKINS_SECRET: ****
JENKINS_AGENT_NAME: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
JENKINS_DIRECT_CONNECTION: jenkins.example.de:50000
JENKINS_INSTANCE_IDENTITY: ****
JENKINS_NAME: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
JENKINS_AGENT_WORKDIR: /home/jenkins/agent
Mounts:
/home/jenkins/agent from workspace-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7htdh (ro)
istio-proxy:
Container ID: docker://9a87cafa07779cfc98c58678f484e48e28e354060573c19db9d3d9c86be7a496
Image: docker.io/istio/proxyv2:1.8.1
Image ID: docker-pullable://istio/proxyv2@sha256:0a407ecee363d8d31957162b82738ae3dd09690668a0168d660044ac8fc728f0
Port: 15090/TCP
Host Port: 0/TCP
Args:
proxy
sidecar
--domain
$(POD_NAMESPACE).svc.cluster.local
--serviceCluster
worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b.jenkins
--proxyLogLevel=warning
--proxyComponentLogLevel=misc:error
--concurrency
2
State: Running
Started: Fri, 08 Jan 2021 17:17:11 +0100
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 100m
memory: 128Mi
Readiness: http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
Environment:
JWT_POLICY: first-party-jwt
PILOT_CERT_PROVIDER: istiod
CA_ADDR: istiod.istio-system.svc:15012
POD_NAME: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b (v1:metadata.name)
POD_NAMESPACE: jenkins (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
SERVICE_ACCOUNT: (v1:spec.serviceAccountName)
HOST_IP: (v1:status.hostIP)
CANONICAL_SERVICE: (v1:metadata.labels['service.istio.io/canonical-name'])
CANONICAL_REVISION: (v1:metadata.labels['service.istio.io/canonical-revision'])
PROXY_CONFIG: {"proxyMetadata":{"DNS_AGENT":""}}
ISTIO_META_POD_PORTS: [
]
ISTIO_META_APP_CONTAINERS: kubectl,jnlp
ISTIO_META_CLUSTER_ID: Kubernetes
ISTIO_META_INTERCEPTION_MODE: REDIRECT
ISTIO_METAJSON_ANNOTATIONS: {"buildUrl":"https://jenkins.example.de/job/APP-Kiali/job/master/63/","runUrl":"job/APP-Kiali/job/master/63/"}
ISTIO_META_WORKLOAD_NAME: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
ISTIO_META_OWNER: kubernetes://apis/v1/namespaces/jenkins/pods/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
ISTIO_META_MESH_ID: cluster.local
TRUST_DOMAIN: cluster.local
DNS_AGENT:
Mounts:
/etc/istio/pod from istio-podinfo (rw)
/etc/istio/proxy from istio-envoy (rw)
/var/lib/istio/data from istio-data (rw)
/var/run/secrets/istio from istiod-ca-cert (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7htdh (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
workspace-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
default-token-7htdh:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-7htdh
Optional: false
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
istio-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
istio-podinfo:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.labels -> labels
metadata.annotations -> annotations
istiod-ca-cert:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio-ca-root-cert
Optional: false
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26s default-scheduler Successfully assigned jenkins/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b to kubernetes-slave
Normal Pulling 24s kubelet Pulling image "docker.io/istio/proxyv2:1.8.1"
Normal Pulled 21s kubelet Successfully pulled image "docker.io/istio/proxyv2:1.8.1" in 2.897659504s
Normal Created 21s kubelet Created container istio-init
Normal Started 21s kubelet Started container istio-init
Normal Pulled 19s kubelet Container image "lachlanevenson/k8s-kubectl:latest" already present on machine
Normal Created 19s kubelet Created container kubectl
Normal Started 19s kubelet Started container kubectl
Normal Pulled 19s kubelet Container image "jenkins/inbound-agent:4.3-4" already present on machine
Normal Created 19s kubelet Created container jnlp
Normal Started 18s kubelet Started container jnlp
Normal Pulling 18s kubelet Pulling image "docker.io/istio/proxyv2:1.8.1"
Normal Pulled 11s kubelet Successfully pulled image "docker.io/istio/proxyv2:1.8.1" in 7.484694118s
Normal Created 11s kubelet Created container istio-proxy
Normal Started 11s kubelet Started container istio-proxy
Warning Unhealthy 9s kubelet Readiness probe failed: Get "http://192.168.4.247:15021/healthz/ready": dial tcp 192.168.4.247:15021: connect: connection refused
Normal Killing 6s kubelet Stopping container kubectl
Normal Killing 6s kubelet Stopping container istio-proxy
Logs: Jenkins Agent
fabiansc@Kubernetes-Master:~$ kubectl logs -n jenkins pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
error: a container name must be specified for pod worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b, choose one of: [kubectl jnlp istio-proxy] or one of the init containers: [istio-init]
fabiansc@Kubernetes-Master:~$ kubectl logs -n jenkins pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b -c kubectl
fabiansc@Kubernetes-Master:~$ kubectl logs -n jenkins pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b -c jnlp
unable to retrieve container logs for docker://58ee7b399077701f3f0a99ed97eb6f1e400976b7946d209d2bee64be32a94885
fabiansc@Kubernetes-Master:~$ kubectl logs -n jenkins pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b -c jnlp -c jnlp pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples
fabiansc@Kubernetes-Master:~$ kubectl logs -n jenkins -c jnlp pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw
Error from server (BadRequest): container "jnlp" in pod "worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw" is waiting to start: PodInitializing
fabiansc@Kubernetes-Master:~$ kubectl logs -n jenkins -c jnlp pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Jan 08, 2021 4:18:07 PM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 4.3
Jan 08, 2021 4:18:07 PM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/agent/remoting as a remoting work directory
Jan 08, 2021 4:18:07 PM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among []
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Agent discovery successful
Agent address: jenkins.example.de
Agent port: 50000
Identity: cd:35:f9:1a:60:54:e4:91:07:86:59:49:0b:b6:73:c4
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Handshaking
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to jenkins.example.de:50000
fabiansc@Kubernetes-Master:~$ kubectl logs -f -n jenkins -c jnlp pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Jan 08, 2021 4:18:07 PM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 4.3
Jan 08, 2021 4:18:07 PM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/agent/remoting as a remoting work directory
Jan 08, 2021 4:18:07 PM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among []
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Agent discovery successful
Agent address: jenkins.example.de
Agent port: 50000
Identity: cd:35:f9:1a:60:54:e4:91:07:86:59:49:0b:b6:73:c4
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Handshaking
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to jenkins.example.de:50000
Jan 08, 2021 4:18:17 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to jenkins.example.de:50000 (retrying:2)
java.io.IOException: Failed to connect to jenkins.example.de:50000
at org.jenkinsci.remoting.engine.JnlpAgentEndpoint.open(JnlpAgentEndpoint.java:247)
at hudson.remoting.Engine.connectTcp(Engine.java:844)
at hudson.remoting.Engine.innerRun(Engine.java:722)
at hudson.remoting.Engine.run(Engine.java:518)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:645)
at java.nio.channels.SocketChannel.open(SocketChannel.java:189)
at org.jenkinsci.remoting.engine.JnlpAgentEndpoint.open(JnlpAgentEndpoint.java:205)
... 3 more
Jan 08, 2021 4:18:17 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Trying protocol: JNLP4-connect
Jan 08, 2021 4:18:18 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Protocol JNLP4-connect encountered an unexpected exception
java.util.concurrent.ExecutionException: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Connection closed before acknowledgement sent
at org.jenkinsci.remoting.util.SettableFuture.get(SettableFuture.java:223)
at hudson.remoting.Engine.innerRun(Engine.java:743)
at hudson.remoting.Engine.run(Engine.java:518)
Caused by: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Connection closed before acknowledgement sent
at org.jenkinsci.remoting.protocol.impl.AckFilterLayer.onRecvClosed(AckFilterLayer.java:283)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:816)
at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer.access$1500(BIONetworkLayer.java:48)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer$Reader.run(BIONetworkLayer.java:247)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:117)
at java.lang.Thread.run(Thread.java:748)
Jan 08, 2021 4:18:18 PM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: The server rejected the connection: None of the protocols were accepted
java.lang.Exception: The server rejected the connection: None of the protocols were accepted
at hudson.remoting.Engine.onConnectionRejected(Engine.java:828)
at hudson.remoting.Engine.innerRun(Engine.java:768)
at hudson.remoting.Engine.run(Engine.java:518)
Found the answer: Istio was delaying JNLP connectivity. Details are in GitHub issue #146. Furthermore, Jenkins URL and Jenkins Tunnel must be configured (otherwise it fails, see GitHub issue #788).
Two solutions:
Disable Istio
Create your own custom JNLP image that delays / retries the connection (graceful degradation). None has been provided since February 2020.
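To make both work at once, the Kubernetes plugin lets the cloud override the URLs that agent pods use, independently of the public Jenkins URL that Keycloak needs for its redirect. A minimal configuration-as-code sketch; the service DNS name below is an assumption derived from the jenkins-svc service shown above, and the sidecar annotation is one way to exempt only the agent pods from Istio instead of disabling it cluster-wide:
jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        namespace: "jenkins"
        # HTTP endpoint the agent pods use to reach the controller (cluster-internal)
        jenkinsUrl: "http://jenkins-svc.jenkins.svc.cluster.local:8080"
        # Cluster-internal host:port for the JNLP TCP port, replacing jenkins.example.de:50000
        jenkinsTunnel: "jenkins-svc.jenkins.svc.cluster.local:50000"
        templates:
          - name: "worker"
            # Skip Istio sidecar injection for agent pods only
            annotations:
              - key: "sidecar.istio.io/inject"
                value: "false"
With that, Configure System > Jenkins Location > Jenkins URL can stay at the public https://jenkins.example.de for the Keycloak redirect.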
Related
I am having an issue with slave pods not being able to connect to the Jenkins master.
This is the Jenkins build output
[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
‘ci-xprj2-2z8qp’ is offline
I can see this in the Jenkins pod log
2020-09-24 20:16:57.778+0000 [id=6228] INFO o.c.j.p.k.KubernetesLauncher#launch: Created Pod: infrastructure/ci-xprj2-2tqzn
2020-09-24 20:16:57.778+0000 [id=24] INFO hudson.slaves.NodeProvisioner#lambda$update$6: Kubernetes Pod Template provisioning successfully completed. We have now 2 computer(s)
2020-09-24 20:16:57.779+0000 [id=24] INFO o.c.j.p.k.KubernetesCloud#provision: Excess workload after pending Kubernetes agents: 0
2020-09-24 20:16:57.779+0000 [id=24] INFO o.c.j.p.k.KubernetesCloud#provision: Template for label ci: Kubernetes Pod Template
2020-09-24 20:16:57.839+0000 [id=5801] INFO o.internal.platform.Platform#log: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
2020-09-24 20:16:59.902+0000 [id=6228] INFO o.c.j.p.k.KubernetesLauncher#launch: Pod is running: infrastructure/ci-xprj2-2tqzn
2020-09-24 20:16:59.906+0000 [id=6228] INFO o.c.j.p.k.KubernetesLauncher#launch: Waiting for agent to connect (0/100): ci-xprj2-2tqzn
2020-09-24 20:17:00.911+0000 [id=6228] INFO o.c.j.p.k.KubernetesLauncher#launch: Waiting for agent to connect (1/100): ci-xprj2-2tqzn
2020-09-24 20:17:01.917+0000 [id=6228] INFO o.c.j.p.k.KubernetesLauncher#launch: Waiting for agent to connect (2/100): ci-xprj2-2tqzn
The log from ci-xprj2-2tqzn shows this:
Sep 24, 2020 8:18:59 PM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: ci-xprj2-29g0p
Sep 24, 2020 8:18:59 PM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Sep 24, 2020 8:18:59 PM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 4.3
Sep 24, 2020 8:18:59 PM hudson.remoting.Engine startEngine
WARNING: No Working Directory. Using the legacy JAR Cache location: /home/jenkins/.jenkins/cache/jars
Sep 24, 2020 8:18:59 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [http://jenkins1:8080/]
Sep 24, 2020 8:19:19 PM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: Failed to connect to http://jenkins1:8080/tcpSlaveAgentListener/: jenkins1
java.io.IOException: Failed to connect to http://jenkins1:8080/tcpSlaveAgentListener/: jenkins1
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:217)
at hudson.remoting.Engine.innerRun(Engine.java:693)
at hudson.remoting.Engine.run(Engine.java:518)
Caused by: java.net.UnknownHostException: jenkins1
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1226)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1162)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1056)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:990)
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:214)
... 2 more
My Jenkins config looks like this:
Any help?
It looks like the error to focus on would be:
SEVERE: Failed to connect to http://jenkins1:8080/tcpSlaveAgentListener/: jenkins1
java.io.IOException: Failed to connect to http://jenkins1:8080/tcpSlaveAgentListener/: jenkins1
...
Caused by: java.net.UnknownHostException
which means jenkins1 can't be resolved.
If jenkins1 corresponds to a Kubernetes service name, I would double check its name and details and then spin up another pod in your namespace that sleeps for a while so that you can exec in and see if you can resolve jenkins1.
kubectl exec -it <sleep-test-pod-name> /bin/bash
ping jenkins1
nslookup jenkins1 #install nslookup if not already installed
If jenkins1 corresponds to one of those single word domains you sometimes see at corporations, then I would double check your search prefixes in /etc/resolv.conf in your pods:
cat /etc/resolv.conf
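A concrete way to get such a throwaway pod, as a sketch (the pod name and the busybox:1.28 image are my choices, not from the question; 1.28 is often used because nslookup in newer busybox images is unreliable):
kubectl run sleep-test --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec -it sleep-test -- nslookup jenkins1
kubectl exec -it sleep-test -- cat /etc/resolv.conf
kubectl delete pod sleep-test # clean up when done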
I am running the Jenkins master & the K8s master on the same server; Jenkins runs under Apache Tomcat (not on the K8s cluster). I have another server as the K8s worker node; both servers run CentOS 8. I have configured Jenkins Kubernetes Plugin version 1.26.4, but while running a pipeline job I always get an error. Below is the Jenkins agent pod log from the K8s cluster.
[root@K8s-Master /]# kubectl logs -f pipeline-test-33-sj6tl-r0clh-g559d -c jnlp
Aug 08, 2020 8:37:21 AM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: pipeline-test-33-sj6tl-r0clh-g559d
Aug 08, 2020 8:37:21 AM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Aug 08, 2020 8:37:21 AM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 4.3
Aug 08, 2020 8:37:21 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/agent/remoting as a remoting work directory
Aug 08, 2020 8:37:21 AM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting
Aug 08, 2020 8:37:21 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [http://jenkins-server/jenkins/]
Aug 08, 2020 8:37:41 AM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: Failed to connect to http://jenkins-server/jenkins/tcpSlaveAgentListener/: jenkins-server
java.io.IOException: Failed to connect to http://jenkins-server/jenkins/tcpSlaveAgentListener/: jenkins-server
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:217)
at hudson.remoting.Engine.innerRun(Engine.java:693)
at hudson.remoting.Engine.run(Engine.java:518)
Caused by: java.net.UnknownHostException: jenkins-server
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1226)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1162)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1056)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:990)
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:214)
... 2 more
The following settings are already enabled:
Manage Jenkins --> Configure Global Security --> Agents: Random [Enabled]
I am able to communicate successfully from my Jenkins to the K8s master cluster (verified in the Jenkins Cloud section).
In the K8s master, the pods in all namespaces are running, and the weave-net CNI is installed. I don't know what is causing the problem during agent provisioning through Jenkins.
The /etc/hosts on my Jenkins/K8s master and on the K8s worker node is as follows.
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
75.76.77.5 jenkins-server jenkins-server.company.domain.com
75.76.77.6 k8s-node-1 k8s-node-1.company.domain.com
I get the output below on the K8s worker node. It looks like there is no problem connecting to the Jenkins master from the K8s worker node.
# curl -I http://jenkins-server/jenkins/tcpSlaveAgentListener/
HTTP/1.1 200
Server: nginx/1.14.1
Date: Fri, 28 Aug 2020 06:13:34 GMT
Content-Type: text/plain;charset=UTF-8
Connection: keep-alive
Cache-Control: private
Expires: Thu, 01 Jan 1970 00:00:00 GMT
X-Content-Type-Options: nosniff
X-Hudson-JNLP-Port: 40021
X-Jenkins-JNLP-Port: 40021
X-Instance-Identity: MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnkgz8Av2x8R9R2KZDzWm1K11O01r7VDikW48rCNQlgw/pUeNSPJu9pv7kH884tOE65GkMepNdtJcOFQFtY1qZ0sr5y4GF5TOc7+U/TqfwULt60r7OQlKcrsQx/jJkF0xLjR+xaJ64WKnbsl0AiZhd8/ynk02UxFXKcgwkEP2PGpGyQ1ps5t/yj6ueFiPAHX2ssK8aI7ynVbf3YyVrtFOlqhnTy11mJFoLAZnpjYRCJsrX5z/xciVq5c2XmEikLzMpjFl0YBAsDo7JL4eBUwiBr64HPcSKrsBBB9oPE4oI6GkYUCAni8uOLfzoNr9B1eImaETYSdVPdSKW/ez/OeHjQIDAQAB
X-Jenkins-Agent-Protocols: JNLP4-connect, Ping
X-Remoting-Minimum-Version: 3.14
# curl http://jenkins-server:40021/
Jenkins-Agent-Protocols: JNLP4-connect, Ping
Jenkins-Version: 2.235.3
Jenkins-Session: 4455fd45
Client: 75.76.77.6
Server: 75.76.77.5
Remoting-Minimum-Version: 3.14
It looks like Kubernetes DNS is not resolving the name, so any pointers to resolve this problem will help. Thanks.
It was a Kubernetes DNS resolution issue. With the help of the following link - https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution - I created the dnsutils.yaml pod and found that my K8s cluster pods were returning the error "connection timed out; no servers could be reached" for the command below.
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
So I uninstalled and re-installed Kubernetes (version v1.19.0). Now everything is working fine. Thanks!
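For reference, the dnsutils.yaml from that debugging guide is just a pod that sleeps so you can exec into it; roughly as it appeared in the docs at the time (the image name and tag may differ in current versions):
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always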
I have a docker image which was created for training images for object detection.
Here is the dockerfile for my image.
FROM python:3
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY /src/ /Training
WORKDIR /Training
CMD ["/bin/bash"]
To build this image I used
sudo docker image build -t training .
The container of this image runs fine and I'm able to train on my computer.
I pushed this image to my private docker hub repository using
docker tag training abhishekkaranath/training:training
docker push abhishekkaranath/training:training
(screenshot: the training image in my private Docker Hub repository)
I created a secret for my deployment file using
kubectl create secret docker-registry hubsecret --docker-server=https://index.docker.io/v1/ --docker-username=my_username --docker-password=my_docker_hub_password --docker-email=my_email
Below is my deployment.yaml file
apiVersion: v1
kind: Pod
metadata:
name: podtest
spec:
containers:
- name: podtest
image: abhishekkaranath/training:training
imagePullSecrets:
- name: hubsecret
I created this pod from my terminal using
kubectl create -f deployment.yaml
This gave result:
pod/podtest created
While checking my minikube dashboard I get an error saying "Back-off restarting failed container".
I have tried pulling hello world images from my private docker hub repository into kubernetes which works fine and the pods are up and running. So that means there is no problem in pulling the images from the private docker hub repository.
Logs and error description:
kubectl get pods
NAME READY STATUS RESTARTS AGE
podtest 0/1 CrashLoopBackOff 8 20m
kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 11d v1.14.2
kubectl describe pods podtest
Name: podtest
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Thu, 06 Jun 2019 18:33:02 +0400
Labels: <none>
Annotations: <none>
Status: Running
IP: 172.17.0.9
Containers:
podtest:
Container ID: docker://14b9fcc51c8b4a594e0b38580444e2fedd61a636f4e57374d788c9ba5bf9fbcf
Image: abhishekkaranath/training:training
Image ID: docker-pullable://abhishekkaranath/training@sha256:619468dd0b74b30babfd7c0702c21ea71e9fb70ba3971ec26e8279fdbd071ec7
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 06 Jun 2019 18:54:19 +0400
Finished: Thu, 06 Jun 2019 18:54:19 +0400
Ready: False
Restart Count: 9
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lb9js (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-lb9js:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lb9js
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned default/podtest to minikube
Normal Pulled 23m (x5 over 25m) kubelet, minikube Container image "abhishekkaranath/training:training" already present on machine
Normal Created 23m (x5 over 25m) kubelet, minikube Created container podtest
Normal Started 23m (x5 over 25m) kubelet, minikube Started container podtest
Warning BackOff 4m51s (x94 over 24m) kubelet, minikube Back-off restarting failed container
kubectl --v=8 logs podtest
I0606 19:03:24.821394 3978 loader.go:359] Config loaded from file /home/abhishekkaranath/.kube/config
I0606 19:03:24.826732 3978 round_trippers.go:416] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/podtest
I0606 19:03:24.826748 3978 round_trippers.go:423] Request Headers:
I0606 19:03:24.826757 3978 round_trippers.go:426] Accept: application/json, */*
I0606 19:03:24.826764 3978 round_trippers.go:426] User-Agent: kubectl/v1.14.1 (linux/amd64) kubernetes/b739410
I0606 19:03:24.835800 3978 round_trippers.go:441] Response Status: 200 OK in 9 milliseconds
I0606 19:03:24.835818 3978 round_trippers.go:444] Response Headers:
I0606 19:03:24.835827 3978 round_trippers.go:447] Content-Length: 2693
I0606 19:03:24.835834 3978 round_trippers.go:447] Date: Thu, 06 Jun 2019 15:03:24 GMT
I0606 19:03:24.835840 3978 round_trippers.go:447] Content-Type: application/json
I0606 19:03:24.835870 3978 request.go:942] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"podtest","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/podtest","uid":"fd87b14c-8867-11e9-a507-0800276a11ac","resourceVersion":"415630","creationTimestamp":"2019-06-06T14:33:02Z"},"spec":{"volumes":[{"name":"default-token-lb9js","secret":{"secretName":"default-token-lb9js","defaultMode":420}}],"containers":[{"name":"podtest","image":"abhishekkaranath/training:training","resources":{},"volumeMounts":[{"name":"default-token-lb9js","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"minikube","securityContext":{},"imagePullSecrets":[{"name":"hubsecret"}],"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","oper [truncated 1669 chars]
I0606 19:03:24.840211 3978 round_trippers.go:416] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/podtest/log
I0606 19:03:24.840227 3978 round_trippers.go:423] Request Headers:
I0606 19:03:24.840235 3978 round_trippers.go:426] Accept: application/json, */*
I0606 19:03:24.840241 3978 round_trippers.go:426] User-Agent: kubectl/v1.14.1 (linux/amd64) kubernetes/b739410
I0606 19:03:24.843633 3978 round_trippers.go:441] Response Status: 200 OK in 3 milliseconds
I0606 19:03:24.843657 3978 round_trippers.go:444] Response Headers:
I0606 19:03:24.843666 3978 round_trippers.go:447] Date: Thu, 06 Jun 2019 15:03:24 GMT
I0606 19:03:24.843673 3978 round_trippers.go:447] Content-Type: text/plain
kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
37m Warning BackOff pod/dhubservice Back-off restarting failed container
31m Normal Scheduled pod/podtest Successfully assigned default/podtest to minikube
29m Normal Pulled pod/podtest Container image "abhishekkaranath/training:training" already present on machine
29m Normal Created pod/podtest Created container podtest
29m Normal Started pod/podtest Started container podtest
69s Warning BackOff pod/podtest Back-off restarting failed container
docker pull abhishekkaranath/training:training
training: Pulling from abhishekkaranath/training
Digest: sha256:619468dd0b74b30babfd7c0702c21ea71e9fb70ba3971ec26e8279fdbd071ec7
Status: Image is up to date for abhishekkaranath/training:training
@Mark is correct. The reason for the CrashLoopBackOff is that the container was created, it ran /bin/bash as per the CMD in your Dockerfile, and then it exited.
If you want the container to keep running, you should execute your Python code using:
CMD ["python", "-m", "<module_name>"]
Another major thing to keep in mind is that the container will exit as soon as your app stops. So if the app is dummy code, make sure to run it in an infinite loop to keep the container and the app running. Applied to the Dockerfile from the question, this looks like the sketch below.
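Here "train" is a placeholder module name, not from the question; substitute your actual training entry point:
FROM python:3
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY /src/ /Training
WORKDIR /Training
# Run the training entry point instead of an interactive shell,
# so the container stays up for as long as training runs
CMD ["python", "-m", "train"]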
Looking into:
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 06 Jun 2019 18:54:19 +0400
Finished: Thu, 06 Jun 2019 18:54:19 +0400
Ready: False
Restart Count: 9
Your pod was scheduled, the container was created, and it finished its job.
See the Pod Lifecycle documentation.
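Concretely: the container exited cleanly (Exit Code 0), and the pod's default restartPolicy: Always kept restarting it until back-off kicked in. If the container really is meant to run once and exit, a run-to-completion pod is the alternative; a sketch reusing the names from the question's deployment.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: podtest
spec:
  restartPolicy: Never # a clean exit then shows Completed instead of CrashLoopBackOff
  containers:
  - name: podtest
    image: abhishekkaranath/training:training
  imagePullSecrets:
  - name: hubsecret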
I have Kubernetes 1.10.0, Docker 17.03.2-ce, and Jenkins 2.107.1 running on an Ubuntu 17.04 VM, with Kubernetes Plugin 1.5 installed in Jenkins. I have 4 other Ubuntu VM(s) successfully set up as nodes in the cluster, including the untainted master. I can deploy nginx-based services directly and have unfettered access to the dashboard. So, Kubernetes itself seems happy enough.
Before you mention it, let me say that we don't have short term plans to run Jenkins master inside Kubernetes itself. So, I'd prefer to get this strategy working.
The plugin config for a Kubernetes Cloud is thus:
"Name": kubernetes
"Kubernetes URL": https://172.20.43.30:6443
from
# kubectl describe pods/kube-apiserver-jenkins-kube-master --namespace=kube-system | grep Liveness
Liveness: http-get https://172.20.43.30:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
after accepting the insecure cert, a browser to https://172.20.43.30:6443/ will show
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
"Kubernetes server certificate key" obtained from
# kubectl get pods/kube-apiserver-jenkins-kube-master -o yaml --namespace=kube-system | grep tls
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
# cat /etc/kubernetes/pki/apiserver.crt
-----BEGIN CERTIFICATE-----
MIIDZ******
*******************
****PP5wigl
-----END CERTIFICATE-----
"Kubernetes Namespace": jenkins-slaves
the jenkins-slaves namespace was set up like this ...
create jenkins-namespace.yaml and add this:
apiVersion: v1
kind: Namespace
metadata:
name: jenkins-slaves
labels:
name: jenkins-slaves
spec:
finalizers:
- kubernetes
then
# kubectl create -f jenkins-namespace.yaml
namespace "jenkins-slaves" created
# kubectl -n jenkins-slaves create sa jenkins
serviceaccount "jenkins" created
# kubectl create role jenkins --verb=get,list,watch,create,patch,delete --resource=pods
role.rbac.authorization.k8s.io "jenkins" created
# kubectl create rolebinding jenkins --role=jenkins --serviceaccount=jenkins-slaves:jenkins
rolebinding.rbac.authorization.k8s.io "jenkins" created
# kubectl create clusterrolebinding jenkins --clusterrole cluster-admin --serviceaccount=jenkins-slaves:jenkins
clusterrolebinding.rbac.authorization.k8s.io "jenkins" created
added a Jenkins credential of "secret text" using the token spit out from
# kubectl get -n jenkins-slaves sa/jenkins --template='{{range .secrets}}{{ .name }} {{end}}' | xargs -n 1 kubectl -n jenkins-slaves get secret --template='{{ if .data.token }}{{ .data.token }}{{end}}' | head -n 1 | base64 -d -
a "Test Connection" shows "Connection test successful"
It should be noted that that same token can be used to login to the Kubernetes dashboard with full access rights.
"Jenkins URL": http://172.20.43.30:8080
"Kubernetes Pod Template:Name": jnlp slave
"Kubernetes Pod Template:Namespace": jenkins-slaves
"Kubernetes Pod Template:Labels": jenkins-slaves
"Kubernetes Pod Template:Usage": Only build jobs with label expressions matching this node
"Kubernetes Pod Template:Container Template:Name": jnlp-slave
"Kubernetes Pod Template:Container Template:Docker image": jenkins/jnlp-slave
"Kubernetes Pod Template:Container Template:Working directory": ./.jenkins-agent
At this point, if I create a job and "Restrict where this project can be run" to a "Label Expression" of "jenkins-slaves", I get:
Label jenkins-slaves is serviced by no nodes and 1 cloud. Permissions or other restrictions provided by plugins may prevent this job from running on those nodes.
If I try to build the job, it will sit in the build queue and the "Build Executor Status" will periodically say "jnlp-slave-##### (offline) (suspended)" and then disappear a couple seconds later.
The system log says:
Apr 03, 2018 12:16:21 PM SEVERE org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher logLastLines
Error in provisioning; agent=KubernetesSlave name: jnlp-slave-t8004, template=PodTemplate{inheritFrom='', name='jnlp slave', namespace='jenkins-slaves', label='jenkins-slaves', nodeSelector='', nodeUsageMode=EXCLUSIVE, workspaceVolume=org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume@44dcba2d, containers=[ContainerTemplate{name='jnlp-slave', image='jenkins/jnlp-slave', workingDir='./.jenkins-agent', command='/bin/sh -c', args='cat', ttyEnabled=true, resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@58f0ceec}]}. Container jnlp exited with error 255. Logs: Warning: JnlpProtocol3 is disabled by default, use JNLP_PROTOCOL_OPTS to alter the behavior
Warning: SECRET is defined twice in command-line arguments and the environment variable
Warning: AGENT_NAME is defined twice in command-line arguments and the environment variable
Apr 03, 2018 4:16:16 PM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: jnlp-slave-t8004
Apr 03, 2018 4:16:16 PM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Apr 03, 2018 4:16:16 PM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 3.19
Apr 03, 2018 4:16:16 PM hudson.remoting.Engine startEngine
WARNING: No Working Directory. Using the legacy JAR Cache location: /home/jenkins/.jenkins/cache/jars
Apr 03, 2018 4:16:17 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [http://172.20.43.30:8080/]
Apr 03, 2018 4:16:17 PM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: http://172.20.43.30:8080/tcpSlaveAgentListener/ is invalid: 404 Not Found
java.io.IOException: http://172.20.43.30:8080/tcpSlaveAgentListener/ is invalid: 404 Not Found
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:197)
at hudson.remoting.Engine.innerRun(Engine.java:518)
at hudson.remoting.Engine.run(Engine.java:469)
Apr 03, 2018 12:16:21 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
Terminating Kubernetes instance for agent jnlp-slave-t8004
Apr 03, 2018 12:16:21 PM WARNING io.fabric8.kubernetes.client.Config tryServiceAccount
Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
Apr 03, 2018 12:16:21 PM INFO okhttp3.internal.platform.Platform log
ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
Apr 03, 2018 12:16:21 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
Terminated Kubernetes instance for agent jenkins-slaves/jnlp-slave-t8004
Apr 03, 2018 12:16:21 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
Disconnected computer jnlp-slave-t8004
Apr 03, 2018 12:16:25 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision
Excess workload after pending Kubernetes agents: 1
Apr 03, 2018 12:16:25 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision
Template: Kubernetes Pod Template
Apr 03, 2018 12:16:25 PM WARNING io.fabric8.kubernetes.client.Config tryServiceAccount
Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
Apr 03, 2018 12:16:25 PM INFO okhttp3.internal.platform.Platform log
ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
Apr 03, 2018 12:16:25 PM INFO hudson.slaves.NodeProvisioner$StandardStrategyImpl apply
Started provisioning Kubernetes Pod Template from kubernetes with 1 executors. Remaining excess workload: 0
Apr 03, 2018 12:16:35 PM WARNING io.fabric8.kubernetes.client.Config tryServiceAccount
Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
Apr 03, 2018 12:16:35 PM INFO hudson.slaves.NodeProvisioner$2 run
Kubernetes Pod Template provisioning successfully completed. We have now 2 computer(s)
Apr 03, 2018 12:16:35 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision
Excess workload after pending Kubernetes agents: 0
Apr 03, 2018 12:16:35 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision
Template: Kubernetes Pod Template
Apr 03, 2018 12:16:35 PM INFO okhttp3.internal.platform.Platform log
ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
Apr 03, 2018 12:16:35 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
Created Pod: jnlp-slave-bnz94 in namespace jenkins-slaves
Apr 03, 2018 12:16:35 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
-Steve Maring
Orlando, FL
I went to http://172.20.43.30:8080/configureSecurity/ and set "Agents:TCP port for JNLP agents" to "random"
I then got a "jnlp-slave-ttm5v (suspended)" that stays in the "Build Executor Status"
and the log said:
Container is waiting jnlp-slave-ttm5v [jnlp-slave]:
ContainerStateWaiting(message=Error response from daemon: the working directory './.jenkins-agent' is invalid, it needs to be an absolute path, reason=CreateContainerError, additionalProperties={})
After setting "Working directory" to "/home/jenkins" I saw a pod actually get created on k8s:
# kubectl get pods --namespace=jenkins-slaves
NAME READY STATUS RESTARTS AGE
jnlp-slave-1ds27 2/2 Running 0 42s
and my job ran successfully!
Started by user Buildguy
Agent jnlp-slave-1ds27 is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (jenkins-slaves):
* [jnlp-slave] jenkins/jnlp-slave(resourceRequestCpu: , resourceRequestMemory: , resourceLimitCpu: , resourceLimitMemory: )
Building remotely on jnlp-slave-1ds27 (jenkins-slaves) in workspace
/home/jenkins/workspace/maven-parent-poms
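For anyone scripting this setup instead of clicking through the UI, the two fixes above map to the inbound agent port and an absolute container working directory. A hedged configuration-as-code sketch, with values mirroring the settings described in this thread:
jenkins:
  slaveAgentPort: 0 # 0 = random TCP port for JNLP agents
  clouds:
    - kubernetes:
        name: "kubernetes"
        serverUrl: "https://172.20.43.30:6443"
        namespace: "jenkins-slaves"
        jenkinsUrl: "http://172.20.43.30:8080"
        templates:
          - name: "jnlp slave"
            label: "jenkins-slaves"
            containers:
              - name: "jnlp-slave"
                image: "jenkins/jnlp-slave"
                workingDir: "/home/jenkins" # must be an absolute path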
I need to set up Kafka and Cassandra in Minikube.
Host OS is Ubuntu 16.04
$ uname -a
Linux minikuber 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Minikube started normally:
$ minikube start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Services list:
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1d
ZooKeeper and Cassandra are running, but Kafka is crashing with a "CrashLoopBackOff" error
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
zookeeper-775db4cd8-lpl95 1/1 Running 0 1h
cassandra-d84d697b8-p5wcs 1/1 Running 0 1h
kafka-6d889c567-w5n4s 0/1 CrashLoopBackOff 25 1h
View logs:
kubectl logs kafka-6d889c567-w5n4s -p
Output:
waiting for kafka to be ready
...
INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
...
INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
INFO EventThread shut down for session: 0x0 (org.apache.zookeeper.ClientCnxn)
FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server '' with timeout of 6000 ms
...
INFO shutting down (kafka.server.KafkaServer)
INFO shut down completed (kafka.server.KafkaServer)
FATAL Exiting Kafka. (kafka.server.KafkaServerStartable)
Can anyone help with how to solve the problem of the container restarting?
kubectl describe pod kafka-6d889c567-w5n4s
Output describe:
Name: kafka-6d889c567-w5n4s
Namespace: default
Node: minikube/192.168.99.100
Start Time: Thu, 23 Nov 2017 17:03:20 +0300
Labels: pod-template-hash=284457123
run=kafka
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"kafka-6d889c567","uid":"0fa94c8d-d057-11e7-ad48-080027a5dfed","a...
Status: Running
IP: 172.17.0.5
Created By: ReplicaSet/kafka-6d889c567
Controlled By: ReplicaSet/kafka-6d889c567
Info about Containers:
Containers:
kafka:
Container ID: docker://7ed3de8ef2e3e665ba693186f5125c6802283e1fabca8f3c85eb584f8de19526
Image: wurstmeister/kafka
Image ID: docker-pullable://wurstmeister/kafka@sha256:2aa183fd201d693e24d4d5d483b081fc2c62c198a7acb8484838328c83542c96
Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 27 Nov 2017 09:43:39 +0300
Finished: Mon, 27 Nov 2017 09:43:49 +0300
Ready: False
Restart Count: 1003
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bnz99 (ro)
Info about Conditions:
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Info about volumes:
Volumes:
default-token-bnz99:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-bnz99
Optional: false
QoS Class: BestEffort
Info about events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 38m (x699 over 2d) kubelet, minikube pulling image "wurstmeister/kafka"
Warning BackOff 18m (x16075 over 2d) kubelet, minikube Back-off restarting failed container
Warning FailedSync 3m (x16140 over 2d) kubelet, minikube Error syncing pod