I have a Kubernetes Job and I would like to get its pod's logs in a Jenkins pipeline.
So I try to grep the pod name into a Jenkins variable and then fetch the logs:
POD_NAME = sh script: "kubectl describe jobs.batch ${JOB_NAME} | grep 'Created pod' | cut -d':' -f2"
echo "${POD_NAME}"
sh "kubectl logs --follow ${POD_NAME}"
But I got null in the POD_NAME variable.
I assume that your Jenkins controller or agent is able to query the Kubernetes API with kubectl because it has a service account or some other form of credential for the cluster.
If that is true, I propose that you use a label to identify the pods created by the job and then query anything related to them through that label.
You can do that by adding a label to the .spec.template.metadata.labels section as shown below and then querying with kubectl and the --selector flag:
---
apiVersion: batch/v1
kind: Job
metadata:
  name: MYAPP
  ...
spec:
  template:
    metadata:
      ...
      labels:
        test: value
    spec:
      containers:
      - name: MYAPP
        image: python:3.7.6-alpine3.10
        ...
kubectl logs --follow --selector test=value
Use kubectl logs --help to get further information and examples.
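For completeness: Kubernetes also puts a job-name=<job-name> label on every pod a Job creates, so a selector works even without adding your own label. Inside a pipeline step this could look roughly like the sketch below (the NAMESPACE variable is an assumption, not something from your snippet):

sh "kubectl logs --follow --namespace ${NAMESPACE} --selector job-name=${JOB_NAME}"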
stage('Check pod status') {
    steps {
        script {
            sh '''
                POD_NAME=$(kubectl describe job -n ${NAMESPACE} ${JOB_NAME} | grep Created | cut -d ':' -f2)
                echo "${POD_NAME}"
            '''
        }
    }
}
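If the pod name is needed back in the Groovy context, as in the snippet from the question, the sh step also has to return its output; here is a minimal sketch of that variant (same describe/grep approach, with NAMESPACE and JOB_NAME assumed to be set as Groovy/environment variables):

script {
    // returnStdout makes the shell output available to Groovy; without it sh returns null
    POD_NAME = sh(
        script: "kubectl describe job -n ${NAMESPACE} ${JOB_NAME} | grep 'Created pod' | cut -d':' -f2",
        returnStdout: true
    ).trim()
    sh "kubectl logs --follow ${POD_NAME}"
}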
Our pipeline by default tries to use a container that matches the name of the current stage.
If this container doesn't exist, the container 'default' is used.
This fallback works, but when the container matching the stage name doesn't exist, a ProtocolException occurs, and it isn't catchable because it is thrown by a thread that is out of our control.
Is there a way to check if a container actually exists when using the Kubernetes plugin for Jenkins to prevent this exception from appearing? It seems like a basic function but I haven't been able to find anything like this online.
I can't show the actual code but here's a pipeline-script example extract that would trigger this exception:
node(POD_LABEL) {
    stage('Check Version (Maven)') {
        container('containerThatDoesNotExist') {
            try {
                sh 'mvn --version'
            } catch (Exception e) {
                // catch Exception
            }
        }
    }
}
java.net.ProtocolException: Expected HTTP 101 response but was '400 Bad Request'
at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
You can run a preliminary stage that gets the currently running containers by executing a kubectl command against the API server. The tricky point is that kubectl does not exist on the worker, so in that case:
Pull a kubectl image on the worker.
Add a stage that gets the running containers; use a label or timestamp to pick the desired one.
Use the right container, 'default' or rather 'some-container'.
Example:
pipeline {
    environment {
        CURRENT_CONTAINER = "default"
    }
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: some-app
    image: XXX/some-app
    imagePullPolicy: IfNotPresent
    tty: true
  - name: kubectl
    image: gcr.io/cloud-builders/kubectl
    imagePullPolicy: IfNotPresent
    command:
    - cat
    tty: true
'''
        }
    }
    stages {
        stage('Set Container Name') {
            steps {
                container('kubectl') {
                    withCredentials([
                        string(credentialsId: 'minikube', variable: 'api_token')
                    ]) {
                        script {
                            CURRENT_CONTAINER = sh(script: 'kubectl get pods -n jenkins -l job-name=pi -o jsonpath="{.items[*].spec.containers[0].name}"',
                                returnStdout: true
                            ).trim()
                            echo "Exec container ${CURRENT_CONTAINER}"
                        }
                    }
                }
            }
        }
        stage('Echo Container Name') {
            steps {
                echo "CURRENT_CONTAINER is ${CURRENT_CONTAINER}"
            }
        }
    }
}
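A later stage can then use the detected name, so the container() step never points at something that does not exist; a minimal sketch (the shell command inside is just a placeholder):

stage('Run In Detected Container') {
    steps {
        container("${CURRENT_CONTAINER}") {
            // placeholder command; replace with the real work for this stage
            sh 'mvn --version'
        }
    }
}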
The intention is to execute Gatling performance tests from the command line. The equivalent Docker command is:
docker run --rm -w /opt/gatling-fundamentals/ \
  tarunkumard/tarungatlingscript:v1.0 \
  ./gradlew gatlingRun-simulations.RuntimeParameters -DUSERS=500 -DRAMP_DURATION=5 -DDURATION=30
Now, to map the above docker run to Kubernetes using kubectl, I have created a pod whose gradlewcommand.yaml file is below:
apiVersion: v1
kind: Pod
metadata:
  name: gradlecommandfromcommandline
  labels:
    purpose: gradlecommandfromcommandline
spec:
  containers:
  - name: gradlecommandfromcommandline
    image: tarunkumard/tarungatlingscript:v1.0
    workingDir: /opt/gatling-fundamentals/
    command: ["./gradlew"]
    args: ["gatlingRun-simulations.RuntimeParameters", "-DUSERS=500", "-DRAMP_DURATION=5", "-DDURATION=30"]
  restartPolicy: OnFailure
The pod is then created using the command below:
kubectl apply -f gradlewcommand.yaml
Now comes my actual requirement, or question: how do I run or trigger the kubectl command so as to run the container inside the pod created above? Mind you, the pod name is gradlecommandfromcommandline.
Here is the command which solves the problem:
kubectl exec gradlecommandfromcommandline -- \
./gradlew gatlingRun-simulations.RuntimeParameters \
-DUSERS=500 -DRAMP_DURATION=5 -DDURATION=30
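The same pattern also covers the related follow-ups, for example watching the output of the run the pod starts on its own (via the command/args in the manifest), or re-running with other parameter values; the values below are purely illustrative:

# Logs of the run started by the pod's own command/args
kubectl logs gradlecommandfromcommandline

# Re-run inside the same container with different (illustrative) values
kubectl exec gradlecommandfromcommandline -- \
  ./gradlew gatlingRun-simulations.RuntimeParameters \
  -DUSERS=100 -DRAMP_DURATION=10 -DDURATION=60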
I'd like to log in to my newly installed Kubernetes dashboard (k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0) via token, and it doesn't work.
I have the exact same problem as in How to log in to Kubernetes Dashboard UI with Service Account's token,
but I verified my token and it fits. I also DON'T get the "Authentication failed..." error.
When I enter the token, just nothing happens, but I see new entries in the logfile:
{"log":"2018/12/07 14:59:49 [2018-12-07T14:59:49Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/login request from 192.168.178.10:60092: { contents hidden }\n","stream":"stdout","time":"2018-12-07T14:59:49.655298186Z"}
{"log":"2018/12/07 14:59:49 [2018-12-07T14:59:49Z] Outcoming response to 192.168.178.10:60092 with 200 status code\n","stream":"stdout","time":"2018-12-07T14:59:49.655840444Z"}
{"log":"2018/12/07 14:59:49 [2018-12-07T14:59:49Z] Incoming HTTP/2.0 POST /api/v1/login request from 192.168.178.10:60092: { contents hidden }\n","stream":"stdout","time":"2018-12-07T14:59:49.665272088Z"}
{"log":"2018/12/07 14:59:49 [2018-12-07T14:59:49Z] Outcoming response to 192.168.178.10:60092 with 200 status code\n","stream":"stdout","time":"2018-12-07T14:59:49.670318659Z"}
{"log":"2018/12/07 14:59:49 [2018-12-07T14:59:49Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 192.168.178.10:60092: {}\n","stream":"stdout","time":"2018-12-07T14:59:49.688294191Z"}
{"log":"2018/12/07 14:59:49 [2018-12-07T14:59:49Z] Outcoming response to 192.168.178.10:60092 with 200 status code\n","stream":"stdout","time":"2018-12-07T14:59:49.691135283Z"}
{"log":"2018/12/07 14:59:52 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.\n","stream":"stdout","time":"2018-12-07T14:59:52.237740364Z"}
What I've done:
kubectl create serviceaccount myservice
kubectl get serviceaccount myservice -o yaml
Token:
TOKEN=$(echo "ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkltMTVjMlZ5ZG1salpTMTBiMnRsYmkxa09ISnlaQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG01aGJXVWlPaUp0ZVhObGNuWnBZMlVpTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxjblpwWTJVdFlXTmpiM1Z1ZEM1MWFXUWlPaUpoWWpFeVlUVmpOUzFtWVRKakxURXhaVGd0WVRZNE55MHdNRFV3TlRZNE9EZzRNak1pTENKemRXSWlPaUp6ZVhOMFpXMDZjMlZ5ZG1salpXRmpZMjkxYm5RNlpHVm1ZWFZzZERwdGVYTmxjblpwWTJVaWZRLm0yR2F4VmNsOTYzVkVjbUltb3dzY25aeWdrd2hQTTBlZmNjUnVoaGNmdlNWXzU5Y29wNkdMc2t0bTRtY1FqcjBnaWhzMTZXZjFrd1VkVjBlTFJNVE1zaWZudlQxR2J6Smd3ZURydTVMbHVteW5tY3Y3Sm1GVDFGLXpJSjI0SFRERVhlVTNtMV9OVjJHcUZHdTNmVTlxOVFscG44ZVRxR2FuNDZLdEM2OTZGUVBqbjFhVnRER28wMlVrU2NwVGRHckNkenFMUjFBT0ZMTXVyUWFjWldIbHlhTmZ4Sy02bU16aDBZdG1seHdfcEFSeVlySXJMVlR2dXlLeDRmQzRvWUx2elVia1pkWmp1eUlJWnFmYXVUMTFKQUFad243MHZyZW1xbVVHTXBsdXNaYVdiU2h3SlJkRWZmMzdjTEd3R3lwdU1SeXI2a3NsVlJiLW50eXdWbHYxQQ==" | base64 -d)
echo $TOKEN
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im15c2VydmljZS10b2tlbi1kOHJyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJteXNlcnZpY2UiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhYjEyYTVjNS1mYTJjLTExZTgtYTY4Ny0wMDUwNTY4ODg4MjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpteXNlcnZpY2UifQ.m2GaxVcl963VEcmImowscnZygkwhPM0efccRuhhcfvSV_59cop6GLsktm4mcQjr0gihs16Wf1kwUdV0eLRMTMsifnvT1GbzJgweDru5Llumynmcv7JmFT1F-zIJ24HTDEXeU3m1_NV2GqFGu3fU9q9Qlpn8eTqGan46KtC696FQPjn1aVtDGo02UkScpTdGrCdzqLR1AOFLMurQacZWHlyaNfxK-6mMzh0Ytmlxw_pARyYrIrLVTvuyKx4fC4oYLvzUbkZdZjuyIIZqfauT11JAAZwn70vremqmUGMplusZaWbShwJRdEff37cLGwGypuMRyr6kslVRb-ntywVlv1A
I start:
kubectl proxy --port=9999 --address='192.168.178.10' --accept-hosts="^*$"
Does it work just on localhost (I don't want to install a browser or a desktop)?
I'd also like to know how to keep the dashboard running permanently, since it stops after Ctrl+C on the kubectl proxy command.
I found the workaround:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
Running this command and clicking "Skip" on the dashboard, I'm logged in, but how do I get rid of this user, as I can't find it anymore via
kubectl get serviceaccounts --all-namespaces
nor
kubectl get serviceaccounts -n kube-system
?
How do I get it to run via HTTPS?
Thanks in advance
Tom
I found the answer to all my questions at
http://www.joseluisgomez.com/containers/kubernetes-dashboard/
Access via kubectl proxy is not recommended for production use (but unfortunately it is the only way explained in the Kubernetes documentation).
It's possible to access the dashboard via HTTPS out of the box, but there are some additional steps required.
Create a certificate:
grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"
You'll get a kubecfg.p12, which you have to download from the Kubernetes master and install on your client (double-click, next, next, next; the Chrome browser is recommended).
Create a service account and grant it cluster-admin via a ClusterRoleBinding:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
Get the bearer token for the "admin-user" account:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
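If you prefer to extract only the raw token instead of reading it out of the describe output, a jsonpath variant of the same lookup should also work (a sketch; the secret name suffix differs per cluster):

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d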
Access https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy, choose "Token", put in the bearer token you got from the previous step, and you're done.
Note: You can get information about your cluster via kubectl cluster-info.
This is a bit tricky. I have a K8s cluster up and running, I am able to execute a Docker image inside that cluster, and I can see the output of the command "kubectl get pods -o wide". Now I have GitLab set up with this K8s cluster.
I have set up the variables $KUBE_URL, $KUBE_USER and $KUBE_PASSWORD in GitLab for the above K8s cluster.
The GitLab runner console displays all of this information as shown in the console log below; at the end it fails with:
$ kubeconfig=cluster1-config kubectl get pods -o wide
error: the server doesn't have a resource type "pods"
ERROR: Job failed: exit code 1
Here is the full console log:
Running with gitlab-runner 11.4.2 (cf91d5e1)
on WotC-Docker-ip-10-102-0-70 d457d50a
Using Docker executor with image docker:latest …
Pulling docker image docker:latest …
Using docker image sha256:062267097b77e3ecf374b437e93fefe2bbb2897da989f930e4750752ddfc822a for docker:latest …
Running on runner-d457d50a-project-185-concurrent-0 via ip-10-102-0-70…
Fetching changes…
Removing cluster1-config
HEAD is now at 25846c4 Initial commit
From https://git.com/core-systems/gatling
25846c4…bcaa89b master -> origin/master
Checking out bcaa89bf as master…
Skipping Git submodules setup
$ uname -a
Linux runner-d457d50a-project-185-concurrent-0 4.14.67-66.56.amzn1.x86_64 #1 SMP Tue Sep 4 22:03:21 UTC 2018 x86_64 Linux
$ apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
(1/4) Installing nghttp2-libs (1.32.0-r0)
(2/4) Installing libssh2 (1.8.0-r3)
(3/4) Installing libcurl (7.61.1-r1)
(4/4) Installing curl (7.61.1-r1)
Executing busybox-1.28.4-r1.trigger
OK: 6 MiB in 18 packages
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
95 37.3M 95 35.8M 0 0 37.8M 0 --:--:-- --:--:-- --:--:-- 37.7M
100 37.3M 100 37.3M 0 0 38.3M 0 --:--:-- --:--:-- --:--:-- 38.3M
$ chmod +x ./kubectl
$ mv ./kubectl /usr/local/bin/kubectl
$ kubectl config set-cluster nosebit --server="$KUBE_URL" --insecure-skip-tls-verify=true
Cluster “nosebit” set.
$ kubectl config set-credentials admin --username="$KUBE_USER" --password="$KUBE_PASSWORD"
User “admin” set.
$ kubectl config set-context default --cluster=nosebit --user=admin
Context “default” created.
$ kubectl config use-context default
Switched to context “default”.
$ cat $HOME/.kube/config
apiVersion: v1
clusters:
cluster:
insecure-skip-tls-verify: true
server: https://18.216.8.240:443
name: nosebit
contexts:
context:
cluster: nosebit
user: admin
name: default
current-context: default
kind: Config
preferences: {}
users:
name: admin
user:
password: |-
MIIDOzCCAiOgAwIBAgIJALOrUrxmhgpHMA0GCSqGSIb3DQEBCwUAMBgxFjAUBgNV
BAMMDTEzLjU4LjE3OC4yNDEwHhcNMTgxMTI1MjIwNzE1WhcNMjgxMTIyMjIwNzE1
WjAYMRYwFAYDVQQDDA0xMy41OC4xNzguMjQxMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEA4jmyesjEiy6T2meCdnzzLfSE1VtbY//0MprL9Iwsksa4xssf
PXrwq97I/aNNE2hWZhZkpPd0We/hNKh2rxwNjgozQTNcXqjC01ZVjfvpvwHzYDqj
4cz6y469rbuKqmXHKsy/1docA0IdyRKS1JKWz9Iy9Wi2knjZor6/kgvzGKdH96sl
ltwG7hNnIOrfNQ6Bzg1H6LEmFP+HyZoylWRsscAIxD8I/cmSz7YGM1L1HWqvUkRw
GE23TXSG4uNYDkFaqX46r4nwLlQp8p7heHeCV/mGPLd0QCUaCewqSR+gFkQz4nYX
l6BA3M0Bo4GHMIGEMB0GA1UdDgQW
BBQqsD7FUt9vBW2LcX4xbqhcO1khuTBIBgNVHSMEQTA/gBQqsD7FUt9vBW2LcX4x
bqhcO1khuaEcpBowGDEWMBQGA1UEAwwNMTMuNTguMTc4LjI0MYIJALOrUrxmhgpH
MAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQDAgEGMA0GCSqGSIb3DQEBCwUAA4IBAQAY
6mxGeQ90mXYdbLtoVxOUSvqk9+Ded1IzuoQMr0joxkDz/95HCddyTgW0gMaYsv2J
IZVH7JQ6NkveTyd42QI29fFEkGfPaPuLZKn5Chr9QgXJ73aYrdFgluSgkqukg4rj
rrb+V++hE9uOBtDzcssd2g+j9oNA5j3VRKa97vi3o0eq6vs++ok0l1VD4wyx7m+l
seFx50RGXoDjIGh73Gh9Rs7/Pvc1Pj8uAGvj8B7ZpAMPEWYmkkc4F5Y/14YbtfGc
2VlUJcs5p7CbzsqI5Tqm+S9LzZXtD1dVnsbbbGqWo32CIm36Cxz/O/FCf8tbITpr
u2O7VjBs5Xfm3tiW811k
username: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tdzZqdDYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjFiMjc2YzIxLWYxMDAtMTFlOC04YjM3LTAyZDhiMzdkOTVhMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNQifQ.RCQQWjDCSkH8YckBeck-EIdvOnTKBmUACXVixPfUp9gAmUnit5qIPvvFnav-C-orfYt552NQ5GTLOA3yR5-jmxoYJwCJBfvPRb1GqqgiiJE2pBsu5Arm30MOi2wbt5uCNfKMAqcWiyJQF98M2PFc__jH6C1QWPXgJokyk7i8O6s3TD69KrrXNj_W4reDXourLl7HwHWoWwNKF0dgldanug-_zjvE06b6VZBI-YWpm9bpe_ArIOrMEjl0JRGerWahcQFVJsmhc4vgw-9-jUsfKPUYEfDItJdQKyV9dgdwShgzMINuuHlU7w7WBxmJT6cqMIvHRnDHuno3qMKTJTuh-g
$ kubectl config view --minify > cluster1-config
$ export KUBECONFIG=$HOME/.kube/config
$ kubectl --kubeconfig=cluster1-config config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
default nosebit admin
$ kubeconfig=cluster1-config kubectl get pods -o wide
error: the server doesn’t have a resource type “pods”
ERROR: Job failed: exit code 1
==================================================================================================
Here is my .gitlab-ci.yml content. Could you suggest why kubectl get pods is not displaying the pods of the remote cluster, even though the KUBECONFIG setup completes successfully?
image: docker:latest
variables:
  CONTAINER_DEV_IMAGE: https://hub.docker.com/r/tarunkumard/gatling/:$CI_COMMIT_SHA
stages:
  - deploy
deploy:
  stage: deploy
  tags:
    - docker
  script:
    - 'uname -a'
    - 'apk add --no-cache curl'
    - 'curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl'
    - 'chmod +x ./kubectl'
    - 'mv ./kubectl /usr/local/bin/kubectl'
    - 'kubectl config set-cluster nosebit --server="$KUBE_URL" --insecure-skip-tls-verify=true'
    - 'kubectl config set-credentials admin --username=" " --password="$KUBE_PASSWORD"'
    - 'kubectl config set-context default --cluster=nosebit --user=admin'
    - 'kubectl config use-context default'
    - 'cat $HOME/.kube/config'
    - 'kubectl config view --minify > cluster1-config'
    - 'export KUBECONFIG=$HOME/.kube/config'
    - 'kubectl --kubeconfig=cluster1-config config get-contexts'
    - 'kubeconfig=cluster1-config kubectl get pods -o wide'
Why is the GitLab runner failing to get pods from the Kubernetes cluster? (Note: this cluster is up and running, and I am able to see pods using the kubectl get pods command.)
Basically,
kubectl config view --minify > cluster1-config
won't do it, because the output will be something like this, with no actual credentials/certs:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://<kube-apiserver>:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
You need:
kubectl config view --raw > cluster1-config
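Applied to the pipeline above, that means swapping the corresponding script lines, roughly as follows (a sketch that leaves the rest of the job unchanged):

    - 'kubectl config view --raw > cluster1-config'
    - 'kubectl --kubeconfig=cluster1-config get pods -o wide'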
If that's not the issue, it could be that your credentials don't have the right RBAC permissions. I would try to find the ClusterRoleBinding or RoleBinding that is bound to that admin user. Something like:
$ kubectl get clusterrolebinding -o=jsonpath='{range .items[*]}{.metadata.name} {.roleRef.name} {.subjects}{"\n"}{end}' | grep admin
$ kubectl get rolebinding -o=jsonpath='{range .items[*]}{.metadata.name} {.roleRef.name} {.subjects}{"\n"}{end}' | grep admin
Once you find the role, you can see if it has the right permissions to view pods. For example:
$ kubectl get clusterrole cluster-admin -o=yaml
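If the user turns out to have no binding that allows listing pods, one way to grant just that would be something like the following (the role and binding names here are made up for illustration):

# Hypothetical names; grants cluster-wide read access to pods for the "admin" user
kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods
kubectl create clusterrolebinding admin-can-read-pods --clusterrole=pod-reader --user=admin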
I've bootstrapped a Kubernetes 1.9 RBAC cluster with kubeadm and started Jenkins, based on jenkins/jenkins:lts, inside a pod. I would like to try out https://github.com/jenkinsci/kubernetes-plugin .
I have already created a serviceaccount based on the proposal in https://gist.github.com/lachie83/17c1fff4eb58cf75c5fb11a4957a64d2
> kubectl -n dev-infra create sa jenkins
> kubectl create clusterrolebinding jenkins --clusterrole cluster-admin --serviceaccount=dev-infra:jenkins
> kubectl -n dev-infra get sa jenkins -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-02-16T12:06:26Z
  name: jenkins
  namespace: dev-infra
  resourceVersion: "1295580"
  selfLink: /api/v1/namespaces/dev-infra/serviceaccounts/jenkins
  uid: d040041c-1311-11e8-a4f8-005056039a14
secrets:
- name: jenkins-token-vmt79
> kubectl -n dev-infra get secret jenkins-token-vmt79 -o yaml
apiVersion: v1
data:
  ca.crt: LS0tL...0tLQo=
  namespace: ZGV2LWluZnJh
  token: ZXlK...tdVE=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: jenkins
    kubernetes.io/service-account.uid: d040041c-1311-11e8-a4f8-005056039a14
  creationTimestamp: 2018-02-16T12:06:26Z
  name: jenkins-token-vmt79
  namespace: dev-infra
  resourceVersion: "1295579"
  selfLink: /api/v1/namespaces/dev-infra/secrets/jenkins-token-vmt79
  uid: d041fa6c-1311-11e8-a4f8-005056039a14
type: kubernetes.io/service-account-token
After that I go to Manage Jenkins -> Configure System -> Cloud -> Kubernetes and set the Kubernetes URL to the cluster API server that I also use in my kubectl KUBECONFIG (server: url:port).
When I hit test connection I get "Error testing connection https://url:port: Failure executing: GET at: https://url:port/api/v1/namespaces/dev-infra/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:dev-infra:default" cannot list pods in the namespace "dev-infra".
I don't want to give the dev-infra:default user a cluster-admin role; I want to use the jenkins service account I created. I can't understand how to configure the credentials in Jenkins. When I hit Add Credentials on the screen shown at https://github.com/jenkinsci/kubernetes-plugin/blob/master/configuration.png, I get:
<select class="setting-input dropdownList">
  <option value="0">Username with password</option>
  <option value="1">Docker Host Certificate Authentication</option>
  <option value="2">Kubernetes Service Account</option>
  <option value="3">OpenShift OAuth token</option>
  <option value="4">OpenShift Username and Password</option>
  <option value="5">SSH Username with private key</option>
  <option value="6">Secret file</option>
  <option value="7">Secret text</option>
  <option value="8">Certificate</option>
</select>
I could not find a clear example of how to configure the Jenkins Kubernetes Cloud connector so that my Jenkins authenticates with the jenkins service account.
Could you please help me find a step-by-step guide: what kind of credentials do I need?
Regards,
Pavel
The best practice is to launch your Jenkins master pod with the service account you created, instead of creating credentials in Jenkins.
See the example YAML.
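As a rough sketch of what that could look like (fields trimmed down; adjust the image, namespace and the rest to your setup), the important part is serviceAccountName pointing at the jenkins service account created above:

apiVersion: v1
kind: Pod
metadata:
  name: jenkins-master
  namespace: dev-infra
spec:
  # run the Jenkins master under the dedicated service account instead of "default"
  serviceAccountName: jenkins
  containers:
  - name: jenkins
    image: jenkins/jenkins:lts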
The Kubernetes plugin for Jenkins reads this file /var/run/secrets/kubernetes.io/serviceaccount/token. Please see if your Jenkins pod has this. The service account should have permissions targeting pods in the appropriate namespace.
In fact, we are using Jenkins running outside kubernetes 1.9. We simply picked the default service account token (from default namespace), and put it in that file on the Jenkins master. Restarted ... and the kubernetes token credential type was visible.
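For reference, a sketch of how that token could be extracted and written to the path the plugin reads (this assumes a Kubernetes-1.9-era cluster, where each service account still has an auto-created token secret, and that the directory exists on the Jenkins master):

# Find the default service account's token secret and drop the decoded token
# where the Kubernetes plugin expects it (run on the Jenkins master).
SECRET=$(kubectl -n default get sa default -o jsonpath='{.secrets[0].name}')
kubectl -n default get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d \
  > /var/run/secrets/kubernetes.io/serviceaccount/token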
We do have a role and rolebinding though:
kubectl create role jenkins --verb=get,list,watch,create,patch,delete --resource=pods
kubectl create rolebinding jenkins --role=jenkins --serviceaccount=default:default
In our case, Jenkins is configured to spin up slave pods in the default namespace. So this combination works.
More questions (similar):
Can I use Jenkins kubernetes plugin when Jenkins server is outside of a kubernetes cluster?
After some digging, it appears that the easiest way to go (without giving extra permissions to the default service account for the namespace) is to:
kubectl -n <your-namespace> create sa jenkins
kubectl create clusterrolebinding jenkins --clusterrole cluster-admin --serviceaccount=<your-namespace>:jenkins
kubectl get -n <your-namespace> sa/jenkins --template='{{range .secrets}}{{ .name }} {{end}}' | xargs -n 1 kubectl -n <your-namespace> get secret --template='{{ if .data.token }}{{ .data.token }}{{end}}' | head -n 1 | base64 -d -
It seems you can store this token as a credential of type Secret text in Jenkins, and the plugin is able to pick it up.
Another advantage of this approach, compared to overwriting the default service account as mentioned earlier, is that you can have a secret per cluster, meaning you can use one Jenkins to connect to, for example, dev -> quality -> prod namespaces or clusters with separate accounts.
Please feel free to contribute if you have a better way to go.
Regards,
Pavel
For more details you can check:
- https://gist.github.com/lachie83/17c1fff4eb58cf75c5fb11a4957a64d2
- https://github.com/openshift/origin/issues/6807