Run Jenkins slave nodes on an EKS cluster via the Kubernetes plugin - jenkins

I am using the Jenkins Kubernetes plugin and have been trying to connect to an EKS cluster from Jenkins. My Jenkins master is running on a standalone server, and EKS is running separately. I want the slave nodes to be provisioned as pods in the cluster. However, when I use the Kubernetes plugin to connect to the cluster using the kubeconfig file, it gives me this error:
Error testing connection : Failure executing: GET at: https://*******/api/v1/namespaces/default/pods. Message: Forbidden! User arn:aws:eks:eu-west-1:******:cluster/******* doesn't have permission. pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" in the namespace "default".
I have tried creating a Role and RoleBinding, given below, but I am still unable to provision agents on the EKS cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default # your namespace
subjects:
- kind: User
  name: system:anonymous # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role # this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
This is the RoleBinding I created, and this is the error I am still getting.
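For what it's worth, the roleRef above only binds to a Role that already exists in the same namespace. A minimal sketch of a matching pod-reader Role (the verbs are an assumption, mirroring what the plugin needs to list pods):

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader   # must match roleRef.name in the RoleBinding above
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
EOF

Note, though, that system:anonymous is the identity Kubernetes assigns to unauthenticated requests, so the error suggests the kubeconfig credentials are not reaching the API server at all; granting rights to anonymous users papers over that rather than fixing it.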

Related

Kubernetes - Changing autoscaling default version from autoscaling/v1 to autoscaling/v2beta2

I am using Kubernetes v1.22.2. I am trying to implement a HorizontalPodAutoscaler with API version autoscaling/v2beta2, as I want autoscaling on both RAM and CPU.
To deploy to k8s in my CI/CD pipeline with Jenkins, I am using the Kubernetes Continuous Deploy and Kubernetes plugins. When I try to deploy from Jenkins, I get the following error:
hudson.remoting.ProxyException: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://MY_IP:6443/apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers. Message: the API version in the data (autoscaling/v2beta2) does not match the expected API version (autoscaling/v1). Received status: Status(apiVersion=v1, code=400, details=null, kind=Status, message=the API version in the data (autoscaling/v2beta2) does not match the expected API version (autoscaling/v1), metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=BadRequest, status=Failure, additionalProperties={}).
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:472)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:411)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:780)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:349)
Here is the output from my k8s master:
kubectl api-versions | grep autoscaling
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
Here is my HPA file:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: getuser
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Pods
        value: 1
        periodSeconds: 15
  maxReplicas: 3
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: getuser
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 55
How do I enable or set a default API version for autoscaling other than autoscaling/v1?
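One way to narrow this down (a sketch, assuming the manifest above is saved as getuser-hpa.yaml): apply it directly with kubectl. If the cluster accepts it, the rejection appears to come from the client inside the Jenkins plugin, since the stack trace shows it POSTing to /apis/autoscaling/v1 even though the manifest declares autoscaling/v2beta2.

# sanity check: does the cluster itself accept the v2beta2 manifest?
kubectl apply -f getuser-hpa.yaml
# inspect the stored object and its apiVersion
kubectl get hpa getuser -o yaml | head -n 3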

How to know the external endpoint or IP of a LoadBalancer Service inside a pod

I have a helm chart that installs/creates an instance of our app. Our app consists of multiple micro-services, and one of them is nginx. The nginx service is of type LoadBalancer.
So when the user first tries to hit the LoadBalancer IP from a browser, I want to open a web page asking him to bind some domains (e.g. a.yourdomain.com and b.yourdomain.com) to the LoadBalancer IP. Once he does that, he will click a "verify" button, and at that point I want to check on the server side whether the domains correctly point to the LoadBalancer IP.
Now the problem is: how can I get the LoadBalancer external IP inside the nginx pod, so that I can ping the domains and check whether they are pointing to the LoadBalancer IP?
Edit
Note: I would like to avoid using kubectl because I do not want to install an extra utility for a one-time job.
I have found a solution, tested and it's working.
To find the external IP associated with an nginx service of type LoadBalancer, you first want to create a service account:
kubectl create serviceaccount hello
and also create a ClusterRole and RoleBinding like the following:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-services
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-services
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: read-services
subjects:
- kind: ServiceAccount
  name: hello
  namespace: default
Then you create your pod with serviceAccountName: hello, as in the sketch below.
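A minimal sketch of such a pod (the pod name and image are placeholders for your own spec):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo            # placeholder
  namespace: default
spec:
  serviceAccountName: hello   # the service account created above
  containers:
  - name: nginx
    image: nginx
EOF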
Now, from inside that pod, you can make a curl request to the API server as shown in the k8s documentation:
APISERVER=https://kubernetes.default.svc
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
TOKEN=$(cat ${SERVICEACCOUNT}/token)
CACERT=${SERVICEACCOUNT}/ca.crt
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/$NAMESPACE/services/nginx/
The IP you are looking for will be under .status.loadBalancer.ingress[0].ip.
Let me know if it was helpful.
The value of the external IP will be in the status of the service object.
kubectl get svc $SVC_NAME -n $NS_NAME -o jsonpath="{.status.loadBalancer.ingress[*].ip}" will get the external IP.
I found the solution: the trick is to call the k8s API server with the default token that is seeded by k8s. These two simple commands will do it:
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/<your_namespace>/services/nginx \
| jq -r '.status.loadBalancer.ingress[0].ip'

Configuring the Kubernetes plugin in Jenkins

I am trying to configure kubernetes plugin in Jenkins. Here are the details I am putting in:
Now, when I click on test connection, I get the following error:
Error testing connection https://xx.xx.xx.xx:8001: Failure executing: GET at: https://xx.xx.xx.xx:8001/api/v1/namespaces/default/pods. Message: Unauthorized! Configured service account doesn't have access. Service account may have been revoked. Unauthorized.
After doing some googling, I realized it might be because of role bindings, so I created a role binding for my default service account:
# kubectl describe rolebinding jenkins
Name: jenkins
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: pod-reader
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount default default
Here is the pod-reader role:
# kubectl describe role pod-reader
Name: pod-reader
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [get watch list]
But I still get the same error. Is there anything else that needs to be done here? TIA.
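For reference, the setup described by those two describe outputs corresponds roughly to the following imperative commands (a sketch; the default namespace and default service account are inferred from the output above):

# recreate the pod-reader Role and the jenkins RoleBinding imperatively
kubectl -n default create role pod-reader --verb=get,watch,list --resource=pods
kubectl -n default create rolebinding jenkins --role=pod-reader --serviceaccount=default:default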
I think it's not working because you didn't provide the certificate. This worked for me.
Figured it out: I was using the credentials as plain text. I changed that to a Kubernetes secret, and it worked.

Spinnaker Gate endpoint

I'm working on Spinnaker to create a new CD pipeline.
I've deployed Halyard in a Docker container on my computer, and from it deployed Spinnaker to Google Kubernetes Engine.
After all of that, I prepared a new Ingress yaml file, shown below.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-cloud
  namespace: spinnaker
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: spin-deck
          servicePort: 9000
After accessing the Spinnaker UI via a public IP, I got the error shown below.
Error fetching applications. Check that your gate endpoint is accessible.
I then checked the docs about it and ran the commands shown below.
Here is the service data on my K8S cluster:
spin-deck NodePort 10.11.245.236 <none> 9000:32111/TCP 1h
spin-gate NodePort 10.11.251.78 <none> 8084:31686/TCP 1h
For UI
hal config security ui edit --override-base-url "http://spin-deck.spinnaker:9000"
For API
hal config security api edit --override-base-url "http://spin-gate.spinnaker:8084"
After running these commands and redeploying Spinnaker, the error persisted.
How can I solve the problem of the UI reaching the Spinnaker Gate?
--override-base-url should be set without the port.
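In other words, something along these lines (re-using the hostnames from the question):

# the same hal commands as above, with the :9000/:8084 ports dropped
hal config security ui edit --override-base-url "http://spin-deck.spinnaker"
hal config security api edit --override-base-url "http://spin-gate.spinnaker"
# redeploy so the change takes effect
hal deploy apply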

Kubernetes - Jenkins integration

I've bootstrapped a Kubernetes 1.9 RBAC cluster with kubeadm, and I've started Jenkins inside a pod based on jenkins/jenkins:lts. I would like to try out https://github.com/jenkinsci/kubernetes-plugin .
I have already created a service account based on the proposal in https://gist.github.com/lachie83/17c1fff4eb58cf75c5fb11a4957a64d2
> kubectl -n dev-infra create sa jenkins
> kubectl create clusterrolebinding jenkins --clusterrole cluster-admin --serviceaccount=dev-infra:jenkins
> kubectl -n dev-infra get sa jenkins -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-02-16T12:06:26Z
  name: jenkins
  namespace: dev-infra
  resourceVersion: "1295580"
  selfLink: /api/v1/namespaces/dev-infra/serviceaccounts/jenkins
  uid: d040041c-1311-11e8-a4f8-005056039a14
secrets:
- name: jenkins-token-vmt79
> kubectl -n dev-infra get secret jenkins-token-vmt79 -o yaml
apiVersion: v1
data:
  ca.crt: LS0tL...0tLQo=
  namespace: ZGV2LWluZnJh
  token: ZXlK...tdVE=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: jenkins
    kubernetes.io/service-account.uid: d040041c-1311-11e8-a4f8-005056039a14
  creationTimestamp: 2018-02-16T12:06:26Z
  name: jenkins-token-vmt79
  namespace: dev-infra
  resourceVersion: "1295579"
  selfLink: /api/v1/namespaces/dev-infra/secrets/jenkins-token-vmt79
  uid: d041fa6c-1311-11e8-a4f8-005056039a14
type: kubernetes.io/service-account-token
After that I go to Manage Jenkins -> Configure System -> Cloud -> Kubernetes and set the Kubernetes URL to the cluster API endpoint that I also use in my kubectl KUBECONFIG (server: url:port).
When I hit test connection I get "Error testing connection https://url:port: Failure executing: GET at: https://url:port/api/v1/namespaces/dev-infra/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:dev-infra:default" cannot list pods in the namespace "dev-infra".
I don't want to give the dev-infra:default user a cluster-admin role, and I want to use the jenkins SA I created. I can't understand how to configure the credentials in Jenkins. When I hit "Add credentials" on the screen shown in https://github.com/jenkinsci/kubernetes-plugin/blob/master/configuration.png, I get these options:
- Username with password
- Docker Host Certificate Authentication
- Kubernetes Service Account
- OpenShift OAuth token
- OpenShift Username and Password
- SSH Username with private key
- Secret file
- Secret text
- Certificate
I could not find a clear example of how to configure the Jenkins Kubernetes cloud connector so that Jenkins authenticates with the jenkins service account.
Could you please help me find a step-by-step guide - what kind of credentials do I need?
Regards,
Pavel
The best practice is to launch your Jenkins master pod with the service account you created, instead of creating credentials in Jenkins.
See example yaml
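The linked yaml is not reproduced here, but the core of the idea is just setting serviceAccountName on the master pod. A minimal sketch (the pod name and port are placeholders):

kubectl -n dev-infra apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-master          # placeholder
spec:
  serviceAccountName: jenkins   # the SA created earlier in this thread
  containers:
  - name: jenkins
    image: jenkins/jenkins:lts
    ports:
    - containerPort: 8080
EOF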
The Kubernetes plugin for Jenkins reads this file /var/run/secrets/kubernetes.io/serviceaccount/token. Please see if your Jenkins pod has this. The service account should have permissions targeting pods in the appropriate namespace.
In fact, we are using Jenkins running outside Kubernetes 1.9. We simply picked the default service account token (from the default namespace) and put it in that file on the Jenkins master. Restarted ... and the Kubernetes token credential type was visible.
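A sketch of that manual step (the secret name and paths will differ in your cluster; this assumes the pre-1.24 behavior where a token secret is auto-created and listed under the service account's .secrets):

# on a machine with kubectl access, decode the default SA's token
SECRET_NAME=$(kubectl -n default get sa default -o jsonpath='{.secrets[0].name}')
kubectl -n default get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 -d > sa.token

# place it where the plugin looks on the Jenkins master, then restart Jenkins
sudo mkdir -p /var/run/secrets/kubernetes.io/serviceaccount
sudo cp sa.token /var/run/secrets/kubernetes.io/serviceaccount/token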
We do have a role and rolebinding though:
kubectl create role jenkins --verb=get,list,watch,create,patch,delete --resource=pods
kubectl create rolebinding jenkins --role=jenkins --serviceaccount=default:default
In our case, Jenkins is configured to spin up slave pods in the default namespace. So this combination works.
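Adapting the same pattern to the asker's dev-infra namespace and jenkins service account (rather than cluster-admin) would presumably look like this; depending on how the plugin is used, extra subresources such as pods/exec and pods/log may also be needed:

# scope the permissions to the dev-infra namespace and the jenkins SA
kubectl -n dev-infra create role jenkins --verb=get,list,watch,create,patch,delete --resource=pods
kubectl -n dev-infra create rolebinding jenkins --role=jenkins --serviceaccount=dev-infra:jenkins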
More questions (similar):
Can I use Jenkins kubernetes plugin when Jenkins server is outside of a kubernetes cluster?
After some digging, it appears that the easiest way to go (without giving extra permissions to the default service account for the namespace) is to:
kubectl -n <your-namespace> create sa jenkins
kubectl create clusterrolebinding jenkins --clusterrole cluster-admin --serviceaccount=<your-namespace>:jenkins
kubectl get -n <your-namespace> sa/jenkins --template='{{range .secrets}}{{ .name }} {{end}}' | xargs -n 1 kubectl -n <your-namespace> get secret --template='{{ if .data.token }}{{ .data.token }}{{end}}' | head -n 1 | base64 -d -
It seems you can store this token as type Secret text in Jenkins, and the plugin is able to pick it up.
Another advantage of this approach, compared to overwriting the default service account as mentioned above, is that you can have a secret per cluster - meaning you can use one Jenkins to connect to, for example, dev -> quality -> prod namespaces or clusters with separate accounts.
Please feel free to contribute, if you have a better way to go.
Regards,
Pavel
For more details you can check:
- https://gist.github.com/lachie83/17c1fff4eb58cf75c5fb11a4957a64d2
- https://github.com/openshift/origin/issues/6807
