Kubernetes - Jenkins integration - docker

I've bootstrapped a Kubernetes 1.9 RBAC cluster with kubeadm, and I've started Jenkins inside a pod based on jenkins/jenkins:lts. I would like to try out https://github.com/jenkinsci/kubernetes-plugin .
I have already created a serviceaccount based on the proposal in https://gist.github.com/lachie83/17c1fff4eb58cf75c5fb11a4957a64d2
> kubectl -n dev-infra create sa jenkins
> kubectl create clusterrolebinding jenkins --clusterrole cluster-admin --serviceaccount=dev-infra:jenkins
> kubectl -n dev-infra get sa jenkins -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-02-16T12:06:26Z
  name: jenkins
  namespace: dev-infra
  resourceVersion: "1295580"
  selfLink: /api/v1/namespaces/dev-infra/serviceaccounts/jenkins
  uid: d040041c-1311-11e8-a4f8-005056039a14
secrets:
- name: jenkins-token-vmt79
> kubectl -n dev-infra get secret jenkins-token-vmt79 -o yaml
apiVersion: v1
data:
  ca.crt: LS0tL...0tLQo=
  namespace: ZGV2LWluZnJh
  token: ZXlK...tdVE=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: jenkins
    kubernetes.io/service-account.uid: d040041c-1311-11e8-a4f8-005056039a14
  creationTimestamp: 2018-02-16T12:06:26Z
  name: jenkins-token-vmt79
  namespace: dev-infra
  resourceVersion: "1295579"
  selfLink: /api/v1/namespaces/dev-infra/secrets/jenkins-token-vmt79
  uid: d041fa6c-1311-11e8-a4f8-005056039a14
type: kubernetes.io/service-account-token
After that I go to Manage Jenkins -> Configure System -> Cloud -> Kubernetes and set the Kubernetes URL to the cluster API server that I also use in my kubectl KUBECONFIG (server: url:port).
When I hit test connection I get "Error testing connection https://url:port: Failure executing: GET at: https://url:port/api/v1/namespaces/dev-infra/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:dev-infra:default" cannot list pods in the namespace "dev-infra".
I don't want to give the dev-infra:default service account a cluster-admin role; I want to use the jenkins service account I created, but I can't figure out how to configure the credentials in Jenkins. When I hit Add Credentials on the screen shown in https://github.com/jenkinsci/kubernetes-plugin/blob/master/configuration.png I get the following credential types:
- Username with password
- Docker Host Certificate Authentication
- Kubernetes Service Account
- OpenShift OAuth token
- OpenShift Username and Password
- SSH Username with private key
- Secret file
- Secret text
- Certificate
I could not find a clear example of how to configure the Jenkins Kubernetes cloud connector to authenticate with the jenkins service account.
Could you please help me find a step-by-step guide - what kind of credentials do I need?
Regards,
Pavel

The best practice is to launch your Jenkins master pod with the service account you created, instead of creating credentials in Jenkins.
See the example yaml below.
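A minimal sketch of what that could look like, using the jenkins service account from the question in the dev-infra namespace (pod name, image and ports are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-master
  namespace: dev-infra
spec:
  serviceAccountName: jenkins   # token gets mounted at /var/run/secrets/kubernetes.io/serviceaccount/token
  containers:
  - name: jenkins
    image: jenkins/jenkins:lts
    ports:
    - containerPort: 8080   # web UI
    - containerPort: 50000  # JNLP agent port
With the token mounted automatically, the plugin's "Kubernetes Service Account" credential type should be able to pick it up without pasting any secrets into Jenkins.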

The Kubernetes plugin for Jenkins reads the file /var/run/secrets/kubernetes.io/serviceaccount/token. Please check whether your Jenkins pod has it. The service account should have permissions targeting pods in the appropriate namespace.
In fact, we are running Jenkins outside a Kubernetes 1.9 cluster. We simply took the default service account token (from the default namespace) and put it in that file on the Jenkins master. After a restart, the Kubernetes Service Account credential type became visible.
We do have a role and rolebinding though:
kubectl create role jenkins --verb=get,list,watch,create,patch,delete --resource=pods
kubectl create rolebinding jenkins --role=jenkins --serviceaccount=default:default
In our case, Jenkins is configured to spin up slave pods in the default namespace. So this combination works.
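Roughly, the "put the token in that file" step described above could be scripted like this, assuming the default service account in the default namespace (your token secret name will differ):
# On the Jenkins master (outside the cluster): create the path the plugin reads
sudo mkdir -p /var/run/secrets/kubernetes.io/serviceaccount
# Decode the service account token and place it there, then restart Jenkins
SECRET=$(kubectl -n default get sa default -o jsonpath='{.secrets[0].name}')
kubectl -n default get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d \
  | sudo tee /var/run/secrets/kubernetes.io/serviceaccount/token > /dev/null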
More questions (similar):
Can I use Jenkins kubernetes plugin when Jenkins server is outside of a kubernetes cluster?

After some digging it appears that the easiest way to go (without giving extra permissions to the default service account for the namespace) is to:
kubectl -n <your-namespace> create sa jenkins
kubectl create clusterrolebinding jenkins --clusterrole cluster-admin --serviceaccount=<your-namespace>:jenkins
kubectl get -n <your-namespace> sa/jenkins --template='{{range .secrets}}{{ .name }} {{end}}' | xargs -n 1 kubectl -n <your-namespace> get secret --template='{{ if .data.token }}{{ .data.token }}{{end}}' | head -n 1 | base64 -d -
It seems you can store this token as a Secret text credential in Jenkins and the plugin is able to pick it up.
Another advantage of this approach, compared to overwriting the default service account as mentioned above, is that you can have a secret per cluster - meaning you can use one Jenkins to connect to, for example, dev -> quality -> prod namespaces or clusters with separate accounts.
Please feel free to contribute, if you have a better way to go.
Regards,
Pavel
For more details you can check:
- https://gist.github.com/lachie83/17c1fff4eb58cf75c5fb11a4957a64d2
- https://github.com/openshift/origin/issues/6807

Related

Can I use `envoyExtAuthzHttp` with Anthos for OIDC?

Currently I am handling OIDC using OAuth2-proxy and Istio. We would now like to upgrade to Anthos since we are mainly on GCP. Everything works but I need to configure envoyExtAuthzHttp. Previously I would run kubectl edit configmap istio -n istio-system and add the following...
extensionProviders:
- name: oauth2-proxy
  envoyExtAuthzHttp:
    service: http-oauth-proxy.istio-system.svc.cluster.local
    port: 4180
    includeRequestHeadersInCheck: ['cookie']
    headersToUpstreamOnAllow: ['authorization']
    headersToDownstreamOnDeny: ['content-type', 'set-cookie']
However, ASM does not seem to install that config map...
Error from server (NotFound): configmaps "istio" not found
I noticed there is an istio-asm-managed config map, so I tried adding the config to that, but then I am not sure how to restart ASM, as the command I am used to isn't working: kubectl rollout restart deployment/istiod -n istio-system.
When I try to go to the site instead of being redirected I see...
RBAC: access denied
What worked for me, after studying what asmcli does when you follow the migration steps here, is setting this configmap in istio-system before enabling the Anthos Service Mesh:
apiVersion: v1
data:
  mesh: |
    extensionProviders:
    ...<your settings here>...
kind: ConfigMap
metadata:
  name: istio-asm-managed-rapid
  namespace: istio-system
I have not verified whether it was actually necessary to do this before enabling the ASM, but that is how I did it.
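For completeness, applying it is an ordinary kubectl apply; the file name below is just an example:
kubectl apply -n istio-system -f istio-asm-managed-rapid-configmap.yaml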

How to fetch secrets from vault to my jenkins configuration as code installation with helm?

I am trying to deploy Jenkins using Helm with JCasC to get Vault secrets. I am using a local minikube to create my k8s cluster and a local Vault instance on my machine (not in the k8s cluster).
Even though I am using initContainerEnv and ContainerEnv, I am not able to reach the Vault values. For the CASC_VAULT_TOKEN value I am using the Vault root token.
This is the helm command I run locally:
helm upgrade --install -f values.yml mijenkins jenkins/jenkins
And here is my values.yml file:
controller:
  installPlugins:
    # need to add this configuration-as-code due to a known jenkins issue: https://github.com/jenkinsci/helm-charts/issues/595
    - "configuration-as-code:1414.v878271fc496f"
    - "hashicorp-vault-plugin:latest"
  # passing initial environments values to docker basic container
  initContainerEnv:
    - name: CASC_VAULT_TOKEN
      value: "my-vault-root-token"
    - name: CASC_VAULT_URL
      value: "http://localhost:8200"
    - name: CASC_VAULT_PATHS
      value: "cubbyhole/jenkins"
    - name: CASC_VAULT_ENGINE_VERSION
      value: "2"
  ContainerEnv:
    - name: CASC_VAULT_TOKEN
      value: "my-vault-root-token"
    - name: CASC_VAULT_URL
      value: "http://localhost:8200"
    - name: CASC_VAULT_PATHS
      value: "cubbyhole/jenkins"
    - name: CASC_VAULT_ENGINE_VERSION
      value: "2"
  JCasC:
    configScripts:
      here-is-the-user-security: |
        jenkins:
          securityRealm:
            local:
              allowsSignup: false
              enableCaptcha: false
              users:
                - id: "${JENKINS_ADMIN_ID}"
                  password: "${JENKINS_ADMIN_PASSWORD}"
And in my local vault I can see/reach values:
> vault kv get cubbyhole/jenkins
============= Data =============
Key                       Value
---                       -----
JENKINS_ADMIN_ID          alan
JENKINS_ADMIN_PASSWORD    acosta
Do any of you have an idea what I could be doing wrong?
I haven't used Vault with Jenkins, so I'm not exactly sure about your particular situation, but I am very familiar with how finicky the Jenkins Helm chart is, and I was able to configure my securityRealm (with the Google Login plugin) by first creating a k8s secret with the values needed:
kubectl create secret generic googleoauth --namespace jenkins \
--from-literal=clientid=${GOOGLE_OAUTH_CLIENT_ID} \
--from-literal=clientsecret=${GOOGLE_OAUTH_SECRET}
then passing those values into helm chart values.yml via:
controller:
  additionalExistingSecrets:
    - name: googleoauth
      keyName: clientid
    - name: googleoauth
      keyName: clientsecret
then reading them into JCasC like so:
...
JCasC:
  configScripts:
    authentication: |
      jenkins:
        securityRealm:
          googleOAuth2:
            clientId: ${googleoauth-clientid}
            clientSecret: ${googleoauth-clientsecret}
In order for that to work the values.yml also needs to include the following settings:
serviceAccount:
  name: jenkins
rbac:
  readSecrets: true # allows jenkins serviceAccount to read k8s secrets
Note that I am running Jenkins as a k8s serviceAccount called jenkins in the namespace jenkins.
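For reference, a minimal sketch of creating that namespace and service account by hand (the Jenkins Helm chart can typically create the service account for you as well):
kubectl create namespace jenkins
kubectl -n jenkins create serviceaccount jenkins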
After debugging my Jenkins installation I figured out that the main issue was neither my values.yml nor my JCasC integration, as I was able to see the ContainerEnv values when going inside my Jenkins pod with:
kubectl exec -ti mijenkins-0 -- sh
So I needed to expose my Vault server so that my Jenkins pod is able to reach it; I used this Vault tutorial to achieve it. In brief, instead of the usual:
vault server -dev
We need to use:
vault server -dev -dev-root-token-id root -dev-listen-address 0.0.0.0:8200
Then we need to export an environment variable for the vault CLI to address the Vault server.
export VAULT_ADDR=http://0.0.0.0:8200
After that, we need to determine the Vault address our Jenkins should point to. To do that we start a minikube SSH session:
minikube ssh
Within this SSH session, retrieve the value of the Minikube host.
$ dig +short host.docker.internal
192.168.65.2
After retrieving the value, we are going to retrieve the status of the Vault server to verify network connectivity.
$ dig +short host.docker.internal | xargs -I{} curl -s http://{}:8200/v1/sys/seal-status
And now we can connect our Jenkins pod with our Vault; we just need to change CASC_VAULT_URL to http://192.168.65.2:8200 in our values.yml file, like this:
- name: CASC_VAULT_URL
  value: "http://192.168.65.2:8200"

Run jenkins slave nodes on an eks cluster by kubernetes plugin

I am using the Jenkins Kubernetes plugin and I have been trying to connect to an EKS cluster via Jenkins. My Jenkins master is running on a standalone server and EKS is running separately; I want the slave nodes to be provisioned as pods in the cluster. However, when I use the Kubernetes plugin to connect to the cluster using the kubeconfig file, it gives me this error:
Error testing connection : Failure executing: GET at: https://*******/api/v1/namespaces/default/pods. Message: Forbidden! User arn:aws:eks:eu-west-1:******:cluster/******* doesn't have permission. pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" in the namespace "default".
I have tried creating roles and a rolebinding, which are given below, but I am still unable to provision pods on the EKS cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default # your namespace
subjects:
- kind: User
  name: system:anonymous # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role # this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
This is the role binding I created, and this is the error I am still getting.
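For reference, the pod-reader Role the binding points to is not shown above; a minimal sketch of what it might look like (the verbs are an assumption based on what the Kubernetes plugin needs to manage agent pods):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "create", "delete"]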

Spinnaker GateWay EndPoint

I'm working with Spinnaker to create a new CD pipeline.
I've deployed Halyard in a Docker container on my computer, and from it deployed Spinnaker to Google Kubernetes Engine.
After that, I prepared a new Ingress yaml file, shown below.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-cloud
  namespace: spinnaker
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: spin-deck
          servicePort: 9000
After accessing the Spinnaker UI via a public IP, I got the error shown below.
Error fetching applications. Check that your gate endpoint is accessible.
After that, I checked the docs and ran the commands shown below.
I've checked the service data on my K8S cluster.
spin-deck NodePort 10.11.245.236 <none> 9000:32111/TCP 1h
spin-gate NodePort 10.11.251.78 <none> 8084:31686/TCP 1h
For UI
hal config security ui edit --override-base-url "http://spin-deck.spinnaker:9000"
For API
hal config security api edit --override-base-url "http://spin-gate.spinnaker:8084"
After running these commands and redeploying spinnaker, the error repeated itself.
How can I solve the problem of accessing the spinnaker gate from the UI?
--override-base-url should be populated without the port.
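For example, a sketch of the corrected commands, using the same hostnames as above but without the port:
hal config security ui edit --override-base-url "http://spin-deck.spinnaker"
hal config security api edit --override-base-url "http://spin-gate.spinnaker"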

Pulling images from private registry in Kubernetes

I have built a 4-node Kubernetes cluster running multi-container pods, all running on CoreOS. The images come from public and private repositories. Right now I have to log into each node and manually pull down the images each time I update them. I would like to be able to pull them automatically.
I have tried running docker login on each server and putting the .dockercfg file in /root and /core.
I have also done the above with .docker/config.json.
I have added a secret to the kube master and added the following to the pod configuration file:
imagePullSecrets:
- name: docker.io
When I create the pod I get the error message:
Error: image <user/image>:latest not found
If I log in and run docker pull it will pull the image. I have tried this using docker.io and quay.io.
To add to what @rob said: as of Docker 1.7, the use of .dockercfg has been deprecated and Docker now uses a ~/.docker/config.json file. There is support for this type of secret in kube 1.1, but you must create it using a different key/type configuration in the yaml:
First, base64 encode your ~/.docker/config.json:
cat ~/.docker/config.json | base64 -w0
Note that the base64 encoding should appear on a single line so with -w0 we disable the wrapping.
Next, create a yaml file:
my-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
$ kubectl create -f my-secret.yaml && kubectl get secrets
NAME                  TYPE                                  DATA
default-token-olob7   kubernetes.io/service-account-token   2
registrypullsecret    kubernetes.io/dockerconfigjson        1
Then, in your pod's yaml (or in the pod template of a replication controller) you need to reference registrypullsecret:
apiVersion: v1
kind: Pod
metadata:
  name: my-private-pod
spec:
  containers:
  - name: private
    image: yourusername/privateimage:version
  imagePullSecrets:
  - name: registrypullsecret
If you need to pull an image from a private Docker Hub repository, you can use the following.
Create your secret key
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
Then add the newly created key to your Kubernetes service account.
Retrieve the current service account
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Edit sa.yaml and add the imagePullSecrets entry after secrets:
imagePullSecrets:
- name: myregistrykey
Update the service account
kubectl replace serviceaccount default -f ./sa.yaml
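As a shortcut, the same change can usually be applied with a single kubectl patch instead of the get/edit/replace round trip (secret name as above):
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'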
I can confirm that imagePullSecrets was not working for me with a deployment, but you can
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
kubectl edit serviceaccounts default
Add
imagePullSecrets:
- name: myregistrykey
to the end, after secrets, then save and exit.
And it works. Tested with Kubernetes 1.6.7.
Kubernetes supports a special type of secret that you can create that will be used to fetch images for your pods. More details here.
For centos7, the docker config file is under /root/.dockercfg
echo $(cat /root/.dockercfg) | base64 -w 0
Copy and paste the result into a secret YAML based on the old format:
apiVersion: v1
kind: Secret
metadata:
  name: docker-secret
type: kubernetes.io/dockercfg
data:
  .dockercfg: <YOUR_BASE64_JSON_HERE>
It worked for me; hope it can help you too.
The easiest way to create the secret with the same credentials as your docker configuration is with:
kubectl create secret generic myregistry --from-file=.dockerconfigjson=$HOME/.docker/config.json
This already encodes data in base64.
If you can download the images with docker, then kubernetes should be able to download them too. But it is required to add this to your kubernetes objects:
spec:
  template:
    spec:
      imagePullSecrets:
      - name: myregistry
      containers:
        # ...
Where myregistry is the name given in the previous command.
Go the easy way, but do not forget to define --type and to add the secret to the proper namespace:
kubectl create secret generic YOURS-SECRET-NAME \
--from-file=.dockerconfigjson=$HOME/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
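A quick sanity check that the secret ended up with the right type (placeholder name as above):
kubectl get secret YOURS-SECRET-NAME -o jsonpath='{.type}'
# should print: kubernetes.io/dockerconfigjson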
