Connect Kubernetes with Jenkins pipeline

I am trying to deploy a container using Helm from a Jenkins pipeline. I have installed the Kubernetes plugin for Jenkins and given it the URL of my locally running Kubernetes cluster, plus the config file as a credential. When I do 'Test connection', it shows 'Connected to Kubernetes 1.16+'.
But when I run the helm install command from the pipeline, it gives this error:
Error: Kubernetes cluster unreachable: the server could not find the requested resource
Note: I am able to do all operations using the CLI, and also from the Jenkins pipeline by using withCredentials and passing the credential file variable name (created in Jenkins credentials). I just want to do this without wrapping it inside withCredentials.
Both Jenkins and Kubernetes are running separately on Windows 10. Please help.

Helm uses the kubectl config file. I'm using a step like this:
steps {
  container('helm') {
    withCredentials([file(credentialsId: 'project-credentials', variable: 'PULL_KEYFILE')]) {
      sh """
        gcloud auth activate-service-account --key-file=${PULL_KEYFILE} --project project-name
        gcloud container clusters get-credentials cluster-name --zone us-east1
        kubectl create namespace ${NAMESPACE} --dry-run -o yaml | kubectl apply -f -
      """
      sh 'helm upgrade --install release-name .'
    }
  }
}
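If the goal is to avoid wrapping every step in withCredentials, one alternative is to bind the kubeconfig once in the pipeline's environment block, since both kubectl and Helm honour the KUBECONFIG environment variable. This is only a minimal sketch, assuming the kubeconfig is stored as a Jenkins secret file credential with ID 'kubeconfig' (the ID is an assumption, substitute your own):
pipeline {
  agent any
  environment {
    // credentials() on a secret file credential yields a path to a temporary copy of the kubeconfig
    KUBECONFIG = credentials('kubeconfig')
  }
  stages {
    stage('Deploy') {
      steps {
        // helm and kubectl both read KUBECONFIG, so no withCredentials wrapper is needed here
        sh 'helm upgrade --install release-name ./chart'
      }
    }
  }
}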

Related

Execute Skaffold deployment using Google Cloud Build?

I developed the YAML files for Kubernetes and Skaffold, and the Dockerfile. My deployment with Skaffold works well on my local machine.
Now I need to implement the same deployment on my k8s cluster in my Google Cloud project, triggered by new tags in a GitHub repository. I found that I have to use Google Cloud Build, but I don't know how to execute Skaffold from the cloudbuild.yaml file.
There is a skaffold image in https://github.com/GoogleCloudPlatform/cloud-builders-community
To use it, follow these steps:
Clone the repository
git clone https://github.com/GoogleCloudPlatform/cloud-builders-community
Go to the skaffold directory
cd cloud-builders-community/skaffold
Build the image:
gcloud builds submit --config cloudbuild.yaml .
Then, in your cloudbuild.yaml, you can add a step based on this one:
- id: 'Skaffold run'
  name: 'gcr.io/$PROJECT_ID/skaffold:alpha' # https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/skaffold
  env:
    - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
    - 'CLOUDSDK_CONTAINER_CLUSTER=[YOUR_CLUSTER_NAME]'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      gcloud container clusters get-credentials [YOUR_CLUSTER_NAME] --region us-central1-a --project [YOUR_PROJECT_NAME]
      if [ "$BRANCH_NAME" == "master" ]
      then
        skaffold run
      fi
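Since the question's goal was to trigger on new Git tags rather than on a branch, a hedged variant of that script block (assuming the build is started by a tag-based trigger, so Cloud Build's built-in $TAG_NAME substitution is populated) could gate on the tag instead of $BRANCH_NAME:
gcloud container clusters get-credentials [YOUR_CLUSTER_NAME] --region us-central1-a --project [YOUR_PROJECT_NAME]
# $TAG_NAME is empty for non-tag builds, so skaffold only runs for new tags
if [ -n "$TAG_NAME" ]
then
  skaffold run
fi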

Jenkins on GCE - Deploying Google Cloud Function to different GCP project

I have Jenkins installed on a GCE VM (Debian) in project xxxx.
I need to deploy a Cloud Function, with its source in a Google Cloud Source Repository, to project yyyy.
I can do it successfully from the shell of the Jenkins VM.
In order to deploy the function from a pipeline, I followed these steps:
Create a service account in the yyyy project.
Upload the key (JSON file) to the VM.
Activate the account: gcloud auth activate-service-account yyyy-sa@yyyy.iam.gserviceaccount.com --key-file jenkins-test.json
Define the pipeline:
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        sh 'gcloud config set account yyyy@yyyy.iam.gserviceaccount.com'
        sh '''
          gcloud functions deploy helloWorld --region=us-central1 --runtime nodejs8 --trigger-http --project yyyy \
            --source https://source.developers.google.com/projects/xxxx/repos/test1/moveable-aliases/master/paths/HelloWorld/
        '''
      }
    }
  }
}
Anyway, I receive:
gcloud functions deploy helloWorld --region=us-central1 --runtime nodejs8 --trigger-http \
  --project yyyy --source https://source.developers.google.com/projects/xxxx/repos/test1/moveable-aliases/master/paths/HelloWorld/
ERROR: (gcloud.functions.deploy) Your current active account [yyyy@yyyy.iam.gserviceaccount.com] does not have any valid credentials
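One likely cause is that gcloud credentials are stored per user (under that user's home directory), so an account activated from an interactive shell on the VM is not visible to the user the Jenkins service runs as. A minimal sketch of activating the service account inside the pipeline itself, assuming the JSON key is stored as a Jenkins file credential with ID 'yyyy-sa-key' (a hypothetical ID):
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        withCredentials([file(credentialsId: 'yyyy-sa-key', variable: 'SA_KEYFILE')]) {
          sh '''
            # Activate the service account for the user the Jenkins agent actually runs as
            gcloud auth activate-service-account yyyy-sa@yyyy.iam.gserviceaccount.com --key-file "$SA_KEYFILE"
            gcloud functions deploy helloWorld --region=us-central1 --runtime nodejs8 --trigger-http --project yyyy \
              --source https://source.developers.google.com/projects/xxxx/repos/test1/moveable-aliases/master/paths/HelloWorld/
          '''
        }
      }
    }
  }
}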

Jenkins sh script hangs when run in specific container

I'm trying to automate deployments using the official ArgoCD docker image (https://hub.docker.com/r/argoproj/argocd/dockerfile)
I've created a declarative Jenkins pipeline using the Kubernetes plugin for the agents, defining the pod with YAML. The container definition looks like this:
pipeline {
  agent {
    kubernetes {
      yaml """
kind: Pod
metadata:
  name: agent
spec:
  containers:
    - name: maven
      image: maven:slim
      command:
        - cat
      tty: true
      volumeMounts:
        - name: jenkins-maven-cache
          mountPath: /root/.m2/repository
    - name: argocd
      image: argoproj/argocd:latest
      command:
        - cat
      tty: true
...
I'm trying to run commands inside that container; that step in the pipeline looks like this:
stage('Build') {
  steps {
    container('maven') {
      sh 'echo testing' // this works just fine
    }
  }
}
stage('Deploy') {
  steps {
    container('argocd') {
      sh "echo testing" // this does not work
      // more deploy scripts here, once sh works
    }
  }
}
So I have two containers: one where the sh step works just fine and another where it doesn't. The sh step in the "argocd" container just hangs for 5 minutes and then Jenkins kills it; the exit message is:
process apparently never started in /home/jenkins/agent/workspace/job-name#tmp/durable-46cefcae (running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
I can't even echo a simple string in this particular container.
It works fine in other containers, such as the official Maven image from Docker Hub that I use to build the Spring Boot application. I can also run commands directly in the argocd container manually from the command line with docker exec, but Jenkins just won't run them in the pipeline for some reason.
What could it be?
I am running the latest version (1.33) of the Durable Task plugin.
Update:
It turns out that the argoproj/argocd:latest image (the Argo CD continuous deployment tool) does not include any commands other than argocd, so the issue was with the container image I tried to use and not with Jenkins itself. My solution was to install the Argo CD CLI into a custom Docker image and use that instead of the official one.
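As an illustration of that workaround, a custom image could look roughly like the sketch below. This is not the author's actual Dockerfile; the base image and the Argo CD release URL pattern are assumptions to verify against the Argo CD releases page, and vX.Y.Z is a placeholder version.
FROM debian:stable-slim
# curl is needed to download the CLI; ca-certificates for HTTPS
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates && rm -rf /var/lib/apt/lists/*
# Install the standalone Argo CD CLI binary (verify the URL and version for your setup)
RUN curl -sSL -o /usr/local/bin/argocd \
      https://github.com/argoproj/argo-cd/releases/download/vX.Y.Z/argocd-linux-amd64 && \
    chmod +x /usr/local/bin/argocd
# Keep the container alive so Jenkins can exec pipeline steps into it
CMD ["cat"]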
I've just encountered a similar issue with a custom Docker image I created myself.
It turns out I was using USER nobody in the Dockerfile of that image, and somehow that made the Jenkins agent pod unable to run cat or any other shell command from my pipeline script. Running that specific container as the root user worked for me.
So in your case I would add a securityContext with runAsUser: 0, like below.
...
- name: argocd
  image: argoproj/argocd:latest
  command:
    - cat
  tty: true
  securityContext:
    runAsUser: 0
...
Kubernetes reference: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
If the issue is Jenkins related, here are some things that may help to solve the problem:
Issues with the working directory: if you updated Jenkins from an older version, the workdir used to be /home/jenkins, while in recent versions it should be /home/jenkins/agent;
or, if you are running it on Windows, the path should start with C:\dir and not with /dir.
You can try a new clean install with apt-get --purge remove jenkins and then apt-get install jenkins.
This is not your case, as you run the latest version of the Durable Task plugin, but for other readers' reference, versions prior to 1.28-1.30 caused the same issue.
If your Jenkins install is clean, the issue should be investigated in a different way: it seems the container is not returning an exit code to the sh step and/or the script is executed in a different shell.
I would try creating a small shell script in the working directory of the container:
#!/bin/bash
echo "testing"
echo $?
and try to run it with source my_script.sh
or with bash my_script.sh.
$? is the exit code of the last shell operation; printing it makes sure your script terminated correctly. Running the script with source runs it in the same shell that calls it, so its shell variables are accessible; running it with bash runs it in a subshell instead.

docker buildkit not supported by daemon in AWS EKS kubernetes cluster

I am using Docker BuildKit to build a Docker image for each microservice.
./build.sh
export DOCKER_BUILDKIT=1
# ....
docker build -t ....
# ...
This works on my machine with Docker 18.09.2.
However, it does not work with Jenkins, which I set up as follows:
EKS is provisioned with a Terraform module
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "5.0.0"
# ....
}
Jenkins is deployed on EKS (v1.12.10-eks-ffbd9, docker://18.6.1) via this Helm chart.
Jenkins plugins, as defined in the values of the Helm release:
kubernetes:1.18.1
workflow-job:2.33
workflow-aggregator:2.6
credentials-binding:1.19
git:3.11.0
blueocean:1.19.0
bitbucket-oauth:0.9
The Jenkins pipeline is declarative, and it uses a pod template where the container image is docker:18-dind and the container name is dind.
This is my Jenkinsfile:
pipeline {
  agent {
    kubernetes {
      defaultContainer 'jnlp'
      yamlFile 'jenkins-pod.yaml'
    }
  }
  stages {
    stage('Build Backends') {
      steps {
        container('dind') {
          sh 'chmod +x *sh'
          sh './build.sh -t=dev'
        }
        containerLog 'dind'
      }
    }
  }
}
When Jenkins executes this pipeline, it shows this error:
buildkit not supported by daemon
I am not sure which piece of software I should upgrade to make Docker BuildKit work, and to which version:
the Terraform EKS module, which is now at 5.0.0?
Or
the docker:18-dind image, which serves as the environment of the ephemeral Jenkins agents?
Or
the Jenkins plugin kubernetes:1.18.1?
According to the docker-ce sources, there are two requirements for the isSessionSupported check that starts a BuildKit session to succeed:
dockerCli.ServerInfo().HasExperimental
versions.GreaterThanOrEqualTo(dockerCli.Client().ClientVersion(), "1.31")
So:
check the version of your docker-cli library,
and whether the HasExperimental option is enabled.
To check whether it has experimental support, run:
docker version -f '{{.Server.Experimental}}'
Docker BuildKit support came out of experimental in 18.09, so you may need to upgrade Docker inside EKS:
EKS (v1.12.10-eks-ffbd9, docker://18.6.1)
Or perhaps you have an old dind image (the 18-dind tag should be new enough, but an older version of this tag pointing to 18.06 or 18.03 would not be). You can try 18.09-dind and 19-dind, which should both work if the build is actually happening inside dind.
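A quick way to check both requirements mentioned above from inside the dind container is the sketch below; the --format fields are standard docker version template fields:
# Daemon version and whether experimental features are enabled
docker version --format '{{.Server.Version}} {{.Server.Experimental}}'
# Client API version (must be >= 1.31 for BuildKit sessions)
docker version --format '{{.Client.APIVersion}}'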

Managing multi-pod integration tests with kubernetes and jenkins

I am trying to set up a testing framework for my Kubernetes cluster using Jenkins and the Jenkins Kubernetes plugin.
I can get jenkins to provision pods and run basic unit tests, but what is less clear is how I can run tests that involve coordination between multiple pods.
Essentially I want to do something like this:
podTemplate(label: 'pod1', containers: [containerTemplate(...)]) {
  node('pod1') {
    container('container1') {
      // start service 1
    }
  }
}
podTemplate(label: 'pod2', containers: [containerTemplate(...)]) {
  node('pod2') {
    container('container2') {
      // start service 2
    }
  }
  stage('Run test') {
    node {
      sh 'run something that causes service 1 to query service 2'
    }
  }
}
I have two main problems:
Pod lifecycle:
As soon as the block after the podTemplate exits, the pods are terminated. Is there an accepted way to keep the pods alive until a specified condition has been met?
ContainerTemplate from a Docker image:
I am using a Docker image to provision the containers inside each Kubernetes pod; however, the files that should be inside those images do not seem to be visible/accessible inside the 'container' blocks, even though the environment and installed dependencies are correct for the repo. How do I actually get the service defined in the Docker image to run in a Jenkins-provisioned pod?
It has been some time since I asked this question, and in the meantime I have learned some things that let me accomplish what I was asking about, though maybe not as neatly as I would have liked.
The solution to multi-service tests ended up being simply using a pod template based on the Google Cloud SDK image, and assigning that worker a service-account credential plus a secret key so that it can run kubectl commands on the cluster.
Dockerfile for the worker; replace the "X"s with the desired versions:
FROM google/cloud-sdk:alpine
# Install some utility packages.
RUN apk add --no-cache \
    git \
    curl \
    bash \
    openssl
# Used to install a custom version of kubectl.
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/vX.XX.X/bin/linux/amd64/kubectl && \
    chmod +x ./kubectl && \
    mv ./kubectl /usr/local/bin/kubectl
# Helm to manage deployments.
RUN curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh && \
    chmod 700 get_helm.sh && ./get_helm.sh --version vX.XX.X
Then, in the Groovy pipeline:
pipeline {
  agent {
    kubernetes {
      label 'kubectl_helm'
      defaultContainer 'jnlp'
      serviceAccount 'helm'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: gcloud
      image: your-docker-repo-here
      command:
        - cat
      tty: true
"""
    }
  }
  environment {
    GOOGLE_APPLICATION_CREDENTIALS = credentials('google-creds')
  }
  stages {
    stage('Do something') {
      steps {
        container('gcloud') {
          sh 'kubectl apply -f somefile.yaml'
          sh 'helm install something somerepo/somechart'
        }
      }
    }
  }
}
Now that I can access both helm and kubectl commands, I can bring pods or services up and down at will. It still doesn't solve the problem of using the internal "context" of those containers to access files, but at least it gives me a way to run integration tests.
NOTE: For this to work properly, you will need a Kubernetes service account matching the serviceAccount name used in the pod template, and credentials stored in the Jenkins credentials store. For the helm commands to work, you will need Tiller installed on your Kubernetes cluster. Also, do not change the name of the environment key GOOGLE_APPLICATION_CREDENTIALS, as the gsutil tools look for that environment variable.
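With that pod in place, a test stage can bring a dependent service up for the duration of the test and tear it down afterwards. The following is only a sketch; the chart name, release name, and test command are hypothetical placeholders, and the exact helm flags depend on your Helm version:
stage('Integration test') {
  steps {
    container('gcloud') {
      // Bring up the dependent service (placeholder release/chart names)
      sh 'helm install service-two somerepo/service-two-chart --wait'
      // Run the test that makes service 1 talk to service 2 (placeholder command)
      sh './run_integration_tests.sh'
    }
  }
  post {
    // Tear the service down whether the test passed or failed
    always {
      container('gcloud') {
        sh 'helm delete service-two || true'
      }
    }
  }
}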
