I am trying to deploy to a k8s cluster using Helm 3 and Jenkins. Jenkins and k8s run on different servers. I merged the kubeconfig files so that all the information is in one config file in the .kube directory. I would like to deploy my app to the related environment and namespace according to the GIT_BRANCH value. I have two questions about the script below.
1. What is the best way to store the k8s cluster credentials and use them in the pipeline? I saw some plugins such as Kubernetes CLI, but I can't be sure whether they cover my requirement. If I use this plugin, should I store the kubeconfig file on the Jenkins machine manually, or does the plugin already handle this by uploading the config file?
2. Should I change anything in the script below to follow best practices?
stage('Deploy to dev') {
    steps {
        script {
            if (env.GIT_BRANCH.contains("dev")) {
                def namespace = "dev"
                def ENV = "development"
                withCredentials([file(credentialsId: ...)]) {
                    // change context to the related namespace
                    sh "kubectl config set-context \$(kubectl config current-context) --namespace=${namespace}"
                    // deploy with Helm
                    echo "Deploying"
                    sh "helm upgrade --install road-dashboard -f values.${ENV}.yaml --set tag=$TAG --namespace ${namespace}"
                }
            }
        }
    }
}
stage('Deploy to Test') {
    steps {
        script {
            if (env.GIT_BRANCH.contains("test")) {
                def namespace = "test"
                def ENV = "test"
                withCredentials([file(credentialsId: ...)]) {
                    // change context to the related namespace
                    sh "kubectl config set-context \$(kubectl config current-context) --namespace=${namespace}"
                    // deploy with Helm
                    echo "Deploying"
                    sh "helm upgrade --install road-dashboard -f values.${ENV}.yaml --set tag=$TAG --namespace ${namespace}"
                }
            }
        }
    }
}
stage('Deploy to Production') {
    when {
        anyOf {
            environment name: 'DEPLOY_TO_PROD', value: 'true'
        }
    }
    steps {
        script {
            DEPLOY_PROD = false
            def namespace = "production"
            def ENV = "production"
            withCredentials([file(credentialsId: 'kube-config', variable: 'kubecfg')]) {
                // change context to the related namespace
                sh "kubectl config set-context \$(kubectl config current-context) --namespace=${namespace}"
                // deploy with Helm
                echo "Deploying to production"
                sh "helm upgrade --install road-dashboard -f values.${ENV}.yaml --set tag=$TAG --namespace ${namespace}"
            }
        }
    }
}
I have never tried this, but in theory the credentials variable is available as an environment variable. Try using KUBECONFIG as the variable name:
withCredentials([file(credentialsId: 'secret', variable: 'KUBECONFIG')]) {
    // change context to the related namespace
    sh "kubectl config set-context \$(kubectl config current-context) --namespace=${namespace}"
    // deploy with Helm
    echo "Deploying"
    sh "helm upgrade --install road-dashboard -f values.${ENV}.yaml --set tag=$TAG --namespace ${namespace}"
}
A workaround that worked for me:
withCredentials([file(credentialsId: 'k8s-dk-staging', variable: 'KUBECRED')]) {
    sh 'cat $KUBECRED > ~/.kube/config'
    sh './deploy-app.sh'
}
I don't like doing it that way; ideally I would use KUBECONFIG, but for now this is what works for me.
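A variant that should avoid overwriting ~/.kube/config is to point KUBECONFIG at the temporary credentials file for just that step (assuming deploy-app.sh only runs kubectl/helm, which both honor the KUBECONFIG environment variable):
withCredentials([file(credentialsId: 'k8s-dk-staging', variable: 'KUBECRED')]) {
    // kubectl and helm inside the script read the config from KUBECONFIG
    sh 'KUBECONFIG="$KUBECRED" ./deploy-app.sh'
}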
Related
I am running a Terraform job using a Jenkins pipeline. The Terraform refresh is taking too long (~10m); locally, using -parallelism=60, terraform runs much faster (~2.5m).
When running the same config through a Jenkins slave with parallelism, I don't see any improvement in running time.
Jenkins ver. 2.154
Jenkins Docker plugin 1.1.6
SSH Agent plugin 1.19
Flow: Jenkins master creates job -> Jenkins slave running Docker image of terraform
def terraformRun(String envName, String terraformAction, String dirName = 'env') {
    sshagent(['xxxxxxx-xxx-xxx-xxx-xxxxxxxx']) {
        withEnv(["ENV_NAME=${envName}", "TERRAFORM_ACTION=${terraformAction}", "DIR_NAME=${dirName}"]) {
            sh '''
                #!/bin/bash
                set -e
                ssh-keyscan -H "bitbucket.org" >> ~/.ssh/known_hosts
                AUTO_APPROVE=""
                echo terraform "${TERRAFORM_ACTION}" on "${ENV_NAME}"
                cd "${DIR_NAME}"
                export TF_WORKSPACE="${ENV_NAME}"
                echo "terraform init"
                terraform init -input=false
                echo "terraform refresh"
                terraform apply -refresh-only -auto-approve -parallelism=60 -var-file=tfvars/"${ENV_NAME}".tfvars -var-file=../variables.tfvars # Refresh is working but it seems to ignore parallelism
                echo "terraform ${TERRAFORM_ACTION}"
                if [ ${TERRAFORM_ACTION} = "apply" ]; then
                    AUTO_APPROVE="-auto-approve"
                fi
                terraform ${TERRAFORM_ACTION} -refresh=false -var-file=tfvars/"${ENV_NAME}".tfvars -var-file=../variables.tfvars ${AUTO_APPROVE}
                echo "terraform ${TERRAFORM_ACTION} on ${ENV_NAME} finished successfully."
            '''
        }
    }
}
I can't seem to get the variable glooNamespaceExists to actually print. I see in the console that the response from kubectl is there, but the variable seems to be null. I'm looking to skip this entire build stage if the namespace already exists.
stage('Setup Gloo Ingress Controller') {
    def glooNamespaceExists = sh(script: "kubectl get ns gloo-system -o jsonpath='{.status.phase}'")
    if (glooNamespaceExists != 'Active') {
        sh 'helm repo add gloo https://storage.googleapis.com/solo-public-helm'
        sh 'helm repo update'
        sh 'kubectl create namespace gloo-system'
        sh 'helm install gloo gloo/gloo --namespace gloo-system'
    }
}
EDIT: Console output of run:
[Pipeline] { (Setup Gloo Ingress Controller)
[Pipeline] sh
+ kubectl get ns gloo-system -o jsonpath={.status.phase}
Active
[Pipeline] sh
+ helm repo add gloo https://storage.googleapis.com/solo-public-helm
Please add returnStdout: true to your sh step so that it returns the command's exact output; without it, the step does not return the command output and your variable ends up null. Your command should look like this:
def glooNamespaceExists = sh(script: "kubectl get ns gloo-system -o jsonpath='{.status.phase}'", returnStdout: true)
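Note that the string returned with returnStdout: true may include a trailing newline, so it is usually safer to trim it before comparing, for example:
def glooNamespaceExists = sh(
    script: "kubectl get ns gloo-system -o jsonpath='{.status.phase}'",
    returnStdout: true
).trim()
// only set up Gloo when the namespace is not already Active
if (glooNamespaceExists != 'Active') {
    sh 'helm repo add gloo https://storage.googleapis.com/solo-public-helm'
    sh 'helm repo update'
    sh 'kubectl create namespace gloo-system'
    sh 'helm install gloo gloo/gloo --namespace gloo-system'
}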
I have been trying to deploy the Docker image built on Jenkins via Helm charts. I referred to a couple of documents, https://dev.to/sword-health/seamless-ci-cd-with-jenkins-helm-and-kubernetes-5e00
and https://cloudcompilerr.wordpress.com/2018/06/03/docker-jenkins-kubernetes-run-jenkins-on-kubernetes-cluster/, and managed to get to the point where the Docker image gets pushed to Docker Hub, but I get stuck at Helm.
I'm not sure what the error exactly is.
JENKINS ERROR
+ helm list
/var/lib/jenkins/workspace/01#tmp/durable-68e91f76/script.sh: 1: /var/lib/jenkins/workspace/01#tmp/durable-68e91f76/script.sh: helm: not found
PIPELINESCRIPT
pipeline {
    environment {
        registry = "hemanthpeddi/springboot"
        registryCredential = 'dockerhub'
    }
    agent any
    tools { maven "maven" }
    stages {
        stage('Cloning Git') {
            steps {
                git 'https://github.com/hrmanth/game-of-life.git'
            }
        }
        stage('Build') {
            steps {
                sh script: 'mvn clean package'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Deploy Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Remove Unused docker image') {
            steps {
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
        stage('Run Helm') {
            steps {
                script {
                    container('helm') {
                        sh "helm ls"
                    }
                }
            }
        }
    }
}
Is there any specific configuration that I'm missing before I can use Helm in Jenkins? I have configured my Kubernetes IP in the cloud configuration in Jenkins. Please help.
Plugins Installed
Kubernetes Plugin
Docker Plugin
You need Helm; it is not available by default. You could add Helm as a tool in Jenkins and use it.
https://www.jenkins.io/doc/book/pipeline/syntax/#tools
You can install Helm in the container itself by adding an extra stage:
stage("install helm"){
steps{
sh 'wget https://get.helm.sh/helm-v3.6.1-linux-amd64.tar.gz'
sh 'ls -a'
sh 'tar -xvzf helm-v3.6.1-linux-amd64.tar.gz'
sh 'sudo cp linux-amd64/helm /usr/bin'
sh 'helm version'
}
}
I am not so familiar with that, but when you use the "container('helm')" step, I think it refers to the Kubernetes Plugin.
So, reading the docs, I think that the "podTemplate" is missing in your configuration.
Thus, what you need to do is configure a Helm container in the "podTemplate" and give it the name "helm". You can try to use, for example, the "alpine/helm" image.
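For example, a minimal scripted-pipeline sketch of such a podTemplate could look like this (the image tag and the pod layout are illustrative, not taken from your setup):
podTemplate(containers: [
    containerTemplate(name: 'helm', image: 'alpine/helm:3.6.1', command: 'sleep', args: '9999999')
]) {
    node(POD_LABEL) {
        stage('Run Helm') {
            // runs inside the 'helm' container of the pod defined above
            container('helm') {
                sh 'helm ls'
            }
        }
    }
}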
See you later.
I have the following Jenkinsfile:
node {
    stage('Apply Kubernetes files') {
        withKubeConfig([credentialsId: 'jenkins-deployer', serverUrl: 'https://192.168.64.2:8443']) {
            sh 'kubectl apply -f '
        }
    }
}
While running it, I got "kubectl: not found". I installed the Kubernetes CLI plugin in Jenkins and generated the secret via kubectl create sa jenkins-deployer. What's wrong here?
I know this is a fairly old question, but I decided to describe an easy workaround that might be helpful.
To use the Kubernetes CLI plugin we need to have an executor with kubectl installed.
One possible way to get kubectl is to install it in the Jenkins pipeline, as in the snippet below.
NOTE: I'm using ./kubectl get pods to list all Pods in the default Namespace. Additionally, you may need to change the kubectl version (v1.20.5).
node {
    stage('List pods') {
        withKubeConfig([credentialsId: 'kubernetes-config']) {
            sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl"'
            sh 'chmod u+x ./kubectl'
            sh './kubectl get pods'
        }
    }
}
As a result, in the Console Output, we can see that it works as expected:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl
...
[Pipeline] sh
+ chmod u+x ./kubectl
[Pipeline] sh
+ ./kubectl get pods
NAME READY STATUS RESTARTS AGE
default-zhxwb 1/1 Running 0 34s
my-jenkins-0 2/2 Running 0 134m
You call kubectl from the shell script step. To be able to do that, the agent (node) executing the build needs to have kubectl available as an executable.
I would like to integrate our Jenkins and Kubernetes clusters, which run on different servers. I have 2 clusters, one for stage and one for production. I have already created 2 namespaces on the stage cluster to separate development and stage. I split my values.yaml as below.
values.dev.yaml
values.stage.yaml
values.prod.yaml
So, according to the GIT_BRANCH value, I would like to set the namespace variable and deploy via the helm install command. Under these circumstances,
my question is: what is the best way to connect to the 2 clusters in the Jenkinsfile for this condition, because for the dev and test namespaces I need one cluster, and for production I need to deploy to another cluster.
stage('deploy') {
    steps {
        script {
            if (env.GIT_BRANCH == "origin/master") {
                def namespace = "dev"
                sh "helm upgrade --install -f values.dev.yaml --namespace ${namespace}"
            } else if (env.GIT_BRANCH == "origin/test") {
                def namespace = "stage"
                sh "helm upgrade --install -f values.stage.yaml --namespace ${namespace}"
            } else {
                def namespace = "prod"
                sh "helm upgrade --install -f values.prod.yaml --namespace ${namespace}"
            }
        }
    }
}
You will need to create Jenkins secrets holding both kubeconfig files for your k8s clusters, and in the if statement you load the kubeconfig for your environment.
For example, using your code above:
stage('deploy') {
    steps {
        script {
            if (env.GIT_BRANCH == "origin/master") {
                def namespace = "dev"
                withCredentials([file(credentialsId: 'kubeconfig-dev', variable: 'config')]) {
                    sh """
                        export KUBECONFIG=\${config}
                        helm upgrade --install -f values.dev.yaml --namespace ${namespace}
                    """
                }
            } else if (env.GIT_BRANCH == "origin/test") {
                def namespace = "stage"
                withCredentials([file(credentialsId: 'kubeconfig-stage', variable: 'config')]) {
                    sh """
                        export KUBECONFIG=\${config}
                        helm upgrade --install -f values.stage.yaml --namespace ${namespace}
                    """
                }
            } else {
                def namespace = "prod"
                withCredentials([file(credentialsId: 'kubeconfig-prod', variable: 'config')]) {
                    sh """
                        export KUBECONFIG=\${config}
                        helm upgrade --install -f values.prod.yaml --namespace ${namespace}
                    """
                }
            }
        }
    }
}
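To avoid repeating the same block three times, one option is to factor the per-environment differences into a small helper; here is a minimal sketch (the helper name is illustrative, and the Helm release name and chart path are omitted here just as in the snippets above):
def deployTo(String namespace, String kubeconfigId, String valuesFile) {
    withCredentials([file(credentialsId: kubeconfigId, variable: 'config')]) {
        sh """
            export KUBECONFIG=\${config}
            helm upgrade --install -f ${valuesFile} --namespace ${namespace}
        """
    }
}

// branch-to-environment mapping, same as in the if/else above
if (env.GIT_BRANCH == "origin/master") {
    deployTo("dev", "kubeconfig-dev", "values.dev.yaml")
} else if (env.GIT_BRANCH == "origin/test") {
    deployTo("stage", "kubeconfig-stage", "values.stage.yaml")
} else {
    deployTo("prod", "kubeconfig-prod", "values.prod.yaml")
}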