Jenkins build failed because of no value in a variable - jenkins

I have a Jenkinsfile like the one below:
stage("Deploy artifact for k8s sync")
{
sh '''
ns_exists=`kubectl get ns | grep ${target_cluster}`
if [ -z "$ns_exists" ]
then
echo "No namespace ${consider_namespace} exists in the cluster ${target_cluster}"
echo "Creating namespace ${consider_namespace} in the cluster ${target_cluster}"
kubectl apply "some yaml file"
else
if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "serviceaccounts" ]
then
echo "Applying source serviceaccounts on target cluster ${target_cluster}"
kubectl "some yaml file"
fi
if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "secrets" ]
then
echo "Applying source secrets on target cluster ${target_cluster}"
kubectl "some yaml file"
fi
if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "configmaps" ]
then
echo "Applying source configmaps on target cluster ${target_cluster}"
kubectl apply -f ${BUILD_NUMBER}-${source_cluster}-${consider_namespace}-configmaps.yaml
fi
However, when I run it, it fails with the error below:
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy artefact for k8s sync)
[Pipeline] sh
+ kubectl get ns
+ grep test-central-eks
+ ns_exists=
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
I'm wondering how to resolve this, and why it fails in the first place.

As far as I can see, you have a couple of things that make this fail.
From your pipeline extract I can see that you are using variables that are not defined in the script, so I guess they are environment variables or come from a previous step. To have Groovy interpolate them, you need a triple-double-quoted string: https://groovy-lang.org/syntax.html#_triple_double_quoted_string
stage("Deploy artifact for k8s sync")
{
sh """
On the other hand, grep's exit code is 1 when there is no match. If you need the script to continue when there is no match, you can wrap grep in a command group with a true fallback:
ns_exists=`kubectl get ns | { grep ${target_cluster} || true; }`
Finally, a good way to catch these problems is to replay the pipeline with debugging enabled in the shell blocks by adding set -x at the beginning of the sh block.
Please note as well that stderr is not printed; if you need it, you have to redirect it to stdout and either read it via the returnStdout: true option of the Jenkins sh step or from a custom file where the output is redirected.
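Putting these points together, a minimal sketch of the corrected stage could look like this (assuming target_cluster and consider_namespace are environment variables or pipeline parameters, and with the yaml file name as a placeholder):
stage("Deploy artifact for k8s sync")
{
    // triple double quotes so Groovy interpolates ${target_cluster} and ${consider_namespace}
    sh """
        set -x
        # stderr redirected to stdout so kubectl errors show up in the log;
        # '|| true' keeps the script alive when grep finds no match
        ns_exists=\$(kubectl get ns 2>&1 | { grep ${target_cluster} || true; })
        if [ -z "\$ns_exists" ]
        then
            echo "Creating namespace ${consider_namespace} in the cluster ${target_cluster}"
            kubectl apply -f some-namespace.yaml   # placeholder, as in the question
        fi
    """
}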

Related

Retrieving secret from HashiCorp Vault in jenkins pipeline

I am trying to retrieve a HashiCorp Vault secret and use it in a Jenkins pipeline. I managed to connect to HashiCorp Vault, but the pipeline fails to retrieve the secret saved in Vault.
Pipeline output:
Started by user admin
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/test_pipeline
[Pipeline] {
[Pipeline] withVault
Retrieving secret: my.secrets/data/dev
Access denied to Vault Secrets at 'my.secrets/data/dev'
[Pipeline] {
[Pipeline] sh
+ echo
[Pipeline] }
[Pipeline] // withVault
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Pipeline:
// the key heslo exists in the my.secrets/data/dev path
node {
    def secrets = [
        [path: 'my.secrets/data/dev', engineVersion: 2, secretValues: [
            [envVar: 'value', vaultKey: 'heslo']
        ]]
    ]
    def configuration = [vaultUrl: 'http://10.47.0.235:8200/',
                         vaultCredentialId: 'b0467c75-24e4-4307-9a35-f7da364f6285',
                         engineVersion: 2]
    withVault([configuration: configuration, vaultSecrets: secrets]) {
        sh 'echo $value'
    }
}
my jenkins-policy.hcl file for approle method to access vault from jenkins:
path "my.secrets/data/dev" {
capabilities = [ "read" ]
}
Thank you in advance
Remove the "data" from the "path" definition:
path: 'my.secrets/dev'
You must use "data" in the policy path, but not when retrieving the secret.
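In other words, a sketch of the corrected secrets definition (the rest of the pipeline stays as posted):
def secrets = [
    // KV v2: the policy path keeps /data/, the retrieval path omits it
    [path: 'my.secrets/dev', engineVersion: 2, secretValues: [
        [envVar: 'value', vaultKey: 'heslo']
    ]]
]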

Need to change a shell command into groovy

I've managed to get which Docker images have been deployed, but it has to be written in Groovy.
I have the following:
sh script: '''
    export PATH=\"$PATH\":\"${WORKSPACE}\"
    for docker-image in interface data keycloak artifactory ; do
        DOCKERHOST=`echo ${DOCKERURL}/images-rancher/$docker-image | sed 's!^localhost://!!g'`
        DOCKERVERSION=`docker image ls ${DOCKERHOST} --format '{{ json .Tag }}' | head -1`
        echo "${DOCKERHOST} - ${DOCKERVERSION}"
    done
'''
Changing it into Groovy:
def image = [ "interface", "data" , "keycloak", "artifactory" ]
.
.
.
for docker-image in image
println docker-image
How would you put that in a Groovy script?
Thanks
Here's how you can get most of the way to using Groovy instead of bash. The doRegexManipulation() function is left as an exercise for you to implement.
Note that the docker image ls sh step is still required, and cannot be translated to "pure" Groovy.
withEnv(["PATH=${env.PATH}:${env.WORKSPACE}"]) {
def images = [ "interface", "data" , "keycloak", "artifactory" ]
for (String docker_image : images) {
def DOCKERHOST = doRegexManipulation("${DOCKERURL}/images-rancher/$docker_image")
def DOCKERVERSION = sh(
script: """docker image ls '${DOCKERHOST}' --format '{{ json .Tag }}' | head -1""",
returnStdout: true,
)
echo "${DOCKERHOST} - ${DOCKERVERSION}"
}​
}
If you wanted to, you can go one step further and replace the head -1 part with Groovy code, since that can be done in Groovy as well.
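For example, a hedged sketch of that variant (inside the same loop, under the same assumptions) might be:
// fetch all tags and take the first line in Groovy instead of piping through `head -1`
def tagsOutput = sh(
    script: """docker image ls '${DOCKERHOST}' --format '{{ json .Tag }}'""",
    returnStdout: true
).trim()
def DOCKERVERSION = tagsOutput ? tagsOutput.readLines().first() : ''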
The withEnv step is used to set environment variables for a block of Groovy code, thereby making those environment variables available to any child processes spawned in the block.

How to Access the Application after the kubernetes deployment

I'm new to Kubernetes. I'm trying to deploy an Angular application using Docker + Kubernetes; here is the Jenkins script below.
stage('Deploy') {
    container('kubectl') {
        withCredentials([kubeconfigFile(credentialsId: 'KUBERNETES_CLUSTER_CONFIG', variable: 'KUBECONFIG')]) {
            def kubectl = "kubectl --kubeconfig=${KUBECONFIG} --context=demo"
            echo 'deployment to PRERELEASE!'
            sh "kubectl config get-contexts"
            sh "kubectl -n demo get pods"
            sh "${kubectl} apply -f ./environment/pre-release -n=pre-release"
        }
    }
}
Please find the below jenkins outputs
/home/jenkins/agent/workspace/DevOps-CI_future-master-fix
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] container
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $KUBECONFIG
[Pipeline] {
[Pipeline] echo
deploy to deployment!!
[Pipeline] echo
deploy to PRERELEASE!
[Pipeline] sh
+ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* demo kubernetes kubernetes-admin demo
kubernetes-admin@kubernetes kubernetes kubernetes-admin
[Pipeline] sh
+ kubectl -n demo get pods
NAME READY STATUS RESTARTS AGE
worker-f99adee3-dedd-46ca-bc0d-6b24391e5865-qkd47-mwl3v 5/5 Running 0 26s
[Pipeline] sh
+ kubectl '--kubeconfig=****' '--context=demo' apply -f ./environment/pre-release '-n=pre-release'
deployment.apps/frontend-deploy unchanged
service/frontend unchanged
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS
Now the question is: after the deployment, I am not able to see the pods and the deployment on the master machine using the commands below. Can someone please help me with how to access the application after the successful deployment?
kubectl get pods
kubectl get services
kubectl get deployments
You're setting the namespace to pre-release when running "${kubectl} apply -f ./environment/pre-release -n=pre-release".
To get pods in this namespace, use: kubectl get pods -n pre-release.
Namespaces are a way to separate different virtual clusters inside your single physical Kubernetes cluster. See https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ for more detail.
You are creating the resources in a namespace called pre-release by passing the -n option when you run the following command.
kubectl '--kubeconfig=****' '--context=demo' apply -f ./environment/pre-release '-n=pre-release'
deployment.apps/frontend-deploy unchanged
You need to list the resources in the same namespace.
kubectl get pods -n pre-release
kubectl get services -n pre-release
kubectl get deployments -n pre-release
By default, kubectl performs the requested operation in the default namespace. If you want to set your current namespace to pre-release, so that you do not need to append -n pre-release to every kubectl command, you can run the following command:
kubectl config set-context --current --namespace=pre-release

Facing SSH connection issue during jenkins pipeline

I have 2 servers on AWS EC2. I want to deploy our Node.js application onto both instances.
The code below works fine if both instances are available.
node(label: 'test') {
    def sshConn = 'ssh -i /home/ec2-user/pem/ourpemfile.pem ec2-user@<IP for server1>'
    def sshConn1 = 'ssh -i /home/ec2-user/pem/ourpemfile.pem ec2-user@<IP for server2>'
    stage('Checkout from Github')
    {
        checkout([
            $class: 'GitSCM',
            *
            *
        ])
    }
    stage('Build for Node1')
    {
        echo "Starting to Build..."
        sh "$sshConn pm2 stop application || true"
    }
    stage('Deploy to Node1')
    {
        echo "Starting Deployment..."
    }
    stage('Build for Node2')
    {
        echo "Starting to Build..."
        sh "$sshConn1 pm2 stop application || true"
    }
    stage('Deploy to Node2')
    {
        echo "Starting Deployment..."
    }
}
But my use case is:
if one of the servers is stopped, the build job must still succeed and the application should be deployed on the available instance.
Currently, I am facing a timeout error if we stop server1 and run the Jenkins job.
Depends on your setup.
1) You can connect your nodes to Jenkins as agents via the ssh-slaves plugin.
Then you can run commands on your servers via:
node('node_label') {
    sh('any command here')
}
2) You can use the ssh-agent plugin; put your private key into Jenkins credentials.
3) Use retry:
retry(3) {
    // your code
}
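For the stated use case (the build must succeed even when one server is stopped), here is a minimal sketch combining these ideas; the host placeholders, credentials ID, and timeout value are illustrative assumptions, not part of the original setup:
node(label: 'test') {
    def servers = ['<IP for server1>', '<IP for server2>']   // hypothetical hosts
    for (String host : servers) {
        stage("Deploy to ${host}") {
            try {
                sshagent(credentials: ['deploy-dev']) {
                    // a short ConnectTimeout keeps a stopped instance from hanging the build
                    sh "ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no ec2-user@${host} 'pm2 stop application || true'"
                }
            } catch (err) {
                echo "Skipping ${host}: ${err.getMessage()}"
            }
        }
    }
}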
You can check the EC2 instances' states via AWS CLI commands and, depending on their states, run your deployment or not.
If you want to give it a shot, you'll have to declare your AWS credentials in Jenkins using the 'CloudBees AWS Credentials' plugin
and add something like this to your pipeline:
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                  accessKeyVariable: 'aV',
                  secretKeyVariable: 'sV',
                  credentialsId: 'id_of_your_credentials']]) {
    sh '''
        AWS_ACCESS_KEY_ID=${aV} \
        AWS_SECRET_ACCESS_KEY=${sV} \
        AWS_DEFAULT_REGION=us-east-1 \
        aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query "Reservations[*].Instances[?Tags[?Key == 'Name' && contains(Value, 'server1')]].[Tags[3].Value,NetworkInterfaces[0].PrivateIpAddress,InstanceId,State.Name]" --output text
    '''
}
Regarding the AWS CLI command:
I don't know how you manage your servers; I've assumed that you use a 'Name' tag to identify them.
Also, I think you should consider max's suggestion and use the SSH plugins for managing the configuration, credentials, etc.
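As a sketch of gating the deployment on that state check (the tag value, region, and credentials ID are assumptions carried over from the example above):
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                  accessKeyVariable: 'aV',
                  secretKeyVariable: 'sV',
                  credentialsId: 'id_of_your_credentials']]) {
    // query only the state of the instance tagged Name=server1
    def state = sh(
        script: '''AWS_ACCESS_KEY_ID=${aV} AWS_SECRET_ACCESS_KEY=${sV} AWS_DEFAULT_REGION=us-east-1 \
            aws ec2 describe-instances --filters "Name=tag:Name,Values=server1" \
            --query "Reservations[*].Instances[*].State.Name" --output text''',
        returnStdout: true
    ).trim()
    if (state == 'running') {
        sh "$sshConn pm2 stop application || true"   // proceed with the server1 deployment
    } else {
        echo "server1 is '${state}'; skipping its deployment"
    }
}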
Another option is using ssh-agent. You have to store the private keys in the credentials plugin (it is also possible to configure AWS secrets for that),
and then use it in your pipeline:
https://www.jenkins.io/doc/pipeline/steps/ssh-agent/
node {
    sshagent (credentials: ['deploy-dev']) {
        sh 'ssh -o StrictHostKeyChecking=no -l cloudbees 192.168.1.106 uname -a'
    }
}

Jenkinsfile: Curl logs from previous (still running) stage

I'm in the process of migrating from chained freestyle jobs to a pipeline defined in a Jenkinsfile.
My current pipeline executes 2 jobs in parallel: one creates a tunnel to a database (with a randomly generated port), and the next job needs that port number, so I perform a curl command that reads the console log of the create-db-tunnel job and stores the port number. The create-db-tunnel job needs to keep running, because the follow-up job connects to the database and takes a DB dump. This is the curl command I run in the second job; it returns the randomly generated port from the established DB tunnel:
Port=$(curl -u ${USERNAME}:${TOKEN} http://myjenkinsurl.com/job/create-db-tunnel/lastBuild/consoleText | grep Port | grep -Eo '[0-9]{3,5}')
I wonder if there is anything similar I can use in a Jenkinsfile. I currently have the 2 jobs triggered in parallel, but since create-db-tunnel is no longer a freestyle job, I'm not sure whether I can still get the port number. I can confirm that the console log for the db_tunnel stage has the port number in there; I'm just not sure how to query that console. Here is my Jenkinsfile:
pipeline {
    agent any
    environment {
        APTIBLE_LOGIN = credentials('aptible')
    }
    stages {
        stage('Setup') {
            parallel {
                // run db_tunnel and get_port in parallel
                stage('db_tunnel') {
                    steps {
                        sh """
                            export PATH=$PATH:/usr/local/bin
                            aptible login --email=$APTIBLE_LOGIN_USR --password=$APTIBLE_LOGIN_PSW
                            aptible db:tunnel postgres-prod & sleep 30s
                        """
                    }
                }
                stage('get_port') {
                    steps {
                        sh """
                            sleep 15s
                            # this will not work
                            Port=$(curl -u ${USERNAME}:${TOKEN} http://myjenkinsurl.com/job/db_tunnel/lastBuild/consoleText | grep Port | grep -Eo '[0-9]{3,5}')
                            echo "Port=$Port" > port.txt
                        """
                    }
                }
            }
        }
    }
}
Actually, I found a solution to my question. It was a very similar curl command I had to run, and I'm now getting the port number I needed. The key changes are escaping the command substitution as \$(...) so that Groovy does not try to interpolate it inside the triple-double-quoted string (the shell evaluates it instead), and authenticating curl with a Jenkins credentials pair. Here is the Jenkinsfile if someone is interested:
pipeline {
    agent any
    environment {
        APTIBLE_LOGIN = credentials('aptible')
        JENKINS_TOKEN = credentials('jenkins')
    }
    stages {
        stage('Setup') {
            parallel {
                // run db_tunnel and get_port in parallel
                stage('db_tunnel') {
                    steps {
                        sh """
                            export PATH=$PATH:/usr/local/bin
                            aptible login --email=$APTIBLE_LOGIN_USR --password=$APTIBLE_LOGIN_PSW
                            aptible db:tunnel postgres-prod & sleep 30s
                        """
                    }
                }
                stage('get_port') {
                    steps {
                        sh """
                            sleep 20
                            Port=\$(curl -u $JENKINS_TOKEN_USR:$JENKINS_TOKEN_PSW http://myjenkinsurl.com/job/schema-archive-jenkinsfile/lastBuild/consoleText | grep Port | grep -Eo '[0-9]{3,5}')
                            echo "Port=\$Port" > port.txt
                        """
                    }
                }
            }
        }
    }
}
