Jenkins pass variable out of script section

I'm trying to create a pipeline that creates EC2 instances and I want to execute some shell commands on it.
After Terraform applies the plan, AWS creates new instances:
Build [no info about public IP yet]
Staging [no info about public IP yet]
It's impossible to retrieve the public IPv4 address at the plan stage, so I created this construction:
stage ('Running command on a Build server') {
    steps {
        script {
            BUILD_INSTANCE_IP = sh (
                script: """
                    aws --region eu-central-1 ec2 describe-instances --filter \
                    "Name=instance-state-name,Values=running" --query \
                    "Reservations[*].Instances[*].[PublicIpAddress, Tags[?Key=='Name'].Value|[0]]" \
                    --output text | grep Build | cut -f1 # retrieve the 'Build' instance IP
                """, returnStdout: true
            ).trim()
            sleep time: 3, unit: 'MINUTES' // wait 3 minutes until the instance is available to connect
            sh ( script: """ssh -i ${AWS_KEY} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@${BUILD_INSTANCE_IP} touch /home/ubuntu/test.txt""", returnStdout: true ).trim()
So here I need to connect to the instance via the sshagent {} directive, but after Jenkins moves to a new section, the value of the first variable is lost.
How do I make a variable that won't change?

I am sure this example will help you. It shows how to use or re-use a variable across stages throughout the pipeline:
def x
node() {
    stage('shell') {
        x = sh(script: "echo 10", returnStdout: true).trim() as int // trim the trailing newline, then convert from String to int
        print x // result = 10
    }
    stage('groovy') {
        x = x + 10
        print x // result = 20
    }
}
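For the asker's declarative pipeline, the same idea can be combined with `env.`. This is only a sketch (the `echo` below stands in for the question's `aws ec2 describe-instances` call): a value assigned to `env.` in one stage's `script` block is visible as an environment variable in later stages' `sh` steps.

```groovy
pipeline {
    agent any
    stages {
        stage('Get IP') {
            steps {
                script {
                    // hypothetical placeholder command; the question retrieves
                    // the IP with `aws ec2 describe-instances ... | cut -f1`
                    env.BUILD_INSTANCE_IP = sh(script: 'echo 10.0.0.5', returnStdout: true).trim()
                }
            }
        }
        stage('Use IP') {
            steps {
                // env.BUILD_INSTANCE_IP is exported to the shell here
                sh 'echo "connecting to $BUILD_INSTANCE_IP"'
            }
        }
    }
}
```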

Related

Jenkins Pipeline: I cannot add a variable from a concatenated command in bash script

I have created several bash scripts that work perfectly in the Linux shell, but when I try to incorporate them in a Jenkins Pipeline I get multiple errors. I attach an example of my Pipeline where I just want to show the value of my variables. The pipeline works fine except when I added the environment block in line 5; you can see that there are special characters that are not interpreted by Groovy the way Bash interprets them.
pipeline {
    agent {
        label params.LABS == "any" ? "" : params.LABS
    }
    environment {
        PORT_INSTANCE="${docker ps --format 'table {{ .Names }} \{{ .Ports }}' --filter expose=7000-8999/tcp | (read -r; printf "%s\n"; sort -k 3) | grep web | tail -1 | sed 's/.*0.0.0.0.0://g'|sed 's/->.*//g'}"
    }
    stages {
        stage('Setup parameters') {
            steps {
                script {
                    properties([
                        parameters([
                            choice(
                                choices: ['LAB-2', 'LAB-3'],
                                name: 'LABS'
                            ),
                            string(
                                defaultValue: 'cliente-1',
                                name: 'INSTANCE_NAME',
                                trim: true
                            ),
                            string(
                                defaultValue: '8888',
                                name: 'PORT_NUMBER',
                                trim: true
                            ),
                            string(
                                defaultValue: 'lab.domain.com',
                                name: 'DOMAIN_NAME',
                                trim: true
                            )
                        ])
                    ])
                }
                sh """
                echo '${params.INSTANCE_NAME}'
                echo '${params.PORT_NUMBER}'
                echo '${params.DOMAIN_NAME}'
                echo '${PORT_INSTANCE}'
                """
            }
        }
    }
}
I already tried the same thing from an sh """ command """ section and it throws the same errors.
Can someone help me understand how to run advanced commands that work in the Linux shell (bash)? That is, is there a way to migrate scripts from bash to Jenkins?
Thank you very much for your help ;)
I want to be able to create a variable from a bash script command from the Pipeline in Jenkins:
PORT_INSTANCE="${docker ps --format 'table {{ .Names }} \{{ .Ports }}' --filter expose=7000-8999/tcp | (read -r; printf "%s\n"; sort -k 3) | grep web | tail -1 | sed 's/.*0.0.0.0.0://g'|sed 's/->.*//g'}"
I believe that you can't execute a bash script in the environment step based on the documentation.
You can create a variable from a bash script using the sh step with returnStdout set to true. Declarative pipeline doesn't allow you to assign the return value to a variable, so you will need to call sh inside a script block like this:
stage('Calculate port') {
    steps {
        script {
            // When you don't use `def` in front of a variable, you implicitly create a global variable.
            // This means that the variable will keep its value and can be used in any following line in your script.
            PORT_INSTANCE = sh returnStdout: true, script: "docker ps --format 'table {{ .Names }} \{{ .Ports }}' --filter expose=7000-8999/tcp | (read -r; printf \"%s\\n\"; sort -k 3) | grep web | tail -1 | sed 's/.*0.0.0.0.0://g'|sed 's/->.*//g'"
            // Shell output will contain a newline character at the end, remove it
            PORT_INSTANCE = PORT_INSTANCE.trim()
        }
    }
}
I would add a stage like this, as the first stage in my pipeline.
Note that I didn't run the same shell command as you when I was testing this, so my command may have issues like un-escaped quotes.
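One way to check a command like this outside Jenkins is to run the parsing pipe against canned `docker ps` output. In the sketch below, the container names and ports are made up, and the header-dropping step is simplified to `tail -n +2` instead of the `(read -r; ...)` subshell from the answer:

```shell
#!/bin/sh
# Mocked `docker ps` output (hypothetical containers) so the parsing
# pipe can be exercised without Docker.
mock_docker_ps() {
  printf '%s\n' \
    'NAMES       PORTS' \
    'db-1        0.0.0.0:5432->5432/tcp' \
    'web-alpha   0.0.0.0:8100->7000/tcp' \
    'web-beta    0.0.0.0:8200->7000/tcp'
}

# Drop the header, keep the last "web" container, strip everything
# except the published host port.
PORT_INSTANCE=$(mock_docker_ps | tail -n +2 | grep web | tail -1 \
  | sed 's/.*0\.0\.0\.0://; s/->.*//')
echo "$PORT_INSTANCE"
```

Run against the mock, this prints the host port of the last matching container (8200 here), which is the value the Jenkins stage would assign to PORT_INSTANCE.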

make a deployment redownload an image with jenkins

I wrote a pipeline for a Hello World web app, nothing big; it's a simple hello world page.
I made it so if the tests pass, it'll deploy it to a remote kubernetes cluster.
My problem is that if I change the HTML page and try to redeploy to k8s, the page remains the same (the pods aren't re-rolled and the image is outdated).
I have imagePullPolicy set to Always. I thought of using specific tags within the deployment YAML, but I have no idea how to integrate that with my Jenkins (as in, how do I make Jenkins set the BUILD_NUMBER as the tag for the image in the deployment?).
Here is my pipeline:
pipeline {
    agent any
    environment {
        user = "NAME"
        repo = "prework"
        imagename = "${user}/${repo}"
        registryCreds = 'dockerhub'
        containername = "${repo}-test"
    }
    stages {
        stage ("Build") {
            steps {
                // Building artifact
                sh '''
                docker build -t ${imagename} .
                docker run -p 80 --name ${containername} -dt ${imagename}
                '''
            }
        }
        stage ("Test") {
            steps {
                sh '''
                IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${containername})
                STATUS=$(curl -sL -w "%{http_code} \n" $IP:80 -o /dev/null)
                if [ $STATUS -ne 200 ]; then
                    echo "Site is not up, test failed"
                    exit 1
                fi
                echo "Site is up, test succeeded"
                '''
            }
        }
        stage ("Store Artifact") {
            steps {
                echo "Storing artifact: ${imagename}:${BUILD_NUMBER}"
                script {
                    docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
                        def customImage = docker.image(imagename)
                        customImage.push(BUILD_NUMBER)
                        customImage.push("latest")
                    }
                }
            }
        }
        stage ("Deploy to Kubernetes") {
            steps {
                echo "Deploy to k8s"
                script {
                    kubernetesDeploy(configs: "deployment.yaml", kubeconfigId: "kubeconfig")
                }
            }
        }
    }
    post {
        always {
            echo "Pipeline has ended, deleting image and containers"
            sh '''
            docker stop ${containername}
            docker rm ${containername} -f
            '''
        }
    }
}
EDIT:
I used sed to replace the latest tag with the build number every time I'm running the pipeline and it works. I'm wondering if any of you have other ideas because it seems so messy right now.
Thanks.
According to the Kubernetes Continuous Deploy Plugin documentation (point 6), you can add enableConfigSubstitution: true to the kubernetesDeploy() section and use ${BUILD_NUMBER} instead of latest in deployment.yaml:
By checking "Enable Variable Substitution in Config", the variables
(in the form of $VARIABLE or ${VARIABLE}) in the configuration files
will be replaced with the values from corresponding environment
variables before they are fed to the Kubernetes management API. This
allows you to dynamically update the configurations according to each
Jenkins task, for example, using the Jenkins build number as the image
tag to be pulled.
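With that flag enabled, the manifest can reference the build number directly. A sketch of the relevant fragment, reusing the image name from the question's environment block (the surrounding deployment fields are assumed):

```yaml
# deployment.yaml (fragment, sketch) - ${BUILD_NUMBER} is substituted
# by the plugin before the config reaches the Kubernetes API
spec:
  template:
    spec:
      containers:
        - name: prework
          image: NAME/prework:${BUILD_NUMBER}
          imagePullPolicy: Always
```

And in the pipeline: kubernetesDeploy(configs: "deployment.yaml", kubeconfigId: "kubeconfig", enableConfigSubstitution: true). Each build then produces a distinct tag, so the pods roll to the new image instead of reusing a cached latest.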

Jenkins scripted pipeline environment variable

I'm using the Jenkins scripted pipeline, and since the last update I get a new warning that I want to silence. Here is an MWE:
// FROM https://jenkins.io/doc/pipeline/examples/#parallel-multiple-nodes
def labels = []
if (Host == 'true') {
    labels.add('<host-slavename>')
}
def builders = [:]
for (x in labels) {
    def label = x
    builders[label] = {
        ansiColor('xterm') {
            node(label) {
                stage('cleanup') {
                    deleteDir()
                }
                stage('build') {
                    def timestamp = sh (script: 'echo -n `(date +%Y%m%d%H%M%S)`', returnStdout: true)
                    withCredentials([string(credentialsId: 'TEST_PASSWORD', variable: 'PASSWORD')]) {
                        sh """
                        logger \
                        TEST_1 "${PASSWORD}" TEST_2 \
                        TEST_3 $timestamp TEST_4
                        """
                        sh '''
                        logger \
                        TEST_1 "$PASSWORD" TEST_2 \
                        TEST_3 $timestamp TEST_4
                        '''
                    }
                }
            }
        }
    }
}
parallel builders
the first sh block returns
Warning: A secret was passed to "sh" using Groovy String interpolation, which is insecure.
Affected argument(s) used the following variable(s): [PASSWORD]
See https://jenkins.io/redirect/groovy-string-interpolation for details.
+ logger TEST_1 **** TEST_2 TEST_3 20211029074911 TEST_4
which at least prints the timestamp and the password (it's censored, but works), but raises the warning.
the second sh block returns
+ logger TEST_1 **** TEST_2 TEST_3 TEST_4
So no warning, but also no timestamp.
Is there a way to write a scripted pipeline that shows no warning, but still the timestamp?
The warning occurs when you use Groovy string interpolation in the first sh step like "${PASSWORD}" for the reasons explained in Interpolation of Sensitive Environment Variables.
That's why you should always let the shell resolve environment variables as you correctly do in the 2nd sh step.
To use non-environment variables like timestamp, convert them to environment variables by wrapping the sh step in a withEnv step:
withEnv(["timestamp=$timestamp"]) {
    sh '''
    logger \
    TEST_1 "$PASSWORD" TEST_2 \
    TEST_3 $timestamp TEST_4
    '''
}
This limits the scope of the environment variable to the withEnv block.
Alternatively you could add a member to the env map:
env.timestamp = sh (script: 'echo -n `(date +%Y%m%d%H%M%S)`', returnStdout: true)
sh '''
logger \
TEST_1 "$PASSWORD" TEST_2 \
TEST_3 $timestamp TEST_4
'''

aws ecs ec2 continuous deployment with jenkins

I am using Jenkins for continuous deployment from GitLab into an AWS ECS EC2 container instance, using a Jenkinsfile for this purpose. To register the task definition on each push, I have placed the task-definition JSON file in an aws folder in GitLab. Is it possible to put the task-definition JSON file in Jenkins, so that we only need to keep the Jenkinsfile in GitLab?
There is a workspace folder in Jenkins, /var/lib/jenkins/workspace/jobname, which is created after the first build. Can we place the task definition there?
My Jenkinsfile is pasted below
stage 'Checkout'
git 'git@gitlab.xxxx.com/repo.git'
stage ("Docker build") {
    sh "docker build --no-cache -t xxxx:${BUILD_NUMBER} ."
}
stage ("Docker push") {
    docker.withRegistry('https://xxxxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com', 'ecr:regopm:ecr-credentials') {
        docker.image("xxxxx:${BUILD_NUMBER}").push(remoteImageTag)
    }
}
stage ("Deploy") {
    sh "sed -e 's;BUILD_TAG;${BUILD_NUMBER};g' aws/task-definition.json > aws/task-definition-${remoteImageTag}.json"
    sh " \
        aws ecs register-task-definition --family ${taskFamily} \
        --cli-input-json ${taskDefile} \
    "
    def taskRevision = sh (
        returnStdout: true,
        script: " aws ecs describe-task-definition --task-definition ${taskFamily} | egrep 'revision' | tr ',' ' '| awk '{print \$2}' "
    ).trim()
    sh " \
        aws ecs update-service --cluster ${clusterName} \
        --service ${serviceName} \
        --task-definition ${taskFamily}:${taskRevision} \
        --desired-count 1 \
    "
}
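The revision-extraction pipe in the Deploy stage can be sanity-checked against a canned describe-task-definition response. The JSON below is a made-up fragment, not real AWS output:

```shell
#!/bin/sh
# Canned fragment standing in for `aws ecs describe-task-definition` output.
mock_describe_task_definition() {
  printf '%s\n' \
    '{' \
    '    "taskDefinition": {' \
    '        "family": "myapp",' \
    '        "revision": 42,' \
    '        "status": "ACTIVE"' \
    '    }' \
    '}'
}

# Same pipe as the Jenkinsfile: grab the revision line, replace the
# comma with a space, print the second whitespace-separated field.
TASK_REVISION=$(mock_describe_task_definition \
  | egrep 'revision' | tr ',' ' ' | awk '{print $2}')
echo "$TASK_REVISION"
```

Against this mock the pipe yields 42, the value the Jenkinsfile would pass to aws ecs update-service as the task-definition revision.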
On the very same approach, but putting together some reusable logic, we just open-sourced our "gluing" tool, which we're using from Jenkins as well (please see the Extra section for templates of Jenkins pipelines):
https://github.com/GuccioGucci/yoke
Use ecs-cli as an alternative in case you do not want to use a task definition: install ecs-cli on the Jenkins node and run it, but that still needs a docker-compose file in the git repo.

Jenkins pipeline sh does not seem to respect pipe in shell command

I am using a Jenkinsfile in a pipeline on version 2.32.2.
For various reasons I want to extract the version string from the pom. I was hoping I wouldn't have to add the maven help plugin and use evaluate.
I quickly came up with a little sed expression to get it out of the pom; it uses pipes and works on the command line in the Jenkins workspace on the executor.
$ sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\/*version>//g'
1.0.0-SNAPSHOT
It could probably be optimized, but I want to understand why the pipeline seems to be failing on piped sh commands. I've played with various string formats and am currently using a dollar slashy string.
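As a sanity check that the expression itself is sound when the shell receives it verbatim, here is a self-contained version against a throwaway pom (the /tmp path is arbitrary, not part of the original question):

```shell
#!/bin/sh
# Throwaway pom so the pipe can be tried outside a Jenkins workspace.
cat > /tmp/pom-demo.xml <<'EOF'
<project>
  <modelVersion>4.0.0</modelVersion>
  <artifactId>demo</artifactId>
  <version>1.0.0-SNAPSHOT</version>
</project>
EOF

# Same pipe as above: print from the first <version> line onward,
# keep the first line, strip the surrounding tags and whitespace.
POM_VERSION=$(sed -n '/<version>/,/<version/p' /tmp/pom-demo.xml \
  | head -1 | sed 's/[[:blank:]]*<\/*version>//g')
echo "$POM_VERSION"
```

Run directly, this prints 1.0.0-SNAPSHOT, confirming the problem lies in how the pipeline step splits the pipe, not in the sed expression.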
The pipeline step looks like the following to allow for easy output of the command string:
script {
    def ver_script = $/sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\/*version>//g'/$
    echo "${ver_script}"
    POM_VERSION = sh(script: "${ver_script}", returnStdout: true)
    echo "${POM_VERSION}"
}
When run in the jenkins pipeline I get the following console output where it seems to be separating the piped commands into separate commands:
[Pipeline] script
[Pipeline] {
[Pipeline] echo
sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\/*version>//g'
[Pipeline] sh
[FRA-198-versioned-artifacts-44SD6DBQOGOI54UEF7NYE4ECARE7RMF7VQYXDPBVFOHS5CMSTFLA] Running shell script
+ sed -n /<version>/,/<version/p pom.xml
+ head -1
+ sed s/[[:blank:]]*<\/*version>//g
sed: couldn't write 89 items to stdout: Broken pipe
[Pipeline] }
[Pipeline] // script
Any guidance out there on how to properly use piped commands in a jenkinsfile ?
I finally put some thought into it and realized that pipe subshells are probably causing the issue. I know some of the evils of eval, but I ended up wrapping this in an eval:
script {
    def ver_script = $/eval "sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\/*version>//g'"/$
    echo "${ver_script}"
    POM_VERSION = sh(script: "${ver_script}", returnStdout: true)
    echo "${POM_VERSION}"
}
I know this is kind of a late answer, but for whoever needs the solution without eval: you can use /bin/bash -c "script" to make the pipe work.
script {
    POM_VERSION = sh(script: '''/bin/bash -c "sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\\/*version>//g'"''', returnStdout: true)
    echo "${POM_VERSION}"
}
The only problem with this method is the hellish escaping, yet this way the subshell of the pipe will be handled by our boy /bin/bash -c.
If your environment allows it, I've found a simple solution to this problem to be to place your script containing pipes into a file, and then run that with sh, like so:
script.sh:
#!/bin/sh
kubectl exec --container bla -i $(kubectl get pods | awk '/foo-/{ print $1 }') -- php /code/dostuff
Jenkinsfile:
stage('Run script with pipes') {
    steps {
        sh "./script.sh"
    }
}
The pipeline-utility-steps plugin nowadays includes a readMavenPom step, which allows to access the version as follows:
version = readMavenPom.getVersion()
So nothing detailed above worked for me using the scripted Jenkinsfile syntax with Groovy. I was able to get it working, however. The type of quotation marks you use is important. In the example below, I am trying to fetch the latest git tag from GitHub.
...
stage("Get latest git tag") {
    if (env.CHANGE_BRANCH == 'master') {
        sh 'git fetch --tags'
        TAGGED_COMMIT = sh(script: 'git rev-list --branches=master --tags --max-count=1', returnStdout: true).trim()
        LATEST_TAG = sh(script: 'git describe --abbrev=0 --tags ${TAGGED_COMMIT}', returnStdout: true).trim()
        VERSION_NUMBER = sh(script: "echo ${LATEST_TAG} | cut -d 'v' -f 2", returnStdout: true).trim()
        echo "VERSION_NUMBER: ${VERSION_NUMBER}"
        sh 'echo "VERSION_NUMBER: ${VERSION_NUMBER}"'
    }
}
...
Notice how the shell execution to assign LATEST_TAG works as expected (assigning the variable to v2.1.0). If we were to try the same thing (with single quotes) to assign VERSION_NUMBER, it would NOT work - the pipe messes everything up. Instead, we wrap the script in double quotes.
The first echo prints VERSION_NUMBER: 2.1.0 but the second prints VERSION_NUMBER:. If you want VERSION_NUMBER to be available in the shell commands, you have to assign the output of the shell command to env.VERSION_NUMBER as shown below:
...
stage("Get latest git tag") {
    if (env.CHANGE_BRANCH == 'master') {
        sh 'git fetch --tags'
        TAGGED_COMMIT = sh(script: 'git rev-list --branches=master --tags --max-count=1', returnStdout: true).trim()
        LATEST_TAG = sh(script: 'git describe --abbrev=0 --tags ${TAGGED_COMMIT}', returnStdout: true).trim()
        env.VERSION_NUMBER = sh(script: "echo ${LATEST_TAG} | cut -d 'v' -f 2", returnStdout: true).trim()
        echo "VERSION_NUMBER: ${VERSION_NUMBER}"
        sh 'echo "VERSION_NUMBER: ${VERSION_NUMBER}"'
    }
}
...
The first echo prints VERSION_NUMBER: 2.1.0 and the second prints VERSION_NUMBER: 2.1.0.
I am also struggling with the use of a pipe inside my Jenkins pipeline, but as a side note: if you want a simple way to extract the version of a Maven pom, here's a very clean one I found in another post and that I'm using:
stage('Preparation') {
    version = getVersion()
    print "version : " + version
}

def getVersion() {
    def matcher = readFile('pom.xml') =~ '<version>(.+)</version>'
    matcher ? matcher[0][1] : null
}
gives you :
[Pipeline] echo
releaseVersion : 0.1.24
