Accessing Jenkins shell variables within a k8s container

stages {
    stage('test') {
        steps {
            withCredentials([string(credentialsId: 'kubeconfigfile', variable: 'KUBECONFIG')]) {
                container('deploycontainer') {
                    sh 'TEMPFILE=$(mktemp -p "${PWD}" kubeconfig.XXXXX)'
                    sh 'echo "${TEMPFILE}"'
                }
            }
        }
    }
}
I'm new to creating pipelines and am trying to convert a freestyle job over to a pipeline.
I'm trying to create a temp file for a kubeconfig file within the container.
I've tried every way I could think of to access the shell variables rather than Groovy variables.
Even trying the below prints nothing on echo:
sh 'TEMPFILE="foo"'
sh 'echo ${TEMPFILE}'
I've tried escaping and using double quotes, as well as single- and triple-quote blocks.
How do you access the shell variables from within the container block, and how do you make a temp file and echo it back out within that block?

With Jenkinsfiles, each sh step runs its own shell. When each shell terminates, all of its state is lost.
If you want to run multiple shell commands in order, you can do one of two things.
You can have a long string of commands separated by semi-colons:
sh 'cmd1; cmd2; cmd3; ...'
Or you can use ''' or """ to extend the commands over multiple lines (note that if you use """, Groovy will perform string interpolation):
sh """
cmd1
cmd2
cmd3
...
"""
In your specific case, if you choose option 2, it will look like this:
sh '''
TEMPFILE=$(mktemp -p "${PWD}" kubeconfig.XXXXX)
echo "${TEMPFILE}"
'''
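(The single quotes are deliberate here: ${PWD} and ${TEMPFILE} must reach the shell untouched.) To make the interpolation difference concrete, here is a minimal sketch, where greeting is a made-up Groovy variable:
def greeting = 'hello'
// """ is a GString: Groovy substitutes ${greeting} before the shell ever runs
sh """
    echo "${greeting} from Groovy"
"""
// ''' is a plain string: $HOME is left alone for the shell to expand
sh '''
    echo "my home is $HOME"
'''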
Caveat
If you are specifying a particular shebang in a multiline string, you MUST put the shebang immediately after the opening quotes, not on the next line:
sh """#!/usr/bin/env zsh
cmd1
cmd2
cmd3
...
"""

Related

How can I make this pipeline work for multiple runs at the same time?

Currently it creates a network named "denpal_default" and it gives this message:
Removing network denpal_default
Network denpal_default not found.
Network test-network is external, skipping
I haven't tested it yet, but I assume that if it creates the denpal_default network and deletes it, it cannot run multiple builds at the same time.
I was thinking about a solution that would create a random COMPOSE_PROJECT_NAME="denpal-randomnumber" and build based on that.
But how do I use a variable set in the "Docker Build" stage in the "Verification" stage later on?
stage('Docker Build') {
    steps {
        sh '''
            docker-compose config -q
            docker network prune -f && docker network inspect test-network >/dev/null || docker network create test-network
            COMPOSE_PROJECT_NAME=denpal docker-compose down
            COMPOSE_PROJECT_NAME=denpal docker-compose up -d --build "$#"
        '''
    }
}
stage('Verification') {
    steps {
        sh '''
            docker-compose exec -T cli curl http://nginx:8080 -v
            COMPOSE_PROJECT_NAME=denpal docker-compose down
        '''
    }
}
You can use variables in sh commands in a pipeline, since the script is basically a string, and leverage Groovy GStrings (http://groovy-lang.org/syntax.html).
Example for a scripted pipeline; for declarative, use environment variables:
def random = UUID.randomUUID().toString()
sh """
    echo "hello ${random}"
"""
Two common pitfalls: you must use double quotes (GString; a single-quoted string is a regular string and is not interpolated), and "stage" is scoped, so define your variables globally or in the same stage.
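Applied to the question above, one way to share a per-build project name across stages is to set it once in a declarative environment block. A sketch under those assumptions (BUILD_NUMBER is a standard Jenkins variable; the stage contents mirror the question):
pipeline {
    agent any
    environment {
        // unique per build, exported to sh steps in every stage
        COMPOSE_PROJECT_NAME = "denpal-${BUILD_NUMBER}"
    }
    stages {
        stage('Docker Build') {
            steps {
                sh 'docker-compose up -d --build'
            }
        }
        stage('Verification') {
            steps {
                sh 'docker-compose exec -T cli curl http://nginx:8080 -v'
                sh 'docker-compose down'
            }
        }
    }
}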

How to create a Jenkins input that's non-blocking and based on previous command output

I have 2 issues that are both part of the same problem. I am running terraform inside a Jenkinsfile; this all happens in a docker container that runs on a specific node. I have a few different environments with the ec2_plugin that are labeled 'environment_ec2'. It's done this way since we use ansible, and I want to be able to execute ansible locally in the VPC.
1) How do you create an input and stage that are only executed if a previous command returns a specific output?
2) How can I make this non blocking?
node('cicd_ec2') {
    stage('Prepare Environment') {
        cleanWs()
        checkout scm
    }
    withAWSParameterStore(credentialsId: 'jenkin_cicd', naming: 'relative', path: '/secrets/cicd/', recursive: true, regionName: 'us-east-1') {
        docker.image('jseiser/jenkins_devops:0.7').inside {
            stage('Configure Git Access') {
                sh 'mkdir -p ~/.ssh'
                sh 'mv config ~/.ssh/config'
                sh 'chmod 600 ~/.ssh/config'
                sh "echo '$BITBUCKET_CLOUD' > ~/.ssh/bitbucket_rsa"
                sh 'chmod 600 ~/.ssh/bitbucket_rsa'
                sh "echo '$CICD_CODE_COMMIT_KEY' > ~/.ssh/codecommit_rsa"
                sh 'chmod 600 ~/.ssh/codecommit_rsa'
                sh "echo '$IDAUTO_CICD_MGMT_PEM' > ~/.ssh/idauto-cicd-mgmt.pem"
                sh 'chmod 600 ~/.ssh/idauto-cicd-mgmt.pem'
                sh 'ssh-keyscan -t rsa bitbucket.org >> ~/.ssh/known_hosts'
                sh 'ssh-keyscan -t rsa git-codecommit.us-east-1.amazonaws.com >> ~/.ssh/known_hosts'
            }
            stage('Terraform') {
                sh './init-ci.sh'
                sh 'terraform validate'
                sh 'terraform plan -detailed-exitcode -out=create.tfplan'
            }
            input 'Deploy stack?'
            stage('Terraform Apply') {
                sh 'terraform apply -no-color create.tfplan'
            }
            stage('Ansible') {
                sh 'ansible-galaxy -vvv install -r requirements.yml'
                sh 'ansible-playbook -i ~/ vpn.yml'
            }
        }
    }
}
I only want to run the input and terraform apply if the result of the below command is == 2.
terraform plan -detailed-exitcode
Since this all has to run on an ec2 instance, and it all has to use this container, I am not sure how I can do this input outside of a node as is recommended. If the input sits long enough, this instance may go down, the rest of the code would run on a new instance/workspace, and the information I need from the git repos and the terraform plan would not be present. The git repo that I check out contains the terraform configurations, the ansible configurations, and some SSH configuration so that terraform and ansible are able to pull in their modules/roles from private git repos. The 'create.tfplan' that I would need to use IF terraform has a change would also need to be passed around.
I'm just really confused how I can get a good input, only get that input if I really need to run terraform apply, and how I can make it non-blocking.
I had to adapt this from my work-in-progress, which is based on declarative pipeline, but I hope it still mostly works:
def tfPlanExitCode
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Plan') {
        // returnStatus gives the shell exit code back as an int
        tfPlanExitCode = sh(script: 'terraform plan -out=create.tfplan -detailed-exitcode', returnStatus: true)
        stash 'workspace'
    }
}
if (tfPlanExitCode == 2) {
    input('Deploy stack?')
    stage('Apply') {
        node {
            unstash 'workspace'
            sh 'terraform apply -no-color create.tfplan'
        }
    }
}
The building blocks are:
- don't allocate an executor while the input is waiting (for hours..)
- stash your workspace contents (you can optionally specify which files to copy) and unstash them later on the agent that continues the build
The visualization might be a bit screwed up when some builds have the Apply stage and some don't. That's why I'm using declarative pipelines, which allow stages to be skipped nicely/explicitly.
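For reference, a declarative sketch of the same idea, using the input directive (which waits without holding an executor) and a when condition evaluated before the input; syntax assumed from Declarative Pipeline, adapt as needed:
pipeline {
    agent none
    stages {
        stage('Plan') {
            agent any
            steps {
                checkout scm
                script {
                    // -detailed-exitcode: 0 = no changes, 2 = changes to apply
                    env.TF_PLAN_EXIT = sh(script: 'terraform plan -out=create.tfplan -detailed-exitcode', returnStatus: true).toString()
                }
                stash 'workspace'
            }
        }
        stage('Apply') {
            when {
                beforeInput true
                expression { env.TF_PLAN_EXIT == '2' }
            }
            input { message 'Deploy stack?' }
            agent any
            steps {
                unstash 'workspace'
                sh 'terraform apply -no-color create.tfplan'
            }
        }
    }
}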

setting environment variable in scripted pipeline

I'm trying to create a virtualenv (stage) in Jenkins and set the needed environment variables before the virtualenv can be created.
stage('create virtualenvironment') {
    sh 'export PATH=/usr/local/bin/virtualenv:$PATH'
    sh 'export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python'
    sh 'export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv'
    sh 'source /usr/local/bin/virtualenvwrapper.sh'
    echo 'createvirtualenvwrapper'
    sh 'mkvirtualenv testproject'
}
When I execute this script, I get this message:
mkvirtualenv: command not found
When I print all the above env variables, nothing is set. Not sure if the sh command is working as expected in scripted pipeline.
I'm not 100% sure, but my guess is that when you do sh 'some command', it executes a shell script and is done.
So each of your sh commands is being treated as a separate shell script, which executes its commands, is alive only for that session, and closes once the script is done.
So try combining all of the above commands into a single sh command, along with mkvirtualenv testproject, and it should work.
For readability, create a new shell script like runProject.sh containing the above commands, and then you can just call
sh './runProject.sh'
Hope it helps :)
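For the concrete case above, that combined step might look like this (a sketch; note the shebang placed immediately after the opening quotes, since source and virtualenvwrapper need bash rather than plain sh):
stage('create virtualenvironment') {
    // one shell session, so the exports survive until mkvirtualenv runs
    sh '''#!/bin/bash
        export PATH=/usr/local/bin/virtualenv:$PATH
        export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python
        export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv
        source /usr/local/bin/virtualenvwrapper.sh
        mkvirtualenv testproject
    '''
}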

Access a Groovy variable from within shell step in Jenkins pipeline

Using the Pipeline plugin in Jenkins 2.x, how can I access a Groovy variable that is defined somewhere at stage- or node-level from within a sh step?
Simple example:
node {
    stage('Test Stage') {
        some_var = 'Hello World' // this is Groovy
        echo some_var // printing via Groovy works
        sh 'echo $some_var' // printing in shell does not work
    }
}
gives the following on the Jenkins output page:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test Stage)
[Pipeline] echo
Hello World
[Pipeline] sh
[test] Running shell script
+ echo
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
As one can see, echo in the sh step prints an empty string.
A work-around would be to define the variable in the environment scope via
env.some_var = 'Hello World'
and print it via
sh 'echo $some_var'
However, this kind of abuses the environment scope for this task.
To use a templatable string, where variables are substituted into a string, use double quotes.
sh "echo $some_var"
I am adding the comment from @Pedro as an answer because I think it is important.
For sh env vars we must use
sh "echo \$some_var"
You need to do something like the below if a bash script is required.
Set these variables at global or local (function) level so they are accessible to the sh script:
def stageOneWorkSpace = "/path/test1"
def stageTwoWorkSpace = "/path/test2"
In shell script call them like below
sh '''
echo ''' + stageOneWorkSpace + '''
echo ''' + stageTwoWorkSpace + '''
cp -r ''' + stageOneWorkSpace + '''/qa/folder1/* ''' + stageOneWorkSpace + '''/qa/folder2
'''
Make sure you start and end sh with three quotes like '''
I would like to add another scenario to this discussion.
I was using shell environment variables and Groovy variables in the same script:
format='html'
for file in *.txt;
do mv -- "\$file" "\${file%.txt}.$format";
done
So here, what I have done is use \$ only for shell environment variables and $ for Groovy variables.
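For context, this kind of snippet lives inside a double-quoted sh block, roughly like the following sketch (assuming format is defined as a Groovy variable; \$file and \${file%.txt} are escaped so the shell expands them):
def format = 'html'  // Groovy variable, interpolated by the GString
sh """
    for file in *.txt; do
        mv -- "\$file" "\${file%.txt}.${format}"
    done
"""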
This is an extension to @Dave Bacher's answer. I'm running multiple shell commands in a Groovy file and want to use the output of one shell command in the next command as a Groovy variable. Using double quotes in the shell command, Groovy passes the variable from one command to the next, but using single quotes it does not work; it returns null.
So use the shell command in double quotes like this: sh "echo ${FOLDER_NAME}"
FOLDER_NAME = sh(script: $/
awk -F '=' '/CODE_COVERAGE_FOLDER/ {gsub("\"","");print$2}' ${WORKSPACE}/test.cfg
/$, returnStdout: true).trim()
echo "Folder: ${FOLDER_NAME}" // print folder name in groovy console
sh "mkdir -p ${WORKSPACE}/${FOLDER_NAME} && chmod 777 ${WORKSPACE}/${FOLDER_NAME}"

How can I start a bash login shell in jenkins pipeline (formerly known as workflow)?

I am just starting to convert my Jenkins jobs into the new Jenkins Pipeline(workflow) tool, and I'm having trouble getting the sh command to use a bash login shell.
I've tried
sh '''
#!/bin/bash -l
echo $0
'''
but the echo $0 command is always being executed in an interactive shell, rather than a bash login shell.
@izzekil is right!!!! Thank you so much!
So, to elaborate a little about what is going on: I used sh with ''', which indicates a multi-line script. HOWEVER, the shebang ended up on the second line of the resulting shell script that gets dumped onto the Jenkins node, rather than on the first line. So I was able to fix it with this:
sh '''#!/bin/bash -l
echo $0
# more stuff I needed to do,
# like use rvm, which doesn't work with shell, it needs bash.
'''
