Jenkins Pipeline ${params.Environment} bad substitution - docker

I am trying to send commands to a docker container within docker run via a Jenkins pipeline.
The Jenkins machine is on one server and the docker image is on a different server.
When I hard-code the environment param, the scripts execute as expected. But whenever I try to pass it via the params, it errors out saying:
bash: ${params.Environment} bad substitution
This is my pipeline script
pipeline {
agent any
parameters {
choice(
name: 'Environment',
choices: ['qa','dev'],
description: 'Passing the Environment'
)
}
stages {
stage('Environment') {
steps {
echo " The environment is ${params.Environment}"
}
}
stage('run') {
steps {
sh 'ssh 10.x.x.x \'sudo docker run --name docker_container_name docker_image_name sh -c "cd mytests ; pip3 install -r requirements.txt ; python3 runTests.py -env ${params.Environment} "\''
}
}
}
}

The sh step's argument needs to be in double quotes, or triple double quotes.
sh """ssh 10.x.x.x 'sudo docker run --name docker_container_name docker_image_name sh -c "cd mytests ; pip3 install -r requirements.txt ; python3 runTests.py -env ${params.Environment} "'"""
In the Groovy language used by pipeline scripts, single-quoted strings don't do any interpolation at all, so the ${params.Environment} string gets passed on as-is to the shell. Double-quoted strings do perform interpolation, so the Groovy engine substitutes ${params.Environment} before invoking the shell.
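As a quick illustration of that rule (plain Groovy; the variable name here is just for demonstration):

// Single-quoted Groovy strings are literal; double-quoted GStrings interpolate ${...}
def environment = 'qa'
println 'env is ${environment}'   // prints: env is ${environment}
println "env is ${environment}"   // prints: env is qa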
(You might look at the native support for Using Docker with Pipeline which can avoid the ssh 'sudo "..."' wrapping, though it requires Jenkins be able to run Docker itself on the worker nodes.)
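If the Jenkins agent can run Docker itself, a rough sketch of that approach could look like the following (the image name and test commands are copied from the question; that the agent has Docker access and that the tests are available inside the container are assumptions):

stage('run') {
    steps {
        script {
            // Docker Pipeline plugin: run the nested steps inside a container started from the image
            docker.image('docker_image_name').inside {
                sh "cd mytests && pip3 install -r requirements.txt && python3 runTests.py -env ${params.Environment}"
            }
        }
    }
}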

Related

How to run commands on remote from a jenkins-worker via ssh?

I am writing a declarative pipeline in a Jenkinsfile in order to build and deploy an app.
The deployment is usually done by sshing to the docker host and running:
cd myDirectory
docker stack deploy --compose-file docker-compose.yml foo
I managed to run a single shell command via ssh, but don't know how to run multiple commands after each other.
This is what I have now:
pipeline {
agent { label 'onlyMyHost' }
stages {
stage("checkout"){...}
stage("build"){...}
stage("deploy") {
steps {
sshagent(credentials: ['my-sshKey']){
sh 'ssh -o StrictHostKeyChecking=no myUser@foo.bar.com hostname'
sh ("""
ssh -o StrictHostKeyChecking=no myUser@foo.bar.com 'bash -s' < "cd MyDirectory && docker stack deploy --composefile docker-compose.yml foo"
""")
}
}
}
}
}
This fails. What is a good way of running a script on a remote host from my specific jenkins-worker?
Not sure why 'bash -s' is added here. Removing that (and the input redirection) and passing the commands as a quoted argument to ssh should allow you to execute docker deploy remotely.
Moreover, you may run any number of commands on the same line by separating them with ; after each command. For example:
ssh -o StrictHostKeyChecking=no myUser@foo.bar.com "cd MyDirectory && docker stack deploy --compose-file docker-compose.yml foo ; docker ps"
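Put together inside the declarative stage, the deploy step might look roughly like this (a sketch reusing the credentials id and host from the question):

stage("deploy") {
    steps {
        sshagent(credentials: ['my-sshKey']) {
            // Pass the whole remote command line as one quoted argument to ssh
            sh '''
                ssh -o StrictHostKeyChecking=no myUser@foo.bar.com "cd MyDirectory && docker stack deploy --compose-file docker-compose.yml foo"
            '''
        }
    }
}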

Build and Run Docker Container in Jenkins

I need to run a Docker container in Jenkins so that installed libraries like pycodestyle are available in the following steps.
I successfully built the Docker container (from a Dockerfile).
How do I access the container so that I can use it in the next step? (Please look for the >> << markers in the Build step below.)
Thanks
stage('Build') {
// Install python libraries from requirements.txt (Check Dockerfile for more detail)
sh "docker login -u '${DOCKER_USR}' -p '${DOCKER_PSW}' ${DOCKER_REGISTRY}"
sh "docker build \
--tag '${DOCKER_REGISTRY}/${DOCKER_TAG}:latest' \
--build-arg HTTPS_PROXY=${PIP_PROXY} ."
>> sh "docker run -ti ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest sh" <<<
}
}
stage('Linting') {
sh '''
awd=$(pwd)
echo '===== Linting START ====='
for file in $(find . -name '*.py'); do
filename=$(basename $file)
if [[ ${file:(-3)} == ".py" ]] && [[ $filename = *"test"* ]] ; then
echo "perform PEP8 lint (python pylint blah) for $filename"
cd $awd && cd $(dirname "${file}") && pycodestyle "${filename}"
fi
done
echo '===== Linting END ====='
'''
}
You need to mount the workspace of your Jenkins job (containing your Python project) as a volume into the container (see the "docker run -v" option) and then run the "next step" inside this container. You can do this by providing a shell script as part of your project's source code which performs the "next step", or by writing this script in a previous build stage.
It would be something like this:
sh "chmod +x build.sh"
sh "docker run -v $WORKSPACE:/workspace ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest /workspace/build.sh"
build.sh is an executable script, which is part of your project's workspace and performs the "next step".
$WORKSPACE is the folder that is used by your Jenkins job (normally /var/jenkins_home/jobs/<job_name>/workspace); it is provided by Jenkins as a build variable.
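For illustration only, a hypothetical build.sh that runs the linting from the question inside the container (the file name and its exact contents are assumptions, not part of the original answer):

#!/bin/sh
# Hypothetical build.sh: lint every Python test file in the mounted workspace
set -e
cd /workspace
for file in $(find . -name '*.py' | grep test); do
    echo "perform PEP8 lint for $file"
    pycodestyle "$file"
done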
Please note: This solution requires that the Docker daemon is running on the same host as Jenkins! Otherwise the workspace will not be available to your container.
Another solution would be to run Jenkins as Docker container, so you can share the Jenkins home/workspaces easily with the containers you run within your build jobs, like described here:
Running Jenkins tests in Docker containers build from dockerfile in codebase

How can I make this pipeline work for multiple runs at the same time?

Currently it creates a network named "denpal_default" and gives this message:
Removing network denpal_default
Network denpal_default not found.
Network test-network is external, skipping
I haven't tested it yet, but I assume that if it creates the denpal_default network and then deletes it, it cannot run multiple builds at the same time.
I was thinking about a solution that would create a random COMPOSE_PROJECT_NAME="denpal-randomnumber" and build based on that.
But how do I use a variable set in the "Docker build"-stage in the "Verification"-stage later on?
stage('Docker Build') {
steps {
sh '''
docker-compose config -q
docker network prune -f && docker network inspect test-network >/dev/null || docker network create test-network
COMPOSE_PROJECT_NAME=denpal docker-compose down
COMPOSE_PROJECT_NAME=denpal docker-compose up -d --build "$@"
'''
}
}
stage('Verification') {
steps {
sh '''
docker-compose exec -T cli curl http://nginx:8080 -v
COMPOSE_PROJECT_NAME=denpal docker-compose down
'''
}
}
You can use variables in sh commands in a pipeline, since the script is basically a string, and leverage Groovy GStrings (http://groovy-lang.org/syntax.html).
Example for a scripted pipeline; for declarative pipelines use environment variables:
def random = UUID.randomUUID().toString()
sh """
echo "hello ${random}"
"""
Two common pitfalls: you must use double quotes (GString; single quotes give a regular, non-interpolated string), and "stage" is scoped, so define your variables as global or in the same stage.
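Applied to the original question, a sketch of a declarative pipeline that generates one random compose project name per build and reuses it in both stages (the stage names and commands come from the question; the naming scheme and the environment-block approach are assumptions):

pipeline {
    agent any
    environment {
        // One random project name per build, exported to every sh step
        COMPOSE_PROJECT_NAME = "denpal-${UUID.randomUUID().toString().take(8)}"
    }
    stages {
        stage('Docker Build') {
            steps {
                sh 'docker-compose up -d --build'
            }
        }
        stage('Verification') {
            steps {
                sh 'docker-compose exec -T cli curl http://nginx:8080 -v'
                sh 'docker-compose down'
            }
        }
    }
}

docker-compose reads COMPOSE_PROJECT_NAME from the environment, so every container and the default network get the per-build prefix instead of colliding on denpal_default.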

Jenkins docker container simply hangs and never executes steps

I'm trying to run a Python image in Jenkins to perform a series of unit tests with pytest, but I'm getting some strange behavior with Docker.
My Jenkinsfile pipeline is
pipeline {
agent {
docker { image 'python:3.6-jessie' }
}
stages {
stage('Run tests') {
steps {
withCredentials([
string(credentialsId: 'a-secret', variable: 'A_SECRET')
]) {
sh label: "Install dependencies", script: 'pip install -r requirements.txt'
sh label: 'Execute tests', script: "pytest mytests.py"
}
}
}
}
}
However, when I run the pipeline, Docker appears to be executing a very long instruction (with significantly more -e environment variables than I defined as credentials?), followed by cat.
The build then simply hangs and never finishes:
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 996:994
-w /var/lib/jenkins/workspace/myproject
-v /var/lib/jenkins/workspace/myproject:/var/lib/jenkins/workspace/myproject:rw,z
-v /var/lib/jenkins/workspace/myproject@tmp:/var/lib/jenkins/workspace/myproject@tmp:rw,z
-e ******** -e ******** python:3.6-jessie cat
When I SSH into my instance and run docker ps, I see
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
240d00459d92 python:3.6-jessie "cat" About a minute ago Up About a minute kind_wright
Why is Jenkins running cat? Why does Jenkins say I am not running inside a container, when it has clearly created a container for me? And most importantly, why are my pip install -r requirements and other steps not executing?
I finally figured this out. If you have empty global environment variables in your Jenkins configuration, it appears that you'll get a malformed docker run command, because Jenkins writes the command, with your empty-string environment variable, as docker run -e some_env_var=some_value -e = ...
This will cause the container to simply hang.
A telltale sign that this is happening is you'll get the error message:
invalid argument "=" for "-e, --env" flag: invalid environment variable: =
This is initially difficult to diagnose since Jenkins (rightfully) hides your actual credentials with ***, so the empty environment strings do not show up as empty.
You need to check your Jenkins global configuration and make sure you don't have any empty environment variables accidentally defined. If any exist, delete them and rerun the build.

Jenkins Maven Pipeline

I want to make a Jenkinsfile that will do tests and build my Spring boot Java application. The problem is that my tests require Postgres and RabbitMQ.
What I'm trying to do:
1) Setup Jenkins in docker
## Run Jenkins Docker :
sudo docker run -d -p 8080:8080 -p 50000:50000 -v /home/jenkins:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock -u root jenkins
Bash into docker container
## Bash into new docker container
docker exec -it {{container_ID}} bash
## Download and install docker as root
curl -sSL https://get.docker.com/ | sh
exit
2) Make pipeline to do it:
pipeline {
agent {
docker {
image 'maven:3-alpine'
args '-v /root/.m2:/root/.m2'
}
}
stages {
stage('Build') {
steps {
sh 'mvn -B -DskipTests clean package'
}
}
stage('Test') {
steps {
/* Run some tests which require PostgreSQL */
sh 'mvn test'
}
post {
always {
junit 'target/surefire-reports/*.xml'
}
}
}
}
}
My goal is to have postgres and rabbit launched right before the test phase. I found this https://jenkins.io/doc/book/pipeline/docker/
There is an example of how to run additional docker images:
checkout scm
/*
* In order to communicate with the MySQL server, this Pipeline explicitly
* maps the port (`3306`) to a known port on the host machine.
*/
docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw" -p 3306:3306') { c ->
/* Wait until mysql service is up */
sh 'while ! mysqladmin ping -h0.0.0.0 --silent; do sleep 1; done'
/* Run some tests which require MySQL */
sh 'make check'
}
Looking for some experienced devops who can help with my setup. Thanks.
At the time of writing, declarative pipeline doesn't support such sidecar containers (as described in the docs). So what you found is correct for your problem.
The snippet you found is, however, for scripted pipeline. To use this within your declarative pipeline, you need to wrap it in a script step:
stage('Test') {
    steps {
        script {
            docker.image('postgres:9').withRun('<whatever parameters you need>') { c ->
                sh 'mvn test'
            }
        }
    }
}
Of course, replace this with the Postgres image and parameters you actually need.
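Since the question needs both Postgres and RabbitMQ, a sketch with the two withRun blocks nested inside the script step (the image tags, ports, and password are assumptions):

stage('Test') {
    steps {
        script {
            // Throwaway sidecar containers that live only for the duration of the closure
            docker.image('postgres:9').withRun('-e POSTGRES_PASSWORD=secret -p 5432:5432') { db ->
                docker.image('rabbitmq:3').withRun('-p 5672:5672') { mq ->
                    // Ideally poll here until both services accept connections, then run the tests
                    sh 'mvn test'
                }
            }
        }
    }
}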
