I am fairly new to Jenkins and CI/CD in general, but I believe I have searched long enough to conclude that things are not working as I expect.
I want to run some frontend tests against my website and, just as in production, I want the site in one Docker container and the database in another. Jenkins documents this as "sidecar" containers which can be part of a pipeline.
Their example:
node {
checkout scm
/*
* In order to communicate with the MySQL server, this Pipeline explicitly
* maps the port (`3306`) to a known port on the host machine.
*/
docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw" -p 3306:3306') { c ->
/* Wait until mysql service is up */
sh 'while ! mysqladmin ping -h0.0.0.0 --silent; do sleep 1; done'
/* Run some tests which require MySQL */
sh 'make check'
}
}
The thing is that I do not have a 'traditional' Jenkins pipeline; I am running Jenkins Blue Ocean instead. This gives me a fancy pipeline editor, but it also means my pipeline code (Jenkinsfile) looks really different from the example:
pipeline {
agent {
docker {
image 'php'
}
}
stages {
stage('Build') {
steps {
sh 'composer --version'
sh 'composer install'
}
}
stage('Tests') {
steps {
echo 'Do test'
}
}
}
}
So how would I spawn (and tear down) these "sidecar" containers in a Blue Ocean pipeline? Currently the pipeline editor offers no options for adding a Docker-related step. Can I still use docker.image? I do have the Docker Pipeline plugin installed.
The example provided by Jenkins in the link is actually a fully functional pipeline, with one exception: you need to comment out checkout scm if you paste the pipeline script directly into Jenkins.
node {
// checkout scm
docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
docker.image('mysql:5').inside("--link ${c.id}:db") {
/* Wait until mysql service is up */
sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
}
docker.image('centos:7').inside("--link ${c.id}:db") {
/*
* Run some tests which require MySQL, and assume that it is
* available on the host name `db`
*/
sh 'make check'
}
}
}
What may be confusing is that the code style in the example above is very different from the one generated by the Blue Ocean pipeline editor. That is because the example is written as a Scripted Pipeline, while Blue Ocean generates a Declarative Pipeline. Both are fully supported in Jenkins and both use the same engine underneath, but the syntax differences can cause confusion at first.
You can use the Scripted Pipeline example above just fine, but if you want to keep the Declarative Pipeline, you can run the scripted part inside a script step. In both cases you need to change the Docker images and executed commands according to your needs.
pipeline {
    agent any
    stages {
        stage('Build and test') {
            steps {
                script {
                    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
                        docker.image('mysql:5').inside("--link ${c.id}:db") {
                            /* Wait until mysql service is up */
                            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
                        }
                        docker.image('centos:7').inside("--link ${c.id}:db") {
                            /*
                             * Run some tests which require MySQL, and assume that it is
                             * available on the host name `db`
                             */
                            sh 'make check'
                        }
                    }
                }
            }
        }
    }
}
Note that no extra node block is needed inside script: the declarative agent already allocates an executor, and a nested node would request a second one.
Please note:
The Docker container link feature used in this example is a legacy feature and may eventually be removed.
The pipeline will fail at make check, because make is not provided in the centos:7 image.
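Since --link is deprecated, a commonly suggested replacement is a user-defined bridge network. Below is a minimal sketch of the same flow on such a network; the network name test-net is an arbitrary choice, and the wait loop and make check carry the same caveats as above:
node {
    sh 'docker network create test-net || true'
    try {
        docker.image('mysql:5').withRun('--network test-net --network-alias db -e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
            docker.image('mysql:5').inside('--network test-net') {
                /* Wait until the mysql service is reachable under the alias `db` */
                sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
            }
            docker.image('centos:7').inside('--network test-net') {
                /* Run tests that reach MySQL on the host name `db` */
                sh 'make check'
            }
        }
    } finally {
        sh 'docker network rm test-net'
    }
}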
More than half a year later I finally figured out it was much simpler than I thought: it can be done with docker-compose.
You need to make sure that your Jenkins has access to docker-compose. So if you are running Jenkins as a Docker container, ensure it has access to the Docker socket. Also, Jenkins is not likely to ship with docker-compose included (JENKINS-51898), so you will have to build your own Blue Ocean image with docker-compose installed.
Rather than copying the file below, check https://docs.docker.com/compose/install/ for the latest version!
# Dockerfile
FROM jenkinsci/blueocean
USER root
RUN curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose && \
chmod +x /usr/local/bin/docker-compose
USER jenkins
Once you have Jenkins and Docker up and running, you can deploy a test version of your application with a regular docker-compose file, including all database and other containers you might need. You can install dependencies and start the tests by using docker-compose exec to execute commands inside the containers started with docker-compose.
Note that docker-compose -f docker-compose.test.yml exec -T php composer install executes the composer install command in the container that was defined as the php service in the docker-compose file. The -T flag disables pseudo-TTY allocation, which is needed because Jenkins' sh step does not provide a TTY.
In the end, no matter the outcome of the tests, all containers and associated volumes (-v flag) are shut down and removed.
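For reference, a hypothetical minimal docker-compose.test.yml matching the commands below could look like this; the php service name is what the exec commands target, and the images are placeholder assumptions (in practice you would use an image that also contains Composer):
# docker-compose.test.yml (hypothetical minimal example)
version: '3'
services:
  php:
    image: php:7-cli           # placeholder; use an image with PHP and Composer
    working_dir: /app
    volumes:
      - .:/app
    command: tail -f /dev/null # keep the container alive so `exec` has a target
    depends_on:
      - db
  db:
    image: mysql:5
    environment:
      MYSQL_ROOT_PASSWORD: my-secret-pw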
# Jenkinsfile
pipeline {
agent any
stages {
stage('Start') {
steps {
sh 'docker-compose -f docker-compose.test.yml up -d'
}
}
stage('Composer install') {
steps {
sh 'docker-compose -f docker-compose.test.yml exec -T php composer install --no-interaction --no-progress --optimize-autoloader'
}
}
stage('Test') {
steps {
sh 'docker-compose -f docker-compose.test.yml exec -T php <run test script>'
}
}
}
post {
always {
sh 'docker-compose -f docker-compose.test.yml down -v'
}
}
}
I have a Jenkins job that runs on a remote node (Linux) using the "Restrict where this project can be run" label.
It has a "Build" step -> "Execute shell".
In the Execute shell field I have:
sh /app/runalljobs.sh &
On the remote node host runalljobs.sh looks like below:
cat runalljobs.sh
ansible-playbook /app/test.yml -e argu=arg1
ansible-playbook /app/test.yml -e argu=arg2
.....
.....
ansible-playbook /app/test.yml -e argu=arg16
runalljobs.sh is supposed to start 16 ansible processes in the background when it is executed.
This works fine when the script is executed manually from a PuTTY shell on the remote node.
However, when the script is invoked via the Jenkins job, it does not start the ansible processes in the background on the remote node.
I also tried commenting out sh /app/runalljobs.sh &
and adding the individual ansible commands in the "Execute shell" as below:
ansible-playbook /app/test.yml -e argu=arg1 &
ansible-playbook /app/test.yml -e argu=arg2 &
.....
.....
ansible-playbook /app/test.yml -e argu=arg16 &
But this too did not trigger the ansible processes on the target node.
It works if I remove the "&", but then all the ansible commands run serially, one after the other, on the remote node.
However, I want all the ansible commands to be triggered in parallel in the background, with the Jenkins executor proceeding to the subsequent Execute shell tasks.
Can you please suggest how I can achieve this?
Jenkins allows you to perform tasks in parallel, but there's a catch: it requires switching to Jenkins Pipeline and using the parallel directive. Your build script would then look like:
pipeline {
    agent { label 'my-remote-machine' }
    stages {
        ...
        stage('Ansible stuff') {
            parallel {
                stage('arg1') {
                    steps {
                        sh 'ansible-playbook /app/test.yml -e argu=arg1'
                    }
                }
                stage('arg2') {
                    steps {
                        sh 'ansible-playbook /app/test.yml -e argu=arg2'
                    }
                }
                ...
            }
        }
    }
}
If your command lines are quite similar (as in your example), you could use a matrix section to simplify the code:
matrix {
    axes {
        axis {
            name 'ARG'
            values 'arg1', 'arg2', 'arg3'
        }
    }
    stages {
        stage('test') {
            steps {
                sh 'ansible-playbook /app/test.yml -e argu=${ARG}'
            }
        }
    }
}
I know that this solution is a radical change: Jenkins Pipelines are a whole new world of CI. But it may be worth the effort, because Pipelines are heavily promoted by the Jenkins authors and many plugins have been rewritten to work with them.
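As background: the reason the plain & approach dies is Jenkins' ProcessTreeKiller, which reaps every process spawned by a build once the build finishes. If you must stay with a freestyle job, a commonly used (if hacky) workaround is to mask the environment cookie it looks for; a sketch, with log paths of your choosing:
# Execute shell (freestyle job) - workaround sketch
# BUILD_ID=dontKillMe hides these processes from the ProcessTreeKiller
BUILD_ID=dontKillMe nohup ansible-playbook /app/test.yml -e argu=arg1 > /tmp/arg1.log 2>&1 &
BUILD_ID=dontKillMe nohup ansible-playbook /app/test.yml -e argu=arg2 > /tmp/arg2.log 2>&1 &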
I've been trying to follow the CI setup example that Cypress provides for Jenkins here.
I'm using the following Jenkins pipeline script (also provided by the good folks at Cypress):
// Example Jenkins pipeline with Cypress end-to-end tests running in parallel on 2 workers
// Pipeline syntax from https://jenkins.io/doc/book/pipeline/
// Setup:
// before starting Jenkins, I have created several volumes to cache
// Jenkins configuration, NPM modules and Cypress binary
// docker volume create jenkins-data
// docker volume create npm-cache
// docker volume create cypress-cache
// Start Jenkins command line by line:
// - run as "root" user (insecure, contact your admin to configure user and groups!)
// - run Docker in detached mode
// - name running container "blue-ocean"
// - map port 8080 with Jenkins UI
// - map volumes for Jenkins data, NPM and Cypress caches
// - pass Docker socket which allows Jenkins to start worker containers
// - download and execute the latest BlueOcean Docker image
// docker run \
// -u root \
// -d \
// --name blue-ocean \
// -p 8080:8080 \
// -v jenkins-data:/var/jenkins_home \
// -v npm-cache:/root/.npm \
// -v cypress-cache:/root/.cache \
// -v /var/run/docker.sock:/var/run/docker.sock \
// jenkinsci/blueocean:latest
// If you start for the very first time, inspect the logs from the running container
// to see Administrator password - you will need it to configure Jenkins via localhost:8080 UI
// docker logs blue-ocean
pipeline {
agent {
// this image provides everything needed to run Cypress
docker {
image 'cypress/base:10'
}
}
stages {
// first stage installs node dependencies and Cypress binary
stage('build') {
steps {
// there are a few default environment variables on Jenkins
// on local Jenkins machine (assuming port 8080) see
// http://localhost:8080/pipeline-syntax/globals#env
echo "Running build ${env.BUILD_ID} on ${env.JENKINS_URL}"
sh 'npm ci'
sh 'npm run cy:verify'
}
}
stage('start local server') {
steps {
// start local server in the background
// we will shut it down in "post" command block
sh 'nohup npm start &'
}
}
// this stage runs end-to-end tests, and each agent uses the workspace
// from the previous stage
stage('cypress parallel tests') {
environment {
// we will be recording test results and video on the Cypress Dashboard
// to record we need to set an environment variable
// we can load the record key variable from credentials store
// see https://jenkins.io/doc/book/using/using-credentials/
CYPRESS_RECORD_KEY = credentials('cypress-example-kitchensink-record-key')
// because parallel steps share the workspace they might race to delete
// screenshots and videos folders. Tell Cypress not to delete these folders
CYPRESS_trashAssetsBeforeRuns = 'false'
}
// https://jenkins.io/doc/book/pipeline/syntax/#parallel
parallel {
// start several test jobs in parallel, and they all
// will use Cypress Dashboard to load balance any found spec files
stage('tester A') {
steps {
echo "Running build ${env.BUILD_ID}"
sh "npm run e2e:record:parallel"
}
}
// second tester runs the same command
stage('tester B') {
steps {
echo "Running build ${env.BUILD_ID}"
sh "npm run e2e:record:parallel"
}
}
}
}
}
post {
// shutdown the server running in the background
always {
echo 'Stopping local server'
sh 'pkill -f http-server'
}
}
}
I'm using a "Trigger builds remotely" build trigger. When I execute that URL, the first steps (downloading the newest blue-ocean docker image) seem to work successfully.
When npm tries to load package.json, however, it doesn't seem to find it. Images of my Jenkins console log here and here. I'm not sure why the package.json file is not there. Should I be adding it from the kitchensink repo somehow?
Many thanks for any tips.
I have Jenkins running on an EC2 instance. I have the EC2 Plugin configured in a peered VPC, and when a job is tagged 'support_ubuntu_docker' it will spin up a Jenkins slave with Docker pre-installed.
I am able to follow the examples and get my job to connect to the local Docker running on the slave and run commands inside the container.
Working: https://pastebin.com/UANvjhnA
pipeline {
agent {
docker {
image 'node:7-alpine'
label 'support_ubuntu_docker'
}
}
stages {
stage('Test') {
steps {
sh 'node --version'
}
}
}
}
Not working: https://pastebin.com/MsSZaPha
pipeline {
agent {
docker {
image 'hashicorp/terraform:light'
label 'support_ubuntu_docker'
}
}
stages {
stage('Test') {
steps {
sh 'terraform --version'
}
}
}
}
I have tried with the ansible/ansible:default image, as well as an image I created myself.
FROM alpine:3.7
RUN apk add --no-cache terraform
RUN apk add --no-cache ansible
ENTRYPOINT ["/bin/ash"]
This image behaves as expected locally:
docker exec -it 3843653932c8 ash
/ # terraform --version
Terraform v0.11.0
/ # ansible --version
ansible 2.4.6.0
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15 (default, Aug 22 2018, 13:24:18) [GCC 6.4.0]
/ #
I really just want to be able to clone my Terraform git repo and use the terraform binary in the container to run my init/plan/apply steps.
The error I'm getting for all of these is:
java.io.IOException: Failed to run top 'c9dfeda21b718b9df1035500adf2ef80c5c3807cf63e724317d620d4bcaa14b3'. Error: Error response from daemon: Container c9dfeda21b718b9df1035500adf2ef80c5c3807cf63e724317d620d4bcaa14b3 is not running
This really should have been a Docker question: what's the difference between node:7-alpine and hashicorp/terraform:light?
hashicorp/terraform:light has an ENTRYPOINT entry pointing to /bin/terraform.
Basically that means you run it this way:
docker run hashicorp/terraform:light --version
And it will exit right away, i.e., it's not interactive.
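You can verify an image's entrypoint yourself with the standard Docker CLI:
docker inspect --format '{{.Config.Entrypoint}}' hashicorp/terraform:light
# prints something like [/bin/terraform]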
So if you want an interactive shell within that Docker container, you'll have to override the ENTRYPOINT to point at a shell and also tell Docker to run interactively. Note that hashicorp/terraform:light is Alpine-based, so it ships /bin/sh rather than /bin/bash:
pipeline {
agent {
docker {
image 'hashicorp/terraform:light'
args '-it --entrypoint=/bin/sh'
label 'support_ubuntu_docker'
}
}
stages {
stage('Test') {
steps {
sh 'terraform --version'
}
}
}
}
In a scripted pipeline you can do this:
docker.image(dockerImage).inside("--entrypoint=''") {
// code to run on container
}
If you are creating the image to use in Jenkins from a base image that already has an ENTRYPOINT instruction, you can override it by adding this line to the end of your own Dockerfile:
ENTRYPOINT []
Then the whole --entrypoint is no longer needed.
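For example, a minimal wrapper image could look like this (a sketch that only neutralizes the entrypoint):
# Dockerfile
FROM hashicorp/terraform:light
ENTRYPOINT []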
I had to change the entrypoint to empty to get it working; with the following script it worked like a charm:
pipeline {
agent {
docker {
image 'hashicorp/terraform:light'
args '-i --entrypoint='
}
}
stages {
stage('Test') {
steps {
sh 'terraform --version'
}
}
}
}
Here is my Jenkins Pipeline:
pipeline {
agent {
docker {
image 'node:6-alpine'
args '-p 3000:3000'
}
}
environment {
CI = 'true'
}
stages {
stage('Build') {
steps {
sh 'npm install'
sh 'npm build'
}
}
stage('Deliver') {
steps {
sh './jenkins/scripts/deliver.sh'
input message: 'Finished using the web site? (Click "Proceed" to continue)'
sh './jenkins/scripts/kill.sh'
}
}
stage('Deploy') {
steps {
sh './jenkins/scripts/deploy.sh'
}
}
    }
}
I use Docker and the jenkinsci/blueocean image to run Jenkins. The first two stages are fairly standard for building a NodeJS app; the third one, however, is where I want Jenkins to copy new files to the server. Here is the deploy.sh file:
#!/usr/bin/env sh
set -x
scp -o StrictHostKeyChecking=no -r dist/* deviceappstore:/var/www/my_website/static/
There are two problems: first, jenkinsci/blueocean does not have scp installed, and second, ~/.ssh/config does not exist inside the Jenkins Docker image, so scp will fail to authenticate. My solution was to build a custom image that extends jenkinsci/blueocean, sets up scp, and copies the config file and SSH key into it.
There are some plugins like Publish Over SSH, but it seems they are not useful for Pipeline projects.
Is there any better solution? Is the whole scenario right, or am I doing something wrong? I'm looking for the most secure and standard solution to this problem.
OK, I think I found a good solution.
Thanks to the SSH Agent plugin I can easily pass the credentials to the scp command and copy the files to the server. Something like this:
...
stage('Deploy') {
steps {
sshagent(['my SSH']) {
echo 'this works...'
sh 'scp -o StrictHostKeyChecking=no -r dist/* my_server:/var/www/my_site/static/'
}
}
}
...
This is perfect because all the credentials stay inside the Jenkins server and there's nothing about them in the repo.
To be able to use this, there's one more step: you need to use apk inside the jenkinsci/blueocean (Alpine-based) image to set up OpenSSH:
apk add openssh
Or, better, create a new Dockerfile and build your own image.
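A minimal sketch of such a Dockerfile, following the same pattern as the Blue Ocean image with docker-compose shown earlier on this page:
# Dockerfile
FROM jenkinsci/blueocean
USER root
RUN apk add --no-cache openssh
USER jenkins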
I want to make a Jenkinsfile that will test and build my Spring Boot Java application. The problem is that my tests require Postgres and RabbitMQ.
What I'm trying to do:
1) Set up Jenkins in Docker
## Run Jenkins Docker :
sudo docker run -d -p 8080:8080 -p 50000:50000 -v /home/jenkins:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock -u root jenkins
Bash into docker container
## Bash into new docker container
docker exec -it {{container_ID}} bash
## Download and install Docker as root
curl -sSL https://get.docker.com/ | sh
exit
2) Make a pipeline to do it:
pipeline {
agent {
docker {
image 'maven:3-alpine'
args '-v /root/.m2:/root/.m2'
}
}
stages {
stage('Build') {
steps {
sh 'mvn -B -DskipTests clean package'
}
}
stage('Test') {
steps {
/* Run some tests which require PostgreSQL */
sh 'mvn test'
}
post {
always {
junit 'target/surefire-reports/*.xml'
}
}
}
}
}
My goal is to add Postgres and RabbitMQ so that they are launched right before the test phase. I found this: https://jenkins.io/doc/book/pipeline/docker/
There is an example of how to run additional Docker images:
checkout scm
/*
* In order to communicate with the MySQL server, this Pipeline explicitly
* maps the port (`3306`) to a known port on the host machine.
*/
docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw" -p 3306:3306') { c ->
/* Wait until mysql service is up */
sh 'while ! mysqladmin ping -h0.0.0.0 --silent; do sleep 1; done'
/* Run some tests which require MySQL */
sh 'make check'
}
Looking for some experienced DevOps folks who can help with my setup. Thanks.
At the time of writing, Declarative Pipeline doesn't support such sidecar containers (as described in the docs). So what you found is correct for your problem.
The snippet you found is, however, for Scripted Pipeline. To use it within your Declarative Pipeline, you need to wrap it in a script step:
stage('Test') {
    steps {
        script {
            docker.image('postgres:9').withRun('<whatever parameters you need>') { c ->
                sh 'mvn test'
            }
        }
    }
}
Of course, adjust the image and parameters to the PostgreSQL setup you actually need.
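Since your tests also need RabbitMQ, you can nest a second withRun inside the first. A minimal sketch (the image tags, ports and credentials are placeholders, and it assumes your tests are configured to reach both services, e.g. via the host-mapped ports):
stage('Test') {
    steps {
        script {
            docker.image('postgres:9').withRun('-e "POSTGRES_PASSWORD=secret" -p 5432:5432') { pg ->
                docker.image('rabbitmq:3').withRun('-p 5672:5672') { mq ->
                    /* both services are up for the duration of this block */
                    sh 'mvn test'
                }
            }
        }
    }
}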