Jenkins Pipeline Across Multiple Docker Images

Using a declarative pipeline in Jenkins, how do I run stages across multiple versions of a Docker image? I want to execute the following Jenkinsfile on Python 2.7, 3.5, and 3.6. Below is a pipeline file for building and testing a Python project in a Docker container.
pipeline {
    agent {
        docker {
            image 'python:2.7.14'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'pip install pipenv'
                sh 'pipenv install --dev'
            }
        }
        stage('Test') {
            steps {
                sh 'pipenv run pytest --junitxml=TestResults.xml'
            }
        }
    }
    post {
        always {
            junit 'TestResults.xml'
        }
    }
}
What is the minimal amount of code needed to make sure the same steps succeed across Python 3.5 and 3.6? The hope is that if a test fails, it is evident which version(s) the test fails on.
Or is what I'm asking for not possible with declarative pipelines (e.g. would scripted pipelines solve this problem more elegantly)?
As a comparison, this is how Travis CI lets you specify runs across different Python versions.

I had to resort to a scripted pipeline and combine all the stages
def pythons = ["2.7.14", "3.5.4", "3.6.2"]

def steps = pythons.collectEntries {
    ["python $it": job(it)]
}

parallel steps

def job(version) {
    return {
        // Allocate an executor and workspace before starting the container
        node {
            docker.image("python:${version}").inside {
                checkout scm
                sh 'pip install pipenv'
                sh 'pipenv install --dev'
                sh 'pipenv run pytest --junitxml=TestResults.xml'
                junit 'TestResults.xml'
            }
        }
    }
}
The resulting pipeline shows one parallel branch per Python version, so a failing test is attributed to the version it failed on.
Ideally we'd be able to break each parallel job into stages (Setup, Build, Test), but the UI currently doesn't support this (still not supported at the time of writing).
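For completeness, newer Jenkins releases (Declarative Pipeline 1.5 and later) offer a matrix directive that can express the same fan-out declaratively. The sketch below is only an illustration of that directive, reusing the versions and steps from the question rather than the setup used in the answer above, and it assumes a Jenkins new enough to support matrix.
pipeline {
    agent none
    stages {
        stage('Python versions') {
            matrix {
                axes {
                    axis {
                        name 'PYTHON_VERSION'
                        values '2.7.14', '3.5.4', '3.6.2'
                    }
                }
                // Each matrix cell gets its own agent and workspace, so a
                // failing test is reported under the Python version it ran on.
                agent {
                    docker { image "python:${PYTHON_VERSION}" }
                }
                stages {
                    stage('Build') {
                        steps {
                            sh 'pip install pipenv'
                            sh 'pipenv install --dev'
                        }
                    }
                    stage('Test') {
                        steps {
                            sh 'pipenv run pytest --junitxml=TestResults.xml'
                        }
                    }
                }
                post {
                    always {
                        junit 'TestResults.xml'
                    }
                }
            }
        }
    }
}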

Related

Jenkins Bitbucket: using Pipeline and Pipeline script, but also running when new data is pushed to Bitbucket

I have created an item using Pipeline, and then in the pipeline selected Pipeline script.
This allows me to run the build in stages, as below:
pipeline {
    agent any
    tools {
        terraform 'terraform-11'
    }
    stages {
        stage('Git Checkout terraform') {
            steps {
                git credentialsId: '********', url: 'https://******/********.git'
            }
        }
        stage('Terraform Init') {
            steps {
                sh 'terraform init'
            }
        }
        stage('Terraform A') {
            steps {
                dir('dev') {
                    sh 'terraform plan -var-file="terraform.tfvars"'
                    sh 'terraform apply -auto-approve'
                }
            }
        }
        stage('Terraform B') {
            steps {
                dir('env') {
                    sh 'terraform plan -var-file="terraform.tfvars"'
                    sh 'terraform apply -auto-approve'
                }
            }
        }
    }
}
This works very well: I take the code out and run a series of stages (there are more stages than shown here). What I would like is for the Jenkins build to run every time the Terraform scripts are updated. I have looked at examples, but none of them cover the Pipeline / Pipeline script setup.
There is the Freestyle project, but it does not allow me to build all the stages I need.
There is Pipeline / Pipeline script from SCM, which again does not allow me to build all the stages I need.
What I want to do is stick with my current pipeline, but set it up so it runs when scripts are pushed to Bitbucket. All I need is a pointer to the right documentation, if this is possible. If it's not possible, then I will go back to the drawing board.
I worked out the solution. I set up an item that is a Folder and configured the Git repo on it. Then I created a Jenkinsfile with all the stages and steps and committed it to the repo being built. The build then runs the main item, which pulls in the Jenkinsfile and runs it.
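For reference, here is a minimal sketch of what that Jenkinsfile in the repository might look like; the pollSCM schedule is an assumption and can be dropped if a Bitbucket webhook (or the Bitbucket Branch Source plugin) triggers the build instead, and the sub-directory name is a placeholder.
pipeline {
    agent any
    // Fallback trigger: poll Bitbucket if no webhook is configured
    // (hypothetical two-minute schedule).
    triggers {
        pollSCM('H/2 * * * *')
    }
    tools {
        terraform 'terraform-11'
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm   // uses the SCM configured on the folder/job
            }
        }
        stage('Terraform Init') {
            steps {
                sh 'terraform init'
            }
        }
        stage('Terraform Plan & Apply') {
            steps {
                dir('dev') {   // placeholder sub-directory holding the .tf files
                    sh 'terraform plan -var-file="terraform.tfvars"'
                    sh 'terraform apply -auto-approve'
                }
            }
        }
    }
}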

How to use a Jenkinsfile for these build steps?

I'm learning how to use Jenkins and am working on configuring a Jenkinsfile instead of defining the build through the Jenkins UI.
In the UI build, the source code management step pulls from Bitbucket and the build step builds a Docker container; the build is a multi-configuration project (screenshots omitted).
Reading the Jenkinsfile documentation at https://www.jenkins.io/doc/book/pipeline/jenkinsfile/index.html and creating a new build using Pipeline:
I'm unsure how to express the steps I've configured via the UI (Source Code Management and Build). How do I convert the Docker and Bitbucket configuration so it can be used with a Jenkinsfile?
The SCM configuration does not change, regardless of whether you use the UI or a pipeline, although in theory you can do the git clone from steps in the pipeline if you really insist on converting the SCM setup into pure pipeline steps.
The pipeline can have multiple stages, and each stage can have a different execution environment. You can use the Docker Pipeline plugin, or you can use plain sh to issue the docker commands on the build agent.
Here is a small sample from one of my manual build pipelines:
pipeline {
    agent none
    stages {
        stage('Init') {
            agent { label 'docker-x86' }
            steps {
                checkout scm
                sh 'docker stop demo-001c || true'
                sh 'docker rm demo-001c || true'
            }
        }
        stage('Build Back-end') {
            agent { label 'docker-x86' }
            steps {
                sh 'docker build -t demo-001:latest ./docker'
            }
        }
        stage('Test') {
            agent {
                docker {
                    label 'docker-x86'
                }
            }
            steps {
                sh 'docker run --name demo-001c demo-001:latest'
                sh 'cd test && make test-back-end'
            }
        }
    }
}
You need to create a Pipeline type of project and specify the SCM configuration in the General tab. In the Pipeline tab, you will have the option to select Pipeline script or Pipeline script from SCM. It's always better to start with Pipeline script while you are building and modifying your workflow. Once it's stabilized, you can add it to the repository.
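As a rough sketch of that conversion (not the asker's exact configuration), the git step below stands in for the UI's Source Code Management section and the docker build command for the UI Build step; the repository URL, credentials ID and image name are placeholders.
pipeline {
    agent any   // assumes the agent has a Docker daemon available
    stages {
        stage('Checkout') {
            steps {
                // Equivalent of the UI "Source Code Management" section
                git branch: 'master',
                    credentialsId: 'bitbucket-credentials',               // placeholder
                    url: 'https://bitbucket.org/your-team/your-repo.git'  // placeholder
            }
        }
        stage('Build Docker image') {
            steps {
                // Equivalent of the UI "Build" step that builds the container
                sh 'docker build -t your-repo:${BUILD_NUMBER} .'
            }
        }
    }
}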

Using withEnv in a declarative pipeline

I'm trying to run a docker command in my declarative pipeline. To set up the Docker environment on my slave machine I'm trying to use the Docker Commons plugin (https://plugins.jenkins.io/docker-commons/), but with no success.
On further research I found the link below, which describes how to use this plugin:
https://automatingguy.com/2017/11/06/jenkins-pipelines-simple-delivery-flow/
I have configured Docker in Manage Jenkins -> Global Tool Configuration, but I can't find out how to use the section below in my declarative Jenkins pipeline; I think this structure/syntax will work for a scripted Jenkins pipeline:
def dockerTool = tool name: 'docker', type: 'org.jenkinsci.plugins.docker.commons.tools.DockerTool'

withEnv(["DOCKER=${dockerTool}/bin"]) {
    stages {}
}
Can someone please help me figure out how to use the Docker Commons tool in a declarative Jenkins pipeline?
Note: I cannot switch to a scripted pipeline due to standardization with other projects.
Here is a working example:
pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                script {
                    test_env = "this is test env"
                    withEnv(["myEnv=${test_env}"]) {
                        echo "${env.myEnv}"
                    }
                }
            }
        }
    }
}
I have a feeling that you don't need to use either withEnv or Docker Commons. Have you seen this? https://www.jenkins.io/doc/book/pipeline/docker/
There are plenty of good examples there of how to use Docker with a Jenkinsfile.
My attempt to answer your question (if I got it right): if you are asking about the declarative equivalent of the scripted withEnv, then you are probably looking for environment {}. Something like this:
pipeline {
    agent any
    environment {
        DOCKER = "${dockerTool}/bin"
    }
    stages {
        stage('One') {
            steps {
                // steps here
            }
        }
    }
}
Here is a working declarative pipeline solution as of Docker Commons v1.17.
Note: the tool name dockerTool is a keyword, and docker-19.03.11 is the name I gave my installation on the Jenkins > Manage Jenkins > Global Tool Configuration page.
pipeline {
    agent any
    tools {
        dockerTool 'docker-19.03.11'
    }
    stages {
        stage('build') {
            steps {
                sh '''
                    echo 'FROM mongo:3.2' > Dockerfile
                    echo 'CMD ["/bin/echo", "HELLO WORLD...."]' >> Dockerfile
                '''
                script {
                    docker.withRegistry('http://192.168.99.100:5000/v2/') {
                        def image = docker.build('test/helloworld2:$BUILD_NUMBER')
                    }
                }
            }
        }
    }
}

Jenkins Declarative Pipeline - SCM

I am following a Jenkins tutorial. The sample code I am reading is:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'python:2-alpine'
                }
            }
            steps {
                sh 'python -m py_compile sources/add2vals.py sources/calc.py'
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'qnib/pytest'
                }
            }
            steps {
                sh 'py.test --verbose --junit-xml test-reports/results.xml sources/test_calc.py'
            }
            post {
                always {
                    junit 'test-reports/results.xml'
                }
            }
        }
        stage('Deliver') {
            agent {
                docker {
                    image 'cdrx/pyinstaller-linux:python2'
                }
            }
            steps {
                sh 'pyinstaller --onefile sources/add2vals.py'
            }
            post {
                success {
                    archiveArtifacts 'dist/add2vals'
                }
            }
        }
    }
}
So basically there are three stages: Build, Test and Deliver. They all use different images, which generate different containers. But this Jenkins job is configured to use Git as the SCM.
So when this Jenkins build runs, the project is built in the first container, then the second stage tests the project in another container, followed by the Deliver stage in a third container. How does this Jenkins job make sure that these three stages operate on the code sequentially?
Based on my understanding, each stage would need to perform a git clone/git pull, and before the stage finishes, a git push would be required.
If SCM is configured through Jenkins to use Git, do we need to include git clone/git pull, as well as git push, in the corresponding shell scripts under steps, or is that already taken care of by the SCM configuration of Jenkins?
Thanks
In this case, you must ensure that the binary in the QA environment is the same one that ends up in the UAT environment and then in Production.
For this, you must use an artifact repository or registry (Artifactory, Nexus, Docker Registry, etc.) to promote the artifacts to the Production environment.
See this link and note how it was done in the Pipeline.
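As one hedged illustration of that promotion idea, assuming the deliverable were packaged as a Docker image, an extra stage like the one below could push the tested build to a registry so the identical artifact is what later reaches UAT and Production; the registry URL and credentials ID are placeholders.
stage('Publish') {
    agent any
    steps {
        script {
            // Push the image that passed the Test stage so the exact same
            // artifact can be promoted through the environments later.
            docker.withRegistry('https://registry.example.com', 'registry-credentials') {  // placeholders
                def image = docker.build("add2vals:${env.BUILD_NUMBER}")
                image.push()
                image.push('latest')
            }
        }
    }
}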

Jenkins 2 Build Processors for one Job

I have a little problem with a multibranch Pipeline job: it always requires two build executors. Unfortunately I don't want to unlock more build executors in Jenkins, but I do want to know why Jenkins always uses two executors for this job. Can anyone tell me why Jenkins uses two executors for this job at the same time?
pipeline {
    options { disableConcurrentBuilds() }
    agent { label 'myServer' }
    stages {
        stage('helloworld') {
            agent {
                docker {
                    image 'ubuntu:16.04'
                    label 'myServer'
                }
            }
            steps {
                dir('build') {
                    sh 'npm i'
                    sh 'npm run gulp clean:all'
                    sh 'npm run gulp ci:all'
                }
            }
        }
    }
}
There is an option reuseNode for docker agents, which is false by default. I assume this may be the reason why Jenkins needs two executors (one for each agent) in your case, although I'm not sure.
The option is documented in the Jenkins Declarative Syntax documentation (https://jenkins.io/doc/book/pipeline/syntax/#common-options) under Sections > Agent > Common Options > reuseNode.
Could you try with reuseNode enabled and report whether it solves the problem?
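If it helps to experiment, here is the pipeline from the question with only that option added; this is a suggestion to try, not a verified fix.
pipeline {
    options { disableConcurrentBuilds() }
    agent { label 'myServer' }
    stages {
        stage('helloworld') {
            agent {
                docker {
                    image 'ubuntu:16.04'
                    label 'myServer'
                    reuseNode true   // run the container on the node already allocated above
                }
            }
            steps {
                dir('build') {
                    sh 'npm i'
                    sh 'npm run gulp clean:all'
                    sh 'npm run gulp ci:all'
                }
            }
        }
    }
}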
