Parallel execution of Jenkins builds in different directories

I use Jenkins(file) Pipelines and want to run multiple build steps in parallel (with different constants, for example - these can't be passed to the compiler, so the source code has to be changed by a script).
This could look something like this:
stage('Build') {
    steps {
        parallel(
            build_default: {
                echo "WORKSPACE: ${WORKSPACE}"
                bat 'build.bat'
            },
            build_remove: {
                echo "WORKSPACE 2: ${WORKSPACE}"
                // EXAMPLE: only to test interference
                deleteDir() // <- this would be code changes
            }
        )
    }
}
This does not work, since all the code is deleted before compilation is done. I want to run both steps in parallel the way Jenkins does when two builds of the same job run concurrently (triggered by button presses, for example): it creates separate workspace directories (@2 and so on).
The only thing I have found so far is to create temp directories myself in the working directory, copy the source code into them, and work there. But I am looking for a nicer/more automatic solution. (When using the node command I have the same problem, since I only have one node.)
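One pattern worth trying (a sketch, not from the original thread): wrap each parallel branch in its own node block, so Jenkins allocates a separate workspace lease per branch (workspace, workspace@2, and so on). This assumes the agent has more than one executor and that each branch can check out its own copy of the sources; the script that edits the sources (apply_changes.bat) is purely hypothetical.

stage('Build') {
    steps {
        parallel(
            build_default: {
                node {
                    // this node block gets its own workspace lease
                    checkout scm
                    bat 'build.bat'
                }
            },
            build_modified: {
                node {
                    // a concurrent allocation on the same agent lands in workspace@2
                    checkout scm
                    bat 'apply_changes.bat' // hypothetical script that edits the sources
                    bat 'build.bat'
                }
            }
        )
    }
}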

Related

run multiple steps in Jenkins pipeline

In my project, I have a requirement to run multiple steps.
I followed this guideline: Jenkins Guide
Here is the code:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'echo "Hello World"'
                sh '''
                    echo "Multiline shell steps works too"
                    ls -lah
                '''
            }
        }
    }
}
Do I have any other alternative for handling multiple steps in a Jenkins pipeline? I am also thinking of using script inside steps, but I am not sure that is a good way to do it either.
I am trying to understand the best practice for running multiple steps.
You need to better define what it is you intend to do. A starting example is here.
You need to understand the meaning of stage and step.
A stage block defines a conceptually distinct subset of tasks performed through the entire Pipeline (e.g. "Build", "Test" and "Deploy" stages)
Step: A single task. Fundamentally, a step tells Jenkins what to do at a particular point in time (or "step" in the process).
You need to think of both stage and step as atomic units. E.g., Deploy is an atomic activity, but may consist of numerous steps, some of which might have multiple instructions/commands: copy artifacts (to different targets), copy data, launch the app, etc.
This Tutorial may be useful, too.
Also, review the best practices.
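As a sketch of that split (the stage names and shell commands are illustrative, not taken from the question), each conceptual phase becomes a stage and each atomic task inside it becomes a step:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make' // one atomic task
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
                junit 'reports/**/*.xml' // a second step in the same stage
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh staging' // hypothetical deploy script
            }
        }
    }
}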

Reusing workspace across multiple nodes in parallel stages

I have parallel stages set up in my Jenkins pipeline script which execute tests on separate nodes (AWS EC2 instances), e.g.:
stage('E2E-PR') {
    parallel {
        stage('Cypress Tests 1') {
            agent { label 'aws_slave_cypress' }
            steps {
                runE2eTests()
            }
        }
        stage('Cypress Tests 2') {
            agent { label 'aws_slave_cypress' }
            steps {
                runE2eTests()
            }
        }
    }
}
I would like to re-use the checked-out repo/workspace generated on the parent node at the start of my pipeline, rather than have each parallel stage check out its own copy.
I came across an approach using sequential stages, nesting stages within stages to share a workspace across multiple stages, which I tried like below:
parallel {
    stage('Cypress Tests') {
        agent { label 'aws_slave_cypress' }
        stages {
            stage('Cypress 1') {
                steps {
                    runE2eTests()
                }
            }
            stage('Cypress 2') {
                steps {
                    runE2eTests()
                }
            }
        }
    }
}
But I can see from my Jenkins build output that only one AWS instance gets spun up and used for both of my nested stages, which doesn't give me the benefit of parallelism.
I have also come across the stash/unstash commands, but I have read that these should be used for small files and not for large directories/entire repositories?
What's the right approach here to allow parallel stages across multiple nodes to use the same originally generated workspace? Is this possible?
Thanks
I can share a bit of information about a similar situation we have in our company:
We have an automation regression suite for UI testing that needs to be executed on several different machines (Win32, Win64, Mac and more). The suite configuration is controlled by a config file that contains all relevant environment parameters and URLs.
The Job allows a user to select the suites that will be executed (and the git branch), select labels (agents) that the suites will be executed on, and select the environment configuration that those suites will use.
Here is what the flow looks like:
Generate the config file according to the user input and save (stash) it.
In parallel for all given labels: clone the suites repository using a shallow clone -> copy the configuration file (unstash) -> run the tests -> save (stash) the output of the tests.
Collect the outputs of all tests (unstash), combine them into a single report, publish the report and notify users.
The pipeline itself (simplified) will look like:
pipeline {
    agent any
    parameters {
        extendedChoice(name: 'Agents', ...)
    }
    stages {
        stage('Create Config') {
            steps {
                // create the config file called config.yaml
                ...
                stash name: 'config', includes: 'config.yaml'
            }
        }
        stage('Run Tests') {
            steps {
                script {
                    parallel params.Agents.collectEntries { agent ->
                        ["Running on ${agent}": {
                            stage(agent) {
                                node(agent) {
                                    println "Executing regression tests on ${agent}"
                                    stage('Clone Tests') {
                                        // Git - shallow clone the automation tests
                                    }
                                    stage('Update Config') {
                                        unstash 'config'
                                    }
                                    stage('Run Suite') {
                                        catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
                                            // Run the tests
                                        }
                                    }
                                    stage('Stash Results') {
                                        stash name: agent, includes: 'results.json'
                                    }
                                }
                            }
                        }]
                    }
                }
            }
        }
        stage('Combine and Publish Report') {
            steps {
                script {
                    // Unstash all results from each agent to the same folder
                    params.Agents.each { agent ->
                        dir(agent) {
                            unstash agent
                        }
                    }
                }
                // Run command to combine the results and generate the HTML report
                // Use publishHTML to publish the HTML report to Jenkins
            }
        }
    }
    post {
        always {
            // Notify users
        }
    }
}
So, as you can see, for handling relatively small files like the config files and the test result files (which are less than 1 MB), stash and unstash are quick, very easy to use, and offer great flexibility - but when you use them for bigger files, they start to overload the system.
Regarding the tests themselves, eventually you must copy the files to each workspace on the different agents, and shallow-cloning them from the repository is probably the most efficient and easiest method if your tests are not packaged.
If your tests can easily be packaged, like a Python package for example, then you can create a separate CI pipeline that archives them as a package for each code change; your automation pipeline can then consume them with tools like pip and avoid the need to clone the repository.
The same is true when you run your tests in a container: you can create a CI job that prepares an image that already contains the tests, and then just use that image in the pipeline without needing to clone them again.
If your Git server is very slow, you can always package the tests in your CI using zip or a similar archiver, upload them to S3 (using aws-steps) or an equivalent, and download them during the automation pipeline execution - but in most cases this will not offer any performance benefit.
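For reference, a shallow clone inside one of the parallel branches could look roughly like this (repository URL and branch name are placeholders):

checkout([
    $class: 'GitSCM',
    branches: [[name: '*/main']], // placeholder branch
    userRemoteConfigs: [[url: 'https://example.com/automation-tests.git']], // placeholder URL
    extensions: [[$class: 'CloneOption', shallow: true, depth: 1, noTags: true]]
])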
I suppose you could send the generated files to S3 in a post step, for example, and download those files in the first step of your pipeline.
Think about storage that you can share between Jenkins agents.
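A minimal sketch of that idea, assuming the pipeline-aws plugin is available (bucket name, paths and the SOURCE_BUILD parameter are placeholders):

// at the end of the producing run
s3Upload(bucket: 'my-shared-bucket', path: "runs/${env.BUILD_NUMBER}/", includePathPattern: 'output/**')

// in the first step of the consuming run
// SOURCE_BUILD is a hypothetical parameter identifying the producing run
s3Download(bucket: 'my-shared-bucket', path: "runs/${params.SOURCE_BUILD}/", file: 'input/', force: true)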

Jenkins: How to skip a specific stage in a multibranch pipeline?

We have "stage (build)" on all our branches. Temporarily how can we skip this stage to run on all our branches in multibranch pipeline. I know one solution is use when condition on stage and ask all developers to pull that branch into their branch. But thats lot of work and coordination. Instead I am looking for a global configuration where we can simply skip the stage by name on any branch.
It sounds like you keep the Jenkinsfile alongside the code, but want to change how the Jenkinsfile runs from an administrative point of view.
You could store this stage in a separate repository (whatever version-control flavor you prefer). Before you execute the stage, check out that repository, load the script file, and within the stage execute a method defined in that file.
Alternatively, add a parameter to the build job (this assumes you aren't setting parameters via options in the Jenkinsfile) that is a boolean to use or skip the stage; you can then modify the project configuration whenever you want to turn it on or off.
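A minimal sketch of that second option, assuming a boolean parameter named RUN_BUILD has been added in the project configuration:

stage('build') {
    // RUN_BUILD is the boolean parameter assumed to be defined in the job configuration
    when { expression { params.RUN_BUILD } }
    steps {
        sh 'make' // placeholder build command
    }
}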
What about this approach?
stage('build') {
    if (buildFlag == "disable") {
        // jump to next stage
        return
    }
    // do something
}
buildFlag could be:
a simple variable in the same pipeline
a global variable configured in Jenkins. You could set it to disabled temporarily to skip the build stage, and change it back to enabled to restore normal behavior.
You could also set the status to failure instead of returning:
currentBuild.result = 'FAILURE'
Or throw an exception and exit the entire job or pipeline.
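For example (a sketch; the message text is illustrative):

stage('build') {
    if (buildFlag == "disable") {
        currentBuild.result = 'FAILURE'
        error 'Build stage is disabled' // aborts the rest of the pipeline
    }
    // do something
}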
One option is to skip the body of the stage if the condition is met. For example:
stage('Unit Tests') {
    steps {
        script {
            if (env.BRANCH_NAME.startsWith(releaseBranch)) {
                echo 'Running unit tests with coverage...'
                // run the actual unit tests
            }
            // otherwise the stage body is skipped
        }
    }
}
The only downside is that the UI will show the "green box" for this stage - even though it effectively did nothing.
If you want to remove the stage completely for the branch, use a when directive.
stage('deploy') {
    when {
        branch 'production'
    }
    steps {
        echo 'Deploying'
    }
}
Bonus: you can also specify a "when not" directive. See How to specify when branch NOT (branch name) in jenkinsfile?
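A sketch of that inverse condition (the stage and branch names are illustrative):

stage('integration tests') {
    when {
        not { branch 'production' } // run on every branch except production
    }
    steps {
        echo 'Running integration tests'
    }
}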

Is there a way to run a pre-checkout step in declarative Jenkins pipelines?

Jenkins declarative pipelines offer a post directive to execute code after the stages have finished. Is there a similar way to run code before the stages run, and most importantly, before the SCM checkout?
For example something along the lines of:
pre {
    always {
        rm -rf ./*
    }
}
This would then clean the workspace of my build before the source code is checked out.
pre is a cool feature idea, but doesn't exist yet. skipDefaultCheckout and checkout scm (which is the same as the default checkout) are the keys:
pipeline {
    agent { label 'docker' }
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('clean_workspace_and_checkout_source') {
            steps {
                deleteDir()
                checkout scm
            }
        }
        stage('build') {
            steps {
                echo 'i build therefore i am'
            }
        }
    }
}
For the moment there are no pre-build steps, but for the purpose you are looking for it can be done in the pipeline job configuration (and also in multibranch pipeline jobs): where you define the location of your Jenkinsfile, choose Additional Behaviours -> Wipe out repository & force clone.
Delete the contents of the workspace before building, ensuring a fully fresh workspace.
If you do not really want to delete everything and want to save some network usage, you can use this other option instead: Additional Behaviours -> Clean before checkout.
Clean up the workspace before every checkout by deleting all untracked files and directories, including those which are specified in .gitignore. It also resets all tracked files to their versioned state. This ensures that the workspace is in the same state as if you cloned and checked out in a brand-new empty directory, and ensures that your build is not affected by the files generated by the previous build.
This one will not delete the workspace but just reset the repository to the original state and pull new changes if there are some.
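If you would rather keep this in the Jenkinsfile than in the job configuration, the same clean-before-checkout behaviour can be requested by extending the implicit scm object in an explicit checkout step (a sketch; it replaces the default checkout, e.g. together with skipDefaultCheckout true, and may require script approval depending on your setup):

checkout([
    $class: 'GitSCM',
    branches: scm.branches,
    userRemoteConfigs: scm.userRemoteConfigs,
    // CleanBeforeCheckout is the same behaviour as the UI option above
    extensions: scm.extensions + [[$class: 'CleanBeforeCheckout']]
])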
I use "Prepare an environment for the run / Script Content"

Obtaining test results from another job in jenkins

I have a Jenkins pipeline A that looks something like this:
Run prebuild tests
Build project
Run post-build tests
Run another pipeline B with parameters extracted from current build
I was wondering if there is a way to get the test results from pipeline B and aggregate them with the test results of pipeline A.
Currently, I have to open the console output and follow the URL to the external build.
If the above is not possible, is it possible to display this URL somewhere other than the console (e.g. as an artifact)?
I believe what you are looking for is "stash". Below is copied directly from https://jenkins.io/doc/pipeline/examples/
Synopsis
This is a simple demonstration of how to unstash to a different directory than the root directory, so that you can make sure not to overwrite directories or files, etc.
// First we'll generate a text file in a subdirectory on one node and stash it.
stage "first step on first node"
// Run on a node with the "first-node" label.
node('first-node') {
    // Make the output directory.
    sh "mkdir -p output"
    // Write a text file there.
    writeFile file: "output/somefile", text: "Hey look, some text."
    // Stash that directory and file.
    // Note that the includes could be "output/", "output/*" as below, or even
    // "output/**/*" - it all works out basically the same.
    stash name: "first-stash", includes: "output/*"
}
// Next, we'll make a new directory on a second node, and unstash the original
// into that new directory, rather than into the root of the build.
stage "second step on second node"
// Run on a node with the "second-node" label.
node('second-node') {
    // Run the unstash from within that directory!
    dir("first-stash") {
        unstash "first-stash"
    }
    // Look, no output directory under the root!
    // pwd() outputs the current directory Pipeline is running in.
    sh "ls -la ${pwd()}"
    // And look, output directory is there under first-stash!
    sh "ls -la ${pwd()}/first-stash"
}
Basically, you can copy your artifacts (say, the .xml files that result from running unit tests) from the first job to the node running the second job, and then have the unit-test processor run on the results from both the first and the second job.
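A hedged sketch of that last point, assuming the external results have already been brought into the workspace (via the stash/unstash above, or an artifact copy) and that the result paths are placeholders:

node('second-node') {
    // publish this job's own results
    junit 'build/test-results/**/*.xml'
    // publish the copied-in results from the other build as well;
    // both sets end up aggregated in this run's test report
    junit 'external-results/**/*.xml'
}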
