How to execute a script before the job starts? - jenkins

Is it possible to execute my shell script before the job starts? We are using a Jenkins pipeline, but by the time Jenkins processes the pipeline script it is already too late: we are dealing with an unknown problem involving the keychain and git, and we also use global libraries that have to be downloaded from git before the pipeline script is executed.
Therefore we need to delete the items that are causing the problems from the keychain BEFORE Jenkins downloads the global library for the job. Is there anything like this available in Jenkins?

I recommend using a pipeline, so you can control which stage is executed first.
See the example below:
pipeline {
    agent any
    stages {
        stage('before job starts') {
            steps {
                sh 'your_scripts.sh'
            }
        }
        stage('the job') {
            steps {
                sh 'run_job.sh'
            }
        }
    }
    post {
        always {
            echo 'I will always run!'
        }
    }
}

Related

Reusing workspace across multiple nodes in parallel stages

I have parallel stages set up in my Jenkins pipeline script which execute tests on separate nodes (AWS EC2 instances), e.g.:
stage('E2E-PR') {
    parallel {
        stage('Cypress Tests 1') {
            agent { label 'aws_slave_cypress' }
            steps {
                runE2eTests()
            }
        }
        stage('Cypress Tests 2') {
            agent { label 'aws_slave_cypress' }
            steps {
                runE2eTests()
            }
        }
    }
}
I would like to re-use the checked-out repo/workspace generated on the parent node at the start of my pipeline, rather than have each parallel stage check out its own copy.
I came across an approach using sequential stages, nesting stages within stages to share a workspace across multiple stages, which I tried as below:
parallel {
    stage('Cypress Tests') {
        agent { label 'aws_slave_cypress' }
        stages {
            stage('Cypress 1') {
                steps {
                    runE2eTests()
                }
            }
            stage('Cypress 2') {
                steps {
                    runE2eTests()
                }
            }
        }
    }
}
But I can see from my Jenkins build output that only one AWS instance gets spun up and used for both of my nested stages, which doesn't give me the benefit of parallelism.
I have also come across the stash/unstash commands, but I have read that these should be used for small files and not for large directories or entire repositories.
What's the right approach here to allow my parallel stages across multiple nodes to use the same originally generated workspace? Is this possible?
Thanks
I can share a bit of information on a similar situation we have in our company:
We have an automation regression suite for UI testing that needs to be executed on several different machines (Win32, Win64, Mac and more). The suite configuration is controlled by a config file that contains all relevant environment parameters and URLs.
The job allows a user to select the suites that will be executed (and the git branch), select labels (agents) that the suites will be executed on, and select the environment configuration that those suites will use.
This is what the flow looks like:
Generate the config file according to the user input and save (stash) it.
In parallel for all given labels: clone the suites repository using a shallow clone -> copy the configuration file (unstash) -> run the tests -> save (stash) the output of the tests.
Collect the outputs of all tests (unstash), combine them into a single report, publish the report and notify users.
The pipeline itself (simplified) will look like:
pipeline {
    agent any
    parameters {
        extendedChoice(name: 'Agents', ...)
    }
    stages {
        stage('Create Config') {
            steps {
                // create the config file called config.yaml
                ...
                stash name: 'config', includes: 'config.yaml'
            }
        }
        stage('Run Tests') {
            steps {
                script {
                    parallel params.Agents.collectEntries { agent ->
                        ["Running on ${agent}": {
                            stage(agent) {
                                node(agent) {
                                    println "Executing regression tests on ${agent}"
                                    stage('Clone Tests') {
                                        // Git - shallow clone the automation tests
                                    }
                                    stage('Update Config') {
                                        unstash 'config'
                                    }
                                    stage('Run Suite') {
                                        catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
                                            // Run the tests
                                        }
                                    }
                                    stage('Stash Results') {
                                        stash name: agent, includes: 'results.json'
                                    }
                                }
                            }
                        }]
                    }
                }
            }
        }
        stage('Combine and Publish Report') {
            steps {
                script {
                    // Unstash all results from each agent to the same folder
                    params.Agents.each { agent ->
                        dir(agent) {
                            unstash agent
                        }
                    }
                }
                // Run command to combine the results and generate the HTML report
                // Use publishHTML to publish the HTML report to Jenkins
            }
        }
    }
    post {
        always {
            // Notify users
        }
    }
}
So, as you can see, for handling relatively small files like the config file and the test result files (which are less than 1 MB), stash and unstash are quick, very easy to use, and offer great flexibility; but when you use them for bigger files they start to overload the system.
Regarding the tests themselves, you eventually have to copy the files to each workspace on the different agents, and shallow-cloning them from the repository is probably the most efficient and easiest method if your tests are not packaged.
If your tests can easily be packaged, like a Python package for example, then you can create a separate CI pipeline that archives them as a package on each code change; your automation pipeline can then install them with a tool like pip and avoid cloning the repository, roughly as in the sketch below.
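As an illustration only (not from the original answer), a minimal stage that installs the packaged tests; the package name ui-regression-tests and the index URL are hypothetical placeholders:
stage('Install Packaged Tests') {
    steps {
        // install the packaged tests instead of cloning the test repository
        // 'ui-regression-tests' and the index URL are hypothetical placeholders
        sh 'pip install --upgrade ui-regression-tests --index-url https://pypi.example.com/simple'
    }
}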
The same is true when you run your tests in a container: you can create a CI job that prepares an image which already contains the tests, and then just use that image in the pipeline without needing to clone the repository again (see the sketch below).
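A rough sketch of that idea, assuming a hypothetical pre-built image my-registry.example.com/ui-tests:latest that already contains the test suite and a run_tests.sh entry script:
stage('Run Suite In Container') {
    agent {
        docker {
            // hypothetical image baked with the tests during CI
            image 'my-registry.example.com/ui-tests:latest'
        }
    }
    steps {
        // no clone needed, the tests ship inside the image
        sh '/opt/tests/run_tests.sh'
    }
}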
If your Git server is very slow, you can always package the tests in your CI using zip or a similar archiver, upload the archive to S3 (using aws-steps) or an equivalent, and download it during the automation pipeline execution - but in most cases this will not offer any performance benefit. A rough sketch of that option follows.
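For illustration, using the plain AWS CLI rather than any particular plugin; the bucket name and paths are hypothetical:
// In the CI pipeline: package the tests and upload them (hypothetical bucket/path)
sh 'zip -r tests.zip tests/'
sh 'aws s3 cp tests.zip s3://my-ci-artifacts/ui-tests/tests.zip'

// In the automation pipeline: download and unpack them into the workspace
sh 'aws s3 cp s3://my-ci-artifacts/ui-tests/tests.zip tests.zip'
sh 'unzip -o tests.zip'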
I suppose you could send the generated files to S3 in a post step, for example, and download those files in the first step of your pipeline.
Think about storage that you can share between Jenkins agents.

How to create a post-build script for all Jenkins jobs

Is there a way to create a post build script for all Jenkins jobs? Some script that is shared across jobs? I would like to avoid manually creating a post-build script for each job if possible.
AFAIK there is no job that will always run after any other job. You can emulate that by creating a new job and then either configuring a post-build trigger on all your jobs to run the new one, or configuring a build trigger in the new job to run after all the jobs you specify.
However, if all your jobs are pipelines and you have a shared library, you can create a step that is actually a pipeline with a built-in post section. For example, consider a step called postPipeline.groovy:
def call(Closure body) {
    pipeline {
        agent any
        stages {
            stage('Run pipeline') {
                steps {
                    script {
                        body()
                    }
                }
            }
        }
        post {
            always {
                << routine post actions go here >>
            }
        }
    }
}
By changing all the pipelines to use this step you ensure they all run the post script:
postPipeline {
    stage('Pipeline stage') {
        << code >>
    }
    .
    .
    .
}
Still, either way some manual work is involved.

Jenkins Declarative Pipeline - SCM

I am following a Jenkins tutorial. The sample code I am reading is:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'python:2-alpine'
                }
            }
            steps {
                sh 'python -m py_compile sources/add2vals.py sources/calc.py'
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'qnib/pytest'
                }
            }
            steps {
                sh 'py.test --verbose --junit-xml test-reports/results.xml sources/test_calc.py'
            }
            post {
                always {
                    junit 'test-reports/results.xml'
                }
            }
        }
        stage('Deliver') {
            agent {
                docker {
                    image 'cdrx/pyinstaller-linux:python2'
                }
            }
            steps {
                sh 'pyinstaller --onefile sources/add2vals.py'
            }
            post {
                success {
                    archiveArtifacts 'dist/add2vals'
                }
            }
        }
    }
}
So basically there are three stages: Build, Test and Deliver. They all use different images to spin up different containers. But this Jenkins job is configured to use Git as the SCM.
So when this Jenkins build runs, the project is built in the first container. Then the second stage tests the project in another container, followed by the Deliver stage in a third container. How does this Jenkins job make sure that these three stages operate on the same code sequentially?
Based on my understanding, each stage needs to perform a git clone/git pull, and before the stage finishes, a git push is required.
If the SCM is configured through Jenkins to use Git, do we need to include the git clone/git pull, as well as git push, in the corresponding shell scripts under steps, or is it already taken care of by the SCM configuration of Jenkins?
Thanks
In this case, you must ensure that the binary in the QA environment is the same one that goes to the UAT environment and then to Production.
For this, you must use an artifact repository or registry (Artifactory, Nexus, Docker Registry, etc.) to promote the artifacts to the Production environment; a rough sketch follows below.
See this link to see how it was done in the Pipeline.
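As an illustration only (not from the original answer), a hedged sketch of publishing the binary built in the Deliver stage to a generic repository so every later environment promotes the same artifact; the repository URL and the repo-creds credentials ID are hypothetical placeholders:
stage('Publish Artifact') {
    steps {
        // keep the binary in Jenkins for traceability
        archiveArtifacts 'dist/add2vals'
        // upload the same binary to a repository (URL and credentials are hypothetical)
        withCredentials([usernamePassword(credentialsId: 'repo-creds', usernameVariable: 'REPO_USER', passwordVariable: 'REPO_PASS')]) {
            sh 'curl -u "$REPO_USER:$REPO_PASS" -T dist/add2vals https://repo.example.com/releases/add2vals/$BUILD_NUMBER/add2vals'
        }
    }
}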

Jenkins Pipeline: Get build output from slave agent

Background
Let's say I have two jobs, one 'Pipeline Job' and one 'Build Job'. The 'Pipeline Job' runs on master and is of course a pipeline (using Groovy). Then for a build part in the pipeline I use a slave running on Windows, the 'Build Job', which is responsible for building something I can't do on the master. The master is also running on Windows but lacks some software needed for the specific build.
The Question
I have a groovy script that looks something like this:
#!groovy
node {
    stage('Environment Preparation') {
        // fetches stuff and sets up the environment on master
    }
    stage('Unit Testing') {
        // some testing
    }
    stage('Build on Slave') {
        def slaveJob = build job: 'BuildJob'
    }
}
It works fine, where 'BuildJob' has "Restrict where this project can be run" set, i.e., it runs on the slave.
My issue is that I want the output from 'BuildJob' to be printed in the pipeline logs. Do you have some clever ways this could be done? I'm open to everything, so if you know of more clever ways to start the 'BuildJob' etc., I'm eager to hear it.
Thanks!
EDITED
You have to approve the things you want to access under script-approval. Not sure if you really need getRawBuild, but it worked.
Search through console output of a Jenkins job
#!groovy
node {
    stage('Environment Preparation') {
        // fetches stuff and sets up the environment on master
    }
    stage('Unit Testing') {
        // some testing
    }
    stage('Build on Slave') {
        def slaveJob = build job: 'BuildJob'
        println slaveJob.rawBuild.log
    }
}
At jenkinsurl/scriptApproval/ you approve the following:
method hudson.model.Run getLog
method org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper getRawBuild
Well, sometimes stating a question makes you think from another perspective.
Hopefully someone will benefit from this.
I stumbled upon a Pipeline plugin tutorial that showed how you can use node to label where script code should run. The resulting Groovy file looks something like this:
#!groovy
stage('Environment Preparation') {
    node('master') {
        // fetches stuff and sets up the environment on master
    }
}
stage('Unit Testing') {
    node('master') {
        // some testing
    }
}
stage('Build on Slave') {
    node('remote') {
        def out = bat script: 'C:\\Build\\build.bat', returnStdout: true
        echo out // print the captured build output in the pipeline log
    }
}
As you can see, the tutorial made me refactor the script a bit. The node('remote') part is what defines that the enclosed steps should run on the slave machine.
I had to make some customizations in the batch script so that everything important was printed to stdout.
You have to let Jenkins know which node is 'remote' by going to Manage Jenkins > Manage Nodes, choosing the slave agent in question, selecting Configure Node, and adding 'remote' (or whatever suits you) to the Labels field.

Running a Post Build script when a Jenkins job is aborted

Is there a possible way / plugin to run a post-build script when a Jenkins job is aborted?
I do see that the post-build plugin provides an action to execute a set of scripts, but these can be run only in two cases: on a successful job or on a failed job.
This question is positively answered here.
The Post Build Task plugin is run even if a job is aborted.
Use it to search the log text for "Build was aborted" and you can specify a shell script to run.
Works like a charm. :-)
For a declarative Jenkins pipeline, you can achieve it as follows:
pipeline {
    agent any
    options {
        timeout(time: 2, unit: 'MINUTES') // abort on exceeding the timeout
    }
    stages {
        stage('Echo 1') {
            steps {
                echo 'Hello World'
            }
        }
        stage('Sleep') {
            steps {
                sh 'sleep 180'
            }
        }
        stage('Wake up') {
            steps {
                echo "Woken up"
            }
        }
    }
    // this post part is executed if the job is aborted
    post {
        aborted {
            script {
                echo "Damn it. I was aborted!"
            }
        }
    }
}
As far as I know, if a build is aborted, there's no way to execute any build steps (or post-build steps) in it any more - which makes sense; that's what I would expect of "abort".
What you could do is create another job that monitors the first one's status and triggers if it was aborted (e.g. see the BuildResultTrigger plugin).
Another solution might be to create a "wrapper" job, which calls the first one as a build step - this way you can execute additional steps after its completion, like checking its status, even if it was aborted; a rough sketch follows below.
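For illustration, a minimal wrapper-pipeline sketch assuming the downstream job is named 'FirstJob' and the cleanup script is hypothetical; propagate: false keeps the wrapper running so it can inspect the result:
node {
    // trigger the real job, but don't fail the wrapper if it fails or is aborted
    def downstream = build job: 'FirstJob', propagate: false, wait: true
    echo "FirstJob finished with result: ${downstream.result}"
    if (downstream.result == 'ABORTED') {
        // run whatever post-build handling you need after an abort
        sh './cleanup_after_abort.sh' // hypothetical script
    }
}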
If you use a scripted pipeline, you can always use a try/catch/finally set, where the build gets executed in the try block and the post-build steps are in the finally block. This way, even if the build fails or is aborted, the post-build steps are executed.
import org.jenkinsci.plugins.workflow.steps.FlowInterruptedException

node {
    try {
        // build steps go here
    } catch (FlowInterruptedException interruptEx) {
        // the job was aborted - handle the exception here
        throw interruptEx // rethrow so the build is still marked as aborted
    } finally {
        postBuild(parameters) // post-build steps run regardless of the outcome
    }
}
