Run post-build action for every test on a specific Jenkins node

I have a remote agent and multiple local agents on my Jenkins server. I have a script that I want to run only after those tests that are built on the remote agent. Is it possible somehow?
Thanks

Using a Jenkins pipeline you have the possibility of running actions according to the result of the build. Take a look here:
https://jenkins.io/doc/book/pipeline/syntax/#post
You can even separate your build in "stages" and run actions according to the result of the stage using the same method.
On the entire pipeline, or on a specific stage, or even on your post actions, you can choose which node does the job.
Considering you run a stage on a specific node you could:
pipeline {
    stages {
        stage('Build') {
            agent { label "SLAVE1" }
            steps {
                // Stuff to do
            }
            post {
                always {
                    // stuff
                }
            }
        }
    }
}
Or at the end of your pipeline in a post block:
pipeline {
    stages {
        stage("Build") {
            agent { label "SLAVE" }
            steps {
                // stuff
            }
        }
    }
    post {
        // Or failure, unstable, success...
        always {
            node('SLAVE1') {
                // stuff
            }
        }
    }
}

Job DSL plugin | Shared Library | Pipeline jobs | GitHub hook not working

Please bear with me; the description might be long, but it should give a clear picture of the intent and the issue.
I have used the Job DSL plugin to create a seeder job, which in turn creates two new jobs. I have 2 separate repositories:
one for maintaining the Jenkins pipeline scripts,
one for the actual code to build.
First I have created a pipeline job in Jenkins which in turn creates the view and the 2 jobs. Config shown below:
The Jenkinsfile given below uses the Job DSL plugin API, reads the groovy script, and creates the required 2 jobs.
node('master') {
    checkout scm
    jobDsl targets: ['dsl/seedJobBuilder.groovy'].join('\n'),
           removedJobAction: 'IGNORE',
           removedViewAction: 'IGNORE',
           lookupStrategy: 'SEED_JOB'
}
seedJobBuilder.groovy creates a DSL pipeline job whose task would be to build the actual codebase.
listView('Build Pipelines') {
    description('All build and deploy jobs')
    jobs {
        names(
            'build',
            'deploy',
        )
    }
    columns {
        status()
        weather()
        name()
        lastSuccess()
        lastFailure()
        lastDuration()
        buildButton()
    }
}
def buildCommerce = pipelineJob('build') {
    properties {
        githubProjectUrl("${projectRepo}") // url of actual code repo, not the jenkins script repo
    }
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url("${pipelineRepo}") // jenkins script repo url
                        credentials("somecredentials")
                    }
                    branch('${JENKINS_SCRIPT_BRANCH}')
                }
                scriptPath('pipelines/pipelineBuildEveryDay.groovy')
                lightweight(false)
            }
        }
    }
    triggers {
        githubPush()
    }
}
Config of the above job created by Job DSL:
This job reads the pipelineBuildEveryDay groovy script, checks out the actual codebase, and builds and deploys.
The place where I am struggling is how to trigger a build of this second job through a GitHub hook or through ghprb, since I don't want to trigger the second job manually, and the git URL of the job is the script repo URL, not the codebase URL. Is it even possible to do this? If yes, what am I missing?
I have the webhook configured.
pipelineBuildEveryDay.groovy
pipeline {
    libraries {
        lib("shared-library#${params.JENKINS_SCRIPT_BRANCH}")
    }
    agent {
        node {
            label 'master'
        }
    }
    options {
        skipDefaultCheckout(true) // No more 'Declarative: Checkout' stage
    }
    stages {
        stage('Crazy Build Pipeline') {
            tools {
                jdk 'java11'
            }
            stages {
                stage('Prepare build name') {
                    steps {
                        script {
                            currentBuild.displayName = "${currentBuild.number}-build"
                        }
                    }
                }
                stage('Checkout') {
                    steps {
                        cleanWs()
                        script {
                            checkoutRepository("${projectDir}", "${params.PROJECT_TAG}", "${params.PROJECT_REPO}")
                        }
                    }
                }
                stage('Run Tests') {
                    steps {
                        echo "Running test coming soon..."
                    }
                }
            }
        }
    }
    // post build actions
    post {
        success {
            echo "success"
        }
        failure {
            echo "failure"
        }
    }
}
Well, the suffering comes to an end. Posting this answer for anyone struggling with a similar sort of issue.
Make sure you uncheck all other types of trigger; the only one checked should be the pull request builder.
The part which screwed me was the Project URL. In my case the GitHub URL in the SCM part was the Jenkins-scripts repository URL, not the URL of the codebase I want to build. So I used my codebase repository URL in the GitHub Project URL textbox.
But the real problem was using the repository URL in the format 'https://code-base-repo-url.git'; instead it should be 'https://code-base-repo-url'. Sounds stupid? Yeah, I know!
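To make the difference concrete, here is the relevant Job DSL line in both forms (the URL is the same placeholder used below):
// Wrong: the '.git' suffix prevents the hook payload from matching the job
githubProjectUrl('https://code-base-repo-url.git')
// Right: the plain repository URL, without the '.git' suffix
githubProjectUrl('https://code-base-repo-url')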
Finally, the complete job config pipeline script, if it helps:
def pipelineRepo = 'https://jenkins-script-repo'
def projectRepo = 'https://code-base-repo-url'
def projectTag = '${GIT_BRANCH}'
def buildCommerce = pipelineJob('build') {
    properties {
        githubProjectUrl("${projectRepo}")
    }
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url("${pipelineRepo}")
                        credentials("use-your-own-user-pass-cred")
                    }
                    branch('${JENKINS_SCRIPT_BRANCH}')
                }
                scriptPath('pipelines/pipelineBuildEveryDay.groovy')
                lightweight(false)
            }
        }
    }
    triggers {
        githubPullRequest {
            admin('use_your_own_admin')
            triggerPhrase('build please')
            useGitHubHooks()
            permitAll()
            displayBuildErrorsOnDownstreamBuilds()
            extensions {
                commitStatus {
                    context('Jenkins')
                    completedStatus('SUCCESS', 'All is well')
                    completedStatus('FAILURE', 'Something went wrong. Investigate!')
                    completedStatus('ERROR', 'Something went really wrong. Investigate!')
                }
            }
        }
    }
}

Running script (stash) prior to parallel stages being invoked

I have a parallel stage setup and would like to know if it's possible to run a script prior to the nested stages, something like this:
stage('E2E-PR-CYPRESS') {
    when {
        allOf {
            expression {
                return fileExists("cypress.json")
            }
            branch "PR-*"
        }
    }
    steps {
        script {
            stash name: 'cypress-dir', includes: 'cypress/**/*'
        }
    }
    parallel {
        stage('Cypress Tests 1') {
            agent { label 'aws_micro_slave_e2e' }
            options { skipDefaultCheckout() }
            steps {
                runE2eTests()
            }
        }
        stage('Cypress Tests 2') {
            agent { label 'aws_micro_slave_e2e' }
            options { skipDefaultCheckout() }
            steps {
                runE2eTests()
            }
        }
    }
    post {
        always {
            e2eAfterCypressRun(this, true)
        }
    }
}
I know the above is wrong; I get the error: Only one of "matrix", "parallel", "stages", or "steps" allowed for stage "E2E-PR-CYPRESS".
I already have the stash script in a setup stage at the beginning of my pipeline, but I'd like to be able to restart from the stage above on Jenkins, and so I need the stash part in this stage, as the parallel stages need to unstash the contents.
Updated Answer:
After playing a bit with the Restart from a Stage option, there seems to be a feature designed exactly for your needs, called Preserving stashes for Use with Restarted Stages:
Normally, when you run the stash step in your Pipeline, the resulting stash of artifacts is cleared when the Pipeline completes, regardless of the result of the Pipeline. Since stash artifacts aren't accessible outside of the Pipeline run that created them, this has not created any limitations on usage. But with Declarative stage restarting, you may want to be able to unstash artifacts from a stage which ran before the stage you're restarting from.
To enable this, there is a job property that allows you to configure a maximum number of completed runs whose stash artifacts should be preserved for reuse in a restarted run. You can specify anywhere from 1 to 50 as the number of runs to preserve.
This job property can be configured in your Declarative Pipeline’s options section, as below:
options {
    preserveStashes()
    // or
    preserveStashes(buildCount: 5)
}
This built-in feature is exactly what you need to solve your issue without any special modifications to your code, as it will allow you to rerun the pipeline from any stage and still use the files that were previously stashed.
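For completeness, a minimal sketch of how the two pieces fit together, based on your stash name and agent label (runE2eTests() is assumed to come from your shared library):
pipeline {
    agent any
    options {
        // keep stashes from the 5 most recent completed runs
        preserveStashes(buildCount: 5)
    }
    stages {
        stage('Setup') {
            steps {
                stash name: 'cypress-dir', includes: 'cypress/**/*'
            }
        }
        stage('E2E-PR-CYPRESS') {
            agent { label 'aws_micro_slave_e2e' }
            steps {
                // still works when the run is restarted from this stage
                unstash 'cypress-dir'
                runE2eTests()
            }
        }
    }
}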
Original Answer:
You can actually achieve this quite simply using the scripted syntax of the parallel command, which will also allow you to avoid the duplicated code in the parallel stages.
parallel: Execute in parallel
Takes a map from branch names to closures and an optional argument failFast which will terminate all branches upon a failure in any other branch:
parallel firstBranch: {
    // do something
}, secondBranch: {
    // do something else
},
failFast: true|false
In your case it can look like:
stage('E2E-PR-CYPRESS') {
    when {
        allOf {
            expression {
                return fileExists("cypress.json")
            }
            branch "PR-*"
        }
    }
    steps {
        script {
            stash name: 'cypress-dir', includes: 'cypress/**/*'
            // Define the parallel execution stages
            def stages = ['Cypress Tests 1', 'Cypress Tests 2']
            // Create the parallel executions and run them
            parallel stages.collectEntries {
                ["Running ${it}": {
                    // a scripted node block performs no implicit checkout,
                    // so no skipDefaultCheckout() is needed here
                    node('aws_micro_slave_e2e') {
                        runE2eTests()
                    }
                }]
            }
        }
    }
    post {
        always {
            e2eAfterCypressRun(this, true)
        }
    }
}
This way you can easily add more parallel steps by updating the stages list, or even receive it as an input parameter. In addition, you can create the parallel executions by different labels or test suites instead of by stage name.
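For example, a hypothetical map from test suite to agent label could drive the same collectEntries pattern (the suite names and the second label here are made up):
def suites = [
    'Smoke Tests'     : 'aws_micro_slave_e2e',
    'Regression Tests': 'aws_large_slave_e2e'
]
parallel suites.collectEntries { suiteName, agentLabel ->
    ["Running ${suiteName}": {
        node(agentLabel) {
            unstash 'cypress-dir'
            runE2eTests()
        }
    }]
}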
You can add a Prepare stage at the top like this:
stages {
    stage('Preparation') {
        when {
            allOf {
                expression {
                    return fileExists("cypress.json")
                }
                branch "PR-*"
            }
        }
        steps {
            script {
                stash name: 'cypress-dir', includes: 'cypress/**/*'
            }
        }
    }
    stage('E2E-PR-CYPRESS') {
        parallel {
            stage('Cypress Tests 1') {
                agent { label 'aws_micro_slave_e2e' }
                options { skipDefaultCheckout() }
                steps {
                    runE2eTests()
                }
            }
            stage('Cypress Tests 2') {
                agent { label 'aws_micro_slave_e2e' }
                options { skipDefaultCheckout() }
                steps {
                    runE2eTests()
                }
            }
        }
    }
}
post {
    always {
        e2eAfterCypressRun(this, true)
    }
}
An out-of-the-box concept
I propose splitting the job into 2 parts, taking the following into consideration:
You currently use the EC2 plugin, as the current agents are EC2 instances
You run the parallel stages with the same stashed content ready to unstash
Create Jenkins pipeline job 1:
This job will check out the workspace with any type of agent
Create a packer JSON to build a customised AMI for the EC2
The customised AMI will stash the contents and move them to a directory that will be present on the EC2 when the agent is built
Output the AMI ID, then run a groovy job to update the EC2 plugin's AMI ID with the customised AMI ID, temporarily setting the AMI in memory on Jenkins
pipeline {
    agent {
        docker {
            image 'test-container'
        }
    }
    options {
        buildDiscarder(
            logRotator(
                numToKeepStr: '10',
                artifactNumToKeepStr: '10'
            )
        )
        ansiColor('xterm')
        gitConnection("git")
    }
    stages {
        stage('Run Stash Cypress Functional Test') {
            steps {
                dir('functional-test') {
                    // develop branch is canary build, all other branches are stable builds
                    script {
                        sh """
                            # script to stash cypress tests
                        """
                    }
                }
            }
        }
        stage('Functional Test AMI Build') {
            steps {
                dir('functional-test/packer') {
                    withAWS(role: 'PackerBuild', roleAccount: '123456789012', roleSessionName: 'Jenkins-Workflow-FunctionalTest-Packer') {
                        script {
                            sh """
                                # packer json script will need to copy contents from the workspace and run the script to stash content
                                # packer json script will need to capture the new AMI ID
                                # https://discuss.devopscube.com/t/how-to-get-the-ami-id-after-a-packer-build/36
                                # https://www.packer.io/docs/post-processors/manifest
                                packer validate FunctionalTestPacker.json
                                packer build -debug FunctionalTestPacker.json
                                # grab the AMI ID and export it as a jenkins env variable
                            """
                        }
                    }
                }
            }
        }
        stage('Run groovy script to update AMI ID on EC2 plugin') {
            steps {
                dir('<groovy job dir>') {
                    script {
                        sh """
                            # run groovy job to update the AMI on the Jenkins EC2 plugin
                            # https://gist.github.com/vrivellino/97954495938e38421ba4504049fd44ea
                        """
                    }
                }
            }
        }
        stage('Kickoff Functional Test Deploy') {
            // pipeline checkbox parameter; when ticked it will automatically kick off the functional test pipeline
            when {
                expression { params.RUN_TESTS.toBoolean() }
            }
            steps {
                script {
                    env.branch = params.BRANCH
                    sh """
                        echo "Branch is ${branch}"
                    """
                }
                build job: 'workflow/CypressFunctionaTestDeployAndRun',
                    parameters: [
                        string(name: 'BRANCH', value: env.branch)
                    ],
                    wait: false
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}
Create Jenkins pipeline job 2:
This job will create the EC2 agents via the plugin from the customised AMI built in pipeline job 1
This means your agents will have the same workspace ready to unstash, so you can execute a parallel run
You could also move much of the user data script that is in the EC2 plugin into the customised AMI build, cutting down the time each EC2 agent needs to become ready to carry out execution
pipeline {
    stages {
        stage('E2E-PR-CYPRESS') {
            when {
                allOf {
                    expression {
                        return fileExists("cypress.json")
                    }
                    branch "PR-*"
                }
            }
            parallel {
                stage('Cypress Tests 1') {
                    agent { label 'aws_micro_slave_e2e' }
                    options { skipDefaultCheckout() }
                    steps {
                        runE2eTests()
                    }
                }
                stage('Cypress Tests 2') {
                    agent { label 'aws_micro_slave_e2e' }
                    options { skipDefaultCheckout() }
                    steps {
                        runE2eTests()
                    }
                }
            }
        }
    }
    post {
        always {
            e2eAfterCypressRun(this, true)
        }
    }
}

Copying files between Jenkins slaves in a pipeline

Our goal is to divide our pipeline across multiple agents.
We have a slave called slave1, whose sole purpose is checking out from git and building executables.
Eventually, when slave1 finishes, we'd like to pass its output to slave2, whose sole purpose is testing slave1's executables.
Please note that the idea here is not to split jobs, but to share the files within the same pipeline.
Here's an example of a Jenkinsfile that'll make more sense:
pipeline
{
    agent
    {
        label 'slave1'
    }
    stages
    {
        stage("Initialize & Build")
        {
            steps
            {
                script
                {
                    println("Im starting the pipeline with slave1!")
                    // Builds Files
                    // ....
                    // Has many files that need to pass to slave2
                }
            }
        }
        stage("Execute & Test")
        {
            agent
            {
                label 'slave2'
            }
            steps
            {
                script
                {
                    println("Im in the new slave - slave2!")
                    // How does this slave get the files?
                }
            }
        }
    }
}
How is it possible to pass these files between the agents?
I read about artifacts, but it seems their goal is to return objects from a job, which isn't necessarily what is needed here.
If both of your agents are on Linux servers, you can simply scp the build output from agent 1 to agent 2.
First, though, you will need to establish a passwordless SSH connection between the two agents.
Here's an example.
pipeline
{
    agent
    {
        label 'slave1'
    }
    stages
    {
        stage("Initialize & Build")
        {
            steps
            {
                script
                {
                    println("Im starting the pipeline with slave1!")
                    // Builds Files
                    // ....
                    // option 1: copy files into the workspace of agent 2
                    sh 'scp $WORKSPACE/build_output/* <user>@agent2:/home/<user>/workspace/<job_name>/'
                    // option 2: copy files to any known location on agent 2
                    sh 'scp $WORKSPACE/build_output/* <user>@agent2:/<destination_path>'
                }
            }
        }
        stage("Execute & Test")
        {
            agent
            {
                label 'slave2'
            }
            steps
            {
                script
                {
                    println("Im in the new slave - slave2!")
                    // option 1
                    dir("${WORKSPACE}") {
                        // Test execution steps
                    }
                    // option 2
                    dir('<destination_path>') {
                        // Test execution steps
                    }
                }
            }
        }
    }
}
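Alternatively, if you would rather avoid the SSH setup, Jenkins' built-in stash/unstash steps can pass files between agents within the same pipeline run (the files travel via the controller, so this suits small to medium outputs); a minimal sketch, assuming the build writes to build_output/:
pipeline
{
    agent
    {
        label 'slave1'
    }
    stages
    {
        stage("Initialize & Build")
        {
            steps
            {
                // ... build files into build_output/ ...
                stash name: 'build-output', includes: 'build_output/**'
            }
        }
        stage("Execute & Test")
        {
            agent
            {
                label 'slave2'
            }
            steps
            {
                // restores build_output/ into slave2's workspace
                unstash 'build-output'
                // Test execution steps
            }
        }
    }
}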

Jenkins. Use a shared library on the options phase

So I have created a shared library in Jenkins with a listener that gets triggered each time the pipeline reads a FlowNode, so I can run groovy code before and after each stage, step, etc.
I'm able to call the shared library in a step phase like this:
pipeline {
    agent any
    stages {
        stage('prepare') {
            steps {
                prepareStepsWrapper()
            }
        }
        stage('step1') {
            steps {
                echo 'step1'
            }
        }
        stage('step2') {
            steps {
                echo 'step2'
            }
        }
        stage('step3') {
            steps {
                echo 'step3'
                // fail on purpose
                sh 'notfoundexecutablelol'
            }
        }
        stage('step4') {
            steps {
                echo 'step4'
            }
        }
    }
    post {
        always {
            println env.getEnvironment()
        }
    }
}
And it works great!
With this approach the 'prepare' stage needs to be filtered out, so I've switched to the options directive:
pipeline {
    agent any
    options {
        prepareStepsWrapper()
    }
    stages {
        stage('step1') {
            steps {
                echo 'step1'
            }
        }
        ...
    }
}
But the pipeline fails with
WorkflowScript: 4: Invalid option type "prepareStepsWrapper"
tl;dr; How can I load a shared library within the options directive?
What does the options directive do?
The options directive allows configuring Pipeline-specific options from within the Pipeline itself.
You can't call a shared library in the options directive. It should not be used to execute any logic; rather, it sets configuration for the pipeline. All available options and the documentation can be found here.
You could try to create a stage that simply calls your prepareStepsWrapper() and use locks to prevent other stages from executing before this stage.
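A minimal sketch of that suggestion, assuming the Lockable Resources plugin for the lock step and your existing prepareStepsWrapper() library step:
pipeline {
    agent any
    stages {
        stage('prepare') {
            steps {
                // hold a named lock so nothing else runs while preparing
                lock(resource: 'pipeline-prepare') {
                    prepareStepsWrapper()
                }
            }
        }
        stage('step1') {
            steps {
                echo 'step1'
            }
        }
    }
}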

Jenkins DSL Pipeline: delete a job from its pipeline

I have a Jenkins pipeline job that (among other things) creates another pipelineJob (to clean up everything afterwards) using the Job DSL plugin.
pipeline {
    agent { label 'Deployment' }
    stages {
        stage('Clean working directory and Checkout') {
            steps {
                deleteDir()
                checkout scm
            }
        }
        // Complex logic omitted
        stage('Generate cleanup job') {
            steps {
                build job: 'cleanup-job-template',
                    parameters: [
                        string(name: 'REGION', value: "${REGION}"),
                        string(name: 'DEPLOYMENT_TYPE', value: "${DEPLOYMENT_TYPE}")
                    ]
            }
        }
    }
}
The thing is that I need this newly generated job to be built only once and then, if the build was successful, the job should be deleted.
pipeline {
    stages {
        stage('Cleanup afterwards') {
            // cleanup logic
        }
    }
    post {
        success {
            // delete this job?
        }
    }
}
I thought that this could be done using a Pipeline Post Action, but unfortunately I couldn't find any out-of-the-box solution for this.
Is it possible to achieve this at all?
You can achieve this using the post section; you will then need to write some groovy code in order to delete the job:
#!/usr/bin/env groovy
import hudson.model.*

pipeline {
    agent none
    stages {
        stage('Cleanup afterwards') {
            // cleanup logic
            steps {
                node('worker') {
                    sh 'ls -la'
                }
            }
        }
    }
    post {
        success {
            script {
                jobsToDelete = ["<JOB_TO_DELETE>"]
                deleteJob(Hudson.instance.items, jobsToDelete)
            }
        }
    }
}

def deleteJob(items, jobsToDelete) {
    items.each { item ->
        if (item.class.canonicalName != 'com.cloudbees.hudson.plugins.folder.Folder') {
            if (jobsToDelete.contains(item.fullName)) {
                manager.listener.logger.println(item.fullName)
                item.delete()
            }
        }
    }
}
Tested both cases; they work on Jenkins 2.89.4.
You should do all of that in one job instead of creating and deleting jobs. Use multiple stages for that, e.g. deploy the test system, run tests / wait for tests to finish, undeploy. No need for extra jobs. An example is posted here: Can a Jenkins pipeline have an optional input step?
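A minimal sketch of that single-job approach (stage contents and the timeout are illustrative):
pipeline {
    agent { label 'Deployment' }
    stages {
        stage('Deploy test system') {
            steps {
                // deployment logic
                echo 'deploying'
            }
        }
        stage('Run tests') {
            steps {
                // run tests / wait for them to finish
                echo 'testing'
            }
        }
        stage('Undeploy') {
            steps {
                // optional manual gate before cleanup; aborts after an hour
                timeout(time: 1, unit: 'HOURS') {
                    input message: 'Undeploy the test system?'
                }
                // the cleanup logic from the generated job moves here
                echo 'undeploying'
            }
        }
    }
}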
