Run Jenkins post-build step on the slave node instead of the master

I have created a Jenkins job and am able to assign it to run on the master or a slave using its label name in "Restrict where this project can be run". My job needs to do the following:
1. Copy test data to a target folder (not the Jenkins workspace)
2. Run the test
3. Summarize results
4. Clean up the folder with data (yet to be implemented)
Regarding step 4, I have to delete the data before marking the job as complete. I have considered a Conditional Build step, and it works in all cases except when the job is aborted.
I am now considering a post-build step using PostBuildTask/GroovyPostBuild, but it only works when the job is assigned to run on the master. When I try to run the job on Slave1/Slave2, the same task doesn't work; I realized that it is being executed on the master instead of Slave1/2.
Would appreciate any guidance on how I can solve this issue.
Thanks

Yes, post-build steps run on the master by default, so you need another plugin that lets you choose which node the post-build step runs on. On my system I use the "Flexible Publish" plugin, which I believe can solve your issue.
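If converting the job to a Pipeline is an option, the problem goes away entirely: a post section declared inside a declarative pipeline runs on the same agent as the stages, even when the build is aborted. A minimal sketch, where the label, target path, and script names are placeholders for your own values:

    pipeline {
        agent { label 'slave1' }                 // placeholder label: run everything on the slave
        environment {
            TARGET_DIR = '/data/test-target'     // placeholder folder outside the workspace
        }
        stages {
            stage('Test') {
                steps {
                    sh 'cp -r testdata/ "$TARGET_DIR"'   // step 1: copy test data (hypothetical layout)
                    sh './run-tests.sh'                  // steps 2-3: run tests and summarize (hypothetical script)
                }
            }
        }
        post {
            always {
                // Runs on the same agent as the stages, even when the build is aborted
                sh 'rm -rf "$TARGET_DIR"'
            }
        }
    }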

Related

Execute Build Jobs/Pipelines not on Master but only on Build Agent

Following Jenkins best practices, I want to prevent build jobs/pipelines from being executed on my Jenkins master.
To do so, I've installed the Job Restrictions Plugin, using it to configure the master to run only some maintenance pipelines.
The problem is that build pipelines configured to run on specific agents are no longer executed. I see that the build queue grows continuously and the pipelines are never run. I think this behaviour could be related to the master's flyweight executors.
So, the question is: how can I execute just a small subset of maintenance pipelines on the master and, at the same time, execute build pipelines only on specific agents?
You can configure the master node to be used only when explicitly named. Just click the master node, go to Configure, and change "Use this node as much as possible" to "Only build jobs with label expressions matching this node".
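For Pipeline jobs, the complementary step is to pin every pipeline to an agent label explicitly, so nothing falls back to the master. A minimal sketch, where the label and build command are placeholders:

    pipeline {
        agent { label 'build-agent' }   // placeholder label; never omit this or point it at the master
        stages {
            stage('Build') {
                steps {
                    sh 'make'           // hypothetical build command
                }
            }
        }
    }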
I found the solution that perfectly fits my needs, here.
To quickly sum it up: I was able to exclude all user builds from the master and run only the jobs/pipelines of a specific Jenkins folder (IuA in my case) on it, by configuring the Job Restrictions Plugin accordingly.
To better understand the logic behind this solution, I recommend taking a look at the link I posted above.

What happens to data on a CI pipeline

I've been asked to create a CI pipeline for a project at work. I'm creating a load test with JMeter and Taurus, and I plan to integrate it with Jenkins to build the whole pipeline. I'm just starting out in this field, and a question came to mind:
What happens to all the data created by the load test? Does it go on to the deploy phase, or does it get deleted once the test is done? Should I clean up after the tests end?
The data is kept in the Jenkins workspace, and by default it stays on the file system forever.
If you decide to publish the artifacts, they will be available on the Jenkins build dashboard via the web interface.
You might also be interested in the Jenkins Performance Plugin, which allows plotting performance trend charts and conditionally marking builds as unstable or failed depending on pass/fail thresholds.
An example configuration can be found in the How to Run a Taurus Test through Jenkins Pipelines article.
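If you drive the test from a Pipeline job, the Performance Plugin exposes this as the perfReport step. A minimal sketch, assuming the Taurus run writes a results.jtl file; the file name and threshold values are placeholders:

    node {
        sh 'bzt load_test.yml'                     // hypothetical Taurus invocation writing results.jtl
        perfReport sourceDataFiles: 'results.jtl',
                   errorUnstableThreshold: 5,      // mark UNSTABLE above 5% errors (placeholder)
                   errorFailedThreshold: 10        // mark FAILED above 10% errors (placeholder)
    }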
I am not completely familiar with your setup, but as far as I can see from a quick bit of research, JMeter does the same as every other testing framework and generates HTML reports. Jenkins won't delete them unless you explicitly delete them (rm file.html) or call cleanWs (clean workspace). If the job is deleted, so are the files.
So the test result file should still be present in the deploy phase. You can use a plugin to collect the results, or just archive them, or do whatever fits your workflow.
There is generally no need to clean it up (you usually configure Jenkins to delete old builds, which takes care of that).
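To make that concrete, a hedged pipeline sketch that archives the report so it survives with the build record, then wipes the workspace. The Taurus command and file patterns are placeholders, and cleanWs requires the Workspace Cleanup plugin:

    node {
        sh 'bzt load_test.yml'   // hypothetical test run producing HTML and JTL output
        archiveArtifacts artifacts: '**/*.html, results.jtl', allowEmptyArchive: true
        cleanWs()                // delete the workspace now that the artifacts are archived
    }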

Jenkins Parameterized Build with Jenkinsfile from SCM

We have some Jenkins pipeline jobs defined with Jenkinsfiles located in a Bitbucket server, as described here. These builds are parameterized, and we'd like to be able to manually run them with non-default parameters.
The problem is, since the Jenkinsfile isn't checked out until we run it, the first time we run the build the build button is just "Build Now" instead of "Build with Parameters". Currently we are running it once with the default values so that it fails, and then running it again with the "Build with Parameters" button so we can pass in what we want.
Obviously not ideal. What is the right way to do this so we can run it with custom parameters the first time?
This is currently not possible: the parameters are post-processed, so the Jenkinsfile needs to be executed once before they are known to Jenkins and the job becomes available as "Build with Parameters". The issue is tracked here: https://issues.jenkins-ci.org/browse/JENKINS-41929
There are various ways to handle this.
The first, as you have alluded to, is to run the job once (automatically or manually) and let it fail; though perhaps you could set working defaults so it at least succeeds?
Another option is to detect whether this is the first run and, if so, skip all build steps and let the run purely register the parameters, as sketched below.
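A scripted-pipeline sketch of that second option. Treating build #1 as the registration run is an assumption that only holds for a freshly created job, and the parameter itself is a placeholder:

    // Register the parameters. On the very first run Jenkins has not seen
    // them yet, so this run is what makes "Build with Parameters" appear.
    properties([
        parameters([
            string(name: 'TARGET_ENV', defaultValue: 'staging',
                   description: 'placeholder parameter')
        ])
    ])

    // Assumption: build #1 of a fresh job only registers the parameters.
    if (currentBuild.number == 1) {
        echo 'First run: parameters registered; re-run via "Build with Parameters".'
        return
    }

    node {
        echo "Building for ${params.TARGET_ENV}"
        // ... real build steps go here ...
    }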

Build and run unit-tests with two distinct jobs

I have a basic setup: some source files stored in GitHub that are pulled and built by a Jenkins job.
Now I'd like to run the unit tests automatically when the build is done (I'm using NUnit, if that helps).
I could add another build step to the "build" job to run nunit-console but I'd like to separate the build task from the unit-testing task, so that in the Jenkins dashboard I can directly see what is broken: the build or "only" the tests.
I could create another job that would pull the code-source too but it would duplicate the first job.
What's the simplest way to run the unit-tests directly on the binaries produced by the first job (run second job in the same workspace? copy the binaries? ...) ?
Thanks for any input.
You could use the Copy Artifact Plugin to copy the artifacts to another job and then run the unit tests there, but this may not work depending on how C# handles packaging and how the project is structured.
It looks like you can use the NUnit Plugin to publish the results of your tests, so you may be able to use a single job: I don't think the tests will run if the previous build step fails, just as they don't for JUnit tests.
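A hedged sketch of the two-job split using the Copy Artifact plugin's copyArtifacts step and the NUnit plugin's nunit step. The job name, archive path, and test assembly are all placeholders:

    // Jenkinsfile for the test job; the build job must call
    // archiveArtifacts 'bin/**' for this to have anything to copy.
    node {
        copyArtifacts projectName: 'my-build-job',   // placeholder build job name
                      selector: lastSuccessful(),
                      filter: 'bin/**'
        bat 'nunit3-console.exe bin\\MyTests.dll --result=TestResult.xml'  // hypothetical test assembly
        nunit testResultsPattern: 'TestResult.xml'   // publish results via the NUnit plugin
    }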

Jenkins - Running install tests on remote machine and reporting results back to Jenkins

I am looking to add some automated tests to run nightly on a project. Currently the project has a few jobs that create multiple builds of various components of the project.
The builds create rpm files; there are multiple jobs creating multiple rpms. I want to grab all of the rpms, install them, and test them all under a single test job, as they have lots of dependencies on each other. I can install via the command line, but these rpms are stored on the Jenkins master machine.
This is as far as I have got:
I have set up the job in Jenkins
I have created a slave for the job to run on
I have used Jenkins to run a bash script on the slave (works)
What I want to do is the following:
At regular intervals (let's say once per day, when I know the builds have all completed), fetch the most recent passed builds of all the projects and copy them to the slave machine
Install the rpms using a script.
The script performs certain tests during the install (looking at logs, etc.), so I want to collect these and send the results back to Jenkins (I may eventually perform other tests here too)
I want the status of the last build image to be determined by my own tests
I also want the test results, logs, etc... to be stored in the Jenkins test job so that we can view them and marvel at their awesomeness!
What I don't know how to do is:
How do I copy the files to the slave? Should this be handled on the slave itself, using wget or something, or does Jenkins have functionality (a plugin, maybe) that handles this all for me?
How do I report my custom results back to the Jenkins job?
I only started working with Jenkins three days ago (the project and Jenkins build jobs are a lot older than that), apologies if I'm missing anything obvious.
UPDATE
I'm thinking a combination of these plugins might do the trick, though I've not looked into them much yet.
Copy artifact plugin to copy the rpms from the latest stable builds of the other jobs
xUnit plugin to interpret some xml files that I generate during the test process
I didn't actually need any plugins for this. I simply set up the job to run on the slave, added a build step that ran some tests and generated an XML file (similar to the JUnit XML results), and then added a post-build step to look at the JUnit results (even though the tests weren't JUnit tests).
This worked a charm, and builds are now marked as unstable if they fail tests that I specify, such as whether an rpm installed and whether such-and-such happened.
Getting the latest stable builds was simple, as the latest stable builds are copied to a file server anyway; failed builds don't go there.
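In pipeline form, the same flow is just a node block plus the junit step. The slave label, script, and result file names are placeholders:

    node('rpm-test-slave') {           // placeholder slave label
        sh './install_and_test.sh'     // hypothetical script: installs the rpms, writes results.xml
        junit 'results.xml'            // marks the build unstable/failed from the JUnit-style results
    }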
