I want to trigger a command on the master before the job runs on the slave and another one after, something like this:
Master setup
Slave build
Master teardown
I have searched for a while and browsed all the plugins, but so far I haven't found anything. Is it possible?
I found that someone was looking for the same thing here but got no answer.
You can set up this behavior with the Flow plugin.
Create a flow that runs your three steps as Jenkins jobs in sequence, and restrict the machines on which each job can execute:
setup on master
build on slave
teardown on master
You can pass build parameters across builds with the Flow DSL.
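A minimal sketch of such a flow, assuming the three steps already exist as jobs named master-setup, slave-build and master-teardown (hypothetical names), each tied to the right node via its own "Restrict where this project can be run" label:

def setup = build("master-setup")                      // runs on the master
build("slave-build", SETUP_BUILD: setup.build.number)  // runs on the slave, receives a parameter from the setup build
build("master-teardown")                               // runs on the master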
I'm using a recent Jenkins version (2.286), and since this update there is a security hint: "You should set up distributed builds. Building on the controller node can be a security issue. See the documentation."
But I'm already doing so with three Jenkins nodes, and I also fully understand the security implications.
The problem is that there are two jobs that need to run on the master, since they are the jobs that deploy those Jenkins nodes. That means I cannot reduce the number of build executors to 0.
I've also tried using the Job Restrictions plugin to restrict which jobs can run on the master. The problem here is that all my jobs then wait for the master queue to have a free slot available. I wonder why, because they are all declarative pipelines and define something like:
agent {
    label 'some-different-node-label'
}
Which means they aren't really executed on the master node.
Questions here are:
Is it intentional that all jobs require the master node before switching to the agent?
Is there any configuration option to change that?
Is there a way to execute the deploy jobs on the master, even if there aren't any executors defined (to bypass that behavior)?
Thanks.
With declarative pipelines, the lightweight code checkout is done on the Master node to get the Jenkinsfile for that job. While this doesn't use an executor on the Master, perhaps the Job Restrictions Plugin is still blocking it (I haven't used that plugin before, so I cannot comment).
Also, certain pipeline actions are delegated back to the Master node as well (e.g. the withAWSParameterStore step).
If you look at the console output for a declarative pipeline job, you will see lots of output (mainly around library checkouts or git checkouts) before you see the start of the pipeline ([Pipeline] Start of Pipeline). All of that is done on the Master.
Unfortunately this cannot be changed, as the Master needs to do this work to find out which agent type to delegate the job to.
Depending on how you are running your agents, you could use something like the EC2 Cloud Plugin to generate your agent nodes, which wouldn't require a job to do it.
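To illustrate where the work happens, here is a minimal declarative pipeline sketch (label and steps are placeholders): the Jenkinsfile itself is fetched and parsed on the Master's flyweight executor, and only the stage steps are sent to the agent.

pipeline {
    agent { label 'linux-build' }   // placeholder label; the stages are delegated to this agent
    stages {
        stage('Build') {
            steps {
                // these steps run on the selected agent, not on the Master
                sh 'make build'
            }
        }
    }
}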
Following the Jenkins best practices, I want to prevent Build Jobs/Pipelines from being executed on my Jenkins Master.
To do so, I've installed the Job Restrictions Plugin and used it to configure the Master to run only some Maintenance Pipelines.
The problem is that now the Build Pipelines that are configured to run on specific Agents are not executed anymore. I see that the Build Queue continuously grows and the Pipelines are never run. I think this behaviour could be related to the Flyweight Executors of the Master.
So the question is: how can I execute just a small subset of Maintenance Pipelines on the Master and, at the same time, execute Build Pipelines only on specific Agents?
You can configure the master node to only be used when explicitly named. Just click the master node, go to Configure, and change "Use this node as much as possible" to "Only build jobs with label expressions matching this node".
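If you prefer scripting it instead of clicking through the UI, the same setting can be applied from the Script Console; a rough sketch (EXCLUSIVE corresponds to "Only build jobs with label expressions matching this node", NORMAL to "Use this node as much as possible"):

import hudson.model.Node
import jenkins.model.Jenkins

// Switch the master/built-in node to "Only build jobs with label expressions matching this node"
def jenkins = Jenkins.get()
jenkins.setMode(Node.Mode.EXCLUSIVE)
jenkins.save()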
I found the solution that perfectly fits my needs here.
To quickly sum it up, I was able to exclude all user builds from the Master and run only the Jobs/Pipelines of a specific Jenkins folder (IuA in my case) on it, by configuring the Job Restrictions Plugin accordingly.
In order to better understand the logic behind this solution, I recommend you take a look at the link I posted above.
I have successfully automated the deployment and redeployment of the Jenkins master and slaves.
I know how to manually create pipeline jobs and add GitHub repos so that their Jenkinsfiles are used for the steps.
My issue is: how can I automate adding the pipeline jobs to Jenkins after it has been destroyed and redeployed, without having to manually create each pipeline job and point it to the Jenkinsfile every time?
I have seen this done before in a container environment with Chef and Docker; when redeployed or updated, it re-adds all the pipelines automatically.
I don't want to use the UI at all, except to confirm job status/progress and verify settings.
I would recommend looking at the Job DSL Plugin to create the jobs, using a seed job to create them on initial Jenkins startup. The Jenkins Configuration-as-Code plugin can be used to set up any other configuration outside the jobs.
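As a rough sketch, a seed job could run a Job DSL script like the one below to recreate a pipeline job that reads its Jenkinsfile from a GitHub repo (job name, repository URL and branch are placeholders):

// Executed by a seed job via the Job DSL plugin's "Process Job DSLs" build step.
pipelineJob('my-app-pipeline') {
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://github.com/example-org/my-app.git')
                    }
                    branch('*/main')
                }
            }
            // use the Jenkinsfile committed in the repository for the steps
            scriptPath('Jenkinsfile')
        }
    }
}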
We have a requirement to implement CI/CD using Jenkins.
Here, Jenkins is running on a Windows machine, the application server is running on a Linux machine, and the build activity should happen on the Linux system. So we are connecting to the Linux machine using Jenkins's SSH plugin and executing the jobs there.
I have created a list of freestyle jobs to check out code from CVS, run cleanup, build, stop the server, start the server, run JUnit and run Sonar. All these jobs are chained using the 'Build other projects' option in the post-build actions section.
Here, all the jobs execute sequentially. But sometimes I need to execute only a few of them, such as stopping and starting the server.
So please help me understand how we can pick which jobs need to be run when triggering a build.
Thanks,
Ganesha
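One possible direction is to fold the chained freestyle jobs into a single parameterized declarative pipeline and skip stages with when conditions, so that only the selected steps run on a given trigger; a rough sketch with made-up labels, parameters and script names:

pipeline {
    agent { label 'linux' }   // placeholder label for the Linux build node
    parameters {
        booleanParam(name: 'RUN_BUILD', defaultValue: true, description: 'Check out, clean up and build')
        booleanParam(name: 'RESTART_SERVER', defaultValue: true, description: 'Stop and start the server')
    }
    stages {
        stage('Build') {
            when { expression { params.RUN_BUILD } }
            steps { sh './build.sh' }        // placeholder build script
        }
        stage('Restart server') {
            when { expression { params.RESTART_SERVER } }
            steps {
                sh './stop-server.sh'        // placeholder scripts
                sh './start-server.sh'
            }
        }
    }
}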
Looking for some advice on our Jenkins slave setup. Up until now we've just had a master box. All of the jobs are pipelines that are run from Groovy Jenkinsfiles stored in SVN. These scripts variously refer to other scripts that are in the same directory as the Jenkinsfile and come as part of the checkout. We've decided to add a slave to our setup, but are finding that Jenkins behaves differently when jobs are run on the slave.
When a job is run on the master the scripts are checked out into a location like:
<JenkinsHome>/<Workspaces>/<JobName>@script/
However, when run on the slave, there is initially no checkout, so there are no scripts available. We've forced the checkout in the Jenkinsfile by adding a
checkout scm
at the start of the script, but this will check the scripts out to a location like:
<JenkinsHome>/<Workspaces>/<JobName>/
Note the lack of the @script suffix.
We can work around this by having the script look in a number of places for the files it needs, but I was wondering if anyone else had come across a more elegant solution.
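One common pattern is to avoid relying on the @script directory altogether: always run checkout scm on the node that executes the job and reference the helper scripts relative to the workspace; a minimal scripted sketch with a placeholder label and script name:

node('some-build-node') {      // placeholder label
    checkout scm               // checks the repo (Jenkinsfile plus helper scripts) out into this node's workspace
    sh './build-helper.sh'     // path relative to the workspace root, works the same on master and slave
}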
Closing this off. This was down to our fundamental misunderstanding of how the master and slave work with each other.