Jenkins plugin is running on master node instead of on executor

I wrote a Jenkins plugin, and I'm trying to use it in a pipeline.
I noticed that when I trigger the pipeline, the plugin execution runs on the master node instead of on the agent itself (the rest of the steps execute on the agent, as they should).
It's important to me that the plugin runs on the agent's executor and not on the master node.
Can I do something about it?

You should restrict your build to run on the specific node,
and make sure that this node does not show any warning in the "Manage Nodes" section.
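For example, a minimal sketch of pinning a declarative pipeline to a labelled agent (the label 'my-agent' is just a placeholder for your node's label) could look like this:
// Declarative pipeline: all stages, including custom plugin steps, run on a node with this label
pipeline {
    agent { label 'my-agent' }   // placeholder label, replace with your agent's label
    stages {
        stage('Build') {
            steps {
                sh 'hostname'    // should print the agent's hostname, not the master's
            }
        }
    }
}
In a scripted pipeline the equivalent is wrapping the steps in node('my-agent') { ... }.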

Related

Why declarative pipelines need to run on master if there are build executors available?

I'm using a recent Jenkins version (2.286), and since this update there is a security hint: "You should set up distributed builds. Building on the controller node can be a security issue. See the documentation."
But I'm already doing so with three Jenkins nodes, and I also fully understand the security implications.
The problem here is that there are two jobs that need to run on the master, since they are the jobs that deploy those Jenkins nodes. That means I cannot reduce the build executors to 0.
I've also tried using the Job Restrictions plugin to restrict which jobs can run on the master. The problem here is that all my jobs wait for the master queue to have a free slot available. I wonder why, because they are all declarative pipelines and define something like:
agent {
    label 'some-different-node-label'
}
which means they aren't really executed on the master node.
My questions are:
Is it intentional that all jobs require the master node before switching to the agent?
Is there any configuration option to change that?
Is there a way to execute the deploy jobs on the master, even if there aren't any executors defined (to bypass that behavior)?
Thanks.
With declarative pipelines the lightweight code checkout is done on the master node to get a Jenkinsfile for that job. While this doesn't use an executor on the master, perhaps the Job Restrictions plugin is still blocking this (I haven't used it before so cannot comment).
Also, certain pipeline actions are delegated back to the master node as well (e.g. the withAWSParameterStore step).
If you look at the console output for a declarative pipeline job, you will see lots of output (mainly around library checkouts or git checkouts) before you see the start of the pipeline: [Pipeline] Start of Pipeline. All of that is done on the master.
Unfortunately this cannot be changed, as the master needs to do this work to find out which agent type to delegate the job to.
Depending on how you are running your agents, you could use something like the EC2 Cloud Plugin to generate your agent nodes, which wouldn't require a job to do it.
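If the two deployment jobs are pipelines themselves, a rough sketch like the following keeps them pinned to the controller while every other job targets an agent label (the controller label is 'master' on older Jenkins versions and 'built-in' on newer ones, and the deploy script name is just a placeholder):
// Deploy job that must run on the controller itself
pipeline {
    agent { label 'master' }   // use 'built-in' on newer Jenkins versions
    stages {
        stage('Deploy Jenkins nodes') {
            steps {
                sh './deploy-jenkins-nodes.sh'   // placeholder for your deployment script
            }
        }
    }
}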

Is the Jenkins workspace on the master or the worker?

Who does the actual cloning of the project, the master or the agent node? If it is the master, then how does the agent node actually execute the job? If it is the agent node, how can we view the workspace in the browser?
When people ask "where is the workspace", the answer is usually a path, but I am more interested in where that path is: on the master or on the agent node? Or maybe it is both?
Edit 1
Aligned terminology to this: https://jenkins.io/doc/book/glossary/ in order to avoid confusion.
In a Jenkins setup all the machines are considered nodes. The master node connects to one or more agent nodes. Executors can run on both the master and the agent nodes.
In my scenario, no executors run on the master. They run only on the agent nodes.
The answer is: it depends!
First of all, although it is not good practice IMO, some installations let the master be an actual worker and run jobs. In this case, the workspace will be on the master.
If you configured the master not to accept jobs, there are still occasions when a workspace can be created on the master. A good example is when your job is a "Pipeline script from SCM". In this case, the master will create a workspace for the job, clone the target repo, read the pipeline, and start the needed jobs on whatever slave is targeted, creating a workspace there to run the actions themselves. If the pipeline targets multiple slaves, there will be a workspace on each of them.
In simple situations (e.g. a Maven or freestyle job), the workspace will only be on the targeted slave.
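To illustrate the "Pipeline script from SCM" case, a sketch like the following (the 'linux' and 'windows' labels are placeholders) ends up with a lightweight checkout on the master to read the script, plus one workspace on each targeted agent:
// Scripted pipeline loaded from SCM: the master clones the repo to read this script,
// then each node() block below gets its own workspace on the targeted agent.
node('linux') {
    checkout scm       // workspace created on the 'linux' agent
    sh 'make build'
}
node('windows') {
    checkout scm       // a separate workspace on the 'windows' agent
    bat 'build.cmd'
}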
I needed to dig a bit deeper to understand this.
I ran a brand new instance of Jenkins and attached a single agent node. I used SSH and set the remote (agent) root directory to /home/igorski/jenkins.
As soon as I attached the node, the remoting folder and remoting.jar showed up in that root directory.
I ran a basic Gradle Java pipeline job (Jenkinsfile in the project).
The workspace showed up on the slave, not on the master.
From the Jenkins GUI I can access the workspace and see its contents.
The moment I kill the agent machine, I can no longer view the workspace in Jenkins.
My guess is that remoting.jar somehow does a live sync.
I also ran a freestyle project and can confirm the same. As soon as the agent is killed, I can no longer open the workspace and I get an error stack trace:
hudson.remoting.Channel$CallSiteStackTrace: Remote call to JenkoOne
This was much more obvious with the pipeline job, though. There you get a link to the agent that you need to click in order to see the contents. As soon as the agent is gone, the link is disabled, and you know exactly which agent the workspace is on. With freestyle jobs, you just get a Workspace link. There is no indication of which agent it is on, or whether the agent is accessible at the moment.
So, both Zeitounator and fabian were correct.
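If you want to check this yourself from inside a job, a quick sketch like this (the label is a placeholder) prints which node allocated the workspace and where it lives:
// Print the node name and workspace path for this allocation
node('some-agent-label') {
    echo "Running on node: ${env.NODE_NAME}"
    echo "Workspace path:  ${env.WORKSPACE}"
}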

How to checkout and run pipeline file from TFS on specific node in Jenkins?

I am trying to run a pipeline job that gets its pipeline file from TFS, but the mapping of the workspace and the checkout are done on the master instead of the slave.
I have a Jenkins master installed on a Linux machine, and I connected a Windows machine to it as a slave. I created a pipeline job with the 'Pipeline script from SCM' option selected for TFS.
How can I make the Windows slave run that pipeline job?
The master can't run that job because it is running on Linux, and it fails when trying to map a workspace to TFS in order to download the pipeline script and run it.
Even if I create another pipeline job and hard-code a script that runs my original pipeline job like this:
node('WIN_SLAVE') {
    build job: 'My_Pipeline'
}
It doesn't work.
And I can see in the output that the initial script (above) is in fact running on my Windows slave, but when it builds the job 'My_Pipeline' it still tries to map a workspace to the Jenkins master at its Linux machine path /var/jenkins/... and fails.
If the initial pipeline script ran on the Windows slave, why doesn't the other pipeline script run on the same node? Why does it try again to check out the pipeline file from TFS to the Jenkins master?
How can I make the Windows slave check out the pipeline file and run it?
Here are some things to check...
Make sure you disabled the original job, or that you are completely redefining it to run on the slave, because you indicated you set up "another job" for the slave. It appears that this other job is just triggering the previous job, rather than defining its own specifications. When the job runs on the slave, it just runs whatever settings are in that original job.
Also, if you have the box checked to build when a change is pushed to TFS, then your original job could still be trying to run every time a change is made in TFS.
Verify the slave's remote root directory is set properly in the slave configuration under Manage Jenkins -> Manage Nodes.
Since this slave job is triggering the other job you originally created on the master, it will build on the master, as expected.
Instead of referencing the My_Pipeline job, change the My_Pipeline job itself to run on the slave. If you are using a declarative pipeline for the original job, then change that original job to run on the slave within its own settings. You can do it similarly to how you have indicated above; just define the node in the original job (see the sketch after this answer).
If the original job is a freestyle project, there is a checkbox titled Restrict where this project can be run. Check that and include the name of the slave in the Label Expression. When you run the job, it will then be restricted to the slave.
Lastly, posting the My_Pipeline job will be helpful.
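For the pipeline case, a minimal sketch of what the My_Pipeline job itself could declare (reusing the WIN_SLAVE label from the question) is:
// Jenkinsfile for My_Pipeline: pin the whole job to the Windows slave
pipeline {
    agent { label 'WIN_SLAVE' }
    stages {
        stage('Build') {
            steps {
                bat 'echo Building on %COMPUTERNAME%'
            }
        }
    }
}
Note that with "Pipeline script from SCM" the script itself is still fetched by the master; only the stages run on the agent.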

Jenkins Pipeline & Docker Plugin - concurrent builds on unique agents

I'm using Jenkins version 2.7.1 with the Pipeline suite of plugins to implement a pipeline in a Jenkinsfile, together with the Docker Plugin. My goal is to execute multiple project builds in parallel, with each project build running inside its own dedicated container. My Jenkinsfile looks like:
node('docker-agent') {
    stage('Checkout') {
        checkout scm
    }
    stage('Setup') {
        build(job: 'Some External Job', node: env.NODE_NAME, workspace: env.WORKSPACE)
    }
}
I have a requirement to call an external job, but I need it to execute on the same workspace where the checkout scm step has checked out the code, hence the node and workspace parameters. I understand that wrapping a build call inside a node block effectively wastes an executor, but I'm fine with that since the agent is a container on a Docker Cloud and isn't really wasting any resources.
The one problem with my approach is that another instance of this project build could steal the executors from a different running instance in the time gap between the two stages.
How can I essentially ensure that (1) project builds can run concurrently, but (2) each build runs on a new instance of an agent labelled docker-agent?
I've tried the Locking plugin, but a new build will simply wait to acquire the lock on an existing agent rather than spinning up its own agent.
To prevent other builds from running on the same agent, limit the number of executors to 1 for each agent in your Docker cloud environment (that's a setting when configuring Docker for that label). That will require a new container to start per executor.
That said, I wouldn't design a pipeline like this. Instead, I'd use stash and unstash to copy your checkout and any other small artifacts between the nodes so that you can pause execution without holding a node running.
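A rough sketch of that stash/unstash approach (reusing the docker-agent label and the external job call from the question) might look like:
// Check out once and stash the sources, so the first container can be released
node('docker-agent') {
    checkout scm
    stash name: 'sources'
}
// A fresh docker-agent container picks up the stashed sources later
node('docker-agent') {
    unstash 'sources'
    build(job: 'Some External Job', node: env.NODE_NAME, workspace: env.WORKSPACE)
}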

Run command on master before/after slave

I want to trigger a command on the master before the job runs on the slave, and one after, something like this:
Master setup
Slave build
Master teardown
I have searched for a while and browsed all the plugins, but so far I haven't found anything. Is it possible?
I found that someone was looking for the same thing here but got no answer.
You can set up this behavior with the Flow plugin.
Create a flow with your 3 steps as Jenkins jobs in sequence. Restrict the machines where the specific builds will be executed:
setup on master
build on slave
teardown on master
You can pass build parameters across builds with the Flow DSL.
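As a rough sketch (the job names are placeholders, and the exact DSL depends on your version of the Build Flow plugin), the flow definition could look like:
// Build Flow DSL: run the three jobs in sequence and hand a parameter downstream
def setup = build("master-setup")
def slaveBuild = build("slave-build", SETUP_BUILD: setup.build.number)
build("master-teardown", SLAVE_BUILD: slaveBuild.build.number)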
