Agent field not being populated in Jenkins Pipeline jobs

Trying to investigate Jenkins build time by node, I noticed that when we switched to Pipeline jobs our builds stopped filling in the "Agent" field on the "Build Time Trend" page.
Old pre-pipeline jobs show the agent name in that list, but new pipeline jobs just show a blank agent field.
If I go into an individual Pipeline build, I can look in the Console Output and find the "Running on" line to work out which agent was used, just as I can see the "Building remotely on" line in the console output of non-Pipeline builds.
Is there a way to get pipeline builds to fill in the Agent field with the machine the job was actually run on?

I believe Jenkins does not collect this information for Pipeline jobs, as a Pipeline job may run on many agents in parallel.
We wanted that information, so we run an instance of InfluxDB and send build metrics there. These metrics include the agent name, so it is available for analysis later.
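A minimal sketch of how that can look, assuming the InfluxDB plugin is installed with a configured target; the target name jenkins-metrics and the agent_name field are illustrative, while env.NODE_NAME is the built-in variable holding the agent a node block landed on:

node {
    // env.NODE_NAME holds the name of the agent this node block is running on
    echo "Running on agent: ${env.NODE_NAME}"

    // ... build steps ...

    // publish a custom measurement; 'jenkins-metrics' must match a target
    // configured in the InfluxDB plugin's global settings
    influxDbPublisher(selectedTarget: 'jenkins-metrics',
                      customData: [agent_name: env.NODE_NAME])
}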

Related

Why do declarative pipelines need to run on the master if there are build executors available?

I'm using a recent Jenkins version (2.286), and since this update there is a security hint: "You should set up distributed builds. Building on the controller node can be a security issue. See the documentation."
But I'm already doing so with three Jenkins nodes, and I also fully understand the security implications.
The problem is that there are two jobs that need to run on the master, since they are the jobs that deploy those Jenkins nodes. That means I cannot reduce the master's build executors to 0.
I've also tried using the Job Restrictions plugin to restrict which jobs can run on the master. The problem there is that all my jobs wait for the master queue to have a free slot available. I wonder why, because they are all declarative pipelines and define something like:
agent {
label 'some-different-node-label'
}
So they aren't really executed on the master node.
My questions are:
Is it intentional that all jobs require the master node before switching to the agent?
Is there any configuration option to change that?
Is there a way to execute the deploy jobs on the master, even if there aren't any executors defined (to bypass that behavior)?
Thanks.
With declarative pipelines, the lightweight code checkout is done on the master node to get a Jenkinsfile for that job. While this doesn't use an executor on the master, perhaps the Job Restrictions plugin is still blocking it (I haven't used that plugin before, so I cannot comment).
Certain pipeline actions are also delegated back to the master node (e.g. the withAWSParameterStore step).
If you look at the console output of a declarative pipeline job, you will see lots of output (mainly around library or git checkouts) before the start of the pipeline is marked with [Pipeline] Start of Pipeline. All of that is done on the master.
Unfortunately this cannot be changed, as the master needs to do this work to find out which agent type to delegate the job to.
Depending on how you are running your agents, you could use something like the EC2 Cloud Plugin to generate your agent nodes, which wouldn't require a job to do it.
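For the deploy jobs themselves, a minimal sketch of one way to keep them on the controller while everything else runs on the agents, assuming the controller keeps at least one executor and its default 'master' label (renamed 'built-in' on newer versions); the stage name and script are illustrative:

pipeline {
    agent { label 'master' }   // 'built-in' on newer Jenkins versions
    stages {
        stage('Deploy Jenkins nodes') {
            steps {
                sh './deploy-jenkins-nodes.sh'   // hypothetical provisioning script
            }
        }
    }
}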

How to check out and run a pipeline file from TFS on a specific node in Jenkins?

I am trying to run a pipeline job that gets its pipeline file from TFS, but the mapping of the workspace and the checkout are done on the master instead of the slave.
I have a Jenkins master installed on a Linux machine, and I connected a Windows machine to it as a slave. I created a pipeline job with the 'Pipeline script from SCM' option selected for TFS.
How can I make the Windows slave run that pipeline job?
The master can't run the job because it is running on Linux; it fails when it tries to map a workspace to TFS in order to download the pipeline script and run it.
Even if I create another pipeline job and hard-code a script that runs my original pipeline job, like this:
node('WIN_SLAVE') {
build job: 'My_Pipeline'
}
It doesn't work.
I can see in the output that the initial script (above) is in fact running on my Windows slave, but when it builds the job 'My_Pipeline' it still tries to map a workspace to the Jenkins master at its Linux machine path /var/jenkins/... and it fails.
If the initial pipeline script ran on the Windows slave, why is the other pipeline script not running on the same node? Why does it try again to check out the pipeline file from TFS to the Jenkins master?
How can I make the Windows slave check out the pipeline file and run it?
Here are some things to check...
Make sure you disabled the original job, or that you are completely redefining it to run on the slave, because you indicated you set up "another job" for the slave. It appears that this other job just triggers the previous job rather than defining its own specifications. When the job is run on the slave, it's just running whatever settings are in that original job.
Also, if you have the box checked to build when a change is pushed to TFS, then your original job could still be trying to run every time a change is made in TFS.
Verify that the slave's Remote root directory is set properly in the slave configuration under Manage Jenkins -> Manage Nodes.
Since the slave job triggers the job you originally created on the master, it builds on the master, as expected.
Instead of referencing the My_Pipeline job, change the My_Pipeline job itself to run on the slave. If you are using a declarative pipeline for the original job, change that original job to run on the slave within its own settings. You can do it similarly to what you showed above; just define the node in the original job.
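A minimal sketch of what the My_Pipeline script itself might look like once the node selection lives inside it; the stage names and build.cmd are illustrative, and checkout scm repeats the job's configured TFS checkout, this time on the Windows slave:

node('WIN_SLAVE') {
    stage('Checkout') {
        // map the TFS workspace on the Windows slave rather than the master
        checkout scm
    }
    stage('Build') {
        bat 'build.cmd'   // hypothetical Windows build command
    }
}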
If the original job is a freestyle project, there is a checkbox titled Restrict where this project can be run. Check that and include the name of the slave in the Label Expression. When you run the job, it will then be restricted to the slave.
Lastly, posting the My_Pipeline job would be helpful.

Jenkins: skip some jobs in a chain of freestyle jobs

We got a requirement to implement CI/CD using Jenkins.
Here, Jenkins runs on a Windows machine, the application server runs on a Linux machine, and the build activity should happen on the Linux system. So we connect to the Linux machine using Jenkins's SSH plugin and execute jobs there.
I have created a list of freestyle jobs to check out code from CVS, clean up, build, stop the server, start the server, run JUnit, and run Sonar. All these jobs are chained using the 'Build other projects' option in the Post-build Actions section.
All the jobs execute sequentially, but sometimes I need to execute only a few of them, like stop and start server.
So, please help me understand how we can selectively pick the jobs that need to run before triggering a build.
Thanks,
Ganesha
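One common way to make the steps selectable is to collapse the chain into a single parameterized pipeline and gate each stage with a boolean parameter. A minimal sketch, with the node label, parameter names, and scripts all illustrative:

pipeline {
    agent { label 'linux-ssh-node' }   // hypothetical label of the SSH-connected Linux node
    parameters {
        booleanParam(name: 'RUN_BUILD', defaultValue: true, description: 'Run checkout and build')
        booleanParam(name: 'RESTART_SERVER', defaultValue: true, description: 'Stop and start the server')
    }
    stages {
        stage('Build') {
            when { expression { params.RUN_BUILD } }
            steps {
                sh './build.sh'   // illustrative
            }
        }
        stage('Restart server') {
            when { expression { params.RESTART_SERVER } }
            steps {
                sh './stop-server.sh && ./start-server.sh'   // illustrative
            }
        }
    }
}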

Manual Build Step in Jenkins Declarative Pipeline?

This is a follow-up to this previous post, which doesn't seem to have ever been truly answered with more than a "this looks promising":
Jenkins how to create pipeline manual step.
This is a major functionality gap for CI/CD pipelines. The current input step in Declarative (1.2.9) requires the whole pipeline to wait for the input before it can complete (or to have a time-out that won't allow you to re-trigger it later). Depending on how agents are scoped, it can also hold up an executor or require you to start a new slave for every build step.
The closest I've come to a solution that doesn't eat up an executor is a pipeline-level "agent none" with agents defined in all stages, described here: https://jenkins.io/blog/2018/04/09/whats-in-declarative/. But starting a new slave for every build step seems wasteful and requires additional considerations for persisting your workspace. The final solution offered was to add a time-out to the input, but this still doesn't work, because then you can never move that build to the later stage afterwards and will need to re-build.
Any solutions or suggestions here would be much appreciated.
If you are using the Kubernetes plugin to run Jenkins agents as containers in a Kubernetes cluster, there is a setting called idleMinutes.
idleMinutes: Allows the pod to remain active for reuse until the configured number of minutes has passed since the last step was executed on it. Use this only when defining a pod template in the user interface.
There you can define your agent at the pipeline level without defining it in every stage (given your agent is designed to have the capabilities to run all stages). When it comes to the user-input stage, set the agent to none at the stage level so that it is not holding up an executor.
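Putting the two together, a minimal sketch: pipeline-level agent none so no executor is held for the whole run, work stages that request a pod-template label (assumed here to be a UI-defined template labelled build-pod with idleMinutes set, so the pod is kept and reused), and an approval stage with no agent:

pipeline {
    agent none   // no executor is held for the whole run
    stages {
        stage('Build') {
            agent { label 'build-pod' }   // hypothetical UI-defined pod template with idleMinutes
            steps {
                sh 'make build'           // illustrative
            }
        }
        stage('Approve') {
            // no agent here: the input waits without occupying an executor
            steps {
                input message: 'Deploy to production?'
            }
        }
        stage('Deploy') {
            agent { label 'build-pod' }   // the idle pod can be reused here
            steps {
                sh 'make deploy'          // illustrative
            }
        }
    }
}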

How to trigger a Jenkins downstream job from a script rather than manually through Jenkins?

Using Jenkins plugins (i.e. Delivery Pipeline, Parameterized Trigger), we can set up a pipeline which contains multiple jobs in sequence, for instance: Build -> Unit Test -> Deploy To DEV.
Now, for each pipeline, we want to stop after "Unit Test", because we need to wait for someone to approve the deployment before it can go on (we use JIRA as the tracking system for this).
When someone approves the deployment ticket in JIRA, we have already set up a post action to fire and trigger a job in Jenkins, say "Deploy To Dev" in this case. However, this job will run independently, outside the pipeline.
Is there a way we can trigger the downstream job from a script within an instance of a pipeline, so it can carry over all the parameters from upstream and show as part of the pipeline?
Thx
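For reference, a minimal sketch of what such a post action could call: Jenkins' remote build API (the buildWithParameters endpoint), passing upstream parameters so the triggered run lines up with the pipeline instance. The URL, job name, token, credentials, and parameter names are all illustrative:

// plain Groovy, e.g. run from a JIRA post-function or any external script
def jenkins = 'https://jenkins.example.com'                  // illustrative
def query   = 'token=DEPLOY_TOKEN&UPSTREAM_BUILD=123'        // illustrative
def url     = new URL("${jenkins}/job/Deploy%20To%20Dev/buildWithParameters?${query}")

def conn = url.openConnection()
conn.requestMethod = 'POST'
// authenticate with a Jenkins user and API token (illustrative credentials)
def auth = 'approver:api-token'.bytes.encodeBase64().toString()
conn.setRequestProperty('Authorization', "Basic ${auth}")
println "Jenkins responded: ${conn.responseCode}"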
