Why is Jenkins changing nodes to do another build when it doesn't need to?
We have a Jenkins setup with 3 Mac and 3 Windows nodes, building free style projects. We do not use the master for builds. The projects are set to run on any of the 3 nodes suitable for their platform. We are using labels for this purpose.
Some of the time, when we do a build, it will run on the same node as last time.
But sometimes, without any obvious pattern, it will change to a different node, even though the previously used node is available and not busy. This potentially wastes a lot of time, as incremental builds on the same node are much faster than pulling and building everything from scratch.
Jenkins claims it should allocate jobs to the same node if possible, when multiple possibilities exist.
As a result, from the user's point of view, it looks as if Jenkins tries to always use the same node for the same job, unless it's not available, in which case it'll build elsewhere. But as soon as the preferred node is available, the build comes back to it. (Reference)
Jenkins version is 2.289.2 presently.
The jobs are freestyle builds with shell script/command prompt steps.
Repositories are Git and Mercurial.
I think that reading about the Restrict where this project can be run option might answer your question.
As its definition says: By default, builds of this project may be executed on any agents that are available and configured to accept new builds.
So if you don't restrict the build to a particular agent or label, Jenkins is free to pick any eligible node.
This potentially wastes a lot of time, as incremental builds on the same node are much faster than pulling and building everything from scratch.
To solve exactly this problem, Jenkins offers the Restrict where this project can be run option in the job configuration; its inline help describes the label expression syntax in detail.
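For instance, a label expression can pin a job to a node or a subset of nodes. A minimal sketch (the labels mac and mac-03 are hypothetical; the same expression syntax works in the freestyle Label Expression field and in a scripted pipeline node step):

```groovy
// Hypothetical labels: run on any node labeled "mac" except "mac-03".
// In a freestyle job, put the expression 'mac && !mac-03' in the
// "Label Expression" field instead.
node('mac && !mac-03') {
    sh 'make'   // placeholder build step
}
```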
I'm aware I'm lacking basic Jenkins concepts, but with my current knowledge it's hard for me to research this successfully - maybe you can give me some hints I can use to re-word my question if needed.
Currently I'm facing a situation in which, in a setup with several build nodes, the Jenkins master machine is running out of disk space because Jenkins clones git repositories on both the master and the build nodes (and the master only has limited space). This question explains why.
Note: the master node itself does not build anything - it just clones the repo to a local workspace folder (I guess it just needs the Jenkinsfiles).
Going through the job configurations and googling this issue I find options regarding shallow and sparse clones or cleaning up the workspace before or after the build using the Cleanup Plugin. But those settings and plugins only care about the checkout done with checkout(SCM) on the build nodes, not the master.
But in case I want to leave the situation as is on the build nodes but keep the workspace folders on the Jenkins master machine slim, how do I approach this? What do I have to search for?
And as a side question - isn't it possible to have something like "git exports"? I.e. having the .git folders removed after checking out the commit I need?
In case it depends on the kind of job I use, I'm using scripted pipeline jobs.
I've got a similar setup: A master node, multiple build nodes.
Simply, I set the number of executors to 0 on the master node (from Manage Jenkins -> Manage Nodes), so every job lands on the build nodes.
The only repo cloned on the master is the shared library.
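If you prefer scripting the setting, the same change can be made from the script console (Manage Jenkins -> Script Console); a minimal sketch:

```groovy
// Run in the Jenkins script console: give the built-in (master)
// node zero executors so no regular builds can land on it.
import jenkins.model.Jenkins

Jenkins.instance.setNumExecutors(0)
Jenkins.instance.save()
```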
Running Jenkins builds in the master node is discouraged for two main reasons:
First of all, the usability of the Jenkins platform might be affected by many ongoing builds, for example showing delays on certain operations.
Second, it is a well-known security problem, as pointed out by the documentation:
Any builds running on the built-in node have the same level of access to the controller file system as the Jenkins process.
It is therefore highly advisable to not run any builds on the built-in node, instead using agents (statically configured or provided by clouds) to run builds.
On that same wiki page you can find details on these security problems, like what an attacker can do, and an alternative that lets you use the master node to build while patching some of the listed security problems. The solution is based on a plugin called Job Restrictions Plugin.
In any case, the most popular decision is to let the slave nodes do the builds:
To prevent builds from running on the built-in node directly, navigate to Manage Jenkins » Manage Nodes and Clouds. Select master in the list, then select Configure in the menu. Set the number of executors to 0 and save. Make sure to also set up clouds or build agents to run builds on, otherwise builds won’t be able to start.
If you really have strong reasons to build on the master node, you can always apply a different git clone strategy based on the value of the env.NODE_NAME environment variable. It is set to master if the pipeline job runs on the master node; otherwise it is filled with the node name (of course). Nonetheless, I have never seen anyone customize the git clone command based on the node used, so... don't do it 😉
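For completeness, a minimal scripted-pipeline sketch of that (discouraged) idea; the repository URL is a placeholder:

```groovy
// Discouraged: vary the clone strategy depending on the node.
node {
    if (env.NODE_NAME == 'master') {
        deleteDir()   // start from an empty workspace
        // keep the master-side copy as small as possible
        sh 'git clone --depth=1 https://example.com/repo.git .'
    } else {
        checkout scm  // full checkout on the build agents
    }
}
```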
About the sparse checkout and the sparse/shallow clone:
The former creates an incomplete working directory, checking out not all the trees and blobs present in the current commit but only those you specify. Do you save that much space? Or better, is your project tree so heavy that you would need to do something like this? The sparse checkout is generally used when you want a clean working tree, without unnecessary files.
A shallow clone can be useful sometimes to reduce the download time, especially when you have a huge history. The most common option is --depth=1, which instructs git to retrieve only the most recent commit. As far as I know, Jenkins already applies some optimizations to speed up the clone process, but it generally keeps the entire history. Again, I am not sure you would gain much more space.
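If you still want to try it, the Git plugin exposes the shallow clone behaviour through its CloneOption extension; a minimal scripted-pipeline sketch (the URL and branch are placeholders):

```groovy
// Shallow clone (depth 1, no tags) via the Git plugin.
checkout([$class: 'GitSCM',
          branches: [[name: '*/master']],
          userRemoteConfigs: [[url: 'https://example.com/repo.git']],
          extensions: [[$class: 'CloneOption', shallow: true, depth: 1, noTags: true]]])
```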
A valid (at least for me) alternative to space optimizations on git files is to build in Docker containers. Jenkins has reached a good level of integration with Docker and there are a lot of advantages to using it, among which is the disposal of the workspace after the job finishes.
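As an illustration, a sketch assuming the Docker Pipeline plugin is installed and the agent can run Docker (the image and label are just examples):

```groovy
// Build inside a container; its filesystem is discarded when the block ends.
node('linux') {
    checkout scm
    docker.image('maven:3-jdk-8').inside {
        sh 'mvn -B clean package'
    }
}
```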
I haven't used the pipeline feature myself so far -- but conceptually it is clear that the master requires initial access to the Jenkinsfile. It will therefore be difficult to avoid this step entirely.
If Jenkins itself does not provide an option to fine-tune the clone/checkout behavior on the master side, then I'd see these options:
Create a custom version of Jenkins (or of the corresponding plugin) which hard-codes the behavior that you need (like, shallow/sparse clone). Modifying and building both Jenkins and its plugins is surprisingly simple; often, the most difficult part is to locate the code that you need to touch.
Tune the master's clone in-place. Shallowness and sparse-checkout properties can be set for existing clones. If you set these properties after the initial clone (possibly in the Jenkinsfile itself or in a post-build step), then Jenkins may maintain those properties; a rough sketch follows this list.
Constantly re-cloning and deleting the repo on master side increases the load both on the Jenkins master and on your Git server, so better be careful with that (especially since your repository has a size where disk space matters already). If you really want to go that way, you could try to force-remove the clone on the master in a post-build step -- this should be relatively easy to implement. You need to check that this hack will not interfere with Jenkins' access to the Jenkinsfile.
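For the in-place tuning mentioned above, a rough sketch of the shallow half (git sparse-checkout can similarly be applied to an existing clone); treat this as an experiment run against the master-side clone, not a guarantee that Jenkins will preserve the result:

```groovy
// Hypothetical post-build step: shrink an existing clone in place.
node('master') {
    sh '''
        git fetch --depth=1             # make the existing clone shallow
        git gc --aggressive --prune=now # drop now-unreachable objects
    '''
}
```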
Following the Jenkins Best Practices, I want to avoid that Build Jobs/Pipelines could be executed into my Jenkins Master.
To do so, I've installed the Job Restrictions Plugin, using it to configure the Master to run only some Maintenance Pipelines.
The problem is that now Build Pipelines that are configured to run on specific Agents are not executed anymore. I see that the Build Queue continuously grows, and the Pipelines are not run. I think this behaviour could be related to the Flyweight Executors of the Master.
So, the question is the following: how can I execute just a small subset of Maintenance Pipelines on the Master and, in the meantime, execute Build Pipelines only on specific Agents?
You can configure the master node to only be used when explicitly named. Just click the master node > go to configure and change Use this node as much as possible to Only build jobs with label expressions matching this node.
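With that setting, a maintenance Pipeline can still opt in to the Master explicitly by naming its label; a minimal sketch:

```groovy
// Only jobs that name the master's label explicitly will run there now.
node('master') {
    sh 'df -h'   // example maintenance task: report disk usage
}
```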
I found the solution that perfectly fits with my needs, here.
To quickly sum up the solution, I was able to exclude all the user Builds from the Master and run on it only the Jobs/Pipelines of a specific Jenkins folder (IuA in my case) by configuring the Job Restrictions Plugin accordingly.
In order to better understand the logic behind this solution, I recommend you take a look at the link that I posted above.
I have a repository that I can create a release for. I have Jenkins set up, and since Jenkins is hosted inside a firewall that restricts any communication from outside the network, the github-webhook doesn't work. Also, getting the reverse proxy to work is a bit of a challenge for me. I understand that the GitHub webhook sends out a JSON payload and I can qualify it based on release. But as I previously mentioned, this won't work because Jenkins and GitHub cannot talk to each other.
Therefore, I tried this solution: filtering the branches or tags that Jenkins will build on. The following are the things I tried, and none of them worked. Every time I run a build, Jenkins just builds it.
I also tried the below-mentioned regex:
:refs\/tags\/(\d+\.\d+\.\d+)
I also tried [0-9] instead of \d. It built it every single time.
Am I missing something? Or is that how Jenkins works? Even though we qualify the builds to run only on certain tags or releases, if we click on Build Now, does it just run every single time?
My requirement is very simple. I want the Jenkins build to run only on the release I created, even if the release is 'n' commits behind master. How can I achieve this?
Use case:
CI server polls some VCS repository and runs the test suite for each revision. If two or more revisions were committed, even in a relatively small time interval, I want the CI server to put each of them in a queue, run tests for each, store the results, and never run tests again for those commits. And I don't want the CI server to launch jobs in parallel, to avoid performance issues and crashes in case of many simultaneous jobs.
Which CI server is able to handle this?
My additional, less important requirement is that I use Python, so it is desirable to use software written in Python. That is why I looked at the Buildbot project, and I especially want to see reviews of this tool regarding whether it is usable in general and capable of replacing the most popular solutions like Travis or Jenkins.
I have used Jenkins to do this (mainly with Subversion, C/C++ builds, and also bash/Python scripted jobs).
The easiest and default handling of VCS/SCM changes in Jenkins is to poll for changes at a set interval. A build is triggered if there is any change. More than one commit may be included in a build (e.g. if two commits are made close together) when using this method. Jenkins shows links back to the SCM and the SCM update that was done, as well as showing build logs, and you can easily configure build outputs and test result presentation.
https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-Buildsbysourcechanges
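For pipeline jobs, the same polling can be declared in the Jenkinsfile itself; a minimal sketch (the cron spec is just an example):

```groovy
// Poll the SCM roughly every five minutes; a build is triggered
// only when polling detects new commits.
properties([pipelineTriggers([pollSCM('H/5 * * * *')])])

node {
    checkout scm
    sh 'make test'   // placeholder build/test step
}
```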
What VCS/SCM are you using? Jenkins interfaces with a good few VCS/SCMs:
https://wiki.jenkins-ci.org/display/JENKINS/Plugins#Plugins-Sourcecodemanagement
This question answers how to make Jenkins build on every subversion commit:
Jenkins CI: How to trigger builds on SVN commit
TeamCity is free (up to a number of builds and build agents) and feature-rich. It's very easy to install and configure, although it may take some time to find your way through the wealth of options. It is extremely well documented: http://www.jetbrains.com/teamcity/documentation/
It is written in Java but supports many tools natively and others through command-line execution, so you can build anything with it that you want. (I use it mostly for Ruby.) It understands the output of many testing tools; if you're not using one of them maybe yours can emulate their output. It's quite extensible; it has a REST API and a plugin API.
It can be configured to build on each commit, or to build all of the commits that arrived in a given time period, or to trigger in other ways. Docs here: http://confluence.jetbrains.com/display/TCD8/Configuring+VCS+Triggers
By default it starts a single build agent and runs one build at a time on that build agent. You can run more build agents for speed. If you don't want to run more than one build on a machine, only start one build agent on each machine.
I don't want the CI server to launch jobs in parallel, to avoid performance issues and crashes in case of many simultaneous jobs.
In Buildbot you can limit the number of running jobs on a slave with the max_builds parameter, or with locks.
As for Buildbot and Python, you may coordinate parallel builds by configuration, for example:
Modeling Parallel Processes: Steps
svn up
configure
make
make test
make dist
In addition, you can also try using a Triggerable scheduler for your builder which performs steps U,V,W.
From the docs:
The Triggerable scheduler waits to be triggered by a Trigger step (see Triggering Schedulers) in another build. That step can optionally wait for the scheduler's builds to complete. This provides two advantages over Dependent schedulers.
References:
how to lock steps in buildbot
Coordinating Parallel Builds with Buildbot
There is a Throttle Concurrent Builds Plugin for Jenkins and Hudson. It allows you to specify the number of concurrent builds per job. This is what it says on the plugin page:
It should be noted that Jenkins, by default, never executes the same Job in parallel, so you do not need to actually throttle anything if you go with the default. However, there is the option Execute concurrent builds if necessary, which allows for running the same Job multiple times in parallel, and of course if you use the categories below, you will also be able to restrict multiple Jobs.
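For pipeline jobs, that default can also be made explicit with a core job property (this is separate from the Throttle plugin); a minimal sketch:

```groovy
// Ensure runs of this job never overlap; new builds queue up instead.
properties([disableConcurrentBuilds()])

node {
    checkout scm
    sh 'make test'   // placeholder test step
}
```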
There is also Gitlab CI, a very nice modern Ruby project that uses runners to distribute builds so you could, I guess, limit the number of runners to 1 to get the effect you are after. It's tightly integrated with Gitlab so I don't know how hard it would be to use it as a standalone service.
www.gitlab.com
www.gitlab.com/gitlab-ci
To only run tests once for every revision you can do something like this:
build
post-build:
  check if the revision of the build is in /tmp/jenkins-test-run
  if the revision is in the file, skip the tests
  if the revision is NOT in the file, run the tests
  if we ran the tests, then write the revision ID to /tmp/jenkins-test-run
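A minimal scripted-pipeline sketch of that idea, assuming Git and a marker file on the node (the paths and the test command are just examples):

```groovy
// Run the test suite at most once per revision, tracked in a marker file.
node {
    checkout scm
    def rev = sh(returnStdout: true, script: 'git rev-parse HEAD').trim()
    // grep -qx exits 0 only if the exact revision is already recorded
    def seen = sh(returnStatus: true,
                  script: "grep -qx '${rev}' /tmp/jenkins-test-run") == 0
    if (!seen) {
        sh 'make test'                                // example test step
        sh "echo '${rev}' >> /tmp/jenkins-test-run"   // record the revision
    }
}
```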
I have a project where part of the build process is to create a native library on a remote machine. This is currently a manual process outside of the CI builds made by Jenkins.
The setup in question is that the Jenkins master server builds a Git-based Maven project, which has a dependency on a native library that can only be built on a specific machine. Jenkins can't compile this module, and because of this, it is currently a manual process.
I would like to install a Jenkins slave on the machine that creates the native library, and returns the compiled files to the Jenkins master, without handling any other parts of the build.
I am having trouble figuring out if this is even possible. The articles I have found on the subject discuss Jenkins slaves as a means of distributing the build, but I want the slave to take responsibility for a small part of the build process, and nothing else. The Jenkins master should just send the build request to the slave and wait for the result, instead of trying to compile the code itself.
I do exactly this. My setup is very similar to what Mark O'Connor and gaige advise, and I am using the Copy Artifact plugin.
job A: produces a zip file on a Mac
job B, runs on slave B - Windows machine, takes the zip as input and produces an MSI
Here's the important part in the config of job B:
restrict job B to the proper slave using labels
make sure job B happens after job A
make sure artifacts from job A are sent to job B before your build
build your stuff
archive artifacts produced by job B
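For reference, the pipeline equivalent of the artifact hand-off would use the plugin's copyArtifacts step; a minimal sketch (job name A, the label, and the build script are the hypothetical ones from above):

```groovy
// Job B: fetch the zip produced by job A, then build the MSI.
node('windows') {
    copyArtifacts projectName: 'A',
                  selector: lastSuccessful(),
                  filter: '*.zip'
    bat 'build-msi.cmd'                      // hypothetical build script
    archiveArtifacts artifacts: '*.msi'
}
```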
Delegating part of a job to a slave is something that would have to be done external to Jenkins, for example, using ssh.
However, as #kan indicates, you most likely want to extract the native library build as a separate job and then have that job execute on a particular slave, or any slave that meets a specific criteria.
To do this, my suggestion would be to use Labels in the node configurations to determine which slaves can be used for building that particular job.
In Jenkins > nodes > <slave node>, use the Labels property to set one-word labels that indicate your specific requirements, such as the OS or processor type.
Then, in the jobs that are node-specific, check Restrict where this project can be run and set the Label Expression to something that meets your criteria. If the criteria are simple, it will just be a single word; if you need a boolean expression, you can use that as well (such as OSX&&Lion in our case).
I believe this is all in the standard version of Jenkins, without need for a special plugin. Leave me a comment if it isn't and I'll try and diagnose which plugin enables this functionality.
This problem is solved by using a binary repository manager to centralize your software artifacts. Personally I use Nexus, but it could be something as dumb as a remote file system.
The idea is to publish the built artifact after each Jenkins job (if you don't like Nexus, you could use one of the Publish over plugins) and retrieve it as a build dependency in the next job.
This approach means it no longer matters where the build executes, and has the added advantage of decoupling the build of each module component.