Restrict Jenkins declarative pipeline agents at the server level? - docker

I'm currently looking into replacing a number of existing Jenkins builds with declarative pipeline builds.
The Jenkinsfile allows me to specify agents in a number of ways (name, labels, docker image), but what I really need is a way to restrict this at the server level.
The goal is to force all pipeline-based builds to run inside Docker containers, and not directly on the node.
Is this possible? If so: how?


Is there a way to add multiple agents in the agent step (declarative pipeline) so that I can run the build on multiple agents in parallel?

I'm using Jenkins (2.332.2) on an Ubuntu 20.04 machine. Currently I'm using the parallel step to achieve this, but the script looks a little big, as I have to specify agent none in the beginning and have to mention the different agent names for every stage.

Why do declarative pipelines need to run on master if there are build executors available?

I'm using the recent Jenkins version 2.286, and since this update there is a security hint: "You should set up distributed builds. Building on the controller node can be a security issue. See the documentation."
But I'm already doing so with three Jenkins nodes and I also fully understand the security implications.
The problem here is that there are two jobs that need to run on the master, since they are the jobs that deploy those Jenkins nodes. That means I cannot reduce the build executors to 0.
I've also tried using the Job Restrictions plugin to restrict which jobs can run on the master. The problem here is that all my jobs are waiting for the master queue to have a free slot available. I wonder why, because they all are declarative pipelines and define something like:
agent {
    label 'some-different-node-label'
}
Which means they aren't really executed on the master node.
Questions here are:
Is it intentional that all jobs require the master node before switching the agent?
Is there any configuration option to change that?
Is there a way to execute the deploy jobs on master, even if there aren't any executors defined (to bypass that behavior)?
With declarative pipelines the lightweight code checkout is done on the Master node to get a Jenkinsfile for that job. While this doesn't use an executor on the Master, perhaps the Job Restriction Plugin is still blocking this (I haven't used it before so cannot comment).
Also, certain pipeline actions are delegated back to the Master node as well (e.g. the withAWSParameterStore step).
If you look at the console output for a Declarative pipeline job, you will see lots of output (mainly around library checkouts or git checkouts) before you see the start of the pipeline [Pipeline] Start of Pipeline. All that is done on the Master.
Unfortunately this cannot be changed as the Master needs to do this work to find out which agent type to delegate the job to.
Depending on how you are running your agents, you could use something like the EC2 Cloud Plugin to generate your agent nodes, which wouldn't require a job to do it.

Jenkins Docker image building for Different Tenant from same code repository

I am trying to implement a CI/CD pipeline for my Spring Boot microservice deployment. I am planning to use Jenkins and Kubernetes to build the CI/CD pipeline, and I have one SVN code repository for version control.
Nature Of Application
The nature of my application is that one microservice needs to be deployed for multiple tenants. The code is the same, but the database configuration is different for each tenant. I am managing the configuration using Spring Cloud Config Server.
My Requirement
My requirement is that when I commit code to my SVN code repository, Jenkins needs to pull my code, build the project (Maven), create Docker images for multiple tenants, and deploy them.
The point is that a commit to one code repository needs to build multiple Docker images from the same code repo, i.e. one code repo, multiple Docker image builds. The Dockerfile contains different config for each Docker image, i.e. for each tenant. So my requirement is that I need to build multiple Docker images for different tenants, with different configuration added in the Dockerfile, from one code repo using Jenkins.
My Analysis
I am currently planning to do this by adding multiple Jenkins pipeline jobs connected to the same code repo. Within each Jenkins pipeline job, I can add different configuration, because the image name for each tenant needs to be kept different and the image needs to be pushed to Docker Hub.
My Confusion
Here my confusion is:
Can I add multiple pipeline jobs from the same code repository using Jenkins?
If I can add multiple pipeline jobs from the same code repo, how can I deploy the image for every tenant to Kubernetes? Do I need to add jobs for deployment, or is one single job enough to deploy?
You seem to be going about it a bit wrong.
Since your code is the same for all the tenants and the only difference is config, you would do better to create a single Docker image and deploy it along with tenant-specific configuration when deploying to Kubernetes.
So, changes in your repository will trigger one Jenkins build and produce one Docker image. Then you can have either multiple Jenkins jobs or multiple steps in the pipeline which deploy the Docker image with tenant-specific config to Kubernetes.
If you don't want to heed the above, here are the answers to your questions:
You can create multiple pipelines from same repository in Jenkins. (Select New item > pipeline multiple times).
You can keep a list of tenants and just loop through OR run all deployments in parallel in a single pipeline stage.
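The approach from the answer above (one build, one image, then a per-tenant deployment run in parallel from a tenant list), sketched as a declarative Jenkinsfile. This is an added example, not code from the original answer; the tenant names, image name, agent label, credential ID `dockerhub-creds`, per-tenant namespaces and the `deploy/<tenant>/configmap.yaml` paths are all assumptions, and the agent is assumed to have Maven, Docker and a configured `kubectl`.

```groovy
// Added example. All names below (tenants, image, label, credential ID, paths) are hypothetical.
def tenants = ['tenant-a', 'tenant-b', 'tenant-c']

pipeline {
    agent { label 'docker-build' }
    environment {
        // One image per build, shared by every tenant
        IMAGE = "example/my-service:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Build once') {
            steps {
                checkout scm
                sh 'mvn -B clean package'
                sh 'docker build -t "$IMAGE" .'
            }
        }
        stage('Push') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'dockerhub-creds',
                        usernameVariable: 'DH_USER', passwordVariable: 'DH_PASS')]) {
                    // Single-quoted so the shell, not Groovy, expands the secret;
                    // the password is passed on stdin rather than on the command line.
                    sh 'echo "$DH_PASS" | docker login -u "$DH_USER" --password-stdin'
                    sh 'docker push "$IMAGE"'
                }
            }
        }
        stage('Deploy per tenant') {
            steps {
                script {
                    // Build one parallel branch per tenant from the list
                    def branches = [:]
                    tenants.each { t ->
                        branches[t] = {
                            // Tenant-specific config is applied at deploy time, not baked into the image
                            sh "kubectl -n ${t} apply -f deploy/${t}/configmap.yaml"
                            sh "kubectl -n ${t} set image deployment/my-service my-service=${env.IMAGE}"
                            sh "kubectl -n ${t} rollout status deployment/my-service"
                        }
                    }
                    parallel branches
                }
            }
        }
    }
}
```

Keeping tenant database settings out of the image means a pulled image does not expose another tenant's configuration, and the same tested artifact is deployed everywhere.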

Jenkins and gitlab sharing build slaves

Let's say you have a gitlab instance and it already uses Jenkins for all its CI builds via the gitlab Jenkins plugin, etc. The Jenkins setup has a modest collection of build slaves providing a variety of platforms, etc. and each slave is set up to run just one job at a time (i.e. a Jenkins job gets exclusive access to the build slave, which is important for reasons I won't go into here).
Now let's say you want to consider using gitlab's own native CI support, moving one or more projects over to gitlab instead of Jenkins. The gitlab CI would need to use the same set of build slaves, but it needs to play nice with Jenkins and the two need to cooperate so that if one runs a job on a particular slave, the other won't submit a job to that same slave until the first finishes. In effect, while Jenkins is running a job on a slave, gitlab should see that slave as unavailable and vice versa.
Anyone have working methods for getting gitlab to tell Jenkins it is using a slave while it runs a CI job on there and vice versa? The method doesn't have to be 100% bullet proof, it would potentially be okay if both gitlab and Jenkins run a job on the same slave at the same time if it is a rare event (i.e. race conditions could potentially be tolerated if the frequency of occurrence is likely to be low).
Additional info:
Build slaves include Linux, Windows and Apple.
Docker is not used and would not be permitted at this time.
We have full admin access to everything, but changing code in gitlab or Jenkins themselves would be rejected. Adding scripts or plugins would be okay.

How can I ensure that only one of a kind of Jenkins job is run?

I have several integration tests within my Jenkins jobs. They run on several application servers, and I want to make sure that only one integration test job is run at the same time on one application server.
I would need something like a tag or variable within my jobs which create a group of jobs and then configure the logic that within that group, only one job may run at the same time.
Could I use the Exclusion plugin for that? Does anyone have experience with it?
Use the Throttle Concurrent Builds Plugin. It replaces the Locks and Latches plugin, and provides the capability to restrict the number of jobs running for specific labels.
For example: you create a project category 'Integration Test Server A' and tie jobs to it with a maximum concurrent count of 1, and a second 'Integration Test Server B' label and tie other jobs to it, both categories will only run a single concurrent build (assuming you've set a max job count of 1), and the other jobs in that category will queue until the 'lock' has cleared.
Using this method, you don't have to restrict the number of executors available on any specific Jenkins instance, and can easily add further slaves in the future without having to reconfigure all your jobs.
If I understand you right, you have a pool of application servers and it doesn't matter on what server your tests run. They only need to be the only test on that server.
I haven't seen a plugin that can do that. However, you can get easily around it. You need to configure a slave for each application server. (1 slave = 1 app server) You need to assign the same label to all slaves and every slave can only have one executor. Then you assign the jobs that run the integration tests, to run on that label. Jenkins will assign the jobs then to the next available slave (or node) that has that label.
Bear in mind that you can have more than one slave running on the same piece of hardware, and even a master and a slave can coexist on the same server.
Did you check the parameter below in Jenkins -> Manage Jenkins -> Configure system?
# of executors
The above parameter helps you restrict the number of jobs to be executed at a time.
A Jenkins executor is one of the basic building blocks which allow a build to run on a node/agent (e.g. build server). Think of an executor as a single “process ID”, or as the basic unit of resource that Jenkins executes on your machine to run a build. Please see Jenkins Terminology for more details regarding executors, nodes/agents, as well as other foundational pieces of Jenkins.
You can find information on how to set the number of Jenkins executors for a given agent on the Remoting Best Practices page, section Number of executors.
