How to build on all agents in a Jenkins pipeline?

I'm trying to build a Jenkins declarative pipeline that will build on all agents in parallel.
How can I do this without disabling sandbox?
I have come across this page: https://jenkins.io/blog/2017/09/25/declarative-1/ but the approach there seems repetitive, especially when padded out with my code, as nearly all operations are performed in almost the same way on every node. Is there a way to do this while avoiding repeated code?

I suggest that you follow the common pattern described in the referenced article.
By assigning labels identifying the node's operating system and allocating nodes based on these labels, you ensure that the job runs exactly once in each of the different build environments.
A severe drawback of your suggestion to build on all of the available agents (as said, I don't know how to actually do that) would be the case of one or more build agents being offline. You don't run on Windows because the server was just rebooting, but your build result is green as if nothing failed? Not a good idea, is it?
Another benefit of the label-based approach is that you can easily add build agents to cope with an increased number of builds, e.g., as your team grows. You don't want to build twice on Windows when you add another Windows build agent, right?
So I strongly recommend: assign labels to your build agents and then specify on which agents your job needs to run.
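For reference, here is a minimal sketch of that label-based pattern with declarative parallel stages, as in the referenced post; the labels, stage names and build commands are placeholders for your own setup:

    pipeline {
        agent none
        stages {
            stage('Build on all platforms') {
                parallel {
                    stage('linux') {
                        agent { label 'linux' }      // any agent carrying the 'linux' label
                        steps {
                            sh './build.sh'          // placeholder build command
                        }
                    }
                    stage('windows') {
                        agent { label 'windows' }    // any agent carrying the 'windows' label
                        steps {
                            bat 'build.bat'          // placeholder build command
                        }
                    }
                }
            }
        }
    }

If no agent with one of these labels is online, that branch simply waits in the queue instead of being skipped, so you don't end up with a green build that silently missed a platform.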

Related

How do I structure Jobs in Jenkins?

I have been tasked with setting up automated deployment and, after some research, settled on Jenkins to get the job done. Prior to this I had approximately zero knowledge of Jenkins beyond hearing the name. I have no real knowledge of DevOps beyond what I have learnt in the last couple of weeks; no formal training, no actual books, just Google searches.
We are not running a full blown/classic CI/CD process; this is a business decision. The basic requirements are:
Source code will be stored in GitHub.
Pull requests must be peer approved.
Pull requests must pass build/unit/db deploy tests.
Commits to specific branches must trigger a deployment to a related specific environment (Production, Staging or Development).
The basic functionality that I am attempting to support covers (what I currently see as) two separate processes:
On creation of a pull request, the application is built, unit tests are run, and the DB deploy is tested. Status info must be passed to GitHub.
On commit to one of three specific branches (master, staging and dev), the application should be built and deployed to the corresponding environment (production, staging or dev).
I have managed to cobble together a pipeline that does the first task rather well. I am using the generic web hook trigger, and manually handling all steps using a declarative pipeline stored in source control. This works rather well so far and, after much hacking, I am quite happy with the shape of it.
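Roughly, the shape of that pull-request pipeline is something like the sketch below; the commands are simplified placeholders rather than my actual steps, and the status reporting back to GitHub is only hinted at in the post section:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh './gradlew assemble'             // placeholder build command
                }
            }
            stage('Unit tests') {
                steps {
                    sh './gradlew test'                 // placeholder unit test command
                }
            }
            stage('DB deploy test') {
                steps {
                    sh './scripts/test-db-deploy.sh'    // placeholder DB deployment check
                }
            }
        }
        post {
            // Status info is reported back to GitHub from here, using whichever
            // notification step/plugin fits the trigger setup.
            success { echo 'notify GitHub: success' }
            failure { echo 'notify GitHub: failure' }
        }
    }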
I am now starting work on the next bit, automated deployment.
On to my actual question(s).
In short, how do I split this up into Jobs in Jenkins?
To my mind, there are 1, 2 or 4 Jobs to be created:
One Job to Rule them All
This seems sub-optimal to me, as the pipeline will include relatively complex conditional logic and, depending on whether the Job is triggered by a Pull Request or a Commit, different stages will be run. The historical data will be so polluted as to be near useless.
OR
One job for handling pull requests
One job for handling commits
Historical data for deployments across all environments will be intermixed. I am a little concerned that I will end up with >1 Jenkinsfile in my repository. Although I see no technical reason why I can't have >1 Jenkinsfile, every example I see uses a single file. Is it OK to have >1 Jenkinsfile (Jenkinsfile_Test & Jenkinsfile_Deploy) in the repository?
OR
One job for handling pull requests
One job for handling commits to Development
One job for handling commits to Staging
One job for handling commits to Production
This seems to have some benefit over the previous option, because historical data for deployments into each environment will not cross-pollute each other. But now we're well past the (perceived) single-Jenkinsfile limit, and I will end up with four files (Jenkinsfile_Test, Jenkinsfile_Deploy_Development, Jenkinsfile_Deploy_Staging and Jenkinsfile_Deploy_Production). This method also brings either extra complexity (common code in a shared library) or copy/paste code reuse, which I certainly want to avoid.
My primary objective is for this to be maintainable by someone other than myself, because Bus Factor. A real DevOps/Jenkins person will have to update/manage all of this one day, and I would strongly prefer them not to suffer from my ignorance.
I have done countless searches, but I haven't found anything that provides the direction I need here. Searches for best practices make no mention on handling >1 Jenkinsfile, instead focusing on the contents of a single pipeline.
After further research, I have found an answer to my core question. This might not be the absolute correct answer, but it makes sense to me, and serves my needs.
While it is technically possible to have >1 Jenkinsfile in a project, that does not appear to align with best practices.
The best practice appears to be to create a separate repository for each Jenkinsfile, which maps 1:1 with a Job in Jenkins.
To support my specific use case I have removed the Jenkinsfile from my main source code repository and created four new repositories:
Project_Jenkinsfile_Test
Project_Jenkinsfile_Deploy_Development
Project_Jenkinsfile_Deploy_Staging
Project_Jenkinsfile_Deploy_Production
Each repository contains a single Jenkinsfile and a readme.md that, in theory, contains useful information.
This separation gives me a nice view of the historical success/failure of the Test runs as a whole, and Deployments to each environment separately.
It is highly likely that I will later create a fifth repository:
Project_Jenkinsfile_Deploy_SharedLibrary
This last repository would contain pipeline code that is shared amongst the four 'core' pipelines. Once I have the 'core' pipelines up and running properly, I will consider refactoring what I can into this shared library.
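As a rough idea of what that shared library might contain, here is a sketch using the standard Jenkins shared-library layout; the deployTo step name and the script it calls are invented examples, not my actual code:

    // Project_Jenkinsfile_Deploy_SharedLibrary: vars/deployTo.groovy
    // Defines a custom step that the 'core' pipelines can call.
    def call(String environment) {
        echo "Deploying to ${environment}"
        sh "./scripts/deploy.sh ${environment}"    // placeholder deployment command
    }

    // Example usage from e.g. Project_Jenkinsfile_Deploy_Staging, once the
    // library is registered under 'Global Pipeline Libraries' in Jenkins:
    @Library('project-pipeline-library') _
    pipeline {
        agent any
        stages {
            stage('Deploy') {
                steps {
                    deployTo 'staging'
                }
            }
        }
    }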
I will not accept my own answer at this point, in the hope that more answers are forthcoming.
Here's a proposal I would try for your requirements based on the experience at my last job.
Job1: builds and runs unit tests on every commit on master, or whatever your main dev branch is (checking every 20 minutes or whatever suits you); this job usually finds compile and unit test issues very fast
Job2 (optional): runs integration tests and various static code checks (e.g. clang-tidy, valgrind, cppcheck, etc.) every night, if the last run of Job1 was successful; this job usually finds lots of things, but probably takes a lot of time, so let it run only at night
Job3: builds and tests every pull request for release branches; this gives you some info in your pull requests about whether they are mature enough to be merged into the release branches
Job4: deploys to the appropriate environment on every commit on a release branch; on dev and staging you could probably trigger some more tests, if you have them
So Job1, Job2 and Job3 should run all the time. If pull requests to your release branches are approved by QA (i.e. reviews OK and tests successful) and merged to release branches, the deployment is done by Job4 automatically.
It depends on your requirements and your dev process whether you want to trigger Job4 only manually instead.
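As a sketch of how the triggering for Job1 and Job2 could be written in declarative Jenkinsfiles (the schedules and commands are only examples, adjust them to your setup):

    // Job1's Jenkinsfile: poll the main dev branch and run build + unit tests
    pipeline {
        agent any
        triggers {
            pollSCM('H/20 * * * *')          // check for new commits roughly every 20 minutes
        }
        stages {
            stage('Build') {
                steps {
                    sh './build.sh'           // placeholder build command
                }
            }
            stage('Unit tests') {
                steps {
                    sh './run-unit-tests.sh'  // placeholder unit test command
                }
            }
        }
    }
    // Job2's Jenkinsfile would instead use triggers { cron('H 2 * * *') } to run
    // the longer integration tests and static code checks once per night.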

Maximum number of TFS agents connected to a TFS instance

On Team Foundation Server (TFS 2017), what is the maximum number of build agents that you can have connected to your TFS instance?
There is no official documentation stating a limit on the number of build agents with TFS at the moment, and there is no related prompt such as "build agents have reached the maximum" either.
Across multiple machines, you can configure as many agents as you require; there is no evident limitation.
For a single machine, it depends on the hardware. If your agent server is virtual, it is already slower compared to a physical one, and you also need to allocate sufficient RAM for it.
Can I install multiple private agents on the same machine?
Yes. This approach can work well for agents that run jobs that don't consume a lot of shared resources.
You might find that in other cases you don't gain much efficiency by running multiple agents on the same machine. For example, it might not be worthwhile for agents that run builds that consume a lot of disk and I/O resources.
You might also run into problems if concurrent build processes are using the same singleton tool deployment, such as NPM packages. For example, one build might update a dependency while another build is in the middle of using it, which could cause unreliable results and errors.
It depends on how many cores the agent server has. One agent will take up one core.

Ansible or Jenkins pipelines for bigger jobs

At the moment we are using a combination of Jenkins pipelines and Ansible playbooks. Usually we end up with short Ansible playbooks that are run either inside a Jenkins pipeline or just as a Jenkins job.
What would be the better approach for more complicated, multi-step jobs?
For example one job consists of:
Start ec2 instance from AMI
Run migrations
Pull latest code, compile and restart
Create new AMI from temporary instance
Terminate temporary instance
I do like the fact that I can handle user input in Jenkins pipelines, as well as the graphical representation of every step in the pipeline. In the example above, each step would probably be its own little Ansible playbook. Passing parameters from playbook to playbook isn't that straightforward, but we know how to do it.
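Roughly, the pipeline I have in mind looks something like the sketch below; the playbook names and the parameter passed in are just placeholders for illustration:

    pipeline {
        agent any
        parameters {
            string(name: 'BASE_AMI', defaultValue: 'ami-12345678', description: 'AMI to start the temporary instance from')
        }
        stages {
            stage('Start EC2 instance') {
                steps {
                    sh "ansible-playbook start-instance.yml -e base_ami=${params.BASE_AMI}"
                }
            }
            stage('Run migrations') {
                steps {
                    sh 'ansible-playbook run-migrations.yml'
                }
            }
            stage('Deploy latest code') {
                steps {
                    sh 'ansible-playbook deploy-code.yml'
                }
            }
            stage('Create new AMI') {
                steps {
                    sh 'ansible-playbook create-ami.yml'
                }
            }
            stage('Terminate temporary instance') {
                steps {
                    sh 'ansible-playbook terminate-instance.yml'
                }
            }
        }
    }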
I am not 100% sure I am doing this to the best standard; while creating these pipelines I keep thinking that this should probably be Ansible instead, and the other way around.
Is there any sweet spot how to use these two together?
Well, you are indeed well aware of the limitations that each tool brings to the table.
The sweet spot is whatever works best for you and your company. Ask yourself which approach would be easier to manage, and which one would become too complicated when scaling.
I've done both approaches and found that the pipeline tools from Jenkins have the best effect in terms of readability and ease of management. This was especially evident when we had the chance to bring new members onto the team: they could get a quick overview of the processes just by looking at our pipelines in Jenkins.
We have also used a combination of Jenkins (just CI) + Nexus (artifact management) + Octopus (just CD) + Ansible (provisioning) to handle everything in complicated pipelines.
Again, ask yourself what would be easier to manage and what is most likely to grow over time (number of steps in the pipeline, number of pipelines or jobs, number of servers to manage, etc.), and make a decision based on that.
Best Regards,

Randomise slave load on Mesos

I'm trying to solve a scheduling problem with Mesos. I have three build servers for Jenkins, and Jenkins schedules jobs on them through Mesos.
At the moment, Mesos loads one agent (slave) as heavily as possible, but I want it to spread jobs across all agents.
As I see it, it's better to run three jobs on three agents than all of them on one.
Is it possible to randomise job scheduling?
Alternatively, consider this scenario: two large servers and one mini. I want to schedule jobs on the mini by default and, if its resources are not sufficient, fall back to the large servers. How can I achieve this goal? Is it possible to set a priority for agents (slaves) to specify on which agent a job should run first?
The Mesos plugin for Jenkins attempts to build on the most recently built slave (see this method). This means that once it builds on a machine, as long as that machine still has spare resources available, it will schedule additional jobs on that machine until it is full. Right now it looks like that behaviour isn't configurable (I have filed it as a feature request).

When do you have to make a new job in Jenkins

So I want to make a build-test-deploy environment in Jenkins:
I want to do:
- a build
- a karma test
- a protractor test
- a deploy
Now a very simple but important question: do I have to do everything in one job (which is possible), or do I have to split it up into several jobs (and cd into the build directory)? It isn't clear to me when I have to make a new job.
It is really a matter of taste and your exact needs.
If you do not plan on running build steps individually time after time (that is, if you only care about the build as an atomic piece), or if your build flow is simple and linear, it would make more sense to stick to a single job - this way you will keep all the configuration in one place and have a good overview of build results.
If, however, there are different paths that the build process may take, or the steps themselves involve more complex logic, or, for instance, there is a need for collecting statistics on each of them, then it might be more beneficial to extract some of the steps to separate jobs and chain them together according to your rules. Jenkins is super-flexible and does not enforce any particular approach upon you.
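For instance, if you go the single-job route with a Pipeline job, your build-test-deploy flow could look roughly like this; the commands are placeholders for whatever your project actually uses:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'npm install && npm run build'    // placeholder build command
                }
            }
            stage('Karma tests') {
                steps {
                    sh 'npm run test:karma'              // placeholder unit test command
                }
            }
            stage('Protractor tests') {
                steps {
                    sh 'npm run test:protractor'         // placeholder end-to-end test command
                }
            }
            stage('Deploy') {
                steps {
                    sh './deploy.sh'                     // placeholder deploy command
                }
            }
        }
    }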
