We have different jobs running on our Jenkins. Some jobs are heavy and take a lot of CPU and RAM, some are not. So I would like a plugin that lets me set a weight for those jobs, just like https://wiki.jenkins-ci.org/display/JENKINS/Heavy+Job+Plugin.
However, we are using Jenkins Pipeline, which is not supported by the Heavy Job plugin (see https://issues.jenkins-ci.org/browse/JENKINS-41940). Is there an equivalent for pipeline jobs?
It is not as flexible as a dynamic weight, but to avoid overload you can create executors with different labels (for example, many with the label light and only a few with the label heavy) and then use node to target those labels.
This does not solve the packing problem; it only prevents too many heavy jobs from running at the same time.
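A minimal scripted-pipeline sketch of that idea, assuming you have assigned the example labels light and heavy to your agents (the shell scripts are placeholders):

```groovy
// Route work to differently labelled executors: many "light" executors,
// only a few "heavy" ones, so heavy jobs queue instead of overloading a node.
node('light') {
    stage('Unit tests') {
        sh './run-unit-tests.sh'    // hypothetical cheap step
    }
}
node('heavy') {
    stage('Full build') {
        sh './build-everything.sh'  // hypothetical CPU/RAM-hungry step
    }
}
```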
We are currently struggling with the following task. We need to run a Windows application (only a single instance can run at a time) 1000 times with different input parameters. One run of this application can take multiple hours. It feels like we have the same problem as any video rendering farm – each frame of a video should be calculated independently and in parallel – but it is not rendering.
We currently run it with Jenkins and Pipeline jobs. We use the parallel step in the pipeline and let Jenkins queue and execute the application, and we use Jenkins label expressions to let Jenkins choose which job can run on which node.
The current limitation in Jenkins is with massively parallel jobs (https://issues.jenkins-ci.org/browse/JENKINS-47724). When the queue contains several hundred jobs, adding new jobs takes much longer, and it gets worse as the queue grows. The main problem: Jenkins starts executing the parallel pipeline branches only after all of them have been added to the queue.
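For context, a rough sketch of the approach we use (the input values, the runSimulation.bat call and the "windows && simulation" label are made-up examples, not our actual setup):

```groovy
// Generate one parallel branch per input parameter and let Jenkins pick a
// matching node via a label expression.
def inputs = ['param-001', 'param-002', 'param-003']   // in reality ~1000 entries
def branches = [:]
for (p in inputs) {
    def param = p   // capture the loop variable for the closure
    branches[param] = {
        node('windows && simulation') {
            bat "runSimulation.bat ${param}"
        }
    }
}
parallel branches
```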
We have already investigated ideas for solving this problem:
Python Distributed: https://distributed.readthedocs.io/en/latest/
a. For single functions it looks great, but a complete run like we have in Jenkins (deploy and collect results) looks complex
b. Bidirectional client–server communication is needed – no chance to bring it online through a NAT (VM server)
BOINC: https://boinc.berkeley.edu/
a. As far as we understand, we would have to extend the backend massively to get our jobs working: configuring the jobs in BOINC would require a lot of new automation code
b. We currently need a pre-deployed application that can distinguish between different inputs; there is no equivalent of the Jenkins label expression
Any ideas on how to solve this?
Thanks in advance
I'm trying to build a Jenkins declarative pipeline that will build on all agents in parallel.
How can I do this without disabling sandbox?
I have come across this page: https://jenkins.io/blog/2017/09/25/declarative-1/, but it seems repetitive, especially when padded out with my code, as nearly all operations are performed almost identically on every node. Is there a way to do this without repeating code?
I suggest that you follow the common pattern described in the referenced article.
By assigning labels identifying the node's operating system and allocating nodes based on these labels, you ensure that the job runs exactly once in each of the different build environments.
A severe drawback of your suggestion to build on all available agents (as said, I don't know how to actually do that) would be the case of one or more build agents being offline. You don't run on Windows because that server was just rebooting, but your build result is green since nothing failed? Not a good idea, is it?
Another benefit of the label-based approach is that you can easily add build agents to cope with an increased number of builds, e.g., as your team grows. You don't want to build twice on Windows just because you added another Windows build agent, right?
So I strongly recommend: assign labels to your build agents and then specify on which agents your job needs to run.
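A minimal declarative sketch of that pattern, assuming "linux" and "windows" labels and placeholder build commands (adapt both to your agents):

```groovy
// One parallel stage per platform label; each stage runs exactly once on an
// agent carrying that label.
pipeline {
    agent none
    stages {
        stage('Build on all platforms') {
            parallel {
                stage('Linux') {
                    agent { label 'linux' }
                    steps { sh 'make all' }
                }
                stage('Windows') {
                    agent { label 'windows' }
                    steps { bat 'build.cmd' }
                }
            }
        }
    }
}
```

With this shape, an offline Windows agent leaves the Windows stage waiting in the queue instead of silently "passing", which addresses the drawback described above.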
At the moment we are using a combination of Jenkins pipelines and Ansible playbooks. Usually we end up with short Ansible playbooks that are run either inside a Jenkins pipeline or just as a Jenkins job.
What would be the better approach for more complicated, multi-step jobs?
For example one job consists of:
Start ec2 instance from AMI
Run migrations
Pull latest code, compile and restart
Create new AMI from temporary instance
Terminate temporary instance
I do like the fact that I can handle user input in Jenkins pipelines, as well as the graphical representation of every step in the pipeline. In the example above, each step would probably be its own little Ansible playbook, as sketched below. Passing parameters from playbook to playbook isn't that straightforward, but we know how to do it.
I am not 100% sure I am doing this to the best standard: while creating these pipelines I keep thinking that this part should probably be Ansible, and the other way around.
Is there a sweet spot for using these two together?
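Roughly what I mean by "each step is its own little playbook" – all playbook names and the "ansible" agent label here are made up for illustration:

```groovy
// Sketch: the pipeline handles ordering and user input, each stage runs one
// small Ansible playbook.
pipeline {
    agent { label 'ansible' }   // assumed label for a node with Ansible installed
    stages {
        stage('Start instance')     { steps { sh 'ansible-playbook start-instance.yml' } }
        stage('Run migrations')     { steps { sh 'ansible-playbook migrations.yml' } }
        stage('Deploy and restart') { steps { sh 'ansible-playbook deploy.yml' } }
        stage('Approve AMI') {
            steps { input message: 'Create new AMI from this instance?' }
        }
        stage('Create AMI')         { steps { sh 'ansible-playbook create-ami.yml' } }
        stage('Terminate')          { steps { sh 'ansible-playbook terminate.yml' } }
    }
}
```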
Well, you are indeed well aware of the limitations that each tool brings to the table.
The sweet spot is whatever works best for you and your company. Ask yourself: which approach would be easier to manage? Which one would become too complicated when scaling?
I've done both approaches and found that the pipeline tools from Jenkins work best in terms of "readability" and ease of management. This was especially evident when we brought new members onto the team and they could get a quick overview of the processes just by looking at our pipelines in Jenkins.
We have also used a combination of Jenkins (just CI) + Nexus (artifact management) + Octopus (just CD) + Ansible (provisioning) to handle everything in complicated pipelines.
Again, ask yourself what would be easier to manage and what is most likely to grow over time (number of steps in the pipeline, number of pipelines or jobs, number of servers to manage, etc.) and make a decision based on that.
Best Regards,
I am fairly new to Jenkins pipeline and am considering migrating an existing Jenkins batch to use pipeline script.
This may be an obvious question to those in the know, but I have not been able to find any discussion of it anywhere. If you have a fairly complex set of jobs, say a few hundred, is it best practice to end up with one job with a fairly large script, or with a small number of jobs (probably parameterized, say 5 to 10) with smaller pipeline scripts that call each other?
Having one huge job has the severe disadvantage that you can no longer easily execute individual stages on their own. On the other hand, splitting everything into different jobs has the disadvantage that many of the nice pipeline features (shared variables, shared code) can no longer be used. I do not think there is a single answer to this.
Have a look at the following two related questions:
Jenkins Build Pipeline - Restart At Stage
Run Parts of a Pipeline as Separate Job
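If you do split the work into several parameterized jobs, the usual way to have one pipeline call the next is the build step. A hedged sketch – the job name 'deploy-app' and the TARGET_ENV parameter are illustrative only:

```groovy
// An "orchestrator" pipeline triggering a smaller parameterized job and
// waiting for its result.
node {
    stage('Build') {
        sh 'make package'   // placeholder for the actual build
    }
    stage('Trigger deploy job') {
        build job: 'deploy-app',
              parameters: [string(name: 'TARGET_ENV', value: 'staging')],
              wait: true
    }
}
```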
I am trying to solve a problem with Mesos. I have three build servers for Jenkins, and Jenkins schedules jobs on them through Mesos.
Right now, Mesos loads one agent (slave) as heavily as possible, but I want it to spread jobs across all agents.
As I see it, it's better to run three jobs on three agents than all on one.
Is it possible to randomise job scheduling?
Or perhaps this scenario: two large servers and one mini server. I want to schedule jobs on the mini server by default, and if it does not have enough resources, fall back to the large servers. How can I achieve this? Is it possible to set a priority for agents (slaves) to specify on which agent I want a job to run first?
The Mesos plugin for Jenkins attempts to build on the most recently built slave (see this method). This means that once it has built on a machine, it will keep scheduling additional jobs on that machine, as long as it still has spare resources, until it is full. Right now it looks like that behaviour isn't configurable (I have filed it as a feature request).