I am aware of https://wiki.jenkins-ci.org/display/JENKINS/Consideration+for+Large+Scale+Jenkins+Deployment, which talks about large-scale deployment, and I plan to use Jenkins slaves to run performance benchmarking jobs on machines. But I was wondering how much impact running the Jenkins slave on these machines will have on the performance numbers, since the process being benchmarked is highly CPU intensive. Is it a bad idea to use Jenkins to manage the runs? Should I keep doing these things manually instead?
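If you want to quantify that impact rather than guess, one option is to sample the agent process's own CPU use while a benchmark runs. Below is a minimal sketch, assuming Python with the psutil package and an agent launched via the usual agent.jar (the process-matching string is an assumption you may need to adjust):

```python
# Sketch: sample the Jenkins agent JVM's CPU usage while a benchmark runs,
# to estimate how much the agent itself perturbs the numbers.
# Assumes the agent was launched as "java ... -jar agent.jar" (adjust the marker).
import time
import psutil

def find_agent_processes(marker="agent.jar"):
    """Return psutil.Process objects whose command line mentions the agent JAR."""
    procs = []
    for p in psutil.process_iter(["pid", "cmdline"]):
        cmdline = " ".join(p.info["cmdline"] or [])
        if marker in cmdline:
            procs.append(p)
    return procs

def sample_overhead(duration_s=60, interval_s=1.0):
    agents = find_agent_processes()
    if not agents:
        raise RuntimeError("No Jenkins agent process found")
    for p in agents:
        p.cpu_percent(interval=None)   # prime the counter; the first call returns 0.0
    samples = []
    for _ in range(int(duration_s / interval_s)):
        time.sleep(interval_s)
        # 100% corresponds to one fully busy core
        samples.append(sum(p.cpu_percent(interval=None) for p in agents))
    print(f"Agent CPU usage: avg {sum(samples)/len(samples):.1f}%, "
          f"peak {max(samples):.1f}% (of one core)")

if __name__ == "__main__":
    sample_overhead()
```

An otherwise idle agent is mostly a JVM blocked waiting on its connection to the controller, so on a dedicated node with a single executor the overhead tends to be small, but measuring it on your own hardware is the only way to know what it does to your benchmarks.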
Related
I have a Jenkins controller running on-prem at the company I work for. We are moving to the cloud and have started by migrating our Jenkins agents to run on AWS EKS (Kubernetes). We use quite powerful instances which should have plenty of CPU horsepower and RAM, but even though I assign every agent 4 CPUs and more than 8 GB of RAM, the jobs take more than double the time they took with agents on the same host as the controller, which had less CPU and RAM.
At first we thought the issue was network-related, so we introduced package caches, Jenkins tool caches, and anything else we could cache.
Also, weirdly, even executing the tests takes more than twice as long on the cloud instances, and the tests run locally without fetching any artifacts from the web or from our company repository.
Does anyone have an idea why everything is so slow? Is it maybe because the controller and agent have to communicate a lot? Is there any way to make things faster? How can I find the reason why things take so long?
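One way to narrow this down is to compare per-stage timings of the same pipeline on the old and new agents instead of just the overall duration. A rough sketch, assuming the Pipeline Stage View plugin's wfapi endpoint is available and using placeholder job names, URL, and credentials:

```python
# Sketch: compare per-stage durations of a pipeline build between two Jenkins jobs
# (e.g. one still running on on-prem agents, one on the EKS agents).
# JENKINS_URL, job names and credentials below are placeholders.
import requests

JENKINS_URL = "https://jenkins.example.com"
AUTH = ("user", "api-token")

def stage_durations(job, build="lastSuccessfulBuild"):
    """Return {stage name: seconds} using the Pipeline Stage View wfapi endpoint."""
    r = requests.get(f"{JENKINS_URL}/job/{job}/{build}/wfapi/describe", auth=AUTH)
    r.raise_for_status()
    return {s["name"]: s["durationMillis"] / 1000.0 for s in r.json()["stages"]}

onprem = stage_durations("my-app-onprem")
eks = stage_durations("my-app-eks")
for stage in onprem:
    print(f"{stage:30s} on-prem {onprem[stage]:7.1f}s   EKS {eks.get(stage, float('nan')):7.1f}s")
```

If only the checkout and dependency-resolution stages blow up, that points at network or caching; if the test stage itself is slower, look at CPU requests/limits and throttling on the agent pods, which is a common culprit even when the pod looks generously sized.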
I am running builds on OpenShift Jenkins, and due to a swap memory issue Jenkins is running very slowly; as a result, builds are taking longer or failing. Is there a way to increase the efficiency of Jenkins or improve its speed?
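Before tuning Jenkins itself, it helps to confirm that the node (or pod) really is swapping and by how much, and then size the Jenkins JVM heap (-Xmx) below the physically available memory. A small sketch using Python's psutil, run on the node or inside the Jenkins pod:

```python
# Sketch: check whether the Jenkins node is actually swapping.
import psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()
print(f"RAM : {vm.total / 2**30:.1f} GiB total, {vm.available / 2**30:.1f} GiB available")
print(f"Swap: {sw.total / 2**30:.1f} GiB total, {sw.used / 2**30:.1f} GiB used ({sw.percent}%)")
if sw.used > 0:
    print("The node is swapping; consider lowering the Jenkins JVM -Xmx or adding RAM.")
```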
I'm doing a PoC of moving Jenkins to Kubernetes on GKE. All is fine and well, but now that I look at the time each job takes to execute, it's kind of unacceptable.
On the VM, times for some jobs were 1.5 min, 10.5 min, and 3.5 min; on Kubernetes with a Jenkins agent (with 8 CPUs and 14 GB RAM, which it never comes close to using) they are 5 min, 35 min, and 6.5 min. Any thoughts on how to improve the performance, or why this is happening? It's just as much computing capacity as the previous VM, so CPU/RAM shouldn't be the issue, because the agent has plenty of both.
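One thing worth ruling out on Kubernetes is CFS CPU throttling: a pod can have generous requests yet still be throttled by its CPU limit, which fits the "plenty of CPU but everything is slow" picture. A quick check you can run inside the agent container (the cpu.stat paths depend on whether the node uses cgroup v1 or v2):

```python
# Sketch: read CFS throttling counters inside the agent container.
# A growing nr_throttled during a build means the container is hitting its CPU limit.
from pathlib import Path

CANDIDATES = [
    Path("/sys/fs/cgroup/cpu.stat"),             # cgroup v2
    Path("/sys/fs/cgroup/cpu/cpu.stat"),         # cgroup v1
    Path("/sys/fs/cgroup/cpu,cpuacct/cpu.stat"),
]

for path in CANDIDATES:
    if path.exists():
        stats = dict(line.split() for line in path.read_text().splitlines())
        print(f"{path}:")
        for key in ("nr_periods", "nr_throttled", "throttled_time", "throttled_usec"):
            if key in stats:
                print(f"  {key} = {stats[key]}")
        break
else:
    print("No cpu.stat found; check your cgroup setup")
```

If nr_throttled keeps growing while a job runs, a common first step is to raise or remove the CPU limit on the agent pod template (or set the request equal to the limit) and make sure the build JVMs are container-aware.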
We have a setup of 200 VMs as build agents working with TeamCity. We are thinking of saving the VMware license cost by moving to Docker. Does anyone have prior experience with which would provide better performance?
My goal is not to compromise on performance, but if Docker gives even the same performance as VMware, we'll switch to Docker.
My build agent VMs run either Windows or Ubuntu. Builds on Linux mainly use Python, and the Windows systems mainly use Visual Studio (different versions). We'll be doing performance tests ourselves, but I want to know if someone has done this before and seen any benefits.
I've recently built my own ephemeral Docker container build agents. I've been using them for about 4 months, building for 25+ different projects. The dependency management is so much nicer than having different VMs running your build agents. You also have the option to run many build agents on a single VM. I did not see a performance decrease when I switched from VMs to Docker build agents. Using a swarm manager, it is very easy to spin up more or fewer agents depending on your need.
If you are interested, I also have a helpful script that automates authorizing a new agent from Docker. TeamCity has no way to automatically authorize an agent; it seems it has to be done through the API.
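For reference, that authorization boils down to a single REST call. Here is a minimal sketch in Python (this is not the script referred to above; it assumes TeamCity's standard /app/rest/agents/<locator>/authorized endpoint, and the server URL and credentials are placeholders):

```python
# Sketch: authorize a newly connected TeamCity agent via the REST API.
# SERVER and AUTH are placeholders for your installation.
import requests

SERVER = "https://teamcity.example.com"
AUTH = ("admin", "password")

def authorize_agent(agent_name):
    # PUT plain-text "true" to the agent's "authorized" property
    r = requests.put(
        f"{SERVER}/app/rest/agents/name:{agent_name}/authorized",
        data="true",
        headers={"Content-Type": "text/plain"},
        auth=AUTH,
    )
    r.raise_for_status()

authorize_agent("docker-agent-01")
```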
TL;DR: Docker build agents are much easier to use, have less overhead, and are ephemeral, with no (visible) performance decrease.
How many builds/jobs can we run in parallel at a time on Jenkins?
Jenkins can run as many jobs as you have available "executors". You can change the number of executors at will in the configuration.
The real question is: how many jobs can your hardware handle? This depends entirely on your hardware specs and the type of jobs you are running (Maven compilation, Xcode build, script jobs).
Plus, each job goes through different stages: there is the SCM checkout stage, which is hard disk I/O and network I/O heavy; the build stage, which is CPU and/or memory heavy; and the archiving stage, which is hard disk (but not network) I/O heavy [and this depends on whether you are archiving locally or over the network].
When jobs run in parallel, these stages rarely all line up at the same time. For example: your hardware may only be able to support 5 parallel SCM checkout stages due to network bandwidth limitations, but since they rarely happen at the same time, you would be safe running 10 jobs in parallel.
Finally, it is very unlikely that all your jobs are exactly the same with the same load profile.
So, unless every single one of your jobs is exactly the same, with exactly the same load profile, and you specify your hardware specs and describe the load profile for these jobs (bandwidth required, CPU utilization, etc.), nobody can answer your question.
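That said, if you want a starting point rather than an exact answer, a back-of-envelope calculation per class of job is useful. The sketch below uses made-up numbers and simply takes the tighter of the CPU and RAM bounds, crediting back the share of each job that is spent waiting on I/O:

```python
# Sketch: rough upper bound on parallel executors for one class of job.
# All numbers are illustrative; measure your own jobs to fill them in.
def max_executors(host_cores, host_ram_gb, job_cores, job_ram_gb, io_bound_fraction=0.3):
    """Bound by CPU and RAM, slightly overcommitting CPU to account for the
    share of each job's wall-clock spent on I/O (checkout, archiving)."""
    by_cpu = host_cores / job_cores
    by_ram = host_ram_gb / job_ram_gb
    # I/O-bound time lets CPU be modestly overcommitted; RAM should not be.
    by_cpu_overlapped = by_cpu / (1.0 - io_bound_fraction)
    return int(min(by_cpu_overlapped, by_ram))

# Example: 16-core / 64 GB agent host, Maven-style jobs needing ~2 cores and ~4 GB each
print(max_executors(host_cores=16, host_ram_gb=64, job_cores=2, job_ram_gb=4))  # -> 11
```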
It depends on your servers specifications but try not to run more