I have a Jenkins controller running on-prem at the company I work for. We are moving to the cloud and have started by migrating our Jenkins agents to AWS EKS (Kubernetes). We use quite powerful instances that should have plenty of CPU horsepower and RAM, yet even when I assign every agent 4 CPUs and more than 8 GB of RAM, jobs take more than double the time they took with agents on the same host as the controller, which had less CPU and RAM.
At first we thought the issue was network-related, so we introduced package caches, Jenkins tool caches, and anything else we could cache.
Also, oddly, even executing tests takes more than twice as long on the cloud instances. The tests run locally without fetching any artifacts from the web or from our company repository.
Does anyone have an idea why everything is so slow? Is it because the controller and agent have to communicate a lot? Is there any way to make things faster? How can I find out why things take so long?
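One way to start narrowing this down is to check whether the agent containers are being CPU-throttled: even with generous requests, a Kubernetes CPU *limit* enforces a CFS quota that can stall builds while the node still looks idle. A rough diagnostic sketch, assuming the agent pod is named `jenkins-agent-abc123` in a `jenkins` namespace (both placeholders for your actual names):

```shell
# Compare live usage against the pod's requests/limits:
kubectl top pod jenkins-agent-abc123 -n jenkins

# Inspect CFS throttling counters inside the agent container
# (cgroup v1 path shown; on cgroup v2 it is /sys/fs/cgroup/cpu.stat).
# A growing nr_throttled means the kernel is pausing the container
# even when the node has spare CPU:
kubectl exec -n jenkins jenkins-agent-abc123 -- \
  cat /sys/fs/cgroup/cpu/cpu.stat
```

If `nr_throttled` climbs during a build, raising or removing the CPU limit (while keeping the request) often helps. Disk speed is the other usual suspect: workspaces on network-backed volumes such as EBS can be far slower than the local disks the on-prem agents had, which would also explain slow test runs that never touch the network.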
Related
I'm doing a PoC of moving Jenkins to Kubernetes on GKE. All is fine and well, but now that I look at the time each job takes to execute, it's kind of unacceptable.
On a VM, the timings for some jobs were 1.5 min, 10.5 min, and 3.5 min; on Kubernetes with a Jenkins agent (with 8 CPUs and 14 GB RAM, which it never comes close to using) they are 5 min, 35 min, and 6.5 min. Any thoughts on how to improve the performance, or why this is happening? It's just as normal a computing environment as the previous VM; CPU/RAM isn't the issue because the agent has plenty of both.
We have a setup of 200 VMs as build agents, working with TeamCity. We are thinking of saving the VMware license cost by moving to Docker. Does anyone have prior experience with which provides better performance?
My goal is not to compromise on performance, but if Docker gives even the same performance as VMware, we'll switch to Docker.
My build-agent VMs run either Windows or Ubuntu. Builds on Linux mainly use Python, and the Windows systems mainly use Visual Studio (different versions). We'll be doing performance tests ourselves, but I want to know if someone has done this before and experienced any benefits.
I've recently built my own ephemeral Docker container build agents. I've been using them for about four months, building for 25+ different projects. The dependency management is so much nicer than having different VMs running your build agents. You also have the option of running many build agents on a single VM. I did not see a performance decrease when I switched from VMs to Docker build agents. Using a swarm manager, it is very easy to spin up more or fewer agents depending on your needs.
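For example, if the agents run as a swarm service (the service name `tc-agent` here is a placeholder), scaling them up or down is a one-liner:

```shell
# Scale the build-agent service up for a busy period...
docker service scale tc-agent=10

# ...and back down afterwards:
docker service scale tc-agent=2
```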
If you are interested, I also have a helpful script that automates authorizing a new agent from Docker. TeamCity has no way to automatically authorize an agent; it seems it has to be done through the API.
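For reference, the TeamCity REST API can flip an agent's authorized flag, which is the core of what such a script has to do. A minimal sketch (the server URL, token, and agent name are placeholders for your own values):

```shell
TEAMCITY_URL="https://teamcity.example.com"  # placeholder server
TOKEN="my-access-token"                      # placeholder access token

# Authorize the newly started agent, located by name, via the REST API:
curl -s -X PUT \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: text/plain" \
  --data "true" \
  "${TEAMCITY_URL}/app/rest/agents/name:docker-agent-1/authorized"
```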
TL;DR: Docker build agents are much easier to use, have less overhead, and are ephemeral, with no (visible) performance decrease.
Let's start by agreeing that we want to adhere to typical Docker/DevOps principles. Therefore, we want to keep tasks isolated, configurations version-controlled, and overall customization to a minimum.
The Landscape:
Jenkins is being used as the CI/CD tool on your cloud instance of choice.
The Plan:
Create separate instances for test/staging/prod, each with Docker installed
Spin up Jenkins slave containers on each instance, controlled by the Jenkins master
When a commit is pushed to the 'test' branch, the Jenkins master sends the task to the 'Test' slave, which ultimately spins up a version of the application
Similarly, after tests run successfully and code is pushed to the staging or prod branches, Jenkins has the branch-respective slave build the application.
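The branch-to-slave routing in the plan above can be sketched as a small dispatch helper (the label names here are placeholders I've invented, not anything Jenkins defines):

```shell
# pick_label: map a pushed branch name to the Jenkins slave label
# that should build and deploy it.
pick_label() {
  case "$1" in
    test)    echo "slave-test" ;;
    staging) echo "slave-staging" ;;
    prod)    echo "slave-prod" ;;
    *)       echo "no slave mapped for branch: $1" >&2; return 1 ;;
  esac
}

pick_label test    # prints slave-test
```

In practice Jenkins does this mapping itself via branch triggers and node labels, but keeping the routing explicit like this makes it easy to see that every branch has exactly one target environment.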
The Question(s):
What is wrong with this approach?
What can be improved in this approach?
There are a few questions you should ask yourself when taking this approach; a lot of them are covered in this blog post.
The final paragraph suggests exposing the Docker socket to the CI container, allowing you to build images on the host machine instead of inside the CI container, saving you a lot of the pain that comes with running Docker-in-Docker.
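Concretely, the socket-mount approach looks like this (the image and container names are placeholders):

```shell
# Bind-mount the host's Docker socket so that "docker build" inside
# the CI container talks to the host daemon instead of a nested one:
docker run -d --name ci-agent \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jenkins-agent-image
```

One caveat worth knowing: anything with access to that socket effectively has root on the host, so only do this for CI containers you trust.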
Another question you should probably ask is which orchestration service to use for controlling the master and slave containers. I had a great time following this blog post by Stelligent to quickly create everything I needed on AWS ECS using a CloudFormation stack, but other solutions are obviously an option.
So all in all, I don't see anything wrong with your approach, as long as you exercise caution and follow best practices.
Good luck.
I have more than 30 rake tasks added to Jenkins for scheduling jobs. (Rails project)
But the Jenkins server goes down frequently and uses 100% of the CPU most of the time.
Please suggest a better job scheduler than Jenkins, one that is also capable of
steps like:
Notify by email when jobs fail
Log each job's terminal output
Add dependencies between jobs
Your question seems to boil down to "recommend me a CI server".
But why does Jenkins fall over and/or use 100% CPU most of the time? I'd be looking into why that is. My experience of Jenkins is that it is pretty stable and low-overhead. If your hardware, OS, or something else is flaky or simply under-provisioned for the task, then swapping Jenkins out isn't going to fix that.
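If you want to dig in before replacing anything, a first pass at profiling the CPU usage could look like this (assuming a Linux host where Jenkins runs as a Java process started from `jenkins.war`):

```shell
# Find the Jenkins process and watch per-thread CPU usage:
JPID="$(pgrep -f jenkins.war | head -1)"
top -H -p "$JPID"

# Dump thread stacks; jstack reports native thread IDs in hex
# (nid=0x...), which you can match against the hot threads in top:
jstack "$JPID" > /tmp/jenkins-threads.txt
```

A hot thread pinned in a plugin or in polling code usually points at the real culprit far faster than swapping CI servers would.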
I am using Jenkins for CI. I've heard that I should have a dedicated server for running Jenkins, with separate slaves for running build tasks. Is this true?
I can understand this, as the server may not be powerful enough to handle both Jenkins itself and the build tasks, but is there any defined technical reason for it?
Best practice is to have a separate machine for the Jenkins server, and not to use it for builds at all.
This has nothing to do with CPU power or memory resources: a build machine should have a predefined configuration, and Jenkins should not be part of it. (Jenkins' requirements may even conflict with those of the build machine.)
You should be able to boot, clone, upgrade, restore, or trash the build machine without any impact on Jenkins.
Of course you can settle for a single machine if your resources are limited, but if you are serious about build automation, Jenkins should have its own server.
You probably don't need dedicated hardware/VM to run a Jenkins server because the actual Jenkins process (no builds running) uses very little resources. But it all depends on what you want to accomplish with your Jenkins setup.
Do you want to run continuous builds across multiple platforms for multiple projects? Then using a master with slaves is the only way to go. If, on the other hand, you're running fairly simple builds for just a few projects, then you only need one machine to run the builds and the Jenkins process.
You can configure Jenkins to have multiple builds running concurrently so if you have a quad-core machine, you can safely run 2 builds and possibly a third once you analyze resource usage.
At my last gig, I used a quad-core machine with 8GB RAM to run:
Jenkins running Selenium builds
VirtualBox VM with Windows XP
Two instances of Tomcat each with two applications deployed.
And the machine still had more to spare.