Swap memory issue in OpenShift Jenkins making it slow

I am running builds on OpenShift Jenkins, and due to a swap memory issue Jenkins is running very slowly; as a result, builds are taking longer or failing. Is there a way to increase the efficiency of Jenkins or improve its speed?
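A common cause of swapping is the controller's JVM heap being sized too close to (or above) the pod's memory limit. A minimal diagnostic sketch for the Jenkins Script Console (Manage Jenkins -> Script Console), assuming nothing beyond a running controller, prints the JVM's heap usage and startup arguments so you can compare them against the container's memory request/limit:

    // Paste into Manage Jenkins -> Script Console on the controller.
    // If "Heap max" (-Xmx) is close to or above the pod's memory limit,
    // either raise the pod's memory request/limit or lower -Xmx so the
    // JVM stays comfortably inside the container.
    def rt = Runtime.getRuntime()
    def mb = { long b -> "${b.intdiv(1024 * 1024)} MiB" }
    println "Heap used : ${mb(rt.totalMemory() - rt.freeMemory())}"
    println "Heap total: ${mb(rt.totalMemory())}"
    println "Heap max  : ${mb(rt.maxMemory())}"
    println "JVM args  : ${java.lang.management.ManagementFactory.runtimeMXBean.inputArguments}"

On OpenShift specifically, the heap and the container limit are usually adjusted together (the deployment's memory limit plus the image's Java options environment variable); the exact variable name depends on the Jenkins image, so check its documentation.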

Related

Jenkins controller on prem with agents on EKS very slow

I have a Jenkins controller running on-prem at the company I work for. We are moving to the cloud and have started by migrating our Jenkins agents to run on AWS EKS (Kubernetes). We use quite powerful instances which should have plenty of CPU horsepower and RAM, and even when I assign every agent 4 CPUs and more than 8 GB of RAM, the jobs take more than double the time they took with agents on the same host as the controller, which had less CPU and RAM.
At first we thought the issue was network-related, so we introduced package caches, Jenkins tool caches, and anything else we could cache.
Also, oddly, even executing tests takes more than twice as long on the cloud instances, and the tests run locally without fetching any artifacts from the web or from our company repository.
Does anyone have an idea why everything is so slow? Is it maybe because the controller and agent have to communicate a lot? Is there any way to make things faster? How can I find the reason why things take so long?
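One way to localize this kind of slowdown is to time a pure-CPU task and a pure-disk task on the cloud agents, separately from any real build, so you can tell throttled CPU or slow network-backed workspace storage apart from controller/agent chatter. A rough diagnostic sketch, assuming an agent label 'eks' (hypothetical) and the Timestamper plugin for the timestamps() option:

    pipeline {
        agent { label 'eks' }          // hypothetical label for the EKS-based agents
        options { timestamps() }       // requires the Timestamper plugin
        stages {
            stage('CPU baseline') {
                steps {
                    // Pure CPU work: if even this stage is ~2x slower than on-prem,
                    // suspect CPU limits/throttling on the pod or node, not the network.
                    sh 'openssl speed sha256'
                }
            }
            stage('Disk baseline') {
                steps {
                    // Workspace I/O: EBS-backed volumes can be far slower than local disks.
                    sh 'dd if=/dev/zero of=io_test bs=1M count=1024 conv=fdatasync && rm io_test'
                }
            }
        }
    }

If both baselines match the on-prem agents, the remaining suspects are the steps that talk to the controller a lot (archiving, fingerprinting, many small log writes), which are sensitive to the latency between EKS and the on-prem controller.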

Jenkins decrease in performance when moved to Kubernetes

I'm doing a PoC of moving Jenkins to Kubernetes on GKE. All is fine and well, but now that I look at the time each job takes to execute, it's kind of unacceptable.
On the VM, the times for some jobs were 1.5 min, 10.5 min, and 3.5 min; on Kubernetes with a Jenkins agent (with 8 CPUs and 14 GB RAM, which it never comes close to using), they are 5 min, 35 min, and 6.5 min. Any thoughts on how to improve the performance, or why this is happening? It's just as much computing capacity as the previous VM, and CPU/RAM shouldn't be the issue because the agent has plenty of both.
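A frequent cause of "plenty of CPU on paper, slow in practice" on Kubernetes is that the build actually runs in a container with low CPU limits (often the default jnlp container), so it gets throttled no matter how large the node is. A sketch using the Kubernetes plugin's podTemplate DSL with explicit requests and limits on the container the build uses; the image name and resource figures are illustrative, and the parameter names should be checked against your plugin version:

    podTemplate(containers: [
        containerTemplate(
            name: 'build',
            image: 'maven:3.9-eclipse-temurin-17',   // illustrative image; use your build image
            command: 'sleep', args: '99d',
            resourceRequestCpu: '4',  resourceLimitCpu: '8',
            resourceRequestMemory: '8Gi', resourceLimitMemory: '14Gi'
        )
    ]) {
        node(POD_LABEL) {
            container('build') {
                // Run the real work in the 'build' container so these resource
                // settings apply to it, not only to the jnlp sidecar.
                sh 'mvn -B verify'
            }
        }
    }

It is also worth comparing the node pool's machine type and disk (default GKE node disks can be much slower than a VM's local SSD) and checking the pod's CPU throttling metrics before concluding that the agent really "never gets to" its allowance.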

Jenkins not allowing the configuration to be saved

I am running a Jenkins multibranch job, and suddenly it does not allow me to save configuration changes; the page keeps loading without ever timing out.
Can someone please help me with this?
You could have a look at the Jenkins master machine's CPU and memory and see what is consuming them. I have seen this happen when the CPU is near 100%; in that case, restarting the Jenkins process or the Jenkins master machine can help.
Try to remember, or ask colleagues, whether there have been any recent changes to the Jenkins master machine. We had similar issues after installing plugins.
Avoid executing jobs on the Jenkins master; use slave agents instead.
You may also need to clean up old builds if you are not doing this already.
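For the old-build cleanup point above, a build discarder keeps job history bounded, which also keeps configuration pages lighter. A minimal declarative sketch (the retention numbers are illustrative; multibranch projects apply this per branch):

    pipeline {
        agent any
        options {
            // Keep only the most recent builds and artifacts per branch.
            buildDiscarder(logRotator(numToKeepStr: '20', artifactNumToKeepStr: '5'))
        }
        stages {
            stage('Build') {
                steps {
                    sh './build.sh'   // placeholder for the real build step
                }
            }
        }
    }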
In my case, after disabling/enabling all plugins one by one, it turned out to be the "AWS SQS Build Trigger Plugin" that was causing the "Save"/"Apply" buttons to move and not be functional.

What is the Jenkins slave's memory footprint?

I am aware of https://wiki.jenkins-ci.org/display/JENKINS/Consideration+for+Large+Scale+Jenkins+Deployment, which talks about large-scale deployments, and I plan to use Jenkins slaves to run performance benchmarking jobs on machines. But I was wondering how much impact running the Jenkins slave on these machines will have on the performance numbers, since the process being benchmarked is highly CPU-intensive. Is it a bad idea to use Jenkins to manage the runs? Should I keep doing these things manually instead?
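The agent (remoting) JVM itself is usually small compared to the build processes it launches, and you can measure it directly rather than guess. A Script Console sketch that asks each connected agent's JVM for its own heap usage (this covers only the remoting process, not the benchmark processes it spawns):

    import hudson.util.RemotingDiagnostics
    import jenkins.model.Jenkins

    // Groovy snippet executed inside each agent's JVM; triple single quotes keep
    // the ${} expressions from being evaluated on the controller.
    def probe = '''
    def rt = Runtime.getRuntime()
    println "heap used ${(rt.totalMemory() - rt.freeMemory()).intdiv(1024 * 1024)} MiB, max ${rt.maxMemory().intdiv(1024 * 1024)} MiB"
    '''

    Jenkins.get().computers.each { c ->
        if (c.channel == null) { println "${c.displayName}: offline"; return }
        println "${c.displayName}: ${RemotingDiagnostics.executeGroovy(probe, c.channel)}"
    }

To keep benchmark numbers clean, it also helps to give the benchmark node a single executor and cap the agent JVM's heap (for example with -Xmx when launching agent.jar) so it cannot compete with the process under test; whether even that small interference is acceptable depends on how sensitive your benchmark is.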

Jenkins Executor Starvation on CloudBees

I have set up jobs correctly using Jenkins on CloudBees, Janky, and Hubot. Hubot and Janky work and are pushing the jobs to the Jenkins server.
The job has been sitting in the Jenkins queue for over an hour now. I don't see anywhere to configure the number of executors, and this is a completely default instance from CloudBees.
Is the CloudBees service just taking a while or is something misconfigured?
This was a problem in early March caused by the build containers failing to start cleanly at times.
The root cause was a kernel oops that was occurring in the build container as it launched.
This has since been resolved, and you should not experience these long pauses waiting for an executor.
Anything more than 10 minutes is definitely a bug, and typically anything more than about 5 seconds is unusual (although when a lot of jobs are launched simultaneously, the time to get a container can be on the order of 3 minutes).
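The issue above was on the hosted service's side, but if you hit a similar symptom on your own Jenkins, the executor and queue state are easy to inspect yourself. A general Script Console sketch (not specific to the CloudBees service) that lists executors per node and why queued items are waiting:

    import jenkins.model.Jenkins

    def j = Jenkins.get()
    j.computers.each { c ->
        println "${c.displayName}: ${c.countBusy()}/${c.numExecutors} executors busy, offline=${c.offline}"
    }
    j.queue.items.each { item ->
        // item.why explains what the item is waiting for (a label, a free executor, a quiet period, ...)
        println "queued: ${item.task.fullDisplayName} -- ${item.why}"
    }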
