Jenkins' webpage doesn't really say anything about server specifications. The thing is that I have to request a server for CI from the systems department, and I have to specify those requirements and, of course, justify them.
I have to decide the following things:
Hard disk capacity for the whole server, including the OS. The hardware providers consider this the most critical spec.
RAM.
Number of cores.
And these are the things to take into account:
The OS they'll provide me will probably be Ubuntu Server.
I'm not going to run more than 1 build simultaneously in 99.9% of cases.
I'm going to work with Moodle, so the source code is quite large (the whole repo is about 700 MB).
Based on my experience with Jenkins and Linux, I would recommend the following configuration:
A CentOS machine or VM (Ubuntu Server is OK too)
Minimum 2 CPU
Minimum 2 GB of RAM
A 30 GB partition for the OS
Another partition for Jenkins (like /jenkins)
Regarding the partition size for Jenkins, it depends on the number of jobs (and the size of their workspaces).
My Jenkins partition is 100 GB (I have around 100 jobs and some large Git repos to clone), which averages out to roughly 1 GB of workspace per job.
The project in which I am working develops a Java service that uses MarkLogic 9 in the backend.
We are running a Jenkins build server that executes (amongst others) several tests in MarkLogic written in XQuery.
For those tests, MarkLogic runs in a Docker container on the Jenkins host (which is running Ubuntu Linux).
The Jenkins host has 12 GB of RAM and 8 GB of swap configured.
Recently I have noticed that the MarkLogic instance running in the container uses a huge amount of RAM (up to 10 GB).
As there are often other build jobs running in parallel, the Jenkins host starts to swap, sometimes even eating up all the swap space,
so that MarkLogic reports it cannot get more memory.
Obviously, this situation leads to failed builds quite often.
To analyse this further, I ran some tests on my PC running Docker for Windows and found that the MarkLogic tests
can be run successfully with 5-6 GB of RAM. The MarkLogic logs show that MarkLogic sees all the host memory and wants to use all of it.
But as we have other build processes running on that host, this behaviour is not desirable.
My question: is there any way to tell MarkLogic not to use so much memory?
We are preparing the Docker image during the build, so we could modify some configuration, but it has to be scripted somehow.
The issue of the container not detecting the memory limit correctly has been identified and should be addressed in a forthcoming release.
In the meantime, you might be able to mitigate the issue by:
changing the group cache sizing from automatic to manual and setting cache sizes appropriate for the allocated resources. There are a variety of ways to set these configs, whether deploying settings from an ml-gradle project, making your own Manage API REST calls, or programmatically (see the sketch after this list):
admin:group-set-cache-sizing
admin:group-set-compressed-tree-cache-partitions
admin:group-set-compressed-tree-cache-size
admin:group-set-expanded-tree-cache-partitions
admin:group-set-expanded-tree-cache-size
admin:group-set-list-cache-partitions
admin:group-set-list-cache-size
reducing the in-memory-limit (also covered in the sketch below)
The in-memory-limit setting specifies the maximum number of fragments in an in-memory stand. An in-memory stand contains the latest version of any new or changed fragments. Periodically, in-memory stands are written to disk as a new stand in the forest. Also, if a stand accumulates a number of fragments beyond this limit, it is automatically saved to disk by a background thread.
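For illustration, here is a minimal Python sketch that applies both mitigations through Manage API REST calls (one of the options mentioned above). The group name (Default), database name (Documents), credentials, and all sizes are placeholder assumptions, and the property names are assumed to mirror the admin functions listed above, so verify them against your MarkLogic version before scripting this into the image build.

    # Minimal sketch: set manual cache sizes for a group and reduce a
    # database's in-memory-limit via the MarkLogic Manage REST API.
    # Names, credentials, and sizes are placeholders to be tuned.
    import requests
    from requests.auth import HTTPDigestAuth

    AUTH = HTTPDigestAuth("admin", "admin")  # MarkLogic uses digest auth
    BASE = "http://localhost:8002/manage/v2"

    # 1) Switch the group from automatic to manual cache sizing
    #    (sizes in MB; tune them to the container's memory allocation).
    group_props = {
        "cache-sizing": "manual",
        "list-cache-size": 512,
        "compressed-tree-cache-size": 256,
        "expanded-tree-cache-size": 512,
    }
    resp = requests.put(BASE + "/groups/Default/properties",
                        json=group_props, auth=AUTH)
    resp.raise_for_status()

    # 2) Reduce the in-memory-limit (max fragments per in-memory stand).
    db_props = {"in-memory-limit": 131072}
    resp = requests.put(BASE + "/databases/Documents/properties",
                        json=db_props, auth=AUTH)
    resp.raise_for_status()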
I have a Jenkins server whose GUI is super slow; whenever I refresh, it takes at least a few minutes to respond. I am copying the entire Jenkins folder from one server to another and starting Jenkins.
A couple of scenarios I have tested:
1. Setting up Jenkins on a high-performance server, i.e., 64 GB RAM, 16 vCPUs, and a 300 GB hard drive.
2. Increasing the server's Java memory arguments to 16 GB for both min and max (-Xms16g -Xmx16g) and making sure it uses the G1GC algorithm (-XX:+UseG1GC).
I expect Jenkins to be as fast as possible.
I want to use Jenkins to automate tasks belonging to a web application running on Tomcat on the same server.
As the application is quite critical, is it wise to install Jenkins on the same virtual machine?
There will be 2 JVMs running on the same virtual machine; could that be a problem in terms of memory, CPU, stability, etc.?
Should I take care of anything in particular, or would it be better to install Jenkins on another virtual server?
As the application is quite critical, is it wise to install Jenkins on the same virtual machine?
No, it's not wise. If you are already in a virtualized environment, create a new VM for Jenkins (or at least separate them, e.g., run your prod and Jenkins in two different Docker containers). Why is it not wise, apart from performance? Your tests may even crash the VM if you are unlucky. Or they can eat up your resources (file locks, network ports, etc.). Or Jenkins can overwrite your production code if you did not set it up properly (e.g., it deploys into the same folder where your prod is).
There will be 2 JVMs running on the same virtual machine; could that be a problem in terms of memory, CPU, stability, etc.?
Of course it can be a performance problem, but not because of the 2 JVMs; rather because of the tests themselves (if you have a bigger project with a lot of tests, that most probably eats performance away).
Should I take care of anything in particular, or would it be better to install Jenkins on another virtual server?
Just run it in another VM, or even better, on another physical machine (if you are in a hosted environment, e.g., AWS, disregard the last point).
Edit, adding crucial information: "it will just replace cron".
Yeah, in this case it should be OK. Jenkins itself does not use too many resources (otherwise it wouldn't be as usable as a build server/scheduler), and about the worries over the additional JVM: I have seen many production environments with dozens of JVMs running in parallel. It all comes down to the individual scenarios: what is this production stuff doing? (Heavy I/O? Heavy networking? Just listening in order to occasionally serve a REST resource? Idly collecting randomness from the ether?) And again: what are your specs for the VM and the hardware it is running on? This is a very complex question, which depends on:
the software/service
the OS (yes, it does matter whether it's Ubuntu, RedHat, SUSE, etc.)
the VM parameters (how many vCPUs does it have? How much RAM? Is it KVM-based, VMware, or something else?)
the hardware underneath (is it beefy enough? What are your over-/under-provisioning ratios? Can your network bear the load?)
It's a question where all departments (Infra, DevOps, SE, etc.) have to work together.
This might be a stupid question, but a question asked in a recent interview left me pondering how Docker manages the machine configuration. When I said Docker makes it possible to have the same environment for your application in production, staging, and development, they asked me this:
If the production configuration for your application is something like 64 GB of RAM, a 1 TB SSD, and so on, and your development configuration is a much more meagre 8 GB of RAM and a 512 GB normal hard disk, how does Docker make the environments similar?
I was dumbstruck!
Docker allows you to limit the resources available to each container (at least it is possible now), e.g., via docker run --memory and --cpus.
But in any case, the resources depend on the circumstances, and they can change; there is no reason to tie your apps to a static hardware configuration.
The point of Docker is to make the software environment consistent, not the hardware one. Docker does not keep you from scaling vertically: without Docker you have vertical scaling, and by using Docker you additionally gain the ability to scale horizontally.
The premise of the question is wrong. If you had a host with 10 GB of RAM and a container stuck on 8 GB of RAM, and, say, your visitor numbers were low and you had to scale the host down to 5 GB of RAM to lower your costs, then guess what: you could not. Why? Because the container is stuck on 8 GB and would crash if the real RAM were lower than that. (Actually, in newer Docker versions you set a maximum, and it is not static, i.e., the memory is not occupied the moment the container runs.)
Remember, Docker is about having horizontal and vertical scaling at the same time!
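For what it's worth, here is a minimal sketch of such a resource limit using the Docker SDK for Python (pip install docker); the image name and the limit values are placeholders:

    # Minimal sketch: run a container with hard memory and CPU ceilings
    # via the Docker SDK for Python. Image and limits are placeholders.
    import docker

    client = docker.from_env()
    container = client.containers.run(
        "myapp:latest",           # placeholder image
        detach=True,
        mem_limit="8g",           # hard RAM ceiling (docker run --memory 8g)
        memswap_limit="8g",       # equal to mem_limit => no extra swap
        nano_cpus=2_000_000_000,  # 2 CPUs (docker run --cpus 2)
    )
    print(container.short_id)

This has the same effect as docker run --memory 8g --memory-swap 8g --cpus 2 on the command line: the container is capped no matter how big the host is.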
I'm getting my slave processes killed when running more than 300 clients per node.
The machines are hosted in Rackspace and the configuration of each one is:
512MB RAM
1 CPU
Should I increase the memory of the machines?
It's totally dependent on the code of your test scripts. Generally, I would say it's more common to be limited by CPU before RAM; however, in your case it sounds like it might be the Linux OOM killer killing the processes. The reason could be a memory leak in the test code, but it's really hard to tell without more info.
Also, on machines with only a single CPU core, you shouldn't run more than one Locust slave process.
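For reference, here is a minimal locustfile sketch in the current Locust API (what the question calls a "slave" is called a "worker" in newer versions); the endpoint is a placeholder, and the comment points at the kind of retention in test code that typically triggers the OOM killer on a 512 MB node:

    # Minimal locustfile sketch (current Locust API; "slaves" are now
    # called "workers"). The endpoint below is a placeholder.
    from locust import HttpUser, task, between

    class WebsiteUser(HttpUser):
        wait_time = between(1, 3)  # seconds of think time between tasks

        @task
        def index(self):
            # Keep tasks lean: accumulating response bodies in lists or
            # instance attributes across requests is a classic leak that
            # gets processes OOM-killed on 512 MB machines.
            self.client.get("/")  # placeholder endpoint

Per the advice above, you would start exactly one such worker process per core, e.g., locust -f locustfile.py --worker.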