gerrit upgrade to 3.3.0 causes heavy load - gerrit

Recently upgraded Gerrit from version 2.15.17 to version 3.3.0, and after the upgrade the load on the Gerrit server is high. It is a 12-core Linux machine with 62 GB of RAM, and all cores are constantly at 100%; the Gerrit process alone always sits at around 1000% CPU. The Java heap is set to 42 GB. Could you please let me know how Gerrit can be tuned in order to improve the performance?
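A common first step is to give the JVM a smaller heap and leave the rest of the RAM to the operating system's page cache, and to raise the JGit buffers in $site_path/etc/gerrit.config (note that online reindexing after a major upgrade can also keep all cores busy until it finishes). The values below are illustrative assumptions for a 12-core / 62 GB box, not measured recommendations:
    [container]
      heapLimit = 30g            # leave a large share of the 62 GB to the OS page cache
    [core]
      packedGitLimit = 4g        # JGit buffer for packed object data
      packedGitWindowSize = 16k
      packedGitOpenFiles = 4096  # pack files kept open concurrently
Changes take effect after a restart, e.g. $site_path/bin/gerrit.sh restart.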

Related

Self-hosted GitLab server is giving major accessibility issues

Our self-hosted GitLab server randomly breaks and we couldn't figure out why. This random behavior affects our deployments; everything gets too slow. After a restart it stays up for a few hours, then goes down and throws 500 or 502 errors. After bringing it back up, I see either the Sidekiq or the Gitaly metrics on the omnibus Grafana dashboard drop compared to the other services.
What do these services do, and how do I debug this issue?
[Sidekiq metrics screenshot]
[Gitaly metrics screenshot]
System Info:
OS - CentOS 6.10 (Final)
CPU - 8 cores
Memory usage - 8.82 GB / 15.6 GB
Disk Usage -
root : 111 GB / 389 GB
boot : 169 MB / 476 MB
/mnt/resource : 8.25 GB / 62.9 GB
I've met the same kind of issue after a few months of not using an instance I had set up for one of my customers. You might want to check that your instance is up to date and update it if not - some vulnerabilities might be at fault here.
In my case it was a weird process eating all my CPU; I suspect some kind of cryptocurrency miner was run using a known exploit that was fixed in a later update. Everything went back to normal after I killed it and updated the version.
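For context, Sidekiq runs GitLab's background jobs and Gitaly serves repository access for the other components, so either of them degrading will slow pushes, merges, and CI. To rule out a runaway process and see which component is failing, the stock omnibus tooling is usually enough; a minimal sketch (service names are the default omnibus ones):
    # show which bundled services are up and their PIDs
    sudo gitlab-ctl status
    # follow the logs of the components whose metrics dropped
    sudo gitlab-ctl tail sidekiq
    sudo gitlab-ctl tail gitaly
    # list the processes using the most CPU, to spot anything that is not GitLab's
    ps aux --sort=-%cpu | head -n 15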

How to increase the memory of the TFS TEAMCITY process

We are running TeamCity connected to TFS. On our TeamCity server there are two Java processes: one running TeamCity itself and one connecting to TFS. For the TeamCity process, I am able to increase the amount of RAM by updating the TEAMCITY_SERVER_MEM_OPTS environment variable.
For the other process, the one connecting to TFS, I am not able to increase the RAM. I was able to retrieve the command line for this process and noticed there was only 1 GB of memory allocated to it, as can be seen in the following command line:
C:\TeamCity\jre\bin\java -Dfile.encoding=UTF-8 -Dcom.microsoft.tfs.jni.native.base-directory=C:\TeamCity\webapps\ROOT\WEB-INF\plugins\tfs\tfsSdk\14.119.2\native -Xmx1024M
The real issue is that the second Java process is taking up 100% CPU, and hopefully increasing the memory for this process will alleviate the issue.
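Before raising any limits it is worth confirming that the 100% CPU really is garbage-collection churn caused by the 1 GB heap. A quick check with the JDK tools, assuming a JDK is available on the box (the process ID and the 2 GB value below are placeholders for illustration):
    REM sample GC activity of the TFS helper process every 5 seconds;
    REM old-gen near 100% with frequent full GCs means the heap is too small
    jstat -gcutil <PID> 5000
    REM the main TeamCity server heap (not the TFS helper's) is set via the variable above
    setx TEAMCITY_SERVER_MEM_OPTS "-Xmx2048m"
If the heap does turn out to be the bottleneck, the -Xmx1024M in the helper's own command line is what would need raising.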

gitlab (puma, sidekiq) how to minimize memory use

I have GitLab on my VPS with 4 GB of RAM. But after upgrading Debian to version 10 and upgrading GitLab, the memory is not enough. Puma and Sidekiq occupy roughly 13% of memory.
I reduced some parameters in gitlab.rb, but it helped only a little.
Does anybody have advice? Thanks
2.5 GB of physical RAM + 1 GB of swap should currently (2022) be enough to run GitLab:
Running GitLab in a memory-constrained environment
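For reference, the knobs that guide adjusts live in /etc/gitlab/gitlab.rb; a minimal sketch with illustrative values for a small VPS (the numbers are assumptions, not tested recommendations):
    # fewer Puma workers (0 switches Puma to single-process mode)
    puma['worker_processes'] = 2
    # fewer Sidekiq worker threads
    sidekiq['max_concurrency'] = 10
    # disable the bundled monitoring stack if you do not use it
    prometheus_monitoring['enable'] = false
    # keep PostgreSQL's shared buffers modest
    postgresql['shared_buffers'] = "256MB"
Apply the changes with sudo gitlab-ctl reconfigure.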

Why is Ruby on Rails testing super slow on Linux?

I have reviewed blog posts from 2008 to date. I have inherited a Ruby on Rails project for which I need to expand the test suite.
I work on an Asus laptop with an 8th-gen Intel Core i7 U-series CPU, 16 GB of RAM, and a 512 GB SSD.
Initially I was running Ubuntu 19.10 when I started the project, and with about 1200 tests it takes more than an hour to run. Whereas on a 2015 MacBook Pro with 8 GB of RAM and an HDD, it takes only 2-3 minutes.
The log/test.log does not report errors and the tests do not hang, but waiting this long is not efficient, especially as I'll be increasing the number of tests.
So I uninstalled Ubuntu, wiped the SSD, and installed Solus, Arch, and Ubuntu, with the same setup for all (asdf as the version manager), and on no distro is the time less than an hour.
Does anyone know why this happens on Linux? The Mac setup also goes through asdf and it is fast enough.
Without knowing the specifics of the codebase or the tests, this question is equivalent to "how long is a piece of string."
There are many differences between Linux and macOS. Cryptographic libraries may have different defaults. Memory limits for threads will be different. Memory limits for processes may be different.
Unless you can isolate specific tests which are wildly different and extrapolate from there, it's almost certainly going to come down to OS-level differences.
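One way to start isolating it is to time individual test files and see whether the slowness is spread evenly or concentrated in a few places (the test path below is hypothetical):
    # run the whole suite with per-test timings
    time bin/rails test -v
    # time a single file to compare directly against the Mac
    time bin/rails test test/models/user_test.rb
If a handful of tests dominate, look at what they touch (database, network, filesystem) on both machines before blaming the distro.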

hardware requirements for PlasticSCM server

I'm evaluating PlasticSCM on a VMware machine with 4 GB of RAM and a 4-core CPU. Since I ported our trunk into the server (about 6 GB of data), the service has run out of memory (started swapping). I've increased the VM RAM to 6 GB. This is actually more than I'd like to load the host system with, since I've also got VMs for the PlasticSCM client, the TeamCity server, and a TeamCity agent.
I was trying to find a spec with details on the hardware requirements for running the PlasticSCM server that accounts for scaling. So far, I've only found the minimum requirements (512 MB RAM etc.) and the system information from your heavy-load and scale test. As far as I can see, it's all about RAM. :)
Anyway, is there a detailed spec with recommendations for the hardware to be used?
P.S.: Of course, in case of switching to Plastic we'd run the service on a real machine instead of a VM.
