GitLab (Puma, Sidekiq): how to minimize memory use

I have GitLab on my VPS with 4 GB of RAM. But after upgrading Debian to version 10 and upgrading GitLab, the memory is no longer enough: Puma and Sidekiq occupy roughly 13% of memory.
I have reduced some parameters in gitlab.rb, but it helped only a little.
Does anybody have advice? Thanks

2.5 GB of physical RAM plus 1 GB of swap should currently (2022) be enough to run GitLab; see the official guide:
Running Gitlab in a memory constrained environment
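For reference, the settings from that guide that tend to matter most on a 4 GB box are roughly the following (a sketch for an Omnibus install; treat the exact numbers as starting points and check the guide for your GitLab version):
# /etc/gitlab/gitlab.rb -- apply with: sudo gitlab-ctl reconfigure
puma['worker_processes'] = 0             # single-process Puma mode
sidekiq['max_concurrency'] = 10          # fewer Sidekiq worker threads
prometheus_monitoring['enable'] = false  # disable the bundled monitoring stack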

Related

How to limit memory usage when I run Memgraph Platform within Docker?

I've been running Memgraph for a few days now and everything is working as expected. This is the first time that I'm using Docker.
I've noticed that when I shut down the Memgraph Platform my RAM is still used. I need to restart my computer to free up my RAM. Is there some switch that I can use to limit the memory that Memgraph Platform uses? Is there some way to release the memory after I shut it down?
If it is important: my OS is Windows 10 Professional, and I have a six-year-old laptop with 8 GB of RAM.
The issue you are experiencing is not related to Memgraph but to Docker, or more precisely to WSL2. You say that you use Windows 10, so I presume your Docker is configured to use the WSL2 backend.
You didn't write which exact build of Windows 10 you are using, but depending on the build, WSL2 can use up to 80% of your RAM if you don't limit it.
When you run the Docker image you will see a process called vmmem. When you shut down the running Docker image, this process still occupies your RAM; restarting your computer frees it, which is what you are experiencing.
The solution is not to change the configuration of Memgraph but to configure Docker, i.e. to limit the amount of memory that WSL2 can use. Be careful, though: this change will affect all of your WSL2 instances, not just the Docker ones.
The exact steps that you need to do are:
Shut down all of the WSL instances with wsl --shutdown
Edit the .wslconfig file (it is located in your user profile folder)
Add the following lines to it:
[wsl2]
memory=3GB
This will limit the RAM usage of WSL2 to 3 GB. I hope this helps.
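For completeness, .wslconfig can also cap swap and CPUs; a minimal sketch (the values are only examples for an 8 GB machine):
[wsl2]
memory=3GB      # cap RAM available to the WSL2 VM (and thus Docker)
swap=1GB        # cap the WSL2 swap file
processors=2    # limit the number of virtual processors
After saving the file, run wsl --shutdown again and restart Docker Desktop so the limits take effect.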

Self-hosted GitLab server is giving major accessibility issues

Our self-hosted GitLab server randomly becomes broken and we couldn't figure out why. This random behavior affects our deployments; everything gets very slow. After a restart it stays up for a few hours and then goes down, throwing 500 or 502 errors. After bringing it back up, I see that either the Sidekiq or the Gitaly metrics on the Omnibus Grafana dashboard drop compared to the other services.
What do these services do and how to debug this issue?
(Sidekiq metrics screenshot)
(Gitaly metrics screenshot)
System Info:
OS - CentOS 6.10 (Final)
CPU - 8 cores
Memory usage - 8.82 GB / 15.6 GB
Disk Usage -
root : 111 GB / 389 GB
boot : 169 MB / 476 MB
/mnt/resource : 8.25 GB / 62.9 GB
I ran into the same kind of issue after a few months of not using an instance I had set up for one of my customers. You might want to check that your instance is up to date, and update it if it isn't; known vulnerabilities might be at fault here.
In my case it was a strange process consuming all my CPU. I suspect some kind of cryptocurrency miner was run through a known exploit that was fixed in a later update; everything went back to normal after I killed the process and updated the version.
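Regarding the "how to debug" part of the question: on an Omnibus install, a few standard commands are usually a good starting point (a sketch, nothing specific to this incident):
sudo gitlab-ctl status                       # which services are down or flapping?
sudo gitlab-ctl tail sidekiq                 # follow the Sidekiq logs
sudo gitlab-ctl tail gitaly                  # follow the Gitaly logs
sudo gitlab-rake gitlab:check SANITIZE=true  # GitLab's built-in self-check
top -c                                       # look for unexpected CPU or memory hogs
In short: Sidekiq processes GitLab's background jobs (mail, CI, mirroring), and Gitaly serves all Git repository access; if either one is starved, the web UI starts returning 500/502 errors.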

Ubuntu 20.04 memory leak with Docker and Tomcat 9

My setup is as follows:
An Ubuntu 20.04 server (16 GB RAM) that runs Docker with an Elasticsearch 6.8.16 image in a container, started with the env values -e JAVA_OPTS="-Xmx2g -Xms1g -XX:MaxPermSize=1g".
It also hosts two apps on Tomcat 9, and I have set the same options for Tomcat via setenv.sh in Tomcat's bin folder.
However, after a few hours my free memory drops below 100 MB, and this happens every day. It stabilizes after I reboot the server, but after a few hours it falls below 100 MB again.
Does anyone know how I can fix this?
If anyone needs any additional information, I am more than happy to provide it.
P.S. For some reason, one of my CPU cores is always at 100% usage while the other stays below 10%.
Thanks in advance!
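A good first step is to find out which process is actually holding the memory. A minimal diagnostic sketch (standard tools; note that Linux also uses otherwise-free RAM for the page cache, so look at the "available" column before concluding there is a leak):
free -h                          # overall memory; check the "available" column
docker stats --no-stream         # per-container memory and CPU usage
ps aux --sort=-rss | head -n 10  # top processes by resident memory (RSS)
If the Elasticsearch or Tomcat JVMs keep growing far beyond their -Xmx limits, native (off-heap) memory is the usual suspect.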

"random: nonblocking pool" initialization taking a long time on Ubuntu 16.04 Server

On Ubuntu 16.04 Server (Kernel 4.4.0-22) it takes 2-5 minutes to initialize the "random: nonblocking pool" according to /var/log/syslog, compared to Ubuntu 14.04:
May 28 18:10:42 foo kernel: [ 277.447574] random: nonblocking pool is initialized
This happened a lot faster on Ubuntu 14.04 (Kernel 3.13.0-79):
May 27 06:28:56 foo kernel: [ 14.859194] random: nonblocking pool is initialized
I observed this on DigitalOcean VMs. It causes trouble for Rails applications because the Unicorn server seems to wait for this pool to become available before starting up.
What is a reasonable time for this initialization step?
Why would it take so much longer on Ubuntu 16.04?
Is it reasonable for an application to wait for this pool to become available or might the dependency on the pool be a bug on the application side?
"apt-get install rng-tools" which makes Ubuntu use available hardware number generators fixes this issue - the pool will be ready in 10s instead of minutes then.

Ruby on Rails VPS RAM amount

Currently I have the simplest VPS: 1 core, 256 MB of RAM, Ubuntu 12.04 LTS. My application seems to run well enough (I'm using Unicorn and Nginx), but when I run my rake jobs:work command for delayed_job, the Unicorn process gets killed.
I was wondering whether this is related to the amount of RAM.
When the Unicorn process is up and running, the free -m command shows me that around 230 MB of RAM is occupied.
How much RAM would I need overall? 512 MB? 1024 MB?
Which one should I go with?
I would be very glad to receive any answers!
Thank you
Your DJ worker runs another instance of your Rails application, so you need to make sure that you have at least enough RAM for that second instance, plus an allowance for the other processes you are running.
Check ps aux for the memory usage of your Rails app.
Run top and see how much physical memory is free (while your Rails app is running).
My guess is you'll have to bump your RAM up to 512 MB. You of course don't want your memory use to spill over into swap.
Besides that, you also need to make sure that your application and database are optimized enough that there are no extreme spikes in memory usage.
You can start with
ulimit -S -a
to find out the limits of your environment
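For example, to see how much resident memory your Unicorn and delayed_job processes actually use (a sketch; the process name patterns are assumptions, adjust them to your setup):
ps aux | grep -E '[u]nicorn|[d]elayed'  # RSS is the 6th column, in KB
ps -o rss= -C ruby | awk '{sum+=$1} END {print sum/1024 " MB total"}'
The second line sums the RSS of all processes named ruby; if your workers run under a different binary name, change -C accordingly.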
