Our self-hosted GitLab server keeps breaking at random and we can't figure out why. The instability affects our deployments, which become very slow. After a restart it stays up for a few hours, then goes down again with 500 or 502 errors. After bringing it back up, I see that either the Sidekiq or the Gitaly metrics on the Omnibus Grafana dashboard drop compared to the other services.
What do these services do, and how can I debug this issue?
[Sidekiq metrics screenshot]
[Gitaly metrics screenshot]
System Info:
OS - CentOS 6.10 (Final)
CPU - 8 cores
Memory usage - 8.82 GB / 15.6 GB
Disk Usage -
root : 111 GB / 389 GB
boot : 169 MB / 476 MB
/mnt/resource : 8.25 GB / 62.9 GB
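So far the only debugging I know to try is the stock Omnibus tooling, roughly:
# Check whether any bundled service (sidekiq, gitaly, and the rest) is down or flapping
sudo gitlab-ctl status
# Tail the logs of the two suspect services while the problem is happening
sudo gitlab-ctl tail sidekiq
sudo gitlab-ctl tail gitaly
# Run GitLab's built-in environment checks
sudo gitlab-rake gitlab:check
# Watch for memory pressure / OOM kills, a common cause of random 500/502s
free -m
dmesg | grep -i -E 'out of memory|oom'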
I've run into the same kind of issue after a few months of not using an instance I had set up for one of my customers. You might want to check that your instance is up to date and update it if not; some vulnerabilities might be at fault here.
In my case it was a rogue process consuming all my CPU. I suspect some kind of cryptocurrency miner was run using a known exploit that was fixed in a later release; everything went back to normal after I killed it and updated the version.
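If you want to check for the same thing, roughly (the PID below is just a placeholder):
# List the top CPU consumers; anything unfamiliar pegging a core is suspect
ps aux --sort=-%cpu | head -n 10
# Kill the suspicious process, then bring the packages up to date
sudo kill -9 1234
sudo yum update gitlab-ce    # or gitlab-ee, depending on your edition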
Related
I'm on Windows 11, using WSL2 (Windows Subsystem for Linux). I recently upgraded my RAM from 32 GB to 64 GB.
While I can make my computer use more than 32 GB of RAM, WSL2 seems to be refusing to use more than 32 GB. For example, if I do
>>> import torch
>>> a = torch.randn(100000, 100000) # 40 GB tensor
Then I see the memory usage go up until it hits 30-ish GB, at which point I see "Killed" and the Python process gets killed. Checking dmesg, it says the process was killed because "Out of memory".
Any idea what the problem might be, or what the solution is?
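For reference, this is roughly how I checked (run inside the distribution):
# Total memory the WSL2 VM actually sees
free -h
# The kernel log entry for the OOM kill
dmesg | grep -i 'out of memory'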
According to this blog post, WSL2 is automatically configured to use 50% of the machine's physical RAM. You'll need to add a memory=48GB entry (or your preferred value) to a .wslconfig file placed in your Windows home directory (\Users\{username}\).
[wsl2]
memory=48GB
After adding this file, shut down your distribution and wait at least 8 seconds before restarting.
Note that Windows 11 itself needs quite a bit of memory to operate, so setting WSL2 to use the full 64 GB would cause the Windows OS to run out of memory.
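The whole cycle looks roughly like this (the first command runs in PowerShell, the second inside the restarted distribution):
# PowerShell: stop the WSL2 VM so the new .wslconfig is picked up
wsl --shutdown
# Inside WSL2 afterwards: confirm the new memory limit
free -h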
My setup is as follows:
An Ubuntu 20.04 server (16 GB RAM) running Docker with an Elasticsearch 6.8.16 image in a container, started with the following env values: -e JAVA_OPTS="-Xmx2g -Xms1g -XX:MaxPermSize=1g".
It also hosts two apps on Tomcat 9, and I have also set up these envs for Tomcat via setenv.sh in Tomcat's bin folder.
However, after a few hours my free memory drops below 100 MB, and this happens every day. It recovers after I reboot the server, but within a few hours it falls under 100 MB again.
Does anyone know how can I fix this?
If anyone needs any additional information, I am more than happy to provide it.
P.S. For some reason, one CPU core is always at 100% usage while the other stays below 10%.
Thanks in advance!
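In case it helps, this is roughly how I have been watching the memory disappear:
# Per-container memory and CPU usage
docker stats --no-stream
# Top resident-memory consumers on the host
ps aux --sort=-rss | head -n 10
# How much of the "used" memory is actually reclaimable cache
free -h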
I have GitLab on my VPS with 4 GB RAM, but after upgrading Debian to version 10 and upgrading GitLab, the memory is no longer enough. Puma and Sidekiq occupy circa 13% of memory.
I reduced some parameters in gitlab.rb, but it helped only a little.
Does anybody have advice? Thanks
2.5 GB of physical RAM plus 1 GB of swap should currently (2022) be enough to run GitLab:
Running Gitlab in a memory constrained environment
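A sketch of the kind of gitlab.rb settings that guide recommends for low-memory hosts (the values here are illustrative, not tuned for your workload):
# /etc/gitlab/gitlab.rb -- trim memory usage on a small VPS
puma['worker_processes'] = 0             # single-process Puma
sidekiq['max_concurrency'] = 10          # fewer background-job threads
prometheus_monitoring['enable'] = false  # disable the bundled monitoring stack
Then apply the changes with sudo gitlab-ctl reconfigure.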
I have recently started using Docker for new development work; however, I am still required to switch back to working on our older on-premise offering from time to time. That is, I sometimes need to shut down Docker and spin up an installation of our on-premise server.
I find that when I do this with Docker installed, the performance of this server is terrible, essentially unusable; I need to uninstall Docker to get it to work again.
When I have Docker running I can see it using the memory (my machine has 32 GB of RAM, and I am telling Docker to use 16), and when I shut Docker down I can see the memory being released, according to Task Manager anyway; I can also see in Hyper-V Manager that the VM has been shut down. However, the on-premise server install continues to perform as if the memory were still in use. This is not a small performance hit: actions that should take 1 second take 20 or 30.
It would seem that Docker is not actually releasing the memory on shutdown and only does so when I uninstall it; when I do that, performance recovers completely.
Is this a known issue? Is there anything else I can try to see where the memory is going? I can find no other reports about it.
I am using Windows 10 with Docker version 17.03.1-ce-win5 (10743).
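For what it's worth, this is roughly how I have been checking the VM state from an elevated PowerShell (MobyLinuxVM is the name my Docker version creates; yours may differ):
# List Hyper-V VMs with their state and assigned memory
Get-VM
# Stop the Docker VM explicitly rather than relying on the tray app
Stop-VM -Name MobyLinuxVM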
We had five applications on a Linode (Ubuntu 10.04, 32-bit) with 1 GB RAM. Recently we moved one of the applications off that Linode to another one with 512 MB. The application is built on Java EE and was working stably on the old server. On the new server, however, Tomcat (version 6 on both servers) crashes every now and then without any logs. The only differences on the new server are that we are using nginx as the web server instead of Apache 2 on the old one, and that the new server runs Ubuntu 12, 64-bit. There is no reason to suspect a memory leak, because the application behaved well on the old server. Are there any Tomcat optimizations to be done to prevent this kind of crash? I also doubt the reason is load from traffic (since the new server has less RAM), because Tomcat crashes even in the middle of the night when there are only about 10 concurrent users. Any insight into the problem would be appreciated.
I checked the RAM usage: Tomcat constantly occupies about 60% of memory and then all of a sudden crashes and drops to 0. I use a bash script, run as a cron job every 5 minutes on the new server, to check whether Tomcat is down and restart it automatically. Could that be causing the issue? The script is below:
if [ "$(/etc/init.d/tomcat6 status)" == " * Tomcat servlet engine is not running." ]; then /etc/init.d/tomcat6 start; fi
Please note, I am not an expert at server configuration. I can just about configure a server to install and get required things running.
You moved your app from a 32-bit HotSpot JVM to a 64-bit OpenJDK JVM, and the new server has less RAM.
First I would install the same 32-bit HotSpot JVM on the new server and see if the crashes still occur. If they do, I would start adding more memory and adjust -Xmx etc. accordingly.
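Concretely, something like this (the heap sizes are only an example; size them to fit the 512 MB box):
# Confirm which JVM Tomcat is actually running on
java -version
# In $CATALINA_HOME/bin/setenv.sh: pin the heap so the 64-bit defaults
# don't overcommit a small machine
export JAVA_OPTS="-Xms128m -Xmx256m"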
I upgraded the RAM to 1 GB, downgraded to Ubuntu 12, 32-bit, reinstalled the 32-bit JVM, and now the server works like a charm. I was unable to pin down the root cause, but the most likely culprit was either the 64-bit OS or the 64-bit JVM eating too much memory. Thanks for your help.