So we have a RHEL 7.6 Workstation host with 128 GB of RAM. The OS sees all of the RAM and 80 logical cores (40 physical cores with hyper-threading).
We have one guest with 8 CPUs and 32 GB of RAM, also running RHEL 7.6 Workstation.
We are trying to create another guest with 64 CPUs and 80 GB of RAM.
We set up the guest using virt-manager and PXE boot it to install the OS.
All of this goes without a hitch, but when we log in after the system is built via PXE and run free -g, it only shows 2 GB instead of 80 GB.
Any ideas?
Thanks!
Joe
The issue was related to using a UEFI BIOS via the OVMF.x86_64 package. Once we went back to a normal BIOS and removed the NVRAM, the issue went away.
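For anyone hitting the same thing, a quick way to check whether a guest was defined with OVMF/UEFI firmware is to look at the loader element in its libvirt XML (guest-name is a placeholder for your domain name):

virsh dumpxml guest-name | grep -i -A 2 loader

If this shows an OVMF loader path and an nvram element, the guest is using UEFI; a guest defined with the default SeaBIOS firmware has no such entries.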
Hope this helps someone :).
Joe
I've been running Memgraph for a few days now and everything is working as expected. This is the first time that I'm using Docker.
I've noticed that when I shut down the Memgraph Platform my RAM is still used. I need to restart my computer to free up my RAM. Is there some switch that I can use to limit the memory that Memgraph Platform uses? Is there some way to release the memory after I shut it down?
If it is important, my OS is Windows 10 Professional and I have a six-year-old laptop with 8 GB of RAM.
The issue you are experiencing is not related to Memgraph, but to Docker, or more precisely to WSL2. You say that you are on Windows 10, so I presume your Docker is configured to use the WSL2 backend.
You didn't say which exact build of Windows 10 you are using, but depending on the build, WSL2 can use up to 80% of your RAM if you don't limit it.
When you run a Docker image you will see a process called vmmem. When you shut down the running Docker image, this process still occupies your RAM. Restarting your computer frees up that RAM, which is what you are experiencing.
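You can see this for yourself from a Windows command prompt (an optional check, not required for the fix):

tasklist /FI "IMAGENAME eq vmmem"

While the WSL2 VM is running, this lists the vmmem process together with its memory usage.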
The solution is not to change the configuration of your Memgraph, but to configure Docker. You need to limit the amount of memory that WSL2 can use. But be careful; this is a change that will affect all of your WSL2 instances, not just the Docker ones.
The exact steps you need to take are:
Shut down all of the WSL instances with wsl --shutdown
Edit the .wslconfig file (it is located in your user profile folder)
Add the following lines to it:
[wsl2]
memory=3GB
This will limit the RAM usage of WSL2 to 3 GB. I hope this helps you.
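As an optional sanity check, after restarting Docker Desktop you can confirm the new limit from inside any WSL2 distribution:

free -h

The total memory reported should now be roughly 3 GB instead of most of your physical RAM.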
I'm on Windows 11, using WSL2 (Windows Subsystem for Linux). I recently upgraded my RAM from 32 GB to 64 GB.
While I can make my computer use more than 32 GB of RAM, WSL2 seems to be refusing to use more than 32 GB. For example, if I do
>>> import torch
>>> a = torch.randn(100000, 100000) # 40 GB tensor
Then I see the memory usage go up until it hits roughly 30 GB, at which point I see "Killed" and the Python process gets killed. Checking dmesg, it says that it killed the process because "Out of memory".
Any idea what the problem might be, or what the solution is?
According to this blog post, WSL2 is automatically configured to use 50% of the physical RAM of the machine. You'll need to add a memory=48GB entry (or your preferred setting) to a .wslconfig file placed in your Windows home directory (\Users\{username}\).
[wsl2]
memory=48GB
After adding this file, shut down your distribution and wait at least 8 seconds before restarting.
Keep in mind that Windows 11 itself needs quite a bit of memory to operate, so setting WSL2 to use the full 64 GB could cause the Windows host to run out of memory.
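As a rough sanity check, once the new limit is in place the allocation from the question should fit (assuming nothing else inside WSL2 is holding a lot of memory; the tensor size is just the figure from the question):

>>> import torch
>>> a = torch.randn(100000, 100000)           # ~37 GiB of float32, now under the 48 GB limit
>>> a.element_size() * a.nelement() / 2**30   # ≈ 37.3

Alternatively, grep MemTotal /proc/meminfo inside the distribution should now report close to 48 GB.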
In Ubuntu 18 (64-bit), a running process's start/load address seems to be randomized each time the same application is run; it no longer starts at 0x400000. Is this caused by ASLR being enabled? In Ubuntu 18 I need to set ASLR to 0 for the start address to be fixed each time the same application is executed, but in Ubuntu 16 and below this is not necessary.
What has changed in Ubuntu 18?
As you know, side-channel attacks due to CPU architecture issues were all over the news recently. In order to mitigate these types of attacks, the Kernel Page Table Isolation (previously called KAISER) patch set was developed and merged into Linux kernel 4.15-rc6.
Ubuntu 18.04 used kernel 4.15 on initial release, which explains why ASLR is enabled by default in Ubuntu 18.04 and later.
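For completeness, the workaround mentioned in the question corresponds to the kernel's randomize_va_space sysctl, which is standard kernel behavior rather than anything Ubuntu-specific:

cat /proc/sys/kernel/randomize_va_space       # 2 = full randomization (the default)
sudo sysctl -w kernel.randomize_va_space=0    # disable ASLR until reboot

Disabling ASLR system-wide is only advisable for debugging, since it removes a security mitigation.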
We had 5 applications on a Linode (Ubuntu 10.04, 32-bit) with 1 GB of RAM. Recently we moved one of the applications off that Linode to another one with 512 MB. The application is built on Java EE and was working quite stably on the old server. On the new server, however, Tomcat (version 6 on both servers) crashes every now and then without any logs. The only differences on the new server are that we are using nginx as the web server instead of Apache 2, and that the new server runs Ubuntu 12, 64-bit. There is no reason to suspect a memory leak, because the application was behaving well on the old server. Are there any Tomcat optimizations to be done to prevent this kind of crash? I also doubt the reason is load from traffic (even though the new server has less RAM), because Tomcat still crashes in the middle of the night when there are only about 10 concurrent users. Any insight into the problem would be appreciated.
I checked the RAM usage: Tomcat constantly occupies about 60% of the memory, then all of a sudden it crashes and usage drops to 0. I also run a bash script as a cron job every 5 minutes on the new server to check whether Tomcat is down and restart it automatically. Could that be causing the issue? The script is below:
if [ "$(/etc/init.d/tomcat6 status)" == " * Tomcat servlet engine is not running." ]; then /etc/init.d/tomcat6 start; fi
Please note that I am not an expert at server configuration. I can just about set up a server and get the required things running.
You moved your app from a 32-bit HotSpot JVM to a 64-bit OpenJDK JVM, and on the new server you have less RAM.
First I would try installing the same 32-bit HotSpot JVM on the new server and see if the crashes still occur. If they do, I would start adding more memory and adjust -Xmx etc. accordingly.
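For example, a minimal sketch of the heap settings (the file path is what the Ubuntu/Debian tomcat6 package typically uses; the values are placeholders to adjust to what fits in 512 MB):

# /etc/default/tomcat6 -- path assumed for the Ubuntu/Debian tomcat6 package
JAVA_OPTS="-Djava.awt.headless=true -Xms128m -Xmx256m"

Keeping -Xmx well below the physical memory leaves room for the OS, nginx, and the JVM's own overhead on a 512 MB server.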
I upgraded the RAM to 1 GB, switched to the 32-bit version of Ubuntu 12, reinstalled the 32-bit JVM, and now the server works like a charm. I was unable to zero in on the root cause, but the most likely culprit is either the 64-bit OS or the 64-bit JVM eating too much memory. Thanks for your help.
I have successfully installed a 64-bit Fedora 11 guest OS using VirtualBox on a host machine (AMD64) running 32-bit Windows XP.
At the moment the host machine has 2 GB of RAM installed and I've allocated 1 GB to the guest, which all works well.
The host machine can hold a maximum of 4 GB of RAM, so I was wondering whether it's worth buying an extra 2 GB for it.
I know that 32-bit Windows XP can't use all of the 4 GB, but can the guest OS use any of the RAM that the host OS can't use?
No, you are limited to what the host OS can see. If you open Task Manager on the host, the guest OS's memory is mapped within the host's own processes, so having memory that's mapped outside of what the host OS can address is not possible.
That shouldn't discourage you from getting the extra RAM, however. If you upgrade to 4 GB (of which 32-bit Windows XP will see roughly 3.5 GB), you'll still have about 3.2 GB of addressable memory to use, which is a substantial increase over 2 GB, especially if your memory usage is already near 2 GB.