Huge memory usage in Xilinx Vivado

Vivado consumes all of the free memory on my machine during synthesis, and as a result the machine either hangs or crashes after a while.
I encountered this issue when using Vivado 2018.1 on Windows 10 (w/ 8GB RAM) and Vivado 2020.1 on CentOS 7 (w/ 16GB RAM).
Is there any option in Vivado to limit its memory usage?

If this problem happens when you are synthesizing multiple out-of-context (OOC) modules, try reducing the Number of Jobs when you start the run.
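If you launch runs from the Tcl console rather than the GUI, the same limit can be applied with the -jobs switch of launch_runs (synth_1 below is just the project's default run name; substitute your own runs):

launch_runs synth_1 -jobs 2

Fewer parallel jobs means fewer concurrent synthesis processes and a lower peak memory footprint, at the cost of a longer overall runtime.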

Related

How to limit memory usage when I run Memgraph Platform within Docker?

I've been running Memgraph for a few days now and everything is working as expected. This is the first time that I'm using Docker.
I've noticed that when I shut down the Memgraph Platform my RAM is still used. I need to restart my computer to free up my RAM. Is there some switch that I can use to limit the memory that Memgraph Platform uses? Is there some way to release the memory after I shut it down?
If it is important, my OS is Windows 10 Professional and I have a six-year-old laptop with 8GB of RAM.
The issue you are experiencing is not related to Memgraph but to Docker, or more precisely to WSL2. You say that you use Windows 10, so I presume your Docker is configured to use WSL2.
You didn't write which exact build of Windows 10 you are using, but depending on the build, WSL2 can use up to 80% of your RAM if you don't limit it.
When you run the Docker image you will see a process called vmmem. When you shut down the running Docker image, this process can still occupy your RAM. Restarting your computer frees up the RAM, which is what you are experiencing.
The solution is not to change the configuration of Memgraph but to configure Docker: you need to limit the amount of memory that WSL2 can use. But be careful; this change will affect all of your WSL2 instances, not just the ones Docker uses.
The exact steps you need to take are:
Shut down all of the WSL instances with wsl --shutdown
Edit the .wslconfig file (it is located in your user profile folder)
Add the following lines to it:
[wsl2]
memory=3GB
This will limit the RAM usage of WSL to 3GB. I hope that this will help you.
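As an optional check, after running wsl --shutdown and starting Docker Desktop again you can verify the new limit from inside any WSL2 distribution:

free -h

The reported total should now be roughly 3GB. If it is unchanged, double-check that the file is named .wslconfig and sits directly in your user profile folder.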

PyTorch running under WSL2 getting "Killed" for Out of memory even though I have a lot of memory left?

I'm on Windows 11, using WSL2 (Windows Subsystem for Linux). I recently upgraded my RAM from 32 GB to 64 GB.
While I can make my computer use more than 32 GB of RAM, WSL2 seems to be refusing to use more than 32 GB. For example, if I do
>>> import torch
>>> a = torch.randn(100000, 100000) # 40 GB tensor
Then I see the memory usage go up until it hits 30-ish GB, at which point I see "Killed" and the Python process gets killed. Checking dmesg, it says that it killed the process because it was "Out of memory".
Any idea what the problem might be, or what the solution is?
According to this blog post, WSL2 is automatically configured to use 50% of the machine's physical RAM. You'll need to add a memory=48GB entry (or your preferred setting) to a .wslconfig file placed in your Windows home directory (\Users\{username}\).
[wsl2]
memory=48GB
After adding this file, shut down your distribution and wait at least 8 seconds before restarting.
Keep in mind that Windows 11 itself needs quite a bit of memory to operate, so setting WSL2 to use the full 64 GB would risk the Windows host running out of memory.
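As a rough sanity check (a sketch, assuming the psutil package is installed inside the WSL2 distribution), you can compare the tensor's expected footprint with the memory WSL2 actually exposes before allocating it:

import psutil  # assumed installed: pip install psutil
import torch

rows, cols = 100000, 100000
needed = rows * cols * 4                       # float32 is 4 bytes per element, ~37 GiB here
available = psutil.virtual_memory().available  # RAM visible to the WSL2 VM

if needed > available:
    print(f"Need {needed / 2**30:.1f} GiB, but only {available / 2**30:.1f} GiB is available")
else:
    a = torch.randn(rows, cols)

If the available figure tops out at about half of your physical RAM, the .wslconfig limit described above has not taken effect yet.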

Why is Ruby on Rails testing super slow on Linux?

I have reviewed blog posts from 2008 to date. I have inherited a Ruby on Rails project for which I need to expand the test suite.
I work on an Asus laptop with an 8th-gen Intel Core i7 (U-series) CPU, 16GB of RAM, and a 512GB SSD.
I was initially running Ubuntu 19.10 when I started on the project; with about 1200 tests, the suite takes more than an hour to run, whereas on a 2015 MacBook Pro with 8GB of RAM and an HDD it takes only 2-3 minutes.
The log/test.log file does not report errors and the tests do not hang, but waiting that long is not workable, especially as I will be adding more tests.
So I uninstalled Ubuntu, wiped the SSD, and installed Solus, Arch, and Ubuntu in turn, with the same setup on each (asdf as the version manager), and on no distro does the suite run in under an hour.
Does anyone know why this happens on Linux? The Mac setup also uses asdf and it is fast enough.
Without knowing the specifics of the codebase or the tests, this question is equivalent to "how long is a piece of string."
There are many differences between Linux and macOS. Cryptographic libraries may have different defaults. Memory limits for threads and processes may be different.
Unless you can isolate specific tests which are wildly different and extrapolate from there, it's almost certainly going to come down to OS-level differences.
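One way to start isolating them (a sketch, assuming the suite uses RSpec; Minitest can produce similar timing output via third-party reporters) is to ask the runner for its slowest examples and compare the lists between the two machines:

bundle exec rspec --profile 10

If a handful of examples dominate on Linux but not on macOS, that points to a specific culprit (for example, cryptographic defaults such as bcrypt cost factors, or database setup) rather than a general OS slowdown.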

Reducing Valgrind memory usage for embedded target

I'm trying to use Valgrind to debug a crashing program on an embedded Linux target. The system has roughly 31 MB of free memory when nothing is running, and my program uses about 2 MB of memory, leaving 29 MB for Valgrind. Unfortunately, when I try to run my program under Valgrind, Valgrind reports an error:
Valgrind's memory management: out of memory:
initialiseSector(TC)'s request for 27597024 bytes failed.
50,388,992 bytes have already been mmap-ed ANONYMOUS.
Valgrind cannot continue. Sorry.
Is there any way I can cut down Valgrind's memory usage so it will run successfully in this environment? Or am I just out of luck?
Valgrind can be tuned to decrease (or increase) its CPU/memory usage, with a corresponding decrease (or increase) in the amount of information it gives you about problems/bugs.
See e.g. https://archive.fosdem.org/2015/schedule/event/valgrind_tuning/attachments/slides/743/export/events/attachments/valgrind_tuning/slides/743/tuning_V_for_your_workload.pdf
Note, however, that running Valgrind within 31MB (or so) of memory seems an impossible task.
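A sketch of the kind of tuning those slides cover (all standard Memcheck/Valgrind options; ./my_program stands in for your binary, and the right values depend on your workload):

valgrind --tool=memcheck --num-callers=6 --keep-stacktraces=alloc --freelist-vol=1000000 ./my_program

Shorter stack traces, fewer recorded stack traces, and a smaller freed-block queue all shrink Valgrind's own footprint, but they also make its reports less detailed, and even aggressive settings are unlikely to fit in the ~29 MB available here.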

Hardware requirements for PlasticSCM server

I'm evaluating PlasticSCM on a VMware machine with 4GB RAM and a 4-core CPU. Since I ported our trunk into the server (about 6GB of data), the service has run out of memory (started swapping). I've increased the VM RAM to 6GB. This is actually more than I'd like to load the host system with, since I've also got VMs for the PlasticSCM client, TeamCity server, and TeamCity agent.
I was trying to find a spec with details on the hardware requirements for running a PlasticSCM server that takes scaling into account. So far, I've only found the minimum requirements (512MB RAM, etc.) and the system information from your heavy-load and scale tests. As far as I can see, it's all about RAM. :)
Anyway, is there a detailed spec with recommendations for the hardware to use?
P.S.: Of course, if we were to switch to Plastic we'd run the service on a real machine instead of a VM.
