Reducing Valgrind memory usage for embedded target

I'm trying to use Valgrind to debug a crashing program on an embedded Linux target. The system has roughly 31 MB of free memory when nothing is running, and my program uses about 2 MB of memory, leaving 29 MB for Valgrind. Unfortunately, when I try to run my program under Valgrind, Valgrind reports an error:
Valgrind's memory management: out of memory:
initialiseSector(TC)'s request for 27597024 bytes failed.
50,388,992 bytes have already been mmap-ed ANONYMOUS.
Valgrind cannot continue. Sorry.
Is there any way I can cut down Valgrind's memory usage so it will run successfully in this environment? Or am I just out of luck?

Valgrind can be tuned to decrease (or increase) its CPU/memory usage, with a corresponding decrease (or increase) in the amount of information it reports about problems/bugs.
See e.g. https://archive.fosdem.org/2015/schedule/event/valgrind_tuning/attachments/slides/743/export/events/attachments/valgrind_tuning/slides/743/tuning_V_for_your_workload.pdf
Note, however, that running Valgrind within 31 MB (or so) looks like an impossible task.
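That said, here is a hedged sketch of the kind of Memcheck options that trade diagnostic detail for a smaller footprint; the flag values and ./my_program are only placeholders, not a tested recipe:
# --num-callers:      shorter stack traces take less memory (default 12)
# --keep-stacktraces: don't record alloc/free stacks for every heap block
# --freelist-vol:     shrink the freed-block quarantine (default 20000000 bytes)
# --leak-check=no:    skip the end-of-run leak search
valgrind --tool=memcheck --num-callers=4 --keep-stacktraces=none --freelist-vol=1000000 --leak-check=no ./my_program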

Related

PyTorch running under WSL2 getting "Killed" for Out of memory even though I have a lot of memory left?

I'm on Windows 11, using WSL2 (Windows Subsystem for Linux). I recently upgraded my RAM from 32 GB to 64 GB.
While I can make my computer use more than 32 GB of RAM, WSL2 seems to be refusing to use more than 32 GB. For example, if I do
>>> import torch
>>> a = torch.randn(100000, 100000) # 40 GB tensor
Then I see the memory usage go up until it hits 30-ish GB, at which point I see "Killed" and the Python process gets killed. Checking dmesg, it says that it killed the process because of "Out of memory".
Any idea what the problem might be, or what the solution is?
According to this blog post, WSL2 is automatically configured to use 50% of the physical RAM of the machine. You'll need to add a memory=48GB (or your preferred setting) to a .wslconfig file that is placed in your Windows home directory (\Users\{username}\).
[wsl2]
memory=48GB
After adding this file, shut down your distribution and wait at least 8 seconds before restarting.
Bear in mind that Windows 11 needs quite a bit of memory overhead to operate, so setting WSL2 to use the full 64 GB could cause the Windows OS itself to run out of memory.
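If it helps, the rough sequence for applying the change looks like this (the 48 GB value is just the example above):
# from Windows (PowerShell or cmd): stop the WSL2 VM so .wslconfig is re-read
wsl --shutdown
# then start your distribution again and confirm the new limit from inside WSL
free -h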

Huge memory usage in Xilinx Vivado

Vivado consumes all of the free memory on my machine during synthesis, and for this reason the machine either hangs or crashes after a while.
I encountered this issue when using Vivado 2018.1 on Windows 10 (w/ 8GB RAM) and Vivado 2020.1 on CentOS 7 (w/ 16GB RAM).
Is there any option in Vivado to limit its memory usage?
If this problem happens when you are synthesizing multiple out-of-context (OOC) modules, try reducing the Number of Jobs when you launch the run.
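If you drive Vivado from a script rather than the GUI, the same setting is (to my knowledge) exposed as the -jobs option of launch_runs; a minimal sketch with a hypothetical run name and script file:
# run_synth.tcl (hypothetical) would contain something like:
#   launch_runs synth_1 -jobs 2
#   wait_on_run synth_1
vivado -mode batch -source run_synth.tcl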

Building clang taking forever

I'm baffled. I have a VM running Ubuntu 14.04. I've followed the procedures here: http://clang.llvm.org/docs/LibASTMatchersTutorial.html and am at the step to run ninja. This builds llvm and clang. Now, my VM is no slouch: I gave it 6GB of RAM, 4 CPUs and a 20GB swap file. The biggest problem comes at link time - it seems to start a large number of ld processes, each using at least 3-4GB of virtual memory, and at some point a lot of CPU each. But the swap file grew to over 12GB and the processes are all IO bound, and I don't know if they are doing something useful or thrashing. All I know is the disk is getting hammered and the jobs run forever. I've actually just dropped the VM's CPU count to 1 to see if it might be more efficient with less parallelism, as I surmised the issue may be thrashing.
I suppose my disk could be slow... Any ideas? Should I be using make instead of ninja? My expertise is not Linux (although I'm getting there :-) ), so I'm following the tutorial, but perhaps it doesn't recommend the "best" way to build the clang/llvm programs.
I have been there. It happens with the latest SVN release (but not if you get clang 3.8 or older releases). What is happening is that, during development, a lot of debug information is also generated for each compilation unit, so the file sizes become very big.
The solution is to turn off all the debug info that is attached by default. You are probably not going to debug clang, so you won't need it. So instead of just doing this
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON
What you should do is
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Release
All the other steps remain the same. Now, I have not tested this with ninja, but have verified it with make on Ubuntu (in this tutorial I modified the same thing in step 7). This should work as well.
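In addition, on a memory-starved VM it can help to cap how many link jobs run at once; a rough sketch (LLVM_PARALLEL_LINK_JOBS is an LLVM CMake cache variable which, as far as I know, only takes effect with the Ninja generator):
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Release -DLLVM_PARALLEL_LINK_JOBS=1
# optionally also cap overall build parallelism if RAM is still tight
ninja -j 2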

What is the best CLI tool to take memory dumps for C++ in Linux

What is the best CLI tool to take memory dumps of C++ processes in Linux? I have a program which monitors the memory usage of different processes running on Linux. For Java-based processes, I am using jstack and jmap to take the thread and heap dumps. But are there any good CLI tools to take similar dumps for C++-based processes? And if yes, how do we use them, and once a dump is taken, how do we analyse it?
Any input will be appreciated.
I would recommend using gcore, an open-source executable that dumps the memory of a running process. To keep the dump consistent, the target process is suspended while its memory is collected and resumed afterwards.
Usage info can be found at the following link:
gsp.com/cgi-bin/man.cgi?section=1&topic=gcore
Another option is via gdb: attach gdb to the process, type the 'gcore' command, and then detach.
$ gdb --pid=123
(gdb) gcore
Saved corefile core.123
(gdb) detach
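For completeness, gdb also ships a standalone gcore wrapper, and the resulting dump can be opened later against the original binary; a sketch with placeholder PID and paths:
$ gcore -o /tmp/core 123
$ gdb /path/to/my_binary /tmp/core.123
(gdb) info threads
(gdb) thread apply all bt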

Jenkins build throwing an out of memory error

We have Jenkins running on an ec2 instance. When doing a build, we see the following error:
17:29:39.149 [INFO] [org.gradle.api.Project] OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000007ac000000, 234881024, 0) failed; error='Cannot allocate memory' (errno=12)
17:29:39.150 [INFO] [org.gradle.api.Project] #
17:29:39.150 [INFO] [org.gradle.api.Project] # There is insufficient memory for the Java Runtime Environment to continue.
17:29:39.150 [INFO] [org.gradle.api.Project] # Native memory allocation (malloc) failed to allocate 234881024 bytes for committing reserved memory.
I researched this topic and tried various settings such as increasing the heap memory, RAM and PermGen size. Here are my current memory settings for Jenkins:
-Xms256m -Xmx2048m -XX:MaxPermSize=512m
Are there any other things that I'm missing that's causing an OOM?
I've solved the same problem. (I have EC2, t2.micro, Ubuntu 14, Jenkins, Tomcat, Maven.)
By default you don't have swap space.
To confirm this:
free -m
Just add some. Try 1 GB to begin with.
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
Check again:
free -m
For more details look here
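One caveat: a swap file enabled with swapon alone will not survive a reboot; the usual way to make it permanent is an /etc/fstab entry such as:
# append to /etc/fstab so the swap file is re-enabled at boot
/swapfile none swap sw 0 0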
This is not a memory issue at the JVM level, but at the OS level. The JVM tries to allocate 224 MB, but that amount of memory isn't available at the OS level. This happens when the -Xmx setting of a JVM is larger than the amount of free memory in the system. Check the amount of free memory in the OS, and either limit the memory of your current JVM so that it fits within the free memory, try to free up memory (by limiting the amount of memory other processes use), or try an EC2 instance with more memory.
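A quick way to sanity-check this on the box (the numbers below are only illustrative):
# check how much memory the OS can actually give the JVM
free -m
# then make sure -Xmx (currently 2048m) fits inside the available figure,
# e.g. drop it to something like -Xmx1024m, or move to a bigger instance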
If your Jules build is failing with an out-of-memory error, follow the steps below:
Increase the memory size in the manifest.yml file,
e.g. memory: 4270M (increase this value)
Add MAVEN_OPTS to the config argument of the jules.yml file
Enjoy :)
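For reference, MAVEN_OPTS is just an environment variable read by Maven, so outside of the jules.yml specifics it usually boils down to something like this (the heap size is only an example):
# give the Maven JVM a larger heap for the build step
export MAVEN_OPTS="-Xmx2048m"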
