g++ - Terminal Memory Allocation

I've compiled a program with g++ and am currently running it. I expect it's going to take a while to run, but I'm hoping I might be able to speed it up. I'm currently using Ubuntu. Checking the system monitor, I found the terminal I'm running the program from; while it's certainly using a chunk of memory, there's far more memory available. Is there some sort of command for the terminal that will allow me to allocate more memory to the program so it will run a bit faster? Or a command for g++? Or just something to put in the C++ code?
Thanks!

Giving the program more memory will not make it run faster; a C++ program simply asks the operating system for more memory as it needs it. You're thinking of the behavior of garbage-collected languages like Java. Normal C++ programs don't include a garbage collector, and thus won't run any faster with a larger heap.
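As a minimal illustrative sketch (hypothetical sizes, not a tuning recommendation): a C++ process only consumes memory it explicitly requests, and the only allocation-related gain available is avoiding repeated reallocations, for example with std::vector::reserve. It does not make the rest of the computation faster.

// Minimal sketch, hypothetical element count: reserve() performs one
// allocation up front instead of many regrowths, but the loop itself is
// no faster for having "more memory" available.
#include <cstddef>
#include <vector>

int main() {
    const std::size_t n = 10000000;     // illustrative size
    std::vector<double> data;
    data.reserve(n);                    // single up-front allocation request
    for (std::size_t i = 0; i < n; ++i)
        data.push_back(0.5 * static_cast<double>(i));
    return 0;
}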

Related

Zynq 7020: some memory is unreadable

I have a Zynq 7020 chip with 250 MB of DDR memory attached to it, running in ECC mode (so 125 MB effectively). It's attached to NAND flash memory and has a series of bootloaders which eventually load VxWorks to run our software.
We are about to do a test which will require me to read all the memory, flash, and FPGA configuration memory on the device after execution.
I have another [small] program that I will install via JTAG after the run and have it write out the rest of the RAM, then all the flash and FPGA configuration memory. This program is compiled by the Xilinx SDK and is bare-metal (no OS/bootloader).
When I load this program, I reset the processor (JTAG command), run a ps7_init.tcl script that sets all the CPU registers to a good configuration as generated by Vivado, load the ELF file onto the device, then run the processor. The program tries to read memory starting at address 0x0, but it crashes quickly. I also told it to start at 1 MB (1<<20), since I know there's some odd memory-map behaviour at the very beginning; it reads a little more, then crashes again.
The crash appears to be the CPU not letting me read these areas of RAM.
Why not? How do I make it so I can read every byte of the 125 MB of RAM I have?
There is a template project that Xilinx provides that is close to exactly what you are trying to do. It is the "Memory Tests" template, which runs through the attached memory ranges and tests read/write operations. By default it tests the DDR memory range and ps7_ram_1. The linker script for the application places the program in ps7_ram_0 and doesn't test that range, since you can't overwrite the application's own instruction and data memory.
Code for the template can be found here:
<SDK installation directory>\data\embeddedsw\lib\sw_apps\memory_tests
I would recommend creating a new project from this template, and changing the memorytest.c file to fit your need.
To answer your question more directly: you are likely running into problems with the processor's MMU (memory management unit) and cache management. If your application is loaded into DDR, the MMU may be blocking access to the application's own instruction and data memory. If your application is loaded into OCM (like the template), there may be access problems caused by cached memory. If you disable the data cache using
Xil_DCacheDisable();
then you should be able to read the entire DDR memory space (as long as it exists). Make sure you configure your application's linker script (*.ld) so that the application knows which memory devices are present, along with their base addresses and sizes.
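As a rough sketch of that more direct route (assuming the standalone BSP headers from the Xilinx SDK; DDR_BASE and DDR_SIZE below are placeholders for your board's actual memory map, not values taken from the question):

/* Minimal bare-metal sketch: disable the data cache, then walk a DDR range
 * and dump each 32-bit word over the UART. Adjust the placeholder range to
 * match your linker script / memory map. */
#include "xil_cache.h"
#include "xil_io.h"
#include "xil_printf.h"

#define DDR_BASE  0x00100000U   /* placeholder: start at 1 MB, past the boot area */
#define DDR_SIZE  0x07C00000U   /* placeholder: the rest of the 125 MB of ECC DDR */

int main(void)
{
    Xil_DCacheDisable();        /* read RAM directly, not stale cache lines */

    for (u32 addr = DDR_BASE; addr < DDR_BASE + DDR_SIZE; addr += 4) {
        u32 word = Xil_In32(addr);
        xil_printf("%x: %x\r\n", addr, word);
    }
    return 0;
}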

How much maximum memory in a separate isolate in dart

I am learning about isolates in Dart/Flutter. Almost all the documents say that isolates don't share memory with each other, but they don't say what the maximum memory of an isolate is. Is it limited by the app's maximum memory, or does each isolate have a separate memory space that doesn't depend on the total memory initially allocated to the application?
Thanks for any help.
Update
I found this information in the Dart glossary: "Dart supports concurrent execution by way of isolates, which you can think of as processes without the overhead. Each isolate has its own memory and code, which can’t be affected by any other isolate."
See https://github.com/dart-lang/sdk/issues/34886
You can use --old_gen_heap_size to set the memory limit in megabytes.
You can specify such options by setting the DART_VM_OPTIONS environment variable, for example
DART_VM_OPTIONS="--old_gen_heap_size=2048 --observe"
The limit seems to apply to the whole VM instance, not per isolate.
To get all available options use
dart --help --verbose
or
dart -h -v

Random memory address

I'm working in a virtual machine under Debian with EGLIBC 2.13 in order to learn about memory addresses.
So I wrote a simple program that prints the address of a test variable, but every time I run it I get a totally different address. I have two screenshots from two distinct executions showing this.
What's causing this? The fact that I'm working in a VM, or my GLIBC version?
I guess it's GLIBC doing this to prevent buffer overflows, but I can't find an answer on the web.
And is it totally random?
First, Glib (from GTK) is not GNU libc (a.k.a. glibc).
Then, you are observing the effect of ASLR (address space layout randomization). Don't try to disable it on a server directly connected to the Internet, it is a valuable security measure.
ASLR is mostly provided by the Linux kernel (e.g. when handling mmap(2) without MAP_FIXED, as most implementations of malloc do, and probably also at execve(2) time for the initial stack). Changing your libc (e.g. to musl-libc) won't disable it.
You could disable system-wide ASLR on a laptop (or on a Linux system running inside some VM) using proc(5): run
echo 0 > /proc/sys/kernel/randomize_va_space
as root. Be careful, by doing that you are decreasing the security of your system.
I don't know what you mean by totally random, but ASLR is random enough. IIRC (I might be wrong), the middle 32 bits of the 64-bit address (assuming a 64-bit Linux system) are quite random, to the point of making the result of mmap (and hence of the malloc implementations using it) practically unpredictable and non-reproducible.
BTW, to see ASLR in practice, try several times (with ASLR enabled) the following command
cat /proc/self/maps
this command displays a textual representation of the address space (in virtual memory) of the process running that cat command. You'll see different output each time you run it!
For debugging memory leaks, use valgrind. With a recent compiler (GCC 4.9 or better, or a recent Clang/LLVM), the address sanitizer is also useful: compile with gcc -Wall -Wextra -g -fsanitize=address to get all the warnings (even the extra ones), debug info, and the sanitizer.
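If you want to watch ASLR from inside a program, here is a minimal sketch (standard C++ plus mmap(2), nothing glibc-specific): build it with g++ and run it several times. With ASLR enabled the three addresses change on every run, and they become stable after writing 0 to /proc/sys/kernel/randomize_va_space.

/* Minimal sketch: print the address of a stack variable, a heap allocation
 * and an anonymous mmap'd page. Under ASLR each run prints different values. */
#include <cstdio>
#include <cstdlib>
#include <sys/mman.h>

int main() {
    int on_stack = 42;
    void *on_heap = std::malloc(16);
    void *mapped  = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    std::printf("stack: %p\nheap:  %p\nmmap:  %p\n",
                static_cast<void *>(&on_stack), on_heap, mapped);

    munmap(mapped, 4096);
    std::free(on_heap);
    return 0;
}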

Write protect a stack page on AIX?

I have a program where argv[0] gets overwritten from time to time. This happens (only) on a production machine which I cannot access and where I cannot use a debugger. In order to find the origin of this corruption, I'd like to write-protect that stack page, so that any write access would be turned into a fault and I could get the address of the culprit instruction.
The system is a 64-bit AIX 5.3 machine. When I try to invoke mprotect on my stack page, I get an ENOMEM error. I'm using gcc to build my program.
On a Linux system (x86 based) I can set a similar protection using mprotect without trouble.
Is there any way to achieve this on AIX, or is it a hopeless attempt?
On AIX, mprotect() requires that the requested pages be shared memory or memory-mapped files only. On AIX 6.1 and later, you can extend this to the text region, shared libraries, etc., with the MPROTECT_TXT environment variable.
You can, however, use the -qstackprotect option on XLC 11 / AIX 6.1 TL4 and later. "Stack Smashing Protection" is designed to protect against exactly the situation you're describing.
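(As a rough illustration of what that protection targets, using gcc's analogous -fstack-protector-all flag rather than XLC's -qstackprotect: the canary check fails when the overrunning function returns, and the program aborts instead of silently corrupting neighbouring stack data such as argv[0].)

/* Minimal sketch of a stack-smashing bug; compile with
 *   g++ -fstack-protector-all smash.cpp
 * (gcc's analogue of XLC's -qstackprotect) and it aborts at runtime. */
#include <cstring>

static void smash(const char *input) {
    char buf[8];
    std::strcpy(buf, input);   // no bounds check: long input overruns buf
}

int main() {
    smash("this string is much longer than eight bytes");
    return 0;
}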
On AIX 5.3, my only suggestion would be to look into building with a toolset like Parasoft's Insure++. It would locate errant writes to your stack at runtime. It's pretty much the best (and now only) tool in the business for AIX development. We use it in house and it's invaluable when you need it.
For the record, a workaround for this problem is to move the processing over to a pthread thread. On AIX, pthread stacks live in the data segment, which can be mprotected (as opposed to the primordial thread's stack, which cannot). This is how the JVM (OpenJDK) on AIX implements stack guards.
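For reference, here is a minimal sketch of the underlying write-protect-and-trap idea, applied to memory that AIX's mprotect() does accept (an anonymous mmap'd region, per the restriction above) rather than to the primordial stack; the SIGSEGV handler reports the faulting data address. All names are illustrative and the handler is deliberately simple.

/* Minimal sketch: write-protect an mmap'd page and let a SIGSEGV handler
 * report the faulting address. On AIX this works for shared memory and
 * mmap'd regions, not for the primordial thread's stack. */
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <signal.h>
#include <sys/mman.h>
#include <unistd.h>

static void on_fault(int sig, siginfo_t *info, void *ctx) {
    (void)sig; (void)ctx;
    /* fprintf is not async-signal-safe; acceptable for a debugging sketch.
     * The faulting instruction address lives in the ucontext and is
     * platform-specific to extract. */
    std::fprintf(stderr, "write fault at %p\n", info->si_addr);
    std::abort();                             /* core dump shows the culprit */
}

int main() {
    std::size_t page = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
    void *mem = mmap(nullptr, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    char *region = static_cast<char *>(mem);
    region[0] = 'x';                          /* writable for now */

    struct sigaction sa = {};
    sa.sa_sigaction = on_fault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, nullptr);

    mprotect(mem, page, PROT_READ);           /* write-protect the page */
    region[0] = 'y';                          /* faults: handler reports it */
    return 0;
}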

ejabberd: Memory difference between erlang and Linux process

I am running an ejabberd 2.1.10 server on Linux (Erlang R14B03).
I am creating XMPP connections in batches using a tool and sending messages randomly.
ejabberd is accepting most of the connections.
Even though the number of connections keeps increasing,
the value of erlang:memory(total) is observed to stay within a range.
But if I check the memory usage of the ejabberd process using the top command, I can see that it is increasing continuously.
The difference between the value of erlang:memory(total) and the memory usage shown by top keeps growing.
Please let me know the reason for this difference.
Is it because of a memory leak? Is there any way I can debug this issue?
If it is not a memory leak, what is the additional memory (the difference between the Erlang figure and the top figure) used for?
A memory leak in either the Erlang VM itself or in the non-Erlang parts of ejabberd would have the effect you describe.
ejabberd contains some NIFs - there are 10 ".c" files in ejabberd-2.1.10.
Was your ejabberd configured with "--enable-nif"?
If so, try comparing with a version built using "--disable-nif", to see if it has different memory usage behaviour.
Other possibilities for debugging include using Valgrind for detecting and locating the leak. (I haven't tried using it on the Erlang VM; there may be a number of false positives, but with a bit of luck the leak will stand out, either by size or by source.)
A final note: the Erlang process's heap may have become fragmented. The gaps between allocations would count towards the OS process's size; it doesn't look like they are included in erlang:memory(total).
