I am running an ejabberd 2.1.10 server on Linux (Erlang R14B03).
I am creating XMPP connections in batches using a tool and sending messages randomly.
ejabberd is accepting most of the connections.
Even though the number of connections keeps increasing,
the value of erlang:memory(total) stays within a range.
But if I check the memory usage of the ejabberd process using the top command, I can see that it is increasing continuously.
The difference between the value of erlang:memory(total) and the memory usage shown by top keeps growing.
Please let me know the reason for the difference in the memory reported.
Is it because of a memory leak? Is there any way I can debug this issue?
If it is not a memory leak, what is the additional memory (the difference between the Erlang value and the top value) used for?
A memory leak in either the Erlang VM itself or in the non-Erlang parts of ejabberd would have the effect you describe.
ejabberd contains some NIFs - there are 10 ".c" files in ejabberd-2.1.10.
Was your ejabberd configured with "--enable-nif"?
If so, try comparing with a version built using "--disable-nif", to see if it has different memory usage behaviour.
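To make the NIF angle concrete, here is a tiny hypothetical NIF (not taken from ejabberd) that illustrates why a leak in C code is invisible to erlang:memory(total): memory obtained with plain malloc never goes through the VM's allocators, so only the OS process size grows.

/* leak_example.c - hypothetical NIF, for illustration only (not ejabberd code).
 * Build roughly like this (the include path depends on your Erlang installation):
 *   gcc -fPIC -shared -o leak_example.so leak_example.c -I/usr/lib/erlang/usr/include
 */
#include <stdlib.h>
#include "erl_nif.h"

static ERL_NIF_TERM leaky(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
    /* Plain malloc bypasses the Erlang allocators, so this memory is not
     * reported by erlang:memory(total). If it is never freed, top shows the
     * ejabberd OS process growing while the Erlang-side totals stay flat. */
    char *buf = malloc(64 * 1024);
    (void)buf;               /* deliberately leaked for illustration */
    (void)argc; (void)argv;
    return enif_make_atom(env, "ok");
}

static ErlNifFunc nif_funcs[] = {
    {"leaky", 0, leaky}
};

ERL_NIF_INIT(leak_example, nif_funcs, NULL, NULL, NULL, NULL)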
Other possibilities for debugging include using Valgrind for detecting and locating the leak. (I haven't tried using it on the Erlang VM; there may be a number of false positives, but with a bit of luck the leak will stand out, either by size or by source.)
A final note: the Erlang process's heap may have become fragmented. The gaps between allocations count towards the OS process's size, but they do not appear to be included in erlang:memory(total).
I have a Zynq 7020 chip with 250 MB of DDR memory attached to it put in ECC mode (so 125 MB effectively). It's attached to NAND flash memory and has a series of bootloaders which eventually load VxWorks to run some stuff.
We are about to do a test which will require me to read all the memory, flash, and FPGA configuration memory on the device after execution.
I have another [small] program that I will install via JTAG after the run and have it write out the rest of the RAM, then all the flash and FPGA configuration memory. This program is compiled by the Xilinx SDK and is bare-metal (no OS/bootloader).
When I load this program, I reset the processor (JTAG command), run a ps7_init.tcl script that sets all the CPU registers to a good configuration as set by Vivado, load the ELF file onto the device, and then run the processor. The program then tries to read memory starting at address 0x0, but it crashes quickly. I told it to start at an address of 1 MB (1<<20) instead (because I know there's some weird memory-map stuff at the beginning, so I tried this just in case); it reads a little bit more, then crashes again.
The crash appears to be the CPU not letting me read these areas of RAM.
Why not? How do I make it so I can read every byte of the 125 MB of RAM I have?
There is a template project that Xilinx provides that is close to exactly what you are trying to do. It is the "Memory Tests" template, and it runs through attached memory ranges and tests read/write operations. By default it tests DDR memory range and ps7_ram_1. The linker script for the application puts the program in ps7_ram_0, and doesn't test that range since you can't overwrite instruction and data memory for the application.
Code for the template can be found here:
<SDK installation directory>\data\embeddedsw\lib\sw_apps\memory_tests
I would recommend creating a new project from this template and changing the memorytest.c file to fit your needs.
To answer your question more directly: you are likely running into problems with the processor's MMU (memory management unit) and cache management. If your application is being loaded into DDR, then it is possible that it is blocking you from accessing the application's instruction and data memory. If your application is loaded into OCM (like the template), there may be access problems with memory that is in the cache. If you disable the cache using
Xil_DCacheDisable();
then you should be able to read the entire DDR memory space (as long as it exists). Make sure that you configure your application's linker script (*.ld) so that the application knows which memory devices are out there, along with their base addresses and sizes.
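For reference, here is a minimal sketch of how such a dump loop might look with the standalone BSP that the SDK generates. Xil_DCacheDisable() and Xil_In32() are the standard BSP calls from xil_cache.h and xil_io.h; the DDR address macros come from the generated xparameters.h, and their exact names and values depend on your Vivado design, so treat them as placeholders for your actual DDR range.

/* Bare-metal sketch: walk the DDR range with the data cache disabled.
 * XPAR_PS7_DDR_0_S_AXI_BASEADDR / _HIGHADDR are taken from the generated
 * xparameters.h and are assumptions about your particular hardware design. */
#include "xparameters.h"
#include "xil_cache.h"
#include "xil_io.h"
#include "xil_printf.h"

int main(void)
{
    u32 addr;
    u32 checksum = 0;

    /* Reads must go straight to DDR, not to (possibly stale) cache lines. */
    Xil_DCacheDisable();

    for (addr = XPAR_PS7_DDR_0_S_AXI_BASEADDR;
         addr < XPAR_PS7_DDR_0_S_AXI_HIGHADDR;
         addr += 4) {
        u32 word = Xil_In32(addr);  /* 32-bit read from the physical address */
        checksum ^= word;           /* in a real dump, stream each word out over
                                       UART or into a buffer the debugger reads back */
    }

    xil_printf("DDR read complete, xor checksum = %x\r\n", checksum);
    return 0;
}

As in the template, the application itself should live in ps7_ram_0 (OCM), so a loop like this can read DDR without stepping on its own code and data.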
I am learning about isolates in Dart/Flutter. Almost all documents say that isolates don't share memory with each other, but they don't say what the maximum memory of an isolate is. Is it limited by the app's maximum memory, or does each isolate have a separate memory space that doesn't depend on the total memory initially allocated to the application?
Thanks for any help.
Update
I found information at Dart Glossary: "Dart supports concurrent execution by way of isolates, which you can think of as processes without the overhead. Each isolate has its own memory and code, which can’t be affected by any other isolate"
See https://github.com/dart-lang/sdk/issues/34886
You can use --old_gen_heap_size to set the memory limit in megabytes.
You can specify such options by setting an environment variable, for example:
DART_VM_OPTIONS="--old_gen_heap_size=2048 --observe"
The memory seems to be for the whole VM instance, not per isolate.
To get all available options use
dart --help --verbose
or
dart -h -v
I am learning the concept of virtual memory, but this question has been confusing me for a while. Since most modern computers use virtual memory, when a program is in execution, the OS is supposed to page data in and out between RAM and disk. So why do we still encounter "out of memory" issues? Please correct me if I have misunderstood the concept; I really appreciate your explanation.
PS: For example, I was analyzing a large amount of data (>100 GB) output from a simulation on a computing cluster, reading the data into a C array. Very often the system crashed and complained about a memory error.
First: modern computers do indeed use virtual memory, but there is no magic here. Memory is not created out of nothing. Virtual memory schemes typically allow a portion of the mass storage subsystem (aka the hard disk) to be used to hold portions of the process that are (hopefully) less frequently used.
This technique allows processes to use more memory than is available as RAM. However, nothing is infinite. Eventually all RAM and hard drive resources will be used up and the process will get an out-of-memory error.
Second: it is not unheard of for operating systems to place a cap on the memory that a process may use. Hit that cap and, again, the process gets an out-of-memory error.
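Since the question mentions reading the simulation output into a C array, here is a minimal sketch (plain C, no particular platform assumed) of how that out-of-memory condition surfaces in the program: once RAM plus paging space, or a process cap, is exhausted, the allocation simply fails.

/* Minimal sketch: a large allocation can fail even with virtual memory,
 * because paging space and per-process limits are finite. Checking the
 * return value of malloc is how the failure shows up in a C program. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = (size_t)100 * 1024 * 1024 * 1024;  /* ~100 GB, assumes a 64-bit build */
    double *data = malloc(n);
    if (data == NULL) {
        /* RAM + swap (or a process quota) has been exhausted, or the
         * request exceeds what the OS will grant. */
        fprintf(stderr, "out of memory: could not allocate %zu bytes\n", n);
        return 1;
    }
    /* ... fill and analyze the data ... */
    free(data);
    return 0;
}

Note that on Linux, with memory overcommit enabled, the allocation may appear to succeed and the process is instead killed later (by the OOM killer) when the pages are actually touched, which matches the "system crashed" symptom described above.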
Even with virtual memory the memory available is not unlimited.
Limit 1) Architectural limits. The processor and operating system will place some maximum virtual memory limit.
Limit 2) System Parameters. Many operating systems configure the maximum virtual memory size.
Limit 3) Process quotas. Many operating systems have process quotas that limit the maximum virtual memory size (see the sketch after this list).
Limit 4) System resources. Notably page file space.
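As an illustration of limit 3, here is a small sketch for a POSIX/Linux system (such as the computing cluster in the question) that queries the per-process cap on virtual address space; RLIMIT_AS is the standard resource limit for this, and the same limit can be lowered with setrlimit or the shell's ulimit -v.

/* Query the per-process virtual address space quota (POSIX/Linux). */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_AS, &rl) == 0) {
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("virtual address space: unlimited (up to architectural limits)\n");
        else
            printf("virtual address space limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);
    }
    return 0;
}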
Does the Windows Mobile operating system protect processes' memory from one another?
Can one badly written application crash another application just by mistakenly writing over the other one's memory?
Windows Mobile, at least in all current incarnations, is built on Windows CE 5.0 and therefore uses CE 5.0's memory model (which is the same as it was in CE 3.0). The OS doesn't actually do a lot to protect process memory, but it does enough to generally keep processes from interfering with one another. It's not hard and fast, though.
CE processes run in "slots", of which there are 32. The currently running process gets swapped to slot zero, and its addresses are re-based to zero (so all memory in the running process effectively has two addresses: the slot 0 address and its non-zero slot address). These addresses are protected (though there's a simple API call to cross the boundary). This means that pointer corruptions, etc. will not step on other apps, but if you want to, you still can.
CE also has the concept of shared memory. All processes have access to this area and it is 100% unprotected. The memory manager can give you a shared address without you specifically asking for one, depending on your allocation and its size. If you have shared memory, then yes, any process can access that data, including corrupting it, and you will get no error or warning in either process.
Does the Windows Mobile operating system protect processes' memory from one another?
Yes.
Can one badly written application crash another application just by mistakenly writing over the other one's memory?
No (but it might do other things like use up all the 'disk' space).
Even if you're a device driver, to get permission to write to memory that's owned by a different process, there's an API which you must invoke explicitly.
While ChrisW's answer is technically correct, my experience of Windows Mobile is that it is much easier to crash the entire device from an application than it is on the desktop. I could guess at a few reasons why this is the case:
The operating system is often much more heavily OEMed than desktop Windows; that is, the amount of manufacturer-specific low-level code can be very high, which leads to manufacturer-specific bugs at a level that can cause bad crashes. On many devices it is common to see a new firmware revision every month or so, where the revisions are fixes to such bugs.
Resources are scarcer, and an application that exhausts all available resources is liable to cause a crash.
The protection mechanisms and architecture vary quite a bit. The device I'm currently working with is SH4-based, while you mostly see ARM, x86 and the odd MIPS CPU.
I have Windows 2003 terminal servers, multi-core. I'm looking for a way to monitor individual CPU core usage on these servers. It is possible for an end user to have a run-away process (e.g. Internet Explorer or Outlook). The core for that process may spike to near 100% while the other cores stay 'normal'. The overall CPU usage on the server is just the average across all the cores, so if 7 of the cores on an 8-core server are idle and the 8th is running at 100%, the reported usage is 1/8 = 12.5%.
What utility can I use to monitor multiple servers? If the CPU usage for a core is "high", what would I use to determine the offending process, and how could I automatically kill that process if it was on the 'approved kill process' list?
A product from http://www.packettrap.com/ called PT360 would be perfect, except they use SNMP to get data, and SNMP appears to only give total CPU usage rather than breaking it out by individual core. Take a look at their Dashboard option with the CPU gauge 'gadget'. That's exactly what I need, if only it worked at the core level.
Any ideas?
Individual CPU usage is available through the standard Windows performance counters. You can monitor this in perfmon.
However, it won't give you the result you are looking for. Unless a thread/process has been explicitly bound to a single CPU, a run-away process will not spike one core to 100% while all the others idle. The run-away process will bounce around between all the processors. I don't know why Windows schedules threads this way; presumably there is no gain from forcing affinity, and there is some loss due to having to handle interrupts on particular cores.
You can see this easily enough just in task manager. Watch the individual CPU graphs when you have a single compute bound process running.
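To see the difference that explicit binding makes, here is a small Win32 sketch (a hypothetical test program, not part of any monitoring tool) that pins itself to CPU 0 with SetProcessAffinityMask and then spins; with the mask set, the per-CPU graphs show one core pegged, and without it the same load bounces across all cores as described above.

/* Sketch (Win32): bind the current process to a single core to reproduce
 * the "one core at 100%" pattern. Without such an affinity mask, the
 * scheduler will bounce a compute-bound thread across cores. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Bit 0 set = run only on CPU 0. */
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }

    /* Busy loop: with the mask set, Task Manager / perfmon shows only CPU 0
     * pegged; without it, the load spreads across all cores. */
    for (;;) { /* spin */ }

    return 0;
}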
You can give Spotlight on Windows a try. It lets you graphically drill into all sorts of performance and load indicators. It's freeware.
perfmon from Microsoft can monitor each individual CPU. perfmon also works remotely, and you can monitor various aspects of Windows.
I'm not sure it helps to find run-away processes, because the Windows scheduler does not always execute a process on the same CPU -> on your 8-CPU machine you will see 12.5% usage on all CPUs if one process runs away.