htop: swapping when still having free memory

I am using htop and I don't understand this output (macOS 10.13):
I see 748 M of swap used while 4.7 G of memory is free. Why is there swap usage when there is free memory? Am I missing something?

Related

FSB, RAM MHz, MT/s, memory divider 1:1, Core2Duo

Due to temporary (hopefully) financial problems I have to use an old laptop. Its FSB (Front Side Bus) clock is 333 MHz (https://www.techsiting.com/mt-s-vs-mhz/). It has 2 SO-DIMM slots for DDR2 SDRAM. It previously had only one 2 GB DIMM, and it was a nightmare.
Each slot can handle at most 2 GB, so the maximum amount of memory is 4 GB. Knowing that DDR stands for double data rate, I bought for funny money (10 euro) two 800 MHz DDR2 SO-DIMM modules, hoping to get (assuming the memory divider is 1:2 - it is double data rate, isn't it?) 2 x 333 MHz -> 667 MT/s after the divider (no idea how they avoided 666). As I have a Core2Duo, I even had a very little hope of getting 4 x 333 MHz = 1333 MT/s.
But it seems that my memory divider is 1:1, so I get either
2 x 333 MHz x divider = 333 MT/s
4 x 333 MHz x divider = ?
And utilities like lshw and dmidecode seem to confirm that:
~ >>> sudo lshw -C memory | grep clock
clock: 333MHz (3.0ns) # notice 333MHz here
clock: 333MHz (3.0ns) # notice 333MHz here
~ >>> sudo dmidecode --type memory | grep Speed
Supported Speeds:
Current Speed: Unknown
Current Speed: Unknown
Speed: 333 MT/s # notice 333MT/s here
Speed: 333 MT/s # notice 333MT/s here
~ >>>
So my 333 MHz on the FSB has been multiplied by 1 (one) and I've got 333 MT/s (if I understood correctly). I'm still satisfied: the OS does not swap as much, the boot process is faster, programs start faster, the browser does not hang every hour, and I can open many more tabs. I just want to know: since I have a Core2Duo, which MT/s do I get from these two? Or maybe it is even more complicated?
2 x 333 MHz x divider = 333 MT/s
4 x 333 MHz x divider = 667 MT/s # 4 because of Duo
And is there any difference, for a 2-processor system with just 4 GB of RAM, when MT/s == MHz?
PS: the BIOS is old (although the latest available), and I cannot see the real FSB clock there, nor change it, nor change the memory divider.
Looks like there's no point in checking the I/O bus clock with some Linux command/tool, because it is always just half of the data rate. If what is written in electronics.stackexchange.com/a/424928 holds:
I/O bus clock is always half of the bus data rate.
then my old machine has these parameters:
It is DDR2-333 (not standardized by JEDEC, whose DDR2 speed grades start at DDR2-400)
It has memory MHz = 333
It has memory MT/s = 333
It has I/O bus MHz = 166.5 # half of the 333 MT/s data rate
The thing I still don't get: since I have a Core2Duo, is my memory MT/s = 333 or 666?
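For what it's worth, the generic DDR relationship from the linked answer works out like this with these numbers (this is standard bookkeeping, not anything specific to this laptop):
data rate = I/O bus clock x 2 # one transfer on each clock edge
333 MT/s = 166.5 MHz x 2
Nothing in that relationship involves the CPU, so the core count (Core2Duo or not) does not multiply the memory transfer rate.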

recon_alloc:memory(allocated) is much smaller than the memory allocated from the OS

> recon_alloc:memory(allocated).
49221632
> os:getpid().
"5656"
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
22443 root 20 0 5763100 1.345g 2964 S 1.1 8.7 96:59.10 beam.smp
5656 root 20 0 5253972 1.001g 4736 S 2.2 6.4 35:05.21 beam.smp
I find that the memory reported inside the Erlang VM is much smaller than the memory allocated from the OS, which suggests my Erlang node has a memory leak.
Is there a way to locate where the memory leak has occurred?
I've written https://www.erlang-in-anger.com/ to go along with recon and explain some of these things. It's all free, and covers pretty much everything you'll need for this issue. You will want to look at Chapter 7 (Memory Leaks), and particularly section 7.3 on memory fragmentation.
Essentially, you can look at the other functions in recon_alloc to find out which allocator type is seeing low usage in its allocations for various block sizes, and then play with the allocation strategies to optimize.
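For example, a session along these lines (a sketch using other recon_alloc functions from the same recon library; `current` snapshots the live node) shows how much of the allocated space is actually in use:
> recon_alloc:set_unit(megabyte).
> recon_alloc:memory(allocated). % what the allocators took from the OS
> recon_alloc:memory(used). % what is actually in use inside those carriers
> recon_alloc:fragmentation(current). % usage ratios per allocator instance
> recon_alloc:average_block_sizes(current). % average block sizes per allocator type
A large gap between allocated and used, together with low usage ratios, points at fragmentation rather than a plain leak.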

Prometheus container-level buffered memory metric is missing

When I checked the Prometheus custom metrics, I saw container_memory_cache, but container-level buffered memory data is not available. When I run the vmstat -S M command, I can get buffered memory as below. But in a Kubernetes architecture, running this command in every pod would waste resources. Is there an alternative way to get this data for each pod? In addition, the vmstat metrics do not include buffered memory data either. Any ideas? Thanks
vmstat -S M
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
3 0 0 456 2 19594 0 0 6 39 4 3 7 5 87 0 0
You can find a lot of information regarding container memory usage in cgroups, for example
/sys/fs/cgroup/memory/docker/{id}/memory.stat
Such files can be accessed from the node OS and will provide you with all the available information regarding container memory usage.
However, as far as I understand, buffered memory is not tracked inside containers, to avoid double caching by the node OS and the container itself.
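As a sketch of pulling those counters for one container directly on the node (cgroup v1 layout; $CID stands in for the container id Docker assigned):
grep -E '^(cache|rss|mapped_file)' /sys/fs/cgroup/memory/docker/$CID/memory.stat # page cache, anonymous memory, file mappings
The cache line is the source of the container_memory_cache metric; a separate "buffers" figure like vmstat's has no per-cgroup counterpart.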

ARM: Safe physical memory position (to reserve) for my ARM hypervisor in relation to a Linux/Android guest

I am developing a basic hypervisor on ARM (using the board Arndale Exynos 5250).
I want to load Linux(ubuntu or smth else)/Android as the guest. Currently I'm using a Linaro distribution.
I'm almost there, most of the big problems have already been dealt with, except for the last one: reserving memory for my hypervisor such that the kernel does not try to OVERWRITE it BEFORE parsing the FDT or the kernel command line.
The problem is that my Linaro distribution's U-Boot passes an FDT in R2 to the Linux kernel, BUT the kernel tries to overwrite my hypervisor's memory before seeing that I reserved that region in the FDT (which I edited by decompiling the DTB, modifying the DTS, and recompiling it). I've tried changing the kernel command-line parameters, but they are also parsed AFTER the kernel tries to overwrite my reserved portion of memory.
Thus, what I need is a safe location in physical RAM to put my hypervisor's code at, such that the Linux kernel won't try to access (r/w) it BEFORE parsing the FDT or its kernel command line.
Context details:
The system RAM layout on Exynos 5250 is: physical RAM starts at 0x4000_0000 (=1GB) and has the length 0x8000_0000 (=2GB).
The Linux kernel is loaded (by U-Boot) at 0x4000_7000; its size (uncompressed uImage) is less than 5 MB and its entry point is set at 0x4000_8000.
uInitrd is loaded at 0x4200_0000 and is less than 2 MB in size.
The FDT (board.dtb) is loaded at 0x41f0_0000 (passed in R2) and is less than 35 KB in size.
I currently load my hypervisor at 0x40C0_0000 and I want to reserve 200MB (0x0C80_0000) starting from that address, but the kernel tries to write there (a stage 2 HYP trap tells me that) before looking in the FDT or in the command line to see that the region is actually reserved. If instead I load my hypervisor at 0x5000_0000 (without even modifying the original DTB or the command line), it does not try to overwrite me!
The FDT is passed directly, not through ATAGs
Since when loading my hypervisor at 0x5000_0000 the kernel does not try to overwrite it whatsoever, I assume there are memory regions that Linux does not touch before parsing the FDT/command-line. I need to know whether this is true or not, and if true, some details regarding these memory regions.
Thanks!
RELATED QUESTION:
Does anyone happen to know what the priority is between the following: ATAGs / kernel command line / FDT? For instance, if I reserve memory through the kernel command line but not in the FDT (.dtb), should it work, or is the command line overridden by the FDT? Is there some kind of concatenation between these three?
As per https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm/Booting, safe locations start 128 MB from the start of RAM (assuming the kernel is loaded in that region, which it should be). If a zImage was loaded lower in memory than what is likely to be the end address of the decompressed image, it might relocate itself higher up before it starts decompressing. But in addition to this, the kernel has a .bss region beyond the end of the decompressed image in memory.
(Do also note that your FDT and initrd locations already violate this specification, and that the memory block you want to reserve covers the locations of both of these.)
Effectively, your reserved area should go after the FDT and initrd in memory - which 0x50000000 is. But anything > 0x08000000 from start of RAM should work, portably, so long as that doesn't overwrite the FDT, initrd or U-Boot in memory.
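For reference, the reservation itself can be expressed the way the question already edits the DTB - decompile, add a /memreserve/ entry, recompile (a sketch; the address/size pair matches the 0x5000_0000 placement discussed above):
dtc -I dtb -O dts board.dtb -o board.dts
# then, just below the /dts-v1/; line in board.dts, add:
# /memreserve/ 0x50000000 0x0C800000;
dtc -I dts -O dtb board.dts -o board.dtb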
The priority of kernel/FDT/bootloader command line depends on the kernel configuration - do a menuconfig and check under "Boot options". You can combine ATAGs with the built-in command lines, but not FDT - after all, the FDT chosen node is supposed to be generated by the bootloader. U-Boot's FDT support is OK, so you should let it do this rather than baking the command line into the .dts if you want an FDT command line.
The kernel is pretty conservative before it's got its memory map, since it has to blindly trust that the bootloader has laid things out as specified. U-Boot, on the other hand, copies bits of itself all over the place and is certainly the culprit for the top end of RAM - if you #define DEBUG in (I think) common/board_f.c you'll get a dump of what it hits during relocation (not including the Exynos iRAM SPL/boot code stuff, but that won't make a difference here anyway).
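If you want that relocation dump, the change itself is a one-liner (the answer hedges on the exact file, so treat the location as approximate):
/* at the very top of common/board_f.c, before any #include */
#define DEBUG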

GC and memory limit issues with R

I am using R on some relatively big data and am hitting some memory issues. This is on Linux. I have significantly less data than the available memory on the system so it's an issue of managing transient allocation.
When I run gc(), I get the following listing
used (Mb) gc trigger (Mb) max used (Mb)
Ncells 2147186 114.7 3215540 171.8 2945794 157.4
Vcells 251427223 1918.3 592488509 4520.4 592482377 4520.3
yet R appears to have 4 GB allocated in resident memory and 2 GB in swap. I'm assuming this is OS-allocated memory that R's memory management system will allocate and GC as needed. However, let's say I don't want to let R OS-allocate more than 4 GB, to prevent swap thrashing. I could always ulimit, but then it would just crash instead of working within the reduced space and GCing more often. Is there a way to specify an arbitrary maximum for the GC trigger and make sure that R never OS-allocates more? Or is there something else I could do to manage memory usage?
In short: no. I found that you simply cannot micromanage memory management and gc().
On the other hand, you could try to keep your data in memory but 'outside' of R. The bigmemory package makes that fairly easy. Of course, using a 64-bit version of R and ample RAM may make the problem go away too.
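A minimal sketch of the bigmemory approach (matrix dimensions and file names are made up for illustration): the data lives in a file-backed buffer outside R's garbage-collected heap, so it never counts toward the gc trigger.
library(bigmemory)
# File-backed matrix: the payload is memory-mapped from data.bin,
# not stored in R's heap, so gc() never has to scan or copy it.
x <- filebacked.big.matrix(nrow = 1e6, ncol = 10, type = "double",
                           backingfile = "data.bin",
                           descriptorfile = "data.desc")
x[1, ] <- rnorm(10) # indexed access works much like a regular matrix
# Re-attach later, even from another R session, without copying:
y <- attach.big.matrix("data.desc")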
