How do I find the amount of memory installed on my QNX Neutrino system?
uname -a doesn't show it.
top only shows how much memory is available.
I've looked at pidin syspage without success; pidin mem shows all the used memory in gory detail.
pidin info will show the amount of memory installed, as shown below.
pidin info
CPU:X86 Release:6.4.1 FreeMem:836Mb/1015Mb BootTime:Jun 04 14:01:55 UTC 2014
The amount of free RAM is the size of "/proc"!
In your own program, you can write something like this:
#include <stdio.h>
#include <sys/stat.h>

struct stat buf;

/* On QNX Neutrino, the reported size of /proc is the amount of free RAM. */
if (stat("/proc", &buf) != -1) {
    printf("Mem free = %lld\n", (long long)buf.st_size);
}
--
Hope this helps
Emmanuel
showmem -S will show the amount of RAM installed, as shown below:
showmem -S
System RAM: 1936M ( 2030043136)
Total Used: 401M ( 420642376)
Used Private: 317M ( 332529404)
Used Shared: 79M ( 83333120)
Other: 4667K ( 4779852) (includes IFS and reserved RAM)
Related
I wrote a simple program to test page faults and TLB misses with perf.
The code is as follows. It writes 1 GB of data sequentially and is expected
to trigger 1 GB / 4 KB = 256K TLB misses and page faults.
#include <stdio.h>
#include <stdlib.h>

#define STEP   64
#define LENGTH (1024 * 1024 * 1024)

int main() {
    char *a = malloc(LENGTH);
    int i;

    /* Touch every 64th byte; each 4 KB page is still written at least once. */
    for (i = 0; i < LENGTH; i += STEP) {
        a[i] = 'a';
    }
    return 0;
}
However, the result is as follows and is far smaller than expected. Is perf really that imprecise? I would appreciate it if anyone could run the code on their machine.
$ perf stat -e dTLB-load-misses,page-faults ./a.out
Performance counter stats for './a.out':
12299 dTLB-load-misses
1070 page-faults
0.427970453 seconds time elapsed
Environment: Ubuntu 14.04.5 LTS, kernel 4.4.0; gcc 4.8.4, glibc 2.19. No compile flags.
The CPU is an Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz.
The kernel prefetches pages on a fault, at least once it has evidence of an access pattern. I can't find a definitive reference on the algorithm, but perhaps https://github.com/torvalds/linux/blob/master/mm/readahead.c is a starting point for seeing what is going on. I'd look for other performance counters that capture the behavior of this mechanism; one simple cross-check using the kernel's own fault accounting is sketched below.
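As a cheap cross-check on perf's numbers, you could read the process's own minor-fault count with getrusage() around the touching loop. This is only a sketch that reuses the 1 GB / 64-byte-stride loop from the question; the count it reports will still reflect whatever fault-around or readahead behaviour the kernel applies:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

#define STEP   64
#define LENGTH (1024L * 1024 * 1024)

int main(void) {
    char *a = malloc(LENGTH);
    if (!a) return 1;

    struct rusage before, after;
    getrusage(RUSAGE_SELF, &before);

    for (long i = 0; i < LENGTH; i += STEP)
        a[i] = 'a';

    getrusage(RUSAGE_SELF, &after);
    /* ru_minflt counts minor page faults serviced without disk I/O */
    printf("minor faults during loop: %ld\n",
           after.ru_minflt - before.ru_minflt);

    free(a);
    return 0;
}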
I'm attempting to do an exercise from "Expert C Programming" where the point is to see how much memory a program can allocate. It hinges on malloc returning NULL when it can no longer allocate.
#include <stdio.h>
#include <stdlib.h>

int main() {
    int totalMB = 0;
    int oneMeg = 1 << 20;

    while (malloc(oneMeg)) {
        ++totalMB;
    }
    printf("Allocated %d MB total\n", totalMB);
    return 0;
}
Rather than printing the total, I get a kernel panic after allocating ~8 GB on my 16 GB MacBook Pro.
Kernel panic log:
Anonymous UUID: 0B87CC9D-2495-4639-EA18-6F1F8696029F
Tue Dec 13 23:09:12 2016
*** Panic Report ***
panic(cpu 0 caller 0xffffff800c51f5a4): "zalloc: zone map exhausted while allocating from zone VM map entries, likely due to memory leak in zone VM map entries (6178859600 total bytes, 77235745 elements allocated)"#/Library/Caches/com.apple.xbs/Sources/xnu/xnu-3248.50.21/osfmk/kern/zalloc.c:2628
Backtrace (CPU 0), Frame : Return Address
0xffffff91f89bb960 : 0xffffff800c4dab12
0xffffff91f89bb9e0 : 0xffffff800c51f5a4
0xffffff91f89bbb10 : 0xffffff800c5614e0
0xffffff91f89bbb30 : 0xffffff800c5550e2
0xffffff91f89bbba0 : 0xffffff800c554960
0xffffff91f89bbd90 : 0xffffff800c55f493
0xffffff91f89bbea0 : 0xffffff800c4d17cb
0xffffff91f89bbf10 : 0xffffff800c5b8dca
0xffffff91f89bbfb0 : 0xffffff800c5ecc86
BSD process name corresponding to current thread: a.out
Mac OS version:
15F34
I understand that this can easily be fixed by the doctor's cliché of "It hurts when you do that? Then don't do that," but I want to understand why malloc isn't working as expected.
OS X 10.11.5
For the definitive answer to that question, you can look at the source code, which you'll find here:
zalloc.c source in XNU
In that source file find the function zalloc_internal(). This is the function that gives the kernel panic.
In that function you'll find a "for (;;) {" loop, which basically tries to allocate the memory you're requesting in the specified zone. If there isn't enough space, it immediately tries again. If that fails, it does a zone_gc() (garbage collection) to try to reclaim memory. If that also fails, it simply kernel panics - effectively halting the computer.
If you want to understand how zalloc.c works, look up zone-based memory allocators.
Your program is making the kernel run out of space in the zone called "VM map entries", which is a predefined zone allocated at boot. You could probably get the result you are expecting from your program, without a kernel panic, if you allocated more than 1 MB at a time; a sketch of that variant follows below.
In essence, it is not really a problem for the kernel to allocate you several gigabytes of memory. However, making thousands of smaller allocations that sum up to those gigabytes is much harder.
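As a hedged illustration of that last point (the 256 MB chunk size is an arbitrary choice, not anything the kernel documents), the exercise could be rewritten to create far fewer VM map entries per gigabyte:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    long totalMB = 0;
    size_t chunk = 256u << 20;   /* 256 MB per allocation: far fewer VM map entries */

    while (malloc(chunk)) {
        totalMB += 256;
    }
    printf("Allocated %ld MB total\n", totalMB);
    return 0;
}

Whether this actually reaches a NULL return before some other limit kicks in still depends on the platform's overcommit behaviour.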
What is the maximum amount of memory for a single process in UNIX, Linux, and Windows? How do I calculate that? How much user address space and kernel address space is there with 4 GB of RAM?
How much user address space and kernel address space for 4 GB of RAM?
The address space of a process is divided into two parts:
User space: On the standard 32-bit x86 architecture, the maximum addressable memory is 4 GB, of which addresses 0x00000000 through 0xbfffffff (3 GB) are meant for the code and data segments. This region can be addressed when the user process is executing in either user or kernel mode.
Kernel space: Similarly, addresses from 0xc0000000 to 0xffffffff (1 GB) are meant for the virtual address space of the kernel and can only be addressed when the process executes in kernel mode.
This particular address-space split on x86 is determined by the value of PAGE_OFFSET. Referring to Linux v3.11.1's page_32_types.h and page_64_types.h, the page offset is defined as below:
#define __PAGE_OFFSET _AC(CONFIG_PAGE_OFFSET, UL)
where Kconfig defines a default value of 0xC0000000, with other address-split options also available.
Similarly for 64 bit,
#define __PAGE_OFFSET _AC(0xffff880000000000, UL).
On 64-bit architectures the 3G/1G split no longer holds because of the huge address space; as per the source, the latest Linux versions use the offset given above.
On my 64-bit x86_64 machine, a 32-bit process can have the entire 4 GB of user address space, and the kernel holds the address range above 4 GB. Interestingly, on modern 64-bit x86_64 CPUs not all address lines are enabled (the address bus is not wide enough) to provide 2^64 = 16 exabytes of virtual address space. Perhaps the AMD64/x86 architectures have the lower 48/42 bits enabled respectively, resulting in 2^48 = 256 TB / 2^42 = 4 TB of address space. This definitely improves performance with large amounts of RAM, but at the same time it raises the question of how it is managed efficiently within the OS's limitations.
In Linux there's a way to find out the limit on the address space you can have, using the rlimit structure.
struct rlimit {
    rlim_t rlim_cur;  // current (soft) limit
    rlim_t rlim_max;  // ceiling for rlim_cur (hard limit)
};
rlim_t is an unsigned long type.
And you can do something like this:
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

// Bytes to gigabytes
static inline unsigned long btogb(unsigned long bytes) {
    return bytes / (1024 * 1024 * 1024);
}

// Bytes to exabytes
static inline double btoeb(double bytes) {
    return bytes / (1024.00 * 1024.00 * 1024.00 * 1024.00 * 1024.00 * 1024.00);
}

int main() {
    printf("\n");
    struct rlimit rlim_addr_space;
    rlim_t addr_space;
    /*
     * Here we call getrlimit() with RLIMIT_AS (Address Space) and
     * a pointer to our instance of the rlimit struct.
     */
    int retval = getrlimit(RLIMIT_AS, &rlim_addr_space);
    // getrlimit() returns 0 if it succeeded, so let's check that.
    if (!retval) {
        addr_space = rlim_addr_space.rlim_cur;
        fprintf(stdout, "Current address_space: %lu Bytes, or %lu GB, or %f EB\n",
                addr_space, btogb(addr_space), btoeb((double)addr_space));
    } else {
        fprintf(stderr, "Couldn't get address space current limit.");
        return 1;
    }
    return 0;
}
I ran this on my computer and... prrrrrrrrrrrrrrrrr tsk!
Output: Current address_space: 18446744073709551615 Bytes, or 17179869183 GB, or 16.000000 EB
I have 16 ExaBytes of max address space available on my Linux x86_64.
Here's getrlimit()'s definition;
it also lists the other constants you can pass to getrlimit() and introduces getrlimit()'s sister, setrlimit(). That is when the rlim_max member of rlimit becomes really important: you should always check that you don't exceed this value, so the kernel doesn't punch your face, drink your coffee, and steal your papers. A minimal setrlimit() sketch follows below.
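For completeness, here's a minimal sketch of the setrlimit() side; the 1 GB cap is an arbitrary example value. An unprivileged process may lower either limit or raise rlim_cur up to rlim_max, but raising rlim_max requires privilege:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;

    if (getrlimit(RLIMIT_AS, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    /* Cap the soft limit at 1 GB (arbitrary example value), leaving the hard
     * limit untouched; rlim_cur must never exceed rlim_max. */
    rl.rlim_cur = 1UL << 30;
    if (rl.rlim_cur > rl.rlim_max)
        rl.rlim_cur = rl.rlim_max;

    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    printf("RLIMIT_AS soft limit is now %lu bytes\n",
           (unsigned long)rl.rlim_cur);
    return 0;
}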
P.S.: please excuse my sorry excuse for a drum roll ^_^
On Linux systems, see man ulimit
(UPDATED)
It says:
The ulimit builtin is used to set the resource usage limits of the
shell and any processes spawned by it. If a new limit value is
omitted, the current value of the limit of the resource is printed.
ulimit -a prints out all current values with their switch options; other switches, e.g. ulimit -n, print the maximum number of open files.
Unfortunately, "max memory size" says "unlimited", which means it is not limited by the system administrator.
You can view the memory size by
cat /proc/meminfo
which outputs something like:
MemTotal: 4048744 kB
MemFree: 465504 kB
Buffers: 316192 kB
Cached: 1306740 kB
SwapCached: 508 kB
Active: 1744884 kB
(...)
So, if ulimit says "unlimited", the MemFree is all yours. Almost.
Don't forget that malloc() (and the new operator, which calls malloc()) is a standard-library function, so if you call malloc(100) 10 times there will be a lot of "slack"; follow the link to learn why. A rough illustration follows below.
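One way to see that slack yourself (glibc-specific, since malloc_usable_size() is not a portable guarantee) is something like this:

#include <malloc.h>   /* glibc-specific: malloc_usable_size() */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t requested = 0, usable = 0;

    for (int i = 0; i < 10; i++) {
        char *p = malloc(100);            /* ask for 100 bytes */
        if (!p) break;
        requested += 100;
        usable += malloc_usable_size(p);  /* what the allocator actually reserved */
    }
    printf("requested %zu bytes, allocator handed out %zu usable bytes\n",
           requested, usable);
    return 0;
}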
I'm getting ready to release a tool that is only effective with regular hard drives, not SSDs (solid-state drives). In fact, it shouldn't be used with SSDs because it will result in a lot of reads/writes with no real benefit.
Does anyone know of a way to detect whether a given drive is solid-state?
Finally a reliable solution! Two of them, actually!
Check /sys/block/sdX/queue/rotational, where sdX is the drive name. If it's 0, you're dealing with an SSD; 1 means a plain old HDD.
I can't put my finger on the Linux version where it was introduced, but it's present in Ubuntu's Linux 3.2 and in vanilla Linux 3.6 and not present in vanilla 2.6.38. Oracle also backported it to their Unbreakable Enterprise kernel 5.5, which is based on 2.6.32.
There's also an ioctl to check whether the drive is rotational, available since Linux 3.3 and introduced by this commit. Using sysfs is usually more convenient, though; a minimal sketch of the sysfs check follows below.
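A minimal C sketch of the sysfs check could look like this; "sda" is just an example device name and error handling is kept to a bare minimum:

#include <stdio.h>
#include <string.h>

/* Returns 1 for rotational (HDD), 0 for non-rotational (SSD), -1 on error. */
int is_rotational(const char *dev /* e.g. "sda" */) {
    char path[128];
    snprintf(path, sizeof path, "/sys/block/%s/queue/rotational", dev);

    FILE *f = fopen(path, "r");
    if (!f) return -1;

    int rot = fgetc(f) == '1';
    fclose(f);
    return rot;
}

int main(void) {
    int rot = is_rotational("sda");   /* example device name */
    if (rot < 0)
        perror("rotational");
    else
        printf("sda is %s\n", rot ? "rotational (HDD)" : "non-rotational (SSD)");
    return 0;
}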
You can actually fairly easily determine the rotational latency -- I did this once as part of a university project. It is described in this report. You'll want to skip to page 7 where you see some nice graphs of the latency. It goes from about 9.3 ms to 1.1 ms -- a drop of 8.2 ms. That corresponds directly to 60 s / 8.2 ms = 7317 RPM.
It was done with simple C code -- here's the part that measures the access-time difference between positions a and b in a scratch file. We did this with larger and larger b values until we had wandered all the way around a cylinder:
/* Measure the difference in access time between a and b. The result
 * is measured in nanoseconds. (work_file, buf, KiB, overflow_disk_buffer(),
 * get_cycles() and cycles_per_ns are defined elsewhere in the project.) */
int measure_latency(off_t a, off_t b) {
    cycles_t ta, tb;

    overflow_disk_buffer();

    lseek(work_file, a, SEEK_SET);
    read(work_file, buf, KiB / 2);
    ta = get_cycles();

    lseek(work_file, b, SEEK_SET);
    read(work_file, buf, KiB / 2);
    tb = get_cycles();

    int diff = (tb - ta) / cycles_per_ns;
    fprintf(stderr, "%i KiB to %i KiB: %i nsec\n",
            (int)(a / KiB), (int)(b / KiB), diff);
    return diff;
}
The command lsblk -d -o name,rota lists your drives, with a 1 in the ROTA column if it's a rotational disk and a 0 if it's an SSD.
Example output:
NAME ROTA
sda 1
sdb 0
Detecting SSDs is not as impossible as dseifert makes out. There is already some progress in Linux's libata (http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-04/msg03625.html), though it doesn't seem user-ready yet.
And I definitely understand why this needs to be done. It's basically the difference between a linked list and an array. Defragmentation and the like is usually counter-productive on an SSD.
You could get lucky by running
smartctl -i /dev/sda
from smartmontools. Almost all SSDs have "SSD" in the Model field. No guarantee, though.
My two cents on answering this old but very important question: if a disk is accessed via SCSI, then you will (potentially) be able to use the SCSI INQUIRY command to request its rotation rate. The VPD (Vital Product Data) page for that is called Block Device Characteristics and has page code 0xB1. Bytes 4 and 5 of this page contain a number with the following meaning:
0000h "Medium rotation rate is not reported"
0001h "Non-rotating medium (e.g., solid state)"
0002h - 0400h "Reserved"
0401h - FFFEh "Nominal medium rotation rate in rotations per minute (i.e.,
rpm) (e.g., 7 200 rpm = 1C20h, 10 000 rpm = 2710h, and 15 000 rpm = 3A98h)"
FFFFh "Reserved"
So, an SSD must have 0001h in this field. The T10.org document about this page can be found here.
However, the implementation status of this standard is not clear to me; a rough sketch of issuing the query on Linux follows below.
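For illustration only, a rough Linux-specific sketch of that INQUIRY using the SG_IO ioctl might look like the following; the field offsets follow the description above, /dev/sda is just an example device, and the code has not been validated against any particular drive:

#include <fcntl.h>
#include <scsi/sg.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/sda", O_RDONLY | O_NONBLOCK);   /* example device */
    if (fd < 0) { perror("open"); return 1; }

    unsigned char resp[64] = {0}, sense[32] = {0};
    /* INQUIRY with EVPD=1, page code 0xB1 (Block Device Characteristics) */
    unsigned char cdb[6] = {0x12, 0x01, 0xB1, 0x00, sizeof resp, 0x00};

    struct sg_io_hdr io;
    memset(&io, 0, sizeof io);
    io.interface_id    = 'S';
    io.dxfer_direction = SG_DXFER_FROM_DEV;
    io.cmd_len         = sizeof cdb;
    io.cmdp            = cdb;
    io.dxfer_len       = sizeof resp;
    io.dxferp          = resp;
    io.sbp             = sense;
    io.mx_sb_len       = sizeof sense;
    io.timeout         = 5000;   /* milliseconds */

    if (ioctl(fd, SG_IO, &io) < 0) { perror("SG_IO"); close(fd); return 1; }

    /* Bytes 4-5: medium rotation rate, big-endian; 0001h means non-rotating */
    unsigned rate = (resp[4] << 8) | resp[5];
    if (rate == 0x0001)
        printf("Non-rotating medium (SSD)\n");
    else if (rate >= 0x0401 && rate <= 0xFFFE)
        printf("Rotational medium, nominal %u rpm\n", rate);
    else
        printf("Rotation rate not reported (0x%04X)\n", rate);

    close(fd);
    return 0;
}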
I wrote the following JavaScript code. I needed to determine whether the machine was using an SSD drive and whether it was the boot drive. The solution uses the MSFT_PhysicalDisk WMI interface.
function main()
{
    var retval = false;

    // MediaType    - 0 Unknown, 3 HDD, 4 SSD
    // SpindleSpeed - -1 has rotational speed, 0 has no rotational speed (SSD)
    // DeviceID     - 0 boot device
    var objWMIService = GetObject("winmgmts:\\\\.\\root\\Microsoft\\Windows\\Storage");
    var colItems = objWMIService.ExecQuery("select * from MSFT_PhysicalDisk");
    var enumItems = new Enumerator(colItems);

    for (; !enumItems.atEnd(); enumItems.moveNext())
    {
        var objItem = enumItems.item();
        if (objItem.MediaType == 4 && objItem.SpindleSpeed == 0)
        {
            if (objItem.DeviceID == 0)
            {
                retval = true;
            }
        }
    }

    if (retval)
    {
        WScript.Echo("You have SSD Drive and it is your boot drive.");
    }
    else
    {
        WScript.Echo("You do not have SSD Drive");
    }
    return retval;
}

main();
SSD devices emulate a hard disk device interface, so they can just be used like hard disks. This also means that there is no general way to detect what they are.
You could probably use some characteristics of the drive (latency, speed, size), though this won't be accurate for all drives. Another possibility may be to look at the S.M.A.R.T. data and see whether you can determine the type of disk from it (by model name or certain values); however, unless you keep a database of all drives out there, this is not going to be 100% accurate either.
Write a text file, read it back, and repeat 10,000 times; 10000/elapsed will be much higher for an SSD. In Python 3:
import time

def ssd_test():
    doc = 'ssd_test.txt'
    start = time.time()
    for i in range(10000):
        with open(doc, 'w+') as f:
            f.write('ssd test')
        with open(doc, 'r') as f:
            ret = f.read()
    stop = time.time()
    elapsed = stop - start
    ios = int(10000 / elapsed)
    hd = 'HDD'
    if ios > 6000:  # ssd > 8000; hdd < 4000
        hd = 'SSD'
    print('detecting hard drive type by read/write speed')
    print('ios', ios, 'hard drive type', hd)
    return hd
Segments of memory - BSS, Stack, Heap, Data, Code/Text (Are there any more?).
Say I have 128 MB of RAM. Can someone tell me:
How much memory is allocated for each of these memory segments?
Where do they start? Please specify the address range or something like that for better clarity.
What factors influence which should start where?
That depends on the number of variables used. Since you did not specify the compiler, language, or even the operating system, it's a difficult one to pin down! It all rests with the operating system, which is responsible for the memory management of applications. In short, there is no definite answer to this question. Think about it: the compiler/linker, at runtime, requests that the operating system allocate a block of memory, and that allocation depends on how many variables there are, how big they are, and the scope and usage of the variables. For instance, take this simple C program, in a file called simpletest.c:
#include <stdio.h>

int main(int argc, char **argv) {
    int num = 42;
    printf("The number is %d!\n", num);
    return 0;
}
Supposing the environment was Unix/Linux based and was compiled like this:
gcc -o simpletest simpletest.c
If you were to run objdump or nm on the binary image simpletest, you would see the sections of the executable; in this instance, .bss and .text. Make a note of the sizes of these sections, then add an int var[100]; to the above code, recompile, and rerun objdump or nm: you will find that the data section has appeared. Why? Because we added a variable of array-of-int type with 100 elements. A sketch of the modified program follows below.
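For reference, the modified program might look like this. Note that where the array ends up depends on how it's declared: a file-scope array with an initializer lands in .data, while an uninitialized one would grow .bss instead (a local array inside main() would only affect the stack):

#include <stdio.h>

/* File-scope and initialized: lands in the .data section.
 * Drop the initializer and it moves to .bss instead. */
int var[100] = {1};

int main(int argc, char **argv) {
    int num = 42;
    printf("The number is %d!\n", num);
    printf("var[0] is %d\n", var[0]);
    return 0;
}

Recompiling and comparing the objdump or nm output of both versions shows the section sizes change accordingly.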
This simple exercise proves that the sections grow (and hence the binary gets bigger), and it also shows that you cannot pre-determine how much memory will be allocated, as the runtime implementation varies from compiler to compiler and from operating system to operating system.
In short, the OS calls the shots on memory management!
You can get all this information by compiling your program:
# gcc -o hello hello.c // you might compile with -static for simplicity
and then readelf:
# readelf -l hello
Elf file type is EXEC (Executable file)
Entry point 0x80480e0
There are 3 program headers, starting at offset 52
Program Headers:
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x08048000 0x08048000 0x55dac 0x55dac R E 0x1000
LOAD 0x055dc0 0x0809edc0 0x0809edc0 0x01df4 0x03240 RW 0x1000
NOTE 0x000094 0x08048094 0x08048094 0x00020 0x00020 R 0x4
Section to Segment mapping:
Segment Sections...
00 .init .text .fini .rodata __libc_atexit __libc_subfreeres .note.ABI-tag
01 .data .eh_frame .got .bss
02 .note.ABI-tag
The output shows the overall structure of hello. The first program header corresponds to the process' code segment, which will be loaded from the file at offset 0x000000 into a memory region that will be mapped into the process' address space at address 0x08048000. The code segment will be 0x55dac bytes large and must be page-aligned (0x1000). This segment will comprise the .text and .rodata ELF sections discussed earlier, plus additional sections generated during the linking procedure. As expected, it's flagged read-only (R) and executable (E), but not writable (W).
The second program header corresponds to the process' data segment. Loading this segment follows the same steps mentioned above. However, note that the segment size is 0x01df4 on file and 0x03240 in memory. This is due to the .bss section, which is to be zeroed and therefore doesn't need to be present in the file. The data segment will also be page-aligned (0x1000) and will contain the .data and .bss ELF sections. It will be flagged readable and writable (RW). The third program header results from the linking procedure and is irrelevant for this discussion.
If you have a proc file system, you can check this (as long as you get "Hello World" to run long enough; hint: gdb) with the following command:
# cat /proc/`ps -C hello -o pid=`/maps
08048000-0809e000 r-xp 00000000 03:06 479202 .../hello
0809e000-080a1000 rw-p 00055000 03:06 479202 .../hello
080a1000-080a3000 rwxp 00000000 00:00 0
bffff000-c0000000 rwxp 00000000 00:00 0
The first mapped region is the process' code segment, the second and third build up the data segment (data + bss + heap), and the fourth, which has no correspondence in the ELF file, is the stack. Additional information about the running hello process can be obtained with GNU time, ps, and /proc/pid/stat.
example taken from:
http://www.lisha.ufsc.br/teaching/os/exercise/hello.html
The memory used also depends on the program's global and local variables.