I am trying to create a memory mapping over an array (memory I allocated with malloc()) using mmap(), but mmap() fails with "Invalid argument".
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <err.h>

int main()
{
    int *var1 = NULL;
    size_t size = 1000 * sizeof(int);

    var1 = (int *)malloc(size);

    int i = 0;
    for (i = 0; i < 999; i++)
    {
        var1[i] = 1;
    }
    printf("%p\n", (void *)var1);

    void *addr = NULL;
    addr = mmap((void *)var1, size, PROT_EXEC | PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_ANONYMOUS | MAP_FIXED, -1, 0); // to create memory map of var1
    if (addr == MAP_FAILED)
        err(1, NULL); // to print error
    return 0;
}
Error:
a.out: Invalid argument
Please help me.
Thank you in advance.
Proximate cause: mmap() fails because you asked it to create a new memory mapping at a specific address (var1's address), that address is already occupied (by the heap from which malloc() got its memory), and MAP_FIXED told the operating system it was not allowed to choose an alternate address when var1's address turned out to be unsuitable.
Analysis: What are you trying to do here? What does "find the memory map of an array" mean? Do you want to have your array of integers located in heap memory (returned by malloc()) or in an anonymous memory mapping created by mmap()? By the way, unless you fork() (create a child process) there is little functional difference: both are areas of memory that are private to your process. But they are not the same thing and you can't manipulate the heap with mmap() nor can you manage mapped memory with malloc().
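If the goal is just an anonymous mapping you can read and write, the usual approach is to let mmap() pick the address: pass NULL as the hint and drop MAP_FIXED, rather than trying to lay a mapping over memory the heap already owns. A minimal sketch, assuming a Linux-style mmap() with MAP_ANONYMOUS (not the asker's exact program):
#include <stdio.h>
#include <sys/mman.h>
#include <err.h>

int main(void)
{
    size_t size = 1000 * sizeof(int);

    // Let the kernel choose a free address: NULL hint, no MAP_FIXED.
    int *var1 = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (var1 == MAP_FAILED)
        err(1, "mmap");

    for (int i = 0; i < 1000; i++)
        var1[i] = 1;
    printf("%p\n", (void *)var1);

    munmap(var1, size);
    return 0;
}
If a fixed address is really required, the region has to be reserved and suitably aligned first (for example by an earlier mmap() call), which is not something malloc() gives you.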
Related
I came across this strange phenomenon while playing with unsafe Rust. I think this code should cause a segmentation fault, but it does not. Am I missing something?
I tried to set a pointer to a variable with a shorter lifetime and then dereference it.
// function that sets a pointer to a variable with a shorter lifetime
unsafe fn what(p: &mut *const i32) {
let a = 2;
*p = &a;
//let addr = *p; // I will talk about this later
println!("inside: {}", **p);
}
fn main() {
let mut p: *const i32 = 0 as *const i32;
unsafe {
what(&mut p);
// I thought this line would make a segfault because 'a' goes out of scope at the end of the function making the address invalid
println!("segfault? {}", *p);
// Even more unsettling: I can increment the address and still dereference it.
p = ((p as usize) + 1) as *const i32;
println!("I'm definitely missing something: {}", *p);
}
}
This program outputs:
inside: 2
segfault? {random number around 20000. probably uninitialized memory but why?}
I'm definitely missing something: {uninitialized memory}
If I uncomment the line
let addr = *p;
the second row becomes
segfault? 2
Why is there no segfault? Can the compiler extend the lifetime of a or the address p points at for safety? Am I missing some basic information about pointers in Rust?
This isn't unique to Rust. See:
Why doesn't the following code produce a segmentation fault?
Accessing an array out of bounds gives no error, why?
Can a local variable's memory be accessed outside its scope?
TL;DR: you have lied to the compiler. Your unsafe block does not uphold the requirements of safe code. This means you have created undefined behavior and the program is allowed to do whatever it wants. That could mean:
it crashes (such as via a segfault)
it runs perfectly
it erases your hard drive
it empties your bank account
it generates nasal demons
it eats your laundry
etc.
Segfaults are never a guaranteed outcome. A segmentation fault occurs when you access memory outside the region your thread/process is allowed to touch. The stack is well inside that region, so reading a stale stack address is unlikely to trigger one.
See also:
Why doesn't this Rust program crash?
I'm attempting to do an exercise from "Expert C Programming" where the point is to see how much memory a program can allocate. It hinges on malloc returning NULL when it cannot allocate anymore.
#include <stdio.h>
#include <stdlib.h>
int main() {
int totalMB = 0;
int oneMeg = 1<<20;
while (malloc(oneMeg)) {
++totalMB;
}
printf("Allocated %d Mb total \n", totalMB);
return 0;
}
Rather than printing the total, I get a kernel panic after allocating ~8GB on my 16GB Macbook Pro.
Kernel panic log:
Anonymous UUID: 0B87CC9D-2495-4639-EA18-6F1F8696029F
Tue Dec 13 23:09:12 2016
*** Panic Report ***
panic(cpu 0 caller 0xffffff800c51f5a4): "zalloc: zone map exhausted while allocating from zone VM map entries, likely due to memory leak in zone VM map entries (6178859600 total bytes, 77235745 elements allocated)"#/Library/Caches/com.apple.xbs/Sources/xnu/xnu-3248.50.21/osfmk/kern/zalloc.c:2628
Backtrace (CPU 0), Frame : Return Address
0xffffff91f89bb960 : 0xffffff800c4dab12
0xffffff91f89bb9e0 : 0xffffff800c51f5a4
0xffffff91f89bbb10 : 0xffffff800c5614e0
0xffffff91f89bbb30 : 0xffffff800c5550e2
0xffffff91f89bbba0 : 0xffffff800c554960
0xffffff91f89bbd90 : 0xffffff800c55f493
0xffffff91f89bbea0 : 0xffffff800c4d17cb
0xffffff91f89bbf10 : 0xffffff800c5b8dca
0xffffff91f89bbfb0 : 0xffffff800c5ecc86
BSD process name corresponding to current thread: a.out
Mac OS version:
15F34
I understand that this can easily be fixed by the doctor's cliche of "It hurts when you do that? Then don't do that" but I want to understand why malloc isn't working as expected.
OS X 10.11.5
For the definitive answer to that question, you can look at the source code, which you'll find here:
zalloc.c source in XNU
In that source file find the function zalloc_internal(). This is the function that gives the kernel panic.
In the function you'll find a "for (;;) {" loop, which basically tries to allocate the memory you're requesting in the specified zone. If there isn't enough space, it immediately tries again. If that fails it does a zone_gc() (garbage collect) to try to reclaim memory. If that also fails, it simply kernel panics - effectively halting the computer.
If you want to understand how zalloc.c works, look up zone-based memory allocators.
Your program is making the kernel run out of space in the zone called "VM map entries", which is a predefined zone allocated at boot. You could probably get the result you are expecting from your program, without a kernel panic, if you allocated more than 1 MB at a time.
In essence, handing your process several gigabytes of memory is not really a problem for the kernel. However, servicing thousands of smaller allocations that add up to those gigabytes is much harder.
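For example, a variant of the exercise that asks for much larger chunks makes far fewer allocations for the same total, which is the kind of change suggested above to avoid exhausting the "VM map entries" zone. A minimal sketch (illustrative only; depending on the platform's overcommit behaviour, malloc() may still succeed for a very long time before returning NULL):
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    long totalGB = 0;
    size_t oneGig = (size_t)1 << 30;   // 1 GiB per request instead of 1 MiB

    // Far fewer allocations are needed to cover the same amount of memory.
    while (malloc(oneGig)) {
        ++totalGB;
    }

    printf("Allocated %ld GB total\n", totalGB);
    return 0;
}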
In most managed languages (that is, the ones with a GC), local variables that go out of scope are inaccessible and have a higher GC-priority (hence, they'll be freed first).
Now, C is not a managed language, what happens to variables that go out of scope here?
I created a small test-case in C:
#include <stdio.h>
int main(void){
int *ptr;
{
// New scope
int tmp = 17;
ptr = &tmp; // Just to see if the memory is cleared
}
//printf("tmp = %d", tmp); // Compile-time error (as expected)
printf("ptr = %d\n", *ptr);
return 0;
}
I'm using GCC 4.7.3 to compile and the program above prints 17, why? And when/under what circumstances will the local variables be freed?
The actual behavior of your code sample is determined by two primary factors: 1) the behavior is undefined by the language, 2) an optimizing compiler will generate machine code that does not physically match your C code.
For example, despite the fact that the behavior is undefined, GCC can (and will) easily optimize your code to a mere
printf("ptr = %d\n", 17);
which means that the output you see has very little to do with what happens to any variables in your code.
If you want the behavior of your code to better reflect what happens physically, you should declare your pointers volatile. The behavior will still be undefined, but at least it will restrict some optimizations.
Now, as to what happens to local variables when they go out of scope. Nothing physical happens. A typical implementation will allocate enough space in the program stack to store all variables at the deepest level of block nesting in the current function. This space is typically allocated in the stack in one shot at the function startup and released back at the function exit.
That means that the memory formerly occupied by tmp continues to remain reserved in the stack until the function exits. That also means that the same stack space can (and will) be reused by different variables having approximately the same level of "locality depth" in sibling blocks. The space will hold the value of the last variable until some other variable declared in a sibling block overwrites it. In your example nobody overwrites the space formerly occupied by tmp, so you will typically see the value 17 survive intact in that memory.
However, if you do this
#include <stdio.h>

int main(void) {
    volatile int *ptr;
    volatile int *ptrd;

    { // Block
        int tmp = 17;
        ptr = &tmp; // Just to see if the memory is cleared
    }

    { // Sibling block
        int d = 5;
        ptrd = &d;
    }

    printf("ptr = %d %d\n", *ptr, *ptrd);
    printf("%p %p\n", (void *)ptr, (void *)ptrd);
    return 0;
}
you will see that the space formerly occupied by tmp has been reused for d and its former value has been overwritten. The second printf will typically output the same pointer value for both pointers.
The lifetime of an automatic object ends at the end of the block where it is declared.
Accessing an object outside of its lifetime is undefined behavior in C.
(C99, 6.2.4p2) "If an object is referred to outside of its lifetime, the behavior is undefined. The value of a pointer becomes indeterminate when the object it points to reaches the end of its lifetime."
Local variables are allocated on the stack. They are not "freed" in the sense you are used to from GC languages or from heap allocations. They simply go out of scope; for built-in types in C nothing happens at that point (in C++, an object's destructor would run).
Accessing them beyond their scope is Undefined Behaviour. You were just lucky: no other code has overwritten that memory area... yet.
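A minimal sketch of that "yet" (still undefined behavior, purely illustrative; what actually gets printed depends entirely on the compiler and optimization level): once the stack space that held the variable is reused by another call, the stale value is typically gone.
#include <stdio.h>

// Returns the address of a local; using it after the return is undefined behavior.
static int *dangling(void) {
    int tmp = 17;
    int *p = &tmp;
    return p;                   // tmp's lifetime ends here
}

// Any call with its own locals is likely to reuse the stack space tmp occupied.
static void clobber_stack(void) {
    volatile int filler[16];
    for (int i = 0; i < 16; i++)
        filler[i] = -1;
}

int main(void) {
    int *ptr = dangling();
    int before = *ptr;          // often still 17 (undefined behavior either way)
    clobber_stack();
    int after = *ptr;           // often something else entirely
    printf("before=%d after=%d\n", before, after);
    return 0;
}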
I have a weird problem, so I thought I would ask and see if someone more experienced than me could see a solution.
I am writing a program with CUDA C/C++, and I have some constant integers that specify various things, like coordinates of the bounds of the calculation, etc. Currently I just have those things in global device memory. They are accessed by every thread in every kernel call, and so I figured that if they are in global memory, then they are never being cached or broadcast (right?). And so these little integers are taking up a lot (relatively) of overhead, and have a lot of 'read redundancy.'
So I declare in a header:
__constant__ int* number;
I include that header, and, when I do memory stuff, I do:
cutilSafeCall( cudaMemcpyToSymbol(number, &(some_host_int), sizeof(int)) );
I then pass number into all my kernels:
__global__ void magical_kernel(int* number, ...){
//and I access 'number' like this
int data_thingy = big_array[ *number ];
}
My code crashes. With number in global memory, it is just fine. I have determined that it crashes sometime upon accessing number within the kernel. This means that either I am accessing or allocating it wrong. If it holds the wrong value, it will also cause a crash, because it is used to index into arrays.
To conclude, I will ask a few questions. First, what am I doing wrong? As a bonus: is there a better way than constant memory to accomplish this task - I don't know the value of number at compile time, so a simple #define won't work. Will constant memory even speed the code up at all, or has it been cached and broadcasted all along? Could I somehow put the data in shared memory for each threadblock and have it remain in shared memory through multiple kernel calls?
There are several problems here:
You have declared number as a pointer, but never assigned it a value which is a valid address in GPU memory
You have a variable scope conflict: the argument variable int * number defined in magical_kernel is not the same variable as the __constant__ int * variable defined at compilation unit scope.
The first argument of the cudaMemcpyToSymbol call is almost certainly incorrect.
If you don't understand why either of the first two points is true, you have some revision to do on pointers and scope in C++.
Based on your response to a now deleted answer, I suspect what you are actually trying to do is this:
__constant__ int number;
__global__ void magical_kernel(...){
int data_thingy = big_array[ number ];
}
cudaMemcpyToSymbol(number, &some_host_int, sizeof(int));
i.e. number is intended to be an integer in constant memory, not a pointer, and not a kernel argument.
EDIT: here is an example which shows this in action:
#include <cstdio>
#include <cstdlib>

__constant__ int number;

__global__ void magical_kernel(int * out)
{
    out[threadIdx.x] = number;
}

int main()
{
    const int value = 314159;
    const size_t sz = size_t(32) * sizeof(int);

    cudaMemcpyToSymbol(number, &value, sizeof(int));

    int * _out, * out;
    out = (int *)malloc(sz);
    cudaMalloc((void **)&_out, sz);

    magical_kernel<<<1,32>>>(_out);
    cudaMemcpy(out, _out, sz, cudaMemcpyDeviceToHost);

    for(int i=0; i<32; i++)
        fprintf(stdout, "%d %d\n", i, out[i]);

    cudaFree(_out);
    free(out);
    return 0;
}
You should be able to run this yourself and confirm it works as advertised.
I've written a complicated Lua script which uses the LuaSocket library. It reads a list of files from disk, sorts them by date and sends them to an HTTP process. The number of files on disk is around 65K. The memory usage in Task Manager doesn't exceed 200 MB.
After quite a while the script returns:
lua: not enough memory
I print out the current GC count at various points and it never goes above 110 MB:
local freeMem = collectgarbage('count');
print("GC Count : " .. freeMem/1024 .. " MB");
This is on a 32 bit windows machine.
What's the best way to diagnose this?
All memory goes through the single lua_Alloc function. This takes the form of:
typedef void* (*lua_Alloc) (void* ud, void* ptr, size_t osize, size_t nsize);
All allocations, reallocations and frees go through this. The documentation for this can be found at this web page. You can easily write your own to track all memory operations. For example,
#include <stdlib.h>

void* MyAlloc (void* ud, void* ptr, size_t osize, size_t nsize)
{
    (void)ud; // Not used
    if (nsize == 0)
    {
        free(ptr);
        TrackSubtract(osize);
        return NULL;
    }
    else
    {
        void* p = realloc(ptr, nsize);
        TrackSubtract(osize);
        if (p) TrackAdd(nsize);
        return p;
    }
}
You can write the TrackAdd() and TrackSubtract() functions to whatever you want: output to a log; adjust a counter and so on.
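For instance, a minimal sketch of a pair of counters and a report function (the names and the peak tracking here are just illustrative, not part of the Lua API):
#include <stdio.h>

static size_t g_current = 0;   // bytes currently allocated through lua_Alloc
static size_t g_peak = 0;      // high-water mark

static void TrackAdd(size_t n)
{
    g_current += n;
    if (g_current > g_peak)
        g_peak = g_current;
}

static void TrackSubtract(size_t n)
{
    g_current -= n;
}

// Call this wherever convenient, e.g. after each lua_pcall, to watch the trend.
static void TrackReport(void)
{
    fprintf(stderr, "lua alloc: current=%lu bytes, peak=%lu bytes\n",
            (unsigned long)g_current, (unsigned long)g_peak);
}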
To use your new function you pass a pointer to it when you create the Lua state:
lua_State* L = lua_newstate(&MyAlloc,0);
The documentation to lua_newstate is found here.
Good luck.
Use perfmon to monitor your process and add counters for private bytes and virtual bytes.
When your script ends with 'not enough memory' see the value of each counter. If you see sudden peaks in your memory usage, try to add more points in which you print the memory usage.