Freeing memory allocated with newCString

As the library docs say, a CString created with newCString must be freed with the free function. I was expecting that creating the CString would take some memory and that releasing it with free would make memory usage go down, but it didn't! Here is the example code:
module Main where

import Foreign
import Foreign.C.String
import System.IO

wait = do
    putStr "Press enter" >> hFlush stdout
    _ <- getLine
    return ()

main = do
    let s = concat $ replicate 1000000 ['0'..'9']
    cs <- newCString s
    cs `seq` wait -- (1)
    free cs
    wait -- (2)
When the program stopped at (1), htop showed memory usage of around 410M - this is OK. I pressed enter and the program stopped at line (2), but memory usage was still 410M even though cs had been freed!
How is this possible? A similar program written in C behaves as it should. What am I missing here?

The issue is that free just indicates to the garbage collector that it can now collect the string. That doesn't actually force the garbage collector to run though -- it just indicates that the CString is now garbage. It is still up to the GC to decide when to run, based on heap pressure heuristics.
You can force a major collection by calling performGC straight after the call to free, which immediately reduces the memory to 5M or so.
E.g. this program:
import Foreign
import Foreign.C.String
import System.IO
import System.Mem

wait = do
    putStr "Press enter" >> hFlush stdout
    _ <- getLine
    return ()

main = do
    let s = concat $ replicate 1000000 ['0'..'9']
    cs <- newCString s
    cs `seq` wait -- (1)
    free cs
    performGC
    wait -- (2)
Behaves as expected. In the memory profile, the first red dot is the call to performGC, which immediately deallocates the string; the program then hovers around 5M until terminated.
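If you don't need to manage the buffer's lifetime yourself, withCString from Foreign.C.String marshals the string, runs an action on it, and frees the buffer afterwards. A minimal sketch (useCString is a hypothetical stand-in for whatever consumes the CString; the Haskell String s itself is still reclaimed only by the GC):

import Foreign.C.String (CString, withCString)

-- Hypothetical consumer of the marshalled string.
useCString :: CString -> IO ()
useCString _ = return ()

main :: IO ()
main = do
    let s = concat $ replicate 1000000 ['0'..'9']
    -- withCString allocates the buffer, runs the action, and frees the
    -- buffer when the action completes, so no explicit free is needed.
    withCString s useCString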


DTrace built-in variable stackdepth always returns 0

I am using DTrace to analyze my iOS app.
Everything goes well except when I try to use the built-in variable stackdepth.
I read the documentation here, which introduces the built-in variable stackdepth.
So I wrote some D code:
pid$target:::entry
{
    self->entry_times[probefunc] = timestamp;
}

pid$target:::return
{
    printf("-----------------------------------\n");
    this->delta_time = timestamp - self->entry_times[probefunc];
    printf("%s\n", probefunc);
    printf("stackDepth %d\n", stackdepth);
    printf("%d---%d\n", this->delta_time, epid);
    ustack();
    printf("-----------------------------------\n");
}
And ran it with sudo dtrace -s temp.d -c ./simple.out. The ustack() call works fine, but stackdepth always comes out as 0.
I tried this both in my iOS app and in a simple C program.
So does anybody know what's going on?
And how do I get the stack depth when a probe fires?
You want to use ustackdepth -- the user-land stack depth.
The stackdepth variable refers to the kernel thread stack depth; the ustackdepth variable refers to the user-land thread stack depth. When the traced program is executing in user-land, stackdepth will (should!) always be 0.
ustackdepth is calculated using the same logic that ustack() uses to walk the user-land stack (just as stackdepth and stack() use similar logic for the kernel stack).
This seems like a bug in the Mac / iOS implementation of DTrace to me.
However, since you're already probing every function entry and return, you could just keep a new variable self->depth and do ++ in the :::entry probe and -- in the :::return probe. This doesn't work quite right if you run it against optimized code, because any tail-call-optimized functions may look like they enter but never return. To solve that, you can turn off optimizations.
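For example, a minimal sketch of that manual counter (the printf body is illustrative, not part of your original script):

pid$target:::entry
{
    self->depth++;
}

pid$target:::return
{
    /* depth of the function we are returning from */
    printf("%s at depth %d\n", probefunc, self->depth);
    self->depth--;
}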
Also, because what you're doing looks a lot like this, I thought maybe you would be interested in the -F option:
Coalesce trace output by identifying function entry and return.
Function entry probe reports are indented and their output is prefixed
with ->. Function return probe reports are unindented and their output
is prefixed with <-.
The normal script to use with -F is something like:
pid$target::some_function:entry { self->trace = 1 }
pid$target:::entry /self->trace/ {}
pid$target:::return /self->trace/ {}
pid$target::some_function:return { self->trace = 0 }
Where some_function is the function whose execution you want to be printed. The output shows a textual call graph for that execution:
-> some_function
  -> another_function
    -> malloc
    <- malloc
  <- another_function
  -> yet_another_function
    -> strcmp
    <- strcmp
    -> malloc
    <- malloc
  <- yet_another_function
<- some_function

Fortran deallocate array but memory not released to the OS

I have the following question: how do I deallocate memory for an array inside a derived type? For something like a%b%c,
how do I deallocate c? The specific problem is below (the compilers I tried are gfortran from gcc 4.4.7 and ifort 18.0.1; the OS is Linux):
module grist_domain_types
  implicit none
  public :: aaa

  type bbb
    real (8), allocatable :: c(:)
  end type bbb

  type aaa
    type(bbb), allocatable :: b(:)
  end type aaa
end module grist_domain_types

program main
  use grist_domain_types
  type(aaa) :: a
  integer(4) :: time, i

  time = 20
  allocate(a%b(1:100000000))
  call sleep(time) !--------------1
  do i = 1, 100000000
    allocate(a%b(i)%c(1:1))
  enddo
  call sleep(time) !--------------2
  do i = 1, 100000000
    deallocate(a%b(i)%c)
  enddo
  call sleep(time) !--------------3
  deallocate(a%b)
  call sleep(time) !--------------4
end program
First,"gfortran main.F90 -o main" to compile the program, and run this program. Then I use top -p processID to see memory. When the program is executed to 1, the memory is 4.5G. When the program is executed to 2, the memory is 7.5G. When the program is executed to 3, the memory is also 7.5G(but I think is 4.5G). When the program is executed to 4, the memory is 3G(I think is 0G or close to 0G). So deallocate(a%b(i)%c) does not seem to work. However, I use valgrind to see memory. the memory of this program is all deallocate...I used ifort and gfortran. This problem happens no matter which compiler I use. How to explain this question? I allocate many c array in this way,the program will finally crash due to insufficient memory. And how to solve it?
Take a look at this post from the Intel forum. There are two important pieces of information in there:
(From Doctor Fortran):
When you do a DEALLOCATE, the memory that was allocated returns to the pool used by the memory allocator (on Linux and OS X this is the same as C's malloc/free). The memory is not released back to the OS - it is very rare that this would even be possible. What often happens is that the pattern of allocations and deallocations causes virtual memory to be fragmented, so that while the total available space may be high, there may not be sufficient contiguous space to allocate a large item. Unlike with disks, there is no way to "defrag" memory.
(From Jim Dempsey)
See if you can deallocate the memory in the reverse order in which it was allocated. This can reduce memory fragmentation.
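Applied to the code in the question, that suggestion would look something like this (a sketch only; it reduces fragmentation but still does not force the allocator to return pages to the OS):

! deallocate the inner c arrays in the reverse order of allocation,
! then the outer b array
do i = 100000000, 1, -1
  deallocate(a%b(i)%c)
enddo
deallocate(a%b)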
You may also refer to this other Intel post:
During the program run, the Fortran runtime library will manage your heap. Yes, if data is DEALLOCATED, the runtime may choose to wait to release that memory. It's an optimization - if you do another ALLOCATE with the same size it will just reuse those pages. If the heap starts to run low, it will do some collection but not until it's absolutely necessary.
Also, let me add something: check whether there aren't other objects dynamically created in scope, like automatic arrays or temporary array copies. Those could be demanding memory that is freed only when they go out of scope.
Summing up, even if 'top' says the memory is still in use, you should start to worry only if your program starts to crash or if Valgrind shows something weird.
I modified your program (to see where it was up to) and ran it on Windows 7 / gfortran 7.2.0. It does not demonstrate the memory retention you report, as memory reverts to 13 MB. Contrary to my comment, memory demand did not change during initialisation of c.
module grist_domain_types
  implicit none
  public :: aaa

  type bbb
    real (8), allocatable :: c(:)
  end type bbb

  type aaa
    type(bbb), allocatable :: b(:)
  end type aaa
end module grist_domain_types

program main
  use grist_domain_types
  type(aaa) :: a
  integer(4), parameter :: million = 1000000
  integer(4) :: n = 100*million
  integer(4) :: time = 5, i, pass

  do pass = 1, 5
    write (*,*) ' go #', pass
    allocate(a%b(1:n))
    write (*,*) 'allocate b'
    call sleep(time) !--------------1
    write (*,*) ' go'
    do i = 1, n
      allocate(a%b(i)%c(1:1))
    enddo
    write (*,*) 'allocate c'
    call sleep(time) !--------------2
    write (*,*) ' go'
    do i = 1, n
      a%b(i)%c = real(i)
    enddo
    write (*,*) 'use c'
    call sleep(time) !--------------2a
    write (*,*) ' go'
    do i = 1, n
      deallocate(a%b(i)%c)
    enddo
    write (*,*) 'deallocate c'
    call sleep(time) !--------------3
    write (*,*) ' go'
    deallocate(a%b)
    write (*,*) 'deallocate b'
    call sleep(time) !--------------4
  end do
  write (*,*) ' done : exit ?'
  read (*,*) i
end program
Edit: I have wrapped the test in the do pass ... loop to repeat the memory demands. This shows no memory leakage for this Fortran program. I used Task Manager to track memory usage, both for this program and for the OS. Your particular OS and Fortran compiler may behave differently.

Large VIRT memory in Fortran

I have a large Fortran/MPI code that, when running, uses a very large amount of VIRT memory (~20G), although the actual memory used (500 MB) is quite modest.
How can I profile the code to understand which part produces this huge VIRT memory?
At this stage, I'm even happy to use a brute-force approach.
What I have tried is to put sleep statements in the code and record the memory usage through "top", to try to pinpoint by bisection where the big allocations are.
However, this does not work, as the sleep call puts the memory usage to 0. Is there a way to freeze the code while keeping the current memory usage?
PS: I have tried Valgrind, but the code being so large, Valgrind never finished. Is there an alternative to Valgrind that is "easy" to use?
Thank you,
Sam
A solution for this is the following subroutine, modified (to read VIRT memory) from Track memory usage in Fortran 90:
subroutine system_mem_usage(valueRSS)
  use ifport  ! if on Intel compiler
  implicit none
  integer, intent(out) :: valueRSS

  character(len=200) :: filename = ' '
  character(len=80)  :: line
  character(len=8)   :: pid_char = ' '
  integer :: pid
  logical :: ifxst

  valueRSS = -1  ! return negative number if not found

  !--- get process ID
  pid = getpid()
  write(pid_char, '(I8)') pid
  filename = '/proc/'//trim(adjustl(pid_char))//'/status'

  !--- read system file
  inquire(file=filename, exist=ifxst)
  if (.not. ifxst) then
    write (*,*) 'system file does not exist'
    return
  endif

  open(unit=100, file=filename, action='read')
  do
    read (100, '(a)', end=120) line
    if (line(1:7) .eq. 'VmSize:') then
      read (line(8:), *) valueRSS
      exit
    endif
  enddo
120 continue
  close(100)

  return
end subroutine system_mem_usage
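A minimal usage sketch, assuming the subroutine above is in scope (/proc/<pid>/status reports VmSize in kB):

integer :: vmsize

call system_mem_usage(vmsize)          ! call this before and after a suspect section
write (*,*) 'VmSize (kB): ', vmsize    ! compare readings to bisect the big allocation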

How to get a pointer value in Haskell?

I wish to manipulate data on a very low level.
Therefore I have a function that receives a virtual memory address as an integer and "does stuff" with this memory address. I interfaced this function from C, so it has the type (CUInt -> a).
The memory I want to link is a Word8 in a file. Sadly, I have no idea how to access the pointer value to that Word8.
To be clear, I do not need the value of the Word8; I need the virtual memory address itself, i.e. the value of the pointer to it.
For the sake of a simple example, say you want to add an offset to the pointer.
Front matter:
module Main where
import Control.Monad (forM_)
import Data.Char (chr)
import Data.Word (Word8)
import Foreign.ForeignPtr (ForeignPtr, withForeignPtr)
import Foreign.Ptr (Ptr, plusPtr)
import Foreign.Storable (peek)
import System.IO.MMap (Mode(ReadOnly), mmapFileForeignPtr)
Yes, you wrote that you don't want the value of the Word8, but I've retrieved it with peek to demonstrate that the pointer is valid. You might be tempted to return the Ptr from inside withForeignPtr, but the documentation warns against that:
Note that it is not safe to return the pointer from the action and use it after the action completes. All uses of the pointer should be inside the withForeignPtr bracket. The reason for this unsafeness is the same as for unsafeForeignPtrToPtr below: the finalizer may run earlier than expected, because the compiler can only track usage of the ForeignPtr object, not a Ptr object made from it.
The code is straightforward:
doStuff :: ForeignPtr Word8 -> Int -> IO ()
doStuff fp i =
    withForeignPtr fp $ \p -> do
        let addr = p `plusPtr` i
        val <- peek addr :: IO Word8
        print (addr, val, chr $ fromIntegral val)
To approximate “a Word8 in a File” from your question, the main program memory-maps a file and uses that buffer to do stuff with memory addresses.
main :: IO ()
main = do
    (p, offset, size) <- mmapFileForeignPtr path mode range
    forM_ [0 .. size-1] $ \i -> do
        doStuff p (offset + i)
  where
    path = "/tmp/input.dat"
    mode = ReadOnly
    range = Nothing
    -- range = Just (4,3)
Output:
(0x00007f1b40edd000,71,'G')
(0x00007f1b40edd001,117,'u')
(0x00007f1b40edd002,116,'t')
(0x00007f1b40edd003,101,'e')
(0x00007f1b40edd004,110,'n')
(0x00007f1b40edd005,32,' ')
(0x00007f1b40edd006,77,'M')
(0x00007f1b40edd007,111,'o')
(0x00007f1b40edd008,114,'r')
(0x00007f1b40edd009,103,'g')
(0x00007f1b40edd00a,101,'e')
(0x00007f1b40edd00b,110,'n')
(0x00007f1b40edd00c,33,'!')
(0x00007f1b40edd00d,10,'\n')
You are probably looking for ptrToIntPtr and probably fromIntegral to make it a CUInt.
Note that a CUInt cannot represent a pointer on all platforms, though.
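A minimal sketch of that conversion (ptrAsCUInt is just an illustrative helper name):

import Foreign.C.Types (CUInt)
import Foreign.Ptr (Ptr, ptrToIntPtr)

-- Convert a pointer to the CUInt the C function expects. As noted above,
-- CUInt may be too narrow to hold a pointer on 64-bit platforms.
ptrAsCUInt :: Ptr a -> CUInt
ptrAsCUInt = fromIntegral . ptrToIntPtr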

Erlang: erl shell hangs after building a large data structure

As suggested in answers to a previous question, I tried using Erlang proplists to implement a prefix trie.
The code seems to work decently well... But, for some reason, it doesn't play well with the interactive shell. When I try to run it, the shell hangs:
> Trie = trie:from_dict(). % Creates a trie from a dictionary
% ... the trie is printed ...
% Then nothing happens
I see the new trie printed to the screen (ie, the call to trie:from_dict() has returned), then the shell just hangs. No new > prompt comes up and ^g doesn't do anything (but ^c will eventually kill it off).
With a very small dictionary (the first 50 lines of /usr/share/dict/words), the hang only lasts a second or two (and the trie is built almost instantly)... But it seems to grow exponentially with the size of the dictionary (100 words takes 5 or 10 seconds, I haven't had the patience to try larger wordlists). Also, as the shell is hanging, I notice that the beam.smp process starts eating up a lot of memory (somewhere between 1 and 2 gigs).
So, is there anything obvious that could be causing this shell hang and incredible memory usage?
Some various comments:
I have a hunch that the garbage collector is at fault, but I don't know how to profile or create an experiment to test that.
I've tried profiling with eprof and nothing obvious showed up.
Here is my "add string to trie" function:
add([], Trie) ->
    [ stop | Trie ];
add([Ch|Rest], Trie) ->
    SubTrie = proplists:get_value(Ch, Trie, []),
    NewSubTrie = add(Rest, SubTrie),
    NewTrie = [ { Ch, NewSubTrie } | Trie ],
    % Arbitrarily decide to compress key/value list once it gets
    % more than 60 pairs.
    if
        length(NewTrie) > 60 ->
            proplists:compact(NewTrie);
        true ->
            NewTrie
    end.
The problem is (amongst others? -- see my comment) that you are always adding a new {Ch, NewSubTrie} tuple to your proplist Tries, no matter whether Ch already existed or not.
Instead of
NewTrie = [ { Ch, NewSubTrie } | Trie ]
you need something like:
NewTrie = lists:keystore(Ch, 1, Trie, {Ch, NewSubTrie})
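With that change applied, add/2 would look roughly like this (a sketch; the compaction step from the original is left out):

add([], Trie) ->
    [stop | Trie];
add([Ch|Rest], Trie) ->
    SubTrie = proplists:get_value(Ch, Trie, []),
    NewSubTrie = add(Rest, SubTrie),
    %% keystore/4 replaces an existing {Ch, _} entry instead of prepending
    %% a duplicate, so the proplist no longer grows without bound
    lists:keystore(Ch, 1, Trie, {Ch, NewSubTrie}).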
You're not really building a trie here. Your end result is effectively a randomly ordered proplist of proplists that requires full scans at each level when walking the list. Tries are typically implied ordering based on position in the array (or list).
Here's an implementation that uses tuples as the storage mechanism. Calling set only rebuilds the root and direct path tuples.
(Note: you would probably have to make the pair a triple (adding a size) to make delete work with any efficiency.)
I believe Erlang tuples are really just arrays (I thought I read that somewhere), so lookup should be super fast, and modify is probably straightforward. Maybe this is faster with the array module, but I haven't really played with it much to know.
This version also stores an arbitrary value, so you can do things like:
1> c(trie).
{ok,trie}
2> trie:get("ab",trie:set("aa",bar,trie:new("ab",foo))).
foo
3> trie:get("abc",trie:set("aa",bar,trie:new("ab",foo))).
undefined
4>
Code (the entire module); note 2: it assumes lower-case, non-empty string keys.
-module(trie).
-compile(export_all).

%% 26 pairs, to avoid the cost of calculating a new level at runtime
-define(NEW, {
    {undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
    {undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
    {undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
    {undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
    {undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
    {undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
    {undefined,nodepth},{undefined,nodepth}
}).

-define(POS(Ch), Ch - $a + 1).

new(Key, V) -> set(Key, V, ?NEW).

set([H], V, Trie) ->
    Pos = ?POS(H),
    {_, SubTrie} = element(Pos, Trie),
    setelement(Pos, Trie, {V, SubTrie});
set([H|T], V, Trie) ->
    Pos = ?POS(H),
    {SubKey, SubTrie} = element(Pos, Trie),
    case SubTrie of
        nodepth -> setelement(Pos, Trie, {SubKey, set(T, V, ?NEW)});
        SubTrie -> setelement(Pos, Trie, {SubKey, set(T, V, SubTrie)})
    end.

get([H], Trie) ->
    {Val, _} = element(?POS(H), Trie),
    Val;
get([H|T], Trie) ->
    case element(?POS(H), Trie) of
        {_, nodepth} -> undefined;
        {_, SubTrie} -> get(T, SubTrie)
    end.
