I'm trying to allocate memory with my "program" - just to allocate it and keep it resident - for testing purposes. When I run it on macOS, Activity Monitor shows it allocating 1.6 GB, but when I compile it for Linux and run it there, it does nothing: it prints the message, but the RAM isn't being used on the machine. Am I doing it wrong? Is there a better way? Here is my code:
package main

import (
    "fmt"
    "time"
    "unsafe"
)

func main() {
    var buffer [100 * 1024 * 1024]string
    fmt.Printf("The size of the buffer is: %d bytes\n", unsafe.Sizeof(buffer))
    time.Sleep(300 * time.Second)
}
At first I used byte as the array's element type, but that did not work even on my Mac.
There's nothing in your code that actually requires the memory. The compiler is perfectly within its rights to optimize the whole allocation away, and even if it doesn't, the OS will not commit the memory - you never assign anything, so the untouched pages are most likely just backed by a shared zero page.
I don't know the subtle differences between whatever Linux you're using and whatever macOS you're using, so little can be said with certainty. It may well be that the tool you use to check memory on your Linux machine shows only committed memory, while on macOS you're looking at all virtual memory, or there may be other subtle differences. In any case, ever since compilers became smart and PCs got virtual memory, it has been getting harder and harder to get meaningful benchmarks - the tools we work with are usually smart enough to avoid unnecessary waste, and most naive benchmarks are pretty much exactly that kind of waste.
Benchmarking is hard.
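To actually see committed memory grow, the program has to write to the allocation. Below is a minimal sketch of the same experiment in C (the language doesn't matter much here, since the behaviour belongs to the OS): allocate a large buffer, pause, then touch every page and watch resident memory jump. The 1 GiB size and the sleep durations are arbitrary choices for illustration; in the original Go program, looping over the array and assigning to every element should have a similar effect.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    size_t size = (size_t)1 << 30; /* 1 GiB, an arbitrary test size */
    char *buf = malloc(size);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }

    /* At this point most OSes have only reserved address space; resident
       memory has barely grown, and reads would be served from zero pages. */
    printf("allocated %zu bytes, not yet touched\n", size);
    sleep(10);

    /* Writing to every page forces the kernel to commit real memory. */
    memset(buf, 1, size);
    printf("touched every page; resident memory should now be about 1 GiB\n");
    sleep(30);

    free(buf);
    return 0;
}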
Related
I have a 32-bit Delphi application running with the /LARGEADDRESSAWARE flag on. This allows it to allocate up to 4 GB of address space on a 64-bit system.
I am using threads (in a pool) to process files, where each task loads a file into memory. When multiple threads are running (multiple files being loaded), at some point the EOutOfMemory exception hits me.
What would be the proper way to get the available address space so I can check if I have enough memory before processing the next file?
Something like:
if TotalMemoryUsed {from GetMemoryManagerState} + FileSize <
"AvailableUpToMaxAddressSpace" then NoOutOfMemory
I've tried using TMemoryStatusEx.ullAvailVirtual for AvailableUpToMaxAddressSpace, but the results are not correct (sometimes 0, sometimes greater than what I actually have).
I don't think that you can reasonably and robustly expect to be able to predict ahead of time whether or not memory allocations will fail. At the very least you would probably need to write your own memory allocator that was dedicated to serving your application, and have a very strong understanding of the heap allocation requirements of your process.
Realistically the tractable way forward for you is to break free from the shackles of 32 bit address space. That is your fundamental problem. The way to escape from 32 bit address space is to compile for 64 bit. That requires XE2 or later.
You may need to continue supporting 32 bit versions of your application because you have users that are still on 32 bit systems. The modern versions of Delphi have 32 bit and 64 bit compilers and it is quite simple to write code that will compile and behave correctly under both scenarios.
For your 32 bit versions you are less likely to run into memory problems anyway because 32 bit systems tend to run on older hardware with fewer processors. In turn this means less demand on memory space because your thread pool tends to be smaller.
If you encounter machines with large enough processor counts to cause out of memory problems then one very simple and pragmatic approach is to give the user a mechanism to limit the number of threads used by your application's thread pool.
Since there are other processes running on the target system, even if the necessary infrastructure were available it would be of no use: there is no guarantee that another process does not allocate the memory after you have checked its availability and before you actually allocate it. The right thing to do is to write code that fails gracefully and catches the EOutOfMemory exception when it appears. Use it as a sign to stop creating more threads until some of the existing ones have terminated.
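In Delphi terms that means wrapping the allocation in try..except and treating EOutOfMemory as back-pressure rather than a fatal error. Sketched here in C with a plain malloc just to show the shape of the logic (the buffer size and retry delay are made up, and a real thread pool would park the task rather than sleep in a loop):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Try to allocate; on failure, back off and retry instead of crashing.
   In the real application, "back off" would mean deferring the task until
   one of the running threads finishes and frees its buffer. */
static void *allocate_with_backoff(size_t size) {
    for (;;) {
        void *p = malloc(size);
        if (p != NULL)
            return p;
        fprintf(stderr, "allocation of %zu bytes failed, backing off\n", size);
        sleep(1); /* arbitrary retry delay for this sketch */
    }
}

int main(void) {
    void *buf = allocate_with_backoff((size_t)200 * 1024 * 1024); /* hypothetical file size */
    /* ... load and process the file using buf ... */
    free(buf);
    return 0;
}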
Delphi is 32-bit, so you can't allocate memory beyond a 32-bit address space.
Take a look at this:
What is a safe Maximum Stack Size or How to measure use of stack?
I'm trying to use mmap to read and play audio files on iOS. It works fine for files up to about 400 MB. But when I try a 500 MB file, I get an ENOMEM error.
#import <Foundation/Foundation.h> // for NSBundle / NSString
#include <sys/mman.h>
#include <errno.h>
#include <stdio.h>

const char *path = [[[NSBundle mainBundle] pathForResource: @"test500MB" ofType: @"wav"] cStringUsingEncoding: [NSString defaultCStringEncoding]];
FILE *f = fopen( path, "rb" );
fseek( f, 0, SEEK_END );
int len = (int)ftell( f );
fseek( f, 0, SEEK_SET );
void *raw = mmap( 0, len, PROT_READ, MAP_SHARED, fileno( f ), 0 );
if ( raw == MAP_FAILED ) {
    printf( "MAP_FAILED. errno=%d", errno ); // Here it says 12, which is ENOMEM.
}
Why?
I'd be happy with an answer like "700MB is the virtual memory limit, but sometimes the address space is fragmented, so you DO get 700MB but in smaller chunks". (This is just speculation, I still need an answer)
The Apple doc page about virtual memory says:
Although OS X supports a backing store, iOS does not. In iPhone applications, read-only data that is already on the disk (such as code pages) is simply removed from memory and reloaded from disk as needed.
which seems to confirm that mmap should work for blocks larger than the physical memory but still doesn't explain why I'm hitting such a low limit.
Update
This answer is interesting, but 500MB is well below the 700MB limit it mentions.
This discussion mentions contiguous memory. So memory fragmentation could be a real issue?
I'm using iPod Touch 4th generation which has 256MB physical memory.
The point of my research is to see if there's a better way of managing memory when loading read-only data from files than "keep allocating until you get a memory warning". mmap seemed like a nice way to solve this...
Update 2
I expect mmap to work perfectly with the new 64bit version of iOS. Will test once I get my hands on a 64bit device.
After further investigation and reading this excellent blog post by John Carmack, here are my conclusions:
700MB is the limit for virtual memory on iOS (as of 2012, 32-bit iOS)
It may or may not be available in a single block; this depends on device state and app behaviour
Therefore, to reliably mmap 700MB worth of file data it is necessary to break it into smaller chunks.
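A rough sketch of that chunked approach in C, assuming the data only needs to be visible one window at a time: map a fixed-size window, process it, unmap it, then advance the offset. The 64 MB window size, the file name and the process() placeholder are made up for illustration; any window size that is a multiple of the page size satisfies mmap's offset-alignment requirement.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define CHUNK_SIZE (64 * 1024 * 1024) /* arbitrary; keeps offsets page-aligned */

static void process(const unsigned char *data, size_t len) {
    (void)data; (void)len; /* placeholder for whatever is done with the audio data */
}

int main(void) {
    const char *path = "test500MB.wav"; /* hypothetical path */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    for (off_t off = 0; off < st.st_size; off += CHUNK_SIZE) {
        size_t len = (size_t)(st.st_size - off < CHUNK_SIZE ? st.st_size - off : CHUNK_SIZE);
        void *chunk = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, off);
        if (chunk == MAP_FAILED) { perror("mmap"); return 1; }

        process(chunk, len);

        /* Unmap before moving on, so the address space never has to hold
           more than one window at a time. */
        munmap(chunk, len);
    }

    close(fd);
    return 0;
}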
I don't have an answer for you, but I did test your code on my iPhone 5 running 6.0.1 and mmap succeeded on a 700MB ISO file. So I would start with other factors and assume mmap is working properly. Perhaps the error you're getting back is not really due to memory, or perhaps memory on the device itself is somehow exhausted to the point where mmap fails; try rebooting the device. Depending on your version of iOS, I also wonder whether seeking to the end of the file might be causing mmap to try to map the entire file in; you might try cleaning that up and using stat instead to determine the file size, or closing and re-opening the file descriptor before mapping. These are all just ideas; if I could reproduce your error, I'd be glad to help fix it.
Use NSData and don't touch mmap directly here.
To get the advantage of faulting reads, use NSDataReadingMapped.
NSData also frees its bytes in low-memory situations.
Normally the amount of physical memory available has nothing to do with whether or not you are able to mmap a file. This is after all VIRTUAL memory we are talking about. The problem on iOS is that, at least according to the iOS App Programming Guide, the virtual memory manager only swaps out read-only sections... In theory that would mean that you are not only constrained by the amount of available address space, but you are also constrained by the amount of available RAM, if you are mapping with anything other than PROT_READ.
See http://developer.apple.com/library/ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/TheiOSEnvironment/TheiOSEnvironment.html
Nevertheless, it may well be that the problem you are having is the lack of a contiguous region large enough for your mapping in the virtual address space. As far as I can find, Apple does not publish the upper memory limit of a user-mode process. Typically the upper region of the address space is reserved for the kernel, so you may have considerably less than the full 32-bit range to work with in user mode.
What you can't see without dumping a memory map in the debugger is that there are many shared libraries (dylibs) loaded into the process address space (I counted over 100 in a simple sample application from Apple). Each of these is also mmap'd in, and each fragments the available address space.
In gdb you should be able to dump the memory mappings with 'info proc mappings'. Unfortunately in lldb the only equivalent I've been able to find is 'image list', which only shows shared library mappings, not data mmap mappings.
Using the debugger in this way you should be able to determine whether the address space has a contiguous block large enough for the data you are trying to map, though it may take some work to discover the upper limit (Apple should publish this!).
I'm developing a logger/sniffer using Delphi. During operation I get huge amounts of data, which during stress operations can accumulate to around 3 GB.
On certain computers, when we get to those levels, the application stops functioning and sometimes throws exceptions.
Currently I'm using GetMem function to allocate the pointer to each message.
Is there a better way to allocate the memory so I could minimize the chances for failure? Keep in mind that I can't limit the size to a hard limit.
What do you think about using HeapAlloc, VirtualAlloc or maybe even mapped files? Which would be better?
Thank you.
Your fundamental problem is the hard address space limit of 4GB for 32 bit processes. Since you are hitting problems at 3GB I can only presume that you are using /LARGEADDRESSAWARE running on 64 bit Windows or 32 bit Windows with the /3GB boot switch.
I think you have a few options, including but not limited to the following:
Use less memory. Perhaps you can process in smaller chunks or push some of the memory to disk.
Use 64 bit Delphi (just released) or FreePascal. This relieves you of the address space constraint but constrains you to 64 bit versions of Windows.
Use memory mapped files. On a machine with a lot of memory this is a way of getting access to the OS memory cache. Memory mapped files are not for the faint-hearted (a minimal sketch using the raw Win32 calls is shown below).
I can't advise definitively on a solution since I don't know your architecture but in my experience, reducing your memory footprint is often the best solution.
Using a different allocator is likely to make little difference. Yes, it is true that there are low-fragmentation allocators, but they won't really solve your problem; all they could do is make it slightly less likely to arise.
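If you do go down the memory-mapped file route suggested above, the underlying Win32 calls (which Delphi can invoke directly through its Windows unit) look roughly like this. A minimal sketch in C that maps a single 64 MB window of a capture file for writing; the file name, window size and the lone strcpy are assumptions for illustration, and a real logger would unmap a full window and map the next one as data accumulates:

#include <windows.h>
#include <stdio.h>
#include <string.h>

#define WINDOW_SIZE (64 * 1024 * 1024) /* arbitrary; view offsets beyond the start of the
                                          file must be multiples of the 64 KB allocation
                                          granularity */

int main(void) {
    /* Back the data with a file on disk instead of GetMem'd heap blocks. */
    HANDLE file = CreateFileA("capture.log", GENERIC_READ | GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) { printf("CreateFile failed\n"); return 1; }

    /* Creating the mapping extends the file to WINDOW_SIZE bytes. */
    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE, 0, WINDOW_SIZE, NULL);
    if (mapping == NULL) { printf("CreateFileMapping failed\n"); return 1; }

    char *view = MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, WINDOW_SIZE);
    if (view == NULL) { printf("MapViewOfFile failed\n"); return 1; }

    /* Messages written into the view are paged out to the file by the OS,
       so physical memory pressure stays bounded. */
    strcpy(view, "first captured message\n");

    UnmapViewOfFile(view);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}

Delphi's Windows unit exposes these same functions, so translating the sketch is mostly mechanical.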
Hi folks and thanks for your time in advance.
I'm currently extending our C# test framework to monitor the memory consumed by our application. The intention is that a bug is raised if memory consumption jumps significantly on a new build, since resources are always tight.
I'm using System.Diagnostics.Process.GetProcessesByName and then checking the PrivateMemorySize64 value.
While developing the new test, using the same build of the application for consistency, I've seen it consume differing amounts of memory despite supposedly executing exactly the same code.
So my question is: once the application has launched, fully loaded and, in this case, sitting in its idle state, and hence supposedly in an identical state from run to run, can I expect the private bytes consumed to be identical from run to run?
I should clarify that I need memory usage to be consistent, because any degree of variance reduces the effectiveness of the test: a tolerance would have to be introduced, which I'd like to avoid.
So...
1) Should the memory usage be 100% consistent, presuming the application is behaving consistently? This was my expectation.
or
2) Is there any degree of variance in the private bytes reported by Windows, or in the memory it allocates when an app requests it?
Currently, if the answer is that memory consumption should be consistent, as I was expecting, then the issue lies in our app actually requesting differing amounts of memory.
Many thanks
H
Almost everything in .NET uses the runtime's garbage collector, and when exactly it runs and how much memory it frees depends on a lot of factors, many of which are out of your hands. For example, when another program needs a lot of memory, and you have a lot of collectable memory at hand, the GC might decide to free it now, whereas when your program is the only one running, the GC heuristics might decide it's more efficient to let collectable memory accumulate a bit longer. So, short answer: No, memory usage is not going to be 100% consistent.
OTOH, if you have really big differences between runs (say, a few megabytes on one run vs. half a gigabyte on another), you should get suspicious.
If the program is deterministic (like all embedded programs should be), then yes. In an OS environment you are very unlikely to get the same figures due to memory fragmentation and numerous other factors.
Update:
Just noted this is a C# app, so no, but the numbers should be relatively close (+/- 10% or less).
I have a VPS without very much memory (256 MB) which I am trying to use for Common Lisp development with SBCL+Hunchentoot, to write some simple web apps. A large amount of memory appears to be getting used without doing anything particularly complex, and after a while of serving pages it runs out of memory and either goes crazy using all the swap or (if there is no swap) just dies.
So I need help to:
Find out what is using all the memory (if it's libraries or me, especially)
Limit the amount of memory which SBCL is allowed to use, to avoid massive quantities of swapping
Handle things cleanly when memory runs out, rather than crashing (since it's a web-app I want it to carry on and try to clean up).
I assume the first two are reasonably straightforward, but is the third even possible?
How do people handle out-of-memory or constrained memory conditions in Lisp?
(Also, I note that a 64-bit SBCL appears to use literally twice as much memory as 32-bit. Is this expected? I can run a 32-bit version if it will save a lot of memory)
To limit the memory usage of SBCL, use the --dynamic-space-size option (e.g., sbcl --dynamic-space-size 128 will limit memory usage to 128 MB).
To find out what is using memory, you can call (room) (the function that reports how much memory is being used) at different times: at startup, after all libraries are loaded, and then during work (of course, call (sb-ext:gc :full t) before room so you don't measure garbage that has not yet been collected).
Also, it is possible to use the SBCL profiler to measure memory allocation.
Find out what is using all the memory (if it's libraries or me, especially)
Attila Lendvai has some SBCL-specific code to find out where an allocated object comes from. Refer to http://article.gmane.org/gmane.lisp.steel-bank.devel/12903 and write him a private mail if needed.
Be sure to try another implementation, preferably with a precise GC (like Clozure CL) to ensure it's not an implementation-specific leak.
Limit the amount of memory which SBCL is allowed to use, to avoid massive quantities of swapping
Already answered by others.
Handle things cleanly when memory runs out, rather than crashing (since it's a web-app I want it to carry on and try to clean up).
256MB is tight, but anyway: schedule a recurring (maybe 1s) timed thread that checks the remaining free space. If the free space is less than X then use exec() to replace the current SBCL process image with a new one.
If you don't have any type declarations, I would expect 64-bit Lisp to take twice the space of a 32-bit one. Even a plain (small) int will use a 64-bit chunk of memory. I don't think it'll use less than a machine word, unless you declare it.
I can't help with #2 and #3, but if you figure out #1, I suspect it won't be a problem. I've seen SBCL/Hunchentoot instances running for ages. If I'm using an outrageous amount of memory, it's usually my own fault. :-)
I would not be surprised by a 64-bit SBCL using twice the memory, as it will probably use a 64-bit cell rather than a 32-bit one, but I couldn't say for sure without actually checking.
Typical things that keep memory hanging around for longer than expected are no-longer-useful references that still have a path to the root allocation set (hash tables are, I find, a good way of letting these things linger). You could try interspersing explicit calls to GC in your code and make sure to (as far as possible) not store things in global variables.