Cannot free memory once occupied by bytes.Buffer

I receive bytes of compressed ASCII text in compressedbytes of type []byte. The problem I face is that the following procedure allocates a lot of memory that does not get freed after the function returns; it stays occupied for the whole runtime of the program.
b := bytes.NewReader(compressedbytes)
r, err := zlib.NewReader(b)
if err != nil {
    panic(err)
}
cleartext, err = ioutil.ReadAll(r)
if err != nil {
    panic(err)
}
I noticed that the type in use is bytes.Buffer, and this type has Reset() and Truncate() methods, but neither of them frees the memory once it has been occupied.
The documentation of Reset() states the following:
Reset resets the buffer to be empty, but it retains the underlying storage for use by future writes. Reset is the same as Truncate(0).
How can I unset the buffer and free the memory again?
My program needs about 50 MB of memory during a run that takes 2 h. When I import strings that are zlib-compressed, the program needs 200 MB of memory.
Thanks for your help.
=== Update
I even created a separate function for the decompression and call the garbage collector manually with runtime.GC() after the program returns from that function without success.
// unpack decompresses zlib compressed bytes
func unpack(packedData []byte) []byte {
    b := bytes.NewReader(packedData)
    r, err := zlib.NewReader(b)
    if err != nil {
        panic(err)
    }
    cleartext, err := ioutil.ReadAll(r)
    if err != nil {
        panic(err)
    }
    r.Close()
    return cleartext
}

Some things to clear up first. Go is a garbage-collected language, which means that memory allocated for and used by variables is automatically freed by the garbage collector when those variables become unreachable (if you still have another pointer to the value, it still counts as "reachable").
Freed memory does not mean it is returned to the OS. It means the memory can be reclaimed and reused for other variables when the need arises, so from the operating system's point of view you won't see memory usage drop right away just because some variable became unreachable and the garbage collector detected this and freed the memory it used.
The Go runtime will, however, return memory to the OS if it is not used for some time (usually around 5 minutes). If memory usage grows again during this period (even if it then shrinks), the memory will most likely not be returned to the OS.
If you wait some time and do not allocate memory again, the freed memory will eventually be returned to the OS (not all of it, but unused "big chunks" will be). If you can't wait for that to happen, you can call debug.FreeOSMemory() to force this behavior:
FreeOSMemory forces a garbage collection followed by an attempt to return as much memory to the operating system as possible. (Even if this is not called, the runtime gradually returns memory to the operating system in a background task.)
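For example, a minimal sketch based on the unpack function from the update above (the process function and its body are hypothetical, added only for illustration; they are not part of the original program):

import (
    "fmt"
    "runtime/debug"
)

// process is a hypothetical caller of unpack.
func process(packedData []byte) {
    cleartext := unpack(packedData)
    fmt.Println(len(cleartext)) // stand-in for whatever consumes the decompressed data

    cleartext = nil      // drop the last reference so the GC can reclaim the buffer
    debug.FreeOSMemory() // force a GC and return as much memory to the OS as possible
}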
Check out this somewhat old but really informative question and its answers:
Go 1.3 Garbage collector not releasing server memory back to system

It will eventually get released when nothing references it anymore; Go has a rather decent GC.

Related

FileHandle doesn't free memory in iOS

I am uploading a large file to a server. The file is split into chunks. I see high memory consumption when I call FileHandle.readData(ofLength:): the memory for each chunk is not deallocated, and after some time I get an out-of-memory exception and a crash.
The profiler shows the problem in FileHandle.readData(ofLength:) (see screenshots)
func nextChunk(then: @escaping (Data?) -> Void) {
    self.previousOffset = self.fileHandle.offsetInFile
    autoreleasepool {
        let data = self.fileHandle.readData(ofLength: Constants.chunkLength)
        if data == Constants.endOfFile {
            then(nil)
        } else {
            then(data)
            self.currentChunk += 1
        }
    }
}
The allocations tool is simply showing you where the unreleased memory was initially allocated. It is up to you to figure out what you subsequently did with that object and why it was not released in a timely manner. None of the profiling tools can help you with that. They can only point to where the object was originally allocated, which is only the starting point for your research.
One possible problem might be if you are creating Data-based URLRequest objects. That means that while the associated URLSessionTask requests are in progress, the Data is held in memory. If so, you might consider using a file-based uploadTask instead, which avoids holding the Data associated with the body of the request in memory.
Once you start using a file-based uploadTask, it raises the question of whether you need or want to break the file into chunks at all. A file-based uploadTask, even when sending very large assets, requires very little RAM at runtime. And at some future point you may even consider using a background session, so uploads continue even if the user leaves the app. The combination of these features may obviate the chunking altogether.
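For illustration, a minimal sketch of a file-based upload (uploadURL and fileURL are hypothetical placeholders for your endpoint and the file on disk):

// Hypothetical example: the request body is streamed from disk, so the whole
// file is never held in memory as a Data object.
var request = URLRequest(url: uploadURL)
request.httpMethod = "PUT"

let task = URLSession.shared.uploadTask(with: request, fromFile: fileURL) { data, response, error in
    if let error = error {
        print("upload failed: \(error)")
        return
    }
    print("upload finished")
}
task.resume()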
As you may have surmised, the autoreleasepool may be unnecessary. It is intended to solve a very specific problem (where one creates and releases autoreleased objects in a tight loop). I suspect your problem lies elsewhere.

How do I free memory in a lazy_static?

The documentation states that if the type has a destructor, it won't be called: https://docs.rs/lazy_static/1.4.0/lazy_static/#semantics
So how am I supposed to free the memory?
So how am I supposed to free the memory?
That question isn't even wrong.
The entire point of lazy_static is that the object lives forever; that's what a static is. When would anything be freed? The note is there for non-memory Drop behavior, to indicate that if you e.g. use lazy_static for a file or a temp, it will not be flushed / deleted / … on program exit.
For memory stuff it'll be reclaimed by the system when the program exits, like all memory.
So how am I supposed to free the memory?
Make your lazy_static an Option, and call take() to free the memory once you no longer need it. For example:
// Imports assumed for this sketch; note that lock() here behaves like
// parking_lot's Mutex (returning the guard directly, not a Result).
use lazy_static::lazy_static;
use parking_lot::Mutex;
use std::iter;

lazy_static! {
    static ref LARGE: Mutex<Option<String>> =
        Mutex::new(Some(iter::repeat('x').take(1_000_000).collect()));
}

fn main() {
    println!("using the string: {}", LARGE.lock().as_ref().unwrap().len());
    LARGE.lock().take();
    println!("string freed");
    assert!(LARGE.lock().is_none());
}
Playground
As others have pointed out, it is not necessary to do this kind of thing in most cases, as the point of most global variables is to last until the end of the program, at which point the memory will be reclaimed by the OS even if the destructor never runs.
The above can be useful if the global variable is associated with resources which you no longer need past a certain point in the program.

Indy TIdFTP "Out of Memory" error when downloading a large file

I am using TIdFTP (Indy 10.6) to upload a large file that exceeds 1GB.
When I want to download that file, I get an error message saying "Out of Memory".
Here is my code:
var
  FTPfile: TMemoryStream;
begin
  ...
  FTPFile := TMemoryStream.Create;
  ...
  try
    IdFTP1.Get('Myfile.pdf', FTPfile, false);
    FTPfile.Position := 0;
  except
    on E: Exception do
      ShowMessage(E.Message);
  end;
I am using a 32bit version of Windows.
Is there a solution to work around this problem?
You are trying to download a 1GB file into a TMemoryStream. Aside from just being a bad idea in general, the main problem with doing this is that the FTP protocol does not report the size of a file that is being transferred, so TIdFTP.Get() can't pre-allocate the TMemoryStream's internal buffer up front. As such, as the file downloads, the TMemoryStream.Capacity will have to grow many times, and each time it will allocate a completely new buffer, copy the existing data into it, and free the old buffer. This means that during a growth operation, there is a small window of time where 2 buffers exist in memory. So you are actually using much more memory than you think. And storing a 1GB file in memory is really pushing the limit of how much memory a 32bit process can even allocate.
Eventually, the TMemoryStream will grow so large that the system just can't allocate that 2nd buffer anymore, thus failing with the "Out of Memory" error.
You could try using TIdFTP.Size() or TIdFTP.List() to determine the remote file's size ahead of time, set the TMemoryStream.Capacity accordingly, and then call TIdFTP.Get(). This way, there is only 1 memory allocation performed.
Also, make sure you Free() the TMemoryStream when you are done using it, or else you will leak whatever memory it was able to allocate.
But really, you should download the file to disk instead. You can use a TFileStream for that, but TIdFTP.Get() has an overload that takes a local file path as the destination instead of a TStream.
If you need to access the file's data in memory, you can read/memory-map it as needed after the download is complete.
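For example, a minimal sketch of the file-path overload mentioned above (the destination path is hypothetical):

var
  DestPath: string;
begin
  DestPath := 'C:\Downloads\Myfile.pdf'; // hypothetical local path
  try
    // Indy streams the download straight to the file on disk, so only a
    // small buffer is held in memory at any time.
    IdFTP1.Get('Myfile.pdf', DestPath, False);
  except
    on E: Exception do
      ShowMessage(E.Message);
  end;
end;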

iPhone memory management for a deallocated password (Malloc Scribble in production? Fill deallocated memory with zeroes?)

I'm doing some research on how the iPhone manages the heap and stack, but it's very difficult to find a good source of information about this. I'm trying to trace how a password is kept in memory, even after the NSString is deallocated.
As far as I can tell, the iPhone will not clear the memory contents (write zeroes or garbage) once the retain count under ARC goes down to 0. So the string with the password will live in memory until that memory location is overwritten.
There's a debug option in Xcode, Malloc Scribble, for debugging memory problems that fills deallocated memory with 0x55. By enabling this option (and disabling Zombies) and taking a memory dump of the simulator (using gcore), I can check whether the content has been replaced with 0x55 in memory.
I wonder whether something similar can be done with App Store builds (fill deallocated memory with garbage data), whether my assumption that the iPhone will not do that by default is correct, and whether there's a better way to handle sensitive data in memory and clear it after use (mutable data maybe, so I can overwrite that memory location myself?).
I don't think there's anything that can be done at the build-settings level. You can, however, apply some sort of memory scrubbing yourself by zeroing the memory (use memset with the pointer to your string).
As @Artal was saying, memset can be used to overwrite a memory location. I found the framework "iMAS Secure Memory", which can be useful for handling this:
The "iMAS Secure Memory" framework provides a set of tools for securing, clearing, and validating memory regions and individual variables. It allows an object to have its data sections overwritten in memory either with an encrypted version or null bytes.
They have a function that should be useful for clearing a memory location:
// Return NO if wipe failed
extern inline BOOL wipe(NSObject* obj) {
    NSLog(@"Object pointer: %p", obj);
    if (handleType(obj, @"", &wipeWrapper) == YES) {
        if (getSize(obj) > 0) {
            NSLog(@"WIPE OBJ");
            memset(getStart(obj), 0, getSize(obj));
        }
        else
            NSLog(@"WIPE: Unsupported Object.");
    }
    return YES;
}

Physical memory in Task Manager doesn't change when memory is allocated

My program may have a memory issue, so I am trying to find information about memory usage from the various tools available. To help find the cause, I also ran a simple experiment.
In release mode, I added the following code:
pChar = new char[((1<<30)/2)];
for (int i = 0; i < ((1<<30)/2); i++)
{
    pChar[i] = i % 256;
}
When the code is executed, the available physical memory shown in the Windows Task Manager doesn't change. My guess was that the compiler removes the code to boost performance, so I declared the variable as a global variable, but that didn't help either. In debug mode, however, the available physical memory in Task Manager changes as expected. I can't understand this.
I have another question: will new allocate memory from virtual memory if physical memory runs out, or will an exception be thrown?
It's indeed quite possible that the compiler detects a "write-only" variable. Since it's non-volatile, the writes can be safely eliminated, and then there's no need for the OS to actually allocate RAM.
On modern systems, new just allocates address space; physical RAM is committed when the memory is actually needed. Typically that happens when the constructor runs, as it initializes the members, but with new char there is of course no constructor.
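For example, a minimal sketch (my own illustration, assuming an optimized release build) that reads the data back and uses the result, so the optimizer cannot discard the writes and the pages actually get committed:

#include <cstdio>
#include <cstddef>

int main() {
    const std::size_t n = (std::size_t(1) << 30) / 2; // 512 MB
    char* pChar = new char[n];

    for (std::size_t i = 0; i < n; ++i)
        pChar[i] = static_cast<char>(i % 256);

    // Reading the data back and printing the result makes the writes observable,
    // so they cannot be eliminated; Task Manager should now show the usage.
    unsigned long long sum = 0;
    for (std::size_t i = 0; i < n; ++i)
        sum += static_cast<unsigned char>(pChar[i]);
    std::printf("checksum: %llu\n", sum);

    delete[] pChar;
    return 0;
}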
