How to clear dynamic memory?

For example, I have a dynamic segment tree on pointers. Will the memory be cleared if I assign the root of the tree to NULL? How do I clear it efficiently?

Assigning NULL will only change the pointer's value; it won't affect the allocated memory. The deallocation must mirror the allocation. This means that if you allocated each node of the tree separately, you also need to deallocate them separately (most likely in reverse order, depending on what the nodes contain). If all the memory was allocated at once, it should also be deallocated at once.
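Here is a minimal sketch of the per-node case in C++, assuming a hypothetical Node type with left and right child pointers (the names are illustrative, not taken from your tree):

struct Node {
    Node *left = nullptr;
    Node *right = nullptr;
    long long value = 0;
};

// Recursively free both subtrees, then the node itself, and null out the caller's pointer.
void clearTree(Node *&node) {
    if (node == nullptr) return;
    clearTree(node->left);
    clearTree(node->right);
    delete node;
    node = nullptr;
}

Calling clearTree(root) frees every node and leaves root equal to nullptr; assigning root = NULL on its own would simply leak all of them.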

Related

what exactly do stack and heap mean in the context of memory allocation

I read things like "memory is allocated on a stack" or "these variables are placed on a heap". I once studied a book on microprocessors and can faintly remember that there were topics or sections on something called a stack. And I do know that a stack is also a kind of LIFO data structure.
So I feel confused as to what "stack" implies here. Are there memory locations in every microprocessor, other than the registers, which are called a stack?
I'll describe the most common situation.
In this context, the stack is a dedicated memory area for a program (more precisely, for a thread). This memory is allocated automatically by the operating system when your program is started. Usually (but not always), the stack is allocated from main memory, so it is not a special memory inside the CPU.
It is called a stack because it is used "LIFO style". When a function is called, its local variables get allocated from the stack ("pushed onto the stack"). When it returns, these variables are freed ("popped from the stack").
About the heap: the heap is the place from which one can allocate memory in a more flexible manner than from the stack. Heap storage space is usually much larger than the stack, and the allocated space remains available even after the function which allocated it returns. In languages which don't have garbage collection, you have to manually free the allocated space. This heap is not to be confused with the heap data structure, which is a completely different thing.
char *var;

void example(int length) {
    char stackVar[1024];              // a 1024-element char array allocated on the stack
    char *heapVar = new char[length]; // a length-sized block allocated on the heap, plus a pointer (heapVar) to it allocated on the stack
    var = heapVar;                    // store a pointer to the allocated space

    // upon return, stackVar is automatically freed
    // the pointer heapVar is also automatically freed
    // the space that heapVar points to is NOT freed automatically; it can still be used afterwards (via the var pointer)
}

Delphi dynamic array efficiency

I am not a Delphi expert, and I was reading online about dynamic arrays and static arrays. In this article I found a chapter called "Dynamic v. Static Arrays" with a code snippet, below which the author says:
[...] access to a dynamic array can be faster than a static array!
I have understood that dynamic arrays are located on the heap (they are implemented with references/pointers).
As far as I know, access times are better for dynamic arrays. But is it the same for allocation? For example, if I call SetLength(MyDynArray, 5), is that slower than creating a MyArray = array[0..4] of XXX?
As far as I know, access times are better for dynamic arrays.
That is not correct. The statement in that article is simply false.
But is it the same for allocation? For example, if I call SetLength(MyDynArray, 5), is that slower than creating a MyArray = array[0..4] of XXX?
A common fallacy is that static arrays are always allocated on the stack. In fact, they could be global variables, and so allocated automatically when the module is loaded. They could be local variables, and so allocated on the stack. They could be dynamically allocated with calls to New or GetMem. Or they could be contained in a compound type (e.g. a record or a class) and so be allocated in whatever way the owning object is allocated.
Having got that clear, let's consider a couple of common cases.
Local variable, static array type
As mentioned, static arrays declared as local variables are allocated on the stack. Allocation is automatic and essentially free. Think of the allocation as being performed by the compiler (when it generates code to reserve a stack frame). As such there is no runtime cost to the allocation. There may be a runtime cost to access because this might generate a page fault. That's all perfectly normal though, and if you want to use a small fixed size array as a local variable then there is no faster way to do it.
Member variable of a class, static array type
Again, as described above, the allocation is performed by the containing object. The static array is part of the space reserved for the object and when the object is instantiated sufficient memory is allocated on the heap. The cost for heap allocation does not typically depend significantly on the size of the block to be allocated. An exception to that statement might be really huge blocks but I'm assuming your array is relatively small in size, tens or hundreds of bytes. Armed with that knowledge we can see again that the cost for allocation is essentially zero, given that we are already allocating the memory for the containing object.
Local variable, dynamic array type
A dynamic array is represented by a pointer. So your local variable is a pointer allocated on the stack. The same argument applies as for any other local variable, for instance the local variable of static array type discussed above. The allocation is essentially free. Before you can do anything with this variable though, you need to allocate it with a call to SetLength. That incurs a heap allocation which is expensive. Likewise when you are done you have to deallocate.
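The same tradeoff is easy to see outside Delphi. A rough C++ analogue (illustrative only, not part of the original answer): a fixed-size local array costs nothing extra to allocate, while a heap-backed array pays an allocation and a deallocation on every call.

#include <vector>

void usesStackArray() {
    int fixedArr[5] = {0};       // reserved as part of the stack frame: no runtime allocation cost
    fixedArr[0] = 42;
}                                // freed automatically when the stack frame is popped

void usesHeapArray(int n) {
    std::vector<int> dynArr(n);  // heap allocation here, roughly what SetLength does
    if (!dynArr.empty()) dynArr[0] = 42;
}                                // heap deallocation here when the vector is destroyed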
Member variable of a class, dynamic array type
Again, allocation of the dynamic array pointer is free, but you must call SetLength to allocate. That's a heap allocation. There needs to be a deallocation too when the object is destroyed.
Conclusion
For small arrays, whose lengths are known at compile time, use of static arrays results in more efficient allocation and deallocation.
Note that I am only considering allocation here. If allocation is a relatively insignificant portion of the time spent working with the object then this performance characteristic may not matter. For instance, suppose the array is allocated at program startup, and then used repeatedly for the duration of the program. In such a scenario the access times dominate the allocation times and the difference between allocation times becomes insignificant.
On the flip side, imagine a short function called repeatedly during the program's lifetime, and suppose this function is the performance bottleneck. If it operates on a small array, then it is possible that the allocation cost of using a dynamic array could be significant.
Very seldom can you draw hard and fast rules with performance. You need to understand how the tools work, and understand how your program uses these tools. You can then form opinions on which coding strategies might perform best, opinions that you should then test by profiling. You will be surprised more often than you might expect that your intuition is not a good predictor of performance.

how does malloc work in detail?

I am trying to find some useful information on the malloc function.
When I call this function, it allocates memory dynamically and returns a pointer (i.e. the address) to the beginning of the allocated memory.
The questions:
How is the returned address used to read from / write to the allocated memory block (using indirect addressing registers, or how)?
If it is not possible to allocate a block of memory, it returns NULL. What is NULL in terms of hardware?
In order to allocate memory on the heap, we need to know which parts of memory are already occupied. Where is this information about the occupied memory stored (if, for example, we use a small RISC microcontroller)?
Q3 The usual way that heaps are managed is through a linked list. In the simplest case, the malloc function retains a pointer to the first free-space block in the heap, and each free-space block has a header that points to the next free-space block in the heap. So the heap is in effect self-defining in terms of knowing what is not occupied (and, by inference, what is occupied); this minimizes the amount of overhead RAM needed to manage the heap.
When new space is needed via a malloc call, a large enough free-space block is found by traversing the linked list. That free-space block is given to the malloc caller (with a small hidden header), and, if needed, a smaller free-space block made from the residual space (the difference between the size of the original free-space block and the amount of memory the malloc call asked for) is inserted back into the linked list.
When a heap block is released by the application, its block is just formatted with the linked-list header, and added to the linked list, usually with some extra logic to combine consecutive free-space blocks into one larger free-space block.
Debugging versions of malloc usually do more, including retaining linked-lists of the allocated areas too, "guard zones" around the allocated heap areas to help detect memory overflows, etc. These take up extra heap space (making the heap effectively smaller in terms of usable space for the applications), but are extremely helpful when debugging.
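A highly simplified sketch of the free-list idea in C++ (this is not the actual glibc implementation; the names, and the omission of block splitting and coalescing, are simplifications):

#include <cstddef>

// Hypothetical header stored in each free block of the heap.
struct FreeBlock {
    std::size_t size;   // usable bytes in this free block
    FreeBlock  *next;   // next free block in the list
};

FreeBlock *freeList = nullptr;  // head of the free list, set up when the heap is initialised

// First-fit search: unlink and return the first free block that is large enough,
// or nullptr if none is (the case in which malloc would return NULL).
FreeBlock *takeFirstFit(std::size_t wanted) {
    for (FreeBlock **link = &freeList; *link != nullptr; link = &(*link)->next) {
        if ((*link)->size >= wanted) {
            FreeBlock *found = *link;
            *link = found->next;   // remove the block from the free list
            return found;
        }
    }
    return nullptr;
}

A real allocator would additionally split an oversized block, keep a hidden header on allocated blocks, and merge adjacent free blocks when memory is released, as described above.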
Q2 A NULL pointer is effectively just a zero, which, if used, attempts to access memory starting at location 0 of RAM, which is almost always memory reserved for the OS. This is the cause of a significant number of memory-violation aborts, all caused by programmers' failure to check for NULL returns from functions that allocate memory.
Because accessing memory location 0 from a non-OS application is never what is wanted, most hardware aborts any attempt to access location 0 by non-OS software. Even with page mapping such that the application's memory space (including location 0) is never mapped to the real RAM location 0, since NULL is always zero, most CPUs will still abort attempts to access location 0 on the assumption that this is an access via a pointer that contains NULL.
Given your RISC processor, you will need to read its documentation to see how it handles attempts to access memory location 0.
Q1 There are many high-level language ways to use allocated memory, primarily through pointers, strings, and arrays.
In terms of assembly language and the hardware itself, the allocated heap block address just gets put into a register that is being used for memory indirection. You will need to see how that is handled in the RISC processor. However if you use C or C++ or such higher level language, then you don't need to worry about registers; the compiler handles all that.
Since you are using malloc, can we assume you are using C?
If so, you assign the result to a pointer variable, and you can then access the memory by dereferencing that variable. How this is implemented in assembly depends on the CPU you are using. malloc returns 0 if it fails, and since NULL is usually defined as 0, you can test for NULL. You don't normally need to care how malloc tracks the free memory; if you really need this information, look at the source of glibc's malloc, which is available on the net.
#include <stdlib.h>

char *c = malloc(10);   // allocate 10 bytes
if (c == NULL) {
    // handle the error case
} else {
    *c = 'a';           // write 'a' into the first character of the block
}

why are both a stack and a heap required for memory allocation

I've searched for a while, but no conclusive answer is available on why value types have to be allocated on the stack while reference types, i.e. dynamically allocated objects, have to reside on the heap.
Why can't the latter also be allocated on the stack?
They can be. In practice they're not, because the stack is typically a scarcer resource than the heap, and allocating reference types on the stack may exhaust it quickly. Further, if a function returns data allocated on its own stack, it will require copying semantics on the caller's part, or risk returning something that will be overwritten by the next function call.
Value types, typically local variables, can be brought in and out of scope quickly and easily with native machine instructions. Copy semantics for value types on return is trivial as most fit into machine registers. This happens often and should be as cheap as possible.
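A small C++ sketch of the "returning data allocated on the stack" hazard mentioned above (illustrative, not part of the original answer):

#include <array>

int *dangling() {
    int local[4] = {1, 2, 3, 4};
    return local;                    // WRONG: the array lives in this stack frame and dies on return
}

std::array<int, 4> byValue() {
    std::array<int, 4> local = {1, 2, 3, 4};
    return local;                    // fine: the whole value is copied (or moved) out to the caller
}

int *onHeap() {
    return new int[4]{1, 2, 3, 4};   // fine: the block outlives the call, but someone must delete[] it later
}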
It is not correct that value types always live on the stack. Read Jon Skeet's article on the topic:
Memory in .NET - what goes where
I understand that the stack paradigm (nested allocations/deallocations) cannot handle certain algorithms which need non-nested object lifetimes.
In the same way, the static allocation paradigm cannot handle recursive procedure calls (e.g. a naive calculation of fibonacci(n) as f(n-1) + f(n-2)).
I'm not aware of a simple algorithm that would illustrate this fact, though. Any suggestions would be appreciated :-)
Local variables are allocated on the stack. If that were not the case, you wouldn't be able to have variables pointing to the heap when allocating a variable's memory. You CAN allocate things on the stack if you want: just create a buffer big enough locally and manage it yourself.
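A minimal sketch of that last point, using a hypothetical hand-managed local buffer (a bump allocator over a stack array, purely illustrative):

#include <cstddef>

void usesLocalBuffer() {
    char buffer[256];            // storage reserved on the stack, managed by hand
    std::size_t used = 0;

    // Hand out 16 bytes from the buffer, bump-allocator style.
    void *chunk = nullptr;
    if (used + 16 <= sizeof buffer) {
        chunk = buffer + used;
        used += 16;
    }
    // ... use chunk within this function only ...
}                                // the whole buffer disappears when the function returns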
Anything a method puts on the stack will vanish when the method exits. In .net and Java, it would be perfectly acceptable (in fact desirable) if a class object vanished as soon as the last reference to it vanished, but it would be fatal for an object to vanish while references to it still exist. It is not in the general case possible for the compiler to know, when a method creates an object, whether any references to that object will continue to exist after the method exits. Absent such assurance, the only safe way to allocate class objects is to store them on the heap.
Incidentally, in .net, one major advantage of mutable value types is that they can be passed by reference without surrendering perpetual control over them. If class 'foo', or a method thereof, has a structure 'boz' which one of foo's methods passes by reference to method 'bar', it is possible for bar, or the methods it calls, to do whatever they want to 'boz' until they return, but once 'bar' returns any references it held to 'boz' will be gone. This often leads to much safer and cleaner semantics than the promiscuously-sharable references used for class objects.

local and dynamic allocation

I have a tree and I want to release the allocated memory, but I face a problem: a pointer may refer to a variable that isn't dynamically allocated. How can I know whether a pointer refers to a dynamically allocated variable or not?
This is compiler-specific. You may compare the given pointer with a pointer to a local variable. How to interpret the result depends on the way the compiler implements the heap and the stack. Generally, for a given compiler, a stack pointer is always less (or greater) than a heap pointer.
In any case, THIS IS BAD DESIGN.
This may not work if the pointer belongs to another heap (for example, if it was allocated in another DLL).
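For completeness, a sketch of that heuristic in C++, with the caveat repeated: ordering comparisons between pointers to unrelated objects are not defined by the C++ standard, and the stack/heap layout this relies on varies by platform and compiler, so treat it as a debugging hack at best:

#include <cstdint>

// Non-portable heuristic: assumes heap addresses are lower than stack addresses on this platform.
bool probablyOnHeap(const void *p) {
    int stackProbe = 0;   // lives in the current stack frame
    return reinterpret_cast<std::uintptr_t>(p) <
           reinterpret_cast<std::uintptr_t>(&stackProbe);
}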
