How is heap memory represented? [duplicate] - memory

This question already has answers here:
What data structure is used to implement the dynamic memory allocation heap?
(2 answers)
Closed 8 years ago.
As per the question What and where are the stack and heap?, I more or less have an understanding of stack memory vs heap memory. Another question comes to mind, however:
Most sources I've read describe stack memory as being represented by an actual stack data structure in the memory. Is it the same with heap memory? Is heap memory represented/abstracted by a min/max heap data structure? If not, then what data structure is used to implement heap memory?

It all depends on the language and runtime you are using, but the two are traditionally implemented quite differently. The call stack really is used as a stack: each function call pushes a frame holding its locals and return address, and returning pops that frame. The heap, despite the name, has nothing to do with the min/max heap data structure; allocators typically manage it with free lists, size-segregated bins, or trees of free blocks. The details differ per language and allocator, so there isn't a single general answer to this question.

Related

Should someone focus on having as few memory leaks as possible or on having the fastest computing time? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I was wondering:
When programming, should one focus more on having as few memory leaks as possible, or on CPU computing time?
What are the pros/cons?
Thanks!
Basile's answer is correct. It's also worth clarifying what you mean by "memory leaks".
The strict definition of a memory leak is when a block of dynamic memory is never deallocated after being used. I would argue that this is never acceptable under any circumstances.
But, fortunately, avoiding memory leaks and using CPU time efficiently are not usually competing ideals.
It sounds like your question is more along the lines of "is it more important to cut down on CPU cycles, or is it more important to use as little memory as possible?" This is a common and completely valid question because there are many instances in programming where you can shave CPU cycles by dumping stuff into memory, or on the other hand, you can save memory by brute-forcing something.
Alas, there's no clear-cut answer. There are times when one is appropriate, and times when it goes the other way. As you grow as a programmer, you learn best practices for being efficient with both. And, in the real world, as long as you program responsibly, you will probably never see an actual situation where you have to sacrifice one or the other. Especially with the speed of modern chips.
If a program runs very quickly (e.g. in a small fraction of a second) and you want to run it a zillion times, memory leaks do not matter at all: in such a short run it will allocate only a small, reasonable amount of memory, and the OS reclaims all memory used by a process when that process terminates.
If your program does not run quickly (in particular if it runs continuously, e.g. because it is a server or a daemon), memory leaks are of paramount importance.
BTW, "memory leak" can mean slightly different things in different languages (not the same in C as in OCaml).
If coding in C or C++, use valgrind to detect memory leaks.
Read also about garbage collection (see also the GC handbook). At the very least, the terminology and the algorithms related to GC should concern you. In C, you might sometimes consider using Boehm's conservative garbage collector.

Memory breakdown based on its speed [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
In one technical discussion I was asked which things you look at when you buy a laptop.
Then I was asked to sort the different types of memory (e.g. RAM) by speed. In simple words, he wanted the memory hierarchy.
Technically speaking a processor's registers are the fastest memory a computer has. The size is very small and people generally don't include those numbers when talking about a CPU.
The quickest memory in a computer that would be advertised is the memory that is directly attached to the CPU. It's called cache, and in modern processors you have 3 levels - L1, L2, and L3 - where the first level is the fastest but also the smallest (it's expensive to produce and power). Cache typically ranges from several kilobytes to a few megabytes and is typically made from SRAM.
After that there is RAM. Today's computers use DDR3 for main memory. It's much larger and cheaper than cache, and you'll find sticks upwards of 1 gigabyte in size. The most common type of RAM today is DRAM.
Lastly storage space, such as a hard drive or flash drive, is a form of memory but in general conversation it's grouped separately from the previous types of memory. E.g. you would ask how much "memory" a computer has - meaning RAM - and how much "storage" it has - meaning hard drive space.

CUDA : why are we using so many kinds of memories? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I've been learning CUDA programming and I've run into a question. The main one is: in CUDA, why do we use so many kinds of memory (global, local, shared, constant, texture, caches, registers), unlike on the CPU where we deal with only a few (RAM, caches, disk, etc.)?
The main reasons for having multiple kinds of memory are explained in this article: Wikipedia: Memory Hierarchy
To summarize it in a very simplified form:
It is usually the case that the larger the memory is, the slower it is
Memory can be read and written faster when it is "closer" to the processor.
As mentioned in the comment: On the CPU, you also have several layers of memory: The main memory, and several levels of caches. These caches are much smaller than main memory, but much faster. These caches are managed by the hardware, so as a software developer, you do not directly notice that these caches exist at all. All the data seems to be in the main memory.
On the GPU, you have to manage this memory manually (although in newer CUDA versions, you can also declare the shared memory as "cache", and let CUDA take care of the data management).
For example, reading some data from the shared memory in CUDA may be done within a few NANOseconds. Reading data from global memory may take a few MICROseconds. One of the keys to high performance in CUDA is thus data locality: You should try to keep the data that you are working on in local or shared memory, and avoid reading/writing data in global memory.
(P.S.: The "Close" votes that mark this question as "Primarily Opinion Based" are somewhat ridiculous. The question may show a lack of prior research, but it is a reasonable question that can clearly be answered here.)

Stack and Heap memory [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
My questions here are:
What are stack and heap memory?
Why do we need both of these memories?
What are the pros and cons of each?
In a nutshell:
The stack - the memory the program uses to actually run. It holds local variables and call records (for example, when you call a function, the stack stores the state and the place in the code you were at before entering the new function), plus other small things of that nature. You usually don't control the stack directly; variables and data are created and destroyed as you move in and out of function scopes.
The heap - the "dynamic" memory of the program. Each time you create a new object or variable dynamically, it is stored on the heap. This memory is controlled by the programmer directly: you are expected to take care of both the creation AND the deletion of the objects there.
In C/C++, memory allocated on the stack is automatically freed when the allocating scope ends; memory on the heap has to be freed under some policy (free(), delete, ..., or some garbage collector). Memory allocated on the heap is also visible across different function scopes. And since the stack can't hold very large allocations, the heap is what you use when you need a big block of space for data.
I am not sure in which context you are asking, but I can answer from their use in memory allocation. Both of these areas are needed by platforms like .NET for garbage collection. Remember that value types are generally stored on the stack and reference types on the heap. This helps the runtime environment build an object graph and keep track of which objects are no longer in use and can be considered for garbage collection.

Difference between stack memory and heap memory [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
What and where are the stack and heap
Where are heap memory and stack memory stored? I mean, where on the hard disk? What are the limits of their size?
You should consider memory and the hard disk as opposites: the stack and heap live in RAM, not on the hard disk.
Memory is the more expensive stuff that comes in sticks and is on the order of 1000x faster than a hard disk.
I don't think you'll be able to "find" the heap and stack memory the way you want to. The OS sets them up by assigning each a range of (virtual) addresses, like 0x681CFF00 - 0x682CFF00.
Perhaps this discussion will help What and where are the stack and heap?
