Tools for checking memory fragmentation [closed] - memory

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 8 years ago.
I have recently read some topics about memory fragmentation:
How to solve Memory Fragmentation and What is memory fragmentation?
I want to see a memory allocation map like the ones in this article: http://pavlovdotnet.wordpress.com/2007/11/10/memory-fragmentation/
Could you recommend some tools that produce a memory allocation map like that, so I can see whether the memory is fragmented and what the largest free block is?
I'm on Windows, so I would prefer tools that work on that system.

Here is a tool that visualizes GC memory and heap usage; the source code is provided as well. Another similar app is linked in the comments there.
If you need to profile memory usage for a .NET solution, you could check out ANTS Memory Profiler: it can run alongside a project in Visual Studio and keep tabs on how processes and objects use memory.

There is an indirect solution to the problem. I have been developing a server application for a few years. Initially we allocated on demand, and as a result, after running for a few weeks, the server's performance degraded. As a workaround we adopted the following approach:
Suppose you have user-defined classes X, Y, Z, ... that you need to allocate from the heap at runtime. Allocate n objects of X at startup and put them all in a free-pool list. On demand, take an object of X from that list and hand it to your application; while it is in use, keep it in a busy-pool list.
When the application wants to release it, move it back to the free-pool list. Follow the same strategy for Y, Z, etc.
Since you allocate all the needed objects at startup and never release them back to the OS memory manager until your program exits, you avoid the performance degradation caused by memory fragmentation.

Related

What is "Memory Management" in iOS? [closed]

I have been given these 2 questions in my interview.
1. What is **memory management** in iOS?
2. What is reference counting?
Can anyone explain these to me? I am new to iOS. Please help me out.
Thanks in advance!
Memory management is important to development of any type. Mobile devices in general have less memory to work with than full sized computers, and so it is even more critical that you manage memory wisely when working with them. This means ensuring that you do not hold on to memory any longer than you need to, and that you are careful about the amount of memory that you allocate.
Luckily in iOS it is no longer necessary to deal directly with reference counting because it is handled automatically by ARC (Automatic Reference Counting), but it is still good to know what it is. Whenever an object is allocated it gets a reference count of 1. That reference count can be increased by calling retain, or decreased by calling release. When the reference count hits 0, the object is deallocated.
Application memory management is the process of allocating memory during your program’s runtime, using it, and freeing it when you are done with it. A well-written program uses as little memory as possible. In Objective-C, memory management can also be seen as a way of distributing ownership of limited memory resources among many pieces of data and code. You manage your application’s memory by explicitly managing the life cycle of objects and freeing them when they are no longer needed.
Reference counting

Memory breakdown based on its speed [closed]

In one technical discussion the person asked me which things you look at when you buy a laptop.
Then he asked me to sort the different types of memory (e.g. RAM, cache, etc.) by speed. In simple words, he wanted the memory hierarchy.
Technically speaking, a processor's registers are the fastest memory a computer has. They are very small, and people generally don't quote those numbers when talking about a CPU.
The quickest memory in a computer that would be advertised is the memory directly attached to the CPU. It's called cache, and modern processors have three levels - L1, L2, and L3 - where the first level is the fastest but also the smallest (it's expensive to produce and power). Cache typically ranges from a few kilobytes to a few megabytes and is usually made of SRAM.
After that there is RAM. Today's computers use DDR3 for main memory. It's much larger and cheaper than cache, and you'll find sticks upwards of 1 gigabyte in size. The most common type of RAM today is DRAM.
Lastly, storage space, such as a hard drive or flash drive, is a form of memory, but in general conversation it's grouped separately from the previous types of memory. E.g. you would ask how much "memory" a computer has - meaning RAM - and how much "storage" it has - meaning hard drive space.

CUDA : why are we using so many kinds of memories? [closed]

I've been learning CUDA programming and ran into a problem. The major one is: in CUDA, why do we use so many kinds of memory (global, local, shared, constant, texture, caches, registers), unlike on the CPU where we have only a few main kinds (RAM, caches, disk, etc.)?
The main reasons for having multiple kinds of memory are explained in this article: Wikipedia: Memory Hierarchy
To summarize it in a very simplified form:
The larger a memory is, the slower it usually is.
Memory can be read and written faster when it is "closer" to the processor.
As mentioned in the comment: On the CPU, you also have several layers of memory: The main memory, and several levels of caches. These caches are much smaller than main memory, but much faster. These caches are managed by the hardware, so as a software developer, you do not directly notice that these caches exist at all. All the data seems to be in the main memory.
On the GPU, you have to manage this memory manually (although in newer CUDA versions, you can also declare the shared memory as "cache", and let CUDA take care of the data management).
For example, reading some data from the shared memory in CUDA may be done within a few NANOseconds. Reading data from global memory may take a few MICROseconds. One of the keys to high performance in CUDA is thus data locality: You should try to keep the data that you are working on in local or shared memory, and avoid reading/writing data in global memory.
(P.S.: The "Close" votes that mark this question as "Primarily Opinion Based" are somewhat ridiculous. The question may show a lack of own research, but is a reasonable question that can clearly be answered here)

Why does Coldfusion 10 occupy 50% of available RAM instantly, without anything running? [closed]

We are running ColdFusion 10 in a Windows 2008 R2 Standard environment.
We notice that immediately upon launching ColdFusion services, it eats 4 - 5 gigs of available RAM (we have 8 gigs available).
This occurs even though nothing is actually happening. No pages are running, no processes are going, literally nothing is happening. It occupies this RAM immediately upon launch.
Was wondering if anyone has experienced this before, and whether there is something in the default settings of ColdFusion admin that we may have screwed up?
Check your jvm.config file. You will probably find settings like:
-Xmx4096m
-Xms4096m
The important one in this context is -Xms: the minimum (initial) size of the JVM heap. It means the JVM claims that much memory immediately, regardless of what it is doing. This is OK; it is how I run my own servers.
Having -Xmx and -Xms set to the same value is usually recommended, because if you start with a smaller heap it takes time and resources for the heap to grow to the size you need. Performance is usually better if the JVM just claims all the memory it needs up front.

How to use graphics memory as RAM? [closed]

Since graphics cards provide large amounts of RAM (0.5 GiB to 2 GiB), and API access to the GPU is not that difficult with CUDA, Stream, and the more portable OpenCL, I wondered whether it is possible to use graphics memory as RAM. Graphics RAM may have a higher latency (from the CPU) than real RAM, but it is definitely faster than an HDD, so it could be ideal for caching.
Is it possible to access graphics memory directly, or at least through a thin memory-management layer, within my own applications (rather than leaving it freely usable by the OS)? If so, what is the preferred way to do this?
Yes, you can use it as swap memory on Linux. Refer to the link here for more details.
With Linux, it's possible to use it as swap space, or even as a RAM disk.
Be warned: it's nice to have fast swap or a RAM disk on your home computer, but if a binary driver is loaded for X, it may freeze the whole system or create graphical glitches. Usually there is no way to tell the driver how much memory may be used, so it won't know the upper limit. The VESA driver can be used, however, because it provides the possibility to set the video RAM size. So: direct rendering or fast swap, your choice.
Unlike motherboard RAM and hard drives, there aren't any known video cards that have ECC memory. This may not be a big deal for graphics rendering, but you definitely don't want to put critical data in it or use this feature on servers.
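The swap route typically goes through the Linux MTD layer. A hedged sketch of the usual sequence; the physical address and size below are placeholders, not real values — you must read your card's aperture address and unused-VRAM size yourself (e.g. from `lspci -v`), and everything here requires root:

```shell
# Map a region of spare video RAM as an MTD device (ADDRESS/SIZE are
# HYPOTHETICAL placeholders -- substitute values from your own card).
modprobe phram phram=vram,0xd0000000,64Mi
modprobe mtdblock            # expose the MTD device as /dev/mtdblock0
mkswap /dev/mtdblock0        # format it as swap
swapon -p 10 /dev/mtdblock0  # enable it with higher priority than disk swap
```

Getting the address or size wrong can overwrite memory the GPU driver is actively using, so this is strictly an experiment for a machine you can afford to crash.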
