Memory management in the bada Operating System

I am finding it very hard to get the details of how memory management is done in bada OS.
Does anyone have any info about it, or do all smartphones have similar memory-management concepts?

When programming on bada you mainly have to deal with heap memory. Some classes in the bada API offer automatic memory management (an Osp::Base::Collection can release the memory of its elements if you ask it to; the Osp::Ui::Container method RemoveControl() frees the memory of its child control).
But in the general case you need to handle memory freeing yourself.
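For instance, a hedged sketch of letting a collection free its elements for you. This assumes Osp::Base::Collection::ArrayList::RemoveAll(bool deallocate) behaves as in the bada collection API I recall, where deallocate = true deletes the stored objects as well; verify against the SDK headers:

```cpp
#include <FBase.h>

using namespace Osp::Base;
using namespace Osp::Base::Collection;

void BuildAndClear()
{
    ArrayList list;
    list.Construct();
    list.Add(*(new String(L"first")));  // heap-allocated elements
    list.Add(*(new String(L"second")));

    // With deallocate = true the list deletes the elements for us,
    // so no manual per-element delete is needed here (assumption:
    // this flag exists as described above).
    list.RemoveAll(true);
}
```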

Memory management in bada follows the conventional C++ memory-handling policy. An app is always responsible for deleting memory it allocates: every call to new must have a symmetrical call to delete.
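A minimal illustration of that symmetry (the MyData type is illustrative, not bada API):

```cpp
// The code path that allocates is also responsible for freeing;
// bada has no garbage collector.
class MyData { /* ... */ };

void Process()
{
    MyData* pData = new MyData(); // heap allocation
    // ... use pData ...
    delete pData;                 // symmetrical delete by the same owner
    pData = nullptr;              // avoid reusing a dangling pointer
}
```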
Memory in bada at runtime is divided between:
- Static memory: assigned by the compiler and part of the application binary at runtime.
- Stack memory: allocated and freed at runtime by the OS as function activation frames for the running program are created and released.
- Heap memory: allocated and freed dynamically as requested by a program.
Object Ownership Responsibilities
A further small but important complication relating to memory allocation and object construction is that sometimes framework methods require the framework to allocate and return a new object to the calling app.
Once the object is returned by the framework and passed into the ownership of the caller, the framework no longer knows when the object is finished with. In this case, the simple rule that allocating and freeing memory should always be done symmetrically no longer holds.
The problem for the app programmer, then, is to know whether the app or the framework should be responsible for cleaning up a given object.
This problem is solved almost trivially in bada by a simple naming convention and an associated rule.
Convention: a trailing 'N' in a method name, for example Something() versus SomethingN().
Rule: the caller is always responsible for deleting objects returned by a framework method named with a trailing 'N'.
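A hedged sketch of the convention in use (ItemList and GetItemsN() are illustrative names, not actual bada API):

```cpp
// The trailing 'N' tells the caller: "you own the returned object."
class ItemList { /* ... */ };

// A hypothetical framework method following the bada naming convention:
// it allocates a new object and transfers ownership to the caller.
ItemList* GetItemsN()
{
    return new ItemList(); // framework allocates but will never delete
}

void UseItems()
{
    ItemList* pItems = GetItemsN(); // ownership now lies with the caller
    // ... use pItems ...
    delete pItems;                  // so the caller supplies the delete
}
```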

Related

In programming environments that have automatic memory management, how often are the OS memory allocation routines invoked at runtime?

Do implementations pre-allocate blocks of memory for objects using malloc? When these blocks are used up, will additional memory be requested? When garbage collection runs and compaction occurs, will memory be returned to the OS via calls to free?
Do implementations pre-allocate blocks of memory for objects using malloc?
Yes. Most often they pre-allocate contiguous blocks of memory and implement their own allocation mechanism inside (for example, one based on an allocation pointer that points at the memory address for the next object, so allocating an object simply means returning this address and moving the pointer forward by the given number of bytes). This is faster than relying on OS calls and gives better control of those memory regions. For example, in the case of the CLR on Windows, those blocks are called segments and are managed via VirtualAlloc/VirtualFree calls. First, quite a big memory region is reserved, and then more and more pages are committed as they are needed. malloc (or, more generally, the Heap API in the case of Windows) is not used in the CLR.
When these blocks are used up, will additional memory be requested?
Yes, more blocks may be created, but first the existing ones grow "inside" by committing (consuming) already-reserved memory.
When garbage collection runs and compaction occurs, will memory be returned to the OS via calls to free?
It depends on the specific runtime implementation, but you should not look at this as the main memory-reclamation mechanism. Compaction works inside those preallocated memory blocks; for example, the allocation pointer is moved back toward the start after compaction occurs. But yes, in general, segments may be returned to the OS when the GC decides they are no longer needed (for example, when all the objects living inside have been reclaimed). However, on 32-bit architectures with a quite limited virtual address space, returning blocks could lead to unwanted memory fragmentation, so reusing such a block was the better option. On 64-bit this is much less of a problem, but reusing those blocks may still be a good idea.
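A minimal sketch of the "allocation pointer" scheme described above (names and policies are illustrative; real runtimes add alignment, GC metadata, and per-thread allocation buffers):

```cpp
#include <cstddef>
#include <cstdlib>

class BumpAllocator {
public:
    explicit BumpAllocator(std::size_t size)
        : begin_(static_cast<char*>(std::malloc(size))), // pre-allocated block
          next_(begin_),
          end_(begin_ + size) {}

    ~BumpAllocator() { std::free(begin_); }

    // Allocation is just "return the current pointer, then advance it".
    void* Allocate(std::size_t bytes) {
        if (end_ - next_ < static_cast<std::ptrdiff_t>(bytes))
            return nullptr; // block exhausted: a real runtime would commit
                            // more reserved pages or open a new segment
        void* p = next_;
        next_ += bytes;
        return p;
    }

    // After compaction, survivors are slid toward the start and the
    // allocation pointer is moved back "to the left".
    void ResetTo(char* newTop) { next_ = newTop; }

private:
    char* begin_;
    char* next_;
    char* end_;
};

int main() {
    BumpAllocator heap(1024);
    void* a = heap.Allocate(16); // two back-to-back allocations are
    void* b = heap.Allocate(32); // adjacent in memory; no free-list scan
    return (a != nullptr && b != nullptr) ? 0 : 1;
}
```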

Mapping and allocating

I am a little confused by the term mapping. For example, when we say we are mapping memory for a database, does that mean we are assigning a specific amount of memory at some memory location to that database?
Also, is allocating memory a synonym for reserving memory?
I encounter these two terms very often, and they aren't so clear to me.
If someone can clarify these two terms, I will be very thankful.
This might be a question better asked of the software community at Stack Overflow. However, I am a computer scientist, and I would say that these terms aren't always used accurately and precisely.
In general, allocating memory means making memory available to a program for an active purpose, such as allocating buffers to hold a file or an in-memory structure right now.
Reserving memory is often used to mean the same thing. However, it is sometimes more passive: for example, reserving memory in case there is a future requirement, or protecting against too much memory being allocated for a different purpose.
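The allocate-versus-reserve distinction is concrete on Windows, where VirtualAlloc separates reserving address space from committing usable memory; a minimal sketch:

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
    const SIZE_T total = 1 << 20; // 1 MiB of address space

    // Reserve: set aside virtual addresses, backed by nothing yet.
    char* base = static_cast<char*>(
        VirtualAlloc(NULL, total, MEM_RESERVE, PAGE_NOACCESS));
    if (base == NULL) return 1;

    // Commit: only now does the first 64 KiB become usable memory.
    if (VirtualAlloc(base, 64 * 1024, MEM_COMMIT, PAGE_READWRITE) == NULL)
        return 1;

    base[0] = 42;             // fine: committed
    // base[total - 1] = 42;  // would fault: reserved but not committed

    std::printf("first byte: %d\n", base[0]);
    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}
```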
Often when the term 'mapping' is used, it is for a file. It may mean exactly the same as allocating, or it may mean more: mapping may use an underlying mechanism provided by virtual memory management, where part of virtual memory is 'mapped' to the file without actually reading the file into physical memory. The trick is that, as the memory-mapped file is accessed, the block/page being touched is read in 'invisibly' to the process when necessary. This uses a mechanism called demand paging. Its benefit is that a program can access the file as if it were all read into memory, but only the parts actually accessed are retrieved from the persistent storage system (disk, flash, whatever), which can be a huge win if only small parts of the file are needed.
Further, it simplifies the program, which can be written as if the whole file is in memory. Instead of the application developer trying to keep track of which parts of the file have been loaded into memory, the operating system does that instead.
Even better, the Operating system can be asked to track which blocks/pages have their contents changed, and it can be asked to periodically write that back out to persistent storage. This can even further simplify the application program.
This is popular with some databases.
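A minimal POSIX sketch of such a memory-mapped file ("data.db" is an illustrative filename; pages are faulted in by demand paging only when touched):

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int fd = open("data.db", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    // Map the whole file into virtual memory; nothing is read yet.
    void* p = mmap(nullptr, static_cast<size_t>(st.st_size),
                   PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    // Touching a byte faults in just the containing page from disk.
    unsigned char first = static_cast<unsigned char*>(p)[0];
    std::printf("first byte: %u\n", static_cast<unsigned>(first));

    munmap(p, static_cast<size_t>(st.st_size));
    close(fd);
    return 0;
}
```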
Mapping basically means assigning, except that we often want a one-to-one mapping, as in the case of functions. If you define the function of an object, physical or just logical, and define its relationships and how it changes under transformation, then you have mapped it.

Now that ARC exists in iOS, do we still need Xcode Instruments (Allocations and Leaks)?

As I learnt from the Apple documentation, in iOS ARC automatically takes care of memory management and memory leaks.
But my doubt is: do we still need Xcode Instruments (Allocations and Leaks) to check whether a memory leak has happened in our application?
Please do share if you know the answer.
Yes, of course you still need to use Instruments.
Swift uses Automatic Reference Counting (ARC) to track and manage your app’s memory usage. In most cases, this means that memory management “just works” in Swift, and you do not need to think about memory management yourself. ARC automatically frees up the memory used by class instances when those instances are no longer needed.
However, in a few cases ARC requires more information about the relationships between parts of your code in order to manage memory for you. This chapter describes those situations and shows how you enable ARC to manage all of your app’s memory.
You should take a look at Automatic Reference Counting.
One of the most common situations is when you have strong reference cycles between class instances, because the compiler doesn't know when to release that part of memory. Also take a look at the differences between strong and weak references.
But as even Apple says, this works "in most cases": you should be OK without Instruments, but if your application crashes, it could be that you have memory issues.
Automatic Reference Counting provides a new, simpler way of managing reference-counted objects. By automating the tasks of calling retain and release, it eliminates a large class of memory leaks and invalid references caused by programmers forgetting to call memory management functions.
However, ARC does not eliminate a different class of leaks caused by logical errors in the design of your code, when your object graph has cycles. ARC provides tools for you to address this issue by adding weak references, but if you don't do it right, there is nothing ARC can do to help you.
In addition, you may have "lingering references", when an object remains in memory even though your program no longer needs it. Memory leaks of this kind can happen even in garbage-collected environments, such as Java and C#. They represent a logical error in design, and cannot be eliminated by clever compiler tricks in the current state of compiler technology.
This is when Xcode memory tools come in handy. You run them to check for memory leaks, ensuring that your code does not have cycles and "lingering references".
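The reference-cycle problem is not specific to Swift: C++ std::shared_ptr is reference counted in the same way, and a weak reference breaks the cycle just as Swift's weak does. A minimal sketch (class names are illustrative):

```cpp
#include <memory>

struct Person; // forward declaration

struct Apartment {
    // A strong back-reference (shared_ptr<Person>) would create a cycle:
    // neither reference count could ever reach zero, and both objects
    // would leak. A weak reference breaks the cycle, like Swift's `weak`.
    std::weak_ptr<Person> tenant;
};

struct Person {
    std::shared_ptr<Apartment> apartment; // strong reference
};

int main() {
    auto person    = std::make_shared<Person>();
    auto apartment = std::make_shared<Apartment>();
    person->apartment = apartment;
    apartment->tenant = person; // weak: does not keep person alive
    return 0;                   // both objects are destroyed correctly
}
```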

In what situations does static allocation fare better than dynamic allocation?

I was going through some of the decisions made in Xara Xtreme, an open-source SVG graphics application. Their memory management decision was quite intriguing to me, since I had naively taken for granted that on-demand dynamic allocation is the way to write object-oriented applications.
The explanation from the documentation is
How on earth can static allocations be efficient?
If you are used to large dynamic data structures, this may seem strange to you. Firstly, all our objects (and thus allocation size) are far smaller (on average) than each dynamic area allocation within a program such as Impression. This means that though there are likely to be many holes within memory, they are small. Also, we have far more allocated objects within memory, and thus these holes quickly get filled. Furthermore, virtual memory managers will free up any pages of memory that contain no allocations and give this memory back to the operating system so that it may be used again (either by us, or by another task).
We benefit greatly from the fact that whenever we allocate memory in this manner, we do not have to move any memory about. This proved a bottleneck in ArtWorks, which also had many small allocations being used concurrently.
In brief, the abundance of small objects and the need to avoid moving memory are the reasons given for choosing static allocation. I don't have a clear understanding of the reasons mentioned.
Though this talks about static allocation, what I see from a cursory look at the code is that a block of memory is dynamically allocated at application start and kept alive until the application ends, roughly simulating static allocation.
Could you explain in what situations static allocation fares better than on-demand dynamic allocation, such that it could be considered the main mode of allocation in a serious application?
It's quicker because you avoid the overhead of calling a system routine to manage your storage. malloc() maintains a heap, so every request requires a scan for an appropriately-sized block, possibly resizing the block, updating the block list to mark this block as used, etc. If you're allocating a lot of small objects, this overhead can be excessive. With static allocation you can create an allocation pool and just maintain a simple bitmap to show which areas are in use. This assumes that each object is the same size, so you commonly create one pool per object type.
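A minimal sketch of such a fixed-size pool with an in-use bitmap (the type and capacity are illustrative; a production pool would also handle thread safety and exhaustion policies):

```cpp
#include <bitset>
#include <cstddef>
#include <new>

template <typename T, std::size_t N>
class FixedPool {
public:
    // Scan the bitmap for a free slot; no heap walk, no size lookup.
    T* Allocate() {
        for (std::size_t i = 0; i < N; ++i) {
            if (!used_[i]) {
                used_[i] = true;
                return new (storage_ + i * sizeof(T)) T(); // placement new
            }
        }
        return nullptr; // pool exhausted
    }

    // Freeing is a destructor call plus a single bit flip;
    // no coalescing of adjacent blocks, no free-list management.
    void Free(T* p) {
        std::size_t i =
            (reinterpret_cast<char*>(p) - storage_) / sizeof(T);
        p->~T();
        used_[i] = false;
    }

private:
    alignas(T) char storage_[N * sizeof(T)]; // fixed arena, one object size
    std::bitset<N> used_;                    // simple in-use bitmap
};

struct Node { int value = 0; Node* next = nullptr; };

int main() {
    FixedPool<Node, 128> pool; // one pool per object type
    Node* n = pool.Allocate();
    n->value = 1;
    pool.Free(n);
    return 0;
}
```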
In short, there's really no such thing as static allocation other than the space allocated for your functions themselves and other read-only kinds of memory. (Do an assemble-only "gcc -S" and look for all the memory blocks, if you're interested.) If you're making and breaking objects, you're dynamically allocating. That being said, there's nothing to stop you from tightly controlling the allocation mechanism itself.
That's what functions like mallinfo() and mallopt() do for controlling how malloc() does its magic. However, that might not even be good enough for you. If you know all your chunks are going to be the same size, you can allocate and deallocate much more efficiently. And if you know you have 3 sizes of stuff, you can keep 3 arenas of memory each with their own allocator.
On top of this, you have the situation at runtime where the process doesn't have enough room and needs to ask the OS for more; that involves a system call, which is more expensive than just incrementing an array index. On Unix, it's usually brk() or sbrk() or the like, and that can take valuable time.
Another, rarer situation, would be if you need to multiply-allocate things. Like 3 threads need to share information and only when all 3 release it does it get freed. That's something nonstandard and not generally covered by typical mallopt() or even pthread-specific memory or mutex/semaphore-locked chunks.
So if you have high speed optimization issues or you are running on an embedded system where you need to squeeze all you can out of the available memory, then "static allocation", or at least controlling the allocation mechanism, may be the way to go.

Accessing memory outside what is allocated to the program (accessing another app's memory)

Is there a way to access (read or free) memory chunks that are outside the memory allocated for the program, without getting access violation exceptions?
What I would actually like to understand, apart from this, is how a memory cleaner (a system-wide garbage collector) works. I've always wanted to write such a program. (The language isn't an issue.)
Thanks in advance :)
No.
Any modern operating system will prevent one process from accessing memory that belongs to another process.
In fact, if you understood virtual memory, you'd understand that this is impossible: each process has its own virtual address space.
The simple answer (unless I'm mistaken) is no. Generally it's not a good idea, for two reasons. First, it causes a trust problem between your program and other programs (not to mention that we humans won't trust your application either). Second, if you were able to access another application's memory and make a change without the application knowing about it, you would cause that application to crash (viruses also do this).
A garbage collector is called from a runtime. The runtime "owns" the memory space and allows other applications to "live" within that memory space; this is why the garbage collector can exist. You would have to create a runtime that the OS allocates memory to, have the runtime execute the application under its authority, and use the GC under its authority as well. You would need to provide some instrumentation or API that lets the application developer "request" memory from your runtime (not the OS), and your runtime would have to not only respond to such requests but also keep track of the memory space it allocates to that application. You would probably need a framework (a set of DLLs) that makes these calls available to the application (the developer would use them to form the requests inside their application).
You also have to be sure that your garbage collector does not reclaim memory other than the memory used by the application being executed, as more than one application may be running within your runtime at the same time; a toy sketch of the core collection idea follows.
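Here is a toy mark-and-sweep collector showing that core idea (all names are illustrative; real collectors handle root discovery, finalization, and concurrency very differently):

```cpp
#include <vector>

struct Object {
    bool marked = false;
    std::vector<Object*> refs; // outgoing references to other objects
};

class ToyHeap {
public:
    Object* New() {                    // every allocation is tracked
        Object* o = new Object();
        all_.push_back(o);
        return o;
    }
    void AddRoot(Object* o) { roots_.push_back(o); }

    void Collect() {
        // Mark: everything reachable from the roots stays alive.
        for (Object* r : roots_) Mark(r);
        // Sweep: delete everything that was never marked.
        std::vector<Object*> live;
        for (Object* o : all_) {
            if (o->marked) { o->marked = false; live.push_back(o); }
            else delete o;
        }
        all_.swap(live);
    }

private:
    void Mark(Object* o) {
        if (o == nullptr || o->marked) return;
        o->marked = true;
        for (Object* child : o->refs) Mark(child);
    }
    std::vector<Object*> all_;   // every object the runtime handed out
    std::vector<Object*> roots_; // entry points into the object graph
};
```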
Hope this helps.
Actually, the right answer is yes: there are programs that do it (and if they exist, it means it is possible).
You may need to write a kernel driver to accomplish this, but it is possible.
And here is another example: a debugger's attach command. That is one program interacting with another program's memory, even though both started as different processes.
Of course, if you don't know what you're doing, messing with another program's memory will probably make it crash.
