MIPS32 - Deallocate memory

Suppose I have a linked list in MIPS32 and at some point I want to remove one of the nodes.
What I do is make the predecessor node point to the successor of the removed node.
However, the removed node still contains some data. So the question is: how do I find out whether that node is usable in the future or not?
One suggestion was to create a second linked list containing all the nodes that are usable. However, how would I go about implementing such a list? Also, do you think this list should point to all the usable space in memory or just to the removed nodes?
Is there any other better way to accomplish the same result?
Solution:
Whenever we "ask" for new memory we use the sbrk service via syscall. However, if we've removed something from our data structure, we may want to reuse that part of memory.
So a solution could be to keep a linked list of nodes that can be reused. Whenever we remove something from our data structure, we add that piece of memory (i.e. a node) to the linked list that keeps track of reusable memory.
Therefore, when we have to add something to our data structure, we first check whether there is a reusable node in our "memory linked list". If not, we fall back to sbrk as usual.
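The scheme above can be sketched in C. This is a minimal sketch: in the actual MIPS version the free-list head would live in a static word or register, and the fallback allocation would be the sbrk service (syscall 9) instead of malloc. The node layout is an assumption.

```c
#include <stdlib.h>

/* Hypothetical node layout: one word of data plus a next pointer,
   mirroring what the MIPS code would lay out with sbrk. */
struct node {
    int data;
    struct node *next;
};

/* Head of the free list: nodes removed from the main structure. */
static struct node *free_list = NULL;

/* Allocate a node, preferring the free list over fresh memory.
   malloc stands in here for the sbrk syscall. */
static struct node *alloc_node(void) {
    if (free_list != NULL) {            /* reuse a removed node first */
        struct node *n = free_list;
        free_list = n->next;
        return n;
    }
    return malloc(sizeof(struct node)); /* otherwise "sbrk" new memory */
}

/* Instead of discarding a removed node, push it onto the free list. */
static void release_node(struct node *n) {
    n->next = free_list;
    free_list = n;
}
```

Note that sbrk in SPIM/MARS never shrinks the heap, which is exactly why recycling removed nodes through a free list is the practical answer.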


Related

Locking nodes when deleting from threaded linked list

I'm new to pthreads and I need to safely delete nodes from a linked list that is shared by all threads. I am not completely understanding when to lock and unlock a node. This is what I have so far for removing a node that is the head. I lock the head before it is accessed (it is accessed in the while condition) but when do I unlock it?
When deleting a node, you can't just lock the node itself: because you're changing the pointer to that node, which is stored outside of the node, you need to protect that pointer from concurrent access.
In other words, you can't use head->lock to protect head, because lock is inside the node, and the pointer head itself is not. For example, you could have a lock declared alongside head called head_lock.
This also affects how your code that adds to and looks up the list works - that code needs to lock head_lock while it accesses the head pointer, too.
Whether or not you should just rely on the single head_lock to protect the entire list, or also have individual per-node locks depends on how you use the list nodes and the amount of contention there is for access to the list.
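The point about keeping the lock outside the node can be sketched as follows (a minimal sketch; the names head_lock, remove_head, and push_front are illustrative):

```c
#include <pthread.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
    pthread_mutex_t lock;   /* per-node lock, if finer grain is needed */
};

/* The head pointer and the lock that protects it live together,
   outside any node. */
static struct node *head = NULL;
static pthread_mutex_t head_lock = PTHREAD_MUTEX_INITIALIZER;

/* Remove the first node: head_lock protects the head pointer itself,
   which no per-node lock could do. */
static struct node *remove_head(void) {
    pthread_mutex_lock(&head_lock);
    struct node *old = head;
    if (old != NULL)
        head = old->next;
    pthread_mutex_unlock(&head_lock);
    return old;             /* caller frees once no one else can see it */
}

/* Insertion must take the same lock, since it also writes head. */
static void push_front(struct node *n) {
    pthread_mutex_lock(&head_lock);
    n->next = head;
    head = n;
    pthread_mutex_unlock(&head_lock);
}
```

Any code path that reads or writes head must go through head_lock; the per-node lock only ever protects fields stored inside that node.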

lua_xmove between different lua states

According to the lua 5.1 manual, lua_xmove moves values between stacks of different threads belonging to the same Lua state. But, I accidentally happened to use it to move values across different Lua states and it seemed to work fine! Is there any other API to move values from one Lua state to another (in 5.1), or can lua_xmove be used?
Lua stores garbage collection data in the global state. So, if you move GC or string objects across states, you can potentially confuse the garbage collector and create dangling references.
So, while it might look like it works, it could just as easily cause problems later on.
For reference, see this mailing list thread where developers discuss this exact issue.
Note that lua_xmove does check that the global states are the same:
api_check(from, G(from) == G(to));

How to revert changes with procedural memory?

Is it possible to store all changes of a set by using some means of logical paths - of the changes as they occur - such that one may revert the changes by essentially "stepping back"? I assume that something would need to map the changes as they occur, and the process of reverting them would thus ultimately be linear.
Apologies for any incoherence; this isn't specific to any particular language. Rather, it's a problem of memory, i.e. can a set (e.g. some store of user input) of a finite size that's changed continuously (e.g. at any given time, for any amount of time; there's no limit to how much it can be changed) be mapped procedurally such that new, future changes are assumed to be the consequence of prior change (in a second, mirror store that can be used to revert the state of the set all the way to its initial state)?
You might want to look at functional data structures. Functional languages, like Erlang, make it easy to roll back to an earlier state, since changes are always made on new data structures instead of mutating existing ones. While this feature can be used repeatedly internally, Erlang programming typically uses it abundantly at the top level of a "process", so that on any kind of failure it aborts both the processing and all the changes in their entirety simply by throwing an exception. (In a non-functional language using mutable data structures, you'd be able to throw an exception to abort, but restoring the originals would be your program's job, not the runtime's.) This is one reason Erlang has a solid reputation.
Some of this functional style of programming is usefully applied to non-functional languages, in particular, use of immutable data structures, such as immutable sets, lists, or trees.
Regarding immutable sets, for example, one might design a functionally-oriented data structure where modifications always generate a new set given some changes and an existing set (a change set consisting of additions and removals). You'd leave the old set hanging around for reference (by whomever); languages with automatic garbage collection reclaim the old ones when they're no longer being used (referenced).
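The structural-sharing idea can be sketched in C with a persistent list used as a set (a minimal sketch; real persistent sets would use balanced trees for efficiency, and the names cell, set_add, and set_member are illustrative):

```c
#include <stdlib.h>

/* A persistent "set" as an immutable linked list: adding never mutates
   an existing cell, it allocates a new head that shares the old tail.
   Old versions stay valid as long as anyone holds a pointer to them. */
struct cell {
    int elem;
    const struct cell *rest;
};

/* Returns a NEW set; the old set s is left completely untouched. */
static const struct cell *set_add(const struct cell *s, int elem) {
    struct cell *c = malloc(sizeof *c);
    c->elem = elem;
    c->rest = s;            /* structural sharing with the old version */
    return c;
}

static int set_member(const struct cell *s, int elem) {
    for (; s != NULL; s = s->rest)
        if (s->elem == elem)
            return 1;
    return 0;
}
```

Every intermediate version is itself a valid set, so "stepping back" is just keeping the older pointer around.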
You can put an id or tag into your set data structure; this way you can do some introspection to see which data structure id someone has a hold of. You can also capture the id of the base from which each new version was generated; this gives you some history or lineage.
If desired, you can also capture a reference to the entire old data structure in the new one, or maintain a global list of all the sets as they are generated. If you do, however, you'll have to take over more responsibility for storage management, as an automatic collector will probably not find any unused (unreferenced) garbage to collect without some additional help.
Database designs do some of this in their transaction controllers. For the purposes of your question, you can think of a database as a glorified set. You might look into MVCC (Multi-Version Concurrency Control) as one example that is reasonably well written up in the literature. This technique keeps old snapshot versions of data structures around (temporarily), meaning that mutations always appear in new versions of the data. An old snapshot is maintained until no active transaction references it; it is then discarded.
When two concurrently running transactions both modify the database, they each get a new version based off the same current and latest data set. (The transaction controller knows exactly which version each transaction is based off of, though the transaction's client doesn't see the version information.) Assuming both concurrent transactions choose to commit their changes, the versioning control in the transaction controller recognizes that the second committer is trying to commit a change set that is not a logical successor to the first (since both change sets, as postulated above, were based on the same earlier version). If possible, the transaction controller will merge the changes as if the 2nd committer was really working off the other, newer version committed by the first committer. (There are varying definitions of when this is possible; MVCC says it is when there are no write conflicts, which is a less-than-perfect answer but fast and scalable.) But if not possible, it will abort the 2nd committer's transaction and inform the 2nd committer thereof (they then have the opportunity, should they like, to retry their transaction starting from the newer base).
Under the covers, the various snapshot versions in flight for concurrent transactions will probably share the bulk of the data (with some transaction-specific change sets that are consulted first) in order to make the snapshots cheap.
There is usually no API provided to access older versions, so in this domain, the transaction controller knows that as transactions retire, the original snapshot versions they were using can also be (reference counted and) retired.
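The commit/abort logic described above can be reduced to a toy version check (a minimal sketch of optimistic versioning, not any real database's API; all names are illustrative, and a real MVCC engine tracks per-row versions and write sets rather than one global counter):

```c
/* Toy optimistic versioning: each transaction records the version it
   read; commit succeeds only if nobody committed in between, otherwise
   the caller must retry from the newer base. */
static int db_value = 0;
static int db_version = 0;

struct txn {
    int base_version;  /* version the transaction started from */
    int new_value;     /* pending write */
};

static struct txn txn_begin(void) {
    struct txn t = { db_version, db_value };
    return t;
}

static int txn_commit(struct txn *t) {
    if (t->base_version != db_version)
        return 0;               /* stale base: abort, caller retries */
    db_value = t->new_value;
    db_version++;               /* every commit makes a new version */
    return 1;
}
```

The second committer's change set is "not a logical successor" precisely when its base_version no longer matches the current version.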
Another area this is done is using Append-Only-Files. Logging is a way of recording changes; some databases are based 100% on log-oriented designs.
BerkeleyDB has a nice log structure. Though used mostly for recovery, it contains all the history, so you can recreate the database from the log (up to the point where you purge the log, at which time you should also archive the database). Again, someone has to decide when they can start a new log file and when they can purge old log files, which you'd do to conserve space.
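The replay idea behind such logs fits in a few lines (a minimal sketch, not BerkeleyDB's format; the state here is a single integer and the deltas are additions, purely for illustration):

```c
/* A minimal append-only log: every change is recorded as a delta, and
   the state as of any point can be rebuilt by replaying a prefix. */
#define LOG_CAP 128

static int log_delta[LOG_CAP];
static int log_len = 0;

/* Apply a change to the live state AND append it to the log. */
static void log_apply(int *state, int delta) {
    log_delta[log_len++] = delta;   /* append, never overwrite */
    *state += delta;
}

/* Recreate the state as of entry n by replaying the first n deltas. */
static int log_replay(int n) {
    int state = 0;
    for (int i = 0; i < n; i++)
        state += log_delta[i];
    return state;
}
```

Replaying a shorter prefix is exactly the "stepping back" from the original question; purging the log trades that ability for space.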
These database techniques can be applied in memory as well. (Nothing is free, though, of course ;)
Anyway, yes, there are fields where this is done.
Immutable data structures help preserve history, by simply keeping old copies; changes always go to new copies. (And efficiency techniques can make this not as bad as it sounds.)
Id's can help understand lineage without necessarily holding onto all the old copies.
If you do want to hold onto all the old copies, you have to look at your domain design to understand when/how/if old data structures can be accessed, with an eye toward how to eventually reclaim them. You'll most likely have to get involved in defining how they get released, if ever; or how they get archived for posterity, though at the cost of slower access later.

List all living nodes / actions / animations in a Cocos2D app

Is it possible to list all living nodes (actions and animations are of interest to me too) in a Cocos2D app?
Currently I am fighting memory issues in an app and even though profiler helps with that I would like to try other approaches too.
You can recursively list all child nodes; the start node will be your scene. For actions, I know that you can get the number of actions for a given node, but I don't know if it is possible to list all the actions in some way.
Also, you may use CCTextureCache to check whether all unused textures have already been removed from memory. It has no public methods to access this data, but you can see the loaded texture names in the debugger or add some dumping method.
To prevent a memory leak caused by scheduling an action on a node that you want to remove from its parent, send the cleanup message to all nodes before removing them from their parent. Or, if it is an instance of your own class, call [self cleanup]; in its onExit method.
I don't think you can retrieve the list of all created nodes. That sounds like garbage collection in .NET =) In Objective-C you must watch for leaked objects yourself.

Understanding file mapping

I'm trying to understand mmap and was given the following link to read:
http://duartes.org/gustavo/blog/post/page-cache-the-affair-between-memory-and-files
I understand the text in general and it makes sense to me. But at the end there is a paragraph that I don't really understand, or that doesn't fit my understanding.
The read-only page table entries shown above do not mean the mapping is read only, they’re merely a kernel trick to share physical memory until the last possible moment. You can see how ‘private’ is a bit of a misnomer until you remember it only applies to updates. A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from. Once copy-on-write is done, changes by others are no longer seen. This behavior is not guaranteed by the kernel, but it’s what you get in x86 and makes sense from an API perspective. By contrast, a shared mapping is simply mapped onto the page cache and that’s it. Updates are visible to other processes and end up in the disk. Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
The following two lines don't make sense to me.
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
It is private. So it can't see changes by others!
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I don't know what the author means by this. Is there a flag "MAP_READ_ONLY"? Until a write occurs, every pointer from the program's virtual pages to the page-table entries in the page cache is read-only.
Can you help me to understand this two lines?
Thanks
Update
It seems I got it, with some help.
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
Although the mapping is private, the virtual page really can see changes made by others, until the process itself modifies the page. The modification then becomes private and is only visible to the writing program.
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I'm told that pages themselves can also have permissions (read/write/execute).
Tell me if I'm wrong.
This fragment:
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
is telling you that the kernel cheats a little bit in the name of optimization. Even though you've asked for a private mapping, the kernel will actually give you a shared one at first. Then, if you write the page, it becomes private.
Observe that this "cheating" doesn't matter (doesn't make any difference) if all processes which are accessing the file are doing it with MAP_PRIVATE, because no actual changes to the file will ever occur in that case. Different processes' mappings will simply be upgraded from "fake cheating MAP_PRIVATE" to true "MAP_PRIVATE" at different times according to whenever each process first writes to the file. This is probably a common scenario. It's only if the file is being concurrently updated by other means (MAP_SHARED with PROT_WRITE or else regular, non-mmap I/O operations) that it makes a difference.
I'm told that pages itself can also have permissions (read/write/execute).
Sure, they can. You have to ask for the permissions you want when you initially map the file, in fact: the third argument to mmap, which will be a combination of PROT_READ, PROT_WRITE, PROT_EXEC, and PROT_NONE.
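Both points can be demonstrated with a small POSIX sketch (the file path is illustrative): the mapping below asks for PROT_READ | PROT_WRITE with MAP_PRIVATE, so writing through it is legal but triggers copy-on-write, and the write never reaches the file.

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Returns 1 if a write through a MAP_PRIVATE mapping stayed private
   (copy-on-write) and did not modify the underlying file. */
static int private_write_stays_private(void) {
    const char *path = "/tmp/mmap_demo";   /* illustrative path */
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0 || write(fd, "hello", 5) != 5)
        return 0;

    /* Writable pages, but private: faults copy instead of sharing. */
    char *p = mmap(NULL, 5, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 0;

    p[0] = 'H';                 /* copy-on-write happens here */

    char buf[5];
    int ok = pread(fd, buf, 5, 0) == 5
          && buf[0] == 'h'      /* file is unchanged */
          && p[0] == 'H';       /* only our private copy changed */

    munmap(p, 5);
    close(fd);
    unlink(path);
    return ok;
}
```

Had the mapping been created with PROT_READ only, the store `p[0] = 'H'` would instead deliver SIGSEGV: the page fault is resolved as copy-on-write only when the mapping's permissions allow writing.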
