Does QNX support copy-on-write for child processes?

On Linux, when a parent process creates a child with fork, the child initially shares the parent's memory mapping for the data segment.
When the child process modifies a variable, the kernel catches the write, allocates a new page, copies the original into it, and remaps that page into the child's address space, so the new value lands in the copy. This is the copy-on-write I am talking about.
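For illustration, a minimal C sketch of this behavior on Linux:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int shared_value = 42;              /* lives in the data segment */

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return EXIT_FAILURE; }
        if (pid == 0) {
            /* Child: this write faults; the kernel copies the page
               before applying the store (copy-on-write). */
            shared_value = 100;
            printf("child sees %d\n", shared_value);    /* 100 */
            return EXIT_SUCCESS;
        }
        wait(NULL);
        /* Parent still sees the original value in its own page. */
        printf("parent sees %d\n", shared_value);       /* 42 */
        return EXIT_SUCCESS;
    }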
QNX is a real-time OS, so my question is: does QNX have this mechanism too?


Does Erlang (and Elixir by extension) provide a way to remove atoms?

This question was closed as a duplicate of "How Erlang atoms can be garbage collected".
Can atoms be removed from a running Erlang/Elixir system?
Specifically, I am interested in how I would create an application server where modules, representing applications, can be loaded and run on demand and later removed.
I suppose it's more complicated than just removing the atom that represents the module in question, as the module may define further atoms that are difficult or impossible to track.
Alternatively, I wonder if a module can be run in isolation so that all references it produces can be effectively removed from a running system when it is no longer needed.
EDIT: To clarify, since SO thinks this question is answered elsewhere: the question does not relate to garbage collection of atoms, but to manual management thereof. To further clarify, here is my comment on Alex's answer below:
I have also thought about spinning up separate instances (nodes?), but that would be very expensive for on-demand, per-user applications. What I am trying to do is imitate how an SAP ABAP system works. One option may be to pre-emptively have a certain number of instances running, then restart each one when a request completes (again, pretty expensive though). Another may be to monitor the atom table of an instance and restart that instance when it is close to the limit.
The drawback I see with running several nodes/instances (although that is what an ABAP system has: several OS processes serving requests from users) is that you lose the ability to share cached bytecode between those instances. In an ABAP system, the bytecode cache (which they call a "load") is accessible to the different processes, so when a program is started it checks the cache first before fetching it from storage.
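If monitoring the atom table turns out to be the way to go, recent Erlang/OTP exposes the needed counters; a minimal sketch, assuming OTP 20 or later:

    %% Fraction of the atom table currently in use (OTP 20+).
    Usage = erlang:system_info(atom_count) / erlang:system_info(atom_limit),
    %% restart this instance when Usage crosses some threshold.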
Unfortunately not: atoms are not destroyed within the VM at all until the VM shuts down. The atom limit is also shared across processes, meaning that spawning a new process to handle atom allocation/deallocation won't work in your case.
You might have some luck spawning a completely separate VM instance by running a separate Erlang application and communicating to it through sockets, although I'm not sure how effective that will be.

How can I get the host application handle in a Delphi DLL project without passing a handle parameter

I have a DLL project and need to get the host application's handle. I can't have the host application pass its handle to the DLL, because the host application is not mine.
The host application runs on the second monitor, but when it calls my form (in the DLL), the form is shown on the first monitor. I need to detect the host application's screen coordinates, or at least whether it is running on the first or the second monitor.
You are looking for top-level windows in your process. Find them like so:
1. Call GetCurrentProcessId to obtain your process ID.
2. Call EnumWindows to enumerate all top-level windows.
3. In the enumeration callback, call GetWindowThreadProcessId for each top-level window to obtain the process ID that owns it. Any window whose process ID matches the one from step 1 belongs to your process.
The problem you face is that step 3 might identify multiple such windows. You can call GetWindow with GW_OWNER to obtain each window's owner and use that to trim down the field of candidates. What you perceive as the main window is likely to have no owner, whereas the other top-level windows may well be owned. Even this cannot be guaranteed to narrow the field to a single candidate, and you may well need some additional logic of your own.
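A minimal sketch of that enumeration in C (the calls map one-to-one onto Delphi's Windows unit; the structure and names here are illustrative):

    #include <windows.h>
    #include <stdio.h>

    typedef struct {
        DWORD pid;      /* our process ID */
        HWND  found;    /* best candidate so far */
    } EnumCtx;

    static BOOL CALLBACK EnumProc(HWND hwnd, LPARAM lParam) {
        EnumCtx *ctx = (EnumCtx *)lParam;
        DWORD windowPid = 0;
        GetWindowThreadProcessId(hwnd, &windowPid);
        /* Keep unowned top-level windows of our own process: the
           "main" window usually has no owner. */
        if (windowPid == ctx->pid && GetWindow(hwnd, GW_OWNER) == NULL) {
            ctx->found = hwnd;
            return FALSE;               /* stop enumerating */
        }
        return TRUE;                    /* keep enumerating */
    }

    int main(void) {
        EnumCtx ctx = { GetCurrentProcessId(), NULL };
        EnumWindows(EnumProc, (LPARAM)&ctx);
        if (ctx.found)
            printf("candidate main window: %p\n", (void *)ctx.found);
        return 0;
    }

With a candidate window in hand, MonitorFromWindow can then tell you which monitor the host is displayed on.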

How to know whether a core has already been bound

    /* Bind the calling process to core (MyRank % 4). */
    hwloc_obj_t obj = hwloc_get_obj_by_depth(topology, depth_CORE, MyRank % 4);
    hwloc_cpuset_t cpuset = hwloc_bitmap_dup(obj->cpuset);
    hwloc_set_cpubind(topology, cpuset, 0);
Is there any way in hwloc to know whether a thread has already been bound to that cpuset?
Reason why I need to know this:
Suppose I have a quad-core machine but launch 8 processes at runtime, so two processes end up bound to each core. I want to bind a process to a core only when that core is free. So, is there any way to know whether a core already has a process bound to it?
It seems that the only way to do so is to enumerate all processes and check whether some of them are bound to the specified core. You can get an idea of how to do that by examining the source code of the hwloc-ps utility. It reads through the /proc filesystem, extracts process PIDs from there, and then uses hwloc_get_proc_cpubind() to obtain each binding mask. This should work on Linux and Solaris, as well as on *BSD systems with a mounted /proc. On Windows one should use the system-specific API from the Tool Help library to obtain the list of PIDs. OS X does not support processor affinity.
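A sketch of that approach on Linux, modeled on what hwloc-ps does (the helper name is made up):

    #include <hwloc.h>
    #include <sys/types.h>
    #include <dirent.h>
    #include <ctype.h>
    #include <stdlib.h>

    /* Return 1 if some process's binding intersects core_set. */
    static int core_in_use(hwloc_topology_t topology,
                           hwloc_const_cpuset_t core_set) {
        DIR *dir = opendir("/proc");
        struct dirent *entry;
        hwloc_cpuset_t bound = hwloc_bitmap_alloc();
        int in_use = 0;
        while (!in_use && (entry = readdir(dir)) != NULL) {
            if (!isdigit((unsigned char)entry->d_name[0]))
                continue;                       /* not a PID entry */
            pid_t pid = (pid_t)atol(entry->d_name);
            /* Ask hwloc for this process's binding mask. */
            if (hwloc_get_proc_cpubind(topology, pid, bound, 0) == 0 &&
                hwloc_bitmap_intersects(bound, core_set))
                in_use = 1;
        }
        closedir(dir);
        hwloc_bitmap_free(bound);
        return in_use;
    }

Beware that a process which was never bound reports the whole machine's cpuset, which intersects every core; you probably want to skip masks equal to hwloc_topology_get_topology_cpuset(topology) first.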

erlang inter-process lock mechanism (such as flock)

Does Erlang have an inter-process (I mean Linux or Windows processes) lock mechanism such as flock?
The usage would be as follows:
an Erlang server starts serving a repository and takes a file lock (or whatever);
if another OS process (another Erlang server or a command-line Erlang script) interacts with the repo, the file lock warns about a possible conflict.
If you mean between Erlang processes: no, Erlang has no inter-process lock mechanism. That is not the Erlang way of controlling access to a shared resource. Generally, if you want to control access to a resource, you have an Erlang process that manages the resource, and all access to the resource goes through this process. This means there is no need for inter-process locks or mutexes to control access. It is also safe: clients can't "cheat" and access the resource anyway, and the managing process can detect if a client dies in the middle of a transaction.
In Erlang, you would probably use a different way of solving this. One thing that comes to mind is to keep a single Erlang node() which handles all the repositories. It has a lock_mgr process which does the resource lock management.
When another node or escript wants to run, it can connect to the running Erlang node over distribution and request the locking.
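A minimal sketch of such a lock_mgr-style manager (module and message names are made up):

    %% All repository access is funneled through this single process,
    %% so operations are serialized without any explicit lock.
    -module(lock_mgr).
    -export([start/0, with_repo/2]).

    start() ->
        register(?MODULE, spawn(fun loop/0)).

    %% Run Fun(Repo) with exclusive access to the repository.
    with_repo(Repo, Fun) ->
        ?MODULE ! {exec, self(), Repo, Fun},
        receive {result, Res} -> Res end.

    loop() ->
        receive
            {exec, From, Repo, Fun} ->
                From ! {result, Fun(Repo)},
                loop()
        end.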
There is also the global module, which could fit your needs.
global:set_lock/1,2,3
Sets a lock on the specified nodes (or on all nodes if none are specified) on ResourceId for LockRequesterId.
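For example (the resource name is illustrative; set_lock/1 retries until the lock is granted):

    Id = {{repo, "/path/to/repo"}, self()},
    Work = fun() -> ok end,          % placeholder for the real repo work
    global:set_lock(Id),             % blocks (retries) until granted
    try Work() after global:del_lock(Id) end.

global:trans/2 bundles this acquire-run-release pattern into a single call.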

Is there a NUMA next-touch policy in modern Linux?

When working on a NUMA system, memory can be local or remote relative to the current NUMA node.
To make memory more local there is a "first-touch" policy (the default memory-to-node binding strategy):
http://lse.sourceforge.net/numa/status/description.html
Default Memory Binding
It is important that user programs' memory is allocated on a node close to the one containing the CPU on which they are running. Therefore, by default, page faults are satisfied by memory from the node containing the page-faulting CPU. Because the first CPU to touch the page will be the CPU that faults the page in, this default policy is called "first touch".
http://techpubs.sgi.com/library/dynaweb_docs/0640/SGI_Developer/books/OrOn2_PfTune/sgi_html/ch08.html
The default policy is called first-touch. Under this policy, the process that first touches (that is, writes to, or reads from) a page of memory causes that page to be allocated in the node on which the process is running. This policy works well for sequential programs and for many parallel programs as well.
There are also some other, non-local policies, and there is a function to request the explicit move of a memory segment to some NUMA node.
But sometimes (in the context of the many threads of a single application) it can be useful to have a "next-touch" policy: call some function to "unbind" a memory region (up to hundreds of MB) of data and reapply a "first-touch"-like handler on it, so that the next touch (read or write) migrates each page to the NUMA node of the accessing thread.
This policy is useful when there is a huge amount of data to be processed by many threads and the access patterns change between phases (e.g. in the first phase the 2D array is split between threads by columns; in the second, the same data is split by rows).
Such a policy has been supported in Solaris since version 9 via madvise with the MADV_ACCESS_LWP flag:
https://cims.nyu.edu/cgi-systems/man.cgi?section=3C&topic=madvise
MADV_ACCESS_LWP: Tell the kernel that the next LWP to touch the specified address range will access it most heavily, so the kernel should try to allocate the memory and other resources for this range and the LWP accordingly.
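On Solaris the hint is a one-liner; a sketch (Solaris-specific: MADV_ACCESS_LWP does not exist on Linux):

    #include <sys/types.h>
    #include <sys/mman.h>

    /* After this call, the next thread (LWP) to touch the range will
       have the pages allocated/migrated close to it. */
    void mark_next_touch(void *buf, size_t len) {
        madvise((caddr_t)buf, len, MADV_ACCESS_LWP);
    }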
There was a patch to the Linux kernel (May 2009) named "affinity-on-next-touch", http://lwn.net/Articles/332754/ (thread), but as I understand it, it was not accepted into the mainline, was it?
Also there were Lee Schermerhorn's "migrate_on_fault" patches http://free.linux.hp.com/~lts/Patches/PageMigration/.
So, the question: is there some next-touch policy for NUMA in the current vanilla Linux kernel, or in some major fork such as the Red Hat or Oracle Linux kernels?
To my understanding, there is nothing similar in the vanilla kernel. numactl/libnuma can migrate pages manually, but that is probably not helpful in your case. (The NUMA policy description is in Documentation/vm/numa_memory_policy if you want to check yourself.)
I think those patches were never merged, as I don't see any of the relevant code snippets in the current kernel.
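The closest vanilla-Linux substitute is manual migration, e.g. via move_pages(2) from libnuma, where the caller (rather than the next accessing thread) decides the destination. A sketch with an illustrative target node:

    #include <numaif.h>
    #include <stdio.h>

    /* Move one page to NUMA node 1; compile with -lnuma.
       Returns the node the page ended up on, or a negative errno. */
    int migrate_one_page(void *addr) {
        void *pages[1]  = { addr };
        int   nodes[1]  = { 1 };        /* target node (illustrative) */
        int   status[1] = { -1 };
        if (move_pages(0 /* self */, 1, pages, nodes, status,
                       MPOL_MF_MOVE) != 0) {
            perror("move_pages");
            return -1;
        }
        return status[0];
    }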
