How to make a custom SIGRTMIN user signal unique? - pthreads

I'm writing a library and need to send a custom signal to threads. I used signal(SIGRTMIN, handler); and all works fine.
Since this is a library, I'm worried that someone who uses my library will also use SIGRTMIN, so is there any way in Linux/POSIX threads to get a unique SIGRTMIN id?
Currently my way to solve this is to add a magic number, like SIGRTMIN + MAGIC_NUMBER, to reduce the chance of duplicate signals, but I wondered if there's a better solution.

There is no viable way to programmatically identify an otherwise unused signal number. Fixed signal numbers used by a library should be considered part of its API, and it is up to the library client to avoid clashes.
However, it should be possible to set up a mechanism by which the library client can register its choice of signal number with the library at runtime. That could be in the form either of an absolute number or an offset from SIGRTMIN. The client would be expected to call a function to specify the signal number, and the library could register the handler to listen for the caller-specified signal.
Note also, by the way, that you should always prefer sigaction() over signal() for registering signal handlers. Several important details of the latter's behavior are not well defined and do vary from implementation to implementation, but sigaction() allows the programmer to specify all the relevant details.
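For illustration, the runtime-registration idea might look like the following minimal sketch; mylib_set_signal() and mylib_handler() are hypothetical names, not an existing API:

#include <signal.h>

static int mylib_signo = -1;

static void mylib_handler(int signo)
{
    (void)signo;
    /* do only async-signal-safe work here */
}

/* The library client picks the signal, e.g. mylib_set_signal(SIGRTMIN + 3). */
int mylib_set_signal(int signo)
{
    struct sigaction sa;

    if (signo < SIGRTMIN || signo > SIGRTMAX)
        return -1;                     /* not a real-time signal */

    sa.sa_handler = mylib_handler;
    sa.sa_flags = 0;
    sigemptyset(&sa.sa_mask);
    if (sigaction(signo, &sa, NULL) != 0)
        return -1;                     /* errno describes the failure */

    mylib_signo = signo;
    return 0;
}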

Related

Is there a formal way to zeroize in CANopen

I have a system with multiple subsystems communicating with CANopen. There is a main unit with a screen (for the man-machine interface and such) and sub-units for minor operations (like sampling button status, managing power, taking measurements...).
We defined a CANopen-based communication protocol for this system. Subsystems share their conditions periodically with TPDO messages and do stuff according to the main unit's commands sent with RPDO messages. Some NMTs are in use too.
So I've been asked to add a new command to this protocol: zeroize. This command shall be sent broadcast and it shall cause everybody to delete their software. What is the right way to do this?
Maybe I can use an RPDO? Are we allowed to define new NMT commands in CANopen? Maybe I can do it with NMT but by using a new command that is not in use already?
Thanks in advance
Ip.
It is a bit confusing what you mean by TPDO and RPDO, since the main unit's TPDO is going to be the peripheral units' RPDO and vice versa. But yes, the correct way to send out some custom broadcast message would be with a PDO.
Although, depending on what you mean by "delete software", CANopen might provide a means for it. There are the save (OD 1010h) and load (OD 1011h) registers in the object dictionary. Save is used for storing all CANopen communication parameters (PDO configuration, mapping, etc.) in non-volatile memory, and load is used to restore the CANopen parameters to factory defaults. These should, however, not be used to save/load application-specific settings.
You are not allowed to define new NMT commands.
Objects 1010h and 1011h can be used to reset the values in the object dictionary. If you really want to delete the software, the firmware upgrade protocol from CiA 302-3 might help. Writing 00h (Stop program) followed by 03h (Clear program) to object 1F51h sub-index 1 on the slave will delete the application. Whether it's actually "zeroed out" depends on the implementation. You'll need two SDO requests per slave for this though. The standard specifies that object 1F51h cannot be PDO mapped. Although that requirement may not be enforced for your devices, in which case you could achieve broadcast "zeroing" with two PDOs.
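For illustration, the two expedited SDO downloads to 1F51h might be built like this; can_send() is a hypothetical driver function, and waiting for each SDO response (COB-ID 580h + node ID) is omitted for brevity:

#include <stdint.h>

/* Hypothetical CAN driver call: COB-ID, data length, payload. */
extern int can_send(uint16_t cob_id, uint8_t len, const uint8_t *data);

/* Expedited SDO download of one byte to object 1F51h, sub-index 1. */
static int sdo_write_1f51(uint8_t node_id, uint8_t value)
{
    uint8_t frame[8] = {
        0x2F,        /* command: expedited download, 1 data byte */
        0x51, 0x1F,  /* index 1F51h, little-endian */
        0x01,        /* sub-index 1 */
        value, 0x00, 0x00, 0x00
    };
    return can_send(0x600 + node_id, 8, frame);
}

/* Stop, then clear the program on one slave. */
int zeroize_slave(uint8_t node_id)
{
    if (sdo_write_1f51(node_id, 0x00) != 0)   /* 00h: Stop program  */
        return -1;
    return sdo_write_1f51(node_id, 0x03);     /* 03h: Clear program */
}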

Is the process dictionary appropriate in this case?

I've read several comments here and elsewhere suggesting that Erlang's process dictionary was a bad idea and should die. Normally, as a total Erlang newbie, I'd just avoid it. However, in this situation my other options aren't great.
I have a main dispatcher function that looks something like this:
dispatch(State) ->
    receive
        {cmd1, Params} ->
            NewState = do_cmd1_stuff(Params, State),
            dispatch(NewState);
        {cmd2, Params} ->
            NewState = do_cmd2_stuff(Params, State),
            dispatch(NewState);
        BadMsg ->
            log_error(BadMsg),
            dispatch(State)
    end.
Obviously, my names are more meaningful to me, but that's the gist of it. Deep down in a function called by a function called by a function called by do_cmd2_stuff(), I want to send out messages to all my users telling them about something I've done. In order to do that, I need access to the list of users at the point where I send the messages. The user list doesn't lend itself easily to sticking in the global state, since that state is just one data structure representing the only block of data on which I operate.
The way I see it, I have a couple of unpleasant options other than using the process dictionary. I can send the user list down through all the various levels of functions to the very bottom one that does the broadcasting. That's unpleasant because it causes all my functions to gain a parameter, whether they really care about it or not.
Alternatively, I could have all the do_cmdN_stuff() functions return a message to send. That's not great either, though, since sending the message may not be the last thing I want to do, and it clutters up my dispatcher with a bunch of {Msg, NewState} tuples. Furthermore, some of the functions might not have any messages to send some of the time.
Like I said earlier, I'm very new to Erlang. Maybe someone with more experience can point me at a better way. Is there one? Is the process dictionary appropriate in this case?
The general rule is that if you have doubts, you shouldn't use the process dictionary.
If the two options you mentioned aren't good enough (I personally like the one where you return the messages to send) and what you want is some particular piece of code to track users and forward messages to them, maybe what you want to do is have a process holding that info.
Pid ! {forward, Msg}
where Pid will take care of sending everything to a bunch of other processes. Now, you would still need to pass the Pid around, unless you give it a name in some registry so it can be found: either with register/2, global, or gproc.
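A minimal sketch of such a process, assuming illustrative message shapes and the registered name broadcaster:

-module(broadcaster).
-export([start/0, loop/1]).

start() ->
    Pid = spawn(?MODULE, loop, [[]]),
    register(broadcaster, Pid),
    Pid.

loop(Users) ->
    receive
        {subscribe, User} ->
            loop([User | Users]);
        {forward, Msg} ->
            [User ! Msg || User <- Users],
            loop(Users)
    end.

Deep inside the call stack, the broadcasting code then only needs broadcaster ! {forward, Msg}.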
A simple answer would be to nest your global data within a state record, which is then threaded through the system, at least at the top level. This makes it easy to add new fields to the state in the future, not an uncommon occurrence, and allows you to keep your global state data structure untouched. So initially:
-record(state, {users=[],state_data}).
Defining it as a record makes it easy to access and extend when necessary.
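For illustration (the function names here are hypothetical), reading and updating the nested record might look like:

broadcast(#state{users = Users} = St, Msg) ->
    [User ! Msg || User <- Users],
    St.

add_user(#state{users = Users} = St, User) ->
    St#state{users = [User | Users]}.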
As you mentioned, you can always pass the user list as an extra parameter; that's not so bad.
If you don't want to do this, just put it in State. You can have a special State just for this part of the calculation that also contains the user list.
Then there is always the possibility of putting it in ETS or in another server process.
What exactly to do is hard to recommend, since it depends a lot on your exact application and preferences.
Just choose from the mentioned possibilities as if the process dictionary didn't exist. Maybe your code needs restructuring if none of the variants looks elegant; there is always some better way without the process dictionary.
It's really bad that it is still there, because it's alluring to many beginning Erlang users.
You really should not use the process dictionary. I accept using the dictionary only if:
1. It is a short-lived process.
2. I have full control over the process from spawn to termination, i.e. I use a minimal and well-known set of external modules.
3. I badly need the performance gain, i.e. I must avoid the data copying that comes with ETS, and dict/gb_trees are too slow (for GC reasons).
Regarding 1: this is not your case, since you are using it in a server. Regarding 2: I don't know if it is your case. Regarding 3: it is not your case, because you need a list of recipients, so you gain nothing from the process dictionary being a very fast key/value storage. In your case I don't see any reason why you should not include what you need in your State. IMHO State is exactly the right place for it.
It's an interesting question because it involves the fundamentals of functional design.
My opinion:
Try as much as possible to make the function return the messages, then send them. This separates the two tasks nicely and keeps the purely functional part apart from the one that causes side effects.
If this isn't possible, pass the receivers as an argument, even if it's a bit messy. If the broadcasting function uses that data, it should be given to it explicitly, for clarity and predictability.
Using ETS as Peer Stritzinger suggests is really not any better than the PD; both hide the fact that the broadcasting function uses the receiver list and make it dependent on global data.
I'm not sure about the Erlang way of encapsulating some state in a process, as I GIVE TERRIBLE ADVICE suggests. Is it really any better than ETS or the PD?
"clutters up my dispatcher with a bunch of {Msg, NewState}"
This is my experience also; you often end up like this. It's not particularly pretty, but functional design seems to encourage it. Could some language feature be introduced to make it more beautiful and natural?
EDIT:
6 years ago I wrote:
Could some language feature be introduced to make it more beautiful and natural?
After learning much more about functional programming, I have realised that examples of this are the state monads and do-notation found in Haskell.
I would consider sending a special message to self() from deep inside the call stack, and handling it at the top-level dispatch function that you've sketched, where the list of users is available.
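A rough sketch of that idea, with illustrative names, assuming the user list lives in the top-level loop:

notify_all(Msg) ->
    self() ! {broadcast, Msg}.

dispatch(Users, State) ->
    receive
        {broadcast, Msg} ->
            [User ! Msg || User <- Users],
            dispatch(Users, State);
        Other ->
            io:format("unexpected message: ~p~n", [Other]),
            dispatch(Users, State)
    end.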

Distributed message passing in D?

I really like the message passing primitives that D implements. I have only seen examples of message passing within a program though. Is there support for distributing the messages over e.g. a network?
The message passing functions are in std.concurrency, which only deals with threads, so that type of message passing works between threads in a single process only. There is no RMI or anything like that in Phobos. That's not to say that we'll never get something like that in Phobos (stuff is being added to Phobos all the time), but it doesn't exist right now.
There is, however, the std.socket module, which deals with talking to sockets and is obviously network-related. I haven't used it myself, but it looks like it sends and receives void[]. So it's not as nice as sending immutable objects around like you do with std.concurrency, but it does allow you to do network communication via sockets, and presumably in a much nicer manner than if you were using C calls directly.
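As a minimal sketch of the in-process messaging std.concurrency does provide today (the worker/owner roles here are just an example):

import std.concurrency;
import std.stdio;

void worker(Tid owner)
{
    // Block until a string arrives, then answer with its length.
    auto msg = receiveOnly!string();
    owner.send(msg.length);
}

void main()
{
    // spawn() returns a Tid, the in-process handle used for messaging.
    Tid tid = spawn(&worker, thisTid);
    tid.send("hello");
    auto len = receiveOnly!size_t();
    writeln("worker saw a message of length ", len);
}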
It seems that this has been considered. From the Phobos documentation (found through Jonathan M Davis's answer):
This is a low-level messaging API upon which more structured or restrictive APIs may be built. The general idea is that every messageable entity is represented by a common handle type (called a Cid in this implementation), which allows messages to be sent to in-process threads, on-host processes, and foreign-host processes using the same interface. This is an important aspect of scalability because it allows the components of a program to be spread across available resources with few to no changes to the actual implementation.
Right now, only in-process threads are supported and referenced by a more specialized handle called a Tid. It is effectively a subclass of Cid, with additional features specific to in-process messaging.

Event manager process in erlang. Named processes or Pids?

I have an event manager process that dispatches events to subscribers (e.g. http_session_created, http_session_destroyed). If a Pid is used instead of a named process, I must pass it into every function that operates on the event manager, but if a named process is used, the code will be clearer.
Which variant is right?
Thank you!
While there is no actual difference to the process itself, naming a process, i.e. registering it, makes it global. In essence you are telling the system that here is a global service which anyone can use.
From your description it sounds more like you are giving them names to save the (small) effort of carrying them around in your loop. If this is the case, I would put their pids in a record with all the other state data you carry around. This indicates the type of the processes much better.
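For illustration, both variants side by side (the module, names, and messages are made up):

-module(evt_demo).
-export([start/0, mgr_loop/0]).

-record(state, {event_mgr :: pid()}).

start() ->
    Mgr = spawn(?MODULE, mgr_loop, []),
    register(event_mgr, Mgr),       % global name: event_mgr ! Event works anywhere
    St = #state{event_mgr = Mgr},   % local alternative: carry the pid in the state
    St#state.event_mgr ! {http_session_created, self()},
    event_mgr ! {http_session_destroyed, self()},
    ok.

mgr_loop() ->
    receive
        Event ->
            io:format("dispatching ~p~n", [Event]),
            mgr_loop()
    end.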
If you have a fixed set of "subscriber" processes, then use registered names IMO.
If, on the contrary, you have a publish/subscribe sort of architecture where subscribers come and go, then you need an infrastructure to track them, and from that point it doesn't really matter whether you use pids or names.
One of the drawbacks of using registered names is that you need to track them in your code base to avoid "collisions". So it is up to you: personally, I tend to favor named processes as, like you say, it makes reading the code clearer. One way or another, OTP doesn't care.

EventAggregator vs CompositeCommand

I worked my way through the Prism guidance and think I got a grasp of most of their communication vehicles.
Commanding is very straightforward, so it is clear that the DelegateCommand will be used just to connect the View with its Model.
It is somewhat less clear when it comes to cross-module communication, specifically when to use EventAggregation over CompositeCommands.
The practical effect is the same, e.g.:
You publish an event -> all subscribers receive notice and execute code in response.
You execute a composite command -> all registered commands get executed, and with them their attached code.
Both work along the lines of "fire and forget": they don't care about any responses from their subscribers after firing the event/executing the commands.
I have trouble seeing a practical difference in usage although I understand that the implementation of both (under the hood) is very different.
So should we think about what each term actually means? An event: is that when something happens (an event occurs), something the user did not directly request, like a "web request completed"?
And a command? Does that mean a user clicked something and thus issued a command to our application, requesting a service directly?
Is that it? Or are there other ways to determine when to use one of these communication vehicles over the other? The guidance, although one of the best pieces of documentation I have read, gives no specific explanation.
So I hope people involved in/using Prism can help in shedding some light on this.
There are two primary differences between these two.
1. CanExecute for Commands. A Command can say whether or not it is valid for execution by calling Command.RaiseCanExecuteChanged() and having its CanExecute delegate return false. If you consider the case of a "Save All" CompositeCommand compositing several "Save" commands, with one of the commands saying that it can't execute, the Save All button will automatically disable (nice!).
2. EventAggregator is a messaging pattern and Commands are a commanding pattern. Although CompositeCommands are not explicitly a UI pattern, they are implicitly so (generally they are hooked up to an input action, like a Button click). EventAggregator is not this way: any part of the application can effectively raise an EventAggregator event: background processes, ViewModels, etc. It is a brokered avenue for messaging across your application, with support for things like filtering, background thread execution, etc.
Hope this helps explain the differences. It's more difficult to say when to use each, but generally I use the rule of thumb that if it's user interaction that raises the event, use a command; for anything else, use EventAggregator.
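To make the contrast concrete, here is a minimal sketch against Prism's API (the event and class names are made up, and namespaces vary by Prism version):

using Prism.Commands;
using Prism.Events;

public class SomethingHappenedEvent : PubSubEvent<string> { }

public class Coordinator
{
    // CompositeCommand: "Save All" disables when any child CanExecute is false.
    public CompositeCommand SaveAllCommand { get; } = new CompositeCommand();

    public Coordinator(IEventAggregator aggregator)
    {
        var save = new DelegateCommand(
            () => { /* save one document */ },
            () => true /* CanExecute: return false here to disable Save All */);
        SaveAllCommand.RegisterCommand(save);

        // EventAggregator: brokered publish/subscribe, not tied to the UI.
        aggregator.GetEvent<SomethingHappenedEvent>()
                  .Subscribe(msg => { /* react to msg */ });

        // Any part of the application can publish:
        aggregator.GetEvent<SomethingHappenedEvent>().Publish("web request completed");
    }
}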
Additionally, there is one more important difference: With the current implementation, an event from the EventAggregator is asynchronous, while the CompositeCommand is synchronous.
If you want to implement something like "notify that event X happened; do something that relies on the event handlers for event X to have executed", you either have to do something like Application.DoEvents() or use CompositeCommands.
