Automatic Node ID assignment for LSS Slaves without LSS Master - can-bus

I am currently doing a system design with CANopen communication and I am curious about the following question.
In this system a device is programmed to start up with no Node-ID assigned (255). Normally an LSS master then has to assign a specific Node-ID to the device so that it can work properly. However, if no other bus node implements LSS master functionality, does the CANopen standard allow the unconfigured device to assign itself a predefined ID after a timeout?
In my opinion this is not possible, because it can lead to undefined system states, but I could not find anything about it in the standard documents.

Related

Canopen auto addressing with LSS, how to architect the system

I am new to CANopen and need to architect a system with the following characteristics:
1 CANopen master (also a gateway)
multiple CANopen slave nodes, consisting of multiple instances of the same device (with unique serial numbers, as required by LSS)
I would like to design this device so that it requires no pre-configuration before connecting it to the bus, and so that devices that were previously connected to another CANopen bus (and therefore had a previous node-ID) can be seamlessly connected to a new bus (their node-IDs should not persist across a reboot).
After learning about CANopen and the LSS service, I think a good solution would be:
The device has no persistent node-ID, and at every boot it needs to be addressed by the master through LSS
The master periodically scans for and addresses new nodes through the LSS service (allowing device hot-plug)
If for any reason the master reboots, it can re-detect all already-addressed nodes through a simple node scan (SDO info upload from all addresses)
Now my questions:
It is not clear to me how to have an "invalid CANopen node-ID" (referenced here: https://www.can-cia.org/can-knowledge/canopen/cia305/) when the devices boot. If a device has no initial node-ID (and therefore only replies to the LSS addressing service), it would be completely silent on the bus, not even sending a boot-up message when powered (which seems not CANopen compliant), until it gets addressed by the LSS service. But if I give it any default initial node-ID, it would cause collisions when multiple nodes are powered on simultaneously (which will be the normal behaviour at every system boot-up: all devices, including the master, are powered on at the same time). Is it valid for a CANopen device to be "unaddressed" and silent like this and still be CANopen compliant? How should I handle this case?
I read that node-ID 0 means broadcast. Does that mean my master could ask for all (addressed) node infos (through an SDO upload) with just one command (SDO upload on node-ID 0)? Or is that not allowed, so that I have to query all 127 addresses on the bus to remap the network?
Thanks
I hope I got your questions right, because they are a bit long:
Question 1
Yes, it is CANopen compliant to have a node which has no Node-ID. That is what the LSS service is for. As long as the LSS master has not assigned a Node-ID to the slave, you are not able to talk to the slave via SDO requests. PDO communication is also not possible in the unconfigured state.
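For illustration, the master side of that assignment looks roughly like this on the wire (a minimal sketch, not a full LSS implementation; can_send/2 is a hypothetical helper that puts an 8-byte frame with the given CAN-ID on the bus, and the command specifiers are those defined in CiA 305):

    %% LSS master -> slave messages are sent on CAN-ID 16#7E5 (CiA 305).
    -define(LSS_TX, 16#7E5).

    assign_node_id(NewNodeId) when NewNodeId >= 1, NewNodeId =< 127 ->
        %% Switch state global (cs = 16#04), mode 1 = configuration state.
        %% With more than one unconfigured slave on the bus you would select
        %% a single slave first (switch state selective / Fastscan) instead.
        can_send(?LSS_TX, <<16#04, 1, 0:48>>),
        %% Configure node-ID (cs = 16#11): the new node-ID goes into byte 1.
        can_send(?LSS_TX, <<16#11, NewNodeId, 0:48>>),
        %% Switch back to LSS waiting state (mode 0).
        can_send(?LSS_TX, <<16#04, 0, 0:48>>).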
Question 2
The ID 0 broadcast is only available for the NMT master command. That means the CANopen master can set the NMT state of all nodes in the system at the same time. SDO communication only takes place between the master and one slave, so you have to ask every node individually.
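So a bus scan after a master reboot boils down to looping over all 127 node-IDs and reading a mandatory object such as 0x1000 (device type) from each one. A rough sketch, again with the hypothetical can_send/2 helper: an SDO expedited upload request goes out on CAN-ID 0x600 + node-ID, and a node that exists answers on 0x580 + node-ID.

    %% Probe every possible node-ID by requesting object 16#1000:00 (device type).
    scan_nodes() ->
        [request_device_type(NodeId) || NodeId <- lists:seq(1, 127)].

    request_device_type(NodeId) ->
        %% SDO upload (read) request: command byte 16#40, index 16#1000 in
        %% little-endian byte order, sub-index 0, then four padding bytes.
        can_send(16#600 + NodeId, <<16#40, 16#00, 16#10, 16#00, 0:32>>).
    %% Nodes that are present reply on 16#580 + NodeId; silence (after a short
    %% timeout) means there is no configured node at that address.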

SCSI 3 Persistent Reservation when working with MPIO

We have two Windows servers running Windows Server 2012 R2.
We have a shared data disk and a witness disk to implement quorum behavior for the shared-disk arbitration.
Both the quorum and data disks are currently configured with Fibre Channel MPIO.
We do not provide the hardware, so our customers work with various SAN vendors.
We are using the SCSI-3 persistent reservation mechanism for the disk arbitration: we reserve the quorum witness disk from one machine and check it from the other (passive) machine.
As part of the reservation flow, each machine registers its unique SCSI registration key and uses it to perform the reservation when needed.
The issue occurs when MPIO is configured, since in our current implementation (so it seems) the key is registered on the device using only the I/O path that is currently used to access the storage.
Once there is a failover/switch of the I/O path, the reservation fails because the key is not registered for that path.
Is there a way, at the device/code level, to have a SCSI reservation key registered on all I/O paths instead of just the specific path the registration command arrived on?
Thanks.
The PR type needs to be set to "Exclusive Access - Registrants Only", and all paths on the active Windows host must be registered for PR.
https://www.veritas.com/support/en_US/article.100016085.html and https://www.veritas.com/support/en_US/article.100018257.html may help.

Is this the right way of building an Erlang network server for multi-client apps?

I'm building a small network server for a multi-player board game using Erlang.
This network server uses a local instance of Mnesia DB to store a session for each connected client app. Inside each client's record (session) stored in this local Mnesia, I store the client's PID and NODE (the node where a client is logged in).
I plan to deploy this network server on at least 2 connected servers (Node A & B).
So, in order to allow a Client A who is logged in on Node A to search (query Mnesia) for a Client B who is logged in on Node B, I replicate the Mnesia session table from Node A to Node B and vice versa.
After Client A has queried the PID and NODE of Client B, Client A and B can communicate with each other directly.
Is this the right way of establishing connection between two client apps that are logged-in on two different Erlang nodes?
Creating a system where two or more nodes are perfectly in sync is by definition impossible. In practice however, you might get close enough that it works for your particular problem.
You don't say the exact reason for running on two nodes, so I'm going to assume it is for scalability. With many nodes, your system will also be more available and fault-tolerant if you get it right. However, the problem would be simpler if you knew you will only ever run on a single node and need the other node merely as a hot standby to take over if the master is unavailable.
To establish a connection between two processes on two different nodes, you need some form of global addressing (user id 123 is pid <123,456,0>). If you also care that only one process is running for User A at a time, you also need a lock, or to allow only unique registrations in the addressing. If you also want to grow, you need a way to add more nodes, either while your system is running or while it is stopped.
Now, there are already some solutions out there that help solve your problem, with different trade-offs:
gproc in global mode allows registering a process under a given key (which gives you addressing and locking). This is distributed to the entire cluster with no single point of failure; however, the leader election (at least when I last looked at it) only works for nodes that were available when the system started. Adding new nodes requires an experimental version of gen_leader, or stopping the system. Within your own code, if you know two players are only ever going to talk to each other, you could start them on the same node. (A registration sketch follows this list.)
riak_core allows you to build on top of the well-tested and proven architecture used in Riak KV and Riak Search. It maps keys into buckets in a fashion that allows you to add new nodes and have the keys redistributed. You can plug into this mechanism and move your processes around. This approach does not let you decide where to start your processes, so if they communicate a lot, that communication will go across the network.
Using mnesia with distributed transactions allows you to guarantee that every node has the data before the transaction is committed. This would give you distribution of the addressing and locking, but you would have to do everything else on top of it (like releasing the lock). Note: I have never used distributed transactions in production, so I cannot tell you how reliable they are. Also, being distributed, expect latency. Note 2: you should check exactly how you would add more nodes and have the tables replicated, for example whether that is possible without stopping mnesia.
ZooKeeper/Doozer/roll your own provides a centralized, highly available database which you may use to store the addressing. In this case you would need to handle unregistering yourself. Adding nodes while the system is running is easy from the addressing point of view, but you need some way for your application to learn about the new nodes and start spawning processes there.
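A minimal sketch of the gproc approach, assuming gproc is running in global mode on every node; the key shape {player, UserId} is just an illustration:

    %% Register the calling process cluster-wide under the player's user id.
    %% {n, g, Key} is a unique (n)ame in the (g)lobal scope, so a second
    %% registration for the same user fails - that doubles as the lock.
    register_player(UserId) ->
        true = gproc:reg({n, g, {player, UserId}}).

    %% Look the player up from any node; returns the pid or 'undefined'.
    whereis_player(UserId) ->
        gproc:where({n, g, {player, UserId}}).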
Also, it is not necessary to store the node, as the pid contains enough information to send the messages directly to the correct node.
As a cool trick which you may already be aware of, pids may be serialized (as may all data within the VM) to a binary. Use term_to_binary/1 and binary_to_term/1 to convert between the actual pid inside the VM and a binary which you may store in whatever accepts binary data without mangling it in some stupid way.
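A small sketch of that trick; ets stands in here for whatever binary-safe store you actually use:

    %% Store a pid as a binary and get a working pid back later.
    store_pid(Table, Key, Pid) ->
        ets:insert(Table, {Key, term_to_binary(Pid)}).

    fetch_pid(Table, Key) ->
        [{Key, Bin}] = ets:lookup(Table, Key),
        %% The decoded pid already carries its node, so you can message it
        %% directly across the cluster: Pid ! {hello, self()}.
        binary_to_term(Bin).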

What's the best way to run a gen_server on all nodes in an Erlang cluster?

I'm building a monitoring tool in Erlang. When run on a cluster, it should run a set of data collection functions on all nodes and record that data using RRD on a single "recorder" node.
The current version has a supervisor running on the master node (rolf_node_sup) which attempts to run a 2nd supervisor on each node in the cluster (rolf_service_sup). Each of the on-node supervisors should then start and monitor a bunch of processes which send messages back to a gen_server on the master node (rolf_recorder).
This only works locally. No supervisor is started on any remote node. I use the following code to attempt to load the on-node supervisor from the recorder node:
rpc:call(Node, supervisor, start_child, [{global, rolf_node_sup}, [Services]])
I've found a couple of people suggesting that supervisors are really only designed for local processes. E.g.
Starting processes at remote nodes
how: distributed supervision tree
What is the most OTP way to implement my requirement to have supervised code running on all nodes in a cluster?
A distributed application is suggested as one alternative to a distributed supervision tree. These don't fit my use case. They provide failover between nodes, not keeping code running on a set of nodes.
The pool module is interesting. However, it provides for running a job on the node which is currently the least loaded, rather than on all nodes.
Alternatively, I could create a set of supervised "proxy" processes (one per node) on the master which use proc_lib:spawn_link to start a supervisor on each node. If something goes wrong on a node, the proxy process should die and then be restarted by its supervisor, which in turn should restart the remote processes. The slave module could be very useful here.
Or maybe I'm overcomplicating this. Is directly supervising nodes a bad idea? Perhaps instead I should architect the application to gather data in a more loosely coupled way: build a cluster by running the app on multiple nodes, tell one to be the master, and leave it at that!
Some requirements:
The architecture should be able to cope with nodes joining and leaving the pool without manual intervention.
I'd like to build a single-master solution, at least initially, for the sake of simplicity.
I would prefer to use existing OTP facilities over hand-rolled code in my implementation.
Interesting challenges, to which there are multiple solutions. The following are just my suggestions, which will hopefully help you make a better choice about how to write your program.
As I understand your program, you want to have one master node where you start your application. This will start the Erlang VMs on the nodes in the cluster. The pool module uses the slave module to do this, which requires key-based SSH communication in both directions. It also requires working DNS.
A drawback of slave is that if the master dies, so do the slaves. This is by design, as it probably fits the original use case perfectly; in your case, however, it might be unwanted (you may want to keep collecting data even if the master is down, for example).
As for the OTP applications: every node may run the same application, and in your code you can determine the node's role in the cluster using configuration or discovery.
I would suggest starting the Erlang VMs using some OS facility, daemontools, or similar. Every VM would start the same application, where one would be started as the master and the rest as slaves. This has the drawback of making it harder to "automatically" run the software on machines coming up in the cluster, like you could with slave, but it is also much more robust.
In every application you could have a suitable supervision tree based on the role of the node. Removing inter-node supervision and spawning makes the system much simpler.
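A minimal sketch of such a role-based tree, assuming the role is read from the application environment (the env key role is an assumption; rolf_recorder and rolf_service_sup are the names from the question):

    -module(rolf_sup).
    -behaviour(supervisor).
    -export([start_link/0, init/1]).

    start_link() ->
        supervisor:start_link({local, ?MODULE}, ?MODULE, []).

    init([]) ->
        %% Same application everywhere; the children differ by configured role.
        {ok, Role} = application:get_env(rolf, role),   %% 'master' | 'slave'
        Children = case Role of
            master ->
                [{rolf_recorder,
                  {rolf_recorder, start_link, []},
                  permanent, 5000, worker, [rolf_recorder]}];
            slave ->
                [{rolf_service_sup,
                  {rolf_service_sup, start_link, []},
                  permanent, infinity, supervisor, [rolf_service_sup]}]
        end,
        {ok, {{one_for_one, 5, 10}, Children}}.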
I would also suggest having all the nodes push to the master. That way the master does not really need to care about what is going on in the slaves; it might even ignore the fact that a node is down. It also allows new nodes to be added without any change to the master. The cookie could be used as authentication. Multiple masters or "recorders" would also be relatively easy.
The "slave" nodes, however, will need to watch out for the master going down and coming up and take appropriate action, like storing the monitoring data so it can be sent later when the master is back up.
I would look into riak_core. It provides a layer of infrastructure for managing distributed applications on top of the raw capabilities of Erlang and OTP itself. Under riak_core, no node needs to be designated as master. No node is central in an OTP sense, and any node can take over from other failing nodes. This is the very essence of fault tolerance. Moreover, riak_core provides elegant handling of nodes joining and leaving the cluster without needing to resort to a master/slave policy.
While this sort of "topological" decentralization is handy, distributed applications usually do need logically special nodes. For this reason, riak_core nodes can advertise that they are providing specific cluster services, e.g., as embodied by your use case, a results collector node.
Another interesting feature/architecture consequence is that riak_core provides a mechanism to maintain global state visible to cluster members through a "gossip" protocol.
Basically, riak_core includes a bunch of useful code to develop high performance, reliable, and flexible distributed systems. Your application sounds complex enough that having a robust foundation will pay dividends sooner than later.
otoh, there's almost no documentation yet. :(
Here's a guy who talks about an internal AOL app he wrote with riak_core:
http://www.progski.net/blog/2011/aol_meet_riak.html
Here's a note about a rebar template:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-March/003632.html
...and here's a post about a fork of that rebar template:
https://github.com/rzezeski/try-try-try/blob/7980784b2864df9208e7cd0cd30a8b7c0349f977/2011/riak-core-first-multinode/README.md
...talk on riak_core:
http://www.infoq.com/presentations/Riak-Core
...riak_core announcement:
http://blog.basho.com/2010/07/30/introducing-riak-core/

Erlang general question on sockets

I have a question about a project I have to implement for my Distributed Systems course.
The project consists of designing and implementing a library that provides a reliable multicast service to user processes. All processes belong to a group, and a message is sent by a member process to all members of the group. The sender is excluded from the recipient list.
This seems quite easy to implement in Erlang, thanks to its message-passing structure... more points are given if you use RPC calls instead of a plain socket-based implementation.
Now my question is this: one of the mandatory points of this project requires that sockets aren't kept open when there is no communication going on between processes...
Our course is held in C, but we are free to use any language we like... can I satisfy this constraint using Erlang nodes and RPC calls?
thanks in advance
Yes. The rpc module even has multicall, which takes a list of nodes and will do exactly what you described. It won't hold your sockets open when it's not using them either.
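For example, something like this (a sketch; group_msg:handle/1 is a hypothetical function exported on every node in the group):

    %% Deliver Msg to every other node in the group; nodes/0 already excludes
    %% the local node, so the sender is not in the recipient list.
    multicast(Msg) ->
        {Replies, BadNodes} = rpc:multicall(nodes(), group_msg, handle, [Msg]),
        {Replies, BadNodes}.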
Despite what the other answers say, Erlang's default behavior does not satisfy your constraints.
A typical network of Erlang nodes using Erlang distribution will remain densely connected (every node connected to every other node) with TCP sockets open even when you're not using them. You will either have to use -connect_all false and manage opening/closing the connections to other nodes yourself, or you will have to develop your own distribution protocol. I would recommend the latter, especially since you are learning. The trick to make it easy is to use term_to_binary and binary_to_term.
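A minimal sketch of that hand-rolled approach, where a connection is opened per message and closed right after, so no socket stays open between transmissions (the port number and framing are assumptions):

    %% Sender: connect, ship one Erlang term, close.
    send_term(Host, Port, Term) ->
        {ok, Sock} = gen_tcp:connect(Host, Port,
                                     [binary, {packet, 4}, {active, false}]),
        ok = gen_tcp:send(Sock, term_to_binary(Term)),
        gen_tcp:close(Sock).

    %% Receiver: accept one connection, decode one term, close.
    %% {packet, 4} length-prefixes each message, so recv/2 returns a whole term.
    recv_term(ListenSock) ->
        {ok, Sock} = gen_tcp:accept(ListenSock),
        {ok, Bin} = gen_tcp:recv(Sock, 0),
        gen_tcp:close(Sock),
        binary_to_term(Bin).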
