I am new to Canopen and need to architect a system with the following characteristics:
1 CANopen master (also a gateway)
multiple CANopen slave nodes, consisting of multiple instances of the same device (each with a unique serial number, as required by LSS)
I would like to design this device so that it requires no pre-configuration before being connected to the bus, and so that devices previously connected to another CANopen bus (and therefore carrying an old node ID) can be seamlessly connected to a new bus (their node IDs should therefore not persist after a reboot).
After learning about CANopen and the LSS service, I think a good solution would be:
The device has no persistent node ID, and at every boot it needs to be addressed by the master through LSS
The master will periodically scan for and address new nodes through the LSS service (allowing device hot-plug)
If for any reason the master reboots, it can re-detect all already-addressed nodes through a simple node scan (an SDO info upload on every address)
Now my questions:
It is not clear to me how to have an "invalid CANopen node-ID" (referenced here: https://www.can-cia.org/can-knowledge/canopen/cia305/) when the devices boot. If a device has no initial node ID (and therefore only replies to the LSS addressing service), it should be completely silent on the bus, not even sending a boot-up message when powered (which, I assume, is not CANopen compliant) until it gets addressed through LSS. But if I give it any default initial node ID, it would cause collisions when multiple nodes are powered on simultaneously (which will be the normal behaviour at every system boot-up: all devices, including the master, are powered on at the same time). Is it valid to have a CANopen device "unaddressed" and silent like this, and still be CANopen compliant? How do I handle this case?
I read that node ID 0 means broadcast. Does that mean my master could ask for all (addressed) node infos (through an SDO upload) with just one command (an SDO upload on node ID 0)? Or is that not allowed, and should I query all 127 addresses on the bus to map the network?
Thanks
I hope I understood your questions correctly, because they are a bit long:
Question 1
Yes, it is CANopen compliant to have a node which has no Node-ID. That's what the LSS service is for. As long as the LSS master has not assigned a Node-ID to the slave, you are not able to talk to the slave via SDO requests. PDO communication is also not possible in the unconfigured state.
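To make this a bit more concrete, here is a minimal sketch in Python that only builds the raw frames of the CiA 305 LSS exchange a master would use to assign a Node-ID. It assumes the standard LSS COB-IDs (0x7E5 master to slave, 0x7E4 slave to master); how the frames are actually put on the bus depends on your CAN driver and stack:

```python
# Sketch of the CiA 305 LSS frames an LSS master sends to assign a Node-ID.
# Only the frame contents are built here; transmission is left to your CAN driver.

LSS_MASTER_COB_ID = 0x7E5  # request channel (master -> unconfigured slave)
LSS_SLAVE_COB_ID = 0x7E4   # response channel (slave -> master)

def lss_switch_state_global(configuration: bool) -> tuple[int, bytes]:
    """Command specifier 0x04: put all LSS slaves into waiting (0) or configuration (1) state."""
    mode = 0x01 if configuration else 0x00
    return LSS_MASTER_COB_ID, bytes([0x04, mode, 0, 0, 0, 0, 0, 0])

def lss_configure_node_id(node_id: int) -> tuple[int, bytes]:
    """Command specifier 0x11: assign a Node-ID (1..127) to the slave in configuration state."""
    assert 1 <= node_id <= 127
    return LSS_MASTER_COB_ID, bytes([0x11, node_id, 0, 0, 0, 0, 0, 0])

if __name__ == "__main__":
    for cob_id, data in (lss_switch_state_global(True), lss_configure_node_id(0x2A)):
        print(f"COB-ID 0x{cob_id:03X}  data {data.hex(' ')}")
```

Note that switch-state-global is only safe when a single unconfigured slave is on the bus; with several identical devices powered up at the same time (your case), the master would instead use LSS Fastscan or switch-state-selective against each device's unique serial number.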
Question 2
The ID 0 broadcast is only available for the NMT master command. That means the CANopen master can change the NMT state of all nodes in the system at the same time. SDO communication only takes place between the master and one slave, so you have to ask every node individually.
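As an illustration, here is a rough sketch in Python of what such a scan boils down to: one expedited SDO upload request per possible Node-ID, reading the identity object 1018h, sub-index 01. It assumes the default SDO COB-IDs (0x600 + ID for requests, 0x580 + ID for responses) and only builds the request frames; sending, receiving and timeouts are left to your CAN driver:

```python
# Sketch of a node scan: one SDO "initiate upload" request per Node-ID, since SDO
# is strictly one-to-one. Nodes that are present and configured answer on
# 0x580 + node_id; the rest simply never reply (the request times out).

def sdo_upload_request(node_id: int, index: int, subindex: int) -> tuple[int, bytes]:
    """Build an SDO initiate-upload request (command specifier 0x40)."""
    return 0x600 + node_id, bytes([
        0x40,                 # ccs = 2: initiate upload
        index & 0xFF,         # index, low byte
        (index >> 8) & 0xFF,  # index, high byte
        subindex,
        0, 0, 0, 0,
    ])

def scan_requests():
    """Yield one request per possible Node-ID (1..127)."""
    for node_id in range(1, 128):
        yield node_id, sdo_upload_request(node_id, 0x1018, 0x01)

if __name__ == "__main__":
    for node_id, (cob_id, data) in scan_requests():
        print(f"node {node_id:3}: COB-ID 0x{cob_id:03X}  data {data.hex(' ')}")
```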
Related
I encountered a problem: the message that goes from the NMT to the nodes for synchronization is usually sent by the Master in CANopen.
Can another node do this?
First to clear out some terms:
NMT = Network Management, a protocol used in CANopen.
Master = NMT master, the node responsible for supervision and initiating state changes of other nodes. Outside of NMT, there is no master.
Therefore the sentence "the message that goes from the NMT to the nodes" doesn't make any sense, and it has nothing to do with the SYNC message either.
The SYNC feature is a feature of its own, related to PDOs rather than NMT. Nodes have SYNC producer or SYNC consumer capabilities. For convenience, it is often the NMT Master acting as SYNC producer, but this isn't a requirement by the standard. Any node supporting the SYNC producer feature can act as one.
(Not to be confused with Heartbeat producer/consumer, which is part of NMT.)
For reference see Object Dictionary entries 1005h to 1007h, CiA 301 chapter 7.5.2.
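To make that concrete, here is a small sketch in Python of the values involved in those three objects, with bit 30 of 1005h being what turns a node into the SYNC producer. The period and window values are made-up examples, and how the values get into the object dictionary (SDO write, DCF, firmware defaults) depends on your stack:

```python
# Sketch of the CiA 301 object dictionary entries related to SYNC (1005h-1007h).
# Only the values are shown; writing them to a device is stack-specific.

SYNC_COB_ID = 0x080          # default SYNC COB-ID
SYNC_PRODUCER_BIT = 1 << 30  # bit 30 of 1005h: this node generates the SYNC message

od_1005_cob_id_sync = SYNC_COB_ID | SYNC_PRODUCER_BIT  # 0x40000080
od_1006_cycle_period_us = 10_000                       # example: emit SYNC every 10 ms
od_1007_sync_window_us = 2_000                         # example: window for synchronous PDOs

print(f"1005h = 0x{od_1005_cob_id_sync:08X}  (SYNC producer enabled)")
print(f"1006h = {od_1006_cycle_period_us} us, 1007h = {od_1007_sync_window_us} us")
```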
I have a doubt. I am working with the CAN protocol, which uses CSMA-CD and an arbitration mechanism to transfer messages. I am planning to implement a single Server Node and multiple Client Nodes.
The server is given an ID of 0 (11-bit identifier).
When a node is powered on, it should get an ID to communicate on the network, so it requests one from the server. The node uses a Remote Frame to request an ID from the server, with ID = 0 (Server ID) and DLC = 2.
My doubt is this: there are multiple nodes on the CAN bus, and each of them requests an ID from the server using the same Remote Frame with ID = 0 (Server ID) and DLC = 2. Now suppose two or more nodes send this message to the server at the same time; the arbitration mechanism is not going to work, because all of those nodes win arbitration and get control of the CAN bus. Will CSMA-CD prevent this and allow only a single node to communicate at a given time?
If a device does not have an ID, CSMA-CD will not help, because every node needs an ID in order to participate in the CSMA-CD flavor of the CAN bus protocol. Then, if it cannot access the bus, it cannot request an ID. As you can see, it is a chicken-and-egg kind of problem.
You could assign a unique ID to every node on the bus, and then every node could request a new ID, but I guess that is double work...
Maybe it would be useful to learn about the arbitration mechanism in the CAN protocol. In short: a lower ID implies higher priority.
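To illustrate that last point, here is a toy model in Python (purely illustrative, not a CAN implementation) of bitwise arbitration on the 11-bit identifier: the bus behaves like a wired-AND, a dominant 0 overrides a recessive 1, so the lowest ID wins. Two nodes transmitting the same ID are never separated by arbitration, which is exactly the collision you are worried about:

```python
# Toy model of CAN arbitration: transmitters compare the identifier bit by bit,
# MSB first; whoever sends a recessive 1 while the bus carries a dominant 0 backs off.

def arbitrate(frames):
    """frames: list of (node_name, can_id). Returns the nodes still transmitting
    after arbitration of the 11-bit identifier (normally exactly one)."""
    contenders = list(frames)
    for bit in range(10, -1, -1):                                  # MSB first
        dominant = [f for f in contenders if not (f[1] >> bit) & 1]  # nodes sending 0
        if dominant:                                               # recessive senders lose
            contenders = dominant
    return [name for name, _ in contenders]

print(arbitrate([("ecu_a", 0x123), ("ecu_b", 0x100)]))  # ['ecu_b']: lower ID wins
print(arbitrate([("ecu_a", 0x000), ("ecu_b", 0x000)]))  # both survive: same ID is never resolved
```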
I am currently doing a system design with CANopen communication and I am curious about the following question.
In the system, a device is programmed to have no Node-Id assigned (255) on startup. Normally an LSS Master then has to assign a specific Node-Id to the device for it to work properly. However, if there is no LSS Master functionality implemented in any other bus node, does the CANopen standard allow the unconfigured device to assign itself a predefined ID after a timeout?
In my opinion this is not possible because it can lead to undefined system states but I could not find anything in the standard documents.
Currently, I am doing some R&D on the Thingsboard IoT platform. I am planning to deploy it in cluster mode.
When it is deployed, how do two Thingsboard servers communicate with each other?
I have this question because a particular device can send a message to one Thingsboard server (A), but the message might actually need to be transferred to another server (B), since a node on server B is processing that particular device's messages (as I understand it, Thingsboard nodes use a device hash to route messages).
How does Kafka forward that message accordingly in a cluster?
I read the official documentation and did some googling but couldn't find exact answers.
Thingsboard uses Zookeeper for service discovery.
Each Thingsboard microservice knows which other services are running somewhere in the cluster.
All communication is performed through message queues (Kafka is a good choice).
Each topic has several partitions, and each partition is assigned to a particular node.
Messages for a device are hashed by originator id and always pushed to the same partition number. There is no direct communication between nodes.
If some nodes crash, or the cluster is simply scaled up or down, Zookeeper fires a repartition event on each node, and the existing partitions are reassigned according to the live node count. The device service follows the same logic.
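Here is an illustrative sketch in Python of that partitioning idea (not Thingsboard's actual code; the partition count, node names and round-robin assignment are assumptions): a device's originator id always hashes to the same partition, and partitions are spread over whatever nodes are currently alive.

```python
# Illustrative only: stable hash of the originator id -> constant partition number,
# and partitions reassigned over the current list of live nodes.

import hashlib

PARTITIONS = 12  # assumed fixed partition count for the topic

def partition_for(originator_id: str) -> int:
    """Same originator id always maps to the same partition."""
    digest = hashlib.sha256(originator_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % PARTITIONS

def owner_node(partition: int, live_nodes: list[str]) -> str:
    """Partitions are spread over the live nodes (simple round-robin for illustration)."""
    return live_nodes[partition % len(live_nodes)]

device = "d3adbeef-device"
p = partition_for(device)
print(p, owner_node(p, ["tb-node-1", "tb-node-2", "tb-node-3"]))
print(p, owner_node(p, ["tb-node-1", "tb-node-2"]))  # after scale-down: same partition, new owner
```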
That is all the magic. Simple and effective. Hope it helps with the Thingsboard cluster architecture.
I'm building a small network server for a multi-player board game using Erlang.
This network server uses a local instance of Mnesia DB to store a session for each connected client app. Inside each client's record (session) stored in this local Mnesia, I store the client's PID and NODE (the node where a client is logged in).
I plan to deploy this network server on at least 2 connected servers (Node A & B).
So, in order to allow a Client A who is logged in on Node A to search (query Mnesia) for a Client B who is logged in on Node B, I replicate the Mnesia session table from Node A to Node B and vice versa.
After Client A queries the PID and NODE of Client B, Clients A and B can communicate with each other directly.
Is this the right way of establishing connection between two client apps that are logged-in on two different Erlang nodes?
Creating a system where two or more nodes are perfectly in sync is by definition impossible. In practice however, you might get close enough that it works for your particular problem.
You don't say the exact reason for running on two nodes, so I'm going to assume it is for scalability. With many nodes, your system will also be more available and fault-tolerant if you get it right. However, the problem could be simplified if you know you will only ever run on a single node and need the other node as a hot-slave to take over if the master is unavailable.
To establish a connection between two processes on two different nodes, you need some global addressing (user id 123 is pid <123,456,0>). If you also care about only one process running for User A at a time, you also need a lock, or to allow only unique registrations in the addressing. If you also want to grow, you need a way to add more nodes, either while your system is running or when it is stopped.
Now, there are already some solutions out there that help solve your problem, with different trade-offs:
gproc in global mode, allows registering a process under a given key (which gives you addressing and locking). This is distributed to the entire cluster with no single point of failure; however, the leader election (at least when I last looked at it) only works for nodes that were available when the system started. Adding new nodes requires an experimental version of gen_leader or stopping the system. Within your own code, if you know two players are only ever going to talk to each other, you could start them on the same node.
riak_core, allows you to build on top of the well-tested and proven architecture used in riak KV and riak search. It maps the keys into buckets in a fashion that allows you to add new nodes and have the keys redistributed. You can plug into this mechanism and move your processes. This approach does not let you decide where to start your processes, so if there is much communication between them, it will go across the network.
Using mnesia with distributed transactions, allows you to guarantee that every node has the data before the transaction is committed. This would give you distribution of the addressing and locking, but you would have to do everything else on top of it (like releasing the lock). Note: I have never used distributed transactions in production, so I cannot tell you how reliable they are. Also, because they are distributed, expect latency. Note 2: you should check exactly how you would add more nodes and have the tables replicated, for example whether it is possible without stopping mnesia.
Zookeeper/doozer/roll your own, provides a centralized highly-available database which you may use to store the addressing. In this case you would need to handle unregistering yourself. Adding nodes while the system is running is easy from the addressing point of view, but you need some way for your application to learn about the new nodes and start spawning processes there.
Also, it is not necessary to store the node, as the pid contains enough information to send the messages directly to the correct node.
As a cool trick which you may already be aware of, pids may be serialized (as may all data within the VM) to a binary. Use term_to_binary/1 and binary_to_term/1 to convert between the actual pid inside the VM and a binary which you may store in whatever accepts binary data without mangling it in some stupid way.