Frame acknowledgement in CAN bus

How does the sender on the CAN bus know that a node has not properly received the data?
I am currently learning how the CAN bus works. From what I have seen so far, a receiver drives the bus to a dominant state during the ACK slot when it successfully receives a frame, and a single receiving node is enough to accomplish this. However, in the case where the intended receiver does not successfully receive the frame while other nodes do, how is the sender made aware of the situation so that it can retransmit?
Any help in providing some clarity on this topic is much appreciated. Thank you.

The sender can't know this unless you implement a higher layer protocol. The ACK slot in a raw CAN frame only allows the sender to detect that it (not the receiver) has a bus problem. If the sender doesn't sample an ACK, it can conclude: "No one on the bus is hearing me. Maybe I'm physically disconnected, or something is completely wrong."
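To make "higher layer protocol" concrete, here is a minimal sketch of an application-level acknowledgement on top of raw CAN, using the python-can library. The IDs (0x123 for the request, 0x124 for the app-level reply) and the socketcan channel are made up for illustration; this is not how any particular protocol does it.

```python
import can, time

REQUEST_ID = 0x123   # hypothetical ID the sender transmits on
APP_ACK_ID = 0x124   # hypothetical ID the intended receiver answers on

def send_with_app_ack(bus, payload, retries=3, timeout=0.1):
    """Send a frame and retransmit until the intended receiver confirms it."""
    msg = can.Message(arbitration_id=REQUEST_ID, data=payload,
                      is_extended_id=False)
    for _ in range(retries):
        bus.send(msg)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            reply = bus.recv(timeout=timeout)
            # the CAN-level ACK slot only says *somebody* saw the frame;
            # this application-level reply says the intended receiver did
            if reply is not None and reply.arbitration_id == APP_ACK_ID:
                return True
    return False

bus = can.interface.Bus(channel='can0', bustype='socketcan')
if not send_with_app_ack(bus, [0x01, 0x02]):
    print("no application-level ACK - the receiver missed the data")
```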
For example, in CANopen it is generally the receiver's job to complain if an expected PDO packet doesn't arrive on time. Although it's not strictly correct to talk about masters and slaves on a CAN bus, a device you assign a master role can be programmed to expect periodic PDO report packets from its slaves, and can raise an error flag if they don't arrive within the expected time window.
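A rough sketch of that kind of supervision, again with python-can rather than a real CANopen stack; the node id 0x05 and the 1.5 s window are arbitrary choices for the example:

```python
import can, time

NODE_ID = 0x05                    # hypothetical slave node id
HEARTBEAT_ID = 0x700 + NODE_ID    # CANopen heartbeat COB-ID convention
WINDOW_S = 1.5                    # allowed silence before raising the flag

bus = can.interface.Bus(channel='can0', bustype='socketcan')
last_seen = time.monotonic()

while True:
    msg = bus.recv(timeout=0.1)
    if msg is not None and msg.arbitration_id == HEARTBEAT_ID:
        last_seen = time.monotonic()          # producer is alive
    if time.monotonic() - last_seen > WINDOW_S:
        print("heartbeat from node 0x%02X missing - raise error flag" % NODE_ID)
        last_seen = time.monotonic()          # avoid re-flagging every loop
```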


CAN Communication: Knowing Which Node Transmitted Data

I am new to CAN communication and one of my tasks is to use a CANalyzer to learn what message IDs are being used for a product and what data is being sent/received.
The product has multiple nodes that can send/receive CAN messages. I know CAN messages are broadcasted to all the nodes, but the part I'm having a hard time determining is which node transmitted the message and which nodes received it.
So, for example, if I have 3 CAN nodes, is there a way I can determine that Node 1 sent the message and Node 2/3 are receiving the message?
Thank you in advance.
Generally, you can't know this by listening to the CAN bus alone. The same old story whenever someone asks about data on "the CAN bus" applies here: which application layer protocol is it using? "CAN bus" by itself doesn't tell you jack; it's just the specification of the physical and data link layers. The concept of identifying individual nodes does not exist on the data link layer, only on the application layer.
There are two possible ways for you to tell:
If you know the application layer used on top of the physical CAN bus and know that it uses node IDs, then you can tell which node is sending which data by decoding the application-layer protocol (see the sketch after this list).
On each node, you can sniff the Tx signal between the MCU and the CAN transceiver with an oscilloscope. That signal only goes active when the node is transmitting or ACKing. Most modern scopes have a CAN frame decoder feature, saving you the headache of decoding the frames manually.
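As an example of the first approach: if the bus happens to run J1939 (one common application layer), the transmitting node's source address sits in the lowest 8 bits of the 29-bit identifier, so the sender can be read straight off each frame. A minimal sketch with python-can; the channel name is an assumption:

```python
import can

bus = can.interface.Bus(channel='can0', bustype='socketcan')

for msg in bus:                      # iterate over received frames
    if msg.is_extended_id:           # J1939 uses 29-bit identifiers
        source_address = msg.arbitration_id & 0xFF
        print("frame 0x%08X sent by node 0x%02X"
              % (msg.arbitration_id, source_address))
```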

Best approach to program a PLC using dbc file? (CAN communication)

This is probably not the right forum to ask this, but I need to program a Phoenix Contact PLC in Structured Text. The PLC should communicate with a motor that uses the CANopen protocol. The only resource I have is the dbc file. I'm kinda lost about where to start. If you have some suggestions/recommendations, I would appreciate it.
Implementing a full CANopen stack can be very time consuming and complex, but getting a node working with your software can be easy, depending on the slave you have to control.
A good tip would be to first connect the motor to a CAN viewer like PCAN-View, or whatever fits the hardware you have.
The first step is to send the NMT start message at address 0x000 with DLC 2 and data 01 00 to start all slaves, or 01 <motor node id> to start only the motor.
This will force the motor into operational mode, and you will be able to see whether it sends anything or not.
You should then see a heartbeat message with ID 0x700 + node id, and maybe some PDOs with the current speed, temperature, or whatever (the motor documentation should help you understand what the motor sends and expects).
In your software you would then have to implement: sending the start message if there is no heartbeat or the heartbeat value differs from 0x05 (which means operational), reading the PDO if there is one, and a PDO control command that gives the motor the required speed (you should find the message layout in the motor documentation).
There might be other needs, like SDO parametrization, but it's worth trying these few steps first; a rough prototype of the sequence is sketched below.
Please let me know if I can help further.
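Here is that bring-up sequence sketched with python-can, useful for prototyping against the motor before touching the PLC. The node id 0x05 and the socketcan channel are placeholders; use your motor's actual id:

```python
import can

NODE_ID = 0x05   # placeholder: your motor's node id

bus = can.interface.Bus(channel='can0', bustype='socketcan')

# NMT start: COB-ID 0x000, DLC 2, data = [command, node id]
# command 0x01 = "start remote node"; node id 0x00 would address all slaves
bus.send(can.Message(arbitration_id=0x000, data=[0x01, NODE_ID],
                     is_extended_id=False))

# Wait for the heartbeat (COB-ID 0x700 + node id) and check the NMT state
while True:
    msg = bus.recv(timeout=2.0)
    if msg is None:
        print("no heartbeat - resend the NMT start message")
        break
    if msg.arbitration_id == 0x700 + NODE_ID:
        state = msg.data[0] & 0x7F          # state lives in bits 0-6
        print("operational" if state == 0x05 else "state 0x%02X" % state)
        break
```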

Is there any way to stop a particular CAN message coming from an external device on the bus in CANoe?

I am looking to test a scenario: how my software will respond to the loss of a particular CAN message coming from an external device. This external device sends many CAN messages on the bus, so I cannot make it stop just one particular message.
Therefore, I am looking for a way in CANoe to stop just one particular CAN message from reaching the bus.
I need your suggestions here.
I have tried to provide as much information as I can; if more is required, kindly ask in the comments. Thanks.
You would have to split the bus into two and configure CANoe to act as a gateway:
You need a network interface with two CAN channels.
You connect your DUT to one channel (say CAN2) and the remaining bus to the other channel (CAN1).
You then configure both busses in CANoe and add a node to both busses in the simulation setup.
This node should listen to all messages received on CAN1 and output them to CAN2 and vice versa.
If you want certain messages not to reach CAN2, you have to adapt the logic of this node.
Refer to this article in the Vector knowledge base on how to set up a gateway between two CAN busses and how to control the message flow between those busses.
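In CANoe itself the gateway node's logic would be written in CAPL. Purely to illustrate the forwarding-and-filtering idea outside CANoe, here is a sketch with python-can on two socketcan channels; the channel names and the blocked ID are made up, and a real gateway would use one thread per direction instead of this polling loop:

```python
import can

BLOCKED_IDS = {0x1A2}   # hypothetical ID(s) that must not reach the DUT

bus1 = can.interface.Bus(channel='can0', bustype='socketcan')  # rest of bus
bus2 = can.interface.Bus(channel='can1', bustype='socketcan')  # DUT side

while True:
    msg = bus1.recv(timeout=0.01)
    if msg is not None and msg.arbitration_id not in BLOCKED_IDS:
        bus2.send(msg)        # forward everything except the blocked IDs
    back = bus2.recv(timeout=0.01)
    if back is not None:
        bus1.send(back)       # forward DUT traffic unfiltered
```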

Do any MQTT clients actually reuse packet identifiers?

The packet identifier is required for certain MQTT control packets (http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/csprd02/mqtt-v3.1.1-csprd02.html#_Toc385349761). It's defined by the standard as a 16-bit integer and is generated by each client. The identifiers are reusable by the client after the acknowledgement packet is received, so the standard allows up to 64k in-flight messages. In practice, the clients I've looked at seem to just increment a counter and so allow a total of 64k messages to be sent by a client. Both of the Rust MQTT client libraries panic when that counter overflows. (UPDATED 2016-09-07: if the Rust clients are compiled in release mode then they don't panic; the value of the Packet Identifier becomes 0 -- in normal circumstances this will work, but...)
Does anyone know of an MQTT client that allows more than 64k messages/client (i.e. re-uses packet identifiers)? I'm wondering if this is a limitation that I need to be aware of in general, or if it's just a few clients. I've taken a quick look at compliance tests and haven't yet seen much to indicate that this is checked -- I'll keep looking.
Edit: It could be that some clients achieve this as a side-effect of limiting the number of in-flight messages. UPDATE 2016-09-07: the Rust clients do it by assuming they can wrap on overflow and never catch up to lagging messages (maybe a good bet, but not assured, and with an ugly outcome if it happens).
As you have pointed out, the packet identifiers are intended as temporary values that must persist until the published packet is received and acknowledged.
Once acknowledged, you can reuse the identifier (or not).
Most clients run on embedded systems, and they don't track more than a single packet (so only a single identifier is being handled), since they wait for the ACK or REC/COMP before publishing anything else.
So for these clients, even a single identifier would be enough.
Please notice that for QoS 1, remembering the identifier is futile, since it's valid to resend the packet if the next packet received is not an ACK (and you have the identifier to reply with in the packet you are receiving).
The rare clients that do support interleaved publish packets only need to support 2 valid identifiers at any time (that is, in the case where they have received a QoS 2 packet, answered with PUBREC, and then receive another QoS 1 or 2 packet).
Once they receive a PUBREL packet, they can reply with a PUBCOMP without needing to remember the identifier (it's in the PUBREL header), so the only time they do need to remember an identifier is between the PUBLISH and the PUBREC packets. Provided they allow interleaved publish packets, the only case where a second identifier is required is when they are publishing while receiving a published packet at the same time.
Now, from the point of view of the broker, most implementations use a 16-bit counter per client, so they could support, in theory, up to 65535 in-transit packets.
In reality, since the minimum size of a publish packet is 8 bytes (usually more), that means having to store at least 9 bytes per potential packet (the additional byte is for remembering the current state in the case of QoS 2), so that's half a MB of memory in the minimal case, but likely much more in real life, since you never have an empty publish payload and topic name.
As you see, it's almost impossible for an embedded system to meet such a storage requirement, so shortcuts are taken.
In most scenarios, either the server does not allow that many unacknowledged packets (by simply replying to the client promptly in order to release the identifier), or it shares the identifier pool between different clients.
So typically, again, the worst case for the broker can only happen if the client does not acknowledge the published packets. If the broker does not get any answer from the client, it can:
close the connection
refuse to send new published information, or
ignore the answer and republish
All of these strategies need to be implemented anyway, since you could have the same issue with a slow client, a fast publisher, and all 65535 identifiers in use.
Since you have these strategies, there is no need to waste half a MB of memory per client; instead, cut your losses much earlier (while keeping a reasonable working condition).
In the end, the packet identifiers are a tool for identifying recent packets, not a tool for indexing all packets received. A counter is good enough for this case, and wrapping around should not pose any issue once you account for the memory and bandwidth requirements.
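To make the "reuse" idea concrete, here is a hypothetical allocator, not taken from any real client library, that hands identifiers out of the 1..65535 range, skips ones still in flight, and returns them to the pool on acknowledgement:

```python
class PacketIdAllocator:
    """Reuses MQTT packet identifiers instead of panicking on overflow."""

    def __init__(self):
        self._next = 1
        self._in_flight = set()

    def acquire(self):
        if len(self._in_flight) >= 65535:
            raise RuntimeError("65535 messages in flight - apply back-pressure")
        while self._next in self._in_flight:      # skip ids still outstanding
            self._next = self._next % 65535 + 1   # wrap within 1..65535
        pid = self._next
        self._in_flight.add(pid)
        self._next = self._next % 65535 + 1
        return pid

    def release(self, pid):
        # call when the PUBACK (QoS 1) or PUBCOMP (QoS 2) arrives
        self._in_flight.discard(pid)
```

The interesting property is that the client's lifetime message count is unbounded; only the number of simultaneously unacknowledged messages is capped.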

akka stream ActorSubscriber does not work with remote actors

http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/stream-integrations.html says:
"ActorPublisher and ActorSubscriber cannot be used with remote actors, because if signals of the Reactive Streams protocol (e.g. request) are lost the the stream may deadlock."
Does this mean akka stream is not location transparent? How do I use akka stream to design a backpressure-aware client-server system where client and server are on different machines?
I must have misunderstood something. Thanks for any clarification.
They are strictly a local facility at this time.
You can connect it to a TCP sink/source, though, and it will apply back-pressure over TCP as well (that's what Akka HTTP does).
How do I use akka stream to design a backpressure-aware client-server system where client and server are on different machines?
Check out streams in Artery (Dec. 2016, so 18 months later):
The new remoting implementation for actor messages was released in Akka 2.4.11 two months ago.
Artery is the code name for it. It’s a drop-in replacement to the old remoting in many cases, but the implementation is completely new and it comes with many important improvements.
(Remoting enables Actor systems on different hosts or JVMs to communicate with each other)
Regarding back-pressure, this is not a complete solution, but it can help:
What about back-pressure? Akka Streams is all about back-pressure but actor messaging is fire-and-forget without any back-pressure. How is that handled in this design?
We can’t magically add back-pressure to actor messaging. That must still be handled on the application level using techniques for message flow control, such as acknowledgments, work-pulling, throttling.
When a message is sent to a remote destination it's added to a queue that the first stage, called SendQueue, is processing. This queue is bounded and if it overflows the messages will be dropped, which is in line with the at-most-once delivery nature of actor messaging. Large amounts of messages should not be sent without application level flow control. For example, if serialization of messages is slow and can't keep up with the send rate, this queue will overflow.
Aeron will propagate back-pressure from the receiving node to the sending node, i.e. the AeronSink in the outbound stream will not progress if the AeronSource at the other end is slower and the buffers have been filled up.
If messages are sent at a higher rate than what can be consumed by the receiving node the SendQueue will overflow and messages will be dropped. Aeron itself has large buffers to be able to handle bursts of messages.
The same thing will happen in the case of a network partition. When the Aeron buffers are full messages will be dropped by the SendQueue.
In the inbound stream the messages are in the end dispatched to the recipient actor. That is an ordinary actor tell that will enqueue the message in the actor’s mailbox. That is where the back-pressure ends on the receiving side. If the actor is slower than the incoming message rate the mailbox will fill up as usual.
Bottom line, flow control for actor messages must be implemented at the application level. Artery does not change that fact.
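Purely as an illustration of the acknowledgment-based flow control mentioned above (this is not Akka or Artery code, and `transport` is a stand-in for whatever actually delivers messages and acks), a sketch of a sender that caps the number of unacknowledged messages:

```python
import threading

class AckWindowSender:
    """Application-level flow control: at most `window` unacked messages."""

    def __init__(self, transport, window=32):
        self._transport = transport
        self._slots = threading.Semaphore(window)   # free send slots

    def send(self, msg):
        self._slots.acquire()        # blocks when the window is full, so
        self._transport.send(msg)    # back-pressure propagates to the caller

    def on_ack(self, msg_id):
        # invoked when the receiving side acknowledges a message
        self._slots.release()
```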
