CAN Network With Single Node (CAN protocol) - can-bus

I am new to the CAN protocol and am going through Robert Bosch's CAN Specification version 2.0, Part B. I can't understand the following lines on page 63:
" Note:
Start up / Wake up:
if during startup only one node is online, and if this node transmits some message, it will get no acknowledgement, detect an error and repeats the message. It can become 'error passive' but not 'bus off' due to this reason."
As far as I understand, when a transmitter detects an error (like an acknowledgement error) it retransmits the message and also increments the transmit error count (TEC) by 8. So if there is only one node, its TEC should increase by 8 every time it transmits a message, and it should go into the 'bus off' condition once the TEC goes above 255.
Can someone please explain why the specification says it can only go 'Error Passive' but not 'Bus off'?

I think you missed this part of the specification:
"Exception 1:
If the TRANSMITTER is 'error passive' and detects an ACKNOWLEDGEMENT ERROR because of not detecting a 'dominant' ACK and does not detect a 'dominant' bit while sending its PASSIVE ERROR FLAG.
"
In this case, the TEC is not changed!
So, in your case, when the only node in the network retransmits every time and the TEC reaches 128, the node becomes 'error passive'. From that point on the exception above applies, the TEC is no longer changed, and hence there is no 'bus off'.
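To make the counting concrete, here is a minimal C sketch of that fault confinement rule as I read it; the type and function names are made up for illustration, since real controllers implement this in hardware.

    #include <stdbool.h>
    #include <stdint.h>

    #define ERROR_PASSIVE_LIMIT 128   /* TEC >= 128 -> error passive */

    typedef struct {
        uint16_t tec;                 /* transmit error counter */
    } can_node_t;

    bool is_error_passive(const can_node_t *n) {
        return n->tec >= ERROR_PASSIVE_LIMIT;
    }

    /* Called when the transmitter detects an acknowledgement error.
     * dominant_during_passive_flag: whether a dominant bit was seen while the
     * node sent its passive error flag. On a bus with only one node this is
     * always false, so Exception 1 applies as soon as the node is error passive. */
    void on_ack_error(can_node_t *n, bool dominant_during_passive_flag) {
        if (is_error_passive(n) && !dominant_during_passive_flag) {
            return;                   /* Exception 1: TEC is not changed */
        }
        n->tec += 8;                  /* normal rule: TEC increases by 8 */
    }

Simulating a lone node that retransmits forever, the TEC climbs 0, 8, 16, ... up to 128; from then on Exception 1 suppresses further increments, so the bus-off threshold of 255 is never crossed.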

Yes, that is correct: if the transmitting ECU does not receive an ACK from any other ECU on the CAN network, it stays in the error passive state.
The reasoning is that CAN needs more than one node on the bus to work at all. If no other ECU is available on the network, that does not mean the transmitting ECU itself is faulty, so instead of going to the bus off state it should stay in the error passive state.

Related

When an error frame is generated, how does it relate to the Transmit Error Count?

I have a question. I am using the CANoe simulation tool.
I know when an error frame occurs. I heard that the condition for an error frame is that the Transmit Error Count should be greater than 127. But on the screen capture, an error frame occurs even though the count is less than that.
What is the reason? And does a Transmit Error Count of 7 mean a consecutive dominant value error?
I want to confirm that the ECU can't receive the HU_DATC_E_03 message because of the error frame.
Please help me.
You got it backwards: functioning CAN nodes are always allowed to send error frames. Until the counter reaches 127, the node is in "error active" mode. Which means that the node is allowed to actively send error frames.
Beyond 127 it goes "error passive". This means that it is no longer allowed to send active error frames, because the node is considered broken and should not be allowed to disrupt bus traffic any longer. It may still listen to the bus but not actively signal errors.
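As a rough sketch, assuming the usual thresholds (128 for error passive, above 255 for bus off), the state could be classified like this; the enum and function are illustrative, not any particular controller's register interface.

    #include <stdint.h>

    typedef enum { ERROR_ACTIVE, ERROR_PASSIVE, BUS_OFF } can_error_state_t;

    /* tec = transmit error counter, rec = receive error counter */
    can_error_state_t can_error_state(uint16_t tec, uint16_t rec) {
        if (tec > 255)                return BUS_OFF;       /* only the TEC can drive a node to bus off */
        if (tec >= 128 || rec >= 128) return ERROR_PASSIVE;
        return ERROR_ACTIVE;
    }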
I don't know this specific tool, but tx error count supposedly just means that the tx error counter has reached the value 7 - that is, there have been 7 failed attempts to send a frame, for whatever reason. It shouldn't necessarily have anything to do with bit stuffing (and CAN bit stuffs after 5 data bits, not 6 as some other networks do).

Can I modify the ACK field or CRC field in a CAN frame?

To generate an error on CAN, I have tried changing the data field, but that only seems to change numerical values.
I want to know how to modify the ACK or CRC field to inject an error.
Can I change those fields with software?
No, you cannot change those fields from software, since that part of the message is always constructed at CAN communication controller level and below (physical layer).
Basically, the ACK field is not set in SW. It is "completed" by the other nodes while the message is being sent, once the bitstream reaches the ACK bit slot.
The CRC is computed at communication controller level, over the payload the application wishes to send.
So in order to inject such faults in a CAN message, you need a special HIL (Hardware in the Loop) device, which will forcefully overwrite fields of your choosing.
One such device is CANstress from Vector, but there are many others.
Regarding the NACK (missing acknowledgement) error, you can simulate that without HIL hardware if you have a simulation environment. Or simply do not turn on the other nodes on the cluster, ensuring there is no other node to ACK the message. Beware: disconnecting the CANH and CANL cables will result in a different error type.
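For completeness, the CRC mentioned above is the CRC-15 defined in the Bosch CAN 2.0 specification (generator polynomial x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1). The sketch below only illustrates what the controller computes in hardware over the outgoing bitstream; software never writes this field.

    #include <stddef.h>
    #include <stdint.h>

    /* bits[] holds the destuffed frame bits from SOF through the end of the
     * data field, one bit per byte (0 or 1). Returns the 15-bit CRC sequence. */
    uint16_t can_crc15(const uint8_t *bits, size_t nbits) {
        uint16_t crc = 0;                              /* CRC_RG in the spec */
        for (size_t i = 0; i < nbits; i++) {
            uint8_t crcnxt = (bits[i] & 1u) ^ ((crc >> 14) & 1u);
            crc = (uint16_t)((crc << 1) & 0x7FFFu);
            if (crcnxt)
                crc ^= 0x4599u;                        /* generator polynomial */
        }
        return crc;
    }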

CAN BUS - ACK field

When a receiving node would like to acknowledge (ACK) the receipt of a frame, what exactly is it supposed to transmit?
The same frame just with a dominant bit for ACK?
No. Every CAN controller on the bus will usually listen to each message being transferred and will automatically check the frame for correctness (CRC).
It will usually also acknowledge the message by overwriting the recessive ACK = 1 (sent by the transmitter) with a dominant ACK = 0 during the message transfer. So there is no second message needed to acknowledge the first one.
That's also why you can't have any CAN bus with just one node, because there is no one else to acknowledge and check its sent frames.
Of course, in some controllers these checks can be deactivated or ignored, but not in the common use case.
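As a toy illustration of that overwriting, assuming dominant = 0 and recessive = 1: the bus level behaves like the logical AND of what every node drives, so a single receiver driving the ACK slot dominant is enough to acknowledge the frame.

    #include <stdio.h>

    int main(void) {
        int transmitter_drive = 1;              /* transmitter sends a recessive ACK slot */
        int receiver_drive[3] = {1, 0, 1};      /* one receiver pulls the slot dominant */

        int bus_level = transmitter_drive;
        for (int i = 0; i < 3; i++)
            bus_level &= receiver_drive[i];     /* wired-AND behaviour of the bus */

        printf("ACK slot: %s\n", bus_level ? "recessive (no ACK seen)"
                                           : "dominant (frame acknowledged)");
        return 0;
    }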

How does CAN bus addressing work?

How does a CAN bus controller decide, based on the message identifier, that a particular message belongs to it? Is it that the receiver already knows that if the identifier has, say, the value 5, then the message is for it? And do we program the receiver to tell it that it should be interested in value 5?
The software in the CAN node must decide what message IDs it is interested in, based on the network specification which is usually some kind of document or other electronic representation of which messages contain what sorts of information. If a message arrives that is of no interest, it simply does not process it and the software returns to what it was doing just before the message arrived (assuming interrupt driven CAN handling).
Some CAN controllers (i.e. the part of the chip which does the CAN protocol transmission and reception) have message filtering, which means that uninteresting messages can be dropped before they reach the software. Other controllers have message filtering which can be set to accept only a single message ID in a particular "message box", and these can be configured to accept the messages you are interested in. Again, other messages are dropped. Some controllers have both filters and message boxes.
At the CAN protocol level all nodes in a CAN network are equal and make a decision about whether to process a message or not. A "CAN controller" is a higher-level concept; it still needs to examine the message identifier like any other node.
Note that "processing" a message is different to the CAN protocol message check and acknowledgement. All nodes take part in that processing unless they're in "listen only" mode.
Update:
How you decide which message to process depends on what you are trying to do and the higher level protocol in use over CAN. In principle you mask out the ID bits that are relevant and then test them to see whether the message should be processed.
For example if you want to process all messages with 5 (binary 0101) in the low order four bits, your mask is 15 (binary 1111), you binary-and this with the received message ID, and then you compare the result with five.
For example:
(msg_id & 15) == 5
is a way of coding that test. Which bits you care about and the implementation details depend on many other factors.
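Here is a small sketch of that test in the filter/mask form most controllers and drivers use; the function names are made up for illustration.

    #include <stdbool.h>
    #include <stdint.h>

    /* Accept the message if the ID bits selected by the mask match the filter. */
    bool id_accepted(uint32_t rx_id, uint32_t filter_id, uint32_t mask) {
        return (rx_id & mask) == (filter_id & mask);
    }

    /* The example from the text: accept every ID whose low four bits equal 5. */
    bool wants_message(uint32_t msg_id) {
        return id_accepted(msg_id, 0x5u, 0xFu);   /* same as (msg_id & 15) == 5 */
    }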
Specifically for PDU1 (Protocol Data Unit) messages, a destination address is specified (byte 3). If a device receives a message not addressed to it, it can simply ignore it. Addresses are assigned by various standards, or a manufacturer may assign them ad-hoc.
In the general case the CAN-ID (bytes 0-4) contains all the details about what kind of message it is, and devices can inspect particular fields to decide whether they care about the message. For example the transmission controller probably doesn't care about battery status messages, nor the fuel gauge about which doors are locked.

Writing a stream protocol: Message size field or Message delimiter?

I am about to write a message protocol going over a TCP stream. The receiver needs to know where the message boundaries are.
I can either send 1) fixed length messages, 2) size fields so the receiver knows how big the message is, or 3) a unique message terminator (I guess this can't be used anywhere else in the message).
I won't use #1 for efficiency reasons.
I like #2 but is it possible for the stream to get out of sync?
I don't like idea #3 because it means receiver can't know the size of the message ahead of time and also requires that the terminator doesn't appear elsewhere in the message.
With #2, if it's possible to get out of sync, can I add a terminator or am I guaranteed to never get out of sync as long as the sender program is correct in what it sends? Is it necessary to do #2 AND #3?
Please let me know.
Thanks,
jbu
You are using TCP, so packet delivery is reliable. Either the connection drops or times out, or you will read the whole message.
So option #2 is ok.
I agree with sigjuice.
If you have a size field, it's not necessary to add an end-of-message delimiter; however, it's a good idea.
Having both makes things much more robust and easier to debug.
Consider using the standard netstring format, which includes both a size field and an end-of-string character.
Because it has a size field, it's OK for the end-of-string character to be used inside the message.
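Here is a minimal sketch of that netstring framing ("<length>:<payload>,"); it assumes the caller accumulates received bytes into a buffer and makes no claim to match any particular netstring library's API.

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Encode payload into out as "<len>:<payload>,". Returns bytes written,
     * or 0 if out is too small. */
    size_t netstring_encode(const char *payload, size_t len, char *out, size_t outcap) {
        int header = snprintf(out, outcap, "%zu:", len);
        if (header < 0 || (size_t)header + len + 1 > outcap) return 0;
        memcpy(out + header, payload, len);
        out[header + len] = ',';
        return (size_t)header + len + 1;
    }

    /* Decode one netstring at the start of buf. Returns a pointer to the payload
     * and sets *len_out, or NULL if the buffer is malformed or incomplete. */
    const char *netstring_decode(const char *buf, size_t buflen, size_t *len_out) {
        size_t i = 0, len = 0;
        while (i < buflen && buf[i] >= '0' && buf[i] <= '9')
            len = len * 10 + (size_t)(buf[i++] - '0');
        if (i == 0 || i >= buflen || buf[i] != ':') return NULL;  /* bad length field */
        i++;                                                      /* skip ':' */
        if (buflen < i + len + 1) return NULL;                    /* not all bytes arrived yet */
        if (buf[i + len] != ',') return NULL;                     /* framing error */
        *len_out = len;
        return buf + i;
    }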
If you are developing both the transmit and receive code from scratch, it wouldn't hurt to use both length headers and delimiters. This would provide robustness and error detection. Consider the case where you just use #2. If you write a length field of N to the TCP stream, but end up sending a message whose size differs from N, the receiving end wouldn't know any better and would end up confused.
If you use both #2 and #3, while not foolproof, the receiver can have a greater degree of confidence that it received the message correctly if it encounters the delimiter after consuming N bytes from the TCP stream. You can also safely use the delimiter inside your message.
Take a look at HTTP Chunked Transfer Coding for a real world example of using both #2 and #3.
Depending on the level at which you're working, #2 may actually not have any issues with going out of sync (TCP has sequence numbering in its packets and reassembles the stream in the correct order for you if it arrives out of order).
Thus, #2 is probably your best bet. In addition, knowing the message size early on in the transmission will make it easier to allocate memory on the receiving end.
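As a sketch of the receive side for option #2, assuming POSIX sockets and a 4-byte big-endian length prefix (error handling kept minimal): because TCP is a byte stream, recv() may return partial data, so the reader loops until it has the whole prefix and then the whole payload.

    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    /* Read exactly n bytes, looping over partial recv() results. */
    static int read_exact(int fd, void *buf, size_t n) {
        uint8_t *p = buf;
        while (n > 0) {
            ssize_t r = recv(fd, p, n, 0);
            if (r <= 0) return -1;             /* peer closed or error */
            p += r;
            n -= (size_t)r;
        }
        return 0;
    }

    /* Read one "<4-byte length><payload>" message; caller frees the result. */
    static uint8_t *read_message(int fd, uint32_t *len_out) {
        uint32_t len_be;
        if (read_exact(fd, &len_be, sizeof len_be) != 0) return NULL;
        uint32_t len = ntohl(len_be);          /* length prefix sent in network byte order */
        uint8_t *payload = malloc(len ? len : 1);
        if (!payload) return NULL;
        if (read_exact(fd, payload, len) != 0) { free(payload); return NULL; }
        *len_out = len;
        return payload;
    }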
Interesting that there is no clear answer here. #2 is generally safe over TCP and is done "in the real world" quite often. This is because TCP guarantees that all data arrives both uncorrupted* and in the order that it was sent.
*Unless corrupted in such a way that the TCP checksum still passes.
Answering an old message since there is stuff to correct:
Unlike what many answers here claim, TCP does not guarantee that data arrives uncorrupted. Not even practically.
The TCP protocol has a 2-byte checksum (not a CRC) that obviously has roughly a 1-in-65536 chance of collision if more than one bit flips. This is such a small chance that it will never be encountered in tests, but if you are developing something that either transmits large amounts of data and/or is used by very many end users, that dice gets thrown trillions of times (not kidding, YouTube throws it about 30 times a second per user).
Option 2: size field is the only practical option for the reasons you yourself listed. Fixed length messages would be wasteful, and delimiter marks necessitate running the entire payload through some sort of encoding-decoding stage to replace at least three different symbols: start-symbol, end-symbol, and the replacement-symbol that signals replacement has occurred.
In addition to this, one will most likely want to use some sort of error checking with a serious checksum, probably implemented in tandem with the encryption protocol as a message validity check.
As to the possibility of getting out of sync:
This is possible per message, but has a remedy.
A useful scheme is to start each message with a header. This header can be quite short (under 30 bytes) and contain the message payload length, the expected checksum of the payload, and a checksum for that first portion of the header itself. Messages will also have a maximum length. Such a short header can also be delimited with known symbols.
Now the receiving end will always be in one of two states:
Waiting for new message header to arrive
Receiving more data to an ongoing message, whose length and checksum are known.
This way the receiver will, in any situation, be out of sync for at most the maximum length of one message (assuming a header arrived with a corrupted message length field).
With this scheme all messages arrive as discrete payloads, the receiver cannot get stuck forever even with maliciously corrupted data in between, the length of arriving payloads is known in advance, and a successfully transmitted payload has been verified by an additional longer checksum, and that checksum itself has been verified. The overhead for all this can be a mere 26-byte header containing three 64-bit fields and two delimiting symbols.
(The header does not require replacement-encoding since it is expected only in a state without an ongoing message, and the entire 26 bytes can be processed at once.)
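Here is a sketch of that header as a C struct, following the "three 64-bit fields plus two delimiting symbols" layout (26 bytes). The delimiter values, the maximum length, and the choice of checksum algorithm are assumptions for illustration.

    #include <stdint.h>

    #define MSG_START_SYMBOL  0x02          /* assumed start delimiter */
    #define MSG_END_SYMBOL    0x03          /* assumed end delimiter */
    #define MSG_MAX_PAYLOAD   (1u << 20)    /* assumed maximum message length */

    #pragma pack(push, 1)
    typedef struct {
        uint8_t  start;                     /* known start symbol */
        uint64_t payload_len;               /* length of the payload that follows */
        uint64_t payload_checksum;          /* checksum the payload must match */
        uint64_t header_checksum;           /* checksum over the fields above */
        uint8_t  end;                       /* known end symbol */
    } msg_header_t;                         /* 1 + 8 + 8 + 8 + 1 = 26 bytes */
    #pragma pack(pop)

The receiver then alternates between the two states described above: scanning for a start symbol followed by a header whose header_checksum verifies, and reading exactly payload_len bytes and checking them against payload_checksum.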
There is a fourth alternative: a self-describing protocol such as XML.
