CAN bus-off with CAN_High and CAN_Low shorted

It is a known fact that short-circuiting CAN_High and CAN_Low on a CAN bus leads to a bus-off condition.
With respect to the physical layer, how does this short lead to the bus-off condition?

CAN is a differential protocol. That means a 0 or 1 (more precisely, recessive or dominant) is decided by the difference between the voltages on the CANH and CANL lines.
When you short these two lines, there is no voltage difference, and that falls inside the voltage range of a recessive bit. In other words, a shorted bus reads as a continuous stream of recessive bits.
A transmitting node always monitors the bus. With the lines shorted it can never drive the bus dominant, so as soon as it sends a dominant bit (the start-of-frame bit, for example) and reads back recessive, it detects a bit error.
Each failed transmission increments the transmit error counter (TEC) by 8, and when the TEC exceeds 255 the CAN controller goes into the BUS_OFF state.
Because the bus stays shorted, every retransmission fails the same way, so the counter reaches that limit almost immediately.
The CAN protocol does have a bus-off recovery mechanism: the controller must observe 11 consecutive recessive bits 128 times (which it will, since the shorted bus always reads recessive) before it may rejoin the bus. But as soon as it tries to transmit again, the same errors occur and it falls back into BUS_OFF.
This cycle continues until the short is removed.
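
As a rough illustration of the counting described above (my own sketch, not code from any real CAN controller), here is how quickly a permanently failing transmitter reaches bus-off, assuming the simplified rule that every failed transmission adds 8 to the transmit error counter:

    # Rough sketch (my own illustration, not real CAN controller code) of the
    # fault-confinement counting described above. It assumes the simplified rule
    # that every failed transmission adds 8 to the transmit error counter (TEC);
    # real controllers implement the full CAN fault-confinement rules.
    BUS_OFF_LIMIT = 255

    def attempts_until_bus_off(tec_step: int = 8) -> int:
        """Count how many consecutive failed transmissions push the TEC past 255."""
        tec = 0
        attempts = 0
        while tec <= BUS_OFF_LIMIT:
            attempts += 1      # the node retries the frame...
            tec += tec_step    # ...and fails again while the bus is shorted
        return attempts

    print(attempts_until_bus_off())  # 32 failed transmissions -> BUS_OFF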

Related

Time needed to generate all of the bits in a packet... why is the packet size divided by the data rate?

I'm learning about packet-switching systems and trying to understand this problem
from a textbook. It's about the time needed to generate all of the bits in a packet. What we learned so far was calculating the delays that happen after a packet has been made, so the time to build a packet feels new. Can anyone help me understand why they divided the packet size by the data rate in the solution?
Information:
"Host A converts analog voice to a digital 64 kbps bit stream on the fly.
Host A then groups the bits into 56-byte packets."
Answer: (56*8) / (64*1000) = 0.007 s = 7 ms
They are calculating the time needed to generate all of bits in a packet.
Each new bit is added to the packet until the packet is full. The full packet
will then be sent on its way, and a new empty packet will be created to hold the
next set of bits. Once it fills up, it will be sent also.
This means that each packet will contain bits that range from brand new ones, to
bits that have been waiting around for up to 7ms. (The age of the oldest bit in
the packet is important, because it contributes to the observed latency of the
application.)
Your bits are being created in a stream at a fixed rate of 64*1000 bits per
second. In one second, 64,000 bits are generated, so one bit is generated every
1/64,000 s ≈ 0.0156 milliseconds.
Those bits are assembled into packets where each packet contains exactly
56*8 = 448 bits. At about 0.0156 milliseconds per bit, all 448 bits
are created in 7 milliseconds.
I like to sanity-check this kind of formula by looking at the units: BITS / SECOND.
56*8 bits / 0.007 seconds = 64,000 bits/second, which is exactly your bit rate.
If BITRATE = BITS / SECONDS then by simple algebra, SECONDS = BITS / BITRATE
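
A quick numeric check of this reasoning (my own sketch; the numbers are taken straight from the question):

    # Quick numeric check of the reasoning above (values taken from the question).
    bits_per_packet = 56 * 8      # 56-byte packets
    bit_rate = 64 * 1000          # 64 kbps voice stream

    time_per_bit_ms = 1000 / bit_rate                   # ~0.0156 ms per bit
    fill_time_ms = bits_per_packet / bit_rate * 1000    # time to fill one packet

    print(f"{time_per_bit_ms:.4f} ms per bit")          # 0.0156 ms
    print(f"{fill_time_ms:.1f} ms to fill one packet")  # 7.0 ms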

UART transfer speed

I want to check whether my understanding is correct, but I cannot find any precise explanation or examples. Let's say I have UART communication set to 57600 bits/second and I am transferring 8-bit chars. I choose to have no parity, and since I need one start bit and one stop bit, transferring one char essentially requires 10 bits. Does that mean the transfer speed would be 5760 chars/second?
Your calculation is essentially correct.
But the 5760 chars/second would be the maximum transfer rate. Since it's an asynchronous link, the UART transmitter is allowed to idle the line between character frames.
In other words, the baud rate only applies to the bits within a character frame.
The rate at which characters are actually transmitted depends on whether there is data available to keep the transmitter busy/saturated.
For instance, if a microcontroller uses programmed I/O (polled or interrupt-driven) instead of DMA for UART transmitting, higher-priority interrupts could stall the transmissions and introduce idle gaps between frames.
Baud rate = 57600
Time for 1 bit: 1 / 57600 ≈ 17.36 µs
Time for a 10-bit frame: ≈ 173.6 µs
This means at most 1 / 173.6 µs ≈ 5760 frames (characters) per second.
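
The same arithmetic as a small runnable sketch (my own restatement, assuming an 8N1 frame of 1 start + 8 data + 1 stop bits and a fully saturated transmitter):

    # Numeric restatement of the frame arithmetic above, assuming an 8N1 frame
    # (1 start bit + 8 data bits + 1 stop bit) and no idle time between frames.
    baud = 57600
    bits_per_frame = 1 + 8 + 1

    bit_time_us = 1e6 / baud                      # ~17.36 us per bit
    frame_time_us = bits_per_frame * bit_time_us  # ~173.6 us per character frame
    max_chars_per_s = baud / bits_per_frame       # 5760 characters/s at best

    print(f"{bit_time_us:.2f} us/bit, {frame_time_us:.1f} us/frame, "
          f"{max_chars_per_s:.0f} chars/s max")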

Which one is the better CRC scheme?

Say I have to error-check a message some 120 bits long. I have two alternative checksum schemes:
Split the message into five 24-bit strings and append a CRC8 field to each
Append a CRC32 field to the whole message
Which scheme has a higher error-detection probability, and why? Let's assume no prior knowledge of the error-pattern distribution.
UPDATE:
What if the system has a natural failure mode in which a set bit is received cleared (i.e., a "1" was Tx-ed but a "0" was Rx-ed), and the opposite never happens?
In this case the probability of long bursts of error bits is much smaller: assuming the valid data has a uniform distribution of "0"s and "1"s, the longest burst is bounded by the longest run of "1"s in the message.
You have to make some assumption about the error patterns. If you have a uniform distribution over all possible errors, then five 8-bit CRCs will detect more of the errors than one 32-bit CRC, simply because the former has 40 bits of redundancy.
However, I can construct many 24-bit error patterns that fool an 8-bit CRC, and use any combination of five of those to get no errors over all of the 8-bit CRCs. Yet almost all of those will be caught by the 32-bit CRC.
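
To make that concrete, here is a small sketch of my own (not code from either answer) that brute-forces a 24-bit error pattern invisible to an 8-bit CRC, applies the same pattern to all five chunks, and checks whether a single CRC-32 over the whole message still notices. The CRC-8 polynomial (0x2F, MSB-first) and the message bytes are arbitrary choices for the demo:

    # Sketch (my own illustration): find a 24-bit error pattern an 8-bit CRC cannot
    # see, apply it to every 24-bit chunk, then compare against one CRC-32 over the
    # whole 120-bit message. Polynomial and message bytes are arbitrary assumptions.
    import zlib

    def crc8(data: bytes, poly: int = 0x2F) -> int:
        """Bitwise MSB-first CRC-8 with zero init and no final XOR."""
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc

    # Because this CRC is linear over GF(2), an error pattern E slips through exactly
    # when crc8(E) equals the CRC of an all-zero chunk; search for the smallest such E.
    target = crc8(bytes(3))
    pattern = next(e for e in range(1, 1 << 24)
                   if crc8(e.to_bytes(3, "big")) == target)
    err = pattern.to_bytes(3, "big")

    msg = bytes(range(15))                              # any 120-bit message will do
    bad = bytes(m ^ e for m, e in zip(msg, err * 5))    # same error in all five chunks

    chunks_pass = all(crc8(bad[i:i + 3]) == crc8(msg[i:i + 3]) for i in range(0, 15, 3))
    print("five CRC-8 checks still pass:", chunks_pass)                 # expect True
    print("CRC-32 still matches:", zlib.crc32(bad) == zlib.crc32(msg))  # expect False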
A good paper by Philip Koopman goes through the evaluation of several CRCs, mostly focusing on their Hamming distance. As Mark Adler pointed out, the error distribution plays an important role in CRC selection (e.g. burst-error detection is one of the properties that varies between CRCs), as does the length of the CRC'ed data.
The Hamming distance (HD) of a CRC tells you the smallest error it can miss: a CRC with Hamming distance HD detects 100% of all error patterns of HD-1 or fewer bit errors.
Ref:
Philip Koopman, "Cyclic Redundancy Code (CRC) Polynomial Selection For Embedded Networks":
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.5.5027&rep=rep1&type=pdf
8-bit vs 32-bit CRC
For example, the 0x97 8-bit CRC polynomial has HD=4 for data words up to 119 bits (more than your 24-bit chunks), which means it detects 100% of all errors of 3 bits or fewer in data of that length.
On the 32-bit side, the 32-bit CRC 0x9d7f97d6 offers HD=9 for data words up to 223 bits (more than 5*24 = 120 bits), so it detects 100% of all errors of 8 bits or fewer in data of that length.
In theory, then, the five 8-bit CRCs can 100% detect up to 5*3 = 15 bit flips if they are evenly distributed across the chunks (3 errors per 24-bit chunk), while the single 32-bit CRC can only guarantee detection of 8 bit flips anywhere in the 120-bit message.
Error Distribution
Knowing all that, the only missing piece is the error-distribution pattern. With it in hand, you can make an informed decision about the best CRC scheme to use. You seem to say that long bursts of errors are not possible, but you do not mention their maximum length. If that length can reach 8 bits, you might be better off with the CRC32. If you only expect occasional errors of 3 bits or fewer, both schemes will do, though the 5x8-bit scheme consumes more bandwidth (40 bits instead of 32). In that case a 32-bit CRC might even be overkill; a smaller CRC16 or even CRC9 could provide enough detection capability.
Beyond its Hamming distance limit, a CRC cannot catch every possible error, and the longer the protected data, the worse the CRC performs.
The CRC32, of course. It will detect ordering errors between the five segments, as well as giving you 2^24 times as much error-detection capability.

Bit error rate uncoded vs Bit error rate digital communication

[Graph: BER versus Eb/No for uncoded BPSK and Hamming(7,4)-coded BPSK over AWGN]
Above is a graph showing the BER (bit error rate) at different Eb/No values for BPSK over an AWGN channel. The pink curve shows the BER of the uncoded system (without channel encoder and decoder), while the black curve shows the BER of the system using a Hamming (7,4) code for channel coding. However, I can't explain why the two curves intersect and cross over at about 6 dB.
I started writing this in a comment and it started getting long. I figure this is either correct or not; it makes sense to me, but you may need to do more research beyond this.
Note: I am aware BER is normally measured over long runs of bits, but for our purpose we will look at something smaller.
My first assumption (based on your graph) is that the BER is measured on the decoded data, not on the raw channel bits. Suppose the channel corrupts 1 bit in every 7: the Hamming-coded system can correct that single error per 7-bit codeword, so its decoded data ends up with 0 errors, while the uncoded stream keeps its 1-in-7 error rate.
Initial:
Unencoded: 1 error every 7 bits received
Hamming(7,4): 0 errors every 4 data bits (the single channel error per codeword is corrected)
Now let's increase the noise, thereby increasing the error rate of the entire signal.
Highly increased BER:
Unencoded: 3.5 errors in 7 bits (50%, averaged over many sequences)
Hamming(7,4): 2 errors in 4 bits (50%; the decoder can no longer correct multiple errors per codeword)
Somewhere during the increase in BER these curves must cross over, as you are seeing on your graph. Beyond the crossover I would expect the Hamming side to be worse: at the same Eb/No each coded channel bit carries only 4/7 of the energy of an uncoded bit, so the channel error rate is higher, and once a codeword takes multiple hits the decoder can mis-correct and add errors. I am sure you could calculate this mathematically; it just intuitively makes sense to me.
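
Here is a rough sketch of my own that reproduces the crossover with textbook approximations rather than simulation; the (i+1)/n weighting used for the post-decoding BER of Hamming(7,4) is a common rule of thumb, not an exact result:

    # Rough sketch (my own, not from the answer above): approximate BER curves for
    # uncoded BPSK and hard-decision Hamming(7,4)-coded BPSK over AWGN.
    import math

    def q(x: float) -> float:
        """Gaussian tail probability Q(x)."""
        return 0.5 * math.erfc(x / math.sqrt(2))

    def ber_uncoded(ebno_db: float) -> float:
        ebno = 10 ** (ebno_db / 10)
        return q(math.sqrt(2 * ebno))             # BPSK over AWGN

    def ber_hamming74(ebno_db: float) -> float:
        ebno = 10 ** (ebno_db / 10)
        p = q(math.sqrt(2 * (4 / 7) * ebno))      # rate 4/7: less energy per channel bit
        # The decoder fails whenever 2 or more of the 7 bits are hit; a failed decode
        # is assumed to leave about (i + 1) wrong bits out of 7 after mis-correction.
        return sum((i + 1) / 7 * math.comb(7, i) * p**i * (1 - p)**(7 - i)
                   for i in range(2, 8))

    for db in range(0, 11):
        print(f"{db:2d} dB  uncoded {ber_uncoded(db):.2e}  "
              f"Hamming(7,4) {ber_hamming74(db):.2e}")

With these approximations the coded curve is worse at low Eb/No and crosses below the uncoded curve at roughly 5 to 6 dB, consistent with the graph in the question.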

Does CRC have the following feature?

When a data transmission is corrupted by 1 or 2 bits, can the receiver correct it automatically?
No, CRC is an error-detecting code, not an error-correcting code.
CRC is primarily used as an error-detecting code. If the total number of bits (including those in the CRC) is smaller than the CRC's period, though, it is possible to correct single-bit errors by computing the syndrome (the XOR of the calculated and received CRCs). Each bit, if flipped individually, generates a unique syndrome. One can iterate the CRC algorithm to find the syndrome associated with each bit position; once the observed syndrome matches one of them, flipping that bit corrects a single-bit error.
One major danger with doing this, though, is that the CRC becomes much less useful for rejecting bogus data. If one uses an 8-bit CRC on a packet with 15 bytes of data, only one in 256 random packets would pass validation, but about half of all random packets could be "corrected" into something that validates by flipping a single bit.
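
Here is a hypothetical sketch of that syndrome trick (my own illustration, not a recommendation). The CRC-8 polynomial 0x07 and the 8-byte message length are assumptions chosen so that every single-bit error should produce a unique syndrome; the assert verifies this at table-build time:

    # Hypothetical sketch of syndrome-based single-bit correction with a CRC-8.
    # Polynomial 0x07 and the 8-byte message length are assumptions for the demo.
    def crc8(data: bytes, poly: int = 0x07) -> int:
        """Bitwise MSB-first CRC-8 with zero init and no final XOR."""
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc

    MSG_LEN = 8  # bytes; short enough that single-bit syndromes do not collide

    def build_syndrome_table(length: int) -> dict:
        """Map the syndrome of every possible single-bit flip to its bit position."""
        table = {}
        for pos in range(length * 8):
            flipped = bytearray(length)
            flipped[pos // 8] ^= 1 << (7 - pos % 8)
            syndrome = crc8(bytes(flipped))
            assert syndrome and syndrome not in table, "message too long for unique syndromes"
            table[syndrome] = pos
        return table

    SYNDROMES = build_syndrome_table(MSG_LEN)

    def correct_single_bit(data: bytes, received_crc: int) -> bytes:
        """Repair at most one flipped data bit using the syndrome (computed XOR received CRC)."""
        syndrome = crc8(data) ^ received_crc
        if syndrome == 0:
            return data                  # nothing to fix (or an undetectable error)
        pos = SYNDROMES[syndrome]        # a KeyError here would mean an uncorrectable error
        fixed = bytearray(data)
        fixed[pos // 8] ^= 1 << (7 - pos % 8)
        return bytes(fixed)

    msg = b"CANbusOK"
    tx_crc = crc8(msg)
    damaged = bytearray(msg)
    damaged[3] ^= 0x20                                  # one bit flipped in transit
    print(correct_single_bit(bytes(damaged), tx_crc))   # b'CANbusOK'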
