Assuming a number of different nodes and models for remaining bus simulation in CANoe produce and send out CAN frames, what is the general way to prevent CANoe from sending a message with a certain frame ID onto the (real) CAN bus? Filtering seems to only consider incoming messages.
There are different ways to tell individual nodes not to send a frame (by interacting with the IL settings), but is there a general way? CANalyzer seems to offer inserting nodes into the bus/evaluation tree that do not pass frames through transparently, but I neither see this option in CANoe nor could I confirm that it works for outgoing messages as well.
You can add output filters in the simulation setup to each simulated node.
Right-click on the leg that connects the node to the bus, then choose Output -> Insert Filter.
This inserts a small node between the ECU and the bus. By double-clicking on that node you can configure which messages are to be filtered out.
Unfortunately there seems to be no way to configure this bus-wide. I.e. you would have to do this node-by-node.
HTH
Are there any Wireshark gurus?
I am debugging an issue on my home Zigbee network. I have a sniffer dongle and I can catch all the packets transmitted. Since my network has ~40 devices, the air is quite cluttered with packets I am not really interested in. I am looking for a way to filter out the uninteresting messages.
Questions:
Is there a way to filter out messages belonging to the IEEE 802.15.4 protocol (various Data Requests and Acks), while leaving only upper-layer messages (Zigbee, Zigbee HA)?
Is there a way to assign human readable labels for the devices on the network? e.g. 'Coordinator' instead of 0x0000, or 'Light Switch' instead of '0xc83a'?
I would propose making your own filters (whoa, relax, we are not animals, we do this the smart way). Go to Statistics > Protocol Hierarchy; the panel shows you all the traffic broken down by protocol. Then identify which protocols you do not want to see, mark them (one by one), right-click and choose Prepare as Filter > ...and not Selected (to prepare a filter that excludes the highlighted protocol). Sadly, Wireshark does not allow you to select multiple protocols to exclude at once. After this, simply save your filter and reuse it as much as you like.
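For this particular case, either of these display filters should get close (protocol and field names are from Wireshark's 802.15.4 "wpan" and Zigbee "zbee" dissectors; verify them against your version):

```
zbee_nwk || zbee_aps
!(wpan.frame_type == 0x2 || wpan.frame_type == 0x3)
```

The first keeps only Zigbee NWK/APS-layer traffic; the second hides the 802.15.4 Acks and MAC commands.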
What you are trying to do is called name resolution. This is done via configuration files, more specifically ethers and hosts.
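As a sketch, entries like these in the ethers file of your Wireshark profile directory (e.g. ~/.config/wireshark/) map hardware addresses to labels; the addresses below are hypothetical, and whether your Wireshark build applies ethers entries to 802.15.4 EUI-64 addresses is worth verifying:

```
# ~/.config/wireshark/ethers (hypothetical example addresses)
00:0d:6f:00:01:23:45:67   Coordinator
00:0d:6f:00:89:ab:cd:ef   LightSwitch
```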
I have a system with multiple subsystems communicating over CANopen. There is a main unit with a screen (for the man-machine interface and such) and sub-units for minor operations (like sampling button status, managing power, and taking measurements).
We defined a CANopen-based communication protocol for this system. Subsystems share their conditions periodically via TPDO messages and act on the main unit's commands sent via RPDO messages. Some NMT commands are in use too.
So I've been asked to add a new command to this protocol: zeroize. This command shall be sent as a broadcast and shall cause every unit to delete its software. What is the right way to do this?
Maybe I can use an RPDO? Are we allowed to define new NMT commands in CANopen? Maybe I can do it with NMT, but using a new command that is not in use already?
Thanks in advance
Ip.
It is a bit confusing what you mean by TPDO and RPDO, since the main unit's TPDO is going to be the peripheral units' RPDO and vice versa. But yes, the correct way to send out a custom broadcast message would be with a PDO.
Although, depending on what you mean by "delete software", CANopen might provide a means for it. There are the save (OD 1010h) and load (OD 1011h) registers in the object dictionary. Save is used to store all CANopen communication parameters (PDO configuration, mapping, etc.) in non-volatile memory, and load restores the CANopen parameters to factory defaults. These should, however, not be used to save/load application-specific settings.
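For reference, a sketch of the raw SDO requests for those two registers; both are guarded by ASCII signatures ("save"/"load") so a stray write can't trigger them. Byte values follow CiA 301; send_can_frame() is a hypothetical helper standing in for your CAN driver:

```c
#include <stdint.h>

/* Hypothetical helper; replace with your CAN driver's send routine. */
void send_can_frame(uint32_t cob_id, const uint8_t data[8]);

/* Expedited SDO download of 4 bytes (command byte 0x23); index and
 * sub-index are little-endian, COB-ID is 0x600 + node ID. */
void store_params(uint8_t node_id)
{
    uint8_t save_req[8] = {0x23, 0x10, 0x10, 0x01, 's', 'a', 'v', 'e'}; /* 1010h:01 */
    send_can_frame(0x600u + node_id, save_req);
}

void restore_defaults(uint8_t node_id)
{
    uint8_t load_req[8] = {0x23, 0x11, 0x10, 0x01, 'l', 'o', 'a', 'd'}; /* 1011h:01 */
    send_can_frame(0x600u + node_id, load_req);
}
```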
You are not allowed to define new NMT commands.
Objects 1010h and 1011h can be used to reset the values in the object dictionary. If you really want to delete the software, the firmware upgrade protocol from CiA 302-3 might help. Writing 00h (Stop program) followed by 03h (Clear program) to object 1F51h sub-index 1 on the slave will delete the application. Whether it's actually "zeroed out" depends on the implementation. You'll need two SDO requests per slave for this though. The standard specifies that object 1F51h cannot be PDO mapped. Although that requirement may not be enforced for your devices, in which case you could achieve broadcast "zeroing" with two PDOs.
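To make that concrete, here is a sketch of the two requests against object 1F51h sub-index 1 (Program control, CiA 302-3); send_can_frame() is again a hypothetical helper, and a real master must wait for each SDO response before continuing:

```c
#include <stdint.h>

void send_can_frame(uint32_t cob_id, const uint8_t data[8]); /* hypothetical helper */

/* Expedited SDO download of 1 byte (command byte 0x2F) to 1F51h:01. */
void zeroize_node(uint8_t node_id)
{
    uint8_t stop_program[8]  = {0x2F, 0x51, 0x1F, 0x01, 0x00, 0, 0, 0}; /* 00h = Stop program  */
    uint8_t clear_program[8] = {0x2F, 0x51, 0x1F, 0x01, 0x03, 0, 0, 0}; /* 03h = Clear program */

    send_can_frame(0x600u + node_id, stop_program);
    /* ...wait for the SDO response on 0x580 + node_id... */
    send_can_frame(0x600u + node_id, clear_program);
}
```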
I surely must have missed something from my reading of the LoRaWAN specifications, because this seems too bad to be true. Please tell me I'm delirious :)
The following seems to happen in my testbed when I have many OTAA nodes and I can't figure out what would prevent it:
multiple nodes in my network issue a JOIN REQUEST at the same time (this can happen by chance, or if they are powered on simultaneously)
the gateway receives (at least) one of them successfully and responds with a JOIN ACCEPT assigning a DevAddr, thinking a single node did a join request
all the nodes that sent the JOIN REQUEST receive the ACCEPT, think it was directed at them, and gladly set the same received DevAddr
From here on, we have several nodes that all think they joined successfully and all think they are unique but have the same DevAddr. Needless to say, the system gets severely messed up.
Reading the LoRaWAN specification, the JOIN REQUEST has a node-unique DevEUI, a network-unique AppEUI, and a random DevNonce (to prevent replay attacks). The MIC is calculated from these and the secret network-unique AppKey stored in the node.
The JOIN ACCEPT has, as far as I can see, no data in it which is derived from the JOIN REQUEST, and therefore it can't be directed to a specific node in the case that many nodes are currently listening to an ACCEPT.
It has: AppNonce, NetID, DevAddr, DLSettings, RxDelay, CFList, and is encrypted with the AppKey, which is network-unique and not node-unique. The MIC only involves these values and so doesn't help either.
I would have expected that the JOIN ACCEPT at the minimum includes the DevEUI requesting the join as a part of the MIC, and also that it would include the DevNonce. It seems it includes neither.
What gives? Is OTAA broken or not? :)
The MIC will be different for each device because it's based on the secret (and supposedly unique) master key (AppKey) shared between the device and the network.
The first thing a device does is check the MIC; if it's not what's expected, it will drop the message.
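To illustrate, a rough sketch of that device-side check for LoRaWAN 1.0.x, assuming hypothetical aes128_encrypt()/aes128_cmac() helpers (the MIC is the first 4 bytes of an AES-CMAC over the decrypted MHDR..CFList fields):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical crypto helpers. */
void aes128_encrypt(const uint8_t key[16], const uint8_t in[16], uint8_t out[16]);
void aes128_cmac(const uint8_t key[16], const uint8_t *msg, size_t len, uint8_t mic[4]);

/* frame = MHDR (1 byte) | encrypted(AppNonce..CFList | MIC); len is 17 or 33. */
bool join_accept_valid(const uint8_t app_key[16], uint8_t *frame, size_t len)
{
    /* The network builds the Join Accept with AES *decrypt*, so the device
     * recovers the plaintext by running AES *encrypt* over each 16-byte block. */
    for (size_t i = 1; i + 16 <= len; i += 16)
        aes128_encrypt(app_key, &frame[i], &frame[i]);

    uint8_t mic[4];
    aes128_cmac(app_key, frame, len - 4, mic);   /* CMAC over MHDR..CFList */
    return memcmp(mic, &frame[len - 4], 4) == 0; /* trailing 4 bytes = MIC */
}
```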
So what you said below is not exactly right:

> The JOIN ACCEPT has, as far as I can see, no data in it which is derived from the JOIN REQUEST, and therefore it can't be directed to a specific node in the case that many nodes are currently listening to an ACCEPT. It has: AppNonce, NetID, DevAddr, DLSettings, RxDelay, CFList, and is encrypted with the AppKey, which is network-unique and not node-unique. The MIC only involves these values and so doesn't help either.
Of course, if you set the same AppKey on every one of your devices, you will get what you described :)
Apart from the different AppKey as mentioned in Pierre's answer (strongly recommended), the node also includes a DevNonce in its Join Request. This DevNonce is used to derive the NwkSKey and AppSKey session keys from the Join Accept response.
In LoRaWAN 1.0.x, this DevNonce should be random. So even when using the same AppKey for all devices, chances are low that they would also have generated the same DevNonce. And even if the MIC somehow validated, the derived keys would not match the keys known to the server, basically rendering the device useless without it knowing it.
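A sketch of that derivation (LoRaWAN 1.0.x, reusing the hypothetical aes128_encrypt() helper from above); because DevNonce comes from each device's own Join Request, two devices that accept the same Join Accept still end up with different session keys:

```c
#include <stdint.h>
#include <string.h>

void aes128_encrypt(const uint8_t key[16], const uint8_t in[16], uint8_t out[16]);

/* label: 0x01 for NwkSKey, 0x02 for AppSKey. */
void derive_session_key(uint8_t out[16], const uint8_t app_key[16], uint8_t label,
                        const uint8_t app_nonce[3], const uint8_t net_id[3],
                        const uint8_t dev_nonce[2])
{
    uint8_t block[16] = {0};          /* bytes 9..15 remain zero padding */
    block[0] = label;
    memcpy(&block[1], app_nonce, 3);
    memcpy(&block[4], net_id, 3);
    memcpy(&block[7], dev_nonce, 2);
    aes128_encrypt(app_key, block, out);
}
```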
In LoRaWAN 1.1 I think that the DevNonce is an increasing number, but in 1.1 OTAA has changed so I don't know how that affects the results.
See https://runkit.com/avbentem/deciphering-a-lorawan-otaa-join-accept.
The question also states:
this can happen by chance or if they are powered on simultaneously
As for switching on simultaneously, the 1.0.x specifications state:
The transmission slot scheduled by the end-device is based on its own communication needs with a small variation based on a random time basis
Still, such a small variation probably won't prevent nodes from hearing each other's Join Accept messages in this scenario, as the downlink receive window will need to be slightly lenient too.
One qualifier is the timing requirement for the Join Request (JR) and Join Accept (JA): the specification says a device can only use a JA received precisely 5 or 6 (2nd window) seconds after it sent the JR.
I'd hope there are better fail-safes than this timing, but the intention might be to prevent the wrong tags from taking a JA.
I was reading the article on Gaffer on Games about network programming, which explains the advantages and disadvantages of UDP vs TCP.
However, it feels like that only applies to games that require constant streams of input. I'm developing a card game, and I was wondering which would be the most effective choice for the multiplayer features, UDP or TCP. I feel like UDP is probably still the best pick just for its speed, but that raises further questions.
If I use UDP, on the low chance that a packet gets lost, how do I know it got lost, and how do I recover from the loss?
How to detect the loss of a message?
The receiver needs to send something back to the original sender telling it that the message was received. If the original sender gets no acknowledgement within a defined timeframe, it resends the message.
Another method is to send each message multiple times by default and let the receiver ignore duplicates.
The rules for this can get arbitrarily complex and in the end you'll reimplement parts of TCP. You might also want to ensure ordering of the messages.
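A minimal sketch of the acknowledge-and-resend idea described above, with hypothetical now_ms()/udp_send()/udp_recv_ack() helpers standing in for real socket and timer code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define RESEND_AFTER_MS 200

typedef struct {
    uint32_t seq;              /* sequence number carried in every message */
    uint8_t  payload[512];
    size_t   len;
    uint64_t last_sent_ms;
    bool     acked;
} pending_msg;

/* Hypothetical helpers: a clock and non-blocking UDP wrappers. */
uint64_t now_ms(void);
void udp_send(const uint8_t *buf, size_t len);
bool udp_recv_ack(uint32_t *acked_seq);  /* true if an ack arrived */

void pump(pending_msg *msgs, size_t n)
{
    uint32_t seq;
    while (udp_recv_ack(&seq))           /* mark messages the peer confirmed */
        for (size_t i = 0; i < n; i++)
            if (msgs[i].seq == seq)
                msgs[i].acked = true;

    for (size_t i = 0; i < n; i++)       /* resend anything still unacked */
        if (!msgs[i].acked && now_ms() - msgs[i].last_sent_ms > RESEND_AFTER_MS) {
            udp_send(msgs[i].payload, msgs[i].len);
            msgs[i].last_sent_ms = now_ms();
        }
}
```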
UDP is good for frequent updates where it's not that bad to lose some messages (e.g. delivering a video stream or keeping player positions in sync in a first person shooter). A card game is slower-paced but requires reliable, ordered messages. If you don't plan to host game sessions with multiple thousands of players, use TCP. It's fast enough and much easier to work with.
It only works with constant streams of input?
TCP works fine even if you only send one message every minute. The term "stream oriented" basically means that the receiver doesn't know where a message ends. You'll have to provide that information in your protocol by prepending the message length for example.
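For example, a length-prefix scheme over a connected TCP socket might look like this (POSIX sockets; a production version must also loop on short writes and reads):

```c
#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <unistd.h>      /* write */

/* Send one message with a 4-byte big-endian length prefix. */
int send_msg(int fd, const void *buf, uint32_t len)
{
    uint32_t hdr = htonl(len);
    if (write(fd, &hdr, sizeof hdr) != (ssize_t)sizeof hdr) return -1;
    if (write(fd, buf, len) != (ssize_t)len) return -1;
    return 0;
}
```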
If you choose UDP you'd be better off using a library on top of it.
I know that in a CAN controller, if the error count reaches some threshold (say 255), bus-off occurs, which means that the CAN node switches itself off from the CAN network, so there won't be any communication at all. But what if this happens while the car containing that ECU (and its CAN controller) is moving?
Is there any auto-recovery mechanism in a CAN controller to avoid any of the above situations?
During bus-off, the node is isolated.
The controller waits for the mandatory time period of 128 × 11 bit times (1408 bits, about 5.6 ms on a 250 kbit/s bus) and then tries to re-initialize the node.
Yes, if the CAN TX error count reaches 255, a node will turn off and potentially reset itself. A good implementation will not keep resetting a node if the problem persists.
In addition to this safety mechanism, ECUs (electronic control units) also time the duration between valid transmissions of the messages they expect to receive. Therefore, if the engine controller goes offline, nearly every ECU in the vehicle will report "Lost Communication with the Engine Controller."
Typically, these types of CAN problems are identified by DTCs (diagnostic trouble codes) beginning with U, like this one: http://www.obd-codes.com/u0115
Depending on the severity of the issue, the vehicle might enter a "limp home" mode, or might be totally disabled. Problems with the CAN bus on a vehicle are extremely rare, unless there has been some tampering.
The recovery mechanism depends on the software stack that's being used. Most new vehicles have AUTOSAR compliant software implementations. In the AUTOSAR communication stack, the CanSM (state manager) module has configurable BusOff Monitoring and Recovery. You can read more at http://autosar.org .
A bus-off, however, is a serious situation in a running vehicle. How it is handled at the vehicle level is very specific to the system design, but in most cases the system goes into a safe mode of operation and all parameters take pre-set fault values to let the vehicle run with reduced functionality. You would see warning lamps light up on the dash to alert the driver. ECUs typically comply with some ASIL level (https://en.wikipedia.org/wiki/Automotive_Safety_Integrity_Level); this makes sure that such situations are thought of and taken care of during design and development.
Nothing spectacular will happen, even if the engine control unit loses CAN communication. The car will continue running.
When bus-off occurs, the node is isolated from the CAN network and then reset, after which it can start communicating again.
As you mentioned, after reaching a specific error count, the node is disconnected from the bus and prohibited from transmitting anything. That is the bus side of the story.
On the controller side, every CAN controller generates an interrupt on BUS_OFF. It is then the software's responsibility to reset the CAN controller and bring it back to the normal state.
This is strictly followed for every CAN controller in any car. And this all happens in a few milliseconds... So for the driver, nothing happens!
When the ECU detects a BUS_OFF fault, it should stop transmitting, so this is a good question to ask.
There is an auto-recovery mechanism:
For the first three detections, the CAN controller resets its registers without delay
For subsequent detections, the ECU waits 1 second before the reset (see the sketch below)
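A sketch of that recovery policy as it might look in ECU firmware; the helper names are hypothetical, not from any specific CAN controller or driver:

```c
#include <stdint.h>

#define FAST_RECOVERY_LIMIT 3   /* immediate resets before backing off */

/* Hypothetical driver hooks. */
void can_controller_reset(void);
void schedule_delayed_reset_ms(uint32_t delay);

static uint8_t busoff_count;

/* Called from the CAN controller's BUS_OFF interrupt. */
void can_busoff_isr(void)
{
    if (++busoff_count <= FAST_RECOVERY_LIMIT)
        can_controller_reset();          /* first detections: reset immediately */
    else
        schedule_delayed_reset_ms(1000); /* afterwards: wait 1 s before resetting */
}
```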
There is also something called limp-home mode for cars. That is the condition when the ECUs in the car network fail: a set of default parameters for the ECUs is initialized, and the car can continue running, but only for some time, until it is properly serviced by the OEM.
I know this is an old thread, but the answers are a bit different from the situation I have observed, in relation to the OP question.
From experience: I have an issue where my ECU stops communicating with the diagnostic tools while the engine is running; apparently it has entered the CAN bus-off state. The only reason I know is that I have an OBD-II plug-in monitor for engine parameters. I don't get ANY DTCs, well, most of the time anyway... sometimes I get DTCs that are not applicable to my vehicle, and some U codes.
That said, the vehicle continues to run just fine, and if I didn't have the plug-in monitor, I would have no idea there was a problem! I'm now pretty sure the engine ECU is having communication problems, hitting the error counter, and shutting off; it's the only thing that makes sense. I checked the CAN signals with a 2-channel oscilloscope, and they are a bit noisy compared to one of my other cars, so my next step is to swap the ECU and see if that fixes it. I already swapped out the TIPM (Total Integrated Power Module), which serves as a router of sorts between the 2 CAN networks and the OBD-II port. That apparently wasn't it.
If the CAN transmit error counter reaches 255, the node will go bus-off and be isolated.
1) Hot swapping can be done in a CAN network. E.g., assume four nodes (ECUs) are connected to the CAN bus; if we disconnect one ECU, the bus still works properly.
2) In the bus-off condition, the node can hear every signal on the bus network, but it cannot transmit messages, whether the car is in motion or at rest.
E.g., an ECU (like the ABS controller) is used for better performance, but the actual work is done by the actuator (the disc brake).