What will happen if two nodes with the same NAME claim the same address in J1939? - can-bus

If two nodes with the same NAME claim the same address in J1939, what will happen? Will one of the nodes claim the address, or will an error occur?

My copy of the specification is dated, but I'm sure this rule hasn't changed since 2003 (SAE J1939-81):
"Manufacturers of ECUs and integrators of networks must assure that the NAMEs of all CAs intended to
transmit on a particular network are unique."
That being said, it is of course possible to put devices with the same NAME on the same set of wires, either through ignorance or malicious intent.
I personally haven't played with it, but in theory, if your device has the exact same NAME as another, your address claim will exactly overlap the other's, neither will be aware of the other's presence, the message will go through successfully, and each device will assume it is the one that sent it.
I may be wrong, but I think the only odd thing a CA might see is a message coming in from an address it thought it had claimed, a problem which it may not even be checking for.

From the network standpoint, there is no way to distinguish that the nodes are different, since they identify themselves as the same entity. What would happen is that the first claim will be handled and the second will be ignored. In other words, this is a race condition, because only one message is processed at a time on the data link. By the time the second node tries to claim the same address, the address table is already occupied, and the late-claiming node won't be notified that the address was assigned to it. Remember that each node has its own internal state/configuration.
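For illustration, here is a minimal sketch (not taken from any particular J1939 stack; the type and function names are made up) of the contention rule a CA normally applies when an Address Claimed message (PGN 60928) arrives from the address it thinks it owns: the numerically lower NAME has priority.

    #include <stdint.h>

    typedef struct {
        uint64_t name;     /* this CA's 64-bit NAME                */
        uint8_t  address;  /* source address this CA has claimed   */
    } ca_state_t;

    /* Called when an Address Claimed frame (PGN 60928) arrives whose
     * source address equals the address we believe we own.          */
    void on_contending_claim(const ca_state_t *ca, uint64_t rx_name)
    {
        if (rx_name < ca->name) {
            /* The other CA has the lower NAME, so it has priority: we
             * must claim a different address or send Cannot Claim
             * Address (source address 254).                          */
        } else if (rx_name > ca->name) {
            /* We have priority: re-send our own Address Claimed
             * message to defend the address.                         */
        } else {
            /* Identical NAMEs - the case in the question. Neither CA
             * can tell the other's claim apart from its own, so the
             * conflict goes undetected at this layer.                */
        }
    }

With identical NAMEs the comparison can never resolve, which is exactly why the conflict described above goes unnoticed.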

J1939-81 says:
"Repeated Collisions Occur, devices go BUS OFF. CAs should retry using a Pseudo-random delay before attempting to reclaim and then revert to Figures 2 and 3."
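As a rough sketch of that reclaim delay: J1939-81 derives it from an 8-bit pseudo-random value scaled by 0.6 ms (so 0 to 153 ms); check your copy of the standard for the exact rule. rand8() below is just a placeholder for whatever pseudo-random source the ECU has.

    #include <stdint.h>

    /* Placeholder for the ECU's pseudo-random source. */
    extern uint8_t rand8(void);

    /* Reclaim delay in microseconds: an 8-bit pseudo-random value
     * scaled by 0.6 ms, giving 0..153 ms.                           */
    uint32_t reclaim_delay_us(void)
    {
        return (uint32_t)rand8() * 600u;
    }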

Related

Is it possible to burn from a dead address in the BSC network?

I sent my token to the dead address (0x000000000000000000000000000000000000dead).
At first I was trying to burn all my tokens, so I sent the tokens to the dead address using MetaMask.
Now I can see that my token (https://bscscan.com/address/0x0083a5a7e25e0Ee5b94685091eb8d0A32DfF11D4)'s total supply isn't reduced, and the dead address is a holder of the token. How can I fix this?
Actually I want to remove all tokens minted from my token.
I'm afraid you have misunderstood the concept of burning coins. Burning does not destroy coins. It sends them to an address/wallet/account that can only receive but cannot send (or spend) them, making them effectively lost forever as this is recorded in the immutable ledger.
This means that the supply of tokens in circulation (those tokens that can still be used to make transactions) is reduced, but not the total supply. So actually, everything that happened in your case is completely expected.
Here is one among many internet resources that explains the concept of burning coins:
https://www.investopedia.com/tech/cryptocurrency-burning-can-it-manage-inflation/
I see that you used the regular transfer() method to send your tokens to the dead address (link).
Your contract implements the burn() function that effectively reduces the total supply as well.
Expanding on Marko's answer: In this particular case, you should use the burn() function instead of just regular transfer. However, different token contracts might use different function names or not implement a burn mechanism at all - it all depends on the token contract implementation.

Is there a formal way to zeroize in CANopen

I have a system with multiple subsystems communicating over CANopen. There is a main unit with a screen (for the man-machine interface and such) and sub-units for minor operations (like sampling button status, managing power, taking measurements...).
We defined a CANopen-based communication protocol for this system. Subsystems share their conditions periodically with TPDO messages and act according to the main unit's commands sent with RPDO messages. Some NMT commands are in use as well.
So I've been asked to add a new command to this protocol: zeroize. This command shall be broadcast, and it shall cause every unit to delete its software. What is the right way to do this?
Maybe I can use an RPDO? Are we allowed to define new NMT commands in CANopen? Maybe I can do it with NMT, but using a new command that is not in use already?
Thanks in advance
Ip.
It is a bit confusing what you mean by TPDO and RPDO, since the main unit's TPDO is going to be the peripheral units' RPDO and vice versa. But yes, the correct way to send out some custom broadcast message would be with a PDO.
Although, depending on what you mean by "delete software", CANopen might provide a means for it. There are the save (OD 1010h) and load (OD 1011h) registers in the object dictionary. Save is used to store all CANopen communication parameters (PDO configuration, mapping, etc.) in non-volatile memory, and load is used to restore the CANopen parameters to factory defaults. These should however not be used to save/load application-specific settings.
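For illustration, storing parameters via 1010h boils down to one expedited SDO download of the ASCII signature "save" to sub-index 1 (and "load" to 1011h sub-index 1 to restore defaults). A minimal sketch, with can_send() standing in for whatever transmit call your CAN driver offers:

    #include <stdint.h>

    /* Placeholder for the CAN driver's transmit call. */
    extern void can_send(uint32_t cob_id, const uint8_t data[8]);

    /* Expedited SDO download of the "save" signature to 1010h
     * sub-index 1 ("save all parameters"). 0x23 = expedited, 4 data
     * bytes; the index is sent LSB first; the signature bytes are
     * ASCII 's','a','v','e'.                                        */
    void canopen_store_parameters(uint8_t node_id)
    {
        const uint8_t req[8] = { 0x23, 0x10, 0x10, 0x01, 's', 'a', 'v', 'e' };
        can_send(0x600u + node_id, req);
    }

    /* Restoring defaults works the same way: index bytes 0x11, 0x10
     * (object 1011h) and data 'l','o','a','d'.                      */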
You are not allowed to define new NMT commands.
Objects 1010h and 1011h can be used to reset the values in the object dictionary. If you really want to delete the software, the firmware upgrade protocol from CiA 302-3 might help. Writing 00h (Stop program) followed by 03h (Clear program) to object 1F51h sub-index 1 on the slave will delete the application. Whether it's actually "zeroed out" depends on the implementation. You'll need two SDO requests per slave for this, though. The standard specifies that object 1F51h cannot be PDO-mapped, although that requirement may not be enforced for your devices, in which case you could achieve broadcast "zeroing" with two PDOs.
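A sketch of those two SDO writes, again with a placeholder can_send() driver call; a real implementation must wait for and check each SDO response, and would repeat this per slave node ID:

    #include <stdint.h>

    /* Placeholder for the CAN driver's transmit call. */
    extern void can_send(uint32_t cob_id, const uint8_t data[8]);

    /* CiA 302-3 program control, object 1F51h sub-index 1:
     * write 00h (Stop program), then 03h (Clear program).
     * 0x2F = expedited SDO download of a single byte.       */
    void clear_slave_program(uint8_t node_id)
    {
        const uint8_t stop_prog[8]  = { 0x2F, 0x51, 0x1F, 0x01, 0x00, 0, 0, 0 };
        const uint8_t clear_prog[8] = { 0x2F, 0x51, 0x1F, 0x01, 0x03, 0, 0, 0 };

        can_send(0x600u + node_id, stop_prog);
        /* ...handle the SDO confirmation here before continuing... */
        can_send(0x600u + node_id, clear_prog);
    }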

What prevents LoRaWAN nodes from accepting the same JOIN ACCEPT message in OTAA

I surely must have missed something from my reading of the LoRaWAN specifications, because this seems too bad to be true. Please tell me I'm delirious :)
The following seems to happen in my testbed when I have many OTAA nodes and I can't figure out what would prevent it:
multiple nodes in my network issue a JOIN REQUEST at the same time (this can happen by chance, or if they are powered on simultaneously)
the gateway receives (at least) one of them successfully and responds with a JOIN ACCEPT assigning a DevAddr, thinking one node did a join request
all the nodes that did the JOIN REQUEST will receive the ACCEPT, think the JOIN ACCEPT was directed at them, and gladly set the same received DevAddr
From here on, we have several nodes that all think they joined successfully and all think they are unique but have the same DevAddr. Needless to say, the system gets severely messed up.
Reading the LoRaWAN specification, the JOIN REQUEST has a node unique DevEUI, a network unique AppEUI, and a random DevNonce (to prevent replay attacks). The MIC is calculated from these and the secret network unique AppKey stored in the node.
The JOIN ACCEPT has, as far as I can see, no data in it which is derived from the JOIN REQUEST, and therefore it can't be directed to a specific node in the case that many nodes are currently listening to an ACCEPT.
It has: AppNonce NetID DevAddr DLSettings RxDelay CFList, and is encrypted with the AppKey which is network unique and not node unique. The MIC only involves these values and so doesn't help either.
I would have expected that the JOIN ACCEPT at the minimum includes the DevEUI requesting the join as a part of the MIC, and also that it would include the DevNonce. It seems it includes neither.
What gives? Is OTAA broken or not? :)
The MIC will be different for each device, because it's based on the secret (and supposedly unique) master key (AppKey) shared between the device and the network.
The first thing a device does is check the MIC; if it's not what's expected, it will drop the message.
So what you said below is not exactly right:
"The JOIN ACCEPT has, as far as I can see, no data in it which is derived from the JOIN REQUEST, and therefore it can't be directed to a specific node in the case that many nodes are currently listening to an ACCEPT. It has: AppNonce NetID DevAddr DLSettings RxDelay CFList, and is encrypted with the AppKey which is network unique and not node unique. The MIC only involves these values and so doesn't help either."
Of course, if you set the same AppKey on every one of your devices, you will get what you described :)
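To illustrate the point, here is a conceptual sketch of the 1.0.x Join Accept MIC check; aes128_cmac() is just a placeholder for an AES-CMAC routine, not a specific library call.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Placeholder for an AES-CMAC implementation keyed with AppKey. */
    extern void aes128_cmac(const uint8_t key[16],
                            const uint8_t *msg, size_t len, uint8_t out[16]);

    /* 'decrypted' holds MHDR | AppNonce | NetID | DevAddr | DLSettings |
     * RxDelay | (CFList), i.e. the Join Accept after decryption with the
     * AppKey. The MIC is the first 4 bytes of the CMAC over that buffer. */
    bool join_accept_mic_ok(const uint8_t app_key[16],
                            const uint8_t *decrypted, size_t len,
                            const uint8_t mic[4])
    {
        uint8_t cmac[16];
        aes128_cmac(app_key, decrypted, len, cmac);
        return memcmp(cmac, mic, 4) == 0;
    }

If a device uses a different AppKey than the one the server used, both the decryption and this check fail and the Join Accept is dropped.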
Apart from the different AppKey as mentioned in Pierre's answer (strongly recommended), the node also includes a DevNonce in its Join Request. This DevNonce is used to derive the NwkSKey and AppSKey session keys from the Join Accept response.
In LoRaWAN 1.0.x, this DevNonce should be random. So even when using the same AppKey for all devices, chances should be low that they would also have generated the same DevNonce. And even if the MIC somehow validated, the derived keys would not match the keys known to the server, basically rendering the device useless without it knowing it.
In LoRaWAN 1.1 I think that the DevNonce is an increasing number, but in 1.1 OTAA has changed so I don't know how that affects the results.
See https://runkit.com/avbentem/deciphering-a-lorawan-otaa-join-accept.
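For reference, a sketch of the 1.0.x session-key derivation that makes the DevNonce matter; aes128_encrypt_block() is a placeholder for a plain AES-128 block encryption with the AppKey.

    #include <stdint.h>
    #include <string.h>

    /* Placeholder for a plain AES-128 block encryption. */
    extern void aes128_encrypt_block(const uint8_t key[16],
                                     const uint8_t in[16], uint8_t out[16]);

    /* LoRaWAN 1.0.x session keys (prefix 0x01 -> NwkSKey, 0x02 -> AppSKey):
     * key = aes128(AppKey, prefix | AppNonce(3) | NetID(3) | DevNonce(2) | pad) */
    void derive_session_key(const uint8_t app_key[16], uint8_t prefix,
                            const uint8_t app_nonce[3],
                            const uint8_t net_id[3],
                            const uint8_t dev_nonce[2], uint8_t out[16])
    {
        uint8_t block[16] = { 0 };

        block[0] = prefix;
        memcpy(&block[1], app_nonce, 3);
        memcpy(&block[4], net_id, 3);
        memcpy(&block[7], dev_nonce, 2);   /* remaining bytes stay zero (pad16) */
        aes128_encrypt_block(app_key, block, out);
    }

Because the DevNonce feeds into both keys, two devices that somehow accepted the same Join Accept would still end up with different session keys unless they also happened to pick the same DevNonce.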
The question also states:
this can happen by chance or if they are powered on simultaneously
As for switching on simultaneously, the 1.0.x specifications state:
The transmission slot scheduled by the end-device is based on its own communication needs with a small variation based on a random time basis
Still, such a small variation probably won't avoid nodes hearing each other's Join Accept messages in this scenario, as the downlink receive window will need to be slightly lenient too.
One qualifier is the timing requirements for Join Request (JR) and Join Accept (JA). The specification says that a device can only use a JA received "precisely" 5 or 6 (2nd window) seconds after it sent the JR.
I'd hope there are better fail-safes than this timing, but the intention might be to prevent the wrong devices from taking a JA.

How to handle SAP Kapsel Offline app OData conflicts properly?

I built an app that is able to store OData offline by using the SAP Kapsel plugins.
More or less it's the same as what is generated by SAP Web IDE, or similar to the apps in this example: https://blogs.sap.com/2017/01/24/getting-started-with-kapsel-part-10-offline-odatasp13/
Now I am at the point of checking the error-resolution potential. I created a sync conflict (changing data on the server after the offline database was stored, then changing something in the app and starting a flush).
As mentioned in the documentation, I can see the error in the ErrorArchive and can also see some details. But what I am missing is the "current" data on the server database.
In the error details I can just see the data on the device but not the data changed on the server.
For example:
Device is loading some names into offline store
Device is offline
User A is changing some names
User B is changing one of these names directly online
User A is online again and starts a sync
User A is now informed about the entity that was changed, BUT:
not about the content user B entered
I just see the "offline" data.
Is there a solution to see the "current" and the "offline" one in a kind of compare view?
Please also note that the server communication is done by the Kapsel plugin and not with normal AJAX calls. Those could be an alternative, but I am wondering whether there is a smarter way supported by the API?
Meanwhile I figured out how to load the online data (manually).
This can be done by switching the HTTP handler back to the normal one:
sap.OData.removeHttpClient();
sap.OData.applyHttpClient();
Anyhow, this does not look like a proper solution, and I also have an issue with the conflict log itself: it must be deleted before any refresh can be applied.
I could not find any proper documentation for that. ETag handling is also hardly described in the SAPUI5 and SAP Kapsel documentation.
This question is a really tricky one, due to its implications. I understand that you are simulating a synchronization error due to concurrent modification, and want to know if there is a way for the client to obtain the "current" server state in order to give the user a means to compare the local and server state.
First, let me give you the short answer: No, there is no way for the client to see the current server state "for reference" via the Offline APIs when there are synchronization errors. Doing an online query as outlined above might work, but it certainly is a bad idea.
Now for the longer answer, which explains why this is not necessarily a defect and why I said there are quite some implications to the answer.
Types of Synchronization Errors
We distinguish a number of synchronization errors, and in this context, we are clearly dealing with business-related issues. There are two subtypes here: Those that the user can correct, e.g. validation errors, and those that are issues in the business process itself.
If the user violates the input range, e.g. by putting a negative price for a product, the server would reply with the corresponding message: "-1 is not a valid input value for 'Price'". You, as a developer, can display such messages to the user from the error archive, and the ensuing fix is indeed a very easy one.
Now when we talk about concurrent modification, things get really, really nasty. In fact, I like to say that in this case there is an issue with the business process, because on one hand we allow data to get out of sync, and on the other hand the process allows multiple users to manipulate the same piece of information. How all relevant users should be notified and synchronize is no longer just a technical detail, but in fact a new business process. There just is no way to generically decide how to handle this case. In most cases, it would involve back-office experts who need to decide how the changes should be merged.
A Better Solution
Angstrom pointed out that there is no way to manipulate ETags on the client side, and you should in fact not even think about it. ETags work like version numbers in optimistic locking scenarios, and changing the ETag basically means "Just overwrite what's on the server". This is a no-go in serious scenarios.
An acceptable workaround would be the following:
Make sure the server returns verbose error messages so that the user can see what happened and what caused the conflict.
If that does not help, refresh the data. This will get you an updated ETag, and merge the local changes into the "current" server state, but only locally. "Merging" really means that local changes always overwrite remote changes.
The user now has another opportunity to review the data and can submit it again.
A Good Solution
Better is not necessarily good, so here is what you should really do: never let concurrent modification happen, because it is really expensive to handle. This implies that it is not the developer who should address this issue; the business needs to change the process.
The right question to ask is, "When you replicate data in a distributed system, why do you allow it to be modified concurrently at all?" Typically stakeholders will not like this kind of question, and the appropriate reaction is to work out a conflict resolution process together with them. Only then will they realize how expensive fixing that kind of desynchronization is, and more often than not they will see that adjusting the process is way cheaper than insisting on yet another back-office process to fix the issues it causes. Even if they insist that there is a need for this concurrent modification, they will now understand that it is not your task to sort this out and that they need to invest in a conflict resolution process.
TL;DR
There is no way to compare the client state to the current server state on the client, but you can do a refresh to retain the local changes and get an updated ETag. The real solution, however, is to rework the business process, because this is no longer a purely technical issue.
The default solution is that SMP or HCPms detects errors via ETags. On the client side there is no API to manipulate ETags in case of conflicts. A potential solution to implement a kind of diff view on the device would work like this:
Show errors
Cache errors (maybe only in memory?)
Delete the errors
Do a refresh of the database
Build a diff view with the current data and the cached errors
The idea with
sap.OData.removeHttpClient();
sap.OData.applyHttpClient();
could also work but could be very tricky and may introduce side effects.
Maybe some requests are triggered against the "wrong" backend.

What happens if a bus-off error occurs in a CAN controller while a car is in motion?

I know that in a CAN controller, if the error count reaches some threshold (say 255), bus-off will occur, which means that the affected CAN node gets switched off from the CAN network. So there won't be any communication at all. But what if the above scenario happens while the car containing that ECU (which includes the CAN controller) is moving?
Is there any auto-recovery mechanism in a CAN controller to avoid any of the above situations?
During bus off, the node will be isolated.
CAN waits for the mandatory time period of 128 x 11 recessive bits (1408 bit times, about 5.6 ms for a 250 kbit/s system), and then tries to re-initialize the node.
Yes, if a CAN Tx error count reaches 255, a node will turn off and potentially reset itself. A good implementation will not continue resetting a node if the problem persists.
In addition to this safety mechanism, ECUs (electronic control units) also time the duration between valid transmissions of the messages they expect to receive. Therefore, if the engine controller goes offline, nearly every ECU in the vehicle will report "Lost Communication with the Engine Controller."
Typically, these types of CAN problems are identified by DTCs (diagnostic trouble codes) beginning with U, like this one: http://www.obd-codes.com/u0115
Depending on the severity of the issue, the vehicle might enter a "limp home" mode, or might be totally disabled. Problems with the CAN bus on a vehicle are extremely rare, unless there has been some tampering.
The recovery mechanism depends on the software stack that's being used. Most new vehicles have AUTOSAR compliant software implementations. In the AUTOSAR communication stack, the CanSM (state manager) module has configurable BusOff Monitoring and Recovery. You can read more at http://autosar.org .
A bus-off, however, is a serious situation in a running vehicle. How this is handled at the vehicle level is very specific to the system design, but in most cases the system would go into a safe mode of operation and all parameters would take pre-set fault values to let the vehicle run with reduced functionality. You would see warning lamps come on in the dash to alert the driver. ECUs typically comply with some level of the ASIL standard (https://en.wikipedia.org/wiki/Automotive_Safety_Integrity_Level). This makes sure that such situations are thought of and taken care of during design and development.
Nothing spectacular will happen, even if the engine control unit loses CAN communication. The car will continue running.
When bus-off occurs, the CAN network isolates that node; the node is then reset, after which it can start communicating again.
As you mentioned, after reaching a specific error count, that node is disconnected from the bus and prohibited from transmitting anything on it. That is the description from the bus side.
On the controller side, every CAN controller generates an interrupt on BUS_OFF. It is the host application's responsibility to reset the CAN controller and bring it back to the normal state.
This is strictly followed for every CAN controller in any car. And this all happens in a few milliseconds... so for the driver, nothing happens!
When the ECU detects a BUS_OFF fault, the ECU should stop transmitting, so this is a good question to ask.
There is an auto-recovery mechanism (a sketch follows below):
For the first three detections, the CAN controller resets its registers without a delay.
For subsequent detections, the ECU waits 1 second before the reset.
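A minimal sketch of that kind of recovery policy; the driver hooks are placeholders, not a specific vendor's API.

    #include <stdbool.h>
    #include <stdint.h>

    /* Placeholder driver hooks. */
    extern bool can_is_bus_off(void);
    extern void can_reinit(void);        /* controller re-joins the bus after
                                          * 128 x 11 recessive bits           */
    extern void delay_ms(uint32_t ms);

    /* Call this when the BUS_OFF interrupt/flag is seen: the first three
     * recoveries are immediate, later ones back off for 1 second.        */
    void on_bus_off_detected(void)
    {
        static uint32_t bus_off_count = 0u;

        if (!can_is_bus_off())
            return;

        bus_off_count++;
        if (bus_off_count > 3u)
            delay_ms(1000u);

        can_reinit();
    }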
There is something called limp-home mode in cars. That is the condition entered when ECUs in the car network fail. A set of default parameters for the ECUs is then initialized, and the system, i.e. your car, can continue running, but only for some time, until it is properly serviced by the OEM.
I know this is an old thread, but the answers are a bit different from the situation I have observed in relation to the OP's question.
From experience, I have an issue where my ECU stops communicating with the diagnostic tools while the engine is running; apparently it has entered the CAN bus-off state. The only reason I know is that I have an OBD-II plug-in monitor for engine parameters. I don't get ANY DTC, well, most of the time anyway... sometimes I get DTCs that are not applicable to my vehicle, and some U codes.
That said, the vehicle continues to run just fine, and if I didn't have the plug-in monitor, I would have no idea there was a problem! I'm now pretty sure the ECU for the engine is having communication problems, hitting the error counter and shutting off; it's the only thing that makes sense. I checked the CAN signals with a 2-channel oscilloscope, and they are a bit noisy compared to one of my other cars, so my next step is to swap the ECU and see if that fixes it. I already swapped out the TIPM (Total Integrated Power Module), which serves as a router of sorts between the 2 CAN networks and the OBD-II port. That apparently wasn't it.
If the CAN Tx error counter exceeds 255, the node will go bus-off and be isolated. (The Rx error counter alone cannot cause bus-off; it only takes the node to error-passive.)
1) Hot swapping can be done in a CAN network. E.g., assume four (4) nodes (ECUs) are connected to the CAN bus network; if we disconnect one ECU, the CAN bus still works properly.
2) In the bus-off condition, a node can hear every signal on the bus network but it can't transmit messages, whether the car is in motion or at rest. E.g., an ECU (ABS) is used for better performance, but the actual work is done by the actuator (disc brake).
