I'm working on a CAN sniffer/logger that will connect to ECUs which may send CANopen, J1939, or UDS. Most often two or even all three of them appear in one session (from what I understand this is not recommended, but it is the case here).
I know CANopen (non-FD) uses an 11-bit identifier, unlike J1939 and UDS, which use 29-bit identifiers. Sorting out the 11-bit frames is easy, but is there any known way to tell whether an unknown message with a 29-bit identifier is UDS or J1939? My guess is no...
I found some code using (29bit_id & 0x70000) == 0x70000 to classify a frame as J1939, but that misses some J1939 messages, and it seems likely that UDS frames might have those bits set as well. Any suggestions?
With 29-bit CAN identifiers you can differentiate between UDS and J1939 by checking bits 24 and 25 of the CAN frame ID.
In J1939, these are the Extended Data Page (EDP) bit and the Data Page (DP) bit. If both are set to 1, the frame is UDS/ISO 15765.
According to SAE J1939-21 (https://www.sae.org/standards/content/j1939/21_201810/)
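As a minimal sketch, that check is a simple mask in C (bit 24 = 0x01000000, bit 25 = 0x02000000):

#include <stdbool.h>
#include <stdint.h>

/* Bits 24 (Data Page) and 25 (Extended Data Page) of a 29-bit CAN ID.
   J1939-21 reserves the combination EDP=1, DP=1 for ISO 15765 frames. */
static bool is_iso15765(uint32_t id29)
{
    return (id29 & 0x03000000u) == 0x03000000u;
}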
My take is "you can't". At least not if you want this to work in every case. While you may apply some heuristics (such as whether the 29-bit identifier matches a well-known PGN, or whether the payload makes sense given the PGN definition), vendors can pretty much allocate CAN identifiers any way they want.
I sent my token to the dead address (0x000000000000000000000000000000000000dead)
At first I was trying to burn all my tokens, so I sent them to the dead address using MetaMask.
Now I can see that my token's (https://bscscan.com/address/0x0083a5a7e25e0Ee5b94685091eb8d0A32DfF11D4) total supply isn't reduced, and the dead address is now a holder of the token. How can I fix this?
Actually, I want to remove all tokens minted from my token.
I'm afraid you have misunderstood the concept of burning coins. Burning does not destroy coins. It sends them to an address/wallet/account that can only receive but cannot send (or spend) them, making them effectively lost forever as this is recorded in the immutable ledger.
This means that the supply of tokens in circulation (those tokens that can still be used to make transactions) is reduced, but not the total supply. So actually, everything that happened in your case is completely expected.
Here is one among many internet resources that explains the concept of burning coins:
https://www.investopedia.com/tech/cryptocurrency-burning-can-it-manage-inflation/
I see that you used the regular transfer() method to send your tokens to the zero address (link).
Your contract implements the burn() function that effectively reduces the total supply as well.
Expanding on Marko's answer: In this particular case, you should use the burn() function instead of just regular transfer. However, different token contracts might use different function names or not implement a burn mechanism at all - it all depends on the token contract implementation.
I have a system with multiple subsystems communicating over CANopen. There is a main unit with a screen (for the man-machine interface and such) and sub-units for minor operations (like sampling button status, managing power, taking measurements...).
We defined a CANopen-based communication protocol for this system. Subsystems share their status periodically with TPDO messages and act on the main unit's commands sent with RPDO messages. Some NMT commands are in use as well.
So I've been asked to add a new command to this protocol: zeroize. This command shall be sent as a broadcast and shall cause every unit to delete its software. What is the right way to do this?
Maybe I can use an RPDO? Are we allowed to define new NMT commands in CANopen? Maybe I can do it with NMT, using a new command that is not in use already?
Thanks in advance
Ip.
It is a bit confusing what you mean by TPDO and RPDO, since the main unit's TPDO is the peripheral units' RPDO and vice versa. But yes, the correct way to send out a custom broadcast message would be with a PDO.
Although, depending on what you mean by "delete software", CANopen might provide a means for it. There are the save (OD 1010h) and load (OD 1011h) registers in the object dictionary. Save is to be used for storing all CANopen communication parameters (PDO configuration, mapping, etc.) in non-volatile memory, and load restores the CANopen parameters to factory defaults. These should, however, not be used to save/load application-specific settings.
You are not allowed to define new NMT commands.
Objects 1010h and 1011h can be used to reset the values in the object dictionary. If you really want to delete the software, the firmware upgrade protocol from CiA 302-3 might help. Writing 00h (Stop program) followed by 03h (Clear program) to object 1F51h sub-index 1 on the slave will delete the application. Whether it's actually "zeroed out" depends on the implementation. You'll need two SDO requests per slave for this though. The standard specifies that object 1F51h cannot be PDO mapped. Although that requirement may not be enforced for your devices, in which case you could achieve broadcast "zeroing" with two PDOs.
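As a sketch of those two requests, assuming a generic frame structure from your CAN driver (the CiA 301 command byte for an expedited download of one data byte is 2Fh):

#include <stdint.h>
#include <string.h>

struct can_frame_t {            /* hypothetical driver frame type */
    uint32_t id;
    uint8_t  dlc;
    uint8_t  data[8];
};

/* Expedited SDO download of one byte to object 1F51h, sub-index 1. */
static void sdo_write_program_control(struct can_frame_t *f,
                                      uint8_t node_id, uint8_t value)
{
    memset(f, 0, sizeof(*f));
    f->id  = 0x600u + node_id;  /* default SDO receive COB-ID of the slave */
    f->dlc = 8;
    f->data[0] = 0x2F;          /* expedited download, 1 valid data byte */
    f->data[1] = 0x51;          /* index, low byte  */
    f->data[2] = 0x1F;          /* index, high byte */
    f->data[3] = 0x01;          /* sub-index        */
    f->data[4] = value;         /* 00h = Stop program, 03h = Clear program */
}

Build and send the frame with value 00h first, then 03h, for each slave; a similar pattern (with a 4-byte signature instead of one byte) applies to writing 1010h/1011h.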
I'm wondering whether it is possible to set up a timeout for receiving data over the USB interface on STM32 microcontrollers. Such an approach is possible, for example, for UART connections (please refer to AN3109, section 2, "Receive DMA timeout").
I can't find anything similar for the USB interface. What's more, it is said that DMA for USB should be enabled only if really necessary, because data transfers have to be aligned to 32-bit words.
You have a receive callback function (if you use the HAL) in your ...._if.c file. Copy received chars to a buffer and implement the timeout there.
What you refer to in the case of UART is either the DMA receive timeout, as you've said, or (when not using DMA) the IDLE interrupt. I'm not aware of such a thing coming "out of the box" for USB CDC - you'd have to implement this timeout yourself, which shouldn't be too hard. Have a timer (hardware or software) that you re-trigger every time you receive data. Set its period to the timeout value of your choice and do protocol parsing after the timeout elapses.
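A rough sketch of the software-timer variant, assuming an STM32 HAL project (parse_frame() is a hypothetical placeholder for your protocol parser):

#include <stdint.h>

extern uint32_t HAL_GetTick(void);                          /* STM32 HAL 1 ms tick */
extern void parse_frame(const uint8_t *buf, uint32_t len);  /* hypothetical parser */

#define RX_TIMEOUT_MS 20u

static volatile uint32_t last_rx_tick;
static volatile uint32_t rx_len;
static uint8_t rx_buf[256];

/* Call this from the USB CDC receive callback with each chunk that arrives. */
void on_usb_rx(const uint8_t *data, uint32_t len)
{
    for (uint32_t i = 0; i < len && rx_len < sizeof rx_buf; i++)
        rx_buf[rx_len++] = data[i];
    last_rx_tick = HAL_GetTick();       /* re-trigger the timeout */
}

/* Poll from the main loop; parse once nothing has arrived for RX_TIMEOUT_MS. */
void poll_rx_timeout(void)
{
    if (rx_len > 0 && (HAL_GetTick() - last_rx_tick) >= RX_TIMEOUT_MS) {
        parse_frame(rx_buf, rx_len);
        rx_len = 0;
    }
}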
If I had to add anything - this kind of problem (not knowing how many bytes to receive) is typically solved at the protocol level. Assuming a binary protocol, one way of achieving this is having frame start and end bytes which never occur in the data (and if they do, you escape them), in which case you receive everything after the "start byte" until you receive the "end byte". Yet another way is having a "start byte" and a field indicating how many bytes there are to receive, as sketched below. All of it should of course be checksummed in some way.
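A sketch of that second variant in C, with a made-up frame layout of [0xAA][length][payload...][checksum]:

#include <stdint.h>

/* Returns the payload length, or -1 if no complete, valid frame yet. */
int try_parse(const uint8_t *buf, uint32_t n, uint8_t *payload_out)
{
    if (n < 3u || buf[0] != 0xAA)
        return -1;                          /* no start byte yet */
    uint8_t len = buf[1];
    if (n < (uint32_t)len + 3u)
        return -1;                          /* frame not complete */
    uint8_t sum = 0;
    for (uint32_t i = 0; i < (uint32_t)len + 2u; i++)
        sum += buf[i];                      /* checksum over start, length, payload */
    if (sum != buf[len + 2u])
        return -1;                          /* checksum mismatch */
    for (uint8_t i = 0; i < len; i++)
        payload_out[i] = buf[i + 2u];
    return len;
}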
Having said that, if you have the option to change the protocol, you really should do so. Relying on timing in your communication, especially at such a low level, only invites problems and headaches in the long run. You introduce tight coupling between your protocol layer and your interface layer. This is going to backfire on you every time you decide to use a different interface, as you'll have to re-invent the same thing again. Not to mention how painful it's going to be if you decide to move to TCP/IP with all its greatness - network jitter, dropped packets, etc.
If two nodes with the same NAME claim the same address in J1939, what happens? Will one of the nodes claim the address, or will an error occur?
My copy of the specification is dated, but I'm sure this rule hasn't changed since 2003 (SAE J1939-81):
"Manufacturers of ECUs and integrators of networks must assure that the NAMEs of all CAs intended to
transmit on a particular network are unique."
That being said, it is of course possible to put devices with the same NAME on the same set of wires, either through ignorance or malicious intent.
I personally haven't played with it, but in theory, if your device has the exact same NAME as another, your address claim will exactly overlap the other's: neither would be aware of the other's presence, the message would go through successfully, and each device would assume it is the one that sent it.
I may be wrong, but I think the only odd thing a CA might see is a message coming in from an address it thought it had claimed - a problem it may not even be checking for.
From the network standpoint, there is no way to distinguish the nodes, since they identify themselves as the same entity. What would happen is that the first claim gets addressed and the second gets ignored. In other words, this is a race condition, because only one message is processed at a time on the data link. By the time the second node tries to claim the same address, the address table is already occupied, and the late node won't get a notification that the address was assigned to it. Remember that each node has its own internal state/configuration.
J1939-81 says:
"Repeated Collisions Occur, devices go BUS OFF. CAs should retry using a pseudo-random delay before attempting to reclaim and then revert to Figures 2 and 3."
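For illustration, that pseudo-random delay is commonly implemented as a random byte scaled in 0.6 ms steps (giving 0 to 153 ms); check your edition of J1939-81 for the exact rule:

#include <stdint.h>
#include <stdlib.h>

/* Pseudo-random back-off before re-attempting an address claim. */
static uint32_t claim_retry_delay_us(void)
{
    return (uint32_t)(rand() & 0xFF) * 600u;    /* 0..153 ms in 0.6 ms steps */
}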
When I was writing a simple server for a simple client <> server multiplayer game, I thought of the following text-based protocol using a translation library. Basically, each command has a certain meaning, e.g.:
1 = character starts turning right
2 = character starts turning left
3 = character stops turning
4 = character starts moving forward
5 = character stops moving
6 = character teleports to x, y
So, the client would simply broadcast the following to tell the server that the player is now moving forward and turning right:
4
1
Or, to teleport to 100x200:
6#100#200
Where # is the parameter delimiter.
The socket connection would be tied to the player identifier, so no identifier has to be sent with each message to know which player it belongs to.
Of course all data would be validated server side, but that is a different subject.
Now, this seems pretty efficient to me: only 2 bytes to inform the server that I am moving forward and turning right.
However, most "professional" code snippets I saw seemed to be sending objects or XML commands. This seems to require a lot more server resources to me, doesn't it?
Is my inexperienced logic of why my text-based protocol would be efficient flawed? Or what is the recommended protocol for real-time action multiplayer games?
I want to set up a protocol that is as efficient as possible, because I do not want to use multiple clusters/servers to cover excessive amounts of bandwidth for my 2D multiplayer game, and to save myself synchronization problems and hassle.
However, most "professional" code snippets I saw seemed to be sending objects or XML commands. This seems to require a lot more server resources to me, doesn't it?
Is my inexperienced logic of why my text-based protocol would be efficient flawed? Or what is the recommended protocol for real-time action multiplayer games?
Plain text is more expensive to send than a binary format containing the same information. For example, if you only send 1 byte of text, you can only represent 10 different commands, the digits 0 to 9. A binary format can send as many different commands as there are different values that fit into a byte, i.e. 256.
As such, although you are thinking of objects as being large, in actual fact they are almost always smaller than the plain text representation of that same object. Often they are as small as is possible without compression (and you can always add compression anyway).
The benefits of a plain text format are that it is easy to debug and understand. Unfortunately you lose those benefits if you put your own encoding in there (e.g. reducing commands down to single digits instead of readable names). The downsides are that the format is bigger and that you have to write your own parser. XML formats eliminate the second problem, but they can't compete with a binary format for pure efficiency.
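For a concrete comparison, here is a sketch of the teleport command from the question packed as one opcode byte plus two little-endian 16-bit coordinates - 5 bytes instead of the 9 bytes of "6#100#200" (names and layout are made up for the example):

#include <stddef.h>
#include <stdint.h>

enum { CMD_TELEPORT = 6 };      /* opcode value borrowed from the question */

/* Pack "teleport to x,y" into 5 bytes: opcode + two 16-bit coordinates. */
static size_t pack_teleport(uint8_t *out, uint16_t x, uint16_t y)
{
    out[0] = CMD_TELEPORT;
    out[1] = (uint8_t)(x & 0xFF);
    out[2] = (uint8_t)(x >> 8);
    out[3] = (uint8_t)(y & 0xFF);
    out[4] = (uint8_t)(y >> 8);
    return 5;
}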
You are probably overthinking this issue at this stage, however. If you're only sending information about events such as the commands you mention above, bandwidth will not be a concern. It's broadcasting information about the game state that can get expensive - but even that can be mitigated by being careful about who you send it to, and how frequently. I would recommend working with whatever format is easiest for now, as this will be the least of your problems. Just make sure that your code is always in a state where you can change the message writing and reading routines later if you need to.
You need to be aware of the latency involved in sending your data. "Start turning"/"stop turning" will be less effective if the time between the receipt of those packets differs from the time between sending them.
I can't speak for all games, but when I've worked on this sort of code we'd send orientation and position information across the wire. That way the receiver could do smoothing and extrapolation (figure out where the object should be "now" based on data that I have that is already known to be old). Different games will want to send different data, but generally speaking you will need to figure out how to make the receiver's display of the data match the sender's, so you'll need to send data that is resilient in the face of networking problems.
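As a sketch of such extrapolation, the receiver can dead-reckon from the last known position and velocity (purely illustrative):

/* Last received state for one entity. */
struct entity_state {
    float x, y;         /* last received position */
    float vx, vy;       /* last received velocity */
    float recv_time;    /* local time when the update arrived */
};

/* Estimate where the entity should be "now", given data known to be old. */
static void extrapolate(const struct entity_state *s, float now,
                        float *x_out, float *y_out)
{
    float dt = now - s->recv_time;      /* how old the data is */
    *x_out = s->x + s->vx * dt;
    *y_out = s->y + s->vy * dt;
}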
Also, many games use UDP for this sort of data transfer instead of TCP. UDP is unreliable, so you may not get all of your packets. That means that "stop moving now" or "start moving now" may not be received in pairs. When coding on top of UDP then it's even more important to send "this is the state right now" every so often so that clients get ample opportunity to sync up.
The common way is to use a binary format - not text, not XML. With only one byte you can then represent one of 256 different commands.
Also use UDP and not TCP. The game will be a lot more responsive with UDP in the case of packet loss, since you can still extrapolate the movements. With each packet, send a packet number so that the server knows when the command was sent.
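A minimal datagram header carrying such a number might look like this (field sizes are arbitrary choices for the example):

#include <stdint.h>

/* Per-datagram header: the sequence number lets the receiver detect loss
   and reordering; the tick says when the command was issued. */
struct packet_header {
    uint32_t sequence;  /* incremented for every datagram sent */
    uint32_t tick;      /* sender's game tick at send time */
};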
I highly recommend that you download the Quake source code where you can learn more about network programming in modern multiplayer games. It's really easy to read and understand.
edit:
I almost forgot..
Google's Protocol Buffers can be of great help when sending complex data structures.
I thought I would give my two cents and provide a practical application of what is being referred to as binary serialization. The concept is actually incredibly simple, yet seems complicated from the outside.
You can send XML and have a server that dispatches the data within it to different functions in the server itself. You can also just send the server a single number that the server stores in a variable. After that, it can process the rest of the data and choose the correct course of action.
As an example, some rough code:
private const MOVE_RIGHT:int = 0;
private const MOVE_LEFT:int = 1;
private const MOVE_UP:int = 2;
private const MOVE_DOWN:int = 3;

// Dispatch on the command code received from the server.
private function processData(command:int):void
{
    switch (command)
    {
        case MOVE_RIGHT:
            // move the client's player to the right
            break;
        case MOVE_LEFT:
            // move the client's player to the left
            break;
        case MOVE_UP:
            // move the client's player up
            break;
        case MOVE_DOWN:
            // move the client's player down
            break;
    }
}
This is a very simple example and would need to be modified, but as you can see, you merely store the commands as whole numbers that you transmit as strings of numbers. You can parse these and create headers to organize them into the different sections of data that need to be transmitted.
Also, it is better to use a UDP setup for games, because a missed packet should NOT halt the gaming experience; instead, it should be handled client-side AND server-side.