Which SAE J1939 PGN number can delete an ECU in a CAN-bus network? - can-bus

I'm looking for a PGN number in the SAE J1939 standard that can delete an ECU address, or at least announce that this ECU address or ECU unit is not used anymore in the CAN-bus network.
I have recently written an open-source SAE J1939 library in pure C code, but I need one more PGN number. Examples are available to use.
https://github.com/DanielMartensson/Open-SAE-J1939

There is no specific message to remove an ECU from the network. You need to read up on the J1939 address claim process: once a new ECU is connected to a J1939 network, it shall broadcast an Address Claimed message (PGN 60928) indicating the source address it is trying to claim. If another ECU is already using the same address, address arbitration starts. In the internal configuration of your device you will find a serial (identity) number, manufacturer code, industry group and more; together these fields form the ECU's 64-bit NAME.
Address arbitration is a numeric comparison of the two NAME values: the ECU with the lower NAME keeps the contested source address, and the loser must either claim a different address or send a Cannot Claim Address message from the NULL address (254). You can find all the information in the SAE J1939-81 network management and address claim sections.
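As a rough illustration of the arbitration rule above, here is a minimal C sketch. The function and constant names are made up for the example and are not part of the Open-SAE-J1939 API or the standard text:

```c
#include <stdint.h>
#include <stdio.h>

/* J1939 address claim arbitration (SAE J1939-81): when two ECUs claim the
 * same source address, each compares the 64-bit NAME in the received
 * Address Claimed message (PGN 60928) with its own. The ECU with the
 * numerically LOWER NAME keeps the address; the loser must claim a new
 * address or send Cannot Claim Address from the NULL address (254). */
#define J1939_NULL_ADDRESS 254U

/* Returns 1 if we keep the contested address, 0 if we lose arbitration. */
static int j1939_win_arbitration(uint64_t own_name, uint64_t contender_name)
{
    return own_name < contender_name; /* lower NAME has higher priority */
}

int main(void)
{
    uint64_t own   = 0x00000F0A12345678ULL; /* example NAME values only */
    uint64_t rival = 0x00000F0A22345678ULL;

    if (j1939_win_arbitration(own, rival))
        printf("We keep the source address\n");
    else
        printf("We must reclaim or fall back to address %u\n",
               J1939_NULL_ADDRESS);
    return 0;
}
```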

Related

How can I construct an IP payload packet with mbuf?

My requirement is to create an Ethernet-IP-Payload packet with a DPDK mbuf. The DPDK application is running on a virtual machine, and the packet generation function invokes the API (attached in the second image). I send the packet through the DPDK interface and capture it on my host system (W10).
Wireshark cannot interpret the Ethernet payload as IP. Is there something wrong?
A couple of things you can correct in your DPDK code:
1. Network packets are big-endian. I assume you are running the guest OS on x86; if this is true, please make sure you convert multi-byte fields to network byte order (e.g. with rte_cpu_to_be_16/rte_cpu_to_be_32).
2. The DPDK macro rte_pktmbuf_mtod gets you the start of the packet. Please fill the Ethernet header first, then move 14 bytes, i.e. sizeof(struct rte_ether_hdr), before typecasting to the IP header and filling its data.
3. In the IP header, the checksum field is set to 0. Please cross-check whether you are enabling IP-checksum offload in port_init; if not, compute it in software.
4. All fields in the IP header have to follow big-endian format too.
5. There is also a port field in the mbuf. With rte_pktmbuf_alloc it will be 0, but for sending to another port it should be set to the right value.
Wireshark observations:
Bytes 13 and 14 are 0x0a50. This looks like your intended payload has been overwritten.
Byte 1 is 0x45, which clearly shows you are writing the IP header content first instead of the Ethernet header. Please apply step 2 from the DPDK fixes above.
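To tie steps 2-4 together, here is a minimal, untested C sketch. The helper fill_eth_ipv4 is hypothetical, and the field names assume a recent DPDK release (struct rte_ether_hdr uses dst_addr/src_addr since 21.11; older releases use d_addr/s_addr):

```c
#include <netinet/in.h>   /* IPPROTO_UDP */
#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

/* Hypothetical helper: fill the Ethernet header first, then the IPv4
 * header, with every multi-byte field in network (big-endian) order. */
static void fill_eth_ipv4(struct rte_mbuf *m,
                          const struct rte_ether_addr *src_mac,
                          const struct rte_ether_addr *dst_mac,
                          uint32_t src_ip, uint32_t dst_ip,
                          uint16_t payload_len)
{
    /* Step 2: the Ethernet header goes at the very start of the packet. */
    struct rte_ether_hdr *eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
    eth->dst_addr = *dst_mac;   /* d_addr/s_addr on DPDK < 21.11 */
    eth->src_addr = *src_mac;
    eth->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);

    /* The IPv4 header follows the 14-byte Ethernet header. */
    struct rte_ipv4_hdr *ip = (struct rte_ipv4_hdr *)(eth + 1);
    ip->version_ihl = 0x45;                 /* IPv4, 20-byte header */
    ip->type_of_service = 0;
    ip->total_length = rte_cpu_to_be_16(sizeof(*ip) + payload_len);
    ip->packet_id = 0;
    ip->fragment_offset = 0;
    ip->time_to_live = 64;
    ip->next_proto_id = IPPROTO_UDP;
    ip->src_addr = rte_cpu_to_be_32(src_ip); /* step 4: big-endian too */
    ip->dst_addr = rte_cpu_to_be_32(dst_ip);

    /* Step 3: compute the checksum in software if the port does not
     * offload it (check your port_init TX offload flags). */
    ip->hdr_checksum = 0;
    ip->hdr_checksum = rte_ipv4_cksum(ip);

    m->data_len = (uint16_t)(sizeof(*eth) + sizeof(*ip) + payload_len);
    m->pkt_len = m->data_len;
}
```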

g_socket_bind behavior on UDP multicast

I have multiple readers on a single system which bind to a single address (IP:port, e.g. 239.0.0.1:1234). Another computer in the group sends a UDP multicast packet to this group and the readers should receive it. I used the GLib 2.0 networking stack: g_socket_bind with allow_reuse set to TRUE.
When there is a single reader (a single socket bound to that address), or up to three readers, everything is OK and the readers receive packets correctly. But when the number of readers increases to four or more, packet loss occurs and increases linearly with the number of readers on the system.
If socket is a UDP socket, then allow_reuse determines whether or not other UDP sockets can be bound to the same address at the same time. In particular, you can have several UDP sockets bound to the same address, and they will all receive all of the multicast and broadcast packets sent to that address.
As stated in the GIO Reference Manual, when allow_reuse is set to TRUE all readers should receive all of the data, but that doesn't happen, as described above.
Does anybody know what the problem is? Is it a kernel-related problem?
All your sockets need to join the multicast group. If you're just relying on the bind to effect that, you are into undefined behaviour.
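For illustration, a minimal GLib sketch of one reader that both binds with allow_reuse and explicitly joins the group (address and port taken from the question; error handling and cleanup abbreviated):

```c
#include <gio/gio.h>

/* Minimal sketch: bind a reusable UDP socket to the multicast port and
 * explicitly join the group, rather than relying on bind() alone. */
int main(void)
{
    GError *error = NULL;

    GSocket *sock = g_socket_new(G_SOCKET_FAMILY_IPV4,
                                 G_SOCKET_TYPE_DATAGRAM,
                                 G_SOCKET_PROTOCOL_UDP, &error);
    if (!sock)
        g_error("socket: %s", error->message);

    GInetAddress *any = g_inet_address_new_any(G_SOCKET_FAMILY_IPV4);
    GSocketAddress *addr = g_inet_socket_address_new(any, 1234);

    /* allow_reuse = TRUE so several readers can bind the same port. */
    if (!g_socket_bind(sock, addr, TRUE, &error))
        g_error("bind: %s", error->message);

    /* The important part: every reader joins the group itself. */
    GInetAddress *group = g_inet_address_new_from_string("239.0.0.1");
    if (!g_socket_join_multicast_group(sock, group,
                                       FALSE /* not source-specific */,
                                       NULL  /* default interface */,
                                       &error))
        g_error("join: %s", error->message);

    /* ... g_socket_receive() loop ... */
    return 0;
}
```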

Do any MQTT clients actually reuse packet identifiers?

The packet identifier is required for certain MQTT control packets (http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/csprd02/mqtt-v3.1.1-csprd02.html#_Toc385349761). It's defined by the standard as a 16-bit integer and is generated by each client. The identifiers are reusable by the client after the acknowledgement packet is received, so the standard allows up to 64k in-flight messages. In practice, the clients I've looked at seem to just increment a counter, and so allow a total of 64k messages to be sent by a client. Both of the Rust MQTT client libraries panic when that counter overflows. (UPDATED 2016-09-07: if the Rust clients are compiled in release mode then they don't panic; the value of the packet identifier becomes 0 -- in normal circumstances this will work, but...)
Does anyone know of an MQTT client that allows more than 64k messages/client (i.e. re-uses packet identifiers)? I'm wondering if this is a limitation I need to be aware of in general, or if it's just a few clients. I've taken a quick look at compliance tests and haven't yet seen much to indicate that this is checked -- I'll keep looking.
Edit: it could be that some clients achieve this as a side effect of limiting the number of in-flight messages. UPDATE 2016-09-07: the Rust clients do it by assuming they can wrap on overflow and never catch up to lagging messages (maybe a good bet, but not assured, and with an ugly outcome if it happens).
As you have pointed out, packet identifiers are intended as temporary values that must persist until the publish packet is received and acknowledged.
Once acknowledged, you can reuse the identifier (or not).
Most clients run on embedded systems and don't track more than a single packet (so only a single identifier is handled), since they wait for the ACK or REC/COMP before publishing anything else.
So for these clients, even a single identifier would be enough.
Please notice that for QoS 1, remembering the identifier is futile, since it's valid to resend the packet if the next packet received is not an ACK (and you have the identifier to reply with in the packet you are receiving).
For the rare clients that do support interleaved publish packets, they only need to support two valid identifiers at any time (that is, the case where they have received a QoS 2 packet, answered with PUBREC, and then receive another QoS 1 or 2 packet).
Once they receive a PUBREL packet they can reply with a PUBCOMP without needing to remember the identifier (it's in the PUBREL header), so the only time they need to remember an identifier is between the PUBLISH and the PUBREC packet. Provided they allow interleaved publish packets, the only case where a second identifier is required is when they are publishing while receiving a published packet at the same time.
Now, from the point of view of the broker, most implementations use a 16-bit counter per client, so in theory they could support up to 65535 in-flight packets.
In reality, since the minimum size of a publish packet is 8 bytes (usually more), that means storing at least 9 bytes per potential packet (the additional byte remembers the current state in case of QoS 2), so that's half a megabyte of memory in the minimal case, but likely much more in real life, since you never have an empty publish payload and topic name.
As you can see, such a storage requirement is almost impossible for an embedded system to meet, so shortcuts are taken.
In most scenarios, either the server does not allow that many unacknowledged packets (by simply replying to the client promptly in order to release the identifier), or it shares the identifier pool between different clients.
So typically, again, the worst case for the broker can only happen if the client does not acknowledge the published packets. If the broker does not get any answer from the client it can:
close the connection,
refuse to send new published information, or
ignore the answer and republish.
All of these strategies need to be implemented anyway, since you could have the same issue with a slow client, a fast publisher and all 65535 identifiers in use.
Since you have these strategies, there is no need to waste a megabyte of memory per client; instead, cut things off much earlier (while keeping reasonable working conditions).
In the end, packet identifiers are a tool for identifying recent packets, not for indexing every packet received. A counter is good enough for this case, and wrapping around should not pose any issue once you account for the memory and bandwidth requirements.
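As an illustration of reusing identifiers instead of panicking on overflow, here is a minimal C sketch of a wrapping 16-bit identifier allocator that skips identifiers still in flight. All names are made up for the example:

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal sketch of a reusable MQTT packet-identifier allocator.
 * Identifiers are 16-bit, must be non-zero (MQTT 3.1.1), and may be
 * reused once acknowledged. A bitmap tracks the in-flight ones so the
 * wrapping counter never hands out an identifier that is still pending. */
static uint8_t in_flight[8192];   /* 65536 bits */
static uint16_t next_id = 1;      /* 0 is never a valid identifier */

static bool id_busy(uint16_t id)
{
    return in_flight[id >> 3] & (uint8_t)(1u << (id & 7));
}

static void id_mark(uint16_t id, bool busy)
{
    if (busy) in_flight[id >> 3] |= (uint8_t)(1u << (id & 7));
    else      in_flight[id >> 3] &= (uint8_t)~(1u << (id & 7));
}

/* Returns 0 when all 65535 identifiers are in flight (caller must wait). */
uint16_t packet_id_alloc(void)
{
    for (uint32_t tried = 0; tried < 65535; tried++) {
        uint16_t id = next_id;
        next_id = (uint16_t)(next_id == 65535 ? 1 : next_id + 1);
        if (!id_busy(id)) {
            id_mark(id, true);
            return id;
        }
    }
    return 0; /* no free identifier: apply back-pressure, don't panic */
}

/* Call when the PUBACK (QoS 1) or PUBCOMP (QoS 2) arrives. */
void packet_id_release(uint16_t id)
{
    id_mark(id, false);
}
```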

sctp multihoming with zero window probing

I have a query regarding zero window probing in multihoming. When zero window probing occurs, RFC 4960 (section 6.1, rule A) says one data chunk should be in flight per destination transport address.
But if I have a multihoming application with two source and two destination IP addresses, should there be two data chunks in flight (one per address), or should a single data chunk be in flight?
Thanks for the help
Even if you have multiple multihomed addresses on your destination, only one data chunk will be in flight. Multihomed addresses are used for failover (so if sending to one address fails, SCTP will retransmit to a different address).

Why do we need sender MAC address in ARP request?

Here is a Wireshark capture of an ARP request: it contains the sender MAC inside the ARP packet itself. The receiving station can derive the MAC from the Ethernet frame, so it seems to be redundant. Is there any particular use of separately including the sender MAC address in the ARP request too?
The "redundancy" was by design (RFC 826), and can be useful in targeting different layers. In RFC 3927 there's what is known as Gratuitous Address Resolution Protocol (GARP), and in certain circumstances the redundancy, or lack of, plays an important role, especially in troubleshooting and monitoring networking stacks.
Actually it's not rendunancy at all, the MAC (physical, layer 2) and IP (logical, layer 3) addresses are not the same thing. They serve different purposes on different network layers.
On large scale networks it's quite common to observe changes in the MAC/ARP/Source/Dest information, and at times can seem almost incorrect. For example, you might see a host send an ARP request with its own address as the target address. Depending on the exact situation, it might be telling us it's a link up/down event, maybe it's trying update other devices ARP tables, or possibly detecting an ip conflict and moving the ip to another NIC.
I could get into clustering, failovers — the list goes on, although I would end up writing a book trying to explain it all. Hopefully this gives you a bit of insight about the "redundancy" you were questioning. ;-)
More info: RFC 826 / RFC 3927 / Wireshark Gratuitous ARP
Although often used in conjunction with Ethernet, ARP is by itself an independent protocol. Imagine other link-layer protocols that do not expose MAC addresses: ARP would not work in such circumstances if the sender field were not provided.
There is no rule that the sender MAC address field in the ARP packet has to be the same as the Ethernet source MAC address. E.g. it's possible in some applications, where multiple interfaces of the same host are on the network, that only one interface sends ARP responses for all interfaces.
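For reference, here is a C sketch of the ARP packet layout (RFC 826) next to the Ethernet header that carries it. The struct names are made up, but the fields show that the sender hardware address in the ARP body is a separate field from the Ethernet source MAC, which is why the two can legitimately differ:

```c
#include <stdint.h>

/* Ethernet header and ARP packet, laid out as on the wire. The sender
 * MAC appears twice: once in the frame (eth_hdr.src_mac) and once in the
 * ARP body (arp_pkt.sha). The copy in the ARP body survives even when
 * the link layer rewrites, or does not expose, the frame's source MAC. */
#pragma pack(push, 1)
struct eth_hdr {
    uint8_t  dst_mac[6];   /* layer-2 destination */
    uint8_t  src_mac[6];   /* layer-2 source: may be rewritten in transit */
    uint16_t ethertype;    /* 0x0806 for ARP */
};

struct arp_pkt {
    uint16_t htype;        /* 1 = Ethernet */
    uint16_t ptype;        /* 0x0800 = IPv4 */
    uint8_t  hlen;         /* hardware address length: 6 */
    uint8_t  plen;         /* protocol address length: 4 */
    uint16_t oper;         /* 1 = request, 2 = reply */
    uint8_t  sha[6];       /* sender hardware address: ARP's own copy */
    uint32_t spa;          /* sender protocol (IP) address */
    uint8_t  tha[6];       /* target hardware address (zeroed in requests) */
    uint32_t tpa;          /* target protocol (IP) address */
};
#pragma pack(pop)
```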
