SCTP multihoming with zero window probing

I have a query regarding zero window probing in multihoming. When zero window probing occurs, RFC 4960 (Section 6.1, rule A) says one data chunk should be in flight per destination transport address.
But if I have a multihomed application with two source and two destination IP addresses, should there be two data chunks in flight (one per destination address), or just a single data chunk?
Thanks for the help

Even if you have multiple multihomed addresses on your destination, only one data chunk will be in flight. Multihomed addresses are used for failover (so if sending to one address fails, SCTP will retransmit to a different address).
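If it helps to picture the setup, here is a minimal sketch (using the Linux lksctp sctp_bindx() call, with hypothetical addresses and error handling reduced to asserts) of how extra local addresses are attached to one endpoint; they give SCTP alternative paths for retransmission and heartbeats, not parallel data flows:

```c
/* Sketch: bind two local addresses to one SCTP endpoint (lksctp, link with -lsctp).
 * Addresses are hypothetical; error handling is reduced to asserts. */
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <netinet/sctp.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
    assert(sd >= 0);

    struct sockaddr_in addrs[2];
    memset(addrs, 0, sizeof(addrs));
    for (int i = 0; i < 2; i++) {
        addrs[i].sin_family = AF_INET;
        addrs[i].sin_port = htons(5000);
    }
    inet_pton(AF_INET, "10.0.0.1", &addrs[0].sin_addr);     /* first local path  */
    inet_pton(AF_INET, "192.168.0.1", &addrs[1].sin_addr);  /* second local path */

    /* Bind both source addresses to the same endpoint. The second address is
     * a failover path for retransmission, not a second data pipe. */
    assert(sctp_bindx(sd, (struct sockaddr *)addrs, 2, SCTP_BINDX_ADD_ADDR) == 0);

    /* ... connect/listen as usual; SCTP monitors the peer's addresses with
     * heartbeats and retransmits to an alternate address on failure. */
    return 0;
}
```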

Related

Which SAE J1939 PGN number can delete an ECU in a CAN-bus network?

I'm looking for a PGN number in the SAE J1939 standard that can delete an ECU address, or at least indicate that this ECU address or ECU unit is no longer used on the CAN-bus network.
I have recently written an open source SAE J1939 library in pure C code, but I need one more PGN number. Examples are available to use.
https://github.com/DanielMartensson/Open-SAE-J1939
There is no specific message to remove an ECU from the network; you need to read up on the J1939 address claim process. When a new ECU is connected to a J1939 network, it shall broadcast an Address Claimed message indicating the source address it is trying to claim. If another ECU is already using that address, the address claim arbitration starts. In the internal configuration of your device you will find a serial number, vendor name, industry group and more. All this information is in ASCII characters, e.g. "VENDORNAMESERIAL12345".
The address claim arbitration is a comparison character by character using the numeric value of each character; as soon as one character is greater than the other, the "winner" ECU keeps the source address. You can find all the details in the SAE J1939 network management and address claim sections.
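As a rough illustration of that arbitration (a sketch, not code from the linked library): each ECU carries a 64-bit NAME built from the fields mentioned above, and when two ECUs claim the same source address, the one whose NAME compares lower keeps it, which is what the byte-by-byte comparison boils down to.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of J1939 address-claim arbitration: when two ECUs claim the same
 * source address, the ECU with the numerically lower 64-bit NAME wins.
 * Comparing the NAME byte by byte from the most significant byte down
 * gives the same result as comparing the values directly. */
static int name_wins(uint64_t my_name, uint64_t other_name)
{
    return my_name < other_name;   /* lower NAME has higher priority */
}

int main(void)
{
    uint64_t ecu_a = 0x8000000000000001ULL;  /* hypothetical NAMEs */
    uint64_t ecu_b = 0x8000000000000002ULL;

    if (name_wins(ecu_a, ecu_b))
        printf("ECU A keeps the contested source address\n");
    else
        printf("ECU A must claim a different address (or go address-less)\n");
    return 0;
}
```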

When are TLP packets created in a memory mapped PCIe configuration?

I have an understanding problem when it comes to PCIe connections. In the PCIe interface, data is transferred between devices using TLP packets. In a memory-mapped configuration, if a piece of software wants to send data to a device, it must write the data to a predefined memory location that is mapped to this specific device.
When are the TLP packets created? Is the data stored in memory and the device has to "fetch" the data using TLP packets (e.g. memory read), or does the MMU of the CPU automatically detect, that this is a mapped memory region and automatically "converts" the data to TLP packets and sends them over the interface?
Thank you in advance!
The CPU generates a memory transaction using the physical MMIO address. Based on the address, the memory transaction is routed to the appropriate root port. Up to that point the operation is outside the scope of the PCIe spec. The root port constructs the TLP and sends it out over PCIe. If the operation is a read (requiring a response), the root port receives the response TLP from the device with the data and sends the data back to the proper CPU.
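To make the software side concrete, here is a hedged sketch for Linux (the device path and register offsets are made up): the program maps a BAR through sysfs and performs plain loads and stores; it is the root complex, not this code, that turns those CPU transactions into TLPs.

```c
/* Sketch: map BAR0 of a hypothetical PCIe device and access a register.
 * The store below becomes a PCIe Memory Write TLP only once the transaction
 * reaches the root complex; nothing in user code builds TLPs. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Device address and register offsets are hypothetical. */
    int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0",
                  O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    volatile uint32_t *bar0 = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (bar0 == MAP_FAILED) { perror("mmap"); return 1; }

    bar0[0x10 / 4] = 0xdeadbeef;   /* posted write -> Memory Write TLP        */
    uint32_t v = bar0[0x14 / 4];   /* read -> Memory Read TLP plus completion */
    printf("read back 0x%08x\n", v);

    munmap((void *)bar0, 4096);
    close(fd);
    return 0;
}
```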

g_socket_bind behavior on UDP multicast

I have multiple readers on a single system which bind to a single address (IP:port, e.g. 239.0.0.1:1234). Another computer in the group sends a UDP multicast packet to this group and the readers should receive it. I used the GLib 2.0 networking stack, g_socket_bind with allow_reuse set to true.
When there is a single reader (a single socket bound to that address) or up to three readers, everything is OK and the readers receive packets correctly. But when the number of readers increases to four or more, packet loss occurs and increases linearly with the number of readers on the system.
If socket is a UDP socket, then allow_reuse determines whether or not other UDP sockets can be bound to the same address at the same time. In particular, you can have several UDP sockets bound to the same address, and they will all receive all of the multicast and broadcast packets sent to that address.
As stated in the GIO Reference Manual, when allow_reuse is set to true, all readers should receive all of the data, but that doesn't happen as described above.
Does anybody know what the problem is? Is it a kernel-related problem?
All your sockets need to join the multicast group. If you're just relying on the bind to effect that, you are into undefined behaviour.
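A minimal sketch of what that looks like with GIO (group address and port taken from the question, everything else assumed): each reader binds with allow_reuse and then explicitly joins the group with g_socket_join_multicast_group().

```c
/* Sketch (GLib/GIO): each reader binds with allow_reuse and explicitly
 * joins the multicast group instead of relying on the bind alone.
 * Compile with: gcc reader.c $(pkg-config --cflags --libs gio-2.0) */
#include <gio/gio.h>

int main(void)
{
    GError *error = NULL;

    GSocket *sock = g_socket_new(G_SOCKET_FAMILY_IPV4, G_SOCKET_TYPE_DATAGRAM,
                                 G_SOCKET_PROTOCOL_UDP, &error);
    g_assert_no_error(error);

    GInetAddress *group = g_inet_address_new_from_string("239.0.0.1");
    GSocketAddress *addr = g_inet_socket_address_new(group, 1234);

    /* allow_reuse = TRUE so several readers can bind the same address:port. */
    g_socket_bind(sock, addr, TRUE, &error);
    g_assert_no_error(error);

    /* The important part: join the group on this socket as well. */
    g_socket_join_multicast_group(sock, group, FALSE, NULL, &error);
    g_assert_no_error(error);

    gchar buf[1500];
    gssize n = g_socket_receive(sock, buf, sizeof(buf), NULL, &error);
    g_print("received %" G_GSSIZE_FORMAT " bytes\n", n);

    g_object_unref(addr);
    g_object_unref(group);
    g_object_unref(sock);
    return 0;
}
```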

Do any MQTT clients actually reuse packet identifiers?

The packet identifier is required for certain MQTT control packets (http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/csprd02/mqtt-v3.1.1-csprd02.html#_Toc385349761). It's defined by the standard as a 16-bit integer, and is generated by each client. The identifiers are reusable by the client after the acknowledgement packet is received. So the standard allows up to 64k in-flight messages. In practice, the clients I've looked at seem to just increment a counter, and so allow a total of 64k messages to be sent by a client. Both of the Rust MQTT client libraries panic when that counter overflows. (UPDATED 2016-09-07: if the Rust clients are compiled in release mode then they don't panic; the value of the Packet Identifier becomes 0 -- in normal circumstances this will work, but...)
Does anyone know of an MQTT client that allows more than 64k messages/client (i.e. re-uses packet identifiers)? I'm wondering if this is a limitation that I need to be aware of in general, or if it's just a few clients. I've taken a quick look at compliance tests and haven't yet seen much to indicate that this is checked -- I'll keep looking.
Edit: It could be that some clients achieve this as a side-effect of limiting the number of in-flight messages. UPDATE 2016-09-07: the Rust clients do it by assuming they can wrap on overflow and never catch up to lagging messages (maybe a good bet, but not assured, and with an ugly outcome if it happens).
As you have pointed out, packet identifiers are intended as temporary values that must persist until the published packet is received and acknowledged.
Once acknowledged, you can reuse the identifier (or not).
Most clients run on embedded systems and don't track more than a single packet (so only a single identifier is being handled), since they wait for the ACK or REC/COMP before publishing anything else.
So for these clients, even a single identifier would be enough.
Note that for QoS 1, remembering the identifier is futile, since it's valid to resend the packet if the next packet is not an ACK (and you get the identifier to reply with from the packet you are receiving).
The rare clients that do support interleaved publish packets only need to support two valid identifiers at any time (that is, the case where they have received a QoS 2 packet, answered with PUBREC, and then receive another QoS 1 or 2 packet).
Once they receive a PUBREL packet they can reply with a PUBCOMP without needing to remember the identifier (it's in the PUBREL header), so the only time they need to remember an identifier is between the PUBLISH and the PUBREC packet. Provided they allow interleaved publish packets, the only case where a second identifier is required is when they are publishing while receiving a published packet at the same time.
Now, from the point of view of the broker, most implementations use a 16-bit counter per client, so they could support, in theory, up to 65535 in-transit packets.
In reality, since the minimum size of a publish packet is 8 bytes (usually more), that means having to store at least 9 bytes per potential packet (the additional byte is for remembering the current state in the QoS 2 case), so that's roughly half a MB of memory in the minimal case (65535 × 9 bytes ≈ 590 KB), but likely much more in real life, since you never have an empty publish payload and topic name.
As you can see, such a storage requirement is almost impossible to meet on an embedded system, so shortcuts are taken.
In most scenarios, the server either does not allow so many unacknowledged packets (by simply replying to the client in order to release the identifier) or shares the identifier pool between different clients.
So typically, again, the worst case for the broker can only happen if the client does not acknowledge the published packets. If the broker does not get any answer from the client it can:
close the connection
refuse to send new published information or
ignore the answer and republish
All of these strategies need to be implemented anyway, since you could hit the same issue with a slow client and a fast publisher even with your 65535 identifiers.
Since you need these strategies anyway, there is no point wasting a MB of memory per client; instead you cut things off much earlier (while keeping reasonable working conditions).
In the end, packet identifiers are a tool for identifying recent packets, not a tool for indexing every packet received. A counter is good enough for this case, and wrapping around should not pose any issue once you account for the memory and bandwidth requirements.
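To illustrate what identifier reuse can look like, here is a hedged sketch (not taken from any particular client library) of a 16-bit allocator that wraps around while skipping identifiers that are still unacknowledged, so wrapping never collides with an in-flight message:

```c
/* Sketch of a wrapping 16-bit MQTT packet-identifier allocator that never
 * hands out an identifier still awaiting its acknowledgement.
 * Not taken from any specific client library. */
#include <stdbool.h>
#include <stdint.h>

static bool in_flight[65536];      /* identifier -> still unacknowledged?  */
static uint16_t next_id = 1;       /* 0 is not a valid packet identifier   */

/* Returns 0 if every identifier is currently in flight (caller must wait). */
uint16_t packet_id_alloc(void)
{
    for (int tries = 0; tries < 65535; tries++) {
        uint16_t id = next_id;
        next_id = (uint16_t)(next_id + 1);
        if (next_id == 0)          /* wrap around, skipping 0 */
            next_id = 1;
        if (!in_flight[id]) {
            in_flight[id] = true;
            return id;
        }
    }
    return 0;                      /* 64k messages genuinely in flight */
}

/* Call when the matching PUBACK (QoS 1) or PUBCOMP (QoS 2) arrives. */
void packet_id_release(uint16_t id)
{
    in_flight[id] = false;
}
```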

Building a Network Appliance Prototype Using a standard PC with Linux and Two NIC's

I want to build a prototype of a network appliance.
This appliance is supposed to transparently manipulate Ethernet packets. It is supposed to have two network interface cards, with one card connected to the outside leg (i.e. eth0) and the other to the inside leg (i.e. eth1).
In a typical network layout as in the attached image, it will be placed between the router and the LAN's switch.
My plan is to write software that hooks in at the kernel driver level and does whatever I need to do to incoming and outgoing packets.
For instance, an "outgoing" packet (at eth1) would be manipulated and passed over to the other NIC (eth0), from which it should then be transported on to the next hop.
My questions are:
Is this doable?
Those NICs will have no IP address; should that be a problem?
Thanks in advance for your answers.
(And no, there is no such device on the market yet, so "why reinvent the wheel"-style answers are irrelevant, please.)
typical network diagram http://img163.imageshack.us/img163/1249/stackpost.png
I'd suggest libipq, which seems to do just what you want:
Netfilter provides a mechanism for passing packets out of the stack for queueing to userspace, then receiving these packets back into the kernel with a verdict specifying what to do with the packets (such as ACCEPT or DROP). These packets may also be modified in userspace prior to reinjection back into the kernel.
Apparently, it can be done.
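For reference, a hedged sketch of the userspace side with libipq (it assumes an iptables rule such as "iptables -A FORWARD -j QUEUE" is in place; note that libipq has since been superseded by libnetfilter_queue):

```c
/* Sketch: receive queued packets in userspace with libipq, optionally
 * modify them, and reinject with a verdict. Needs an iptables rule such
 * as "iptables -A FORWARD -j QUEUE" and linking with -lipq.            */
#include <sys/types.h>
#include <linux/netfilter.h>   /* NF_ACCEPT */
#include <libipq.h>
#include <stdio.h>

int main(void)
{
    unsigned char buf[4096];

    struct ipq_handle *h = ipq_create_handle(0, PF_INET);
    if (!h) { ipq_perror("ipq_create_handle"); return 1; }

    /* Ask the kernel to copy whole packets to userspace. */
    if (ipq_set_mode(h, IPQ_COPY_PACKET, sizeof(buf)) < 0) {
        ipq_perror("ipq_set_mode");
        return 1;
    }

    for (;;) {
        if (ipq_read(h, buf, sizeof(buf), 0) < 0)
            break;
        if (ipq_message_type(buf) != IPQM_PACKET)
            continue;

        ipq_packet_msg_t *m = ipq_get_packet(buf);
        /* m->payload / m->data_len could be inspected or rewritten here. */
        ipq_set_verdict(h, m->packet_id, NF_ACCEPT, 0, NULL);
    }

    ipq_destroy_handle(h);
    return 0;
}
```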
I am actually trying to build a prototype of it using scapy
As long as the NICs are set to promiscuous mode, they catch packets on the network without needing an IP address set on them. I know it can be done, as there are a lot of companies that produce the same type of equipment (e.g. Juniper Networks, Cisco, F5, Fortinet, etc.).
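A hedged sketch of that in C (the interface name is hypothetical): an AF_PACKET socket bound to the interface and put into promiscuous mode captures frames even though the interface has no IP address configured.

```c
/* Sketch: capture raw Ethernet frames on an IP-less interface by putting
 * it into promiscuous mode via an AF_PACKET socket (Linux only).        */
#include <arpa/inet.h>
#include <linux/if_ether.h>    /* ETH_P_ALL */
#include <net/if.h>
#include <netpacket/packet.h>  /* sockaddr_ll, packet_mreq */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    /* "eth1" is the hypothetical inside-leg interface. */
    int ifindex = if_nametoindex("eth1");
    if (ifindex == 0) { perror("if_nametoindex"); return 1; }

    /* Receive only from this interface. */
    struct sockaddr_ll sll;
    memset(&sll, 0, sizeof(sll));
    sll.sll_family = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex = ifindex;
    if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
        perror("bind"); return 1;
    }

    /* Switch the interface into promiscuous mode for this socket. */
    struct packet_mreq mr;
    memset(&mr, 0, sizeof(mr));
    mr.mr_ifindex = ifindex;
    mr.mr_type = PACKET_MR_PROMISC;
    if (setsockopt(fd, SOL_PACKET, PACKET_ADD_MEMBERSHIP,
                   &mr, sizeof(mr)) < 0) {
        perror("setsockopt"); return 1;
    }

    unsigned char frame[2048];
    ssize_t n = recv(fd, frame, sizeof(frame), 0);
    printf("captured a %zd-byte frame with no IP configured\n", n);

    close(fd);
    return 0;
}
```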
