g_socket_bind behavior on UDP multicast

I have multiple readers on a single system which bind to a single address (IP:port, e.g. 239.0.0.1:1234). Another computer in the group sends a UDP multicast packet to this group, and the readers should receive it. I use the GLib 2.0 networking stack: g_socket_bind() with allow_reuse set to TRUE.
When there is a single reader (a single socket bound to that address), or up to three readers, everything is fine and the readers receive packets correctly. But when the number of readers increases to four or more, packet loss occurs and increases linearly with the number of readers on the system.
If socket is a UDP socket, then allow_reuse determines whether or not other UDP sockets can be bound to the same address at the same time. In particular, you can have several UDP sockets bound to the same address, and they will all receive all of the multicast and broadcast packets sent to that address.
As stated in the GIO Reference Manual (quoted above), when allow_reuse is set to TRUE, all readers should receive all of the data, but that is not what happens.
Does anybody know what the problem is? Is it a kernel-related issue?

All your sockets need to explicitly join the multicast group, e.g. with g_socket_join_multicast_group(). If you're just relying on the bind to effect that, you are into undefined behaviour.
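A minimal sketch of a reader that does the explicit join, using GIO (GLib >= 2.32); the group address, port, and helper name make_reader are illustrative, not a drop-in fix:

#include <gio/gio.h>

static GSocket *
make_reader(GError **error)
{
    GSocket *sock;
    GInetAddress *group, *any;
    GSocketAddress *bind_addr;

    sock = g_socket_new(G_SOCKET_FAMILY_IPV4, G_SOCKET_TYPE_DATAGRAM,
                        G_SOCKET_PROTOCOL_UDP, error);
    if (sock == NULL)
        return NULL;

    /* Bind to the wildcard address with allow_reuse = TRUE so that
       several readers can share the same port. */
    any = g_inet_address_new_any(G_SOCKET_FAMILY_IPV4);
    bind_addr = g_inet_socket_address_new(any, 1234);
    g_object_unref(any);
    if (!g_socket_bind(sock, bind_addr, TRUE, error)) {
        g_object_unref(bind_addr);
        g_object_unref(sock);
        return NULL;
    }
    g_object_unref(bind_addr);

    /* The step that must not be skipped: join the group on THIS socket. */
    group = g_inet_address_new_from_string("239.0.0.1");
    if (!g_socket_join_multicast_group(sock, group, FALSE, NULL, error)) {
        g_object_unref(group);
        g_object_unref(sock);
        return NULL;
    }
    g_object_unref(group);

    return sock;
}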

Related

Packets are greater than configured MTU

I captured packets with tcpdump; the configured MTU is 2140. I am analysing the pcap files using Wireshark.
According to the configured MTU, the expected maximum frame size should be 2154 bytes (2140 bytes + 14 bytes of Ethernet header). But I see frames larger than 2154 bytes (e.g. 9010 bytes). On analysis I found that these packets are generated on the machine where I ran tcpdump (let's say A) and are destined for another machine (let's say B). I expected a packet to be fragmented before being sent to another host. I found some explanations online saying that tcpdump captures packets before the NIC breaks them down; this seems to be a valid explanation, but it appears to contradict my case, because on machine A I also received packets larger than 2154 bytes from B. Any thoughts on why machine A is sending and receiving packets larger than the configured MTU?
What you are seeing is most likely the result of TCP segmentation offload (TSO) on the sending side and large receive offload (LRO/GRO) on the receiving side, sometimes described together as TCP segment reassembly offloading. These are features available on some network cards with matching drivers.
The idea is that segmentation and reassembly of many of the TCP segments is handled in the NIC itself. This turns out to be pretty effective in reducing overhead on the CPU/OS side, since the network driver need only handle, perhaps, 1 "packet" out of 10: it sees just one large packet rather than receiving and reassembling all 10.
You can read more about it here.
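If you want to confirm whether these offloads are enabled, ethtool -k <iface> reports them from a shell. The following is a minimal C sketch using the legacy ethtool ioctl interface (Linux-specific; the interface name eth0 is illustrative):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <unistd.h>

int main(void)
{
    struct ifreq ifr;
    struct ethtool_value eval;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket works for the ioctl */
    if (fd < 0) { perror("socket"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* adjust to your NIC */
    ifr.ifr_data = (char *)&eval;

    eval.cmd = ETHTOOL_GTSO;   /* TCP segmentation offload (send side) */
    if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
        printf("tcp-segmentation-offload: %s\n", eval.data ? "on" : "off");

    eval.cmd = ETHTOOL_GGRO;   /* generic receive offload (receive side) */
    if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
        printf("generic-receive-offload: %s\n", eval.data ? "on" : "off");

    close(fd);
    return 0;
}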
Updated answer
If your packet is UDP:
This behaviour is normal, but there is not much you can do to see the individual packets on the end machines. The UDP datagram is broken down into MTU-compliant fragments and reassembled on the receiving host at the IP layer, often by dedicated offload hardware; that happens too low in the stack to be captured by Wireshark/pcap.
If you want to capture the individual fragments, you have to do this on an intermediate machine/network card, for example a gateway between the two hosts, because the original UDP datagram is not reassembled until it reaches its final destination. Note: this gateway can be virtual.
See notes.shichao.io/tcpv1/ch10
Leaving this here in case someone comes along with the same problem...
If your packet is TCP:
It sounds like Wireshark is reassembling packets for you; this is often the default for TCP streams. You can change this by right-clicking on a stream -> Protocol Preferences -> Allow subdissector to reassemble TCP streams.

Contiki OS: Set Promiscuous Mode and receive all UDP Packets

I'm trying to do the following:
a) Set Contiki in Promiscuous Mode.
b) Then retrieve all UDP and RPL packets sent, not only to the current node but also between two other nodes within communication range.
I have the following code:
/* Accept all frames at the radio level (clears address filtering) */
NETSTACK_RADIO.set_value(RADIO_PARAM_RX_MODE, 0);
/* Listen on UDP port 3001, send to port 3000, with 'receiver' as callback */
simple_udp_register(&unicast_connection, 3001, NULL, 3000, receiver);
where receiver is an appropriate callback function. I am able to collect UDP packets sent to the current node, but still unable to receive packets sent between other nodes in my communication range.
Setting RADIO_PARAM_RX_MODE only controls which packets the radio driver filters out. There are multiple layers in an OS network stack, of which the radio driver is only the first; the next ones are RDC and MAC, which still filter out packets addressed to other nodes, and there is no API to disable that.
The solution is either to modify the MAC to disable the dropping of packets not addressed to the local node, or to write your own simple MAC. The latter is what Sensniff (the Contiki packet sniffer) does - see its README and source code. By the way, if you just want to log all received packets, just use Sensniff!
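For the radio-level part specifically, note that writing 0 to RADIO_PARAM_RX_MODE also clears unrelated flags such as auto-ACK. A more surgical sketch, assuming the Contiki 3.x radio API (the helper name disable_address_filter is ours), clears only the address-filter bit:

#include "contiki.h"
#include "net/netstack.h"
#include "dev/radio.h"

/* Read the current RX mode, clear only the address filter, and keep
   the remaining flags (e.g. RADIO_RX_MODE_AUTOACK) intact. */
static void
disable_address_filter(void)
{
  radio_value_t rx_mode;

  if(NETSTACK_RADIO.get_value(RADIO_PARAM_RX_MODE, &rx_mode) == RADIO_RESULT_OK) {
    rx_mode &= ~RADIO_RX_MODE_ADDRESS_FILTER;
    NETSTACK_RADIO.set_value(RADIO_PARAM_RX_MODE, rx_mode);
  }
}

Remember that, as explained above, this alone is not enough: the RDC/MAC layers will still drop frames addressed to other nodes.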

Contiki IPv4 UDP broadcast packets not sending

I'm currently implementing my first application in Contiki on a TelosB mote and have encountered a problem.
For my application (which utilises the uIP IPv4 stack) I need to be able to broadcast messages to all nodes.
I have looked through the source and found that in uip_over_mesh.c the packet is identified as destined for an external network and is then sent to a gateway node on the network instead of being distributed to all nodes. If no gateway node is present, the packet is simply dropped.
So rather than the packet being broadcast to all the nodes in the network, it is either dropped or sent to just the gateway node and the external network.
My problem is that I need it to broadcast to the other nodes in the network (as it should). Is there a step I'm missing, or am I doing something wrong?
Thanks :)
P.S. This is the rough code used to send the message.
/* Create a UDP connection to the all-ones broadcast address on port 5001 */
struct uip_udp_conn *udp_conn = udp_broadcast_new(UIP_HTONS(5001), state);
/* Bind the local end to port 5001 as well */
udp_bind(udp_conn, UIP_HTONS(5001));
/* Queue a 5-byte payload for transmission */
uip_udp_packet_send(udp_conn, "hello", 5);
Sorry that my question didn't seem clear. To clarify: what I wanted to do was send an IPv4 UDP packet to the broadcast address, i.e. send to all devices on the network using the all-ones address. But I found that the sending device would only forward the message to a gateway node, if one was present on the network.
The question is not entirely clear, but what I understand is that you want to broadcast a message to all neighbouring motes (without addressing a specific node). You have two choices.
If you are using the Rime stack from Contiki, there is already example code under examples/rime/example-broadcast.c (have a look at lines 79-80: packetbuf_copyfrom("Hello", 6); broadcast_send(&broadcast);). I have tested the code and it works perfectly fine on TelosB. Still, I strongly recommend you go with the uIP (IPv6) stack using RPL: for a large network it will be extremely hard to maintain the Rime stack.
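For reference, a condensed sketch of that example (Contiki 3.x names; the original file has the full boilerplate):

#include "contiki.h"
#include "net/rime/rime.h"
#include <stdio.h>

static void
broadcast_recv(struct broadcast_conn *c, const linkaddr_t *from)
{
  printf("broadcast received from %d.%d: '%s'\n",
         from->u8[0], from->u8[1], (char *)packetbuf_dataptr());
}

static const struct broadcast_callbacks broadcast_call = {broadcast_recv};
static struct broadcast_conn broadcast;

PROCESS(example_broadcast_process, "Broadcast example");
AUTOSTART_PROCESSES(&example_broadcast_process);

PROCESS_THREAD(example_broadcast_process, ev, data)
{
  static struct etimer et;

  PROCESS_EXITHANDLER(broadcast_close(&broadcast);)
  PROCESS_BEGIN();

  /* Channel 129, as in the original example */
  broadcast_open(&broadcast, 129, &broadcast_call);

  while(1) {
    etimer_set(&et, CLOCK_SECOND * 4);
    PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&et));
    packetbuf_copyfrom("Hello", 6);
    broadcast_send(&broadcast);
  }

  PROCESS_END();
}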
Alternatively, you can use the UDP-based, IPv6-enabled broadcast example from examples/ipv6/simple-udp-rpl. You don't need to change anything in the receiver function unless you want additional features; it will print the receiver port, sender port, and data length. You can add the "addr" parameter of type "uip_ipaddr_t" in the receiver function if you want to print IP addresses. For the sender, the relevant lines of code are 76-91; you don't need to change them for a simple message like "hello". I tested the code and it works perfectly fine.
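A hedged sketch of the sending side (simplified from memory, not the exact lines from the tree; UDP_PORT and the helper name send_broadcast are illustrative):

#include "contiki.h"
#include "net/ip/uip.h"
#include "simple-udp.h"

#define UDP_PORT 1234

static struct simple_udp_connection broadcast_connection;

/* Called from the process thread after simple_udp_register() has set
   up broadcast_connection on UDP_PORT. */
static void
send_broadcast(void)
{
  uip_ipaddr_t addr;

  /* ff02::1, the link-local all-nodes multicast address: the IPv6
     counterpart of an IPv4 subnet broadcast. */
  uip_create_linklocal_allnodes_mcast(&addr);
  simple_udp_sendto(&broadcast_connection, "hello", 6, &addr);
}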
After lots more reading of the Contiki source I found that the problem lay in uip_over_mesh.c.
When a broadcast message (255.255.255.255) was being sent, it tripped up where the send function checks whether the destination is within the local network (based on the netmask and the destination address). Failing this check, it would then try to send the packet out to a local gateway (if one existed) to route it out of the network.
Although IPv4 UDP broadcast had been built into the API, I saw no evidence of it actually being implemented in uip_over_mesh.c (I might be wrong and have totally missed it). So to fix this, I added a broadcast Rime channel and a check for the all-ones address where the previously mentioned gateway check was. A method to receive the broadcast messages was also implemented, to ensure broadcast messages are correctly received and passed to the upper layers.
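I don't have the patch handy, but the shape of the change was roughly as follows. Treat this as a hypothetical sketch: the symbol names follow the Contiki 2.x tree from memory, and ip_broadcast_conn is a Rime broadcast connection assumed to have been opened during initialisation.

/* In uip_over_mesh_send(), before the netmask/gateway logic: */
if(uip_ipaddr_cmp(&BUF->destipaddr, &uip_broadcast_addr)) {
  /* Destination is 255.255.255.255: flood the datagram over a
     dedicated Rime broadcast channel instead of unicasting it
     toward a gateway. */
  packetbuf_copyfrom(&uip_buf[UIP_LLH_LEN], uip_len);
  broadcast_send(&ip_broadcast_conn);
  return;
}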
From what I gathered here and on the mailing list, IPv6 is where the focus is, and not many people are knowledgeable about or using the IPv4 uIP stack. When I get some time I will dig up my modified uip_over_mesh.c and see if I can push the modifications, though I'm sure it's a bit of a hack and not of much use given the above-mentioned lack of interest.

When does a UDP sendto() block?

While using the default (blocking) behaviour on a UDP socket, in which cases will a call to sendto() block? I'm interested mainly in the Linux behaviour.
For TCP I understand that congestion control makes the send() call block when the sending window is full, but what about UDP? Does it ever block, or does it just let packets get discarded at lower layers?
This can happen if you have filled up your socket send buffer, but it is highly operating-system dependent. Since UDP does not provide any guarantees, your operating system can decide to do whatever it wants when the buffer is full: block or drop. You can try increasing SO_SNDBUF for temporary relief.
It can even depend on the fine-tuning of your system; for instance, it can also depend on the size of the TX ring in the driver of your network interface. There are a few discussions about this on the iperf mailing list, but you really want to discuss it with the developers of your operating system. Pay special attention to O_NONBLOCK and EAGAIN/EWOULDBLOCK.
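A minimal sketch of both knobs on Linux (the buffer size, port, and destination are illustrative):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    /* Enlarge the send buffer: more datagrams can be queued before
       sendto() would block (or the kernel starts dropping). */
    int sndbuf = 1 << 20;   /* 1 MiB, illustrative */
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));

    /* Switch to non-blocking mode: a full buffer now yields -1 with
       errno set to EAGAIN/EWOULDBLOCK instead of blocking. */
    fcntl(s, F_SETFL, fcntl(s, F_GETFL, 0) | O_NONBLOCK);

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(1234);                    /* illustrative port */
    dst.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    if (sendto(s, "ping", 4, 0, (struct sockaddr *)&dst, sizeof(dst)) < 0 &&
        (errno == EAGAIN || errno == EWOULDBLOCK))
        fprintf(stderr, "send buffer full, try again later\n");

    close(s);
    return 0;
}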
This may be because your operating system is attempting to perform an ARP request in order to get the hardware address of the remote host.
Basically, whenever a packet goes out, the frame needs the IP address of the remote host and the MAC address of the remote host (or of the first gateway used to reach it), e.g. 192.168.1.34 and AB:32:24:64:F3:21.
Your "blocking" behaviour could simply be ARP at work.
I've heard that in older versions of Windows (Windows 2000, I think) the first packet would sometimes get discarded if the ARP request took too long and you were sending out too much data. A service pack has probably fixed that since then.

Building a Network Appliance Prototype Using a standard PC with Linux and Two NIC's

I want to build a prototype of a network appliance.
This appliance is supposed to transparently manipulate Ethernet packets. It is supposed to have two network interface cards, with one card connected to the outside leg (i.e. eth0) and the other to the inside leg (i.e. eth1).
In a typical network layout, as in the attached image, it would be placed between the router and the LAN's switch.
My plan is to write software that hooks in at the kernel driver level and does whatever I need to incoming and outgoing packets.
For instance, an "outgoing" packet (arriving at eth1) would be manipulated and passed over to the other NIC (eth0), which would then transport it on to the next hop.
My questions are:
Is this doable?
Those NICs will have no IP address; is that going to be a problem?
Thanks in advance for your answers.
(And no, there is no such device on the market yet, so please, "why reinvent the wheel"-style answers are irrelevant.)
Typical network diagram: http://img163.imageshack.us/img163/1249/stackpost.png
I'd suggest libipq, which seems to do just what you want:
Netfilter provides a mechanism for passing packets out of the stack for queueing to userspace, then receiving these packets back into the kernel with a verdict specifying what to do with the packets (such as ACCEPT or DROP). These packets may also be modified in userspace prior to reinjection back into the kernel.
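A minimal sketch along the lines of the libipq(3) man-page example: pull queued packets into userspace and accept each one. Error handling is truncated for brevity; note that packets only reach this program if an iptables rule with -j QUEUE is in place, and that libipq and the ip_queue module have since been superseded by NFQUEUE/libnetfilter_queue on modern kernels.

#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <linux/netfilter.h>
#include <libipq.h>

#define BUFSIZE 2048

int main(void)
{
    unsigned char buf[BUFSIZE];
    struct ipq_handle *h;

    h = ipq_create_handle(0, PF_INET);
    if (h == NULL) { ipq_perror("ipq_create_handle"); exit(1); }

    /* Ask the kernel to copy entire packets to userspace. */
    if (ipq_set_mode(h, IPQ_COPY_PACKET, BUFSIZE) < 0) {
        ipq_perror("ipq_set_mode"); exit(1);
    }

    for (;;) {
        if (ipq_read(h, buf, BUFSIZE, 0) < 0)
            break;
        if (ipq_message_type(buf) == IPQM_PACKET) {
            ipq_packet_msg_t *m = ipq_get_packet(buf);
            /* Inspect or modify m->payload here, then give a verdict. */
            ipq_set_verdict(h, m->packet_id, NF_ACCEPT, 0, NULL);
        }
    }

    ipq_destroy_handle(h);
    return 0;
}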
Apparently, it can be done.
I am actually trying to build a prototype of it using Scapy.
As long as the NICs are set to promiscuous mode, they catch packets on the network without needing an IP address set on them. I know it can be done, as there are a lot of companies that produce this type of equipment (e.g. Juniper Networks, Cisco, F5, Fortinet, etc.).
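A minimal sketch of enabling promiscuous mode from C on Linux (the interface name eth0 is illustrative; tools like Scapy or libpcap do the same under the hood):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <unistd.h>

int main(void)
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* your inside or outside leg */

    if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0) { perror("SIOCGIFFLAGS"); return 1; }
    ifr.ifr_flags |= IFF_PROMISC;                 /* accept all frames */
    if (ioctl(fd, SIOCSIFFLAGS, &ifr) < 0) { perror("SIOCSIFFLAGS"); return 1; }

    close(fd);
    return 0;
}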
