When does a UDP sendto() block? - network-programming

While using the default (blocking) behavior on a UDP socket, in which cases will a call to sendto() block? I'm essentially interested in the Linux behavior.
For TCP I understand that congestion control makes the send() call block if the send window is full, but what about UDP? Does it ever block, or does it just let packets get discarded at lower layers?

This can happen if you have filled up your socket buffer, but it is highly operating system dependent. Since UDP does not provide any guarantees, your operating system can do whatever it wants when your socket buffer is full: block or drop. You can try to increase SO_SNDBUF for temporary relief.
This can even depend on the fine tuning of your system; for instance, it can also depend on the size of the TX ring in your network interface driver. There are a few discussions about this on the iperf mailing list, but you really want to discuss it with the developers of your operating system. Pay special attention to O_NONBLOCK and EAGAIN / EWOULDBLOCK.
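A hedged sketch of what that advice looks like on Linux with plain BSD sockets: enlarge SO_SNDBUF, switch the socket to non-blocking mode with O_NONBLOCK, and treat EAGAIN / EWOULDBLOCK as "buffer full, back off and retry". The destination 192.0.2.1:9999, the buffer size and the payload size are made-up placeholders.

    #include <arpa/inet.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        int sndbuf = 1 << 20;                    /* ask for a 1 MiB send buffer */
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof sndbuf);

        /* Non-blocking: a full buffer now shows up as EAGAIN/EWOULDBLOCK. */
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9999);
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);

        char payload[1200] = "test";
        for (int i = 0; i < 100000; i++) {
            for (;;) {
                if (sendto(fd, payload, sizeof payload, 0,
                           (struct sockaddr *)&dst, sizeof dst) >= 0)
                    break;                       /* datagram handed to the kernel */
                if (errno == EAGAIN || errno == EWOULDBLOCK) {
                    usleep(1000);                /* queue full: back off, retry */
                    continue;
                }
                perror("sendto");
                return 1;
            }
        }
        close(fd);
        return 0;
    }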

This may be because your operating system is attempting to perform an ARP request in order to get the hardware address of the remote host.
Basically, whenever a packet goes out, the headers require the IP address of the remote host and the MAC address of the remote host (or of the first gateway to reach it), e.g. 192.168.1.34 and AB:32:24:64:F3:21.
Your "blocking" behavior could simply be ARP resolution at work.
I've heard that in older versions of Windows (2000, I think) the first packet would sometimes get discarded if the ARP request took too long and you were sending out too much data. A service pack has probably fixed that since then.

Related

g_socket_bind behavior on UDP multicast

I have multiple readers on a single system which bind to a single address (IP:port, e.g. 239.0.0.1:1234). Another computer in the group sends a UDP multicast packet to this group and the readers should receive it. I used the GLib 2.0 networking stack: g_socket_bind with allow_reuse set to true.
When there is a single reader (a single socket bound to that address), or up to three readers, everything is fine and the readers receive packets correctly. But when the number of readers increases to four or more, packet loss occurs and increases linearly with the number of readers on the system.
If socket is a UDP socket, then allow_reuse determines whether or not other UDP sockets can be bound to the same address at the same time. In particular, you can have several UDP sockets bound to the same address, and they will all receive all of the multicast and broadcast packets sent to that address.
As stated in the GIO Reference Manual, when allow_reuse is set to true all readers should receive all of the data, but as described above that is not what happens.
Anybody knows what the problem is? Is there a kernel related problem?
All your sockets need to join the multicast group. If you're just relying on the bind to effect that, you are into undefined behaviour.
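For reference, here is a minimal sketch in plain C of what "joining the group" means at the socket level (GLib's g_socket_join_multicast_group() wraps the same mechanism). The group and port are the ones from the question; everything else is boilerplate.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        int yes = 1;                             /* the equivalent of allow_reuse */
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(1234);
        addr.sin_addr.s_addr = inet_addr("239.0.0.1");
        if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind");
            return 1;
        }

        /* The step that is easy to miss: explicitly join the multicast group. */
        struct ip_mreq mreq = {0};
        mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1");
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq) < 0) {
            perror("IP_ADD_MEMBERSHIP");
            return 1;
        }

        char buf[1500];
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        printf("received %zd bytes\n", n);
        close(fd);
        return 0;
    }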

Simulating packet encapsulation

I am developing an application which aims to simulate a real network. In order to do this, I need to have detailed information about how a packet is formed in a system.
Imagine you have an application layer message and you want to encapsulate it in a transport layer payload, adding a specific port number for the desired process in the header, and then encapsulate it in a network layer payload and add IP addresses.
My questions are:
Where does the encapsulation of upper-layer protocols' packets into lower layers happen?
Is the network card driver responsible for that, or some other part of the OS? If so, which part?
I just want to note that I've read Computer Networks: A Top-Down Approach and Forouzan's book on the subject, but all the information there was quite theoretical.
Thanks in advance.
If you are asking about a real implementation: usually every message of a layer is conveyed as the whole payload of the lower-layer message. Talking about the TCP/IP stack in an OS like Windows or Linux, without SSL/TLS, this depends on the type of sockets you use. Supposing you use TCP (STREAM sockets), the application-layer message you send with the send or write system calls becomes the payload of the TCP segment. The processing of a TCP segment and an IP datagram happens in the OS kernel. The processing of a layer-2 frame happens partly in the NIC's device driver (in the kernel) and partly in the NIC hardware; this depends on the specific NIC.
Something else to add is that some NICs are able to calculate the checksum of TCP segments and UDP datagrams. In that case the kernel offloads this task (and only the checksum) to the NIC.
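To make that division of labour concrete, here is a minimal sketch (the host 192.0.2.10 and port 8080 are placeholders): user code only hands a byte buffer to the kernel; the TCP segment, IP datagram and Ethernet frame are built below the socket API, in the kernel, the driver and possibly the NIC.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(8080);                       /* placeholder port */
        inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);  /* placeholder host */

        if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
            perror("connect");
            return 1;
        }

        const char *msg = "application-layer message";
        /* This buffer becomes the payload of one or more TCP segments; the
         * encapsulation into IP datagrams and Ethernet frames happens below. */
        write(fd, msg, strlen(msg));
        close(fd);
        return 0;
    }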

WiFi Beacon Packets

I'm trying to write a simple C program with WinPcap to broadcast a beacon packet and capture it on all nearby WiFi units. The code I'm using is very similar to the examples available at WinPcap[1].
The code runs fine if I create an ad-hoc network connection and join all the computers to it. However, this process of creating and joining an ad-hoc network is cumbersome. It would be much better if, regardless of what network each computer is in, the beacon packets would be broadcast and captured once the code is running.
As simple as this problem might sound, after some searching it seems that this is not possible on Windows (without rewriting drivers or maybe the kernel):
Raw WiFi Packets with WinPcap[2]
Sending packets without network connection[3]
Does winpcap/libpcap allow me to send raw wireless packets?[4]
Basically, it would be necessary to use the WiFi adapter in monitor mode, which is not supported on Windows[5]. Therefore, if the computers are not in the same network connection, the packets will be discarded.
1st Issue
I'm still intrigued: beacon and probe request packets are normal traffic on the network. How can they be sent and received constantly, yet the user is not allowed to write a program to do the same? How can that be reconciled?
2nd Issue
Does anyone have experience with the Managed Wifi API[6]? I've heard that it might help.
3rd Issue
Acrylic WiFi[7] claims to have developed an NDIS driver which supports monitor mode under Windows. Does anyone have experience with this software? Is it possible to integrate it with C code?
4th Issue
Is it possible to code such a WiFi beacon on Linux? And on Android?
[1] www.winpcap.org/docs/docs_412/html/main.html
[2] stackoverflow.com/questions/34454592/raw-wifi-packets-with-winpcap/34461313?noredirect=1#comment56674673_34461313
[3] stackoverflow.com/questions/25631060/sending-packets-without-network-connection-wireless-adapter
[4] stackoverflow.com/questions/7946497/does-winpcap-libpcap-allow-me-to-send-raw-wireless-packets
[5] en.wikipedia.org/wiki/Monitor_mode#Operating_system_support
[6] managedwifi.codeplex.com/
[7] www.acrylicwifi.com/
A couple of questions I will try to answer. Management and control packets are used for running a WiFi network and don't contain data, so I would not call these normal packets. Windows used to (and I think still does) convert data packets into Ethernet frames and pass them up the stack. Beacon and probe request packets are not necessary for the TCP/IP stack to work, i.e. web browsers don't need beacon frames to get your web page. Most OSes need minimal information from management/control packets to help a user interact with a WiFi adapter; most management/control packets are only useful to the driver (and low-level OS components) to figure out how to interact with the network. This way WiFi adapters look and act like Ethernet adapters to higher-level OS components.
Never had any experience with Managed Wifi API or Acrylic, so can't give you any feedback.
Most analyzers that capture and send packets do it in 2-3 separate modes, mainly because of hardware. WiFi adapters can be in listen mode (promiscuous mode and/or monitor mode) or adapter mode. To capture network traffic you need to listen and not send, i.e. if someone sends a packet while you are sending, you miss that traffic. In order to capture (or send) raw traffic you will need a custom NDIS driver on Windows; on Linux many drivers already support this. Check out Wireshark or tshark: they use WinPcap to capture packets on Windows, and there are some adapters they recommend for capturing packets.
Yes, it is possible to send a beacon on Linux, e.g. with Aireplay. I know it's possible to capture traffic on Android, but the device needs to be rooted or have custom firmware, which I believe also means you can send custom packets. If you are simply trying to send a packet, it might be easier to capture some traffic in tshark or Wireshark and use something like Aireplay to resend that traffic. You could also edit the packet with a hex editor to tune it to what you need.
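For the Linux part of the question, here is a rough, hedged sketch of injecting a minimal beacon frame with libpcap on an interface that has already been put into monitor mode (e.g. with something like iw dev wlan0 interface add mon0 type monitor). The interface name, MAC address and SSID are made-up placeholders, and real hardware/drivers may still refuse or rewrite injected management frames.

    #include <pcap/pcap.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_open_live("mon0", 256, 0, 100, errbuf);  /* monitor-mode iface */
        if (!p) { fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

        uint8_t frame[128];
        size_t n = 0;

        /* Minimal radiotap header: version 0, pad, length 8, no present fields. */
        const uint8_t radiotap[8] = {0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00};
        memcpy(frame + n, radiotap, sizeof radiotap); n += sizeof radiotap;

        /* 802.11 management header: beacon (type 0, subtype 8), broadcast dest. */
        const uint8_t mac[6] = {0x02, 0x11, 0x22, 0x33, 0x44, 0x55};   /* placeholder */
        const uint8_t hdr[10] = {0x80, 0x00,             /* frame control: beacon */
                                 0x00, 0x00,             /* duration              */
                                 0xff, 0xff, 0xff, 0xff, 0xff, 0xff};  /* addr1    */
        memcpy(frame + n, hdr, sizeof hdr); n += sizeof hdr;
        memcpy(frame + n, mac, 6); n += 6;               /* addr2: source         */
        memcpy(frame + n, mac, 6); n += 6;               /* addr3: BSSID          */
        frame[n++] = 0x00; frame[n++] = 0x00;            /* sequence control      */

        /* Fixed parameters: timestamp, beacon interval (100 TU), capabilities.  */
        memset(frame + n, 0, 8); n += 8;                 /* timestamp             */
        frame[n++] = 0x64; frame[n++] = 0x00;            /* beacon interval       */
        frame[n++] = 0x01; frame[n++] = 0x00;            /* capability: ESS       */

        /* Tagged parameter: SSID "test-beacon" (placeholder). */
        const char *ssid = "test-beacon";
        frame[n++] = 0x00;                               /* tag number: SSID      */
        frame[n++] = (uint8_t)strlen(ssid);
        memcpy(frame + n, ssid, strlen(ssid)); n += strlen(ssid);

        if (pcap_sendpacket(p, frame, (int)n) != 0)
            fprintf(stderr, "pcap_sendpacket: %s\n", pcap_geterr(p));

        pcap_close(p);
        return 0;
    }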

Sending data to multiple sockets at exact same time

I want to design a Ruby / Rails solution to send a message out to several listening sockets on a local LAN at the exact same time. I want the receiving servers to receive the message at the exact same time, or at least within the same millisecond.
What is the best strategy I can use that will effectively allow the receiving sockets to receive it at the exact same time? Naturally my requirements are extremely time sensitive.
I'm basing some of my research / design on the two following articles:
http://onestepback.org/index.cgi/Tech/Ruby/MulticastingInRuby.red
http://www.tutorialspoint.com/ruby/ruby_socket_programming.htm
Currently I'm working on a TCP solution rather than UDP because of its guaranteed delivery. I was also going to stand up already-connected sockets to all outbound ports, then iterate over each connection and send a minimal packet of data.
Recently I've been looking at multicasting, and possibly reverting to a UDP approach with a passive response returned, via either UDP or TCP, to confirm the message was received.
Note - The new guy syndrome here with sockets.
Is it better to just use UDP and send a broadcast packet to the entire subnet, without guaranteed immediate delivery?
Is this really a time-sensitive component? If it's truly down to the microsecond level then you may want to ensure it's implemented close to native functions on the hardware. That being said, a TCP ACK should be faster than a UDP send plus a software-level response.
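For comparison, here is a hedged sketch of the multicast idea from the question, in plain C (Ruby's socket classes expose the same options): a single sendto() to the group address hands one datagram to the network, and every receiver that has joined the group gets it, instead of iterating over per-receiver connections. The group 239.1.1.1:5000 is a placeholder.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        unsigned char ttl = 1;                 /* keep the traffic on the local LAN */
        setsockopt(fd, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof ttl);

        struct sockaddr_in grp = {0};
        grp.sin_family = AF_INET;
        grp.sin_port = htons(5000);
        inet_pton(AF_INET, "239.1.1.1", &grp.sin_addr);

        const char *msg = "tick";
        /* One datagram out; all joined receivers on the LAN get a copy. */
        if (sendto(fd, msg, strlen(msg), 0, (struct sockaddr *)&grp, sizeof grp) < 0)
            perror("sendto");

        close(fd);
        return 0;
    }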

Building a Network Appliance Prototype Using a standard PC with Linux and Two NIC's

I want to build a prototype of a network appliance.
This appliance is supposed to transparently manipulate Ethernet packets. It is supposed to have two network interface cards, with one card connected to the outside leg (i.e. eth0) and the other to the inside leg (i.e. eth1).
In a typical network layout as in the attached image, it will be placed between the router and the LAN's switch.
My plans are to write a software that hooks at the kernel driver level and do whatever I need to do to incoming and outgoing packets.
For instance, an "outgoing" packet (arriving at eth1) would be manipulated and passed over to the other NIC (eth0), from which it would then be transported to the next hop.
My questions are:
Is this doable?
Those NICs will have no IP address; should that be a problem?
Thanks in advance for your answers.
(And no, there is no such device yet in the market, so please, "why reinvent the wheel" style of answers are irrelevant)
typical network diagram http://img163.imageshack.us/img163/1249/stackpost.png
I'd suggest libipq, which seems to do just what you want:
Netfilter provides a mechanism for passing packets out of the stack for queueing to userspace, then receiving these packets back into the kernel with a verdict specifying what to do with the packets (such as ACCEPT or DROP). These packets may also be modified in userspace prior to reinjection back into the kernel.
Apparently, it can be done.
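A minimal sketch of the libipq read/verdict loop the quoted text describes, assuming an iptables rule such as iptables -A FORWARD -j QUEUE is directing packets to userspace; error handling is abbreviated.

    #include <libipq.h>
    #include <linux/netfilter.h>   /* NF_ACCEPT, NF_DROP */
    #include <stdio.h>

    #define BUFSIZE 2048

    int main(void)
    {
        struct ipq_handle *h = ipq_create_handle(0, PF_INET);
        if (!h) { ipq_perror("ipq_create_handle"); return 1; }

        if (ipq_set_mode(h, IPQ_COPY_PACKET, BUFSIZE) < 0) {
            ipq_perror("ipq_set_mode");
            return 1;
        }

        unsigned char buf[BUFSIZE];
        for (;;) {
            if (ipq_read(h, buf, sizeof buf, 0) < 0) {
                ipq_perror("ipq_read");
                break;
            }
            if (ipq_message_type(buf) != IPQM_PACKET)
                continue;

            ipq_packet_msg_t *m = ipq_get_packet(buf);
            /* m->payload points at the queued IP packet; inspect or rewrite it here. */
            printf("queued packet, %u bytes\n", (unsigned)m->data_len);

            /* Reinject unchanged; pass a modified buffer and length to alter it. */
            ipq_set_verdict(h, m->packet_id, NF_ACCEPT, 0, NULL);
        }

        ipq_destroy_handle(h);
        return 0;
    }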
I am actually trying to build a prototype of it using scapy.
As long as the NICs are set to promiscuous mode, they catch packets on the network without needing an IP address set on them. I know it can be done, as there are a lot of companies that produce the same type of equipment (e.g. Juniper Networks, Cisco, F5, Fortinet, etc.).
