Anyone know why 802.11 acknowledgement frames have no source MAC address? I don't see one when I capture packets with tcpdump or Wireshark on Linux using a monitor-mode, promiscuous-mode driver. How does the access point distinguish ACK frames from different 802.11 clients if there is no source MAC address in the frame?
I can see from all the captures that the ACK comes immediately after the frame is sent (around 10 to 30 microseconds later), but that alone can't be enough to distinguish the source, can it? Maybe each frame has some kind of unique identifier and the ACK frame carries this ID? Or maybe there is identifying information in the encrypted payload, since the WLAN uses WPA-PSK mode?
No, there is nothing encrypted in 802.11 MAC ACK frames.
802.11 is a contention-based protocol, i.e. the medium is shared in time by all the STAs and APs operating on the same channel (frequency). Whoever wants to transmit competes for the medium, and the winner starts transmitting.
As per the 802.11 spec, once a frame is on the air, the medium must remain free for the following SIFS period, i.e. no one is allowed to transmit.
At the end of the SIFS, the receiver of a unicast frame must transmit the ACK. This is the rule.
SIFS (Short Interframe Space) is 10 microseconds for the 2.4 GHz PHYs (802.11b/g) and 16 microseconds for OFDM-based 802.11a in 5 GHz. Add the ACK's own transmission time on top of that, and that's why you are seeing 10 to 30 microseconds between the TX and the ACK.
So everyone knows who is transmitting the ACK and whom the ACK is for. There is no need to include a source address; it's implicit.
Why is the source address not included?
To reduce the frame size, and thereby to save power.
Hope it helps. If you have more questions on this, feel free to ask.
ACK frames are 802.11 control frames, and their MAC header carries a single address field, the RA (Receiver Address), not to be confused with just a "MAC address" (see below), and that's all that's needed in this context.
TL;DR: In the 802.11 context, "MAC address", SA (Source Address), TA (Transmitter Address), RA (Receiver Address), DA (Destination Address), BSSID or whatnot all look like the 6-byte "MAC address" we're familiar with from other technologies, yet they should not be confused.
And now for the demolition of the "MAC address" concept in the 802.11 context.
802.11 acknowledgement frames are one kind of 802.11 control frame, and 802.11 is "a set of media access control (MAC) and physical layer (PHY) specifications" (source).
What this means, and this is a very important concept to grasp when working with Wi-Fi, is that 802.11 in itself, including its management and control frames, has nothing to do with the "traditional" (say 802.3, aka Ethernet) PHY (layer 1) or MAC (layer 2) layers: it is a kind of its own.
802.3/Ethernet, to continue with this analogy (or rather counter-example), has no such thing as ACK frames, beacons, probe requests, RTS/CTS, auth/deauth, association, etc., which are all types of 802.11 management or control frames. These are simply not needed with 802.3, for the most part because wired Ethernet is not a shared medium (that's IEEE terminology) prone to the unreliability and collisions that the air presents to 802.11/Wi-Fi.
The important consequence is that you should not expect, a priori, to meet the more familiar concepts or data of other layer 1/2 technologies. Forget that, once and for all.
Sure, Wi-Fi looks like it carries MAC and IP and TCP and/or UDP or whatnot, and it does most of the time, but management and control frames such as ACKs are a different world: their own world. As it is, 802.11 could perfectly well be used, and maybe is used in some niche cases, to carry higher-level protocols other than TCP/IP. And its MAC concept, though it looks familiar with its 6 bytes, should not be confused in form or use with the MAC of 802.3/Ethernet. To take another example, 802.15.1, aka Bluetooth, also has a 6-byte MAC, yet that is again another thing.
And to take the a contrario 802.11 example, 802.11 layer 1/2 beacon frames carry information such as the SSID, supported rates, the frequency-hopping (FH) parameter set, etc., which has no counterpart in other L1/2 technologies.
Now, embrace the complexity of what is/are "MAC addresses" in 802.11...
And this is why, to take an example from day-to-day use, pcap/tcpdump has such weird filters as wlan ra, wlan ta, wlan addr1, wlan addr2, wlan addr3 and wlan addr4, and likewise for Wireshark capture and display filters.
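For illustration, here is the entire ACK frame written as a C-style struct (a sketch from my reading of the spec, not normative): 14 bytes on the air, with exactly one address field.

    #include <cstdint>

    // Layout of an 802.11 ACK (control) frame: 14 bytes total.
    // Note the single address field: an RA only, no SA/DA/TA.
    #pragma pack(push, 1)
    struct Dot11Ack {
        uint16_t frame_control; // type = control, subtype = ACK
        uint16_t duration;      // NAV update, microseconds (0 except mid-burst)
        uint8_t  ra[6];         // Receiver Address, copied from the TA of the
                                // frame being acknowledged
        uint32_t fcs;           // CRC-32 over the preceding fields
    };
    #pragma pack(pop)

    static_assert(sizeof(Dot11Ack) == 14, "an ACK frame is 14 bytes");

The wlan ra filter above matches exactly this ra field.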
Right, George!
That's how the NAV (Network Allocation Vector) timer works in the MAC format: while a data frame is being transmitted, its Duration field updates the NAV of every listening station to cover
DATA frame + SIFS + ACK + SIFS
So the medium is clear for this period: only one AP <--> station pair is talking, and everyone else waits for the NAV period to expire. There is therefore no need to add a source address, which would just waste frame bytes.
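To put rough numbers on that window (a back-of-the-envelope sketch using nominal 802.11a values; the exact figures depend on the PHY and the rates in use):

    // Approximate medium reservation around one unicast data frame (802.11a).
    const double sifs_us = 16.0; // 802.11a SIFS
    const double ack_us  = 28.0; // 14-byte ACK at 24 Mbit/s, incl. ~20 us PHY preamble
    // The data frame's Duration field reserves the time still needed
    // after the frame itself ends: one SIFS plus the ACK.
    double nav_us = sifs_us + ack_us; // ~44 us during which everyone else defers

For those ~44 microseconds the only frame that can legally appear on the air is the ACK from the addressed receiver, which is why the ACK needs no source address.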
I came across the same question and found this Stack Overflow question on the internet.
If you think about it: if station A is waiting for an ACK from station B, that means station A has pretty much secured/locked the medium for that long (see Jaydeep's answer), i.e. enough time for 2 SIFS + 1 ACK, assuming no follow-up transmission between these two stations.
Hence no other station is sending any frame (i.e. any ACK here), so there is no need to distinguish ACKs.
In that time window it is just station A waiting for the ACK from station B.
Actual throughput is lower than physical layer transmission rate
Why does the throughput decrease? What is the relevance of the 802.11a MAC to this speed reduction?
Are you comparing the transmission rate of 802.11 with the application throughput (for example, an iperf result)?
If yes,
First of all, the transmission rate is counted at the MAC layer and includes the 802.11 headers, while application throughput counts only the payload, so the length counted for a single data packet is already different.
More importantly, 802.11 is a CSMA/CA protocol, and transmitting packets costs airtime. Extra management/control frames (block ACK, RTS/CTS, ...) and waiting times (SIFS, contention window, ...) are needed for this mechanism. They use up airtime, so the actual throughput will always be noticeably lower than the transmission rate.
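A back-of-the-envelope sketch makes the gap concrete. The constants below are the nominal 802.11a values; retries, rate adaptation and TCP/IP overhead are ignored, so the real figure is lower still:

    #include <cmath>
    #include <cstdio>

    int main() {
        // Nominal 802.11a timing, in microseconds.
        const double slot_us = 9.0, sifs_us = 16.0;
        const double difs_us = sifs_us + 2 * slot_us;  // 34 us
        const double preamble_us = 20.0;               // PHY preamble + header
        const double avg_backoff_us = 7.5 * slot_us;   // CWmin = 15, avg 7.5 slots

        // ~1536-byte MAC frame carrying a 1500-byte payload at 54 Mbit/s:
        // OFDM carries 216 data bits per 4 us symbol at that rate.
        const double data_us = std::ceil((16 + 8 * 1536.0 + 6) / 216.0) * 4.0;
        const double ack_us = preamble_us + 8.0;       // 14-byte ACK at 24 Mbit/s

        const double per_frame_us = difs_us + avg_backoff_us + preamble_us
                                  + data_us + sifs_us + ack_us; // ~394 us
        std::printf("%.1f Mbit/s effective vs 54 Mbit/s PHY rate\n",
                    1500 * 8.0 / per_frame_us);        // bits per us = Mbit/s
    }

This prints roughly 30 Mbit/s, which is about what iperf reports on a clean 802.11a link advertised at 54 Mbit/s.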
I'm using Wireshark to monitor network traffic to test new software installed on a router. The router lets other networks (4G, mobile devices through USB, etc.) connect to it to enhance the speed of that router.
What I'm trying to do is disconnect the connected devices and discover whether there is any packet loss while doing this. I know I can simply use the filter "tcp.analysis.lost_segment" to track down lost packets, but how can I isolate the specific device that causes the packet loss? Or even know whether a loss was caused by a disconnected device?
Also, what is the most stable method to test this with: downloading a big file, streaming a video, etc.?
All input is greatly appreciated.
You can't detect lost packets solely with Wireshark or any other packet capture*.
Wireshark basically "records" what was seen on the line.
If a packet is lost, then by definition you will not see it on the line.
The * means I lied. Sort of. You can't detect them as such, but you can get an extremely strong indication of them by taking simultaneous captures at/near both devices in the data exchange... then comparing the two captures.
COMPUTER1<-->CAPTURE-MACHINE<-->NETWORK<-->CAPTURE-MACHINE<-->COMPUTER2
If you see the data leaving COMPUTER1 but never see it in the capture at COMPUTER2, there's your loss. (You could then move the capture machines one device closer on the network until you find the exact box/line losing your packets... or just analyze the devices in the network, e.g. their configs, errors, etc.)
Alternatively, if you know exactly when the packet was sent, you cannot prove but can INDICATE its absence with a capture covering a minute or two before and after the packet was sent that does NOT contain that packet. Such an indicator may even stand on its own as sufficient to find the problem.
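If you want to automate the comparison of the two captures, something along these lines works. This is a sketch using libpcap that naively matches packets by their raw bytes, so it assumes both capture points see identical frames (same link type, no NAT/VLAN rewriting in between); in practice you would key on something more stable, such as the IP ID or TCP sequence numbers:

    #include <pcap/pcap.h>
    #include <cstdio>
    #include <map>
    #include <string>

    // Load every packet of a capture file as raw bytes -> occurrence count.
    static std::map<std::string, int> load(const char* path) {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t* p = pcap_open_offline(path, errbuf);
        if (!p) { std::fprintf(stderr, "%s\n", errbuf); return {}; }
        std::map<std::string, int> pkts;
        struct pcap_pkthdr* hdr;
        const u_char* data;
        while (pcap_next_ex(p, &hdr, &data) == 1)
            ++pkts[std::string(reinterpret_cast<const char*>(data), hdr->caplen)];
        pcap_close(p);
        return pkts;
    }

    int main(int argc, char** argv) {
        if (argc != 3) {
            std::fprintf(stderr, "usage: %s sender.pcap receiver.pcap\n", argv[0]);
            return 1;
        }
        std::map<std::string, int> sent = load(argv[1]);
        std::map<std::string, int> seen = load(argv[2]);
        int lost = 0;
        for (const auto& kv : sent) {
            auto it = seen.find(kv.first);
            int arrived = (it == seen.end()) ? 0 : it->second;
            if (kv.second > arrived) lost += kv.second - arrived;
        }
        std::printf("%d packet(s) in %s but not in %s\n", lost, argv[1], argv[2]);
        return 0;
    }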
I am attempting to use my Arduino Uno's RX and TX pins to receive an ASCII character string from an RS485 device transmitting at 2400 baud with 100 ms between transmissions, and then to parse and output certain pieces of the string to a 16x2 LCD attached to the Arduino. I am getting some data strings in on the RX pin (a 0-5 VDC square wave, as I checked with my scope). Any sample code to receive RS485 ASCII strings into a buffer would be helpful.
RS485, RS422, and RS232 are different schemes for the hardware link layer. By that I mean those specifications only describe what is on the wire. A transceiver chipset converts the wire signals back to the logic-level signals that are connected to the Arduino, or any other device. At the logic level the Arduino sees, any of the RS___ signals will look the same.
A USART converts the bit stream to bytes (this can be done in software or hardware). The USART does not know about the signal levels on the wire; it operates solely on the logic-level bit stream. The Uno contains one USART, available on the TX/RX pins.
So your code on the microcontroller does not need to differ between RS232 and RS485. All the Serial code samples you see will work fine. You tell the Serial library the baud rate, stop bits, and parity, and you are done. Set the serial connection to 2400, and the Arduino will start seeing characters.
Caveat
RS485 is sometimes used in half-duplex mode. This means you cannot receive and transmit at the same time. If you are wired for half duplex, then your code must make certain you are not transmitting while some other device is still transmitting.
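To give the question something concrete, a minimal sketch along these lines should collect one transmission into a buffer and show it on the LCD. The LCD pin numbers and the 20 ms end-of-frame gap are assumptions; adjust them to your wiring and to your device's actual timing:

    #include <LiquidCrystal.h>

    // Hypothetical wiring: LCD on pins 12, 11, 5, 4, 3, 2.
    LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

    const size_t BUF_SIZE = 64;
    char buf[BUF_SIZE];
    size_t len = 0;
    unsigned long lastByteMs = 0;

    void setup() {
      Serial.begin(2400);   // match the RS485 device: 2400 baud, 8N1
      lcd.begin(16, 2);
    }

    void loop() {
      while (Serial.available() > 0) {
        char c = Serial.read();
        lastByteMs = millis();
        if (len < BUF_SIZE - 1) buf[len++] = c;
      }
      // The device pauses ~100 ms between transmissions, so ~20 ms of
      // silence marks the end of one complete string.
      if (len > 0 && millis() - lastByteMs > 20) {
        buf[len] = '\0';
        lcd.clear();
        lcd.setCursor(0, 0);
        lcd.print(buf);     // shows the first 16 chars; parse fields as needed
        len = 0;
      }
    }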
When does Wireshark timestamp packets? After fully receiving a frame, or already on receiving the first bytes of a frame? I read the following description of Wireshark timestamps, but the text only states: "While packets are captured, each packet is time stamped as it comes in".
Consider the following scenario and an accurate OS time:
sender ----> Wireshark ----> receiver
The sender starts the transmission of the frame at time x. The frame is fully received at the receiver at time y, where y = x + frame length / link speed (for example, a 1500-byte frame on a 100 Mbit/s link gives y = x + 120 microseconds). Will the captured frame appear in Wireshark with a timestamp close to x, or will it be y?
Best regards,
Jonas
Well, Wireshark doesn't time stamp the packets itself; it relies on libpcap to do that, and on almost all operating systems libpcap doesn't time stamp them itself either: the OS's packet capture mechanism, as used by libpcap, does. The main exception is Windows, where WinPcap has to provide its own capture mechanism in the kernel, atop NDIS, but that mechanism behaves very similarly to the mechanisms inside most UN*Xes and will give similar behavior. (The other exception is HP-UX, where the OS's capture mechanism doesn't time stamp packets at all, so libpcap does so; that gives answers somewhat similar to other OSes, but with a potentially even longer delay before the packets are time stamped.)
If Wireshark (or any other packet sniffer!) is run on the sender, the packets are "wrapped around" within the OS and handed to the capture mechanism; the time stamp could be applied before the sender even starts transmitting the packet, but in any case it will be closer to x than to y.
If Wireshark (or any other packet sniffer) is run on the receiver, the time stamp is applied at some time after the entire packet has been received; that could involve delays due to the packet being queued up, interrupts being "batched", some amount of network-stack processing happening before the packet is time stamped, etc. The time stamp will be closer to y than to x.
If Wireshark (or any other packet sniffer) is being run on some third machine, passively sniffing the network, the time stamp will probably be closer to y than to x, but there will be some difference, because the receiver and the sniffer are separate machines that might see the packet at different times, have different code paths for receiving it, and so on.
On one end, you have TCP, which guarantees that packets arrive and that they arrive in order; it is also designed for the commodity Internet, with congestion control algorithms that "play nice" in traffic. On the other end of the spectrum, you have UDP, which guarantees neither arrival nor ordering of packets but lets you fire datagrams at a receiver with minimal overhead. Somewhere in the middle, you have reliable UDP-based protocols, such as UDT, that offer customized congestion control algorithms and reliability, but with greater speed and flexibility.
However, what I'm looking for is the capability to send large chunks of data over UDP (greater than UDP's 64 KB datagram limit), but without concern for the reliability of each individual datagram. The idea is that the large data is broken into datagrams of a specified size (<= 64,000 bytes), probably with some header data stuck on the front, and sent over the network. On the receiving side, these datagrams are read in and stored. If a datagram doesn't arrive, all of the datagrams associated with that transfer are simply thrown out by the client.
Most of the "reliable UDP" implementations try to maintain reliability of each datagram, but I'm only interested in the whole: if I don't get the whole, it doesn't matter; throw it all away and wait for the next. I'd have to dig deeper, but it might be possible with custom congestion control algorithms in UDT. However, are there any protocols that take this approach?
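To make it concrete, here is the kind of framing and receive-side bookkeeping I have in mind (just a sketch; the header layout is made up, and input validation and the drop-on-timeout logic are omitted):

    #include <cstdint>
    #include <cstring>
    #include <map>
    #include <vector>

    // Every fragment of one large message carries the same message id,
    // its own index, and the total fragment count.
    #pragma pack(push, 1)
    struct FragHeader {
        uint32_t msg_id; // identifies the large transfer
        uint16_t index;  // which fragment this is
        uint16_t count;  // total fragments in the message
    };
    #pragma pack(pop)

    struct Reassembly {
        std::vector<std::vector<uint8_t>> parts;
        size_t received = 0;
    };

    std::map<uint32_t, Reassembly> pending;

    // Feed each received datagram in; returns the complete message once
    // all fragments are present, or an empty vector while incomplete.
    std::vector<uint8_t> onDatagram(const uint8_t* data, size_t len) {
        FragHeader h;
        std::memcpy(&h, data, sizeof h);
        Reassembly& r = pending[h.msg_id];
        if (r.parts.empty()) r.parts.resize(h.count);
        if (r.parts[h.index].empty()) {
            r.parts[h.index].assign(data + sizeof h, data + len);
            ++r.received;
        }
        if (r.received < h.count) return {};
        std::vector<uint8_t> whole;
        for (auto& p : r.parts) whole.insert(whole.end(), p.begin(), p.end());
        pending.erase(h.msg_id);
        return whole;
    }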
You could try ENet. While it's not specifically aimed at what you're trying to do, it does have the concept of "fragmented data blocks": when you send data larger than its MTU, it goes out as a sequence of MTU-sized datagrams with header details that relate each part of the sequence to the rest. The version I'm using only supports "reliable" fragments (that is, the ENet reliability layer will kick in to resend missing fragments), but I seem to remember seeing discussion on the mailing list about unreliable fragments, which would likely do exactly what you want; i.e. deliver the whole payload if it all arrives and throw away the bits if it doesn't.
See http://enet.bespin.org/
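If your ENet build is recent enough to expose the unreliable-fragment flag (newer 1.3.x versions define ENET_PACKET_FLAG_UNRELIABLE_FRAGMENT; check your enet.h, as this is an assumption about your version), sending a large blob is just:

    #include <enet/enet.h>

    // Sketch: assumes an already-connected ENetPeer* peer. ENet splits the
    // packet into MTU-sized fragments itself; with this flag, missing
    // fragments are not resent and the whole packet is dropped instead.
    void sendLargeBlob(ENetPeer* peer, const void* data, size_t len) {
        ENetPacket* packet =
            enet_packet_create(data, len, ENET_PACKET_FLAG_UNRELIABLE_FRAGMENT);
        enet_peer_send(peer, 0, packet); // channel 0
    }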
Alternatively take a look at the answers to this question: What do you use when you need reliable UDP?