What exactly determines what’s in the radiotap header when capturing on WLAN? - wireshark

I’m doing a study project on Wi-Fi signal quality. What I want to do is use Raspberry Pis to monitor as many metrics as possible at the packet level. I plan to do this by putting Wi-Fi adapters into monitor mode (using airmon-ng) and then capturing data about the packets with a wireless network protocol analyzer such as tshark.
What I understand about wireless captures is that they mainly have three parts: a frame section that carries the same information regardless of what you're capturing on, containing things such as frame number, frame length, and arrival time. (I'd upload images, but I don't have 10 reputation yet...)
Then there is the IEEE 802.11 data, which contains what the network itself needs to operate. When capturing on WLAN this includes the MAC addresses.
And then we have the radiotap header, which contains all kinds of information (signal strength in dB and dBm, noise level, signal quality, TX power, and much more). This one is a bit different, since this information is actually filled in, or injected, by the Wi-Fi adapter you use to capture the data. In the "present" flags you can see which values the adapter is actually supplying.
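For reference, a minimal C sketch of checking that "present" bitmap (bit numbers are taken from the radiotap specification at radiotap.org; the buffer is assumed to be the raw frame bytes handed over by a capture library such as libpcap):

/* Minimal sketch: inspect the radiotap "present" bitmap to see which
 * per-packet radio fields the driver actually filled in. Bit numbers
 * follow the radiotap specification (radiotap.org). If bit 31 is set,
 * further bitmap words follow; this sketch ignores them. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum {
    RT_DBM_ANTSIGNAL = 5,   /* antenna signal, dBm */
    RT_DBM_ANTNOISE  = 6,   /* antenna noise, dBm  */
    RT_LOCK_QUALITY  = 7,   /* signal quality      */
};

void show_present_fields(const uint8_t *frame, size_t caplen)
{
    if (caplen < 8 || frame[0] != 0)      /* radiotap version must be 0 */
        return;

    uint32_t present;
    memcpy(&present, frame + 4, 4);       /* little-endian on the wire; usable as-is on a Pi */

    printf("dBm antenna signal: %s\n", (present >> RT_DBM_ANTSIGNAL) & 1 ? "present" : "absent");
    printf("dBm antenna noise : %s\n", (present >> RT_DBM_ANTNOISE)  & 1 ? "present" : "absent");
    printf("signal quality    : %s\n", (present >> RT_LOCK_QUALITY)  & 1 ? "present" : "absent");
}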
Now my problem is that for my research I really need as many of these values as possible. I've been working for hours, but I haven't succeeded in capturing anything more than dBm signal strength (if even that is available). This is what I have tried so far:
The adapters I have used so far are the Edimax EW7811UN, the AirPcap Classic, the AirPcap Tx, and two similar Alfa adapters with the Atheros AR9271 chipset. The AR9271 adapters worked out of the box on Raspbian (Debian for the Raspberry Pi) with the ath9k_htc driver. Putting them into monitor mode and capturing works fine, but only the dBm signal strength shows up in the capture.

The Edimax worked out of the box on the 8192cu driver, but that driver clearly doesn't support monitor mode. I could put it into monitor mode by loading it with the zd1211rw driver, but then I didn't even get the dBm signal strength. The strange thing is that a friend tried the exact same Edimax adapter and he could capture; the only difference we could find is that his lsmod lists rtl8192cu rather than 8192cu. Forums say that 8192cu is the newer version, yet this friend was running the latest Arch Linux kernel (newer than Raspbian's). So I installed Arch Linux on the Pi, but I still wasn't able to put the Edimax into monitor mode on the 8192cu driver. Then I found a package in the AUR, dkms-8192cu, which was supposed to contain a patched version, but even after installing it things still didn't work. Downloading the driver from the Realtek website didn't help either. There is some material about patching on the aircrack-ng website, but it is about frame injection and doesn't really look like what I need.
Then I bought the AirPcap Classic and the AirPcap Tx to see what they could do. First of all, they have zero Linux support, which is already a big drawback since I need to use them from the Pis. Even in Windows, though, the AirPcaps only capture dB and dBm noise and signal quality; the dBm noise level does contain data, but it is worthless since it always sits at -100. The AirPcap Classic and Tx have the ZD1211B chipset, so I can load them with the zd1211rw driver, but that also gives no dBm signal value or anything else.
So my question is: what exactly determines what's in the radiotap header? I would guess it is all in the driver, but I need to be sure before I write off every ath9k_htc-based adapter. I'm about to purchase another adapter that runs on the carl9170 driver, but I can't find any guarantee anywhere that it will give me those values. What I did find in the literature is that the madwifi driver gives (or used to give) noise levels; however, it was taken over by Atheros, the project stopped, and all the websites suggest just using the ath9k or ath5k drivers instead. I tried to install it anyway but failed, because it seems to be really outdated software now that the project has stopped.
It would be a really big help if someone could explain what exactly determines what's inside the radiotap header, and also share any experience of capturing more than just the dBm signal strength value from Linux.

Related

Comparison between USB and Mini PCIe Interfaces

I'm deciding between the Mini PCIe and USB accelerators for a home Linux CCTV project. The host has both USB 3 and a Mini PCIe socket. The host's physical environment will range from an ambient 20 °C up to a potential 35 °C (during the summer).
I'm struggling to determine the pros and cons for each. I have gotten this far, although many are guesses:
USB:
Supports Windows and MacOS as well as Linux
Appears to have greater mindshare/use/community support on the Internet
External so can be placed to optimise heat dissipation
Has a heatsink
Two manual performance modes; the highest requires an ambient temperature of at most 25 °C
Can use up to 4.5 W (900 mA @ 5 V)
Mini PCIe:
Cheaper (25%)
Lower power consumption (1.4W for 416 fps)
Automatic thermal throttling via driver
Relies on host system for active cooling
Will maintain max operation at 85 °C
There are probably many I've missed. In particular, I can't determine whether there are any limitations on throughput/capacity with USB vs. PCIe. If there is no difference, then I suspect the USB form factor is the better option, if only for the mindshare, although the power usage and heat generated may be a concern.
To whittle this down to an actual question: in what cases would the Mini PCIe interface be preferable to the USB one?
If you are looking for a plug-and-play solution, then I definitely suggest the USB Accelerator. Overall, as long as you meet the system requirements, it'll always work (maybe with some modifications to the standard Linux config, like adding your user to the plugdev group, etc.). Then the software for the CCTV is all up to you :)
The PCIe version sometimes needs extra work, like adding extra kernel arguments and modules to keep the PCIe module happy. If you are looking to launch a big product where volume is expected, then it is worth investigating, since it's cheaper and more compact. However, power usage is a real consideration: the USB Accelerator can draw up to 900 mA, so that could be a factor.
May I ask what host you are trying to attach the accelerators to?

GNU Radio streaming between two computers?

Is there a simple way to implement communication between two computers running GNU Radio using the standard block set?
What I have now is this:
On a Linux computer, GNU Radio is running and receiving input from a radio peripheral. On that computer I can see the received waveform on a WX scope. I can also use sliders and input boxes to change things like the receiver frequency.
What I'd like to do is this:
On a Windows computer, I have the WX scope and sliders. When I move a slider or change an input box, that data gets sent to the Linux machine, which is still running the radio receiver in GNU Radio. The received signal streams back to the Windows machine and gets displayed on the WX scope there.
Someone elsewhere suggested using the ZMQ blocks; however, when I tried setting up a PUSH/PULL pair to transmit a sine wave from the Linux machine to the Windows one, nothing went through. The person who recommended that approach tried the same thing and also could not get it working, so I suspect those blocks might be broken.
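As a sanity check outside GNU Radio, a plain libzmq PULL client can be pointed at the machine running the ZMQ PUSH Sink to see whether any bytes arrive at all; if they do, the problem is in the flowgraph, and if not, it is the network or a firewall. A minimal sketch (the address and port are placeholders):

/* Connect a bare ZeroMQ PULL socket to the PUSH side and print the size
 * of each message that arrives. */
#include <stdio.h>
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *sock = zmq_socket(ctx, ZMQ_PULL);

    /* Address of the machine running the ZMQ PUSH Sink block (placeholder). */
    if (zmq_connect(sock, "tcp://192.168.1.10:5555") != 0) {
        perror("zmq_connect");
        return 1;
    }

    char buf[4096];
    for (;;) {
        int n = zmq_recv(sock, buf, sizeof buf, 0);   /* returns the full message size */
        if (n < 0)
            break;
        printf("received %d bytes\n", n);
    }

    zmq_close(sock);
    zmq_ctx_destroy(ctx);
    return 0;
}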
So are there any alternative blocks that can do what I'm trying to do? Preferably something well documented and available in GNU Radio Companion.
Depending on the data rate from the receiver, it's possible to run into performance problems when sending raw waveform data using e.g. the UDP blocks, in which case the receiving flowgraph may print a warning similar to the following:
gr::log :WARN: udp_source0 - Too much data; dropping packet.
Because the scope widgets usually only display a portion of the input data, a better way of remotely visualizing the waveform might be to send only the rendered scope widget (e.g. using a remote desktop tool such as VNC or X2Go). Although this reaches beyond your original problem, it is probably easier to use in the long run for cases involving two-way GUI interaction.
For the scope widget data, the UDP Sink and UDP Source blocks are native to GNU Radio and are documented well enough, or at least simple enough, for this problem, again taking firewall configuration into consideration as @Zephyr mentioned.
From GRC, specify in the UDP blocks:
the hostname or IP address of the display computer, and
a port number that isn't already in use (and, if the receiving side is Linux, OS X, or anything UNIX-like, not a port below 1024).
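If nothing shows up on the remote scope, it can help to first confirm that the UDP Sink's datagrams actually reach the display machine. A minimal POSIX sketch that just listens on the chosen port and reports what arrives (port 2000 is a placeholder; a Winsock equivalent for the Windows side would look very similar):

/* Listen on the UDP port used by the GRC UDP Sink and report incoming
 * datagram sizes; prolonged silence usually points at firewall/routing. */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(2000);              /* must match the UDP Sink block */

    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("socket/bind");
        return 1;
    }

    char buf[65536];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n < 0) {
            perror("recv");
            break;
        }
        printf("received %zd bytes\n", n);
    }

    close(fd);
    return 0;
}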
For setting variables over the network, you might try the XMLRPC blocks, as described in another answer. These were recently deprecated, however.
See my other answer for a discussion of alternatives if performance issues arise.
Both Linux and Windows are likely to have firewalls that might be blocking the connections.
You need to post the error messages displayed in gnuradio-companion.

How to monitor packets using Snort features?

I want to create a network intrusion detection system as an iOS application. The main function is to allow the user to select a home network (maybe simply prompt them to enter the IP address) and to monitor the packets; if there is anything suspicious, we need to alert the user via push notification or email. I wanted to use the features and functions of Snort, an open-source network intrusion detection system.
Any suggestions or sample code? Where do I start?
VMs do not have native hardware access, which is necessary for monitor mode. Maybe IOMMU PCI passthrough or bridged devices might work. It is probably possible to compile the iOS kernel with a module that works for the wireless NIC. I don't think it's a proprietary chip specific to Apple, because a chip with multi-technology RF capabilities wouldn't be cost effective at all. I'm just not sure whether the filesystem or the OS framework blocks access. I have tried to compile Linux/iOS ARM packages natively in the shell from the aircrack-ng source, but have not had any luck. Maybe someone would have better luck actually cross-compiling a package and sideloading it somehow.
I don't think this is possible for multiple reasons:
You wouldn't be able to compile Snort for iOS.
In order to run Snort you have to have the interface (NIC) in promiscuous mode, which I really don't think you can do on an iOS device (iPhone, iPad, etc.), although I have never really looked into it. Apple probably locks this down and restricts it for security purposes, so if you could do it at all you'd likely have to jailbreak the device first. It's not even possible to put the Wi-Fi card in an Apple laptop into monitor mode, which is a similar situation. (For what that capture setup normally looks like on a system that does allow it, see the libpcap sketch after this list.)
There are a lot of dependencies for Snort, most importantly the DAQ. You would probably only be able to monitor the Wi-Fi interface (and even that might not be possible), not the interface used for the cellular network, as that would need a different DAQ module than standard Ethernet NICs.
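For comparison, on an ordinary Linux host this is roughly all that Snort's pcap-based DAQ needs from the operating system to sniff an interface promiscuously; the point is that iOS exposes no equivalent to third-party apps. A minimal libpcap sketch ("en0" is only a placeholder interface name):

/* Open an interface in promiscuous mode and count captured bytes.
 * This is exactly the kind of access an iOS app does not get. */
#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* snaplen 65535, promiscuous = 1, read timeout 1000 ms */
    pcap_t *h = pcap_open_live("en0", 65535, 1, 1000, errbuf);
    if (!h) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    struct pcap_pkthdr *hdr;
    const unsigned char *pkt;
    int rc;
    while ((rc = pcap_next_ex(h, &hdr, &pkt)) >= 0) {
        if (rc == 1)                       /* 0 just means the read timeout expired */
            printf("captured %u bytes\n", hdr->caplen);
    }

    pcap_close(h);
    return 0;
}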
This very likely is not possible on iOS; if it is, it would be VERY difficult to pull off, and even if you did, the use case isn't really good. Even if you could get a DAQ for the cellular card, I don't know whether promiscuous mode even exists there, and if it did, all of the traffic on the cellular network is encrypted, so inspecting it with Snort would be pointless. Even for the Wi-Fi traffic it's probably not worth the effort, honestly: almost all traffic nowadays is encrypted, so you'd have to decrypt it first, which almost certainly isn't possible.
In view of Johnjg12's comments, I am wondering about your goal. If you want to make a NIDS, you can make it OS-independent anyway. If you only want a HIDS that monitors packets destined for the device itself, it doesn't need to be in promiscuous mode (a comment on Johnjg12's response). So now it is something to do with Snort on iOS. I am wondering whether we could do it in a VM and then turn on its promiscuous mode? Having said that, I came across this link: https://www.securemac.com/macosxsnort.php

Sending a UDP packet within a kernel module

Background: I'm a fourth-year computer engineering major at UCSB. I've taken networking and operating systems courses. I created a userspace program that broadcasts UDP packets onto the subnet and receives UDP packets in an ad hoc network. What I'm trying to accomplish is to convert this program into a kernel module that will work on an ARM embedded system running Angstrom Linux, kernel version 2.6.39 (the x86-to-ARM cross-compilation is an issue for another day). The reason for this move to the kernel is to shed some of the overhead of userspace functions and to make the sending and receiving paths as quick as possible.
I've never done anything like this before in any of the courses I've taken, so please tell me if anything I am saying is incorrect, useless or inefficient!
After researching with Google, I've concluded the typical way is to do away with sockets entirely, work with the sk_buff structure, and fill in the necessary headers myself. Would this have an effect on the ability to broadcast packets on the subnet?
I am currently trying to follow the code here:
UDP packet send with linux-kernel module without using sockets
I've figured out the reasoning behind most of the code, but the last part is what confuses me:
eth = (struct ethhdr *) skb_push(skb, ETH_HLEN);  /* prepend room for the Ethernet header */
skb_reset_mac_header(skb);                        /* mark where the MAC header now starts */
skb->protocol = eth->h_proto = htons(ETH_P_IP);   /* payload is IPv4 */
memcpy(eth->h_source, dev->dev_addr, ETH_ALEN);   /* source MAC = the sending interface's address */
memcpy(eth->h_dest, remote_mac, ETH_ALEN);        /* destination MAC (ff:ff:ff:ff:ff:ff for broadcast) */
skb->dev = dev;                                   /* the net_device that will transmit the frame */
dev_queue_xmit(skb);                              /* hand the frame to the driver's transmit queue */
All of the Ethernet header seems to be constructed purely from values defined in kernel headers, apart from the source MAC address; is this correct? I am going to be broadcasting my packets, so what exactly should be put into the destination MAC address field?
More importantly, what is dev in the skb->dev = dev; line? From my investigation, it is a pointer to the device the skb is associated with. From my understanding, I would want this to point to the wireless device, as I am using 802.11 to communicate. Do I have to create my own dev struct for the wireless driver? If so, is there any guidance on how to accomplish this? If not, how can I access the existing device and use it in a kernel module?
I've tried commenting out the dev line and running the code but unsurprisingly I get a kernel panic once it executes dev_queue_xmit(skb);.
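For reference, one common way to make skb->dev point at an existing interface, rather than creating a dev struct of your own, is to look the interface up by name. A minimal sketch, assuming the wireless interface is called wlan0 (only a placeholder name):

#include <linux/netdevice.h>
#include <net/net_namespace.h>

/* Illustration only: grab a reference to an existing net_device by name
 * so that skb->dev can point at a real interface. */
static struct net_device *example_get_wireless_dev(void)
{
    struct net_device *dev;

    dev = dev_get_by_name(&init_net, "wlan0");  /* takes a reference on the device */
    if (!dev)
        pr_err("example: interface wlan0 not found\n");

    /* The caller must call dev_put(dev) once it has finished transmitting. */
    return dev;
}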
Again, I've never done anything like this before, so any advice would be helpful, even if it means changing my approach entirely! I also understand that this could be a niche of a question, but any sort of guidance is appreciated!
Thank you in advance!
The best approach is not to interfere with the protocol layer if you are not trying to modify a protocol. Work at a higher layer: the socket layer. That API can be found in net/socket.c.
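To make that suggestion concrete, here is a hedged sketch of the socket-layer approach on a 2.6.39 kernel: sending one UDP broadcast datagram from a module with the in-kernel socket API instead of hand-building sk_buffs. The port number and payload are placeholders and error handling is abbreviated:

/* Sketch: broadcast one UDP datagram from kernel space via the in-kernel
 * socket API (sock_create_kern/kernel_sendmsg), as found in net/socket.c.
 * Uses the 2.6.39 signatures; newer kernels also take a struct net *. */
#include <linux/in.h>
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/string.h>
#include <linux/uio.h>
#include <net/sock.h>

static int example_udp_broadcast(void)
{
    struct socket *sock;
    struct sockaddr_in addr;
    struct msghdr msg;
    struct kvec vec;
    char payload[] = "hello";
    int one = 1, ret;

    ret = sock_create_kern(PF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock);
    if (ret < 0)
        return ret;

    /* Allow sending to the limited broadcast address. */
    kernel_setsockopt(sock, SOL_SOCKET, SO_BROADCAST, (char *)&one, sizeof(one));

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5555);                     /* placeholder port */
    addr.sin_addr.s_addr = htonl(INADDR_BROADCAST);  /* 255.255.255.255 */

    memset(&msg, 0, sizeof(msg));
    msg.msg_name = &addr;
    msg.msg_namelen = sizeof(addr);

    vec.iov_base = payload;
    vec.iov_len = sizeof(payload);

    ret = kernel_sendmsg(sock, &msg, &vec, 1, vec.iov_len);

    sock_release(sock);
    return ret < 0 ? ret : 0;
}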

Understanding the Android emulator: Testing images? Network connectivity dependencies?

To better clarify my generic question:
I have gotten the Android emulator to work by running a full "make full-eng" build, as per the Google documentation. However, I wanted to debug it, so once the emulator was running, I called "$ adb shell dmesg", redirected that to an output text file, and found a couple of strange lines:
...
<4>goldfish_new_pdev goldfish_interrupt_controller at ff000000 irq -1
<4>goldfish_new_pdev goldfish_device_bus at ff001000 irq 1
<4>goldfish_new_pdev goldfish_timer at ff003000 irq 3
<4>goldfish_new_pdev goldfish_rtc at ff01000
So when you run the full Android build, it gives you goldfish as the system image? I want to know whether it's testing the things I want for the Galaxy Nexus. The kernel was a modified maguro kernel (OMAP project) for the Galaxy Nexus, which I put into the build tree. But the platform I want to be testing is Ice Cream Sandwich. Is the emulator testing this platform? (Because the output in this log leads me to believe it isn't.) Or is the emulator testing a "generic" image?
Also, an important further question: I modified the kernel's "socket.h" file to override the INET protocol with an undefined protocol (FINS). In theory the phone should boot up, but with NO internet access. Does the phone emulator care what you do to the internet protocols? Does it use your host computer's networking capabilities?
One further follow-up: which of the phone's processes/system services/events (those involved in booting to a stable state) DEPEND on the internet protocols of the traditional underlying network stack (the protocols used to set up network sockets)?
At the time I wrote the question I did not understand a few things, and I think I've learned a little while messing with the emulator at the "kernel level". First of all, the emulator tests the "goldfish" kernel (Linux version 2.6.29, ARM architecture) of a "generic" phone brand. It's almost as if the emulator is a type of phone in and of itself, and you cannot mix these image kernels. For example, I tried building a Nexus S (crespo) phone image with the goldfish kernel (in other words, no crespo kernel) and the phone just "hangs" at the Google splash screen (at least it's not a boot loop).
My research (FINS) worked on this emulator, but did not work on any of the three platforms supported on actual hardware: Nexus S, Galaxy Nexus, and Motorola Xoom. I am not sure why, given that Google does not seem to give users the ability to debug at the lowest level of a phone (I'm sure the actual developers use such tools when building and testing these phones). This leads to one major issue, which answers my last follow-up: the Android Debug Bridge depends on the INET protocol. My emulator boots up successfully and runs as I want (no internet, because there is no INET), but the actual phones do NOT. My hypothesis is that if INET is overridden with a protocol that is empty (in this case FINS, which intends to handle INET at the userspace level, but this appears to be too late for the phone system), the ADB daemon (perhaps best classified as a type of system service) cannot run or be connected to, and the Android hardware will crash because of this. I believe the emulator is more flexible than a real phone, as its hardware is represented virtually and does not have the same limitations as physical hardware.
You can consult my wiki/documentation (part of my research team's larger site) of my struggle with the Android phone boot process for more details and my various attempts: http://finsframework.org/mediawiki/index.php/Alexander_G._Ororbia_II
If anyone ever figures out how to get a working boot log from a Nexus S, Galaxy Nexus, or Motorola Xoom that gets stuck in a "boot loop" (without ADB), please let me know, as I will be working on this problem for a while to come (and I will update my other Stack Overflow Android questions to reflect the correction). Any corrections to my understanding would also be appreciated.
NOTE: This answer is editable, as I still think there is some way of getting the phone to produce boot logs on the host machine without the ADB daemon.
