Setup and observation
I have a PC equipped with an Intel I350-T2 NIC, and I would like to capture on both interfaces simultaneously using tcpdump. Both interfaces are connected to a 100 Mbit HUB (sic!) which forwards various traffic from an external traffic source to both interfaces at the "same time", so I can measure the difference between the timestamps taken by the respective Ethernet MACs.
Capturing simultaneously with:
user@rt:~$ sudo tcpdump -j adapter --time-stamp-precision nano -B 1048576 -i eth2 -w test_eth2.pcap
user@rt:~$ sudo tcpdump -j adapter --time-stamp-precision nano -B 1048576 -i eth3 -w test_eth3.pcap
After that I merge the two files together to compare the timestamps:
user@rt:~$ mergecap -F nseclibpcap -w merged.pcap test_eth2.pcap test_eth3.pcap
Wireshark then shows that, for some packets, the timestamp difference between the duplicate frames is around 20-40 ns (which is nice and sufficient for my application!).
But there are also lots of frames whose duplicates differ by up to tens of microseconds.
Environment
user@rt:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 15.04
Release: 15.04
Codename: vivid
user@rt:~$ uname -r
3.19.0-28-generic
user@rt:~$ lscpu | grep "Model name"
Model name: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz
Questions
1. How/who does the syncing of the adapter clock to CLOCK_REALTIME (I presume that is the reference)?
2. How often does this syncing occur?
3. Is there some room for tweaking without too much effort?
4. Is it possible to use e.g. phc2sys to sync all adapter clocks to CLOCK_REALTIME?
5. Would (4) interfere with the mechanisms from (1)?
THX for help or pointing out my mistakes!
From a quick glance at igb_ethtool.c your NIC indeed seems to be capable of hardware timestamping. The jitter you observe (20-40ns) is just in the range of the expected PHY jitter from synchronizing to the Ethernet clock. (For 100Mbit the clock is 25MHz or 40ns.)
So far looking great, thanks to Intel. Not many NICs/drivers have this capability.
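(You can check this yourself: ethtool -T reports which timestamping modes a driver exposes and which PTP hardware clock it is bound to. A quick look, using the interface names from the question, might be:)

sudo ethtool -T eth2   # look for hardware-receive/hardware-transmit capabilities and the PTP Hardware Clock index
sudo ethtool -T eth3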
Now the bad part: I doubt anything is syncing CLOCK_REALTIME to the NIC adapter clock at the moment. The clocks are probably free-running at slightly different frequencies. Those oscillators are usually specified at 50 ppm; typical drifts will be around 5 ppm, which means they will drift apart by ~5 µs every second, varying with room temperature. Keep that in mind when using the nanosecond precision. If your system uses NTP you may even see NTP drift adjustments happening.
But the good news is that you probably don't need to synchronize them, unless you really want absolute timestamps. The main reason your NIC supports hardware timestamping at all is probably to support IEEE 1588 PTP (Precision Time Protocol). If you need absolute time with sub-microsecond precision, you should look at this protocol and/or buy a GPS receiver.
If you just need relative timestamps, you could try -j adapter_unsynced instead of -j adapter, or you could try to stop NTP from drift-correcting your system clock. If all this fails, you could try linuxptp, which may have the capability to properly sync NIC and system time even if you don't have a PTP network.
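If you do try linuxptp for question (4), a minimal phc2sys sketch (assuming the linuxptp package; the option names are from its phc2sys tool and may differ between versions, so check the man page) could look like this, run as root with one instance per adapter clock:

phc2sys -s CLOCK_REALTIME -c eth2 -O 0 -m   # discipline eth2's adapter clock to the system clock
phc2sys -s CLOCK_REALTIME -c eth3 -O 0 -m   # same for eth3, in a second instance

Note that this only steers the adapter clocks towards CLOCK_REALTIME; if NTP is also steering CLOCK_REALTIME you will still see its drift adjustments in the timestamps.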
And finally... you are using a HUB, which means Ethernet is running in half-duplex mode, which means... collisions (unless your NICs are absolutely quiet). I guess in theory this shouldn't matter, because you observe the same collisions on both NICs, and frames aren't delayed or queued differently depending on which path they take. But since half-duplex is so rare these days, NIC timestamping support may not have been implemented with it in mind. Typical bugs in such implementations are e.g. returning the timestamp of the previous frame instead of the current one.
Chapter 7.9 of the Intel i350 Datasheet may help you with your 3rd question. It provides the set of PTP registers you can configure in the igb driver.
http://www.intel.com/content/dam/www/public/us/en/documents/datasheets/ethernet-controller-i350-datasheet.pdf
I'm deciding between the MiniPCIe and USB accelerators for a home Linux CCTV project. The host has both USB3 and a MiniPCIe socket. The host's physical environment will range from an ambient 20C up to a potential 35C (during the summer).
I'm struggling to determine the pros and cons for each. I have gotten this far, although many are guesses:
USB:
Supports Windows and MacOS as well as Linux
Appears to have greater mindshare/use/community support on the Internet
External so can be placed to optimise heat dissipation
Heatsink
Two manual performance modes, highest requires ambient temp of max 25C
Can use up to 4.5W (900mA @ 5V)
Mini PCIe:
Cheaper (25%)
Lower power consumption (1.4W for 416 fps)
Automatic thermal throttling via driver
Relies on host system for active cooling
Will maintain max operation at 85C
There are probably many I've missed. In particular I can't determine if there are any limitations on throughput/capacity using USB vs PCIe. If there is no difference, then I suspect the USB form factor is the better option, if only for the mindshare, although the power usage/heat generated may be a concern.
To whittle this down to an actual question: in what cases would the Mini PCIe interface be a preferred option to the USB one?
If you are looking for a plug-and-play solution, then I definitely suggest the USB Accelerator. Overall, as long as you meet the system requirements, it'll always work (maybe with some modifications to the standard Linux configs, like adding your user to the plugdev group, ...). Then the software for the CCTV is all up to you :)
The PCIe version sometimes needs extra work, like adding kernel arguments and modules to keep the PCIe driver happy. If you are looking to launch a big product where volume is expected, then it is worth investigating, since it's cheaper and more compact. However, power usage is a must for consideration, as the USB Accelerator can use up to 900mA, so that could play a factor.
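For reference, the kind of extra work is usually small; a hedged sketch (the plugdev group comes from the USB setup note above, while the module and device names for the PCIe card are assumptions and may differ on your distro):

sudo usermod -aG plugdev $USER    # USB Accelerator: allow non-root access, then log out and back in
lsmod | grep -E 'gasket|apex'     # Mini PCIe: check whether the driver modules are loaded
ls -l /dev/apex_0                 # device node the PCIe driver is expected to create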
May I know what host you are trying to attach the accelerators to?
I want to make an IP scan using Nmap, but the operating time varies for some reason. The command can be executed in 2 seconds, and if I launch it again right after, it can take 30 seconds.
This is the command I use:
nmap -n -sn -T5 --max-rtt-timeout 1s
-n: no DNS resolution
-sn: disable port scan
-T5: fast mode
--max-rtt-timeout 1s: maximum round-trip timeout for the probes
I don't know if my optimisation is good, and how can I make it better?
Thank you
Some debug output (-d --packet-trace would be a good start) would be very helpful to diagnose this problem. My first thought was that you were telling Nmap to use timeouts that were too short, leading to retransmissions when they don't need to happen. But that probably wouldn't lead to 30-second run times; the response would be seen and accepted as soon as it came in, even if the probe was retransmitted first.
More helpful information would be the version and platform of Nmap (nmap --version), whether your command is run with root or administrator privileges, and what kind of network you are scanning on. The question is tagged wifi, but you don't say whether the target is on the same link as you, or several network hops away.
Most importantly for you, you should learn what -T5 really means so that you can make rational decisions when tweaking performance variables. Not only is -T5 not properly "fast mode," but you have set the round-trip timeout to be 3 times longer than -T5 defaults to, which is probably a good signal that the rest of the variables are not where they need to be. Try -T4 or even the default -T3 and see if the timing stabilizes. I would not be surprised if it turns out to be nearly as fast as your best -T5 times.
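As a quick, hedged experiment (the target range below is a placeholder, and timings will vary with your network), compare the same host-discovery scan under different templates and with debug output:

nmap -n -sn -T4 192.168.1.0/24                      # calmer template, default RTT handling
nmap -n -sn -T3 192.168.1.0/24                      # default timing, for comparison
nmap -n -sn -T5 -d --packet-trace 192.168.1.0/24    # the original template, with packet traces to spot retransmissions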
In my case, adding --min-parallelism to the original command improved the scan time from 26.54 seconds to 3.26 seconds:
nmap -n -sn -T5 --max-rtt-timeout 1s --min-parallelism 100 172.27.192.0-255
I'm working on a virtual machine under Debian with EGLIBC 2.13 in order to learn about memory addresses.
So I wrote a simple program that prints the address of a test variable, but every time I execute it, I get a totally different address.
Here are two screenshots from two distinct executions:
What's causing this? The fact that I'm working on a VM, or my GLIBC version?
I guess it's GLIBC doing this to prevent buffer overflows, but I can't find the answer on the web.
And is it totally random?
First, GLib (from GTK) is not GNU libc (a.k.a. glibc).
Then, you are observing the effect of ASLR (address space layout randomization). Don't try to disable it on a server directly connected to the Internet, it is a valuable security measure.
ASLR is mostly provided by the Linux kernel (e.g. when handling mmap(2) without MAP_FIXED, as most implementations of malloc do, and probably also at execve(2) time for the initial stack). Changing your libc (e.g. to musl-libc) won't disable it.
You could disable system-wide ASLR on a laptop (or on a Linux system running inside some VM) using proc(5): run
echo 0 > /proc/sys/kernel/randomize_va_space
as root. Be careful, by doing that you are decreasing the security of your system.
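If you only want to disable ASLR for a single program rather than system-wide, a per-process alternative (util-linux's setarch; the binary name is a placeholder) is:

setarch "$(uname -m)" -R ./your_test_program   # -R / --addr-no-randomize disables ASLR for this process only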
I don't know what you call totally random, but ASLR is random enough. IIRC (but I might be wrong), the middle 32 bits of the 64-bit address (assuming a 64-bit Linux system) are quite random, to the point of making the result of mmap (and hence of malloc, which uses it) practically unpredictable and non-reproducible.
BTW, to see ASLR in practice, try several times (with ASLR enabled) the following command
cat /proc/self/maps
this command displays a textual representation of the address space (in virtual memory) of the process running that cat command. You'll see different output each time you run it!
For debugging memory leaks, use valgrind. With a recent GCC (4.9 or better) or a recent Clang/LLVM compiler, the address sanitizer is also useful, so you could compile with gcc using -Wall -Wextra to get all the warnings (even the extra ones), -g to get debug info, and -fsanitize=address to enable the sanitizer.
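A possible compile-and-check sequence (file and binary names are placeholders):

gcc -Wall -Wextra -g -fsanitize=address test.c -o test   # warnings, debug info, address sanitizer
./test                                                   # ASan reports leaks and invalid accesses at runtime
valgrind ./test                                          # or run valgrind on a build made without -fsanitize=address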
I'm doing a study project on wifi signal quality. What I want to do is use Raspberry Pis to monitor as many metrics as possible at the packet level. I want to do this by putting wifi adapters into monitor mode (using airmon-ng) and then capturing data about the packets using a wireless network protocol analyzer, like tshark.
What I understand about wireless captures is that you mainly have three parts: a frame part that has the same information independent of what you're capturing on, which contains things such as frame number, frame length and arrival time. (I want to upload images but don't have 10 reputation yet...)
Then there is the IEEE 802.11 data, which contains the necessary stuff for the network to work. When capturing on WLAN this contains the MAC addresses.
And then we have the radiotap header, which contains all kinds of information (signal strength in dB and dBm, noise level, signal quality, TX value, and much more). This one is a bit different, since this information is actually filled in or injected by the wifi adapter you use to capture the data.
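A hedged way to check what actually ends up in those fields (these are Wireshark's radiotap field names; the interface name is a placeholder and the adapter must already be in monitor mode):

tshark -i wlan0mon -c 20 -T fields -e frame.number -e radiotap.dbm_antsignal -e radiotap.dbm_antnoise -e radiotap.datarate
# empty columns mean the driver simply doesn't supply that radiotap field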
In the present flags you can find which values are actually being injected by the wifi adapter. Now my problem is that for my research I really need as many values as possible. I've been working for hours but I haven't succeeded in finding a way to capture anything more than dBm signal strength (if even that is available). So this is what I tried so far:
The adapters I used so far are the Edimax EW7811UN, the AirPcap Classic, the AirPcap Tx and two similar Alfa adapters with the Atheros AR9271 chipset. The AR9271 adapters worked out of the box on Raspbian (Debian for Raspberry Pi) with the ath9k_htc driver. Putting them into monitor mode and capturing works fine, but only the dBm signal strength is given in the capture (as in the screenshots above).

The Edimax worked out of the box on the 8192cu driver, however it clearly doesn't support monitor mode. I could put it into monitor mode when running it on the zd1211rw driver, but that didn't even give the dBm signal strength. The strange thing, however, is that a friend tried the exact same Edimax adapter and he could capture, and the only difference we could find is that his lsmod says rtl8192cu and not 8192cu. Strangely, forums say that 8192cu is the newer version; however, this friend had the newest Arch Linux kernel installed (newer than Raspbian's). So I installed Arch Linux on the Pi, but still wasn't able to put the Edimax on the 8192cu driver into monitor mode. Then I found a package in the AUR repos, dkms-8192cu, which was supposed to have a patched version. However, after installing it, it still didn't work. Also, downloading the driver from the Realtek website didn't work. There is some stuff about patching on the aircrack-ng website, but it is actually about injection of frames and doesn't really look to be what I need.
Then I bought the AirPcap Classic and the AirPcap Tx to see what they could do. First of all, they have zero Linux support, which is already a big drawback since I need to use them from the Pis. However, even in Windows the AirPcaps only capture dB and dBm noise and signal quality. They do receive some data for the dBm noise level, but it's worthless since it is always at the -100 level. The AirPcap Classic and Tx have a ZD1211B chipset, so I can run them on the zd1211rw driver, but this also gives no dBm signal value or anything else.
So my question is: what exactly determines what's in the radiotap header? I guess it would all be in the driver, but I need to be sure before I write off every ath9k_htc-based adapter. I'm about to purchase another adapter which runs on the carl9170 driver; however, I can't find any guarantee anywhere that it will give me those values. What I did find in the literature is that the madwifi driver gives (or was giving) noise levels; however, it was acquired by Atheros, so the project stopped, and all websites suggest just using the ath9k or ath5k drivers. I tried to install it but failed, because it seems to be really outdated software since the project stopped.
It would be a really big help if someone could explain to me what exactly determines what's inside the radiotap headers, and also if someone could share any experience of capturing more than just dBm signal strength values from Linux.
I am developing a custom network driver for a PHY media which doesn't support full duplex mode.
I want to use TCP/IP traffic with this network driver and on top of this half-duplex PHY media.
But TCP/IP traffic can be full duplex. I would like to implement some mechanism/algorithm in this driver so that it converts the TCP/IP traffic to half duplex in Linux.
Please let me know if this can be achieved or how to do it.
So you are trying to write a driver which supports full duplex traffic on a card which actually does NOT support the feature...
Well... you must be aware that the networking subsystem is one of the largest subsystems in the kernel and one of the few which actually uses softirqs (because it is always looking to scale appropriately in this day and age of multiprocessors), and it still had to resort to some trickery (NAPI) in order to manage the deluge of interrupt requests generated by the ever-increasing rates of present-day media... Why I'm saying all this is because I just want to remind you of the real-life complexities involved in writing a 'regular' network driver, let alone a 'pseudo full duplex' driver.
Now I believe what you pretty much want is to give an illusion of 'full duplexity' to the... TCP/IP stack (is it??), i.e. your driver is just another full duplex driver, and any one of this driver's clients, be it the MAC layer or something like ethtool, can go have a ball with it (in terms of dumping and retrieving packets) in the same manner as it does with (and expects results out of) a 'regular full duplex' driver...
So if this is really the case, I wonder what good giving such an illusion might be? Perhaps you are just experimenting? In any case, TCP is full duplex by default anyway, and by using a half duplex medium the data rates are a bit lower (although not exactly half) than those obtained with a full duplex adapter. I don't think it even matters at the higher layers (in terms of functionality) whether the medium is full or half duplex (except maybe in the MAC layer?); correct me if I'm wrong.
There were (and still are) quite a few half duplex media in use, and there are many media which support both full duplex and half duplex traffic. I fail to see how it will affect the clients of the driver (besides lowering the overall data rate as the only tangible effect)... which means you can pretty much look at any network driver in the kernel and see that it has ways to configure the adapter to use either full or half duplex (and user space can use, say, ethtool as one way to toggle this)...
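For example, on an ordinary Ethernet adapter the duplex setting is just an adapter/driver knob toggled from user space, without the TCP/IP stack noticing (interface name is a placeholder):

ethtool eth0                                             # show current speed/duplex and what the link partner advertises
sudo ethtool -s eth0 speed 100 duplex half autoneg off   # force 100 Mbit half duplex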
Anyway, you may want to have a look at, and perhaps take a few tips from, the Modbus driver (the bus being half duplex by default) here.
I'm not sure how you're relating the MAC layer to the TCP layer. Duplex mode is an Ethernet-domain concept and it doesn't propagate to IP, and not even to TCP. In Ethernet terms, duplex means you can either send or receive MAC frames exclusively at different times (half duplex) or do both at the same time (full duplex).
The upper layers of the network stack are completely unaware (at least they should be) of this process. Consider the following example: you're sending a huge file over the network using FTP; assuming a normal network system, the stack would be FTP/TCP/IP/Ethernet. From the FTP perspective you have a virtual session, from TCP you have a virtual pipe, from IP you just know how to reach the end system, and from the Ethernet perspective you just know how to reach the next node in the network.
TCP doesn't care whether your packets are chopped up during transmission, nor whether a packet is delayed within a certain threshold because of an incoming packet arriving. It only cares about receiving a confirmation that the packet made it to the final destination. I hope this shows my point.