I want to run an IP scan using Nmap, but the run time varies for some reason. The command can finish in 2 seconds, and if I launch it again right after, it can take 30 seconds.
This is the command I use:
nmap -n -sn -T5 --max-rtt-timeout 1s
-n : no DNS resolution
-sn : disable port scan
-T5 : fast mode
--max-rtt-timeout 1s : round-trip timeout for the probes
I don't know whether my optimisation is good, and how can I make it better?
Thank you
Some debug output (-d --packet-trace would be a good start) would be very helpful to diagnose this problem. My first thought was that you were telling Nmap to use timeouts that were too short, leading to retransmissions when they don't need to happen. But that probably wouldn't lead to 30-second run times; the response would be seen and accepted as soon as it came in, even if the probe was retransmitted first.
More helpful information would be the version and platform of Nmap (nmap --version), whether your command is run with root or administrator privileges, and what kind of network you are scanning on. The question is tagged wifi, but you don't say whether the target is on the same link as you, or several network hops away.
Most importantly for you, you should learn what -T5 really means so that you can make rational decisions when tweaking performance variables. Not only is -T5 not properly "fast mode," but you have set the round-trip timeout to be 3 times longer than -T5 defaults to, which is probably a good signal that the rest of the variables are not where they need to be. Try -T4 or even the default -T3 and see if the timing stabilizes. I would not be surprised if it turns out to be nearly as fast as your best -T5 times.
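For illustration only (the target range below is a placeholder, not taken from the question), a debug run with a milder timing template might look like this:
# -d --packet-trace prints each probe and reply, so you can see where the time goes;
# -T4 keeps aggressive timing but with saner retransmission defaults than -T5.
nmap -d --packet-trace -n -sn -T4 192.0.2.0/24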
In my case, adding --min-parallelism to the original command improved the scan time from 26.54 seconds to 3.26 seconds:
nmap -n -sn -T5 --max-rtt-timeout 1s --min-parallelism 100 172.27.192.0-255
For the project I'm working on, I need to be able to measure the total amount of network traffic for a specific container over a period of time. These periods are generally about 20 seconds, and the precision needed is realistically only in kilobytes. Ideally this solution would not involve additional software either in the containers or on the host machine, and would be suitable for Linux/Windows hosts.
Originally I had planned to use the 'NET I/O' attribute of the 'docker stats' command, but that field is automatically formatted into a more human-readable form (e.g. '200 MB'), which means that for containers that have been running for some time, I can't get the precision I need.
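For illustration (the container name "web" is a placeholder), this is the kind of output being described; the NET I/O column is already rounded to a human-readable unit:
# One-shot stats for a single container, showing only its name and cumulative network I/O.
docker stats --no-stream --format "table {{.Name}}\t{{.NetIO}}" web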
Is there any way to get the raw value of 'NET I/O', or to reset the running count? Outside of this I've explored using something like tshark or iptables, but as I said above, ideally the solution would not require additional programs. If there aren't any good solutions that fit those criteria, though, any other suggestions would be welcome. Thank you!
I want to constantly monitor TCP data coming from a particular source.
I am using this command in a cygwin mintty xterm on a Windows server.
tshark.exe -i 5 -f "tcp port 1234" -T fields -e data | xxd -r -p
This works perfectly: I get a scrolling window of ASCII that is the data being sent to me. When that connection fails (which is what I am trying to debug), the last data sent is shown in the Cygwin window.
However, I notice that the Tshark memory usage is constantly creeping up and after a few hours is quite large.
What can I do about this? I would like to leave this running for several days.
You might want to try modifying your tshark command-line options to include the -b files:N option (and possibly other -b options as well for duration and/or filesize) to start a ring buffer. Doing so will cause tshark to discard its internal state whenever it rolls to the next file in the ringbuffer. Discarding state means freeing all memory associated with that state information, so at least in theory, you can capture forever.
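As a minimal sketch (the file count, size, and output name are illustrative values, not recommendations), a ring buffer of capture files looks something like this; tshark frees its accumulated state each time it rolls to the next file:
# Write raw packets to a ring of at most 5 files, switching roughly every 100 MB.
tshark.exe -i 5 -f "tcp port 1234" -b files:5 -b filesize:100000 -w ring.pcap
Whether this can be combined directly with the -T fields pipeline above may need some experimenting, so treat it as a starting point.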
For more information, feel free to read the Wireshark blog article, "To Infinity and Beyond! Capturing Forever with Tshark" by Evan Huus, one of the Wireshark core developers and the person responsible for implementing this feature to discard state information.
I am trying to configure the Docker daemon to follow the CIS benchmarks. One of the recommendations is to configure default ulimit parameters when starting it. They give the example of:
dockerd --default-ulimit nproc=1024:2048 --default-ulimit nofile=100:200
How do I know how to calculate the best settings for nproc and nofile for my particular environment?
I guess that to set those, you would need to know what your applications' upper limits might be. You would first need to run the applications under the kind of load you expect them to handle, and then measure the process counts and open files they actually use.
See https://unix.stackexchange.com/questions/230346/how-to-check-ulimit-usage for information on how to check the limits. You would need to get the limits for all the PIDs that you are running as containers. I would then pad them a bit to account for headroom.
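As a rough sketch of that kind of measurement (the PID here is a placeholder for a containerized process as seen from the host):
PID=1234                       # placeholder: a process running inside one of your containers
cat /proc/$PID/limits          # the soft/hard limits currently applied to it
ls /proc/$PID/fd | wc -l       # open file descriptors right now (compare against nofile)
ps -o nlwp= -p $PID            # thread count for that process (counts toward nproc)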
Even then, you are likely to go down a path of chasing limits constantly as it will probably be difficult to accurately get all of them correct before the application goes to production.
Setup and observation
I have a PC equipped with an Intel i350-T2 NIC on which I would like to capture on both interfaces simultaneously using tcpdump. Both interfaces are connected to a 100 Mbit hub (sic!) which forwards various traffic from an external traffic source to both interfaces at the "same time", so I can measure the difference between the timestamps taken by the respective Ethernet MACs.
Capturing simultaneously with:
user@rt:~$ sudo tcpdump -j adapter --time-stamp-precision nano -B 1048576 -i eth2 -w test_eth2.pcap
user@rt:~$ sudo tcpdump -j adapter --time-stamp-precision nano -B 1048576 -i eth3 -w test_eth3.pcap
After that I merge the two files together to compare the timestamps:
user@rt:~$ mergecap -F nseclibpcap -w merged.pcap test_eth2.pcap test_eth3.pcap
Wireshark then shows me that for a few packets I get a timestamp difference between the duplicate frames of around 20-40 ns (which is nice and sufficient for my application!).
But there are also lots of frames which show a difference of up to tens of microseconds when comparing the respective duplicates.
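For reference, one way (my own suggestion, not part of the original setup) to pull the nanosecond timestamps out of the merged file for offline comparison is:
# Dump frame number, epoch timestamp, and source MAC so duplicate frames can be paired up externally.
tshark -r merged.pcap -T fields -e frame.number -e frame.time_epoch -e eth.src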
Environment
user@rt:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 15.04
Release: 15.04
Codename: vivid
user@rt:~$ uname -r
3.19.0-28-generic
user@rt:~$ lscpu | grep "Model name"
Model name: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz
Questions
1. How, and by what component, is the adapter clock synced to CLOCK_REALTIME (I presume)?
2. How often does this syncing occur?
3. Is there some room for tweaking without too much effort?
4. Is it possible to use e.g. phc2sys to sync all adapter clocks to CLOCK_REALTIME?
5. Would (4) interfere with the mechanism from (1)?
THX for help or pointing out my mistakes!
From a quick glance at igb_ethtool.c your NIC indeed seems to be capable of hardware timestamping. The jitter you observe (20-40ns) is just in the range of the expected PHY jitter from synchronizing to the Ethernet clock. (For 100Mbit the clock is 25MHz or 40ns.)
So far looking great, thanks to Intel. Not many NICs/drivers have this capability.
Now the bad part: I doubt anything is syncing CLOCK_REALTIME to the NIC adapter clock at the moment. The clocks are probably free-running at slightly different frequencies. Those oscillators are usually specified at 50 ppm; typical drift will be around 5 ppm, which means they will drift apart by roughly 5 µs every second, varying with room temperature. Keep that in mind when using the nanosecond precision. If your system uses NTP, you may even see NTP drift adjustments happening.
But the good news is that you probably don't need to synchronize them, unless you really want absolute timestamps. The main reason your NIC supports hardware timestamping at all is probably to support IEEE 1588 PTP (Precision Time Protocol). If you need absolute time with sub-microsecond precision, you should look at this protocol and/or buy a GPS receiver.
If you just need relative timestamps, you could try -j adapter_unsynced instead of -j adapter, or you could try to stop NTP from drift-correcting your system clock. If all this fails, you could try to start linuxptp, which may have the capability to properly sync the NIC and system time even if you don't have a PTP network.
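As a sketch of those two ideas (interface names as in the question; the phc2sys flags are my assumption based on its man page, so double-check them against your linuxptp version):
# Relative timestamps only: take the NIC clock as-is, without syncing it to the system clock.
sudo tcpdump -j adapter_unsynced --time-stamp-precision nano -i eth2 -w test_eth2_unsync.pcap
# Or, with linuxptp installed, discipline the eth2 adapter clock from the system clock.
sudo phc2sys -s CLOCK_REALTIME -c eth2 -O 0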
And finally... you are using a hub, which means Ethernet is running in half-duplex mode, which means... collisions, unless your NICs are absolutely quiet. I guess in theory this shouldn't matter, because you observe the same collisions on both NICs, and frames aren't delayed or queued differently depending on which path they take. But since half-duplex is so rare these days, it may be that NIC timestamping support wasn't implemented with that in mind. Typical bugs in such implementations include returning the timestamp of the previous frame instead of the current one.
Chapter 7.9 of the Intel i350 Datasheet may help you with your 3rd question. It provides the set of PTP registers you can configure in the igb driver.
http://www.intel.com/content/dam/www/public/us/en/documents/datasheets/ethernet-controller-i350-datasheet.pdf
Is it easy to write a script to test whether the network is ever down over the next 24 or 48 hours? I can use ssh to connect to a shell and come back 48 hours later to see whether it is still connected, to tell if the network has ever been down, but can I do it programmatically and easily?
The Internet (and your ethernet) is a packet-switched network, which makes the definition of 'down' difficult.
Some things are obvious; for example, if your ethernet card doesn't report a link, then it's down (unless you have redundant connections). But having a link doesn't mean it's up.
The basic, 100-mile view of how the Internet works is that your computer splits the data it wants to send into ~1500-byte segments called packets. It then, pretty much blindly, sends them on their way to wherever your routing table says. That next machine repeats the process; eventually, through many repetitions, the packet reaches the remote host.
So, you may be tempted to define up as the packet reached its destination. But, what happens if the packet gets corrupted, e.g., due to faulty hardware or interference? The next router will just discard it. Ok, that's fine, you may well want to consider that down. What if a router on the path is too busy, or the link it needs to be sent on is full? The packet will be dropped. You probably don't want to count that as down.
Packets routinely get lost; higher-level protocols (e.g., TCP) deal with it and retransmit the packet. In fact, TCP uses the packet drop on link full behavior to discover the link bandwidth.
You can monitor packet loss with a tool like ping, as the other answer states.
If your need is more administrative in nature, and using existing software is an option, you could try something like monit:
http://mmonit.com/monit/
Wikipedia has a list of similar software:
http://en.wikipedia.org/wiki/Comparison_of_network_monitoring_systems
You should consider also whether very short outages need to be detected. Obviously, a periodic reachability test cannot guarantee detecting outages shorter than the testing interval.
If you only care about whether there was an outage, not how many there were or how long they lasted, you should be able to automate your existing ssh technique using expect pretty easily.
http://expect.nist.gov/
Most platforms support a ping command that can be used to find out if a network path exists to an IP address somewhere "else". Where, exactly, to check depends on what you are really trying to answer.
If the goal is to discover the reliability of your first hop to your ISP's network, then pinging a router or their DNS regularly might be sufficient.
If your concern is really the connection to a single node (your mention of leaving an ssh session open implies this) then just pinging that node is probably the best idea. The ping command will usually fail in a way that a script can test if the connection times out.
Regardless, it is probably a good idea to check at a rate no faster than once a minute, and slower than that is probably sufficient.
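As a minimal sketch of that approach (Linux iputils ping syntax; the target address is a placeholder, and the one-minute interval follows the advice above, so adjust both to your situation):
#!/bin/sh
# Ping one host every 60 seconds and append a timestamped line whenever it is unreachable.
TARGET=192.0.2.1
while true; do
    if ! ping -c 1 -W 2 "$TARGET" > /dev/null 2>&1; then
        echo "$(date) unreachable" >> outage.log
    fi
    sleep 60
done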