I need to ping about 2500 servers at a time, at intervals of about 15-30 minutes, to show semi-real-time server status information. This could eventually scale to tens of thousands of sites, so I need to keep that in mind.
I'm on an Ubuntu 10.10 VPS (Bash), using Ruby.
Is there any way to go about doing this?
Edit: I should also note that I only care if the server is online. So first packet received should suffice.
I'd consider shelling out to nmap or the like. It's well tuned to this purpose, being quite fast, and it offers enough different ways to ping to satisfy any need. Here's using nmap to discover all hosts on a segment of my network:
wayne@treebeard:~$ nmap -sP 10.0.0.0/24
Starting Nmap 5.00 ( http://nmap.org ) at 2010-12-08 09:16 MST
Host gw (10.0.0.1) is up (0.00036s latency).
Host 10.0.0.2 is up (0.0071s latency).
Host isengard.internal.databill.com (10.0.0.3) is up (0.00062s latency).
...
Host arod.internal.databill.com (10.0.0.189) is up (0.0046s latency).
Host 10.0.0.254 is up (0.00042s latency).
Nmap done: 256 IP addresses (43 hosts up) scanned in 3.00 seconds
Here we've scanned for all hosts from 10.0.0.0 through 10.0.0.255.
-sP is a "ping scan", a pretty generic host discovery mechanism that can be run as an ordinary user. There are other types of scan that nmap does, many of them needing root privileges.
In Ruby, you'd use backticks or IO.popen to run nmap and capture its results:
output = `nmap -sP 10.0.0.0/24`
output.each_line.find_all do |line|
  line =~ /^Host/
end.each do |line|
  # Whatever you want to do for each host
end
If you supply the -oX switch, nmap will output XML, which may be easier to parse (thanks, tadman).
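If you go the XML route, the output can be parsed with Ruby's bundled REXML. A rough sketch, using a trimmed, hypothetical sample of nmap's XML in place of a live scan (the real output has many more elements and attributes, but host/status/address are the parts that matter here):

```ruby
require "rexml/document"

# Trimmed, made-up sample of what `nmap -sP -oX -` emits; in real use you'd
# capture the XML from nmap itself instead of this inline string.
xml = <<~XML
  <nmaprun>
    <host><status state="up"/><address addr="10.0.0.1" addrtype="ipv4"/></host>
    <host><status state="down"/><address addr="10.0.0.9" addrtype="ipv4"/></host>
  </nmaprun>
XML

doc = REXML::Document.new(xml)

# Keep only hosts whose <status> is "up", then pull out their addresses.
up_hosts = doc.get_elements("//host")
              .select { |h| h.elements["status"].attributes["state"] == "up" }
              .map    { |h| h.elements["address"].attributes["addr"] }

puts up_hosts.inspect
```

In the real pipeline you'd feed it the output of `nmap -sP -oX - 10.0.0.0/24` (the `-oX -` writes the XML to stdout) instead of the inline sample.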
Related
I am trying to set up a Docker container that can successfully scan a subnet device's MAC address using nmap. I've spent 3 days trying to figure out how to do it, but still failed.
For example:
The host IP: 10.19.201.123
The device IP: 10.19.201.101
I've set up a Docker container that can successfully ping both 10.19.201.123 and 10.19.201.101. But when I use nmap from the Docker container to scan for the MAC address, I get the following:
~$ sudo nmap -sP 10.19.201.101
Starting Nmap 7.01 ( https://nmap.org ) at 2018-05-29 08:57 UTC
Nmap scan report for 10.19.201.101
Host is up (0.00088s latency).
Nmap done: 1 IP address (1 host up) scanned in 0.39 seconds
However, if I use nmap to scan for the MAC address from the VM (10.19.201.100), I get:
~$ sudo nmap -sP 10.19.201.101
Starting Nmap 7.01 ( https://nmap.org ) at 2018-05-29 17:16 CST
Nmap scan report for 10.19.201.101
Host is up (0.00020s latency).
MAC Address: 0F:01:H5:W3:0G:J5(ICP Electronics)
Nmap done: 1 IP address (1 host up) scanned in 0.32 seconds
Please, can anyone help or give pointers on how to do this?
For anyone still struggling with this issue: I've figured out how to do it on Windows 10.
The solution is to make the container run on the same LAN as your local host, so nmap can scan the LAN devices successfully. Below is how to make your Docker container run on the host's LAN.
Windows 10 HOME
Change the virtual box setting
Stop the VM first, as administrator: docker-machine stop default
Open Virtual Box
Select default VM and click Settings
Go to Network page, and enable new Network Adapter on Adapter 3
(DO NOT CHANGE Adapter 1 & 2)
Attach Adapter 3 to Bridged Adapter with your physical network and click OK
Start the VM as administrator: docker-machine start default
Open Docker Quickstart Terminal to run a container; the new container should now run on the LAN.
Windows 10 PROFESSIONAL/ENTERPRISE
Create vSwitch with physical network adapter
Open Hyper-V Manager
Action list -> Open Virtual Switch Manager
Create new virtual switch -> select Type: External
Assign your physical network adapter to the vSwitch
Check "Allow management operating system to share this network adapter" and apply change
Go to Control Panel\All Control Panel Items\Network Connections.
Check the vEthernet you just created, and make sure the IPv4 setting is correct. (Sometimes the DHCP setting will be empty and you need to set it again here.)
Go back to Hyper-V Manager, and go into the Settings page of MobyLinuxVM (ensure it's shut down; if it's not, quit Docker).
Add Hardware > Network Adapter, select the vSwitch you just created and apply change
Modify Docker source code
Find the MobyLinux creation file: MobyLinux.ps1
(normally it's located at: X:\Program Files\Docker\Docker\resources)
Edit the file, and find the function: function New-MobyLinuxVM
Find below line in the function:
$vmNetAdapter = $vm | Hyper-V\Get-VMNetworkAdapter
Update it to:
$vmNetAdapter = $vm | Hyper-V\Get-VMNetworkAdapter | Select-Object -First 1
Save the file as administrator
Restart Docker, and the container should run on the LAN now.
I can see from a tcpdump that an internal Linux server is trying to contact an outside computer approximately every 15 min: one UDP packet on port 6881 (BitTorrent), that's all.
As this server isn't supposed to contact anyone, I want to find out what evil soul generated this packet, i.e. I need some information about the process (e.g. pid, file, ...).
Because the timespan is so short, I can't use netstat or lsof.
The process is likely active for only about half a microsecond before it gets a destination unreachable (port unreachable) from the firewall.
I have ssh access to the machine.
"How can I capture network packets per PID?" suggests using the tcpdump option -k; however, Linux tcpdump has no such option.
You can't do this with tcpdump, obviously, but you can do it from the host itself. Especially since it's UDP with no state, and since you can't predict when the process will be active, you should look into using the kernel audit capabilities. For example:
auditctl -a exit,always -F arch=b64 -F a0=2 -F a1\&=2 -S socket -k SOCKET
This instructs the kernel to generate an audit event whenever there is a socket call. With this done, you can then wait until you see the suspicious packet leave the machine and then use ausearch to track down not only the process, but the binary that made the call.
I'm trying to use distributed programming in Erlang.
But I have a problem: I can't get two Erlang nodes to communicate.
I tried setting the same atom as the magic cookie on both nodes, but it didn't work.
I tried the command net:ping(node), but the response was pang (it didn't recognize the other node); I also used nodes() to see if my first node could see the second one, but that didn't work either.
Both nodes are CentOS in VMware, using a bridged connection in the network adapter.
I ran ping between the VMs outside Erlang and they recognize each other.
I start the first node, then the second node opens a process but can't find the pong node.
(pong@localhost)8> tut17:start_pong().
true
(ping@localhost)5> c(tut17).
{ok,tut17}
(ping@localhost)6> tut17:start_ping(pong@localhost).
<0.55.0>
Thank you!
A similar question here.
The distribution is provided by a daemon called the Erlang Port Mapper Daemon (epmd). By default it listens on port 4369, so you need to make sure that port is open between the nodes. Additionally, each started Erlang VM opens an additional port to communicate with other VMs. You can see those ports with epmd -names:
g@someserv1:~ % epmd -names
epmd: up and running on port 4369 with data:
name hbd at port 22200
You can check if the port is open by telnetting to it, e.g.:
g@someserv1:~ % telnet 127.0.0.1 22200
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
^]
Connection closed by foreign host.
You can change the port to the one you want to check, e.g. 4369, and also the IP to the desired IP. Doing ping is not enough, because it uses its own protocol, ICMP, which is different from the TCP used by the Erlang distribution; e.g. ICMP may be allowed while TCP is blocked.
Edit:
Please follow this guide Distributed Erlang to start an Erlang VM in distributed mode. Then you can use net_adm:ping/1 to connect to it from another node, e.g.:
(hbd@someserv1.somehost.com)17> net_adm:ping('hbd@someserv2.somehost.com').
pong
Only then epmd -names will show the started Erlang VM on the list.
Edit2:
Assume that there are two hosts, A and B, each running one Erlang VM. epmd -names run on each host shows, for example:
Host A:
epmd: up and running on port 4369 with data:
name servA at port 22200
Host B:
epmd: up and running on port 4369 with data:
name servB at port 22300
You need to be able to do:
On Host A:
telnet HostB 4369
telnet HostB 22300
On Host B:
telnet HostA 4369
telnet HostA 22200
where HostA and HostB are those hosts' IP addresses (e.g. HostA is the IP of Host A, HostB is the IP of Host B).
If the telnet works correctly then you should be able to do net_adm:ping/1 from one host to the other, e.g. on Host A you would ping the name of Host B. The name is what the command node(). returns.
You need to make sure you have a node name for your nodes, or they won't be available to connect with. E.g.:
erl -sname somenode@node1
If you're using separate hosts, then you need to make sure that the node names are resolvable to IP addresses somehow. An easy way to do this is using /etc/hosts.
# Append a similar line to the /etc/hosts file
10.10.10.10 node1
For more helpful answers, you should post what you see in your terminal when you try this.
EDIT
It looks like your shell is auto-picking "localhost" as the host part of the node name. You can't send messages to another host with the address "localhost". When specifying the name on the shell, try using the @ syntax to specify the host as well:
# On host 1:
erl -sname ping@host1
# On host 2:
erl -sname pong@host2
Then edit the host file so host1 and host2 will resolve to the right IP.
How can I tell whether a port is open or closed? And what exactly do "open port" and "closed port" mean?
My favorite tool to check whether a specific port is open or closed is telnet. You'll find this tool on all major operating systems.
The syntax is: telnet <hostname/ip> <port>
This is what it looks like if the port is open:
telnet localhost 3306
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
This is what it looks like if the port is closed:
telnet localhost 9999
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host
Based on your use case, you may need to do this from a different machine, just to rule out firewall rules being an issue. For example, just because I am able to telnet to port 3306 locally doesn't mean that other machines are able to access port 3306. They may see it as closed due to firewall rules.
As far as what open/closed ports means, an open port allows data to be sent to a program listening on that port. In the examples above, port 3306 is open. MySQL server is listening on that port. That allows MySQL clients to connect to the MySQL database and issue queries and so on.
There are other tools to check the status of multiple ports. You can Google for Port Scanner along with the OS you are using for additional options.
A port that's open is a port to which you can connect (TCP) or send data (UDP). It is open because a process opened it.
There are many different types of ports. Those used on the Internet are TCP and UDP ports.
To see the list of existing connections you can use netstat (available under Unix and MS-Windows). Under Linux, it has the -l (--listen) command-line option to limit the list to open (i.e. listening) ports.
> netstat -n64l
...
tcp 0 0 0.0.0.0:6000 0.0.0.0:* LISTEN
...
udp 0 0 0.0.0.0:53 0.0.0.0:*
...
raw 0 0 0.0.0.0:1 0.0.0.0:* 7
...
In my example, I show a TCP port 6000 opened. This is generally for X11 access (so you can open windows between computers.)
The other port, 53, is a UDP port used by the DNS system. Notice that UDP port are "just opened". You can always send packets to them. You cannot create a client/server connection like you do with TCP/IP. Hence, in this case you do not see the LISTEN state.
The last entry here is a raw socket, bound in this case to protocol 1 (ICMP). Raw sockets are used by processes that build their IP packets directly, such as ping.
Update:
Since then netstat has been somewhat deprecated and you may want to learn about ss instead:
ss -l4n
-- or --
ss -l6n
Unfortunately, at the moment you have to select either -4 or -6 for the corresponding stack (IPv4 or IPv6).
If you're interested in writing C/C++ code or the like, you can read that information from /proc/net/.... For example, the TCP connections are found here:
/proc/net/tcp (IPv4)
/proc/net/tcp6 (IPv6)
Similarly, you'll see UDP files and a Unix file.
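Those files can of course be read from any language, not just C/C++. A minimal, Linux-only sketch in Ruby that pulls the listening TCP ports out of /proc/net/tcp (the hex field layout is documented in proc(5); socket state 0A is TCP_LISTEN):

```ruby
require "socket"

# Each data line of /proc/net/tcp looks roughly like:
#   0: 0100007F:0016 00000000:0000 0A ...
# Field 1 is the local address as HEXIP:HEXPORT; field 3 is the socket
# state, where "0A" means TCP_LISTEN.
def listening_ports
  File.readlines("/proc/net/tcp").drop(1)
      .map(&:split)
      .select { |f| f[3] == "0A" }                      # keep LISTEN sockets
      .map    { |f| f[1].split(":").last.to_i(16) }     # hex port -> integer
end

# Quick self-check: open a listener on an ephemeral port and confirm the
# parser sees it.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]
puts listening_ports.include?(port)
```

The same approach works for /proc/net/tcp6, /proc/net/udp, and so on; only the file name and the interpretation of the state column change.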
Programmatically, if you are only checking one port then you can just attempt a connection. If the port is open, then it will connect. You can then close the connection immediately.
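That connect-and-close check might look like this in Ruby (a sketch; host, port, and timeout are whatever you want to probe):

```ruby
require "socket"
require "timeout"

# Return true if a TCP connection to host:port succeeds within `seconds`,
# false if it is refused, unreachable, or times out.
def port_open?(host, port, seconds = 2)
  Timeout.timeout(seconds) do
    TCPSocket.new(host, port).close   # connect, then close immediately
    true
  end
rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH, SocketError, Timeout::Error
  false
end

# e.g. port_open?("127.0.0.1", 6000) tells you whether the X11 port from
# the netstat output above is reachable.
```

Note this only works for TCP: a UDP "port" has no connection handshake, so there is nothing to connect to and you'd have to send a datagram and wait for an ICMP port-unreachable (or silence) instead.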
Finally, there is the Kernel direct socket connection for socket diagnostics like so:
int s = socket(
AF_NETLINK
, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK
, NETLINK_SOCK_DIAG);
The main problem I have with that one is that it does not really send you events when something changes. But you can read the current state in structures which is safer than attempting to parse files in /proc/....
I have some code handling such a socket in my eventdispatcher library. It still has to poll to get the data, since the kernel does not generate events on its own (a push model would be better, since it would fire only when an event actually happens).
I'm looking at ZeroMQ Realtime Exchange Protocol (ZRE) as inspiration for building an auto-discovery of peers in a distributed application.
I've built a simple prototype application using UDP in Python following this model. It seems it has the (obvious, in retrospect) limitation that it only works for detecting peers if all peers are on other machines. This is due to the socket bind operation on the discovery port.
Reading up on SO_REUSEADDR and SO_REUSEPORT tells me that I can't exactly do this with the UDP broadcast scheme as described in ZRE.
If you needed to build an auto-discovery mechanism for distributed applications such that multiple application instances (possibly with different versions) can run on the same machine, how would you build it?
You should be able to bind each server instance to a different address. The entire 127.0.0.0/8 subnet resolves to your localhost, so you can set up - for example - one service listening on 127.0.0.1, another listening on 127.0.0.2, and so on: anything from 127.0.0.1 to 127.255.255.254.
# works as expected
nc -l 127.0.0.100 3000 &
nc -l 127.0.0.101 3000 &
# shows error "nc: Address already in use"
nc -l 127.0.0.1 3000 &
nc -l 127.0.0.1 3000 &
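The same behavior is easy to verify from code. A small Ruby sketch (assuming Linux, where all of 127.0.0.0/8 is usable on the loopback interface without extra configuration):

```ruby
require "socket"

# Grab an ephemeral port on 127.0.0.1 ...
a = TCPServer.new("127.0.0.1", 0)
port = a.addr[1]

# ... and bind the very same port number on 127.0.0.2. This succeeds
# because the kernel keys listeners on the (address, port) pair, not the
# port number alone.
b = TCPServer.new("127.0.0.2", port)

# A second bind to 127.0.0.1 on that port, however, raises Errno::EADDRINUSE,
# just like the second `nc -l 127.0.0.1 3000` above.
begin
  TCPServer.new("127.0.0.1", port)
rescue Errno::EADDRINUSE => e
  puts "second bind to 127.0.0.1:#{port} refused: #{e.class}"
end
```

For the discovery problem this means each application instance can claim its own loopback (or interface) address while sharing a well-known port number.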