Docker: Linux container on Windows 10, how to use nmap to scan a device's MAC address

I am trying to set up a Docker container that can scan the MAC addresses of devices on the subnet using nmap. I've spent three days trying to figure out how to do it but have still failed.
For example:
The host IP: 10.19.201.123
The device IP: 10.19.201.101
I've set up a Docker container that can ping both 10.19.201.123 and 10.19.201.101 successfully. But when I use nmap to scan the MAC address from the Docker container, I get this:
~$sudo nmap -sP 10.19.201.101
Starting Nmap 7.01 ( https://nmap.org ) at 2018-05-29 08:57 UTC
Nmap scan report for 10.19.201.101
Host is up (0.00088s latency).
Nmap done: 1 IP address (1 host up) scanned in 0.39 seconds
However, if I use nmap to scan the MAC address from a VM (10.19.201.100), I get:
~$sudo nmap -sP 10.19.201.101
Starting Nmap 7.01 ( https://nmap.org ) at 2018-05-29 17:16 CST
Nmap scan report for 10.19.201.101
Host is up (0.00020s latency).
MAC Address: 0F:01:H5:W3:0G:J5(ICP Electronics)
Nmap done: 1 IP address (1 host up) scanned in 0.32 seconds
Please, can anyone help or give pointers on how to do this?

For anyone still struggling with this issue, I've figured out how to do it on Windows 10.
The solution is to make the container run on the same LAN as your local host, so that nmap can reach the LAN devices at layer 2: nmap only reports MAC addresses when it can ARP-scan the target directly on the local segment, and from behind the container's NAT network it falls back to ICMP/TCP pings, which carry no MAC information. Below is how to make your Docker container run on the host's LAN.
Windows 10 HOME
Change the VirtualBox settings
Stop the VM first (run as administrator): docker-machine stop default
Open VirtualBox
Select the default VM and click Settings
Go to the Network page and enable a new network adapter on Adapter 3
(DO NOT CHANGE Adapter 1 & 2)
Attach Adapter 3 to "Bridged Adapter" with your physical network and click OK
Start the VM (run as administrator): docker-machine start default
Open the Docker Quickstart Terminal to run a container; the new container should now run on the LAN.
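To check that the bridged adapter really picked up a LAN address, you can look at the VM's interfaces; a quick sketch, assuming the third adapter shows up as eth2 inside the boot2docker VM (the interface name may differ):
docker-machine ssh default ifconfig eth2
It should show an address from your physical LAN (for example 10.19.201.x) rather than the usual host-only/NAT ranges.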
Windows 10 PROFESSIONAL/ENTERPRISE
Create vSwitch with physical network adapter
Open Hyper-V Manager
Action list -> Open Virtual Switch Manager
Create new virtual switch -> select Type: External
Assign your physical network adapter to the vSwitch
Check "Allow management operating system to share this network adapter" and apply change
Go to Control Panel\All Control Panel Items\Network Connections.
Check the vEthernet adapter you just created and make sure its IPv4 settings are correct (sometimes the DHCP setting will be empty and you need to set it again here).
Go back to Hyper-V Manager and go into the Settings page of MobyLinuxVM (make sure it is shut down; if it isn't, quit Docker).
Add Hardware > Network Adapter, select the vSwitch you just created and apply the change.
Modify Docker source code
Find the MobyLinux creation file: MobyLinux.ps1
(normally it's located at: X:\Program Files\Docker\Docker\resources)
Edit the file, and find the function: function New-MobyLinuxVM
Find the line below in the function:
$vmNetAdapter = $vm | Hyper-V\Get-VMNetworkAdapter
Update it to the following (this makes the script configure only the first adapter, so the bridged adapter you added in Hyper-V is left untouched):
$vmNetAdapter = $vm | Hyper-V\Get-VMNetworkAdapter | Select-Object -First 1
Save the file (as administrator).
Restart Docker, and the container should run on the LAN now.
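Once the container shares the host's LAN, you can re-check MAC discovery with an ARP ping scan. This is only a sketch, assuming an image that has nmap installed (the image name and subnet are placeholders):
docker run --rm --privileged --network host my-nmap-image nmap -sn -PR 10.19.201.0/24
# -sn skips the port scan (the modern spelling of -sP) and -PR forces ARP ping,
# which is what lets nmap report MAC addresses for hosts on the local segment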

Related

weblogic in docker trying to use the public port in the container when 7101:7001 port mapping is used

I am starting a WebLogic 12.2.1.4 admin server in docker from my docker-compose.yml file.
I use a different port mapping, not the default 7001.
My docker port mapping is this: 7101:7001
Everything works fine, except this: I constantly get the following exception when I click on the Deployment menu on the web console:
<Feb 12, 2021 5:11:21,002 PM UTC> <Notice> <JMX> <BEA-149535> <JMX Resiliency Activity Server=All Servers : Resolving connection list DomainRuntimeServiceMBean>
javax.ws.rs.ProcessingException: java.net.ConnectException: Tried all: '1' addresses, but could not connect over HTTP to server: 'localhost', port: '7101'
failed reasons:
[0] address:'localhost/127.0.0.1',port:'7101' : java.net.ConnectException: Connection refused
The WL admin server tries to use the public Docker port 7101 inside the container, but WL is actually listening on the default port 7001 there. Port 7101 is only used from the host machine, so of course WL is not listening on port 7101 inside the container.
My workaround is the following:
Check the IP address of the admin-server container with docker inspect <container-name>
Open the WL console using the container private IP address, e.g.: http://172.19.0.2:7001/console
In this case, the exception does not appear
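(As a side note, the container IP lookup in the first step can be done in one line with an inspect format filter; this is just a sketch and admin-server is a placeholder for the real container name:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' admin-server
which prints the private address, e.g. 172.19.0.2, so the console is reachable at http://172.19.0.2:7001/console.)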
But if I open the WL console from http://localhost:7101/console, which is the port mapped to the host machine by Docker, then the exception appears.
Maybe this is a WL user interface issue? But I am not sure.
Any idea why this is happening?

PyCharm cannot use interpreter in server docker-machine (Channel disconnected before any data was received)

I am using macOS + PyCharm (Professional) and an Ubuntu 16.04 server.
When I try to connect to the server's Docker from my local PyCharm, I get the following error:
( Cannot connect: java.io.IOException: Channel disconnected before any data was received )
(I changed the Docker daemon port.)
I also tried scanning the ports, and that worked:
Port Scanning host: 163.---.---.178
Open TCP Port: 7561
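For reference, exposing the daemon on a custom TCP port is normally done with the -H flag; a rough sketch using the port above (note that exposing the daemon without TLS is insecure):
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:7561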
Even if I tried changing tcp to https, it gave an error.
How can I fix it?
Set this item first, then reboot your laptop. Run PyCharm as administrator; it worked for me.

Joining a Docker swarm

I have 2 VMs.
On the first I run:
docker swarm join-token manager
On the second I run the result from this command.
i.e.
docker swarm join --token SWMTKN-1-0wyjx6pp0go18oz9c62cda7d3v5fvrwwb444o33x56kxhzjda8-9uxcepj9pbhggtecds324a06u 192.168.65.3:2377
However, this outputs:
Error response from daemon: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 192.168.65.3:2377: connect: connection refused"
Any idea what's going wrong?
If it helps I'm spinning up these VMs using Vagrant.
Just add the port to the firewall on the master side:
firewall-cmd --add-port=2377/tcp --permanent
firewall-cmd --reload
Then try docker swarm join again on the second VM (the worker node).
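If you also use overlay networking, the other swarm ports can be opened the same way (a sketch):
firewall-cmd --add-port=7946/tcp --permanent
firewall-cmd --add-port=7946/udp --permanent
firewall-cmd --add-port=4789/udp --permanent
firewall-cmd --reload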
I was facing a similar issue and spent a couple of hours figuring out the root cause, so I'm sharing it for anyone who runs into the same problem.
Environment:
Oracle Cloud + AWS EC2 (2 + 2 instances)
OS: Ubuntu 20.04.2
Docker version: 20.10.8
3 dynamic public IPs + 1 elastic IP
Issues
1. Created two instances on Oracle Cloud at the beginning.
2. Instance A (manager): docker swarm init --advertise-addr succeeded.
3. Instance B (worker): docker swarm join as a worker succeeded.
4. When I tried to promote B to manager, I encountered the error:
Unable to connect to remote host: No route to host
5. Mesh routing was not working properly.
Investigation
1. Suspected it was related to the network / firewall / security group / security list.
2. SSH'd to server B (the worker) and ran telnet against the manager on port 2377, getting the same error:
Unable to connect to remote host: No route to host
3. Logged into the Oracle console and added ingress rules under the security list for all the relevant ports:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
4. Tried again, but telnet still failed with the same error.
5. Checked the OS-level firewall; if enabled, disable it:
sudo ufw disable
6. Tried again, but still the same result.
7. Suspected something was wrong with Oracle Cloud, so I decided to install the same OS and Docker versions on AWS.
8. Added a security group allowing all the relevant ports/protocols and disabled ufw.
9. Tested with AWS instances C (leader/manager) + D (worker). It worked, D could be promoted to manager, and mesh routing also worked.
10. This confirmed the issue was with the Oracle Cloud instances.
11. Tried to join the Oracle instance (A) to C as a worker. It worked, but A still could not be promoted to manager.
12. Used journalctl -f to investigate the logs and confirmed there were socket timeouts from A/B (the Oracle instances) to the AWS instance (C).
13. Looked at A/B again and found iptables rules blocking requests.
14. Removed all of the iptables rules:
# remove the rules
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
Root Cause
It was caused by a firewall, either at the cloud level (security list / WAF / ACL) or at the OS level (e.g. ufw/iptables rules).
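A quick way to check the OS-level part on each node is a sketch like this:
sudo ufw status      # should be inactive, or must allow 2377/tcp, 7946/tcp+udp and 4789/udp
sudo iptables -L -n  # look for DROP/REJECT rules matching the swarm ports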
I had already run firewall-cmd --add-port=2377/tcp --permanent and firewall-cmd --reload on the master side and was still getting the same error.
I ran telnet <master ip> 2377 on the worker node and then rebooted the master.
After that it worked fine.
It looks like your Docker swarm manager leader is not listening on port 2377. You can check it by running this command on your swarm manager leader VM; if it is working fine you will get output similar to this:
[root@host1]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
tilzootjbg7n92n4mnof0orf0 * host1 Ready Active Leader
Furthermore, you can check the listening ports on the leader swarm manager node. It should have TCP port 2377 (cluster management communications) and TCP/UDP port 7946 (communication among nodes) open:
[root@host1]# netstat -ntulp | grep dockerd
tcp6 0 0 :::2377 :::* LISTEN 2286/dockerd
tcp6 0 0 :::7946 :::* LISTEN 2286/dockerd
udp6 0 0 :::7946 :::* 2286/dockerd
On the second VM, where you are configuring the second swarm manager, you will have to make sure you have connectivity to port 2377 of the leader swarm manager. You can use tools like telnet, wget, or nc to test the connectivity, as shown below:
[root@host2]# telnet <swarm manager leader ip> 2377
Trying 192.168.44.200...
Connected to 192.168.44.200.
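If telnet is not installed, nc can do the same check (a sketch):
nc -zv <swarm manager leader ip> 2377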
In my case I was on Linux and Windows. My Windows Docker private network was the same as my local network's address range, so the Docker daemon looked for the master on its own internal network instead of reaching the address I gave it.
So I did:
1- Go to the Docker Desktop app
2- Go to Settings
3- Go to Resources
4- Go to the Network section and change the Docker subnet address (it needs to be different from your local subnet address)
5- Apply and restart
6- Run docker swarm join on the worker again
Note: all these steps are performed on the node where the error appears. Make sure that ports 2377, 7946 and 4789 are open on the master (you can use iptables or ufw).
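With ufw, for example, that would be something like this sketch:
sudo ufw allow 2377/tcp
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp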
Hope it works for you.

Stopping the IP connection between a client and the server for 30 seconds and bringing the link back up after that period

Here is my configuration:
On the server I have a DHCP server that gives IP addresses to clients (connected on eth1) so that the clients can reach the internet (via eth0).
For a special operational use, I would like to stop the IP connection between a client and the server for 30 seconds and bring the link back up after that period. Currently I have tried to blacklist the client IP with iptables using this command:
command 1: sudo iptables -I INPUT -s '.$ip.' -j DROP
then I use another iptables command to resume the IP link:
command 2: sudo iptables -D INPUT -s '.$ip.' -j DROP
Both commands are wrapped in a PHP program stored on an Ubuntu server and launched from a Windows workstation. Both commands work perfectly, but unfortunately I never get the internet connection back. From a Windows command prompt I can monitor the behaviour of the line with ping commands; the ping output after command 1 ("stop the network") and after command 2 ("start the network") was shown in screenshots.
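Put together, the 30-second cycle the PHP script drives is essentially the following sketch ($ip stands for the client's address):
sudo iptables -I INPUT -s "$ip" -j DROP   # block the client
sleep 30                                   # wait 30 seconds
sudo iptables -D INPUT -s "$ip" -j DROP   # remove the rule again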
Here is my Question:
Can someone tell me how to bring the local IP socket back up in order to re-establish the TCP connection after command 2?
Another way of doing this might be to stop using iptables and instead dynamically modify the lease time of the client's IP address given by the DHCP service.
Stopping the connection: by bringing forward the end of the IP address's lease period. Bringing the connection back up: by issuing an HTTP request to trigger the establishment of a TCP connection with a new lease time.
Can someone tell me how to overcome this?
Thanks very much.

Erlang - Nodes don't recognize each other

I'm trying to use distributed programming in Erlang.
But I have a problem: I can't get two Erlang nodes to communicate.
I tried to set the same atom as the "magic cookie", but it didn't work.
I tried to use the command net:ping(Node), but the response was pang (it didn't recognize the other node); I also used nodes() to see if my first node could see the second one, but that didn't work either.
Both nodes are CentOS VMs in VMware, using a bridged network adapter.
I ran ping between the VMs outside of Erlang and they can reach each other.
I start the first node; the second node starts the process but can't find the pong node.
(pong@localhost)8> tut17:start_pong().
true
(ping@localhost)5> c(tut17).
{ok,tut17}
(ping@localhost)6> tut17:start_ping(pong@localhost).
<0.55.0>
Thank you!
The distribution is provided by a daemon called Erlang Port Mapper Daemon. By default it listens on port 4369 so you need to make sure that that port is opened between the nodes. Additionally, each started Erlang VM opens an additional port to communicate with other VMs. You can see those ports with epmd -names:
g@someserv1:~ % epmd -names
epmd: up and running on port 4369 with data:
name hbd at port 22200
You can check if the port is opened by doing telnet to it, e.g.:
g@someserv1:~ % telnet 127.0.0.1 22200
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
^]
Connection closed by foreign host.
You can change the port to the one you want to check, e.g. 4369, and also the IP to the desired IP. Doing ping is not enough because it uses its own protocol (ICMP), which is different from the TCP used by the Erlang distribution to communicate; e.g. ICMP may be allowed while TCP is blocked.
Edit:
Please follow this guide Distributed Erlang to start an Erlang VM in distributed mode. Then you can use net_adm:ping/1 to connect to it from another node, e.g.:
(hbd@someserv1.somehost.com)17> net_adm:ping('hbd@someserv2.somehost.com').
pong
Only then epmd -names will show the started Erlang VM on the list.
Edit2:
Assume that there are two hosts, A and B, each running one Erlang VM. epmd -names run on each host shows, for example:
Host A:
epmd: up and running on port 4369 with data:
name servA at port 22200
Host B:
epmd: up and running on port 4369 with data:
name servB at port 22300
You need to be able to do:
On Host A:
telnet HostB 4369
telnet HostB 22300
On Host B:
telnet HostA 4369
telnet HostA 22200
where HostA and HostB are those hosts' IP addresses (e.g. HostA is the IP of Host A and HostB is the IP of Host B).
If the telnet works correctly then you should be able to do net_adm:ping/1 from one host to the other, e.g. on Host A you would ping the name of Host B. The name is what the command node(). returns.
You need to make sure you have a node name for your nodes, or they won't be available to connect with. E.g.:
erl -sname somenode@node1
If you're using separate hosts, then you need to make sure that the node names are resolvable to IP addresses somehow. An easy way to do this is by using /etc/hosts.
# Append a similar line to the /etc/hosts file
10.10.10.10 node1
For more helpful answers, you should post what you see in your terminal when you try this.
EDIT
It looks like your shell is automatically picking "localhost" as the host part of the node name. You can't send messages to another host with the address "localhost". When specifying the name on the shell, use the @ syntax to specify the host as well:
# On host 1:
erl -sname ping@host1
# On host 2
erl -sname pong@host2
Then edit the host file so host1 and host2 will resolve to the right IP.
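Since the question also mentions the magic cookie, both nodes need to share it too; a combined startup sketch (the cookie value is just an example):
# on host1:
erl -sname ping -setcookie secretcookie
# on host2:
erl -sname pong -setcookie secretcookie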
