Good morning,
I'm a new user of Docker and I'm facing an issue.
Any help will be appreciated (I have updated the question, thanks to the questions asked by @JRichardsz).
Here is the problem:
Docker host: Ubuntu 20.04.2 LTS (a VMware virtual machine).
On the host no process is using port 80 (running "sudo lsof -i :80" returns no process name if I do not start any container).
If I stop the running containers with "docker compose down", I have 2 network interfaces:
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:2f:a6:ae:d2 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.128.60 netmask 255.255.255.0 broadcast 192.168.128.255
inet6 fe80::250:56ff:fea6:60bb prefixlen 64 scopeid 0x20<link>
ether 00:50:56:a6:60:bb txqueuelen 1000 (Ethernet)
RX packets 2929 bytes 251888 (251.8 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 424 bytes 53706 (53.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
When I start my containers (from an image provided by the Xibo CMS developers), the network interfaces related to them appear, and I'm able to view the bridge information:
bridge name bridge id STP enabled interfaces
br-33e8e1916c0d 8000.024261f9ddaa no veth0b9040e
veth3c6ecf0
veth821b253
vethe879c11
docker0 8000.02422fa6aed2 no
When the containers are running, the command "sudo lsof -i :80" gives me:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
docker-pr 25417 root 4u IPv4 122292 0t0 TCP *:http (LISTEN)
docker-pr 25423 root 4u IPv6 122300 0t0 TCP *:http (LISTEN)
So the web service inside the container is published on all interfaces (the "docker-pr" entries are docker-proxy, truncated by lsof).
If I open a browser on the Linux docker host at the URL http://127.0.0.1/, I can access the web server page without any problem.
If I open a browser at http://192.168.128.80/ on a PC connected to the same subnet as ens160, I'm not able to reach the web server.
Please note, no firewall is active (if I shut down the containers with "docker compose down" and, from a PC on the same subnet as ens160, open a telnet connection to port 80 on 192.168.128.80, I can capture the packets using tcpdump).
After stopping the containers, I ran the following command on the docker host:
sudo python3 -m http.server 80
Connecting from a different PC using PuTTY telnet on port 80, I can see request messages on the python command line (so, again, no firewall issue).
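For completeness, a capture on the incoming interface while the containers are running would show whether the remote PC's packets reach the host at all (a diagnostic sketch; the interface name is taken from the ifconfig output above):
sudo tcpdump -ni ens160 tcp port 80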
Is there anyone who can help me find out why the web service inside the container can only be reached when opening the web page from the Linux docker host itself?
Any help would be very welcome.
As I understand it, IPv6 addresses are allocated in blocks. Each machine gets a range of IPv6 addresses, and any IPv6 address in that range would point to it.
Basis for this assumption:
https://stackoverflow.com/a/15266701/681671
The /64 is the prefix length. It is the number of bits in the address
that is fixed. So a /64 indicates that the first 64 bits of the
128-bit IPv6 address are fixed. The remaining bits (64 in this case)
are flexible, and you can use all of them. This means that when your
ISP gives you a /64 they are giving you 2^64 addresses (that is
18,446,744,073,709,551,616 addresses).
Edit: I confirmed using Wireshark that the packets sent to any IP in that /64 range do get routed to my server.
Looking at this line from ifconfig output
inet6 2a01:2e8:d2c:e24c::1 prefixlen 64 scopeid 0x0<global>
I conclude that all IPv6 addresses with the 2a01:2e8:d2c:e24c prefix will point to my machine.
However I am unable to bind any service to any IPv6 address other than
2a01:2e8:d2c:e24c:0000:0000:0000:0001
nc -l 2a01:2e8:d2c:e24c:0000:0000:0000:0002 80 Does not work
nc -l 2a01:2e8:d2c:e24c:0000:0000:0001:0001 80 Does not work
nc -l 2a01:2e8:d2c:e24c:1000:0000:0000:0001 80 Does not work
nc -l 2a01:2e8:d2c:e24c:0000:0000:0000:0001 80 Only this works
nc -l <IP> <PORT> opens up a simple TCP server on the specified IP and port.
The error I get is nc: Cannot assign requested address
I want to run multiple instances of a service on the same port but on different IPv6 addresses. Since public IPv6 addresses are abundantly available to each machine, I thought of utilizing them.
ifconfig:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 88.77.66.55 netmask 255.255.255.255 broadcast 88.77.66.55
inet6 fe80::9300:ff:fe33:64c1 prefixlen 64 scopeid 0x20<link>
inet6 2a01:2e8:d2c:e24c::1 prefixlen 64 scopeid 0x0<global>
ether 96:00:00:4e:31:e4 txqueuelen 1000 (Ethernet)
RX packets 26788391 bytes 21199864639 (21.1 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 21940989 bytes 20045216536 (20.0 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
OS: Ubuntu 18.04
VPS Host: Hetzner
I am actually trying to run multiple nginx docker containers mapped to port 80 on different IPv6 addresses of the host. That is when I encountered the problem. The nc -l test is just to simplify the problem description.
I conclude that all IPv6 addresses with the 2a01:2e8:d2c:e24c prefix will point to my machine
That assumption is wrong. The prefix length has the same meaning as the IPv4 netmask. It determines which addresses are on your local network, not which addresses belong to your local host.
This is all you need:
ip route add local 2a01:2e8:d2c:e24c::/64 dev lo
Credit: Can I bind a (large) block of addresses to an interface?
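With that route in place (Linux's "AnyIP" feature, which tells the kernel to treat the whole /64 as local), the binds that failed in the question should start to succeed, e.g.:
ip route add local 2a01:2e8:d2c:e24c::/64 dev lo
nc -l 2a01:2e8:d2c:e24c:0000:0000:0000:0002 80   Now works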
To reiterate and expand upon Sander's answer:
You must bind each individual IP address to the NIC (network interface card) before it will accept traffic for that address and send it up the stack.
Wireshark puts the NIC into promiscuous mode, i.e. it captures all traffic received, which is why the capture in the question sees those packets even though the stack drops them.
There is a practical limit to how many IP addresses can be assigned on a system, MUCH less than the 2^64 required by the OP's post. Storing the addresses alone would take more memory than any system has.
Unlike IPv4 with its 127.0.0.0/8 range (2^24 loopback addresses), IPv6 defines only a single loopback address, ::1/128.
The only practical solution would be to treat the entire IPv6 subnet as a "reverse" NAT using IP masquerading. This would require a second instance acting as the NAT "router", with rules rewriting the destination addresses to a single address/port.
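A minimal sketch of such a destination-rewriting rule with ip6tables (the prefix and address are reused from the question; a full setup would also need forwarding and return-path rules):
ip6tables -t nat -A PREROUTING -d 2a01:2e8:d2c:e24c::/64 -p tcp --dport 80 -j DNAT --to-destination [2a01:2e8:d2c:e24c::1]:80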
I have installed the HortonWorks Docker sandbox as per the instructions.
It seems to be running; when I type:
sudo docker ps
it shows that the sandbox is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
23dbac10e27b hortonworks/sandbox-hdp:3.0.1 "/usr/sbin/init" 20 minutes ago Up 20 minutes 22/tcp, 4200/tcp, 8080/tcp sandbox-hdp
But when I visit localhost:8080 in the browser I do not get any response.
I also read that I should run ifconfig to verify the IP address:
Not sure what I should be looking for here:
br-193585a7edfa: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:62ff:fe32:c2fc prefixlen 64 scopeid 0x20<link>
ether 02:42:62:32:c2:fc txqueuelen 0 (Ethernet)
RX packets 5 bytes 256 (256.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 24 bytes 3241 (3.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
EDIT:
I'm starting it with this command, no ports specified:
docker start sandbox-hdp
As shown in the instructions:
Also I get the same port mapping that is shown in the documentation:
Docker creates a separate network for each container.
If you need a port in that network mapped to a port on the host system, you have to publish it with -p. Ports can only be published when a container is created (with docker run or docker create), not with docker start, so you need:
docker run -p <port_at_host_system>:<port_in_container> <image>
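A sketch of the full sequence for your case (the HDP sandbox typically needs several more ports published; check its install docs for the full list):
docker rm -f sandbox-hdp
docker run -d --name sandbox-hdp -p 8080:8080 hortonworks/sandbox-hdp:3.0.1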
I am trying to connect (bind) to an OpenDJ server running in Docker.
(I know how to connect to a regular, non-Docker OpenDJ server.)
OpenDJ seems to run, but when I try to connect to it with an LDAP browser, it says "Unable to connect".
--- Server Status ---
Server Run Status: Started
Open Connections: 1
--- Server Details ---
Host Name: 14e1e92e962e
Administrative Users: cn=Directory Manager
Installation Path: /opt/opendj
Instance Path: /opt/opendj/data
Version: OpenDJ Server 4.4.3
Java Version: 1.8.0_111
Administration Connector: Port 4444 (LDAPS)
--- Connection Handlers ---
Address:Port : Protocol : State
-------------:------------------------:---------
-- : LDIF : Disabled
0.0.0.0:161 : SNMP : Disabled
0.0.0.0:1389 : LDAP (allows StartTLS) : Enabled
0.0.0.0:1636 : LDAPS : Enabled
0.0.0.0:1689 : JMX : Disabled
0.0.0.0:8080 : HTTP : Disabled
--- Data Sources ---
Base DN: dc=example,dc=com
Backend ID: userRoot
Entries: 1
Replication:
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
14e1e92e962e openidentityplatform/opendj "/opt/opendj/run.sh" 18 hours ago Up 18 hours
[root@localhost ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:5ff:fe0f:a03 prefixlen 64 scopeid 0x20<link>
ether ******** txqueuelen 0 (Ethernet)
RX packets 5 bytes 254 (254.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7 bytes 647 (647.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.89 netmask 255.255.255.0 broadcast 192.168.0.255
inet6 fe80::1db8:91e1:5276:4f9 prefixlen 64 scopeid 0x20<link>
ether ******** txqueuelen 1000 (Ethernet)
RX packets 796434 bytes 512206712 (488.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 479946 bytes 41277150 (39.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@localhost ~]# docker run -it 1e03b62c213e /bin/bash
Instance data Directory is empty. Creating new DJ instance
BASE DN is dc=example,dc=com
Password set to password
Running /opt/opendj/bootstrap/setup.sh
Setting up default OpenDJ instance
Configuring Directory Server ..... Done.
Configuring Certificates ..... Done.
Creating Base Entry dc=example,dc=com ..... Done.
Starting Directory Server ...... Done.
To see basic server configuration status and configuration, you can launch
/opt/opendj/bin/status
Server Run Status: Started
The LDAP server is running at 192.168.0.89 on port 1389, so I try to connect with that. I am unable to fetch the Base DN as well; I tried putting the Base DN in manually too. I also tried 172.17.0.1 (which seems to be a Docker IP, per ifconfig), but no luck.
Question:
With Docker, do I need a different hostname? Or a different IP? Or some additional configuration? (BTW, putting an IP in the hostname field has worked for me many times before.)
Error message :
Error while opening connection
- Unable to connect
java.lang.Exception: Unable to connect
at org.apache.directory.studio.connection.core.io.api.DirectoryApiConnectionWrapper$1.run(DirectoryApiConnectionWrapper.java:251)
at org.apache.directory.studio.connection.core.io.api.DirectoryApiConnectionWrapper.runAndMonitor(DirectoryApiConnectionWrapper.java:1312)
at org.apache.directory.studio.connection.core.io.api.DirectoryApiConnectionWrapper.doConnect(DirectoryApiConnectionWrapper.java:281)
at org.apache.directory.studio.connection.core.io.api.DirectoryApiConnectionWrapper.connect(DirectoryApiConnectionWrapper.java:172)
at org.apache.directory.studio.connection.core.jobs.OpenConnectionsRunnable.run(OpenConnectionsRunnable.java:111)
at org.apache.directory.studio.connection.core.jobs.StudioConnectionJob.run(StudioConnectionJob.java:109)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:60)
Unable to connect
You need to publish ports 1389 and 1636.
Change your docker run command to
docker run -it -p 1389:1389 -p 1636:1636 <image ID> /bin/bash
You can also run your container in host networking mode, where you don't need port mapping.
docker run -it --net=host <image ID> /bin/bash
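Once port 1389 is published, a quick search from the host should confirm the server is reachable before trying the LDAP browser (a sketch using OpenDJ's bundled client and the default credentials printed by the setup output above):
/opt/opendj/bin/ldapsearch -h 192.168.0.89 -p 1389 -D "cn=Directory Manager" -w password -b "dc=example,dc=com" "(objectClass=*)"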
Hope this helps.
Look at your docker ps output: you do not publish any ports.
Add this to your docker run command:
-p 1389:1389 -p 1636:1636
On the host, there is a service
#server# netstat -ln | grep 3308
tcp6 0 0 :::3308 :::* LISTEN
It can be reached from remote.
The container is in a user-defined bridge network.
The server IP address is 192.168.1.30
[root@localhost ~]# ifconfig
br-a54fd3b63acd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:1eff:fecc:92e8 prefixlen 64 scopeid 0x20<link>
ether 02:42:1e:cc:92:e8 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:37ff:fe9f:e4f1 prefixlen 64 scopeid 0x20<link>
ether 02:42:37:9f:e4:f1 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 34 bytes 4018 (3.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.30 netmask 255.255.255.0 broadcast 192.168.1.255
And ping from the container also works.
root@33208c18aa61:~# ping -c 2 192.168.1.30
PING 192.168.1.30 (192.168.1.30) 56(84) bytes of data.
64 bytes from 192.168.1.30: icmp_seq=1 ttl=64 time=0.120 ms
64 bytes from 192.168.1.30: icmp_seq=2 ttl=64 time=0.105 ms
And the service is available.
#server# telnet 192.168.1.30 3308
Trying 192.168.1.30...
Connected to 192.168.1.30.
Escape character is '^]'.
N
But the service can't be reached from the container.
root@33208c18aa61:~# telnet 192.168.1.30 3308
Trying 192.168.1.30...
telnet: Unable to connect to remote host: No route to host
I checked
Make docker use IPv4 for port binding
make sure I didn't have IPv6 set to only bind on IPv6
# sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
From inside of a Docker container, how do I connect to the localhost of the machine?
and found that my route is a little different.
# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default router.asus.com 0.0.0.0 UG 100 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-a54fd3b63acd
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
Does it matter? Or could it be another reason?
Your docker container is in a different network namespace and connected to a different interface than your host machine; that's why you can't reach it using the IP 192.168.x.x.
What you need to do is use the docker network gateway instead, in your case 172.17.0.1. Be aware that this IP might not be the same from host to host, so to reproduce this everywhere and be completely sure of the IP, you can create a user-defined network specifying the subnet and gateway, and run your container there, for example:
docker network create -d bridge --subnet 172.16.0.0/24 --gateway 172.16.0.1 dockernet
docker run --net=dockernet ubuntu
Also, whatever service you are trying to connect to here must be listening on the docker bridge interface as well.
Another option is to run the container in the same network namespace as the host with the --net=host flag; in this case you can access services outside the container using localhost.
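A minimal sketch of that host-networking option, reusing the service port from the question (the ubuntu image is just an example):
docker run --net=host -it ubuntu /bin/bash
# inside the container, the host's service answers on localhost;
# bash's built-in /dev/tcp avoids having to install telnet first:
(echo > /dev/tcp/127.0.0.1/3308) && echo "port 3308 open"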
Inspired by the official documentation:
The Docker bridge driver automatically installs rules in the host
machine so that containers on different bridge networks cannot
communicate directly with each other.
I checked the iptables on the server; as an experiment I stopped iptables temporarily, and the container could then reach the service successfully. Later I was told the server had been rebooted recently, so I guess some config was lost with that reboot. I am not very familiar with iptables, and when I try
systemctl status iptables.service
it says the service is not installed. After I install and run the service,
iptables -L -n
is almost empty. So for now I have no clue which iptables rules could cause that mess.
But if anyone faces this situation where ping succeeds but telnet fails, iptables could be where the root cause lies.
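For what it's worth, Docker installs its own forwarding and masquerade rules when the daemon starts, so if an iptables restart or a reboot flushed them, restarting the daemon is the usual way to get them back (a general suggestion, not verified on this particular server):
systemctl restart docker
iptables -L -n   # the DOCKER and DOCKER-USER chains should reappear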
I am running two VMs on OpenStack Mirantis. For simplicity let's call them host-1 and host-2. I am unable to communicate either from container to container across the hosts, or from container to the public Internet. On each host I have installed Docker 1.12.3 and run the following --
tee Dockerfile <<-'EOF'
FROM centos
RUN yum -y install net-tools bind-utils iputils*
EOF
Later --
docker build -t crazy:3 .
On host-1 :--
dockerd --ipv6 --fixed-cidr-v6="2001:1b76:2400:e2::2/64" &
docker run -i -t --entrypoint /bin/bash crazy:3
ping6 -c3 google.com
ifconfig
On host-2 :--
dockerd --ipv6 --fixed-cidr-v6="2001:1b76:2400:e2::2/64" &
docker run -i -t --entrypoint /bin/bash crazy:3
ping6 -c3 google.com
ifconfig
Host-1 output:--
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 2001:1b76:2400:e2:0:242:ac11:2 prefixlen 64 scopeid 0x0<global>
inet6 fe80::42:acff:fe11:2 prefixlen 64 scopeid 0x20<link>
ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
RX packets 18 bytes 1663 (1.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 53 bytes 4604 (4.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Host-2 output:--
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 2001:1b76:2400:e2:0:242:ac11:3 prefixlen 64 scopeid 0x0<global>
inet6 fe80::42:acff:fe11:2 prefixlen 64 scopeid 0x20<link>
ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
RX packets 8 bytes 808 (808.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6 bytes 508 (508.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Then again
On host-1:--
ping6 2001:1b76:2400:e2:0:242:ac11:3
On host-2:--
ping6 2001:1b76:2400:e2:0:242:ac11:2
Both give the same output, i.e. --
PING 2001:1b76:2400:e2:0:242:ac11:3(2001:1b76:2400:e2:0:242:ac11:3) 56 data bytes
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=1 Destination unreachable: Address unreachable
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=2 Destination unreachable: Address unreachable
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=3 Destination unreachable: Address unreachable
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=4 Destination unreachable: Address unreachable
Both hosts' IPv6 routes are the same, i.e. --
2001:1b76:2400:e2:f816:3eff:fe69:c2f2 dev eth0 metric 0
cache
2001:1b76:2400:e2::/64 dev eth0 proto kernel metric 256 expires 28133sec
2001:1b76:2400:e2::/64 dev docker0 proto kernel metric 256
2001:1b76:2400:e2::/64 dev docker0 metric 1024
fe80::/64 dev eth0 proto kernel metric 256
fe80::/64 dev docker0 proto kernel metric 256
Both containers' IPv6 routes are the same, i.e. --
2001:1b76:2400:e2::1 dev eth0 metric 0
cache
2001:1b76:2400:e2::/64 dev eth0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
default via 2001:1b76:2400:e2::1 dev eth0 metric 1024
Both hosts' IP forwarding settings are the same, i.e. --
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1
Both containers' IP forwarding settings are the same, i.e. --
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 0
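One hedged observation on this setup: both daemons were started with the same --fixed-cidr-v6 (2001:1b76:2400:e2::/64), which is also the on-link prefix of the hosts' eth0, so neighbours on that network resolve container addresses via NDP. Unless each host answers neighbour solicitations on behalf of its own containers, the other host gets "Address unreachable". The kernel's NDP proxying is the usual way to test this, e.g. on host-2 for its container's address:
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
ip -6 neigh add proxy 2001:1b76:2400:e2:0:242:ac11:3 dev eth0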