init docker swarm with docker machine: context deadline exceeded - docker

I'm learning Docker Machine and have run into some problems.
My computer is a Mac and I use Docker for Mac. I created two VMs, vm1 and vm2, with docker-machine, and I am trying to init a swarm whose nodes are vm1, vm2, and my Mac. My steps are below:
1. Create an image called "sprinla/cms:latest" and a docker-compose.yml:
version: "3"
services:
  web:
    image: sprinla/cms:latest
    deploy:
      replicas: 1
    ports:
      - "80:80"
    networks:
      - webnet
    command: /data/start.sh
networks:
  webnet:
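(With the deploy: section, this file is meant for swarm mode and would normally be deployed with docker stack deploy; a sketch, where the stack name "cms" is my assumption:)
docker stack deploy -c docker-compose.yml cms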
2. Create two VMs. Here is the VM info:
yuxrdeMBP:~ yuxr$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
vm1 - virtualbox Running tcp://192.168.99.100:2376 v17.12.0-ce
vm2 - virtualbox Running tcp://192.168.99.101:2376 v17.12.0-ce
3. Init the swarm on my Mac host:
yuxrdeMBP:~ yuxr$ docker swarm init
Swarm initialized: current node (uf6rg1v91exlwntlskyj8iim7) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3qb32l84n0s8vl74rj9d6psm7bzdany3piw55ohtrq0q7ly814-c5km5zg3kj9d6vn6vrtt6xxtg 192.168.65.2:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
4. Join vm1 to the swarm; then comes the problem:
yuxrdeMBP:~ yuxr$ docker-machine ssh vm1 "docker swarm join --token SWMTKN-1-3qb32l84n0s8vl74rj9d6psm7bzdany3piw55ohtrq0q7ly814-c5km5zg3kj9d6vn6vrtt6xxtg 192.168.65.2:2377"
Error response from daemon: Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.
exit status 1
5. Cat the Docker log:
time="2018-01-03T17:13:50.387854642Z" level=debug msg="Calling GET /_ping"
time="2018-01-03T17:13:50.388228524Z" level=debug msg="Calling GET /_ping"
time="2018-01-03T17:13:50.388521374Z" level=debug msg="Calling POST /v1.35/swarm/join"
time="2018-01-03T17:13:50.388583426Z" level=debug msg="form data: {\"AdvertiseAddr\":\"\",\"Availability\":\"\",\"DataPathAddr\":\"\",\"JoinToken\":\"*****\",\"ListenAddr\":\"0.0.0.0:2377\",\"RemoteAddrs\":[\"192.168.65.2:2377\"]}"
time="2018-01-03T17:13:55.392578452Z" level=error msg="failed to retrieve remote root CA certificate" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node
time="2018-01-03T17:14:02.394608777Z" level=error msg="failed to retrieve remote root CA certificate" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node
time="2018-01-03T17:14:09.395720474Z" level=error msg="failed to retrieve remote root CA certificate" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node
time="2018-01-03T17:14:10.393743738Z" level=error msg="Handler for POST /v1.35/swarm/join returned error: Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the \"docker info\" command to see the current swarm status of your node."
time="2018-01-03T17:14:16.398095265Z" level=error msg="failed to retrieve remote root CA certificate" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node
time="2018-01-03T17:14:23.399587783Z" level=error msg="failed to retrieve remote root CA certificate" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node
time="2018-01-03T17:14:25.399943337Z" level=error msg="cluster exited with error: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Below is my Mac's ifconfig info:
yuxrdeMBP:~ yuxr$ ifconfig
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
XHC20: flags=0<> mtu 0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether ac:bc:32:81:97:37
inet6 fe80::4d8:6b2:718a:5d3b%en0 prefixlen 64 secured scopeid 0x5
inet 192.168.199.169 netmask 0xffffff00 broadcast 192.168.199.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
ether 0e:bc:32:81:97:37
media: autoselect
status: inactive
awdl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1484
ether 36:9f:65:fd:34:c3
inet6 fe80::349f:65ff:fefd:34c3%awdl0 prefixlen 64 scopeid 0x7
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
en1: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=60<TSO4,TSO6>
ether 6a:00:00:e3:4c:30
media: autoselect <full-duplex>
status: inactive
en2: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=60<TSO4,TSO6>
ether 6a:00:00:e3:4c:31
media: autoselect <full-duplex>
status: inactive
bridge0: flags=8822<BROADCAST,SMART,SIMPLEX,MULTICAST> mtu 1500
options=63<RXCSUM,TXCSUM,TSO4,TSO6>
ether 6a:00:00:e3:4c:30
Configuration:
id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
ipfilter disabled flags 0x2
member: en1 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 8 priority 0 path cost 0
member: en2 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 9 priority 0 path cost 0
media: <unknown type>
status: inactive
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
options=6403<RXCSUM,TXCSUM,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
inet6 fe80::441e:c0e3:5429:2abb%utun0 prefixlen 64 scopeid 0xb
nd6 options=201<PERFORMNUD,DAD>
utun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
options=6403<RXCSUM,TXCSUM,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
inet6 fe80::7820:5bac:4735:7f82%utun1 prefixlen 64 scopeid 0xc
inet6 fd44:5cb3:4ab4:5d08:7820:5bac:4735:7f82 prefixlen 64
nd6 options=201<PERFORMNUD,DAD>
utun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
options=6403<RXCSUM,TXCSUM,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
inet6 fe80::26f2:e964:8dfb:e884%utun2 prefixlen 64 scopeid 0xd
nd6 options=201<PERFORMNUD,DAD>
gpd0: flags=8862<BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1400
ether 02:50:41:00:01:01
vboxnet0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
ether 0a:00:27:00:00:00
inet 192.168.99.1 netmask 0xffffff00 broadcast 192.168.99.255
Why????
The Mac host has IP 192.168.99.1, vm1 has IP 192.168.99.100, and vm2 has IP 192.168.99.101. They are on the same network, so why can't vm1 or vm2 join the Mac host's swarm?
ANOTHER QUESTION: if I use vm1 as the swarm manager and run the "docker swarm join" command on the Mac host, it can join as a worker but then can't be used; when it joins as a manager, I get this error:
yuxrdeMBP:~ yuxr$ docker swarm join --token SWMTKN-1-49w1hd28hs1mtj3sgmd0o3q7n59zgppvd18vs0iwhcnjemzmwb-7mk35zdnaslt1p41gninvwlud 192.168.99.100:2377
Error response from daemon: manager stopped: can't initialize raft node: rpc error: code = Unknown desc = could not connect to prospective new cluster member using its advertised address: rpc error: code = Unavailable desc = grpc: the connection is unavailable
THANK YOU FOR HELPING ME!!!

There is no routing between the Mac host and Docker for Mac, so on a Mac you can only set up multi-node swarms between VMs; the standard Docker for Mac cannot participate in a multi-node swarm. This is a limitation of how networking is implemented on macOS.
See the documentation, where this is explained.
Also see this issue for more background.
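Since only the VMs can participate, a working setup on a Mac is to make one of the VMs the manager. A sketch, using the IPs from the docker-machine ls output above:
docker-machine ssh vm1 "docker swarm init --advertise-addr 192.168.99.100"
# copy the worker join token that the init command prints, then:
docker-machine ssh vm2 "docker swarm join --token <worker-token> 192.168.99.100:2377"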

For me, this error was resolved by setting the security group's inbound rules to allow all traffic in AWS.
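Instead of allowing all traffic, opening just the ports that swarm mode is documented to need is usually enough: 2377/tcp (cluster management), 7946/tcp and 7946/udp (node communication), and 4789/udp (overlay network traffic). For example, with ufw on an Ubuntu host (a sketch, assuming ufw is in use):
sudo ufw allow 2377/tcp
sudo ufw allow 7946
sudo ufw allow 4789/udp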

I got the same error when trying to join a swarm cluster as a worker. I used 2 VMs from Google Cloud for this.
The manager node was working fine; docker info -> swarm did not give any errors. But when I tried to join the worker nodes with the token, I got the error "Error response from daemon: Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.", while docker info showed me
"rpc error: code = DeadlineExceeded desc = context deadline exceeded" in the swarm error.
I tried a lot of different things; finally the solution below worked.
Solution: I ran "docker swarm init --force-new-cluster" on one of the VMs I had tried to join as a worker, then ran "docker swarm leave --force" on the existing manager node and joined it as a worker to the newly created cluster. The other VM also joined the new cluster as a worker without problems.
Ubuntu 18.04
Docker version 20.10.17
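(The sequence described above as a sketch; the worker token and manager IP come from the output of the --force-new-cluster init:)
# on the VM that previously failed to join:
docker swarm init --force-new-cluster
# on the old manager:
docker swarm leave --force
docker swarm join --token <worker-token> <new-manager-ip>:2377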

Related

Retrieve bridge's IP within docker container or provide via environment variable

When I create a docker network, a bridge is added:
docker network create DUMMY
Now executing ifconfig gives:
br-8a429249b4d9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.255.136.1 netmask 255.255.255.0 broadcast 10.255.136.255
inet6 fe80::42:88ff:fe9b:9a33 prefixlen 64 scopeid 0x20<link>
ether 02:42:88:9b:9a:33 txqueuelen 0 (Ethernet)
RX packets 9 bytes 388 (388.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 74 bytes 11136 (11.1 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Is it possible to retrieve that IP 10.255.136.1 from a container running inside the DUMMY network?
I am dockerizing an application which requires the source IP (which, in this case, is another application running on the host) to be whitelisted in some configuration file, and I believe that should be the bridge's IP. Hence my question about retrieving that IP from within the actual container. Or alternatively, is there a way to provide that IP to the container via an environment variable?
The IP address of the bridge will be the default gateway inside the container. In other words, you can just parse the output of e.g. ip route to find the bridge address.
For example, if I create a DUMMY network, I get:
6: br-ea7804d337bc: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:8e:97:b4:02 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.1/16 brd 172.20.255.255 scope global br-ea7804d337bc
valid_lft forever preferred_lft forever
inet6 fe80::42:8eff:fe97:b402/64 scope link
valid_lft forever preferred_lft forever
If I start a container on that network:
docker run -it --rm --net DUMMY alpine sh
I have the following routing table:
/ # ip route
default via 172.20.0.1 dev eth0
172.20.0.0/16 dev eth0 scope link src 172.20.0.2
And I can get the IP address itself by running that output through awk:
/ # ip route | awk '$1 == "default" {print $3}'
172.20.0.1
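To answer the environment-variable part of the question: the same gateway address is visible from the host via docker network inspect, so it can be passed in at docker run time. A sketch, assuming the DUMMY network's IPAM config contains a single entry with a Gateway field:
docker run -it --rm --net DUMMY \
  -e BRIDGE_IP=$(docker network inspect -f '{{(index .IPAM.Config 0).Gateway}}' DUMMY) \
  alpine sh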

Hortonworks docker sandbox is not loading in the browser

I have installed the HortonWorks Docker sandbox as per the instructions,
which seems to be running; when I type:
sudo docker ps
it shows that the sandbox is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
23dbac10e27b hortonworks/sandbox-hdp:3.0.1 "/usr/sbin/init" 20 minutes ago Up 20 minutes 22/tcp, 4200/tcp, 8080/tcp sandbox-hdp
But when I visit localhost:8080 in the browser I do not get any response.
I also read that I should use ifconfig to verify the IP address.
Not sure what I should be looking at here:
br-193585a7edfa: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:62ff:fe32:c2fc prefixlen 64 scopeid 0x20<link>
ether 02:42:62:32:c2:fc txqueuelen 0 (Ethernet)
RX packets 5 bytes 256 (256.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 24 bytes 3241 (3.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
EDIT:
I'm starting it with this command, no ports specified:
docker start sandbox-hdp
As shown in the instructions.
Also, I get the same port mappings as shown in the documentation.
Docker creates a separate network for containers. If you need to map a port of a container in this network to a port on the host system, you have to publish it with -p.
Note that port mappings can only be set when a container is created, not with docker start, so remove the existing container and re-create it with the port published:
docker rm sandbox-hdp
docker run -d --name sandbox-hdp -p 8080:8080 hortonworks/sandbox-hdp:3.0.1
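Once the container has been re-created with the port published, you can check it from the host before trying the browser:
curl -I http://localhost:8080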

How to connect to OpenDJ LDAP server (Docker)

I am trying to connect (bind) to an OpenDJ server running in Docker.
(I know how to connect to a regular, non-Docker OpenDJ server.)
OpenDJ seems to run, but when I try to connect to it with an LDAP browser, it says "Unable to connect".
--- Server Status ---
Server Run Status: Started
Open Connections: 1
--- Server Details ---
Host Name: 14e1e92e962e
Administrative Users: cn=Directory Manager
Installation Path: /opt/opendj
Instance Path: /opt/opendj/data
Version: OpenDJ Server 4.4.3
Java Version: 1.8.0_111
Administration Connector: Port 4444 (LDAPS)
--- Connection Handlers ---
Address:Port : Protocol : State
-------------:------------------------:---------
-- : LDIF : Disabled
0.0.0.0:161 : SNMP : Disabled
0.0.0.0:1389 : LDAP (allows StartTLS) : Enabled
0.0.0.0:1636 : LDAPS : Enabled
0.0.0.0:1689 : JMX : Disabled
0.0.0.0:8080 : HTTP : Disabled
--- Data Sources ---
Base DN: dc=example,dc=com
Backend ID: userRoot
Entries: 1
Replication:
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
14e1e92e962e openidentityplatform/opendj "/opt/opendj/run.sh" 18 hours ago Up 18 hours
[root@localhost ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:5ff:fe0f:a03 prefixlen 64 scopeid 0x20<link>
ether ******** txqueuelen 0 (Ethernet)
RX packets 5 bytes 254 (254.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7 bytes 647 (647.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.89 netmask 255.255.255.0 broadcast 192.168.0.255
inet6 fe80::1db8:91e1:5276:4f9 prefixlen 64 scopeid 0x20<link>
ether ******** txqueuelen 1000 (Ethernet)
RX packets 796434 bytes 512206712 (488.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 479946 bytes 41277150 (39.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@localhost ~]# docker run -it 1e03b62c213e /bin/bash
Instance data Directory is empty. Creating new DJ instance
BASE DN is dc=example,dc=com
Password set to password
Running /opt/opendj/bootstrap/setup.sh
Setting up default OpenDJ instance
Configuring Directory Server ..... Done.
Configuring Certificates ..... Done.
Creating Base Entry dc=example,dc=com ..... Done.
Starting Directory Server ...... Done.
To see basic server configuration status and configuration, you can launch
/opt/opendj/bin/status
Server Run Status: Started
The LDAP server is running at 192.168.0.89 on port 1389, so I try to connect with that. I am unable to fetch the Base DN as well; I tried putting the Base DN in manually too. I also tried 172.17.0.1 (which seems to be a Docker IP, per ifconfig), but no luck.
Question:
But with Docker, do I need a different hostname? Or IP? Or additional configuration? (BTW, I put the IP in the hostname field and successfully connected many times before.)
Error message:
Error while opening connection
- Unable to connect
java.lang.Exception: Unable to connect
at org.apache.directory.studio.connection.core.io.api.DirectoryApiConnectionWrapper$1.run(DirectoryApiConnectionWrapper.java:251)
at org.apache.directory.studio.connection.core.io.api.DirectoryApiConnectionWrapper.runAndMonitor(DirectoryApiConnectionWrapper.java:1312)
at org.apache.directory.studio.connection.core.io.api.DirectoryApiConnectionWrapper.doConnect(DirectoryApiConnectionWrapper.java:281)
at org.apache.directory.studio.connection.core.io.api.DirectoryApiConnectionWrapper.connect(DirectoryApiConnectionWrapper.java:172)
at org.apache.directory.studio.connection.core.jobs.OpenConnectionsRunnable.run(OpenConnectionsRunnable.java:111)
at org.apache.directory.studio.connection.core.jobs.StudioConnectionJob.run(StudioConnectionJob.java:109)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:60)
Unable to connect
You need to publish ports 1389 and 1636.
Change your docker run command to
docker run -it -p 1389:1389 -p 1636:1636 <image ID> /bin/bash
You can also run your container in host networking mode, where you don't need port mapping:
docker run -it --net=host <image ID> /bin/bash
Hope this helps.
Look at your docker ps output: you do not publish any ports.
Add this to your docker run command:
-p 1389:1389 -p 1636:1636
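Once the ports are published, you can verify the bind from the host with ldapsearch; a sketch using the defaults from the container's setup output above (bind DN cn=Directory Manager, password "password"):
ldapsearch -H ldap://localhost:1389 -D "cn=Directory Manager" -w password \
  -b "dc=example,dc=com" "(objectClass=*)"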

docker network through a specific physical interface

So I'm trying to create a network (docker network create) so that its traffic will pass through a specific physical network interface (NIC); I have two: <iface1> (internal) and <iface2> (external).
I need the traffics of both NICs to be physically separated.
METHOD 1:
I think macvlan is the driver I should use to create such a network.
Most of the solutions I found on the internet refer to Pipework (now deprecated) and temporary docker-plugins (deprecated too).
What has most closely helped me is this:
docker network create -d macvlan \
--subnet 192.168.0.0/16 \
--ip-range 192.168.2.0/24 \
-o parent=wlp8s0.1 \
-o macvlan_mode=bridge \
macvlan0
Then, in order for the container to be visible from the host, I need to do this in the host:
sudo ip link add macvlan0 link wlp8s0.1 type macvlan mode bridge
sudo ip addr add 192.168.2.10/16 dev macvlan0
sudo ifconfig macvlan0 up
Now the container and the host see each other :) BUT the container can't access the local network.
The idea is that the container can access the internet.
METHOD 2:
As I will use <iface2> manually, I'm OK if by default the traffic goes through <iface1>.
But no matter in which order I bring the NICs up (I also tried removing the LKM for <iface2> temporarily), the whole traffic is always taken over by the external NIC <iface2>.
And I found that this happens because the route table updates automatically at some "random" time.
In order to force the traffic to go through <iface1>, I have to (in the host):
sudo route del -net <net> gw 0.0.0.0 netmask 255.0.0.0 dev <iface2>
sudo route del default <iface2>
Now, I can verify (in several ways) that the traffic just goes through <iface1>.
But the moment that the route table updates (automatically), all traffic moves to <iface2>. Damn!
I'm sure there's a way to make the route table "static" or "persistent".
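(One candidate, assuming <gw1> and <gw2> are the gateways reachable via <iface1> and <iface2>: keep both default routes but give <iface1> a lower metric, so it keeps winning even after the table is regenerated. A sketch I have not verified in this setup:)
sudo ip route replace default via <gw1> dev <iface1> metric 100
sudo ip route replace default via <gw2> dev <iface2> metric 600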
EDIT (18/Jul/2018):
The main idea is to be able to access the internet through a Docker container using only one of the two available physical network interfaces.
My environment:
On the host, I created the virbr0 bridge for VMs with IP address 192.168.122.1 and brought up a VM instance with interface ens3 and IP address 192.168.122.152.
192.168.122.1 is the gateway for the 192.168.122.0/24 network.
In the VM:
Create network:
# docker network create --subnet 192.168.122.0/24 --gateway 192.168.122.1 --driver macvlan -o parent=ens3 vmnet
Create docker container:
# docker run -ti --network vmnet alpine ash
Check:
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
12: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:c0:a8:7a:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.2/24 brd 192.168.122.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ping 192.168.122.152
PING 192.168.122.152 (192.168.122.152): 56 data bytes
^C
--- 192.168.122.152 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
/ # ping 192.168.122.1
PING 192.168.122.1 (192.168.122.1): 56 data bytes
64 bytes from 192.168.122.1: seq=0 ttl=64 time=0.471 ms
^C
--- 192.168.122.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.471/0.471/0.471 ms
OK, I brought up another VM with IP address 192.168.122.73 and checked from the Docker container:
/ # ping 192.168.122.73 -c2
PING 192.168.122.73 (192.168.122.73): 56 data bytes
64 bytes from 192.168.122.73: seq=0 ttl=64 time=1.630 ms
64 bytes from 192.168.122.73: seq=1 ttl=64 time=0.984 ms
--- 192.168.122.73 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.984/1.307/1.630 ms
From the Docker instance I can't ping the interface on the VM, but I can access the local network.
/ # ip n|grep 192.168.122.152
192.168.122.152 dev eth0 used 0/0/0 probes 6 FAILED
On the VM I add a macvlan0 NIC:
# ip link add macvlan0 link ens3 type macvlan mode bridge
# ip addr add 192.168.122.100/24 dev macvlan0
# ip l set macvlan0 up
From the Docker container I can ping 192.168.122.100:
/ # ping 192.168.122.100 -c2
PING 192.168.122.100 (192.168.122.100): 56 data bytes
64 bytes from 192.168.122.100: seq=0 ttl=64 time=0.087 ms
64 bytes from 192.168.122.100: seq=1 ttl=64 time=0.132 ms
--- 192.168.122.100 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.087/0.109/0.132 ms

Unable to connect to Docker service

I am running a Docker image on a Mac machine, and when I log into the container I see the IP address "172.17.0.2" (cat /etc/hosts).
How does Docker choose the IP?
Is there any IP range that Docker chooses from?
What if I run multiple containers on the same host? Will the IPs be different?
/etc/resolv.conf gives some IP. What is that IP and where does it come from?
How can I connect to the Docker service using the internal IP, say 172.17.0.2?
ping CONTAINER_ID -> returns the IP 172.17.0.2
How does it resolve the hostname?
I tried reading through the networking docs but it didn't help.
Also, I am running my service on port 8443. Still, I am unable to connect.
I tried running,
docker run --net host -p 8443:8443 IMAGE
Still no luck.
Tried the below approach also.
docker run -p MY_MACHINE_IP:8080:8080 IMAGE
Tried with,
http://MY_MACHINE_IP:8080
http://localhost:8080
None of the above works.
ifconfig output,
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
XHC20: flags=0<> mtu 0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 60:f8:1d:b2:cb:0c
inet6 fe80::49d:a511:dc4e:7960%en0 prefixlen 64 secured scopeid 0x5
inet 10.231.168.63 netmask 0xffe00000 broadcast 10.255.255.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
ether 02:f8:1d:b2:cb:0c
media: autoselect
status: inactive
awdl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1484
ether 0a:71:96:61:e4:eb
inet6 fe80::871:96ff:fe61:e4eb%awdl0 prefixlen 64 scopeid 0x7
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
en1: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=60<TSO4,TSO6>
ether 72:00:07:57:48:30
media: autoselect <full-duplex>
status: inactive
en2: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=60<TSO4,TSO6>
ether 72:00:07:57:48:31
media: autoselect <full-duplex>
status: inactive
bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=63<RXCSUM,TXCSUM,TSO4,TSO6>
ether 72:00:07:57:48:30
Configuration:
id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
ipfilter disabled flags 0x2
member: en1 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 8 priority 0 path cost 0
member: en2 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 9 priority 0 path cost 0
nd6 options=201<PERFORMNUD,DAD>
media: <unknown type>
status: inactive
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
inet6 fe80::3f17:8946:c18d:5d25%utun0 prefixlen 64 scopeid 0xb
nd6 options=201<PERFORMNUD,DAD>
utun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
inet6 fe80::20aa:76fd:d68:7fb2%utun2 prefixlen 64 scopeid 0xd
nd6 options=201<PERFORMNUD,DAD>
utun3: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
inet6 fe80::e42a:c616:4960:2c43%utun3 prefixlen 64 scopeid 0x10
nd6 options=201<PERFORMNUD,DAD>
utun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1342
inet 17...... --> 17.... netmask 0xff000000
inet6 fe80::93df:7780:862c:8a06%utun1 prefixlen 64 scopeid 0x12
nd6 options=201<PERFORMNUD,DAD>
For the first four questions you can find some information here; in general, Docker networking is responsible for managing the network.
Usually I specify the ports like this:
docker run -p 8443:8443 IMAGE
and it works.
A reference to an existing topic is here.
1. How does docker choose the IP?
When Docker is installed on your machine, it creates the docker0 interface, which hands an IP address to each container it launches.
You can verify the IP range of docker0 with the ifconfig command.
2. Is there any IP range that Docker chooses from?
Yes; please refer to answer 1.
3. What if I run multiple containers on the same host? Will the IPs be different?
Yes, each container gets a different IP from the docker0 interface's range, unless you create your own network using docker network create. For more, refer to: Docker Networking.
4. /etc/resolv.conf gives some IP. What is that IP and where does it come from?
It's the internal DNS of the Docker network. You can set your own DNS IP in /etc/systemd/system/docker.service.d/docker.conf by adding your DNS server to the ExecStart line, like below:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock -g "/opt/docker_storage" --dns <replace-dns-ip>
5. How can I connect to the Docker service using the internal IP, say 172.17.0.2?
You have to publish a port to connect, e.g. docker run -p 8443:8443 <image-name>;
after that you can connect with telnet localhost 8443 or curl http://172.17.0.2:8443.
Most important:
Add the following to /etc/sysctl.conf:
net.ipv4.ip_forward = 1
and apply the settings with:
sysctl -p /etc/sysctl.conf
Hope this will help.
Thank you!
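(As a check for answers 1-3 without parsing ifconfig, the default bridge's subnet can also be read directly from Docker; a sketch assuming the standard network name "bridge":)
docker network inspect -f '{{(index .IPAM.Config 0).Subnet}}' bridge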
Docker manages all of this internal networking machinery itself. This includes allocating IP(v4) addresses from a private range, a NAT setup for outbound connections, and a DNS service to allow containers to communicate with each other.
A stable, reasonable setup is:
Run docker network create mynet, once, to create a non-default network. (Docker Compose will do this for you automatically.)
Run your containers with --net mynet.
When containers need to communicate with each other, they can use other containers' --name as DNS names (you can connect to http://other-container-name).
If you need to reach a container from elsewhere, publish its service port using docker run -p or the Docker Compose ports: section. It can be reached using the host's DNS name or IP address and the published port.
Never ever use the container-private IP addresses (directly).
Never use localhost unless you're absolutely sure about what it means. (It's a correct way to reach a published port from a browser running on the host that's running the containers; it's almost definitely not what you mean from within a container.)
The problems I've seen with the container-private IP addresses tend to be around the second time you use them: because you relaunched the container and the IP address changed; because it worked from your local host and now you want to reach it from somewhere else.
To answer your initial questions briefly: (1-2) Docker assigns them itself from a network that can be configured but often defaults to 172.17.0.0/16; (3) different containers have different private IP addresses; (4-5) Docker provides its own DNS service and /etc/resolv.conf points there; (6) ICMP connectivity usually doesn't prove much and you don't need to ping containers (use dig or nslookup for DNS debugging, curl for actual HTTP requests).
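A minimal sketch of the setup recommended above (image and container names are placeholders):
docker network create mynet
docker run -d --net mynet --name other-container-name some-image
docker run -d --net mynet --name my-service -p 8443:8443 some-image
# from inside my-service, Docker's DNS resolves the neighbour's name:
#   curl http://other-container-name
# from the host or another machine, use the published port:
#   curl https://<host-name>:8443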
