I have 2 virtual machines (VM1 with IP 192.168.56.101 and VM2 with IP 192.168.56.102, which can ping each other), and these are the steps I'm doing:
- Create consul container on VM1 with 'docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap'
- Create swarm manager on VM1 with 'docker run -d -p 3376:3376 swarm manage -H 0.0.0.0:3376 --advertise 192.168.56.101:3376 consul://192.168.56.101:8500'
- Create swarm agents on each VM with 'docker run -d swarm join --advertise <VM-IP>:2376 consul://192.168.56.101:8500'
If I run docker -H 0.0.0.0:3376 info I can see both nodes connected to the swarm, and both are healthy. I can also run containers, and they are scheduled across the nodes. However, if I create a network, attach a few containers to it, and then SSH into one container and try to ping every other container, I can only reach the containers running on the same virtual machine.
Both virtual machines have these DOCKER_OPTS:
DOCKER_OPTS="--cluster-store=consul://192.168.56.101:8500 --cluster-advertise=<VM-IP>:0 -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock"
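On VM1, for example, the substituted line would look like the following (note: Docker's multi-host networking guide advertises the daemon's own TCP port, 2376 here, rather than 0):
DOCKER_OPTS="--cluster-store=consul://192.168.56.101:8500 --cluster-advertise=192.168.56.101:2376 -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock"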
I don't have a direct quote, but from what I've read on the Docker GitHub issue tracker, ICMP packets (ping) are never routed between containers on different nodes.
TCP connections to explicitly opened ports should work, but as of Docker 1.12.1 this is buggy.
Docker 1.12.2 has some bug fixes with respect to establishing connections to containers on other hosts, but ping is not going to work across hosts.
You can only ping containers on the same node because you attached them to a local-scope network.
As suggested in the comments, if you want to ping containers across hosts (meaning from a container on VM1 to a container on VM2) using Docker Swarm (or swarm mode) without explicitly opening ports, you need to create an overlay network (a globally scoped network) and attach/start the containers on that network.
To create an overlay network:
docker network create -d overlay mynet
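You can verify that the network was created with the overlay driver (and therefore has global scope rather than local scope) with, for example:
docker network ls --filter driver=overlay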
Then start the containers using that network:
For Docker Swarm mode:
docker service create --replicas 2 --network mynet --name web nginx
For Docker Swarm (legacy):
docker run -itd --network=mynet busybox
For example, if we create two containers (on legacy Swarm):
docker run -itd --network=mynet --name=test1 busybox
docker run -itd --network=mynet --name=test2 busybox
You should be able to docker attach to test2 and ping test1, and vice versa.
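For instance, a quick check (detach with Ctrl-P Ctrl-Q afterwards so you don't stop the container):
docker attach test2
# now inside test2's busybox shell:
ping -c 3 test1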
For more details you can refer to the networking documentation.
Note: If containers still can't ping each other after you create the overlay network and attach containers to it, check the firewall configuration of the VMs and make sure that these ports are open:
- Data plane (VXLAN): UDP 4789
- Control plane (gossip): TCP and UDP 7946
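For example, on Ubuntu-based VMs with ufw, opening those ports might look like this (a sketch; adapt to whatever firewall the VMs actually use):
sudo ufw allow 4789/udp
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp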
Related
I have a network 10.0.0.0/24 with one Oracle DB host (db-host01, IP address 10.0.0.100) and two Docker hosts (Docker01 at 10.0.0.15 and Docker02 at 10.0.0.16), and swarm is configured. I have configured an overlay network "overnet" with network address 192.168.6.0/24.
I have executed the command below to run a web container on the overlay network:
docker run -i -t -d -p 9090:6000 --name portal --network overnet portal:1.0
But the web container, at IP address 192.168.6.2, is not communicating with the Oracle DB at 10.0.0.100.
I can ping the DB IP 10.0.0.100 from the web container.
How can I make this communication possible, and how can I run this container as a service as well?
Overlay networks are not attachable by default, which means that standalone containers cannot use them.
You can specify that a network should be attachable using the --attachable flag, such as
$ docker network create -d overlay --attachable overnet
If you are unable to modify the network, create a service for your container using
docker service create --network overnet --publish 9090:6000 --name portal portal:1.0
at which point it will be able to use the overlay network.
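Since attachability cannot be toggled on an existing network, a sketch of the full sequence, if you go the attachable route, would be (after disconnecting anything still using the network):
docker network rm overnet
docker network create -d overlay --attachable overnet
docker run -d -p 9090:6000 --name portal --network overnet portal:1.0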
My requirement is to send scapy layer 2 packets from a Docker container to the outside world.
Docker will be installed on a host Linux machine.
The problem is that when I use port mapping together with --network host, the container starts and then stops within 3 seconds. Below is the command:
sudo docker run -itd -p 30022:22 -p 36901:6901 -p 35901:5901 --privileged --network host --name
The reason for the port mapping is to run multiple containers on a single host, each with VNC, noVNC, and SSH.
The reason for --network host is to expose the host machine's Ethernet interfaces inside the Docker container, so scapy can send layer 2 packets to the outside world.
Host machine info: 4.15.0-54-generic #58~16.04.1-Ubuntu x86_64
Container info: 4.15.0-54-generic #58~16.04.1-Ubuntu x86_64
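A first debugging step for a container that exits immediately (the container name below is a placeholder) is to check its exit status and output; note also that Docker discards -p mappings when --network host is used, so the published ports would not take effect in host mode anyway:
docker ps -a --filter name=mycontainer     # shows the exit status
docker logs mycontainer                    # stdout/stderr from the failed start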
I'm trying to run multiple containers with the same ports on docker.
For this, I have created a network in bridge mode and specified a subnet.
docker network create -d bridge --subnet 192.168.99.0/24 mynetwork
Then connected the docker containers to it with a static IP.
docker run -i -t -d -p 2377:2377 -p 7946:7946 -p 4789:4789 --name container image
docker network connect --ip 192.168.99.98 mynetwork container
I did this with three containers (using different IPs); after starting the second one I got:
Error response from daemon: driver failed programming external connectivity on endpoint container(...): Bind for 0.0.0.0:7946 failed: port is already allocated
As far as I'm concerned, I should not be getting this error, due to bridge mode.
The docker run -p option allocates a port on the host system; those are shared across all containers, independently of what Docker-private network they’re using. These also will conflict with non-Docker processes running on the host.
If your goal is just to be able to communicate between containers on the same network, you don’t need a -p option at all. They can use each others’ --name and the port the service inside the container is listening on to connect.
If you’re trying to run multiple Docker container stacks at the same time, you need to decide which specific instance port 2377 on your host will route to, and change the other containers’ -p options.
Specifically setting the Docker-internal private IP addresses (or worrying about them at all) is almost never necessary. I’d delete those --subnet and --ip options. To communicate between containers, put them on the same network as described above; from outside you need a (unique) -p option.
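A sketch of the simplified setup (image and container names are placeholders): drop the subnet and static IPs, and give each instance a unique host port mapping to the same container port:
docker network create mynetwork
docker run -d --net mynetwork --name instance1 -p 2377:2377 image
docker run -d --net mynetwork --name instance2 -p 2378:2377 image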
I have container1 running service1 on port1, and I have container2 running service2 on port2.
How can I access service2:port2 from service1:port1?
I should mention that the containers are linked together. Is there a way to do this without going through the docker0 IP (where the port is visible)?
The preferred solution is to place both containers on the same network, use the built-in DNS discovery to reach the other container by name, and access it via the container port rather than the host-published port. With the CLI, that looks like:
docker network create testnet
docker run -d --net testnet --name web nginx
docker run -it --rm --net testnet busybox wget -qO - http://web
The busybox command shows a sample client container connecting to the nginx container by the name web, over port 80. Note that this port did not need to be published to be reachable from other containers.
Setting up multi-container environments with their own network is a common task for docker-compose, so I'd recommend looking into this tool if you find yourself doing this a lot.
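As a minimal sketch, the same two-container setup might look like this in a docker-compose.yml (the service names here are illustrative, not from the original answer):
version: "2"
services:
  web:
    image: nginx
  client:
    image: busybox
    command: wget -qO - http://web
    depends_on:
      - web
Compose creates a default network for the project, so the client reaches web by its service name without any published ports.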
Docker commands that I have used to spin up the consul containers:
Created a network so container 1 can have a static IP: docker network create --subnet=172.18.0.0/16 C1
Ran a consul container with that IP:
docker run -d --net C1 --ip 172.18.0.10 -p 48301:8301/tcp -p 48400:8400/tcp -p 48600:8600/tcp -p 48300:8300/tcp -p 48302:8302/tcp -p 48302:8302/udp -p 48500:8500/tcp -p 48600:8600/udp -p 48301:8301/udp --name=test1 consul agent -client=172.18.0.10 -bind=172.18.0.10 -server -bootstrap -ui
Similarly, created a network for container 2: docker network create --subnet=172.19.0.0/16 C2
docker run -d --net C2 --ip 172.19.0.10 -p 58301:8301/tcp -p 58400:8400/tcp -p 58600:8600/tcp -p 58300:8300/tcp -p 58302:8302/tcp -p 58302:8302/udp -p 58500:8500/tcp -p 58600:8600/udp -p 58301:8301/udp --name=test2 consul agent -client=172.19.0.10 -bind=172.19.0.10 -server -bootstrap -ui -join 192.168.99.100:48301
The consul container test2 at 172.19.0.10:8301 is not able to gossip with test1 at 172.18.0.10:8301; I get a "No Acknowledgement received" message.
I also tried --link to link both containers, but that didn't work.
Can anyone let me know if I am doing everything correctly?
When you create a user-defined network on the docker daemon, there are some properties of these networks that you have to be aware of.
"Each container in the network can immediately communicate with other containers in the network. Though, the network itself isolates the containers from external networks." (Docker documentation)
That effectively describes what you are experiencing: the containers cannot talk to each other because they are isolated from each other (they reside in different networks).
To the point of --link, it is not supported in user-defined networks.
"Within a user-defined bridge network, linking is not supported." (Docker documentation)
The solution is simply to put both containers on the same network; from your description I don't see an apparent need for two different networks. Just use a different --ip for the second container.
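A minimal sketch of that fix, reusing the commands from the question (port publishing trimmed for brevity, and -bootstrap dropped from the second server, since only one node should bootstrap the cluster):
docker network create --subnet=172.18.0.0/16 C1
docker run -d --net C1 --ip 172.18.0.10 --name=test1 consul agent -client=172.18.0.10 -bind=172.18.0.10 -server -bootstrap -ui
docker run -d --net C1 --ip 172.18.0.11 --name=test2 consul agent -client=172.18.0.11 -bind=172.18.0.11 -server -ui -join 172.18.0.10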