Docker swarm ingress - unable to connect through two networks - docker

I tried to run docker swarm over two different networks.
First network is 10.10.100.x/24
Second network is 10.10.150.x/24
Both networks can see each other. There are no firewall rules between them to block any traffic.
Specifically, I tested 7946 TCP and UDP and 4789 UDP. I can confirm that I can connect from the first network to the second on both ports and both protocols, and from the second network to the first without any issue.
Docker Swarm is up and running on engine version 20.10.11.
All nodes show status=Ready and availability=Active.
The ingress network is the default one, and I can see all peers listed there as well.
But when I deploy any service with a published port (-p 20000:80), the service is reachable only from the network where its node sits:
If service lands on the first network, it is accessible only through nodes from the first network, not from the second.
If service lands on the second network, it is accessible only through nodes from the second network, not from the first.
Any thoughts how to fix this?
Thanks
Update 1:
Tried running the swarm with an additional parameter, docker swarm init --default-addr-pool 172.100.0.0/16. The result remains the same.
Update 2:
Based on the advice from @BMitch,
I verified with sudo tcpdump -nn -s0 -v port 4789 or 7946 that port 7946 works (UDP and TCP).
With the same tcpdump command and nc -z -v -u 10.10.150.200 4789 (run from the first network), I verified that port 4789 works as well.

Same issue for me: routing and the overlay network work great, but the ingress load balancer only works through endpoints on the same site as the container.
Oddly, I discovered the ingress load balancer does work across sites when using nc -l as the server socket, which makes the whole thing even more obscure to me.
REM: the underlay network is a WireGuard VPN (L3 point-to-point)

In the end the problem was NAT. Our second network was behind NAT, which caused this issue. Once we removed the NAT, everything worked.
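For anyone debugging the same symptom: comparing the address a node advertises with the address the managers actually see for it is a quick NAT check. A sketch (the node name and addresses are placeholders, not from the question):

```shell
# On the suspect node: the address Swarm believes it has.
docker info --format '{{ .Swarm.NodeAddr }}'

# On a manager: the address the cluster sees for that node.
docker node inspect <node-name> --format '{{ .Status.Addr }}'

# If the two differ, the node likely sits behind NAT. Pinning the
# advertise and data-path addresses when joining makes the mismatch
# explicit (it does not make VXLAN traverse NAT, though):
docker swarm join --advertise-addr 10.10.150.200 \
  --data-path-addr 10.10.150.200 \
  --token <worker-token> 10.10.100.10:2377
```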

Related

Docker outbound connection is blocked

I am new to the Docker world. I am currently setting up a custom application which relies on Docker, and the setup requires the container to connect to my outside network, but the container cannot complete the connection. After initial investigation with tcpdump I found that outbound traffic works: the packet is forwarded to the docker0 interface, docker0 forwards it to eth0 (the physical interface), eth0 forwards it to the outside world, and a response comes back. This is where the problem is: after receiving the response, eth0 does not forward the packet back to docker0 and on to the container.
Below are the iptables rules, which are all set to defaults.
Also the tcpdump output.
Interfaces present on the host.
HOST OS is SUSE Ent linux 15.3
Docker version 20.10.17-ce, build a89b84221c85
I will be very grateful for any response.
Thank You

how to block external access to docker container linux centos 7

I have a MongoDB Docker container and I only want it to be accessible from inside my server, not from outside. Even though I blocked port 27017/tcp with firewall-cmd, Docker still seems to expose it publicly.
I am using CentOS 7,
and docker-compose for setting up Docker.
I resolved the same problem by adding an iptables rule that blocks port 27017 on the public interface (eth0) at the top of the DOCKER chain:
iptables -I DOCKER 1 -i eth0 -p tcp --dport 27017 -j DROP
Set the rule after Docker starts up, because Docker recreates its chains on startup.
Another thing to do is to use a non-default port for mongod; modify docker-compose.yml accordingly (remember to add --port=XXX in the command directive).
For better security I suggest putting your server behind an external firewall.
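Since Docker recreates its iptables chains every time the daemon starts, the rule has to be re-applied afterwards. One way, sketched here with a systemd drop-in (the file name is made up; interface and port follow the rule above):

```shell
# Create a drop-in that re-inserts the rule right after docker.service
# reports ready (the DOCKER chain exists by then):
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/block-mongo.conf
[Service]
ExecStartPost=/usr/sbin/iptables -I DOCKER 1 -i eth0 -p tcp --dport 27017 -j DROP
EOF

# Reload systemd and restart Docker so the drop-in takes effect:
sudo systemctl daemon-reload
sudo systemctl restart docker
```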
If you have your application in one container and MongoDB in another container, what you need to do is connect them over a network that is set to be internal.
See Documentation:
Internal
By default, Docker also connects a bridge network to it to provide
external connectivity. If you want to create an externally isolated
overlay network, you can set this option to true.
See also this question
Here's the tutorial on networking (not including internal but good for understanding)
You may also limit traffic on MongoDb by Configuring Linux iptables Firewall for MongoDB
For creating private networks, use IPs from these ranges:
10.0.0.0 – 10.255.255.255
172.16.0.0 – 172.31.255.255
192.168.0.0 – 192.168.255.255
more on Wikipedia
You may connect a container to more than one network, so typically an application container is connected both to the outside-world (external) network and to the internal network. The application communicates with the database over the internal network and returns data to the client via the external network. The database is connected only to the internal network, so it is not visible from the outside (the internet).
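As a sketch, that two-network layout can be built with the CLI (network, container, and image names here are illustrative, not from the question):

```shell
# "backend" is internal: Docker attaches no gateway to it, so nothing
# on it can reach, or be reached from, the outside world.
docker network create --internal backend
docker network create frontend

# Database attached only to the internal network:
docker run -d --name mongo --network backend mongo:6

# Application published to the outside world on the external network...
docker run -d --name app --network frontend -p 8080:8080 my-app:latest
# ...and also attached to the internal network to reach the database:
docker network connect backend app
```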
I found a post that may help; posting it here for people who need it in the future.
For security, we need both the hardware firewall and the OS firewall enabled and configured properly. I found that firewall protection is ineffective for ports that a Docker container opens on 0.0.0.0, even though the firewalld service was enabled at the time.
My situation is :
A server with Centos 7.9 and Docker version 20.10.17 installed
A docker container was running with port 3000 opened on 0.0.0.0
The firewalld service had started with the command systemctl start firewalld
Only port 22 should be allowed access from outside the server, per the firewall configuration.
It was expected that no one else could access port 3000 on that server, but the test showed the opposite: port 3000 was reachable from any other server. Thanks to the blog post, my server is now firewall-protected.
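A simpler alternative worth noting for anyone hitting the same problem: publish the port on the loopback interface only, so Docker never adds a DNAT rule on external interfaces and the firewall question does not arise. A sketch, using port 3000 as in the example above (image name is made up):

```shell
# Reachable from the host itself, but not from other machines:
docker run -d -p 127.0.0.1:3000:3000 my-app:latest

# docker-compose.yml equivalent:
#   ports:
#     - "127.0.0.1:3000:3000"
```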

Docker swarm overlay Connect: no route to host

I have a swarm with 2 nodes. One is an ubuntu VM on azure and the other one is my VM on my local machine.
When the containers try to make requests to each other, I get dial tcp 10.0.0.88:9999: connect: no route to host.
I've opened on both nodes all the needed Swarm communication ports: TCP 2377, TCP/UDP 7946 and UDP 4789.
Communication works if I run everything local.
Any ideas?
Thanks
An overlay network doesn't create connectivity between two nodes, it requires connectivity, and then uses that to connect containers running on each node. From the prerequisites, each node needs to be able to reach the overlay ports on every other node in the cluster. See the documentation for more details:
https://docs.docker.com/network/overlay/
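A quick way to test that prerequisite from each node is sketched below (the address is a placeholder for the other node's underlay address; UDP checks with nc are only indicative, so confirm with tcpdump on the receiving side):

```shell
NODE=192.0.2.10   # the other node's real (underlay) address - example value

nc -zv  "$NODE" 2377   # cluster management, TCP (managers only)
nc -zv  "$NODE" 7946   # node-to-node gossip, TCP
nc -zvu "$NODE" 7946   # node-to-node gossip, UDP
nc -zvu "$NODE" 4789   # VXLAN data path, UDP
```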

Error response from daemon: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded

I am trying to set up docker swarm with an overlay network. I have some hosts on aws while others are laptops running Ubuntu(same as on aws). Every node has a static public IP. I have created an overlay network as:
docker network create --driver=overlay --attachable test-net
I have created a swarm network on one of the aws hosts. Every other node is able to join that swarm network.
However when I run docker run -it --name alpine2 --network test-net alpine on any node not on aws, I get the error: docker: Error response from daemon: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded.
But if I run the same on any aws host, then everything is working fine. Is there anything more I need to do in terms of networking/ports If there are some nodes on aws while others are not?
I have opened the ports required for swarm networking on all machines.
EDIT: All the nodes are marked as "active" when listing in the manager node.
UPDATE: Solved this by opening the respective ports. It now works when all the nodes are Linux-based, but when the manager is a Linux (Ubuntu) machine, macOS machines are still unable to join the swarm.
Check if the node is in drain state:
docker node inspect --format '{{ .Spec.Availability }}' <node>
If it is, update the state:
docker node update --availability active <node>
here is the explanation:
Resolution
When a node is in drain state, it is expected behavior that you should not be able to allocate swarm mode resources such as multi-host overlay network IP addresses to the node. However, swarm mode does not currently provide a messaging mechanism from the swarm leader, where IP address management occurs, back to the worker node that requested the IP address, so docker run fails with context deadline exceeded. Internal engineering issue escalation/292 has been opened to provide a better error message in a future release of the Docker daemon.
source
Check that the ports below are open on both machines:
TCP port 2377
TCP and UDP port 7946
UDP port 4789
You may use ufw to allow the ports:
ufw allow 2377/tcp
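For completeness, the full set of ufw rules covering all four ports listed above would look like this (run on every node; strictly, 2377 is only needed on managers):

```shell
sudo ufw allow 2377/tcp   # cluster management
sudo ufw allow 7946/tcp   # node-to-node gossip
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # overlay (VXLAN) data traffic
sudo ufw reload
```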
I had a similar issue and managed to fix it by making sure the ENGINE VERSION was the same on all nodes:
sudo docker node ls
Another common cause for this is the Ubuntu server installer installing Docker via snap, and that package is buggy. Uninstall it with snap and reinstall using apt. And reconsider Ubuntu. :-/

Port forwarding Ubuntu - Docker

I have following problem:
Assume that I started two Docker containers, A and B, on the host machine:
docker run -ti -p 2000:2000 A
docker run -ti -p 2001:2001 B
I want to be able to get to each of this containers FROM INTERNET by:
http://example.com:2000
http://example.com:2001
How to reach that?
The rest of the equation here is just normal TCP/IP flow. You'll need to make sure of the following:
If the host has an implicit deny for incoming traffic on its physical interface, you will need to open up ports 2000 and 2001, just as you would for any service (Docker or not).
If the host is behind NAT or another external routing layer, you'll need to punch holes for those ports there as well.
You'll need the external IP address (either the one attached to the host, or the one in front of the NAT that forwards to those ports).
As far as Docker is concerned, you've done what is required to open the ports to the service running in that container correctly.
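A concrete sketch of those steps on a host using ufw (firewall tooling varies; example.com stands for your external address, as in the question):

```shell
# Open the published ports on the host firewall:
sudo ufw allow 2000/tcp
sudo ufw allow 2001/tcp

# From a machine outside the network, verify reachability:
curl -v http://example.com:2000/
curl -v http://example.com:2001/
```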
