Docker Compose API cannot connect to host MongoDB database

I've moved my MongoDB from a container to a local service (it was really flaky when containerised). The problem is I cannot connect from a Node API to the locally running MongoDB service. I can get this working on my Mac, but not on Ubuntu. I've tried:
- DB_HOST=mongodb://172.17.0.1:27017/proto?authSource=admin
- DB_HOST=mongodb://localhost:27017/proto?authSource=admin
- DB_HOST=mongodb://host.docker.internal:27017/proto?authSource=admin (this works locally, but not on my Ubuntu server)
Tried adding this to my Dockerfile:
ip -4 route list match 0/0 | awk '{print $3 " host.docker.internal"}' >> /etc/hosts && \
Also tried a bridge network, to no avail. Example docker-compose:
version: '3.3'
services:
  search-api:
    build: ../search-api
    environment:
      - PORT=3333
      - DB_HOST=mongodb://host.docker.internal:27017/search?authSource=admin
      - DB_USER=dbuser
      - DB_PASS=password
    ports:
      - 3333:3333
    restart: always

The problem can be caused by MongoDB not listening on the correct IP address, which blocks your access.
Either make sure it is listening on a specific IP, or listening on all interfaces: 0.0.0.0
On Linux the config file is installed by default at /etc/mongod.conf
Configuration for a specific IP address:
net:
  bindIp: 172.17.0.1  # your host's address on the docker0 bridge
  port: 27017
Configuration open to all connections:
net:
  bindIp: 0.0.0.0
  port: 27017
To get your host's IP address (from within a container):
On Docker for Mac and Docker for Windows you can use host.docker.internal.
On Linux you need to run ip route show in the container.
When running Docker natively on Linux, you can access host services using the IP address of the docker0 interface. From inside the container, this will be your default route.
For example, on my system:
$ ip addr show docker0
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::f4d2:49ff:fedd:28a0/64 scope link
valid_lft forever preferred_lft forever
And inside a container:
# ip route show
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 src 172.17.0.4
(copied from here: How to access host port from docker container)
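Putting the pieces together, the gateway shown by `ip route show` can be turned directly into the connection string the question needs. A sketch using the canned route line from above; inside a real container the variable would come from `$(ip route show default)`:

```shell
# Sample default-route line as printed inside a container (see output above).
route_line="default via 172.17.0.1 dev eth0"
# Field 3 is the gateway, i.e. the host's docker0 address.
host_ip=$(echo "$route_line" | awk '{print $3}')
db_host="mongodb://${host_ip}:27017/proto?authSource=admin"
echo "$db_host"
```

This avoids hard-coding 172.17.0.1, which can differ if the bridge subnet was customised.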

Related

Can't Connect to Docker container within local network

Trying to run QuakeJS within a docker container. I'm new to docker (and networking). Couldn't connect. Decided to start simpler and ran nginxdemos/hello. Still can't connect to the server (running Ubuntu Server).
Tried:
docker run -d -p 8080:80 nginxdemos/hello
Probably relevant ip addr:
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 18:03:73:be:70:eb brd ff:ff:ff:ff:ff:ff
altname enp0s25
inet 10.89.233.61/20 metric 100 brd 10.89.239.255 scope global dynamic eno1
valid_lft 27400sec preferred_lft 27400sec
inet6 fe80::1a03:73ff:febe:70eb/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:7c:bb:47 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe7c:bb47/64 scope link
valid_lft forever preferred_lft forever
Here's docker network ls:
NETWORK ID     NAME     DRIVER   SCOPE
5671ad4b57fe   bridge   bridge   local
a9348e40fb3c   host     host     local
fdb16382afbd   none     null     local
ufw status:
To          Action      From
--          ------      ----
8080        ALLOW       Anywhere
8080 (v6)   ALLOW       Anywhere (v6)
Anywhere    ALLOW OUT   172.17.0.0/16 on docker0
But when I try to access it in a web browser (Chrome and Firefox) at 172.17.0.0:8080 (or many other permutations) I just end up with a timeout. I'm sure this is something stupid, but I'm very stuck.
UPDATE
I installed a basic apache server and it worked fine. So it's something with Docker. I think.
UPDATE AGAIN
docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED             STATUS             PORTS                                   NAMES
a7bbfee83954   nginxdemos/hello   "/docker-entrypoint.…"   About an hour ago   Up About an hour   0.0.0.0:8080->80/tcp, :::8080->80/tcp   relaxed_morse
I can use curl localhost:8080 and see the nginx page.
I was playing with ufw but disabled it; I'm not worried about network security. Tried ufw-docker too.
FINAL UPDATE
Restarting Docker worked :|
When you publish a port with the -p option to docker run, the syntax is -p <host port>:<container port>, and you are saying: "expose the service running in the container on port <container port> on the Docker host as port <host port>".
So when you run:
docker run -d -p 8080:80 nginxdemos/hello
You could open a browser on your host and connect to http://localhost:8080 (because port 8080 is the <host_port>). If you have the address of the container, you could also connect to http://<container_ip>:80, but you almost never want to do that, because every time you start a new container it receives a new ip address.
We publish ports to the host so that we don't need to muck about finding container IP addresses.
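The mapping is also visible in the PORTS column of docker ps above. As an illustration only, here's how that column splits into the two port roles (parsing a canned copy of the string, not live docker output):

```shell
# PORTS column as printed by docker ps for the container above.
mapping="0.0.0.0:8080->80/tcp"
# Left of "->" (after the bind address) is the host port...
host_port=$(echo "$mapping" | sed -E 's/.*:([0-9]+)->.*/\1/')
# ...right of "->" is the container port.
container_port=$(echo "$mapping" | sed -E 's|.*->([0-9]+)/.*|\1|')
echo "host ${host_port} -> container ${container_port}"
```

So the browser should target the host port (8080), never the container port.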
Running 172.17.0.0:8080 (also .0.1, .0.2) or 10.89.233.61:8080 results in a timeout.
172.17.0.0:8080 doesn't make any sense.
Both 172.17.0.1:8080 and 10.89.233.61:8080 ought to work (as should any other address assigned to a host interface). Some diagnostics to try:
- Is the container actually running (docker ps)?
- On the docker host, are you able to connect to localhost:8080?
- It looks like you're using UFW. Have you made any recent changes to the firewall?
- If you restart docker (systemctl restart docker), do you see any changes in behavior?

Firewalld And Container Published Ports

On a KVM guest (running CentOS7) of my RHEL8 host, I was expecting firewalld to block outside access by default to an ephemeral port published by a Docker container running nginx. To my surprise the access ISN'T blocked.
Again, the host (myhost) is running RHEL8, and it has a KVM guest (myguest) running CentOS7.
The firewalld configuration on myguest is standard, nothin' fancy:
[root@myguest ~]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0 eth1
sources:
services: http https ssh
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
Here are the eth0 and eth1 interfaces that fall under the firewalld public zone:
[root@myguest ~]# ip a s dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:96:9c:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.100.111/24 brd 192.168.100.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe96:9cfc/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@myguest ~]# ip a s dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:66:6c:a1 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.111/24 brd 192.168.1.255 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe66:6ca1/64 scope link noprefixroute
valid_lft forever preferred_lft forever
On myguest I'm running Docker, and the nginx container is publishing its Port 80 to an ephemeral port:
[me#myguest ~]$ docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
06471204f091 nginx "/docker-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:49154->80/tcp focused_robinson
Notice that in the prior firewall-cmd output I was not permitting access via this ephemeral TCP Port 49154 (or to any other ephemeral ports for that matter). So, I was expecting that unless I did so, outside access to nginx would be blocked. But to my surprise, from another host in the home network running Windows, I was able to access it:
C:\Users\me>curl http://myguest:49154
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
.
.etc etc
If a container publishes its container port to an ephemeral one on the host (myguest in this case), shouldn't the host firewall utility protect access to that port in the same manner as it would a standard port? Am I missing something?
But I also noticed that in fact the nginx container is listening on a TCP6 socket:
[root@myguest ~]# netstat -tlpan | grep 49154
tcp6 0 0 :::49154 :::* LISTEN 23231/docker-proxy
It seems, then, that firewalld may not be blocking tcp6 sockets? I'm confused.
This is obviously not a production issue, nor something to lose sleep over. I'd just like to make sense of it. Thanks.
The integration between docker and firewalld has changed over the years, but based on your OS versions and CLI output I think you can get the behavior you expect by setting AllowZoneDrifting=no in /etc/firewalld/firewalld.conf on the RHEL-8 host.
Due to zone drifting, it is possible for packets received in a zone with --set-target=default (e.g. the public zone) to drift to a zone with --set-target=accept (e.g. the trusted zone). This means FORWARDed packets received in zone public will be forwarded to zone trusted. If your docker containers are using a real bridge interface, this issue may apply to your setup. Docker defaults to SNAT, so usually this problem is hidden.
Newer firewalld releases have removed this behavior entirely because, as you have found, it's both unexpected and a security issue.
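A sketch of the one-line edit, applied here to a throwaway sample copy of the file so the sed can be demonstrated; on the real RHEL 8 host you would edit /etc/firewalld/firewalld.conf with sudo and then run `sudo firewall-cmd --reload`:

```shell
# Make a sample copy standing in for /etc/firewalld/firewalld.conf
# (the two keys below are just a fragment of the stock file).
f=$(mktemp)
printf 'DefaultZone=public\nAllowZoneDrifting=yes\n' > "$f"
# Flip the setting in place.
sed -i 's/^AllowZoneDrifting=.*/AllowZoneDrifting=no/' "$f"
grep '^AllowZoneDrifting' "$f"
```

After reloading, forwarded packets received in the public zone should no longer drift into the trusted zone.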

How to get container's ip on bridge network

I am deploying a mariadb cluster like this.
(host) $ cat docker-compose.yaml
version: '3.6'
services:
  parent:
    image: erkules/galera
    command: ["--wsrep-cluster-name=local-test", "--wsrep-cluster-address=gcomm://"]
    hostname: parent
  child:
    image: erkules/galera
    command: ["--wsrep-cluster-name=local-test", "--wsrep-cluster-address=gcomm://parent"]
    depends_on:
      - parent
    deploy:
      replicas: 5
(host) $ sudo docker stack deploy --compose-file docker-compose.yaml mariadb
Now I am trying to find the IPs of the containers on the bridge network, so that I can try to connect to the db servers from the host machine. I can find them like this:
(host) $ docker exec $(docker ps -q | head -n 1) /sbin/ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
176: eth0@if177: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:01:07 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.1.7/24 brd 10.0.1.255 scope global eth0
valid_lft forever preferred_lft forever
182: eth1@if183: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:13:00:08 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.19.0.8/16 brd 172.19.255.255 scope global eth1
valid_lft forever preferred_lft forever
(host) $ mysql -h 172.19.0.8 -u root
Welcome to the MariaDB monitor. ...
But I have to do some dirty parsing. So I am wondering if there is an elegant way to get this using only docker-provided commands. For example, for IPs on the overlay network, we can use the inspect command to get JSON output:
(host) $ docker ps -q | xargs docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} {{ .Id }}'
10.0.1.7 c8e3dfc13c60c6925e55dff1c8dad5fb8e9bbb2335743671e45cd2f4d47fabab
10.0.1.8 7fbede8ffa63e007544c28efcc2ec2418ad44b2012e849489c25536a8408e9f6
10.0.1.6 fe0b7dcdd26fa3edecc025a5b6be0bfab04bce4d448587e5488e414dba595758
10.0.1.10 2fe03472255577db0b2d54f40422be15915121fedf3873d9e09082d5caad7f2f
10.0.1.9 0b34241582be3218d022cc58c95ce21a8be0c46dcd2ff7bca64a02a11427953a
10.0.1.4 5acc231db33b494a83010f0d6397b11365d14ca264f52bc477c642a9eda0be3f
Edit1: I want to keep the deployment as general as possible. I don't want to publish ports; then I would have to assign a different port to every single container.
Edit2: Apparently, for a multi-host swarm overlay network, Docker uses the docker_gwbridge interface.
So I can do docker network inspect docker_gwbridge to get the IPs for each container.
Generally, when we use Compose, containers on a network get addresses in the order they are defined: .2, .3, etc.
So, try creating a network (docker network create ...) and, for example, put at the end of your docker-compose.yml:
rabbitmq:
  ports:
    - "8201:8080"
  volumes:
    - /share:/share
  container_name: rabbitmq-int1
  hostname: rabbitmq-int1
  cpu_shares: 10
  mem_limit: 2000000000
  networks:
    compose_net:
      ipv4_address: 172.12.0.3

networks:
  compose_net:
    external:
      name: network_compose
Where "network_compose" is the network previously created in Host/Server (docker create network...)
These commands show your network's details, including the list of containers joined to the network and their IPs. Hope this is helpful:
docker network ls                    # list networks -> get the network id
docker network inspect <network_id>  # now you can get each container's IP
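To avoid the dirty parsing the question mentions, a Go template passed to docker network inspect -f prints just name/IP pairs. The command needs a running daemon, so the same extraction is also shown below against a canned sample of the inspect JSON (the container name and ID are made up):

```shell
# Real command (needs a Docker daemon):
#   docker network inspect \
#     -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}' \
#     docker_gwbridge
# Same extraction demonstrated on a canned fragment of the inspect JSON:
json='{"Containers":{"abc123":{"Name":"mariadb_child.1","IPv4Address":"172.19.0.8/16"}}}'
result=$(echo "$json" | python3 -c '
import json, sys
for c in json.load(sys.stdin)["Containers"].values():
    # IPv4Address carries a CIDR suffix; strip it to get a plain IP.
    print(c["Name"], c["IPv4Address"].split("/")[0])
')
echo "$result"
```

The template form is handy in scripts since it needs no extra parsing tools.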

docker swarm container connect to host port

I have a swarm cluster in which I created a global service to run on all docker hosts in the cluster.
The goal is to have each container instance for this service connect to a port listening on the docker host.
For further information, I am following this Docker Daemon Metrics guide for exposing the new docker metrics API on all hosts and then proxying that host port into the overlay network so that Prometheus can scrape metrics from all swarm hosts.
I have read several docker github issues #8395 #32101 #32277 #1143 - from this my understanding is the same as outlined in the Docker Daemon Metrics. In order to connect to the host from within a swarm container, I should use the docker-gwbridge network which by default is 172.18.0.1.
Every container in my swarm has a network interface for the docker-gwbridge network:
326: eth0@if327: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:0a:ff:00:06 brd ff:ff:ff:ff:ff:ff
inet 10.255.0.6/16 scope global eth0
valid_lft forever preferred_lft forever
inet 10.255.0.5/32 scope global eth0
valid_lft forever preferred_lft forever
333: eth1@if334: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:12:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.4/16 scope global eth1
valid_lft forever preferred_lft forever
Also, every container in the swarm has a default route via 172.18.0.1:
/prometheus # ip route show 0.0.0.0/0 | grep -Eo 'via \S+' | awk '{ print $2 }'
172.18.0.1
/prometheus # netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'
172.18.0.1
/prometheus # ip route
default via 172.18.0.1 dev eth1
10.0.1.0/24 dev eth2 src 10.0.1.9
10.255.0.0/16 dev eth0 src 10.255.0.6
172.18.0.0/16 dev eth1 src 172.18.0.4
Despite this, I cannot communicate with 172.18.0.1 from within the container:
/ # wget -O- 172.18.0.1:4999
Connecting to 172.18.0.1:4999 (172.18.0.1:4999)
wget: can't connect to remote host (172.18.0.1): No route to host
On the host, I can access the docker metrics API on 172.18.0.1. I can ping and I can make a successful HTTP request.
Can anyone shed some light as to why this does not work from within the container as outlined in the Docker Daemon Metrics guide?
If the container has a network interface on the 172.18.0.1 network and has routes configured for 172.18.0.1 why do pings fail to 172.18.0.1 from within the container?
If this is not a valid approach for accessing a host port from within a swarm container, then how would one go about achieving this?
EDIT:
Just realized that I did not give all the information in the original post.
I am running docker swarm on a CentOS 7.2 host with docker version 17.04.0-ce, build 4845c56. My kernel is a build of 4.9.11 with vxlan and ipvs modules enabled.
After some further digging I have noted that this appears to be a firewall issue. I discovered that not only was I unable to ping 172.18.0.1 from within the containers - but I was not able to ping my host machine at all! I tried my domain name, the FQDN for the server and even its public IP address but the container could not ping the host (there is network access as I can ping google/etc).
I disabled firewalld on my host and then restarted the docker daemon. After this I was able to ping my host from within the containers (both domain name and 172.18.0.1). Unfortunately this is not a solution for me. I need to identify what firewall rules I need to put in place to allow container->host communication without requiring firewalld being disabled.
Firstly, I owe you a huge THANK YOU. Before I read your EDIT part, I'd spent literally day and night to solve a similar issue, and never realized that the devil is the firewall.
Without disabling the firewall, I have solved my problem on Ubuntu 16.04 using:
sudo ufw allow in on docker_gwbridge
sudo ufw allow out on docker_gwbridge
sudo ufw enable
I'm not very familiar with CentOS, but I believe the following should help you, or at least serve as a hint:
sudo firewall-cmd --permanent --zone=trusted --change-interface=docker_gwbridge
sudo systemctl restart firewalld
You might have to restart docker as well.

docker-compose how to run container with bind 1-to-1 ports on ip aliasing interface

I have many IPs on my interface:
inet 10.100.131.115/24 brd 10.100.131.255 scope global br0
valid_lft forever preferred_lft forever
inet 10.100.131.120/24 brd 10.100.131.255 scope global secondary br0
valid_lft forever preferred_lft forever
inet 10.100.131.121/24 brd 10.100.131.255 scope global secondary br0
valid_lft forever preferred_lft forever
inet 10.100.131.122/24 brd 10.100.131.255 scope global secondary br0
valid_lft forever preferred_lft forever
docker-compose.yml:
version: '2'
services:
  app:
    image: app
    network_mode: "bridge"
    volumes:
      - /root/docker/app/project/:/root/:ro
    ports:
      - "7999:7999"
If I bring up a single container, all is good:
docker-compose ps
Name           Command                          State   Ports
docker_app_1   /bin/sh -c uwsgi --ini wsg ...   Up      0.0.0.0:7999->7999/tcp
But when I try to scale my app I get an error (ofc, because 7999 is already used by docker_app_1):
docker-compose scale app=2
WARNING: The "app" service specifies a port on the host.
If multiple containers for this service are created on a single host, the port will clash.
Creating and starting docker_app_2 ... error
ERROR: for docker_app_2 Cannot start service app: b'driver failed programming external connectivity on endpoint docker_app_2 (xxxxxxxxxxxxxxxxx...):
Bind for 0.0.0.0:7999 failed: port is already allocated'
Can I tell docker-compose to use all the IPs from an interface that uses IP aliasing?
I need: 1 IP from interface:7999 -> docker container:7999
You can map specific IPs to a container rather than the default of 0.0.0.0. Note that this is two separate services, though, not scaling a single service:
services:
  whatever:
    ports:
      - '10.100.131.121:7999:7999/tcp'
  another:
    ports:
      - '10.100.131.122:7999:7999/tcp'
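If you have many alias IPs, writing one service stanza per IP by hand gets tedious; a short loop can emit them instead. A sketch (the "app_<last-octet>" service naming is made up, adjust to taste):

```shell
# Emit one Compose service stanza per alias IP, each binding host port 7999
# on that IP to container port 7999.
stanzas=$(for ip in 10.100.131.121 10.100.131.122; do
  printf '  app_%s:\n    image: app\n    ports:\n      - "%s:7999:7999"\n' \
    "${ip##*.}" "$ip"
done)
echo "$stanzas"
```

Paste the output under the services: key of the compose file (or redirect it there from a script).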
