How to get a container's IP on the bridge network - docker

I am deploying a MariaDB cluster like this:
(host) $ cat docker-compose.yaml
version: '3.6'
services:
  parent:
    image: erkules/galera
    command: ["--wsrep-cluster-name=local-test", "--wsrep-cluster-address=gcomm://"]
    hostname: parent
  child:
    image: erkules/galera
    command: ["--wsrep-cluster-name=local-test", "--wsrep-cluster-address=gcomm://parent"]
    depends_on:
      - parent
    deploy:
      replicas: 5
(host) $ sudo docker stack deploy --compose-file docker-compose.yaml mariadb
Now I am trying to find the IPs of the containers on the bridge network, so that I can connect to the DB servers from the host machine. I can find them like this:
(host) $ docker exec $(docker ps -q | head -n 1) /sbin/ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
176: eth0@if177: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:01:07 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.1.7/24 brd 10.0.1.255 scope global eth0
valid_lft forever preferred_lft forever
182: eth1@if183: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:13:00:08 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.19.0.8/16 brd 172.19.255.255 scope global eth1
valid_lft forever preferred_lft forever
(host) $ mysql -h 172.19.0.8 -u root
Welcome to the MariaDB monitor. ...
But this requires some dirty parsing, so I am wondering whether there is an elegant way to get these using only Docker-provided commands. For example, for IPs on the overlay network we can use the inspect command to get JSON output:
(host) $ docker ps -q | xargs docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} {{ .Id }}'
10.0.1.7 c8e3dfc13c60c6925e55dff1c8dad5fb8e9bbb2335743671e45cd2f4d47fabab
10.0.1.8 7fbede8ffa63e007544c28efcc2ec2418ad44b2012e849489c25536a8408e9f6
10.0.1.6 fe0b7dcdd26fa3edecc025a5b6be0bfab04bce4d448587e5488e414dba595758
10.0.1.10 2fe03472255577db0b2d54f40422be15915121fedf3873d9e09082d5caad7f2f
10.0.1.9 0b34241582be3218d022cc58c95ce21a8be0c46dcd2ff7bca64a02a11427953a
10.0.1.4 5acc231db33b494a83010f0d6397b11365d14ca264f52bc477c642a9eda0be3f
Edit 1: I want to keep the deployment part as general as possible, so I don't want to publish ports; then I would have to assign a different port to every single container.
Edit 2: Apparently, for the multi-host swarm overlay network, Docker uses the docker_gwbridge interface, so I can run docker network inspect docker_gwbridge to get the IPs of the containers.
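For example, with a Go template the dirty parsing goes away entirely; something like this should work (note that IPv4Address includes the subnet suffix, e.g. 172.19.0.8/16, so it may need trimming):
(host) $ docker network inspect docker_gwbridge --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'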

Generally, when we use Compose, the containers take IPs in the order they appear in docker-compose.yml: .2, .3, etc.
So try creating a network (docker network create ...) and assigning fixed addresses at the end of your docker-compose.yml, for example:
rabbitmq:
  ports:
    - "8201:8080"
  volumes:
    - /share:/share
  container_name: rabbitmq-int1
  hostname: rabbitmq-int1
  cpu_shares: 10
  mem_limit: 2000000000
  networks:
    compose_net:
      ipv4_address: 172.12.0.3
networks:
  compose_net:
    external:
      name: network_compose
Where "network_compose" is the network previously created in Host/Server (docker create network...)

These commands show your network's details, including the list of containers joined to the network and their IPs. Hope this is helpful.
docker network ls                  # list the networks -> get the network ID
docker network inspect network_id  # now you can see each container's IP
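To script this across every network at once, a Go template can pull out just the names and IPs; a minimal sketch:
# print each network followed by its attached containers and their IPs
for net in $(docker network ls -q); do
  docker network inspect "$net" --format '{{.Name}}: {{range .Containers}}{{.Name}}={{.IPv4Address}} {{end}}'
done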

Related

Can't Connect to Docker container within local network

Trying to run QuakeJS within a Docker container. I'm new to Docker (and networking) and couldn't connect, so I decided to start simpler and ran nginxdemos/hello. I still can't connect to the server (the host runs Ubuntu Server).
Tried:
docker run -d -p 8080:80 nginxdemos/hello
Probably relevant ip addr output:
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 18:03:73:be:70:eb brd ff:ff:ff:ff:ff:ff
altname enp0s25
inet 10.89.233.61/20 metric 100 brd 10.89.239.255 scope global dynamic eno1
valid_lft 27400sec preferred_lft 27400sec
inet6 fe80::1a03:73ff:febe:70eb/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:7c:bb:47 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe7c:bb47/64 scope link
valid_lft forever preferred_lft forever
Here's docker network ls:
NETWORK ID     NAME      DRIVER    SCOPE
5671ad4b57fe   bridge    bridge    local
a9348e40fb3c   host      host      local
fdb16382afbd   none      null      local
And ufw status:
To           Action      From
--           ------      ----
8080         ALLOW       Anywhere
8080 (v6)    ALLOW       Anywhere (v6)
Anywhere     ALLOW OUT   172.17.0.0/16 on docker0
But when I try to access it in a web browser (Chrome and Firefox) at 172.17.0.0:8080 (or many other permutations) I just end up with a timeout. I'm sure this is something stupid, but I'm very stuck.
UPDATE
I installed a basic Apache server and it worked fine, so it's something with Docker, I think.
UPDATE AGAIN
docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED             STATUS             PORTS                                   NAMES
a7bbfee83954   nginxdemos/hello   "/docker-entrypoint.…"   About an hour ago   Up About an hour   0.0.0.0:8080->80/tcp, :::8080->80/tcp   relaxed_morse
I can use curl localhost:8080 and see the nginx page.
I was playing with ufw but disabled it; I'm not worried about network security. Tried ufw-docker too.
FINAL UPDATE
Restarting Docker worked :|
When you publish a port with the -p option to docker run, the syntax is -p <host port>:<container port>, and you are saying: "expose the service running in the container on port <container port> on the Docker host as port <host port>".
So when you run:
docker run -d -p 8080:80 nginxdemos/hello
You could open a browser on your host and connect to http://localhost:8080 (because port 8080 is the <host port>). If you have the address of the container, you could also connect to http://<container ip>:80, but you almost never want to do that, because every time you start a new container it receives a new IP address.
We publish ports to the host precisely so that we don't need to muck about finding container IP addresses.
Running 172.17.0.0:8080 (also .0.1, .0.2) or 10.89.233.61:8080 results in a timeout.
172.17.0.0:8080 doesn't make any sense: 172.17.0.0 is the network address of the docker0 subnet, not a host.
Both 172.17.0.1:8080 and 10.89.233.61:8080 ought to work (as should any other address assigned to a host interface). Some diagnostics to try (collected as shell commands after this list):
Is the container actually running (docker ps)?
On the docker host are you able to connect to localhost:8080?
It looks like you're using UFW. Have you made any recent changes to the firewall?
If you restart docker (systemctl restart docker), do you see any changes in behavior?
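The same checks as shell commands, as a quick sketch:
docker ps                        # 1. is the container actually running?
curl -v http://localhost:8080/   # 2. does the published port answer on the host itself?
sudo ufw status verbose          # 3. what does the firewall currently allow?
sudo systemctl restart docker    # 4. does a restart change the behavior?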

Docker compose: services cannot connect to each other

I've been following this tutorial on Docker services and swarms, but I'm having some trouble with networking between different Docker containers.
The following is my docker-compose.yml file; it basically contains two services. One is just a Redis image connected to two networks (although the second is unused for now), and the other is my application, which needs to connect to Redis. For that reason I opted to give the redis service a static IP.
version: "3"
services:
my_redis:
image: redis
ports:
- "6379:6379"
networks:
first_network:
ipv4_address: 172.20.1.1
second_network:
ipv4_address: 172.30.1.1
my_app:
build:
context: .
dockerfile: Dockerfile_my_app
image: my_app_image
depends_on:
- my_redis
deploy:
replicas: 1 # 4
networks:
- first_network
networks:
first_network:
ipam:
config:
- subnet: 172.20.1.0/24
second_network:
ipam:
config:
- subnet: 172.30.1.0/24
And the following is my Dockerfile for my_app:
FROM python:3.7
WORKDIR /app
COPY . /app
RUN pip3 install --trusted-host pypi.python.org -r requirements.txt
CMD ip a && wait-for-it.sh 172.20.1.1:6379 && PYTHONPATH=. python3 my_app.py
Now the problem I'm having is that for some reason, my app cannot connect to the redis service. So I tried the following:
I tried running the redis container alone with sudo docker run -p 6379:6379 redis, and then used wait-for-it to make sure that localhost:6379 was up and running; it was.
I then thought maybe docker stack deploy creates the app service before the redis service, so I added the depends_on part to the docker-compose file.
I found out that depends_on only guarantees start order (not readiness, i.e. it does not wait for a container to finish initializing before proceeding to the next one), so I had to find a different solution. Based on that I also changed Dockerfile_my_app to run wait-for-it before it actually runs my app. That didn't work either.
Lastly, not knowing what else to do, I ran ip a to check whether the my_app service is getting the right IP, and it is getting one in the right range:
my_test_my_app.1.jydyydckzyfh@snode-01 | 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
my_test_my_app.1.jydyydckzyfh@snode-01 | link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
my_test_my_app.1.jydyydckzyfh@snode-01 | inet 127.0.0.1/8 scope host lo
my_test_my_app.1.jydyydckzyfh@snode-01 | valid_lft forever preferred_lft forever
my_test_my_app.1.jydyydckzyfh@snode-01 | 1010: eth0@if1011: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
my_test_my_app.1.jydyydckzyfh@snode-01 | link/ether 02:42:ac:14:01:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
my_test_my_app.1.jydyydckzyfh@snode-01 | inet 172.20.1.3/24 brd 172.20.1.255 scope global eth0
my_test_my_app.1.jydyydckzyfh@snode-01 | valid_lft forever preferred_lft forever
my_test_my_app.1.jydyydckzyfh@snode-01 | 1012: eth1@if1013: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
my_test_my_app.1.jydyydckzyfh@snode-01 | link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 1
my_test_my_app.1.jydyydckzyfh@snode-01 | inet 172.18.0.3/16 brd 172.18.255.255 scope global eth1
my_test_my_app.1.jydyydckzyfh@snode-01 | valid_lft forever preferred_lft forever
my_test_my_app.1.jydyydckzyfh@snode-01 | wait-for-it.sh: waiting 15 seconds for 172.20.1.1:6379
my_test_my_app.1.jydyydckzyfh@snode-01 | wait-for-it.sh: timeout occurred after waiting 15 seconds for 172.20.1.1:6379
So in short, IPs are getting assigned correctly. However, the my_app service cannot connect to the redis service. Is there any reason why? Am I missing something in the compose file?
Any help would be appreciated.
Try changing wait-for-it.sh 172.20.1.1:6379 in Dockerfile_my_app to:
wait-for-it.sh my_redis:6379
and see if that works.
Explanation:
If I'm not mistaken, once containers are part of the same network, they are able to communicate with each other via the service names declared in the docker-compose file.
See https://docs.docker.com/compose/compose-file/ under 'Networks' and https://docs.docker.com/compose/networking/ for detailed information
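Concretely, the CMD in Dockerfile_my_app would then become something like this (the original command with the hard-coded IP swapped for the service name):
CMD ip a && wait-for-it.sh my_redis:6379 && PYTHONPATH=. python3 my_app.py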

Docker compose api cannot connect to host MongoDB database

I've moved my MongoDB from a container to a local service (it was really flaky when containerised). The problem is that I cannot connect from a Node API to the locally running MongoDB service. I can get this working on my Mac, but not on Ubuntu. I've tried:
- DB_HOST=mongodb://172.17.0.1:27017/proto?authSource=admin
- DB_HOST=mongodb://localhost:27017/proto?authSource=admin
// this works locally, but not on my Ubuntu server
- DB_HOST=mongodb://host.docker.internal:27017/proto?authSource=admin
Tried adding this to my Dockerfile (note the space before host.docker.internal, so the generated /etc/hosts entry is valid):
ip -4 route list match 0/0 | awk '{print $3 " host.docker.internal"}' >> /etc/hosts && \
Also tried a bridge network, to no avail. An example docker-compose:
version: '3.3'
services:
  search-api:
    build: ../search-api
    environment:
      - PORT=3333
      - DB_HOST=mongodb://host.docker.internal:27017/search?authSource=admin
      - DB_USER=dbuser
      - DB_PASS=password
    ports:
      - 3333:3333
    restart: always
The problem can be caused by MongoDB not listening on the correct IP address and therefore blocking your access.
Either make sure it is listening on a specific IP, or have it listen on all interfaces: 0.0.0.0.
On Linux the config file is installed by default at /etc/mongod.conf.
Configuration for a specific IP address:
net:
  bindIp: 172.17.0.1  # your host's IP address
  port: 27017
Configuration open to all connections:
net:
  bindIp: 0.0.0.0
  port: 27017
To get your host's IP address from within a container: on Docker for Mac and Docker for Windows you can use host.docker.internal, while on Linux you need to run ip route show inside the container.
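As a minimal sketch of that lookup, run inside the container (on the default bridge the default gateway is the host's docker0 address, e.g. 172.17.0.1):
ip route show default | awk '{print $3}'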
When running Docker natively on Linux, you can access host services using the IP address of the docker0 interface. From inside the container, this will be your default route.
For example, on my system:
$ ip addr show docker0
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::f4d2:49ff:fedd:28a0/64 scope link
valid_lft forever preferred_lft forever
And inside a container:
# ip route show
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 src 172.17.0.4
(copied from here: How to access host port from docker container)

Docker exposed port unavailable on browser, though the former run works

I use Docker on VirtualBox and I am trying to add one more running image.
The first 'run' I had is:
$ docker run -Pit --name rpython -p 8888:8888 -p 8787:8787 -p 6006:6006 -p 8022:22 -v /c/Users/lenovo:/home/dockeruser/hosthome datascienceschool/rpython
$ docker port rpython
8888/tcp -> 0.0.0.0:8888
22/tcp -> 0.0.0.0:8022
27017/tcp -> 0.0.0.0:32781
28017/tcp -> 0.0.0.0:32780
5432/tcp -> 0.0.0.0:32783
6006/tcp -> 0.0.0.0:6006
6379/tcp -> 0.0.0.0:32782
8787/tcp -> 0.0.0.0:8787
It works fine; those TCP ports are reachable from a local browser.
But as for the second 'run':
docker run -Pit -i -t -p 8888:8888 -p 8787:8787 -p 8022:22 -p 3306:3306 --name ubuntu jhleeroot/dave_ubuntu:1.0.1
$ docker port ubuntu
22/tcp -> 0.0.0.0:8022
3306/tcp -> 0.0.0.0:3306
8787/tcp -> 0.0.0.0:8787
8888/tcp -> 0.0.0.0:8888
It doesn't work.
root@90dd963fe685:/# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
Any idea about this?
Your first command ran an image (datascienceschool/rpython) that presumably kicked off a Python app which listened on the ports you were testing.
Your second command ran a different image (jhleeroot/dave_ubuntu:1.0.1), and from the pasted output you are only running a bash shell in that image. Bash isn't listening on those ports inside the container, so Docker forwards to a closed port and the browser sees a closed connection.
Docker doesn't run a server on your published ports; it relies on you to run one inside the container and just forwards the requests.
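To see the difference, start something listening inside the second container and the published port comes alive. A rough sketch, assuming python3 happens to be installed in the image (it may not be):
root@90dd963fe685:/# python3 -m http.server 8888    # listen on the container port mapped to host 8888
Then curl http://localhost:8888/ from the host should get a response instead of a timeout.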

docker-compose how to run container with bind 1-to-1 ports on ip aliasing interface

I have many IPs on my interface:
inet 10.100.131.115/24 brd 10.100.131.255 scope global br0
valid_lft forever preferred_lft forever
inet 10.100.131.120/24 brd 10.100.131.255 scope global secondary br0
valid_lft forever preferred_lft forever
inet 10.100.131.121/24 brd 10.100.131.255 scope global secondary br0
valid_lft forever preferred_lft forever
inet 10.100.131.122/24 brd 10.100.131.255 scope global secondary br0
valid_lft forever preferred_lft forever
docker-compose.yml:
version: '2'
services:
app:
image: app
network_mode: "bridge"
volumes:
- /root/docker/app/project/:/root/:ro
ports:
- "7999:7999"
network_mode: "bridge"
If I bring up a single container, all is good:
docker-compose ps
Name Command State Ports
docker_app_1 /bin/sh -c uwsgi --ini wsg ... Up 0.0.0.0:7999->7999/tcp
But when I try to scale my app I get an error (of course, because 7999 is already used by docker_app_1):
docker-compose scale app=2
WARNING: The "app" service specifies a port on the host.
If multiple containers for this service are created on a single host, the port will clash.
Creating and starting docker_app_2 ... error
ERROR: for docker_app_2 Cannot start service app: b'driver failed programming external connectivity on endpoint docker_app_2 (xxxxxxxxxxxxxxxxx...):
Bind for 0.0.0.0:7999 failed: port is already allocated'
Can I tell docker-compose to use all the IPs from the interface that has IP aliasing?
I need a 1-to-1 mapping: one interface IP:7999 -> one docker container:7999.
You can map specific IPs to a container rather than the default of 0.0.0.0. This is not scaling a single service, though:
services:
  whatever:
    ports:
      - '10.100.131.121:7999:7999/tcp'
  another:
    ports:
      - '10.100.131.122:7999:7999/tcp'
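After docker-compose up you can verify the per-IP bindings; a quick sketch (whatever/another are the placeholder service names from above):
docker-compose ps                   # each service should show its own 10.100.131.x:7999->7999/tcp
curl http://10.100.131.121:7999/    # reaches the first container
curl http://10.100.131.122:7999/    # reaches the second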
