Docker network not using all NICs - docker

My docker container is not reachable on all of the host's network interfaces.
My host server has two network interfaces (and two IP addresses). When I run my docker container without a specifically defined docker network, it works and the container is reachable on both IP addresses.
But when I run it with a self-defined docker network and add that network to the docker-compose file, only one IP works; the other times out. Why does this happen?
Docker-compose file
version: '3.7'
services:
  servicename-1:
    #network_mode: "host"
    image: nginxdemos/hello
    init: true
    ports:
      - 8081:80
    volumes:
      omitted
    environment:
      omitted
    networks:
      - a-netwerk-1
networks:
  a-netwerk-1:
    external:
      name: a-network-1
docker network inspect output:
[
{
"Name": "a-network-1",
"Id": "df4ab5e3285c75b71f8f88f66c4c5d85ad8f2f9b17e66f960b11778007810b96",
"Created": "2020-01-30T10:55:14.853289976+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.29.0.0/16",
"Gateway": "172.29.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"2f2d5b2e22b3066085246ea53d1ca2c9f963b5e9138ae7202d8382be98428476": {
"Name": "test_testservicename_1",
"EndpointID": "c750b0d9d6ae82fec109da15d385b936f79f09bf814dd3b8d03642a2f03d46e2",
"MacAddress": "02:42:ac:1d:00:02",
"IPv4Address": "172.29.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
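As a side note, a minimal diagnostic sketch (the IP addresses below are placeholders, not taken from the question): published ports normally bind to 0.0.0.0 on the host, so a timeout on one address often points at host routing or firewall rules rather than at the compose file. Publishing the port once per host address lets you verify each binding separately with ss -tlnp | grep 8081 on the host:
services:
  servicename-1:
    image: nginxdemos/hello
    ports:
      - "192.0.2.10:8081:80"      # first host IP (placeholder)
      - "198.51.100.10:8081:80"   # second host IP (placeholder)
    networks:
      - a-netwerk-1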

Related

ping: proxy: Temporary failure in name resolution in docker container network -- "proxy" is the name of a container running in docker

I have two containers, proxy and my_ngnix, running inside Docker. Both are attached to the same "bridge" network, as shown below:
C:\> docker network inspect bridge
[
{
"Name": "bridge",
"Id": "82ffe522177d113af71d150c96b5e43df8946b9f17f901152cc2b4b96caf313a",
"Created": "2022-12-25T14:38:40.7492988Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"008e6142e13e91624e89b26ba697bff60765965a09daedafa6db766f30b6beb9": {
"Name": "proxy",
"EndpointID": "ed0536b6b97d9ad00b8deeb8dc5ad6f91e9809af3fc4032dfca4abd02760cc71",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"e215ead97a7e6580d3f6fec0f25790771af9f878b4194d61fa041b218a2117bf": {
"Name": "my_ngnix",
"EndpointID": "f0e7a6e824c18587a1cb32de89dbc8a1ec2fa62dcc7fe38516d294d7fdb19606",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
The moment I run the command
C:\> docker container exec -it my_ngnix ping proxy
it shows me the error: ping: proxy: Temporary failure in name resolution
Can anyone help me here? I have installed all the required updates in the container so I can use the ping command.
Why can't the containers ping each other by container name? The same command works with an IP address. I tried it with the IP address as below:
docker container exec -it my_ngnix ping 172.17.0.3
That works; however, it won't work with the container name.
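For what it's worth, Docker's embedded DNS resolves container names only on user-defined networks, not on the default bridge network. A minimal sketch of moving both containers onto a user-defined network (mynet is just an example name):
docker network create mynet
docker network connect mynet proxy
docker network connect mynet my_ngnix
docker container exec -it my_ngnix ping proxy    # the name now resolves via the embedded DNS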

docker does not connect to bridge network

I have a container and I want to connect to a database. The docker host machine has the IP X.X.2.26 and the database X.X.2.27. I tried to connect the network in bridge mode, but I can't connect to the database. The host machine itself can reach the database.
This is my docker-compose.yml
version: '3.7'

networks:
  sfp:
    name: sfp
    driver: bridge

services:
  sfpapi:
    image: st/sfp-api:${VERSION-latest}
    container_name: "sfp-api"
    restart: always
    ports:
      - "8082:8081"
    networks:
      - sfp
    environment:
      - TZ=America/Mexico_City
      - SPRING_DATASOURCE_URL
      - SPRING_DATASOURCE_USERNAME
      - SPRING_DATASOURCE_PASSWORD

  app:
    image: st/sfp-app:${VERSION-latest}
    container_name: "app"
    restart: always
    ports:
      - "8081:80"
    networks:
      - sfp
    environment:
      - API_HOST
If I check the networks, I can see it was created successfully.
docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
86a58ac8a053   bridge    bridge    local
1890c6433c09   host      host      local
bab0a88222a3   none      null      local
01a411ad42df   sfp       bridge    local
But when I inspect the network, I don't see any containers attached:
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "86a58ac8a05398bb827252b2dbe4c99e52aedf0896be6aa6c4358c41cf0e766e",
"Created": "2022-04-06T12:50:09.922881204-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "false",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
This is the inspect of the container:
docker inspect --format "{{ json .NetworkSettings.Networks }}" sfp-api
{"sfp":{"IPAMConfig":null,"Links":null,"Aliases":["sfpapi","bab30efe892b"],"NetworkID":"2076ee845b06df6ace975e1cf3fd360eb174ee97a9ae608911c243b08e98aa42","EndpointID":"3837a6f55449a59267aea7bbafc754d0fab6fedad282e280cce9d880d0c299a7","Gateway":"172.26.0.1","IPAddress":"172.26.0.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:1a:00:03","DriverOpts":null}}

get the IP address to use for a docker container on a network

My NAS is where I run my containers. It sits at 192.168.1.23 on my network.
I am running a few containers inside a user-defined network. Here is the docker network inspect output (I've removed the containers manually):
[
{
"Name": "traefik2_proxy",
"Id": "fb2924fe59fbb0436c72f11cb028df832a473a165162ecf08b7e3a946cfa2d3c",
"Created": "2020-05-13T23:23:16.16424119+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.90.0/24",
"Gateway": "192.168.90.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
I have a specific container on that network at IP address 192.168.90.16, for which I have exposed port 9118 using the following in my docker-compose:
ports:
  - target: 9118
    published: 9118
    protocol: tcp
(Portainer screenshot omitted.)
I was expecting to be able to connect to that container using 192.168.1.23:9118, but I tried to no avail.
What am I missing? Which setting do I need to change for that container to be reachable on that port at my NAS IP address?
It turned out the port the container was listening on was incorrect. I needed to modify the ports configuration to:
ports:
  - target: 9117
    published: 9118
    protocol: tcp
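In the long port syntax, published is the host-side port and target must be the port the process inside the container actually listens on (9117 here). As a quick check, the effective mapping can be confirmed with docker port (the container name below is a placeholder):
docker port <container-name>
# expected output along the lines of: 9117/tcp -> 0.0.0.0:9118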

Docker Cannot Resolve Hostname

I am trying to link my containerized VueJS frontend with my containerized Spring Boot API backend, with great difficulty.
Whenever I try to make an HTTP request to my API using the container name, I get the following.
OPTIONS http://api:4505/user/sign-in net::ERR_NAME_NOT_RESOLVED
Here is my docker-compose file:
version: "3"
services:
mongodb:
image: mongo
container_name: mongo
ports:
- "27017:27017"
api:
image: registry.gitlab.com/darragh.oflah/api:latest
container_name: api
ports:
- "4505:4505"
links:
- mongodb
web:
image: registry.gitlab.com/darragh.oflah/web:latest
container_name: web
ports:
- "80:8080"
links:
- api
Here is what I get when I run sudo docker network inspect tmp_default.
So it would seem that the network is set up correctly:
[
{
"Name": "tmp_default",
"Id": "75ab7c89cb5a80aa7eddd7c5a3f7f4aafb911cfe96a923ad3db2219552366fd7",
"Created": "2020-02-04T16:55:49.131485109Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"6ae3db08be2ed22245a173a677ae1b0f28eca878aa84e43744a320589cbda5af": {
"Name": "mongo",
"EndpointID": "b399bb72f28b6d47a93927712a665dcc725d27a6ba2ee432e715db00c9cbc835",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"fa7e4066e436181ce2991e048790f8de518af31fb97cf9351316ff8f41824449": {
"Name": "api",
"EndpointID": "bf8d071683bfa4ecbd215f3dd534d0e278702ed4377552ef242e5c65b01c3fa1",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"fb544ef5389afd74d53f45d6de968008632b65340f161527a9c7aa4214aa7674": {
"Name": "web",
"EndpointID": "2dc3e8a452c241916a2e9f7e25b33ee7997fe2d25ec3543e5d34e888e50d905c",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "tmp",
"com.docker.compose.version": "1.21.2"
}
}
]
In the network tab, the request shows as failed, which would indicate to me that the request is failing to leave the container at all.
Your compose file creates a docker network. You're running "web" in a container on that network. All containers in the docker network can access other containers via the hostname. However, the browser is running on your host machine. Your host machine is not in the docker network. Therefore your browser won't be able to access the api container via the hostname. If you type http://api:4505/user/sign-in into the browser url bar, it's doing the same thing, and you'll also get an error.
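In other words, the browser has to use an address it can actually reach: the host-published port rather than the container name. A sketch, assuming the page is opened on the same machine that runs Docker:
http://api:4505/user/sign-in          # only resolves inside the docker network
http://localhost:4505/user/sign-in    # works from the browser (or http://<host-ip>:4505/...)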

Docker swarm network not recognizing service/container on worker node. Using Traefik

I'm trying to test out a Traefik load-balanced Docker Swarm and added a blank Apache service to the compose file.
For some reason I'm unable to place this Apache service on a worker node. I get a 502 Bad Gateway error unless it's on the manager node. Did I configure something wrong in the YML file?
networks:
  proxy:
    external: true

configs:
  traefik_toml_v2:
    file: $PWD/infra/traefik.toml

services:
  traefik:
    image: traefik:1.5-alpine
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 5s
      labels:
        - traefik.enable=true
        - traefik.docker.network=proxy
        - traefik.frontend.rule=Host:traefik.example.com
        - traefik.port=8080
        - traefik.backend.loadbalancer.sticky=true
        - traefik.frontend.passHostHeader=true
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD/infra/acme.json:/acme.json
    networks:
      - proxy
    ports:
      - target: 80
        protocol: tcp
        published: 80
        mode: ingress
      - target: 443
        protocol: tcp
        published: 443
        mode: ingress
      - target: 8080
        protocol: tcp
        published: 8080
        mode: ingress
    configs:
      - source: traefik_toml_v2
        target: /etc/traefik/traefik.toml
        mode: 444

  server:
    image: bitnami/apache:latest
    networks:
      - proxy
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == worker
      restart_policy:
        condition: on-failure
      labels:
        - traefik.enable=true
        - traefik.docker.network=proxy
        - traefik.port=80
        - traefik.backend=nerdmercs
        - traefik.backend.loadbalancer.swarm=true
        - traefik.backend.loadbalancer.sticky=true
        - traefik.frontend.passHostHeader=true
        - traefik.frontend.rule=Host:www.example.com
You'll see I've enabled swarm and everything
The proxy network is an overlay network and I'm able to see it on the worker node:
ubuntu@staging-worker1:~$ sudo docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
f91525416b42   bridge            bridge    local
7c3264136bcd   docker_gwbridge   bridge    local
7752e312e43f   host              host      local
epaziubbr9r1   ingress           overlay   swarm
4b50618f0eb4   none              null      local
qo4wmqsi12lc   proxy             overlay   swarm
ubuntu@staging-worker1:~$
And when I inspect that network ID
$ docker network inspect qo4wmqsi12lcvsqd1pqfq9jxj
[
{
"Name": "proxy",
"Id": "qo4wmqsi12lcvsqd1pqfq9jxj",
"Created": "2018-02-06T09:40:37.822595405Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"1860b30e97b7ea824ffc28319747b23b05c01b3fb11713fa5a2708321882bc5e": {
"Name": "proxy_visualizer.1.dc0elaiyoe88s0mp5xn96ipw0",
"EndpointID": "d6b70d4896ff906958c21afa443ae6c3b5b6950ea365553d8cc06104a6274276",
"MacAddress": "02:42:0a:00:00:09",
"IPv4Address": "10.0.0.9/24",
"IPv6Address": ""
},
"3ad45d8197055f22f5ce629d896236419db71ff5661681e39c50869953892d4e": {
"Name": "proxy_traefik.1.wvsg02fel9qricm3hs6pa78xz",
"EndpointID": "e293f8c98795d0fdfff37be16861afe868e8d3077bbb24df4ecc4185adda1afb",
"MacAddress": "02:42:0a:00:00:18",
"IPv4Address": "10.0.0.24/24",
"IPv6Address": ""
},
"735191796dd68da2da718ebb952b0a431ec8aa1718fe3be2880d8110862644a9": {
"Name": "proxy_portainer.1.xkr5losjx9m5kolo8kjihznvr",
"EndpointID": "de7ef4135e25939a2d8a10b9fd9bad42c544589684b30a9ded5acfa751f9c327",
"MacAddress": "02:42:0a:00:00:07",
"IPv4Address": "10.0.0.7/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4102"
},
"Labels": {},
"Peers": [
{
"Name": "be4fb35c80f8",
"IP": "manager IP"
},
{
"Name": "4281cfd9ca73",
"IP": "worker IP"
}
]
}
]
You'll see Traefik, Portainer, and Visualizer are all present, but not the Apache container on the worker node.
Inspecting the network on the worker node
$ sudo docker network inspect qo4wmqsi12lc
[
{
"Name": "proxy",
"Id": "qo4wmqsi12lcvsqd1pqfq9jxj",
"Created": "2018-02-06T19:53:29.104259115Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"c5725a332db5922a16b9a5e663424548a77ab44ab021e25dc124109e744b9794": {
"Name": "example_site.1.pwqqddbhhg5tv0t3cysajj9ux",
"EndpointID": "6866abe0ae2a64e7d04aa111adc8f2e35d876a62ad3d5190b121e055ef729182",
"MacAddress": "02:42:0a:00:00:3c",
"IPv4Address": "10.0.0.60/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4102"
},
"Labels": {},
"Peers": [
{
"Name": "be4fb35c80f8",
"IP": "manager IP"
},
{
"Name": "4281cfd9ca73",
"IP": "worker IP"
}
]
}
]
It shows up in the network's container list but the manager node containers are not there either.
Portainer is unable to see the apache site when it's on the worker node as well.
This problem is related to this: Creating new docker-machine instance always fails validating certs using openstack driver
Basically the answer is:
It turns out my hosting service locked down everything other than 22, 80, and 443 in the OpenStack Security Group Rules. I had to add a 2376 TCP ingress rule for docker-machine's commands to work.
It helps explain why docker-machine ssh worked but not docker-machine env.
You should look at https://docs.docker.com/datacenter/ucp/2.2/guides/admin/install/system-requirements/#ports-used and make sure the required ports are all open.
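Beyond 2376 for docker-machine, Docker's swarm documentation lists the ports that must be open between manager and worker nodes for overlay networking: 2377/tcp (cluster management), 7946/tcp and 7946/udp (node-to-node communication), and 4789/udp (VXLAN overlay traffic). A sketch of opening them, assuming ufw is the firewall in use (adjust for security groups as needed):
sudo ufw allow 2377/tcp    # cluster management
sudo ufw allow 7946/tcp    # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp    # overlay (VXLAN) traffic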
