Address docker container by name when net="host" - docker

I'm having some strange issues with docker's --net="host"
When deploying a container in a custom network, I can address it by its name. But when specifying --net="host" instead of --net="customnetwork", no ports are exposed and I cannot address the container in any way. Here's the host network inspection:
docker network inspect host
[
{
"Name": "host",
"Id": "663f54513dc2b631d6f81457a49374da3bc3193ac0617497c018c47520600e22",
"Scope": "local",
"Driver": "host",
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Containers": {
"836433bfa612f84fa3d73dec0f920e47affc529b64636f5e1bf38a8b7ced2d75": {
"Name": "elasticsearch",
"EndpointID": "79950c18d12d6c7a6715135287d48c7963bed21c7b09b28f6d443b7040eea697",
"MacAddress": "",
"IPv4Address": "",
"IPv6Address": ""
},
"86d4bd0c232a350371131c300c417877e0fb0c54b831f85093b0d2228d9b4f1a": {
"Name": "mongo",
"EndpointID": "492022820dd1e0a634e63c97962575b0bffe1c137163cce4df30aa8da39d1159",
"MacAddress": "",
"IPv4Address": "",
"IPv6Address": ""
},
"e4b9179849afa529a6249b067c27953d0b187afdd5fac76112bb9b1369ae9556": {
"Name": "graylog",
"EndpointID": "e705eede1ddf897c8c4fc45ae6fcf77db8228b1a0c2d318f6045846e52951b93",
"MacAddress": "",
"IPv4Address": "",
"IPv6Address": ""
}
},
"Options": {}
}
]
As you can see, no IP addresses are assigned to the containers, and switching to the host network doesn't seem to have changed anything.
I'm relatively new to docker, and maybe I missed something.
Some info regarding background:
I'm using docker to deploy graylog2. It references mongodb and elasticsearch by their hostnames, and it needs to listen on ports on the host to catch incoming log messages. The ports are configured after deployment.
Docker version: 1.10.3, build 20f81dd
Any help is appreciated.
Thanks for your time.

When you use --net=host, there is no separate network: the container shares the host's network stack. There is no port to forward, since the container's processes listen directly on the host's ports (you can't forward a port to itself, especially while that port is already in use), and there is no separate IP; the container uses the host's IP.
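If the goal is for graylog to reach mongo and elasticsearch by name while still receiving log traffic on host ports, a user-defined network with published ports usually covers both needs. A minimal sketch, assuming the names below (graylog-net is made up, and the image tags and published ports are illustrative, not taken from the question):
docker network create graylog-net
docker run -d --name mongo --net=graylog-net mongo
docker run -d --name elasticsearch --net=graylog-net elasticsearch:2
# publish only the ports graylog should listen on (9000 web UI and 12201/udp GELF are illustrative)
docker run -d --name graylog --net=graylog-net -p 9000:9000 -p 12201:12201/udp graylog2/server
Inside graylog-net the containers resolve each other as mongo and elasticsearch, while the -p flags make the chosen input ports reachable on the host.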

Related

Problems with network connectivity and docker on Synology

I run Docker containers on a Synology NAS. All containers using the host driver have a network connection, but none of the containers using the bridge driver do. This used to work; some months ago one of my experimental containers started having network problems.
Environment:
Synology DS218+
DSM 6.2.3-25426 Update 2
10 GB internal memory
To simplify the description of the problem, I followed the Docker networking tutorial:
docker run -dit --name alpine1 alpine ash
docker run -dit --name alpine2 alpine ash
The containers have 172.17.0.2 and 172.17.0.3 as IP addresses. When I attached to alpine1 I wasn't able to ping alpine2 by its IP address (I used the IP because the default bridge doesn't do name resolution).
I also tried a user-defined bridge:
docker network create --driver bridge test
and connected the containers to this network (and disconnected them from the default bridge network):
bash-4.3# docker network inspect test
[
{
"Name": "test",
"Id": "e0e203000f5cfae8103ed9b80dce113633e0e198c542f943ac2e7026cb684784",
"Created": "2020-12-22T22:47:08.331525073+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"3da4fda1508743b36540d6848c5334c84c3c9c02df88170e617d08f15e85999b": {
"Name": "alpine1",
"EndpointID": "ccf4be3f89c45dc73183210fafcfdafee9bbe30309ef15cf27e37bbb3783ea58",
"MacAddress": "02:42:ac:16:00:03",
"IPv4Address": "172.22.0.3/16",
"IPv6Address": ""
},
"c024024eb5a0e57720f7c2abe76ea5f5396a29eb02addd1f60d23075fcfcad78": {
"Name": "alpine2",
"EndpointID": "d4a8cf285d6dae7e8b7f96426a390b73ea800a72bf1739b0ea88c122de975650",
"MacAddress": "02:42:ac:16:00:02",
"IPv4Address": "172.22.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Also in this case I wasn’t able to ping one container from the other.
Apart from DSM updates I also upgraded the internal memory. I don't think this has anything to do with the problem, but you never know.
I had a similar issue; have you tried disabling the firewall rules on the NAS?
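As a hedged sketch of how one might check this before disabling anything (the UI path is for DSM 6, the subnets are the ones from the question, and your rules and counters will differ), SSH into the NAS and look for rules dropping forwarded bridge traffic:
sudo iptables -nvL FORWARD                                    # look for DROP rules matching 172.17.0.0/16 or 172.22.0.0/16 traffic
sudo iptables -t nat -nvL POSTROUTING | grep -i masquerade    # Docker's MASQUERADE rules should still be present
If disabling the firewall under Control Panel > Security > Firewall makes the bridge containers reachable again, a missing allow rule for the Docker subnets is the likely culprit.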

How to reach internet urls from inside containers

Is there any way to get access to the internet from inside a Docker container?
My containers have to reach some URLs in order to work.
My containers are:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
457c79c831b6 rancher/k3s:v1.17.0-k3s.1 "/bin/k3s agent" 15 hours ago Up 10 minutes k3d-k3s-default-worker-1
b9b39e82a6b2 rancher/k3s:v1.17.0-k3s.1 "/bin/k3s agent" 15 hours ago Up 10 minutes k3d-k3s-default-worker-0
fb795905ec64 rancher/k3s:v1.17.0-k3s.1 "/bin/k3s server --h…" 15 hours ago Up 10 minutes 0.0.0.0:6443->6443/tcp k3d-k3s-default-server
As you can see, they are running the rancher/k3s image.
I took a look at the logs:
E0205 08:07:07.844781 6 kuberuntime_manager.go:729] createPodSandbox for pod "vault-helm-1580888075-agent-injector-b7647bf59-vght5_default(7210fa15-5ba4-4c61-9e2c-2bce05cd3bc0)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
It seems it's not able to reach the registry-1.docker.io registry.
However, I'm able to pull images from my host:
$ docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
bdbbaa22dec6: Pull complete
Digest: sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a
Status: Downloaded newer image for busybox:latest
docker.io/library/busybox:latest
My host works behind a corporate proxy:
$ cat /etc/systemd/system/docker.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://10.49.0.1:8080/"
Environment="HTTPS_PROXY=http://10.49.0.1:8080/"
Environment="NO_PROXY="localhost,127.0.0.1,::1"
Also, I've tried to test whether the containers are able to reach the proxy IP:
$ docker exec -it 457c79c831b6 sh
/ # ping 10.49.0.1
PING 10.49.0.1 (10.49.0.1): 56 data bytes
<no response>
EDIT
/etc/resolv.conf content:
cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "systemd-resolve --status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0
EDIT 2
Network-related inspection of the k3d master container node:
$ docker inspect k3d-k3s-default-server | grep -i networks -A10
"NetworkSettings": {
"Bridge": "",
"SandboxID": "57705be8c175394ac122b95f070321dbe48d4c7b7752482391fc243562babb75",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"6443/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "6443"
--
"Networks": {
"k3d-k3s-default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"k3d-k3s-default-server",
"fb795905ec64"
],
"NetworkID": "337e73b268477428e97798665dd8013fd1e17d2003e33dcce694ab78f7f8b4bb",
"EndpointID": "a35a783664dff4d68d199c6e23cd6d2c5a7cd0eac7a5f4b1691d524befe4ec01",
"Gateway": "172.18.0.1",
EDIT 3
$ docker network inspect k3d-k3s-default
[
{
"Name": "k3d-k3s-default",
"Id": "337e73b268477428e97798665dd8013fd1e17d2003e33dcce694ab78f7f8b4bb",
"Created": "2020-02-04T17:40:01.13490488+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"457c79c831b6a76ae9b78cf360ae437eed04b18bd18429ac2e8436801ba0f4f7": {
"Name": "k3d-k3s-default-worker-1",
"EndpointID": "af38a2ecd618cf31df3dd4c88dea58ddc54de621e580934eb308105835f549d1",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"b9b39e82a6b2ef0863cbc8ed9f09cabbbcf8618fc14a2877feac9218b6803575": {
"Name": "k3d-k3s-default-worker-0",
"EndpointID": "87aacc1963289bca9097586cfc28fa17c7a98ee7716d5918a4c83143c35c8b00",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
},
"fb795905ec64f99aac5ed1ad654d3e44a73e702327d15a91e4f60df4e5d03724": {
"Name": "k3d-k3s-default-server",
"EndpointID": "a35a783664dff4d68d199c6e23cd6d2c5a7cd0eac7a5f4b1691d524befe4ec01",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"app": "k3d",
"cluster": "k3s-default"
}
}
]

Docker: requests between containers in one network

I have 2 containers, a backend and a frontend. I run them on a remote server with these commands:
docker run -p 3000:3000 xpendence/api-checker:0.0.1
docker run -p 8099:8099 --name rebounder-backend-0017a xpendence/rebounder-chain-backend:0.0.17
As the documentation says, containers connect to the 'bridge' network by default, and I see these containers inside it:
# docker network inspect bridge
[
{
"Name": "bridge",
"Id": "27f9d6240b4022b6ccbfff93daeff32d2639aa22f7f2a19c9cbc21ce77b435",
"Created": "2019-05-12T12:26:35.903309613Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"82446be7a9254c79264d921059129711f150a43ac412700cdc21eb5312522ea4": {
"Name": "rebounder-backend-0017a",
"EndpointID": "41fb5be38cff7f052ebbbb9d31ee7b877f664bb620b3063e57cd87cc6c7ef5c9",
"MacAddress": "03:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"da82a03c5d3bfe26cbd750da7f8872cf22dc9d43117123b9069e9ab4e17dbce6": {
"Name": "elastic_galileo",
"EndpointID": "13878a6db60ef854dcbdf6b7e729817a1d96fbec6364d0c18d7845fcbc040222",
"MacAddress": "03:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
I send requests from the frontend to the backend, but they do not reach it:
GET http://localhost:8099/log net::ERR_CONNECTION_REFUSED
GET http://172.17.0.2:8099/log net::ERR_ADDRESS_UNREACHABLE
GET http://172.17.0.2/16:8099/log net::ERR_ADDRESS_UNREACHABLE
GET http://0.0.0.0:8099/log net::ERR_CONNECTION_REFUSED
Please give me advice on how to solve this problem.
Requests to the backend from outside are OK.
Although your two containers are attached to the same default bridge, that doesn't mean they can reach each other by name.
In the past the suggestion was to use --link to make containers talk to each other directly without the host participating, but that option is now deprecated.
Instead, you need to use a user-defined bridge:
Containers connected to the same user-defined bridge network automatically expose all ports to each other.
User-defined bridges provide automatic DNS resolution between containers.
Steps as follows:
docker network create my-net
docker run --network my-net -p 3000:3000 xpendence/api-checker:0.0.1
docker run --network my-net -p 8099:8099 --name rebounder-backend-0017a xpendence/rebounder-chain-backend:0.0.17
For details, refer to the official guide.
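To verify the setup, a quick sketch (assuming curl exists inside the frontend image; replace <frontend-container> with the ID or name of the api-checker container) is to call the backend by its container name from inside the other container:
docker exec <frontend-container> curl http://rebounder-backend-0017a:8099/log
Note that this name-based resolution only works for requests made from inside a container; a browser on a client machine still has to use the host's address and the published port, e.g. http://<host>:8099/log.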

containers in the same network unable to communicate

I have a Docker bridge network with 2 containers attached:
a Node.js server running on port 3333
a Flask server running on port 5000
The network has this configuration:
[
{
"Name": "mynetwork",
"Id": "f94f76533b065d39515b65d20b8645c22617be51ec9335fcfad8ce707ca48841",
"Created": "2019-02-20T17:17:29.029434324+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "10.1.0.0/16",
"Gateway": "10.1.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"Containers": {
"c8084141e36c756710cbfa020f664127f234e407986362331ab127d415c9b074": {
"Name": "nodeContainer",
"EndpointID": "e25f8797c1b7488d7c3810d8f38c4b3dea6b9f19f17558a164b710015fdd9e1a",
"MacAddress": "02:42:0a:01:00:03",
"IPv4Address": "10.1.0.3/16",
"IPv6Address": ""
},
"f9c582d031515f4bba910286118df806a6a2b04a36917234eca09fdf335d4457": {
"Name": "flaskContainer",
"EndpointID": "fbf053f97acc7b9491c536966b640862d366d1599fbfb400915cd8bc26b04f6a",
"MacAddress": "02:42:0a:01:00:02",
"IPv4Address": "10.1.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Normally, those 2 containers communicate (nodeContainer makes requests to http://flaskContainer:5000), but this stopped working after setting a different subnet and gateway (because of external network constraints).
In particular, I get an error like ETIMEDOUT 10.1.0.2:3333. This makes me think the name is resolved correctly, but for some reason there is no answer (and indeed flaskContainer logs nothing).
As additional information:
docker exec flaskContainer curl flaskContainer
docker exec nodeContainer curl nodeContainer
obviously do not work (Failed to connect to flaskContainer port 80).
docker exec flaskContainer curl flaskContainer:5000
docker exec nodeContainer curl nodeContainer:3333
correctly give results.
docker exec flaskContainer curl nodeContainer:3333
docker exec nodeContainer curl flaskContainer:5000
time out.
Do you have any idea what the reason could be? How can I solve this?
Thank you

Run multiple containers on same docker network localhost

I want to connect from my app to mongodb on localhost, so they need to share the same localhost address.
So the question is: can two containers share their localhost, or must the localhost IP be different for each container?
I'm doing this for test-environment purposes, so I don't want an in-memory database, a changed Mongo URI, or any other solution. I just want to connect from A to B via localhost.
To run my network and containers I type:
docker network create --driver bridge isolated_nw
docker run --name mongodb -d -p 27017:27017 --network=isolated_nw mongo:3.4.2
docker run --name roomate-profiles --network=isolated_nw -d -p 8080:8080 sovas/roomate-profiles
My custom docker network:
[
{
"Name": "isolated_nw",
"Id": "3efd6831784c2a8c9e9ea345144fcc6b9180e70c0e1b4b5d1a72219051b24e67",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1/16"
}
]
},
"Internal": false,
"Containers": {
"57d4e2fb1f0c8d776329fd6ce82e5905df00e261ab6923595578dcb35913b03e": {
"Name": "roomate-profiles",
"EndpointID": "5a8158dc1aba6958218d1cca3c98ca911ab2cfa73be839ceece2e7819b244c91",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"8fa815735d7ebb77434f8abf11e58f18faeb5d67e2743903d81f4600bd558c35": {
"Name": "mongodb",
"EndpointID": "7b7a7ed1ad08bbe381fb6d66c6e9fea66ee9b7c581f530bdf4d82f0741bff04b",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
application.properties
spring.data.mongodb.uri=mongodb://localhost:27017/admin
localhost won't work, since inside the roomate-profiles container it refers to that container itself. But you can do
spring.data.mongodb.uri=mongodb://mongodb:27017/admin
since both containers are connected to the same network. There is also no need to map the mongodb port to the host (unless you need it for something else).
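As a quick sanity check (a sketch; it assumes the roomate-profiles image ships a shell with getent, and it relies on the mongo shell bundled in the official mongo image), you can confirm that the name resolves and that mongod answers:
docker exec roomate-profiles getent hosts mongodb    # should print the container's address, e.g. 172.18.0.2
docker exec mongodb mongo --eval 'db.runCommand({ ping: 1 })'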
