Docker - host's localhost times out on assigned docker port

I recently installed a new Linux (Ubuntu 18.04) from scratch on my computer, and now I run a Postgres Docker container via:
sudo docker run --name oes-dev -p 5440:5432 -d kartoza/postgis:9.6-2.4
But now curl --verbose localhost:5440 returns:
* Rebuilt URL to: localhost:5440/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 5440 (#0)
> GET / HTTP/1.1
> Host: localhost:5440
> User-Agent: curl/7.61.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
I googled for more than half an hour without success. I'm pretty sure it's a networking issue (after all, the same image is running on a remote server and I can curl that one just fine), but I don't have the slightest idea where to start.
Can someone point me in the right direction, please?
Thanks
Nils
EDIT
sudo docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1beff7b6aaf3 kartoza/postgis:9.6-2.4 "/bin/sh -c /docker-…" About an hour ago Up About an hour 0.0.0.0:5440->5432/tcp oes-dev
EDIT2 sudo docker network inspect bridge:
[
{
"Name": "bridge",
"Id": "5d3e43f0716aec7c29a56eb7f74e396ac72e010cc760078633549c1901f488cc",
"Created": "2018-10-11T18:26:52.277074856+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"1beff7b6aaf32b359fa85901db7efa65c7090117fdde93d81d665231f8e32960": {
"Name": "oes-dev",
"EndpointID": "42dab201b145ff71b81e48618ec80cc8124625c89169d8932fb97a150f716e2c",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
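Since Postgres speaks its own wire protocol rather than HTTP, a "Connection reset by peer" from curl is not conclusive on its own. A minimal diagnostic sketch, assuming the postgresql-client tools (pg_isready, psql) are installed on the host; the database user below is only a placeholder for whatever the image was configured with:
sudo docker port oes-dev                        # should show 5432/tcp -> 0.0.0.0:5440
sudo ss -tlnp | grep 5440                       # is anything on the host listening on 5440?
pg_isready -h 127.0.0.1 -p 5440                 # protocol-level "accepting connections" check
psql -h 127.0.0.1 -p 5440 -U <db-user> postgres # replace <db-user> with the image's configured user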

Related

ping: proxy: Temporary failure in name resolution in Docker container network -- "proxy" is the name of a container running in Docker

I have two containers, proxy and my_ngnix, running inside Docker.
Both containers are attached to the same network, "bridge", as shown below:
C:\> docker network inspect bridge
[
{
"Name": "bridge",
"Id": "82ffe522177d113af71d150c96b5e43df8946b9f17f901152cc2b4b96caf313a",
"Created": "2022-12-25T14:38:40.7492988Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"008e6142e13e91624e89b26ba697bff60765965a09daedafa6db766f30b6beb9": {
"Name": "proxy",
"EndpointID": "ed0536b6b97d9ad00b8deeb8dc5ad6f91e9809af3fc4032dfca4abd02760cc71",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"e215ead97a7e6580d3f6fec0f25790771af9f878b4194d61fa041b218a2117bf": {
"Name": "my_ngnix",
"EndpointID": "f0e7a6e824c18587a1cb32de89dbc8a1ec2fa62dcc7fe38516d294d7fdb19606",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
The moment I run the command
C:\> docker container exec -it my_ngnix ping proxy
it shows me the error: ping: proxy: Temporary failure in name resolution
Can anyone help me here?
I have installed all the required updates in the container so that I can use the ping command.
Why can't the containers ping each other by container name? The same command works with an IP address.
I tried the same command using the IP address, as below:
docker container exec -it my_ngnix ping 172.17.0.3
It works; however, it won't work with the container name.
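For what it's worth, the default bridge network does not offer DNS-based resolution of container names; only user-defined bridge networks do. A minimal sketch of that setup (the network name and images are placeholders, and it assumes ping is installed in the image, as noted above):
C:\> docker network create my-net
C:\> docker run -d --name my_ngnix --network my-net nginx
C:\> docker run -d --name proxy --network my-net nginx
C:\> docker container exec -it my_ngnix ping proxy
The name now resolves through Docker's embedded DNS server (127.0.0.11) inside the container.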

2 containers cannot talk to each other

I have two containers that need to talk to each other. The first container simulates a sendmail daemon to mock out sending email. The second is LocalStack, mocking out sending an email alert due to a CloudWatch alarm. Apparently I am the only person in the whole world having this problem. The LocalStack people are at a complete loss as to why I cannot have their pro samples talk to SMTP4DEV.
OS = latest Mac OS Monterey 12.6
container 1 (smtp4dev to simulate sending email):
docker run --rm -it -p 3000:80 -p 2525:25 rnwood/smtp4dev
I have a Python program that creates a mail message, and when it sends, the message shows up in the smtp4dev container.
container 2:
export SMTP_HOST=host.docker.internal:2525
DEBUG=1 DNS_ADDRESS=127.0.0.1 LOCALSTACK_API_KEY=####### SMTP_HOST=host.docker.internal:2525 localstack start
Then I run the code at https://github.com/localstack/localstack-pro-samples/tree/master/cloudwatch-metrics-aws. The log in the container shows that it is trying to send a message to the SMTP4DEV container, but it fails.
Both containers are in the bridge network. I would think that, being in the same network, they should be able to talk to each other:
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "150e446de01292139f0ff57b46cfbc1ab5b091ab589911cb88f3ee3abda983be",
"Created": "2022-10-11T18:37:05.724386042Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"5be878e9738953a0e7047da51bc855c80c939d6c21bc5fbbedf61d4e2ddfb6e2": {
"Name": "localstack_main",
"EndpointID": "455353f471e1c9f80fcba66216c5c5face8a07bab9b6b66f9da39fecc221d5b4",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"f16d5edf8b42286ac3484e307761c0bd63a49532f5f019321587c8a9588eb942": {
"Name": "goofy_haibt",
"EndpointID": "8ee0ab91dc3a3c308063b0da8e806c4b0a26a8297e6cc4549ba6dfd12d51a1fa",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
I have a workaround: the option LAMBDA_DOCKER_NETWORK can be used to set the network from LocalStack.
DEBUG=1 DNS_ADDRESS=127.0.0.1 LOCALSTACK_API_KEY=dddddd SMTP_HOST=host.docker.internal:2525 LAMBDA_DOCKER_NETWORK=host localstack start
The real solution would be to use the "DOCKER_FLAGS" environment variable with the value "--add-host=host.docker.internal:host-gateway". I have asked LocalStack how I should tell it to use host.docker.internal.
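A sketch of what that could look like, assuming LocalStack forwards DOCKER_FLAGS to the containers it spawns (untested here; the other variables are the same ones used above, with the API key masked):
DEBUG=1 LOCALSTACK_API_KEY=####### SMTP_HOST=host.docker.internal:2525 DOCKER_FLAGS="--add-host=host.docker.internal:host-gateway" localstack start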

Not able to access container by user-defined network name from another container

I created a new network.
I exposed port 8086 for one of my containers and also published this port, and I ran both containers with --network influx-net.
Checking docker network inspect influx-net gives me:
docker network inspect influx-net
[
{
"Name": "influx-net",
"Id": "76ad2f3ec76e15d88330739c2b3db866a15d5affb0912c64c7ec7615b14e8570",
"Created": "2021-11-09T17:32:42.479637982+09:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"92c0f2c5dc5c75db15e5a5ea0bed6ec5047f0c57dba06283ef6e72c8adcc6a3a": {
"Name": "ultimate_hydroponics",
"EndpointID": "529c2c31baaec7685251a584e8f5643b4669966372b629d67f3a9910da5e809c",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"a678409dbc43685fd089ed29f42601b67c46c20b66c277e461a8fe2adad31b5a": {
"Name": "influxdb",
"EndpointID": "73263b9969dc9c67760ec3aab4ebab75e16763c75cd498828a737d9e56ed34ef",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
So both of my containers are connected to the network influx-net.
But when I try to ping or wget the InfluxDB container (influx-net:8086) from the other container by its network name, I get nothing.
When I do the same with 192.168.111.11:8086 (my PC's IP), I get a response.
What is the problem?
The local PC firewall is off.
# create a network
$ docker network create influx-net
# start influx db attached to the network influx-net
$ docker run -d --name idb --net influx-net influxdb
Now, create a new container attached to the same network and try connecting:
$ docker run -it --net influx-net ubuntu
root@fc26733c33da:/# telnet idb 8086
Trying 172.18.0.2...
Connected to idb.
Escape character is '^]'.
And it's working.
If you need to connect using IP, inspect the network to get container IP and then use the same to connect.
$ docker network inspect influx-net
[
{
"Name": "influx-net",
...
"Containers": {
"844311255fb9dd74fee1df2dc65023533ad961d1dd6345128cc2c92237ba35e0": {
"Name": "idb",
"EndpointID": "b294cb0661f9f1166833f381a02bcbfa531cdeba6657c51b9595814f5ee323be",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16", 👈 # this one here
"IPv6Address": ""
}, ...
},
"Options": {},
"Labels": {}
}
]
$ docker run -it --net influx-net ubuntu
root@360bbca6534d:/# telnet 172.18.0.2 8086
Trying 172.18.0.2...
Connected to 172.18.0.2.
Escape character is '^]'.
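Note that the name that resolves is the container name (idb in this answer, influxdb in the question), not the network name influx-net. A quick sketch with the original container names, assuming wget exists in the ultimate_hydroponics image and the InfluxDB version in use serves the /health endpoint:
$ docker exec -it ultimate_hydroponics wget -qO- http://influxdb:8086/health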

Get the IP address to use for a Docker container on a network

My NAS is where I run my containers. It sits on 192.168.1.23 on my network.
I am running a few containers inside a user-defined network. Here is the docker network inspect output (I've removed the containers manually):
[
{
"Name": "traefik2_proxy",
"Id": "fb2924fe59fbb0436c72f11cb028df832a473a165162ecf08b7e3a946cfa2d3c",
"Created": "2020-05-13T23:23:16.16424119+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.90.0/24",
"Gateway": "192.168.90.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
I have a specific container in that network, at IP address 192.168.90.16, for which I have exposed port 9118 using the following in my docker-compose:
ports:
  - target: 9118
    published: 9118
    protocol: tcp
This is the Portainer screenshot:
I was expecting to be able to connect to that container using 192.168.1.23:9118, but to no avail.
What am I missing? Which setting do I need to change for that container to be reachable at that port on my NAS IP address?
The port that the container was listening on was incorrect. I needed to modify the ports configuration to:
ports:
  - target: 9117
    published: 9118
    protocol: tcp
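A couple of hedged ways to spot that kind of target-port mismatch from the NAS shell (the container and image names below are placeholders):
$ docker port <container-name>                 # published -> target mappings Docker has set up
$ docker exec <container-name> netstat -tln    # ports the process actually listens on, if netstat exists in the image
$ docker image inspect --format '{{.Config.ExposedPorts}}' <image-name>   # ports the image declares with EXPOSE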

Not able to connect to Docker container from host when the network changes (Docker for Windows)

I am not able to connect to the Docker container when there is a network/IP change from office to home, but the same works with localhost or 127.0.0.1.
I am connecting to a VPN to reach the database.
root@1c970ed5cd64:/etc# curl http://localhost:8090/admin/health_check/all
{"health":"passed"}
root@1c970ed5cd64:/etc# curl http://192.168.0.103:8090/admin/health_check/all
curl: (7) Failed to connect to 192.168.0.103 port 8090: Connection refused
When I reinstall Docker, everything works fine again.
I have created a docker-machine with an external virtual switch pointing to the Wi-Fi adapter and ran the container with the host IP.
The same setup works on a different machine. Below, I do not see the DOCKER_HOST IP when I inspect the network:
PS C:\Users\KH1046> docker network inspect -v 6fad6cc07d43
[
{
"Name": "bridge",
"Id": "6fad6cc07d43f576ca4921559346a4919ea6ffd8172726adccb31b4ffaa23acd",
"Created": "2017-08-12T18:07:33.2239814Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"b81f7e258650a150c72ab255c35d321a9a01aea090c6e03ec38de6c373ebfe46": {
"Name": "konymobilefabric_tomcat",
"EndpointID": "ae18e2a9874ab56967d06450ea7b26c7f47c408ff173c1610ac7bd0b322e24d3",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Thanks,
Kusuma
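A hedged first step from the Windows host side, using only standard tooling and the container name from the inspect output above (this assumes port 8090 is the one being published):
PS C:\> docker port konymobilefabric_tomcat     # confirm 8090 is published and bound to 0.0.0.0
PS C:\> netstat -ano | findstr 8090             # confirm the host (or the docker-machine VM) is listening on 8090
PS C:\> curl.exe http://localhost:8090/admin/health_check/all   # if curl is available on the host; a browser works too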
