What is the IP of bridge localhost for Docker?

I am dockerizing my application. I have two containers now. One of them wants to talk to the other; in its config I have "mongo": "127.0.0.1". I suppose they should talk through the bridge network:
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.1/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "9001"
        },
        "Labels": {}
    }
]
Should I now change "mongo": "127.0.0.1" to "mongo": "0.0.0.0"?

You can check a container's IP:
$ docker inspect <container_name> -f "{{json .NetworkSettings.Networks}}"
You will find the IPAddress attribute in the JSON output.
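If you only need the address itself, a Go template can pull it out directly (a minimal sketch; my-container is a placeholder name):
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-container
This prints something like 172.17.0.2.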

Yes, you should use a bridge network. The default "bridge" network can be used, but it won't give you DNS resolution; see https://docs.docker.com/engine/userguide/networking/#user-defined-networks for details.
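A minimal sketch of the user-defined approach (the network and image names here are placeholders, not from the question):
$ docker network create app-net
$ docker run -d --name mongo --network app-net mongo
$ docker run -d --name app --network app-net my-app-image
With both containers on app-net, the app can reach MongoDB by name, so the config becomes "mongo": "mongo" instead of an IP address.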

The easiest way is to use the --link option, which avoids too many changes (note that --link is now considered legacy; user-defined networks are preferred).
For example, --link mongo01:mongo instructs Docker to use the container named mongo01 as a linked container, and to name it mongo inside your application container.
So in your application you can use mongo:27017 without making any changes.
Refer to this post for more details:
https://www.thachmai.info/2015/05/10/docker-container-linking-mongo-node/
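For illustration, the linked setup could look like this (a sketch; the container and image names are assumptions):
$ docker run -d --name mongo01 mongo
$ docker run -d --link mongo01:mongo my-app-image
Inside the application container, the hostname mongo then resolves to mongo01's IP, so mongo:27017 works unchanged.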

Related

Rust TCP Server in Docker Container Not Working

I'm struggling to understand why the TCP server can't accept outside requests.
My server:
use std::io::Write;
use std::net::TcpListener;
use std::thread;
fn main() {
    let listener = TcpListener::bind("0.0.0.0:8080").unwrap();
    println!("listening started, ready to accept");
    for stream in listener.incoming() {
        thread::spawn(|| {
            let mut stream = stream.unwrap();
            stream.write(b"Hello World\r\n").unwrap();
        });
    }
}
Dockerfile
FROM rust:1.31
WORKDIR /usr/src/proto-rust
COPY . .
RUN cargo install --path .
EXPOSE 8080
CMD ["cargo", "run"]
Running locally, this works:
nc 0.0.0.0 8080
After running the following:
docker run --rm -p 8080:8080 --name my-running-app proto-rust
and checking the Docker bridge:
docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "78d335687fcd96ad1051ca17662024708dff9db4d3043b787a43d29edbb8ff58",
        "Created": "2022-09-03T01:53:48.606052848Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "21a4f5fec2cb2a880ed01c044ccaf001e120c9b507d34fabf93c9da21957d558": {
                "Name": "my-running-app",
                "EndpointID": "34a003ea12612d479e47a61478f3a445455f919a66350b9b5fde651e6cb8a12b",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
Running either of the following does not work:
nc 172.17.0.2 8080
nc localhost 8080
I'm not sure where to go from here. I think the IP address for the Docker container is wrong, because netcat can't see the open port? I'm not sure. Why do I even need the extra address as well as the network bridge? It's wild that Docker can't just expose on localhost.
I had originally bound the server to 127.0.0.1, and then changed it to 0.0.0.0, since a server bound only to 127.0.0.1 inside the container is unreachable through the port mapping. However, I did not rerun docker build, which meant the update never applied to the Docker image. After running
docker build -t proto-rust .
the following works:
nc localhost 8080
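For completeness, the full working sequence combines the commands from the question (rebuild first, then run and test):
$ docker build -t proto-rust .
$ docker run --rm -p 8080:8080 --name my-running-app proto-rust
$ nc localhost 8080
Hello World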

Connection closed by foreign host when connecting to docker container via tcp

I have a weird problem connecting to Docker containers via TCP.
My OS is Ubuntu 20.04
What I do:
I start my web server in a container. I have tried the official PostgreSQL image and the problem stays the same, so the problem is probably not my image.
It listens on 0.0.0.0, port 8080. I have changed the port several times, so it's not specific to 8080.
I forward container port 8080 to port 8080 on the host. I have tried forwarding to different ports and the problem stays.
Here's the command
docker run --rm --name my-web-container -p8080:8080 my-web-image
Then I try wget localhost:8080, and it hangs for a while and then says
Connection closed by foreign host.
telnet localhost 8080 connects for a while and then says the same thing:
# telnet localhost 8080
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
If I wget localhost:8080 from within the container, everything is fine.
If I add --net=host to the command starting the container, the problem goes away.
So I suppose there is something wrong with the Docker network. I could always use --net=host, but that obviously creates problems.
This appeared out of the blue; I didn't do anything: no system configuration, no installing new software.
I have tried
docker network inspect bridge
That gave the following:
[
    {
        "Name": "bridge",
        "Id": "0e99160be59fd6417984db68695f6e6d4fa016e1d75a26734bccaff427ea6468",
        "Created": "2022-06-08T11:16:47.413799955+03:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
I paid attention to this part:
"Config": [
    {
        "Subnet": "172.17.0.0/16",
        "Gateway": "172.17.0.1"
    }
]
And 172.17.0.0 looks suspiciously like an IP address my internet provider gave me.
I thought maybe I should give the bridge network another address range, so I changed /etc/docker/daemon.json from
{
    "experimental": true
}
to
{
    "experimental": true,
    "default-address-pools": [
        {
            "base": "172.26.0.0/16",
            "size": 24
        }
    ]
}
And then ran service docker restart
After that the problem disappeared.
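If you suspect this kind of subnet overlap, you can compare the bridge subnet against the host's routing table before changing anything (a sketch; the 172.17 pattern assumes the default bridge subnet):
$ docker network inspect bridge -f '{{(index .IPAM.Config 0).Subnet}}'
$ ip route | grep 172.17
If the same range shows up in a route that doesn't belong to docker0, traffic for the containers can be misrouted, which matches the symptoms above.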

Unexpected behavior with named Docker NFS volumes in a Swarm cluster

I'm new to Docker Swarm and I'm managing all config via Ansible. I have a four-node Swarm cluster with the following members:
- manager (also the NFS exporter for volumes shared across the cluster)
- cluster0
- cluster1
- cluster2
cluster[0-2] are workers in the swarm.
I'm configuring a service which makes use of a named volume media, which is an NFS export from manager. The service is constrained to the worker nodes. Once a service is deployed that makes use of this volume, I'd expect to see the following output of docker volume inspect media from any of the worker nodes with the replicated service:
user#cluster1:~ $ docker volume inspect media
[
    {
        "CreatedAt": "2021-01-27T12:23:55-08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/media/_data",
        "Name": "media",
        "Options": {
            "device": ":/path/media",
            "o": "addr=manager.domain.com,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
and indeed when I run docker volume inspect media on the swarm manager, I see:
user#manager:~$ docker volume inspect media
[
    {
        "CreatedAt": "2021-11-22T07:58:55-08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/media/_data",
        "Name": "media",
        "Options": {
            "device": ":/path/media",
            "o": "addr=manager.domain.com,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
However, this is the output I see from the nodes with the replicated service:
user#cluster1:~ $ docker volume inspect media
[
    {
        "CreatedAt": "2021-11-22T07:59:02-08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/media/_data",
        "Name": "media",
        "Options": null,
        "Scope": "local"
    }
]
As a result, I'm unable to access the NFS-exported files. I'm not entirely sure how to troubleshoot. Is this a limitation of Swarm and if so, how are others working around this issue? If not, what am I missing?
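For context, named volumes have local scope, so each node creates its own copy of the volume the first time a task is scheduled there; if Swarm creates it implicitly without options, the NFS settings are lost, which matches the "Options": null seen above. One workaround is to pre-create the volume with identical options on every worker, a sketch using the options from the inspect output:
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=manager.domain.com,rw \
    --opt device=:/path/media \
    media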

Docker bridge network with swarm scope does not accept subnet and driver options

I want to control which external IP is used to send traffic from my swarm containers; this can easily be done with a bridge network and iptables rules.
This works fine for local-scoped bridge networks:
docker network create --driver=bridge --scope=local --subnet=172.123.0.0/16 -o "com.docker.network.bridge.enable_ip_masquerade"="false" -o "com.docker.network.bridge.name"="my_local_bridge" my_local_bridge
and on iptables:
sudo iptables -t nat -A POSTROUTING -s 172.123.0.0/16 ! -o my_local_bridge -j SNAT --to-source <my_external_ip>
This is the output of docker network inspect my_local_bridge:
[
    {
        "Name": "my_local_bridge",
        "Id": "...",
        "Created": "...",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.123.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            ...
        },
        "Options": {
            "com.docker.network.bridge.enable_ip_masquerade": "false",
            "com.docker.network.bridge.name": "my_local_bridge"
        },
        "Labels": {}
    }
]
But if I try to attach a swarm container to this network I get this error:
network "my_local_bridge" is declared as external, but it is not in the right scope: "local" instead of "swarm"
Alright, great, let's switch the scope to swarm then, right? Wrong, oh so wrong.
Creating the network:
docker network create --driver=bridge --scope=swarm --subnet=172.123.0.0/16 -o "com.docker.network.bridge.enable_ip_masquerade"="false" -o "com.docker.network.bridge.name"="my_swarm_bridge" my_swarm_bridge
Now let's check docker network inspect my_swarm_bridge:
[
    {
        "Name": "my_swarm_bridge",
        "Id": "...",
        "Created": "...",
        "Scope": "swarm",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            ...
        },
        "Options": {},
        "Labels": {}
    }
]
I can now attach swarm containers to it just fine, but the options are not set, and the subnet is not what I defined...
How can I set these options for "swarm"-scoped bridge networks? Or, how can I set iptables to use a defined external IP if I can't set com.docker.network.bridge.enable_ip_masquerade to false?
Do I need to make a script to check the subnet assigned and manually delete the iptables MASQUERADE rule?
thanks guys
I'm pretty sure you can't use the bridge driver with swarm; you should use the overlay driver instead.
From the Docker documentation:
Bridge networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level, or you can use an overlay network.
I might not understand your particular use case though ...
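If overlay does fit the use case, the equivalent network creation might look like this (a sketch; note that the bridge-specific -o options have no overlay counterpart):
$ docker network create --driver overlay --attachable --subnet=172.123.0.0/16 my_swarm_overlay
Traffic leaving the swarm toward external hosts goes out via the per-node docker_gwbridge network, which is where masquerading is applied.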

Docker: requests between containers in one network

I have 2 containers, backend and frontend. I run them on a remote server with these commands:
docker run -p 3000:3000 xpendence/api-checker:0.0.1
docker run -p 8099:8099 --name rebounder-backend-0017a xpendence/rebounder-chain-backend:0.0.17
As the documentation says, containers connect to the 'bridge' network by default, and I see these containers inside it:
# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "27f9d6240b4022b6ccbfff93daeff32d2639aa22f7f2a19c9cbc21ce77b435",
        "Created": "2019-05-12T12:26:35.903309613Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "82446be7a9254c79264d921059129711f150a43ac412700cdc21eb5312522ea4": {
                "Name": "rebounder-backend-0017a",
                "EndpointID": "41fb5be38cff7f052ebbbb9d31ee7b877f664bb620b3063e57cd87cc6c7ef5c9",
                "MacAddress": "03:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "da82a03c5d3bfe26cbd750da7f8872cf22dc9d43117123b9069e9ab4e17dbce6": {
                "Name": "elastic_galileo",
                "EndpointID": "13878a6db60ef854dcbdf6b7e729817a1d96fbec6364d0c18d7845fcbc040222",
                "MacAddress": "03:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
I send requests from the frontend to the backend, but they do not reach it:
GET http://localhost:8099/log net::ERR_CONNECTION_REFUSED
GET http://172.17.0.2:8099/log net::ERR_ADDRESS_UNREACHABLE
GET http://172.17.0.2/16:8099/log net::ERR_ADDRESS_UNREACHABLE
GET http://0.0.0.0:8099/log net::ERR_CONNECTION_REFUSED
Please give me advice on how to solve this problem.
Requests to the backend from outside are OK.
Although your two containers are attached to the same default bridge, this doesn't mean they can reach each other by name.
In the past, the suggestion was to use --link to make containers talk to each other directly without the host participating, but this is now deprecated.
Instead, you need to use a user-defined bridge.
Containers connected to the same user-defined bridge network automatically expose all ports to each other.
User-defined bridges provide automatic DNS resolution between containers.
Steps as follows:
docker network create my-net
docker run --network my-net -p 3000:3000 xpendence/api-checker:0.0.1
docker run --network my-net -p 8099:8099 --name rebounder-backend-0017a xpendence/rebounder-chain-backend:0.0.17
For details, refer to the official guide.
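To verify the user-defined network's DNS from inside the frontend container, something like this should work (a sketch; <frontend-container> is whatever name Docker assigned, e.g. elastic_galileo in the inspect output above, and assumes the image ships wget):
$ docker exec <frontend-container> wget -qO- http://rebounder-backend-0017a:8099/log
Note that this only covers container-to-container calls; requests issued by a browser run outside Docker and must keep using the published port, e.g. http://<server-ip>:8099/log.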
