Can't connect to localhost of the host machine from inside of my Docker container

The question is basic: how do I connect to the localhost of the host machine from inside a Docker container?
I tried answers from this post, using --add-host host.docker.internal:host-gateway or passing --network=host when running my container, but none of these methods seems to work.
I have a simple hello-world webserver up on my machine, and I can see its contents with curl localhost:8000 from the host, but I can't curl it from inside the container. Depending on which solution I used to make localhost available there, I tried curl host.docker.internal:8000, curl localhost:8000, and curl 127.0.0.1:8000 from inside the container, but none of them works and I get a Connection refused error every time.
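For reference, the two attempts looked roughly like this (a minimal sketch; the image and port are the ones from the setup above):
# Attempt 1: map host.docker.internal to the host's gateway
docker run --rm -it --add-host host.docker.internal:host-gateway ubuntu:20.04 bash
# then, inside the container: curl host.docker.internal:8000

# Attempt 2: share the host's network stack with the container
docker run --rm -it --network=host ubuntu:20.04 bash
# then, inside the container: curl localhost:8000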
I asked somebody else to try this out for me on their own machine and it worked for them, so I don't think I'm doing anything wrong.
Does anybody have any idea what is wrong with my containers?
Host machine: Ubuntu 20.04.1
Docker version: 20.10.7
Used image: Ubuntu 20.04 (and i386/ubuntu18.04)

Temporary solution
This does not completely solve the problem for production purposes, but at least it gets localhost working: adding these lines to docker-compose.yml solved my issue for now (source):
services:
  my-service:
    network_mode: host
I am using Apache NiFi to expose Java REST endpoints, with the same Ubuntu and Docker versions, so in my case it looks like this:
services:
  nifi:
    network_mode: host
After changing docker-compose.yml, I recommend stopping the containers, removing them (docker-compose rm; do not use it if you need to keep some containers, in that case use docker container rm container_id instead), and building again with docker-compose up --build.
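In full, that sequence looks something like this (a sketch assuming no containers need to be kept):
docker-compose stop          # stop the running containers
docker-compose rm -f         # remove them (skip if some must be kept)
docker-compose up --build    # rebuild the images and recreate the containers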
In this case, I needed to use another loopback IP to access my service with a browser (NiFi started on a different IP, 127.0.1.1, but works fine as well).
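To verify, you can curl the service from the host (a sketch; NiFi's HTTP port is assumed to be the default 8080):
curl http://127.0.1.1:8080/nifi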
Searching for the problem / deeper into Ubuntu-Docker networking
First, I will write down some commands that may be useful for finding a solution to the Docker-Ubuntu networking issue:
ip a - show all routing, network devices, interfaces and tunnels (mainly I can observe state DOWN for docker0)
ifconfig - list all interfaces
brctl show - Ethernet bridge administration (docker0 has no attached interface / veth pair)
docker network ls - manage Docker networks: names, drivers, scope...
docker network inspect bridge - here I can see that the docker0 bridge has no attached containers; it is an empty and unused bridge
(useful link for ubuntu-docker networking explanation)
I guess the problem lies with the veth pair (see the link above): when docker-compose runs, a new bridge (not docker0) is created and connected to the veth pair in my case, and docker0 goes unused. My guess is that if docker0 were used, host.docker.internal:host-gateway would work. Somehow in Ubuntu networking, docker0 is not used as the default bridge, and this may be what needs to change.
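To check this on a concrete project, you can inspect the bridge that Compose creates (a sketch; Compose names the network <project>_default, and my-project stands in for the real project name):
docker network ls                          # the new bridge shows up here
docker network inspect my-project_default  # shows gateway, subnet and attached containers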
I don't have much time left to dig deeper, but I suppose someone can use this information and resolve the core of the problem later on.

Related

How does Docker process communication between different containers on the default bridge on the same host?

Here is my situation:
First, I run a MySQL container (IP 172.17.0.2) on CentOS.
Then I run a Nacos container on the same host, with the MySQL above as its specified datasource, but I didn't use the IP of the MySQL container; instead, I used the IP of the bridge gateway (172.17.0.1) (both containers are attached to the default bridge).
What surprised me was that Nacos works well; it can query config data from the MySQL container normally.
How did this happen? I have read some documentation but didn't get the answer. It really confused me.
On modern Docker installations, try to avoid using the default bridge network. docker network create a network (it doesn't need any special options, but it does need to be created) and then launch your containers with --net set to that network. If you're using Compose, it creates a user-bridge network named default for you.
On your CentOS host, if you run ifconfig, you should see a docker0 interface with the 172.17.0.1 address. When you launch a container with the docker run -p option, that container is accessible via the first port number on all host interfaces, including the docker0 interface.
Meanwhile, inside a container (on the default bridge network), it sees that same IP address as the normal IPv4 gateway address (try docker run --rm busybox route -n). So, when you connect to 172.17.0.1:3306, you're connecting out to the host, and then connecting to the published port of the database container.
This isn't a totally standard way to connect between containers, though it will work. You should prefer using Docker named networks, which will let you connect to another container using the container's name without manually doing any IP-address lookups. If you really can't move off of the default bridge network, then the standard approach is to --link to the other container, but this entire path is considered outdated.
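A minimal sketch of that named-network approach (the network name appnet, the container names, and the MySQL password are placeholders):
# Create a user-defined bridge network; no special options are needed
docker network create appnet

# Start the database on that network with a known name
docker run -d --net appnet --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.7

# Any container on the same network can now reach it by name, no IP lookups needed
docker run --rm --net appnet busybox ping -c 1 mysql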

Connect Docker container to specific network interface (e.g. eth1)?

I have been here, here, and all over the Docker documentation. I think I need this explained in much simpler terms.
My Docker container needs to be able to do two things:
Connect to a device (a camera) on the host machine
Connect to a specific network interface (eth1) to send data
To satisfy both requirements, I have chosen to run the container as a command issued by a superservice.
My docker-compose.yml looks like this:
version: "3"
services:
  superservice:
    image: docker
    command: docker run -it --device=/dev/vchiq my/image-to-run:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      replicas: 1
      mode: replicated
    stdin_open: true
    tty: true
This way I can still docker stack deploy my service onto a swarm (otherwise, specifying --device doesn't work with swarm mode). When I initiate the swarm I do so with docker swarm init --advertise-addr my:wlan:ip:addr --data-path-addr eth1.
I thought this would route the packets that I'm sending from my container through eth1 to the destination.
But when I tcpdump -i eth1, nothing goes through it; it's all still going through wlan0.
Why is this happening, and how can I fix it?
The way I have found to do this is with a simple sudo route set default eth1. For some reason I thought that setting --data-path-addr=eth1 would also set the network interface I used. Apparently, this is not the case.
I haven't spent much time with networking, though that's on my list of things to learn, so perhaps with some more sophisticated routing I'll eventually find a better solution. For now, I just sudo route delete default eth1 when I want to return to the wlan0 interface.
This is an OK solution for me right now, because when the machine is deployed it only uses the eth1 interface.
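For what it's worth, a rough iproute2 equivalent of that workaround (a sketch; the wlan0 gateway 192.168.1.1 is a placeholder for the real one):
# Route all traffic out of eth1 by default
sudo ip route replace default dev eth1

# Later, restore the wlan0 default route (gateway address is a placeholder)
sudo ip route replace default via 192.168.1.1 dev wlan0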

Docker-Compose multiple networks, make ports available to outside the host

I am currently deploying a Docker network with a backend and a frontend.
All containers are part of a network named basic, and one container should be accessible from outside the host machine.
When using Docker Toolbox on Windows, it works fine: I can access all containers with forwarded ports from outside the host machine:
ports:
  - 8080:8080
My problem is that on Red Hat 7 I haven't found a way to make it accessible without manipulating the iptables so far. I can access all containers with mapped ports from inside my host machine, but to make them accessible from outside the host machine, I need to do:
sysctl net.ipv4.conf.all.forwarding=1
sudo iptables -P FORWARD ACCEPT
I think there should be an easier way to use Docker networks for this, right?
There was an external setting which was continuously resetting the forwarding.
It was nothing directly related to Docker(-Compose).
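If something keeps resetting the forwarding, one way to pin it is a persistent sysctl drop-in (a sketch; the file name is an assumption):
# Persist IPv4 forwarding across reboots and sysctl reloads
echo 'net.ipv4.conf.all.forwarding = 1' | sudo tee /etc/sysctl.d/99-forwarding.conf
sudo sysctl --system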

Docker for windows: how to access container from dev machine (by ip/dns name)

Questions like this seem to have been asked before, but I really don't get it at all.
I have a Windows 10 dev machine host with Docker for Windows installed. Besides other networks, it has a DockerNAT network with IP 10.0.75.1.
I run some containers with docker-compose:
version: '2'
services:
  service_a:
    build: .
    container_name: docker_a
It created some network bla_default, and the container has IP 172.18.0.4. Of course I cannot connect to 172.18.0.4 from the host; it doesn't have any network interface for this.
What should I do to be able to access this container from the HOST machine (by IP, and if possible by some DNS name)? What should I add to my docker-compose.yml, and how do I configure the networks?
To me this should be something basic, but I really don't understand how all this stuff works or how to access the container from the host directly.
Allow access to internal docker networks from dev machine:
route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2
Then use https://github.com/aacebedo/dnsdock to enable DNS discovery.
Tips:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock --name dnsdock --net bridge -p 53:53/udp aacebedo/dnsdock:latest-amd64
Add 127.0.0.1 as a DNS server on the dev machine
Use the labels described in the docs to get pretty DNS names for containers
So the answer to the original question: yes, it can be done.
The easiest option is port mapping: https://docs.docker.com/compose/compose-file/#/ports
just add
ports:
  - "8080:80"
to the service definition in Compose. If your service listens on port 80, requests to localhost:8080 on your host will be forwarded to the container. (I'm using Docker Machine, so my Docker host is on another IP, but I think localhost is how Docker for Windows appears.)
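For example, with the mapping above, this should answer from the dev machine (assuming the service speaks HTTP on container port 80):
# Host port 8080 is forwarded to container port 80
curl http://localhost:8080/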
Treating the service as a single process listening on one (or a few) ports has worked best for me, but if you want to start reading about networking options, here are some places to dig in:
https://docs.docker.com/engine/userguide/networking/
Docker's official page on networking - a very high level introduction, with most of the detail on the default bridge behavior.
http://www.linuxjournal.com/content/concerning-containers-connections-docker-networking
More information on network layout within a docker host
http://www.dasblinkenlichten.com/docker-networking-101-host-mode/
Host mode is kind of mysterious, and I'm curious whether it works similarly on Windows.

How to set up container using Docker Compose to use a static IP and be accessible outside of VM host?

I'd like to specify a static IP address for my container in my docker-compose.yml, so that I can access it e.g. using https://<ip-address> or ssh user@<ip-address> from outside the host VM.
That is, I want to make it possible for other machines on my company network to access the Docker container directly, on a specific static IP address. I do not wish to map specific ports; I wish to be able to access the container directly.
A starting point for the docker-compose.yml:
master:
  image: centos:7
  container_name: master
  hostname: master
Is this possible?
I'm using the Virtualbox driver, as I'm on OS X.
So far it is not possible; it will be in Docker v1.10 (which should be released in a couple of weeks from now).
Edit:
See the PR on GH.
I believe an extra_hosts entry is a solution.
extra_hosts:
  - "somehost:162.242.195.82"
  - "otherhost:50.31.209.229"
See extra_hosts.
Edit:
As pointed out by M. Auzias in the comments, I misunderstood the question. This answer is incorrect.
You could specify the IP address of the container with the --ip parameter when running it, so that the IP is always the same for that container (see the sketch after the quote below). After that you could ssh to your host VM, and then attach to the container.
Otherwise, I'm not sure... Maybe try running the container with --net=host.
From https://docs.docker.com/engine/userguide/networking/dockernetworks/:
The host network adds a container on the host's network stack. You'll find the network configuration inside the container is identical to the host.
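As mentioned above, a minimal sketch of the --ip approach (the network name staticnet and the subnet are assumptions; --ip only takes effect on a user-defined network):
# Create a user-defined network with a known subnet
docker network create --subnet 172.25.0.0/16 staticnet

# Pin the container to a fixed address on that network
docker run -d --net staticnet --ip 172.25.0.10 --name master centos:7 sleep infinity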
