How to set up container using Docker Compose to use a static IP and be accessible outside of VM host? - docker

I'd like to specify a static IP address for my container in my docker-compose.yml, so that I can access it e.g. via https://<ip-address> or ssh user@<ip-address> from outside the host VM.
That is, making it possible for other machines on my company network to access the Docker container directly, on a specific static IP address. I do not wish to map specific ports, I wish to be able to access the container directly.
A starting point for the docker-compose.yml:
master:
  image: centos:7
  container_name: master
  hostname: master
Is this possible?
I'm using the Virtualbox driver, as I'm on OS X.

So far it is not possible; it will be in Docker v1.10 (which should be released in a couple of weeks from now).
Edit:
See the PR on GH.
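For reference, once user-defined networks landed (Compose file format v2), a static address can be requested roughly like this - a sketch, with a made-up network name and subnet. Note this fixes the container's address on the Docker network; other machines on the company network can only reach it if they have a route to that subnet:

```yaml
version: '2'
services:
  master:
    image: centos:7
    container_name: master
    hostname: master
    networks:
      mynet:
        ipv4_address: 172.25.0.10   # fixed address on the user-defined network

networks:
  mynet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.0.0/16
```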

I believe an extra_hosts entry is a solution:
extra_hosts:
  - "somehost:162.242.195.82"
  - "otherhost:50.31.209.229"
See extra_hosts.
Edit:
As pointed out by M. Auzias in the comments, I misunderstood the question. This answer is incorrect.

You could specify the container's IP address with the --ip parameter when running it (note that --ip only works on user-defined networks, not the default bridge), so that the IP would always be the same for that container. After that you could ssh to your host VM and then "attach" to the container.
Otherwise, I'm not sure... Maybe try running the container with --net=host.
From https://docs.docker.com/engine/userguide/networking/dockernetworks/
The host network adds a container on the host's network stack. You'll find the network configuration inside the container is identical to the host.

Related

Issues with Docker networking on a GCP instance

I'm trying to build and run a simple Docker container (using docker-compose to do this) on a GCP Instance (Ubuntu 20.04), and it seems that the container cannot access the internet, unless I run it using
docker run --net=host [...]
or use in my docker-compose.yml something like:
service:
  build:
    ...
    network: host
  network_mode: host
  ...
I'm wondering why a simple Docker container on a standard GCP instance with Ubuntu 20.04 should require specific configuration to access the Internet, and why I see almost no mention of this issue while searching the web.
Am I doing something wrong? Is there a better way to do this?
See Container networking for Docker; the principle applies consistently across other container runtimes too.
Using --net=host or network_mode: host binds container(s) to the host's network.
Rather than broadly publishing all of a container's or service's ports to the host network (and thus making them publicly reachable), you can be more precise with --publish=[HOST-PORT]:[CONTAINER-PORT] or ports to expose container ports as host ports (and remap them if needed).
One of several advantages of the not-published-by-default behavior is that publishing a container's ports to the host - where there is an increased possibility that the service may be reached by undesired actors - requires a deliberate second step.
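As a sketch of the ports form (service name, image, and port numbers are made up), publishing can also bind to a specific host address, so a port is reachable from localhost but not from other machines:

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"            # reachable on any host interface
      - "127.0.0.1:8081:80"  # loopback only: not reachable from other machines
```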

Can't connect to localhost of the host machine from inside of my Docker container

The question is basic: how do I connect to the localhost of the host machine from inside a Docker container?
I tried answers from this post, using --add-host host.docker.internal:host-gateway or --network=host when running my container, but none of these methods seem to work.
I have a simple hello world webserver up on my machine, and I can see its contents with curl localhost:8000 from the host, but I can't curl it from inside the container. I tried curl host.docker.internal:8000, curl localhost:8000, and curl 127.0.0.1:8000 from inside the container (depending on which solution I used to make localhost available there), but none of them work and I get a Connection refused error every time.
I asked somebody else to try this out for me on their own machine and it worked for them, so I don't think I'm doing anything wrong.
Does anybody have any idea what is wrong with my containers?
Host machine: Ubuntu 20.04.01
Docker version: 20.10.7
Used image: Ubuntu 20.04 (and i386/ubuntu18.04)
Temporary solution
This does not completely solve the problem for production purposes, but at least to get localhost working, adding these lines to docker-compose.yml solved my issue for now (source):
services:
  my-service:
    network_mode: host
I am using Apache NiFi to serve Java REST endpoints with the same Ubuntu and Docker versions, so in my case it looks like this:
services:
  nifi:
    network_mode: host
After changing docker-compose.yml, I recommend stopping the containers, removing them (docker-compose rm - do not use this if you need to keep some containers; use docker container rm container_id instead), and building again with docker-compose up --build.
In this case, I needed to use another localhost IP to access my service with a browser (nifi started on another IP - 127.0.1.1 - but works fine as well).
Searching for the problem / deeper into ubuntu-docker networking
First, some commands that may be useful for diagnosing the docker-ubuntu networking issue:
ip a - show all routing, network devices, interfaces and tunnels (mainly I can observe state DOWN for docker0)
ifconfig - list all interfaces
brctl show - ethernet bridge administration (docker0 has no attached interface / veth pair)
docker network ls - list Docker networks - names, drivers, scope...
docker network inspect bridge - I can see the docker0 bridge has no attached containers - an empty, unused bridge
(useful link for ubuntu-docker networking explanation)
I guess that the problem lies within the veth pair (see the link above), because when docker-compose runs, a new bridge is created (not docker0) that is connected to a veth pair in my case, and docker0 is not used. My guess is that if docker0 were used, then host.docker.internal:host-gateway would work. Somehow in Ubuntu networking docker0 is not used as the default bridge, and this maybe should be changed.
I don't have much time left to dig into this; I hope someone can use this information to resolve the core of the problem later on.

How to access host port from docker container which is bound to localhost

I have services defined in my own docker-compose.yaml file, and they
have their own bridged network to communicate with each other.
One of this services needs access to services running on the host machine.
According to this answer I added the following lines to my service within the docker-compose.yaml file:
extra_hosts:
  - "host.docker.internal:host-gateway"
This works, despite the fact that the services running on the host need to bind to 0.0.0.0. If I bind to localhost, I'm not able to access them. But I don't want to expose the port to anyone else.
Is there a way to achieve this with bridged network mode?
I'm using the following versions:
Docker version 20.10.5, build 55c4c88
docker-compose version 1.28.5, build unknown
and I'm running on aarch64
The solution: it was just a misunderstanding on my part, based on other readings, e.g.
another SO article
a baeldung article
As I explicitly defined an additional bridged network within my docker-compose.yaml file, I assumed that I had to bind the service on the host to the IP address of that particular interface (I checked the IP address of the container and then looked up the address in the host's interface list), which was 172.20.0.1.
But docker0 was 172.17.0.1 (which should be the default one).
After binding the service on the host to the docker0 IP address, and adding
extra_hosts:
  - "host.docker.internal:host-gateway"
to docker-compose.yaml, I was able to access it, while it remained blocked for anyone else.
The explanation for why this works is probably, as explained here, that the IP route within each Docker container includes the docker0 IP address, even if you have your own network set up.
Please correct me in case I mixed something up.

I am migrating from local to remote Docker, can I discover the daemon's public IP?

I am using Docker Compose to deploy my applications. In my docker-compose.yml I have a container my-frontend which must know the public IP of the backend my-backend. The my-frontend image is a NodeJS application which runs in the client's browser.
Before I did this:
my-backend:
  image: my-backend:latest
  ports:
    - 81:80
my-frontend:
  image: my-frontend:latest
  ports:
    - 80:80
  environment:
    - BACKEND=http://localhost:81
This works fine when I deploy to a local Docker daemon and when the client runs locally.
I am now migrating to a remote Docker daemon. In this situation, the client does not run on the same host as the Docker daemon any more. Hence, I need to alter the environment variable BACKEND in my-frontend:
environment:
- BACKEND=http://<ip-of-daemon>:81
When I hardcode <ip-of-daemon> to the actual ip of the Docker daemon, everything is working fine. But I am wondering if there is a way to dynamically fill this in? So I can use the same docker-compose.yml for any remote Docker daemon.
With Docker Compose, your Docker containers will all appear on the same machine. Perhaps you are using tools like Swarm or Kubernetes in order to distribute your containers on different hosts, which would mean that your backend and frontend containers would indeed be accessible via different public IP addresses.
The usual way of dealing with this is to use a frontend proxy like Traefik on a single entry point. This means that from the browser's perspective, the IP address for your frontend and backend is the same. Internally, the proxy will use filtering rules to direct traffic to the correct LAN name. The usual approach is to use a URL path prefix like /backend/.
You correctly mentioned in the comments that, assuming your frontend container is accessible on a static public IP, you could just internally proxy from there to your backend, using NginX. That should work just fine.
Either of these approaches will allow a single IP to appear to "share" ports - this resolves the problem of wanting to listen on the same IP on 80/443 in more than one container. You need to try to avoid non-standard ports for backend calls, since some networks can block them (e.g. mobile networks, corporate firewalled environments).
I am not sure what an alternative would be to those approaches. You can certainly obtain a machine's public IP if you can run code on the host, but if your container orchestration is sending containers to machines, the only code that will run is inside each container, and I don't believe public IP information is exposed there.
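A minimal sketch of that NginX proxy idea (the upstream name my-backend matches the Compose service; paths and ports are assumptions, not from the question):

```nginx
server {
    listen 80;

    # serve the frontend's static bundle
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # forward API calls to the backend container via the Compose network DNS name
    location /backend/ {
        proxy_pass http://my-backend:80/;
    }
}
```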
Update based on your use-case
I had initially assumed from your question that you were expecting your containers to spin up on arbitrary hosts in a Docker farm. In fact, your current approach confirmed in the comments is a number of non-connected Docker hosts, so whenever you deploy, your containers are guaranteed to share a public IP. I understand the purpose behind your question a bit better now - you were wanting to specify a base URL for your backend, including a fully-qualified domain, non-standard port, and URL path prefix.
As I indicated in the discussion, this is probably not necessary, since you are able to put a proxy URL path prefix (/backend) in your frontend NginX. This negates the need for a non-standard port.
If you wanted to specify a custom backend prefix (e.g. /backend/v1 to version your API) then you could do that in env vars in your Docker Compose config.
If you need to refer to the backend's fully-qualified address in your JavaScript for the purposes of connecting to AJAX/WebSocket servers, you can just derive this from window.location.host. In your dev env this will be a bare IP address, and in your remote envs, it sounds like you have a domain.
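As a sketch (the helper name and the /backend prefix are illustrative, assuming the proxy-prefix setup described earlier):

```javascript
// Hypothetical helper: derive the backend base URL from the page's host,
// so the same bundle works in dev (bare IP) and in remote envs (domain).
// In the browser you would pass window.location.host.
function backendBase(host, prefix = "/backend") {
  return `http://${host}${prefix}`;
}

// e.g. const api = backendBase(window.location.host);
```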
Addendum
Some of the confusion on this question was about what sort of IP addresses we are referring to. For example:
I believe that the public IP of my-backend is equal to the docker daemon's IP
Well, your Docker host has several IP addresses, and the public address is just one of them. For example, the virtual network interface docker0 carries a LAN IP of your Docker host, and if you asked for your Docker host's IP, that address would indeed be a correct answer (though of course it is not accessible on the public internet).
In fact, I would say the LAN address belongs to the daemon (since Docker sets it up) and the public IP does not (it is a feature of the box, not Docker).
In any of your Docker hosts, try this command:
ifconfig docker0
That will give you some information about your host's IP, and is useful if a Docker container wishes to contact the host (e.g. if you want to connect to a service that is not running in a container). It is quite useful to pass this IP into a container as an env var, in order to allow that connection to take place.
my-backend:
  image: my-backend:latest
  ports:
    - 81:80
my-frontend:
  image: my-frontend:latest
  ports:
    - 80:80
  environment:
    - BACKEND="${BACKEND_ENV}"
where BACKEND_ENV is an environment variable set to the Docker daemon's IP.
On the machine where docker-compose is executed, set the environment variable beforehand:
export BACKEND_ENV="http://remoteip..."
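The export above could itself be derived from the docker0 bridge rather than hardcoded - a sketch, assuming iproute2's ip command is available (the function name is illustrative):

```shell
#!/bin/sh
# Print the first IPv4 address of a network interface.
iface_ip() {
  ip -4 -o addr show "$1" | awk '{print $4}' | cut -d/ -f1 | head -n1
}

# On a Docker host: export BACKEND_ENV="http://$(iface_ip docker0):81"
# Demonstrated here on loopback, which exists on any Linux box:
iface_ip lo
```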
Or just start the frontend pointing to the remote address:
docker run -p 80:80 -e BACKEND='http://remote_backend_ip:81' my-frontend:latest

Docker for windows: how to access container from dev machine (by ip/dns name)

Questions like this seem to have been asked before, but I really don't get it at all.
I have a Windows 10 dev machine host with Docker for Windows installed. Besides other networks, it has a DockerNAT network with IP 10.0.75.1.
I run some containers with docker-compose:
version: '2'
services:
  service_a:
    build: .
    container_name: docker_a
It created some network bla_default; the container has IP 172.18.0.4, and of course I cannot connect to 172.18.0.4 from the host - it doesn't have any network interface for this.
What should I do to be able to access this container from the HOST machine (by IP), and if possible by some DNS name? What should I add to my docker-compose.yml, and how do I configure networks?
It seems like it should be something basic, but I really don't understand how all this stuff works or how to access the container from the host directly.
Allow access to internal docker networks from dev machine:
route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2
Then use this https://github.com/aacebedo/dnsdock to enable DNS discovery.
Tips:
- Run the DNS container: docker run -d -v /var/run/docker.sock:/var/run/docker.sock --name dnsdock --net bridge -p 53:53/udp aacebedo/dnsdock:latest-amd64
- Add 127.0.0.1 as a DNS server on the dev machine
- Use the labels described in the docs to get pretty DNS names for the containers
So the answer to the original question:
YES WE CAN!
Oh, that's no longer relevant.
MAKE DOCKER GREAT AGAIN!
The easiest option is port mapping: https://docs.docker.com/compose/compose-file/#/ports
just add
ports:
  - "8080:80"
to the service definition in compose. If your service listens on port 80, requests to localhost:8080 on your host will be forwarded to the container. (I'm using docker machine, so my docker host is another IP, but I think localhost is how docker for windows appears)
Treating the service as a single process listening on one (or a few) ports has worked best for me, but if you want to start reading about networking options, here are some places to dig in:
https://docs.docker.com/engine/userguide/networking/
Docker's official page on networking - a very high level introduction, with most of the detail on the default bridge behavior.
http://www.linuxjournal.com/content/concerning-containers-connections-docker-networking
More information on network layout within a docker host
http://www.dasblinkenlichten.com/docker-networking-101-host-mode/
Host mode is kind of mysterious, and I'm curious if it works similarly on windows.
