I am migrating from local to remote Docker; can I discover the daemon's public IP?

I am using Docker Compose to deploy my applications. In my docker-compose.yml I have a container my-frontend which must know the public IP of the backend my-backend. The image my-frontend is a NodeJS application which runs in the client's browser.
Before I did this:
my-backend:
  image: my-backend:latest
  ports:
    - "81:80"
my-frontend:
  image: my-frontend:latest
  ports:
    - "80:80"
  environment:
    - BACKEND=http://localhost:81
This works fine when I deploy to a local Docker daemon and when the client runs locally.
I am now migrating to a remote Docker daemon. In this situation, the client does not run on the same host as the Docker daemon any more. Hence, I need to alter the environment variable BACKEND in my-frontend:
environment:
  - BACKEND=http://<ip-of-daemon>:81
When I hardcode <ip-of-daemon> to the actual IP of the Docker daemon, everything works fine. But I am wondering if there is a way to fill this in dynamically, so I can use the same docker-compose.yml for any remote Docker daemon?

With Docker Compose, your Docker containers will all appear on the same machine. Perhaps you are using tools like Swarm or Kubernetes in order to distribute your containers on different hosts, which would mean that your backend and frontend containers would indeed be accessible via different public IP addresses.
The usual way of dealing with this is to use a frontend proxy like Traefik on a single entry point. This means that from the browser's perspective, the IP address for your frontend and backend is the same. Internally, the proxy will use filtering rules to direct traffic to the correct LAN name. The usual approach is to use a URL path prefix like /backend/.
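As a rough illustration, here is a minimal docker-compose sketch of that approach using Traefik v2. The image tag, router names, and the /backend prefix are assumptions for illustration, not taken from the question:
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  my-backend:
    image: my-backend:latest
    labels:
      # Route /backend/* to this container, stripping the prefix first
      - traefik.http.routers.backend.rule=PathPrefix(`/backend`)
      - traefik.http.middlewares.backend-strip.stripprefix.prefixes=/backend
      - traefik.http.routers.backend.middlewares=backend-strip
  my-frontend:
    image: my-frontend:latest
    labels:
      # Everything else goes to the frontend
      - traefik.http.routers.frontend.rule=PathPrefix(`/`)
The browser then only ever talks to port 80 on a single IP, and BACKEND can simply be the relative prefix /backend.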
You correctly mentioned in the comments that, assuming your frontend container is accessible on a static public IP, you could just internally proxy from there to your backend, using NginX. That should work just fine.
Either of these approaches allows a single IP to appear to "share" ports - this resolves the problem of wanting to listen on the same IP on 80/443 in more than one container. You should try to avoid non-standard ports for backend calls, since some networks can block them (e.g. mobile networks, corporate firewalled environments).
I am not sure what an alternative would be to those approaches. You can certainly obtain a machine's public IP if you can run code on the host, but if your container orchestration is sending containers to machines, the only code that will run is inside each container, and I don't believe public IP information is exposed there.
Update based on your use-case
I had initially assumed from your question that you were expecting your containers to spin up on arbitrary hosts in a Docker farm. In fact, your current approach, as confirmed in the comments, is a number of unconnected Docker hosts, so whenever you deploy, your containers are guaranteed to share a public IP. I understand the purpose behind your question a bit better now: you want to specify a base URL for your backend, including a fully-qualified domain, a non-standard port, and a URL path prefix.
As I indicated in the discussion, this is probably not necessary, since you are able to put a proxy URL path prefix (/backend) in your frontend NginX. This negates the need for a non-standard port.
If you wanted to specify a custom backend prefix (e.g. /backend/v1 to version your API) then you could do that in env vars in your Docker Compose config.
If you need to refer to the backend's fully-qualified address in your JavaScript for the purposes of connecting to AJAX/WebSocket servers, you can just derive this from window.location.host. In your dev env this will be a bare IP address, and in your remote envs, it sounds like you have a domain.
Addendum
Some of the confusion on this question was about what sort of IP addresses we are referring to. For example:
I believe that the public IP of my-backend is equal to the docker daemon's IP
Well, your Docker host has several IP addresses, and the public address is just one of them. For example, the virtual network interface docker0 holds a LAN IP of your Docker host, and if you gave that when asked for the IP of your Docker host, it would indeed be a correct answer (though of course it is not accessible from the public internet).
In fact, I would say the LAN address belongs to the daemon (since Docker sets it up) and the public IP does not (it is a feature of the box, not Docker).
In any of your Docker hosts, try this command:
ifconfig docker0
That will give you some information about your host's IP, and is useful if a Docker container wishes to contact the host (e.g. if you want to connect to a service that is not running in a container). It is quite useful to pass this IP into a container as an env var, in order to allow such a connection to take place.
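A small shell sketch of that pattern (it uses ip rather than ifconfig for easier parsing; the grep pattern and the env var name DOCKER_HOST_IP are assumptions, and output formats vary by distro):
# Grab the docker0 address and pass it into a container as an env var
HOST_IP=$(ip -4 addr show docker0 | grep -oP '(?<=inet )[0-9.]+')
docker run -e DOCKER_HOST_IP="$HOST_IP" my-image:latest
# A process inside the container can then reach host services at $DOCKER_HOST_IP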

my-backend:
  image: my-backend:latest
  ports:
    - "81:80"
my-frontend:
  image: my-frontend:latest
  ports:
    - "80:80"
  environment:
    - BACKEND=${BACKEND_ENV}
where BACKEND_ENV is an environment variable set to the Docker daemon's IP.
On the machine where docker-compose is executed, set the environment variable beforehand:
export BACKEND_ENV="http://remoteip..."
Or just start the frontend pointing to the remote address:
docker run -p 80:80 -e BACKEND='http://remote_backend_ip:81' my-frontend:latest
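Alternatively, Compose also reads variable substitutions from a .env file placed next to docker-compose.yml, which avoids having to export anything. A sketch (the IP shown is illustrative):
# .env
BACKEND_ENV=http://203.0.113.10:81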

Related

How to expose a Docker container port to one specific Docker network only, when a container is connected to multiple networks?

From the Docker documentation:
--publish or -p flag. Publish a container's port(s) to the host.
--expose. Expose a port or a range of ports.
--link. Add a link to another container. This is a legacy feature of Docker and may eventually be removed.
I am using docker-compose with several networks. I do not want to publish any ports to the host, yet when I use expose, the port is then exposed to all the networks that container is connected to. It seems that after a lot of testing and reading I cannot figure out how to limit this to a specific network.
For example, in this docker-compose file, container1 joins the following three networks: internet, email and database.
services:
  container1:
    networks:
      - internet
      - email
      - database
Now what if I have one specific port that I want to expose ONLY to the database network, so NOT to the host machine and also NOT to the email and internet networks in this example? If I use ports: on container1, the port is exposed to the host, or I can bind it to a specific IP address of the host. I also tried making a custom overlay network, giving the container a static IPv4 address, and setting the ports in that format in ports:, like - '10.8.0.3:80:80', but that did not work either, because I think the binding can only happen to a HOST IP address. If I use expose: on container1, the port will be exposed to all three networks: internet, email and database.
I am aware I can write custom firewall rules, but it annoys me that I cannot express such a simple config in my docker-compose file. Also, maybe something like 80:10.8.0.3:80 (HOST_IP:HOST_PORT:CONTAINER_IP:CONTAINER_PORT) would make perfect sense here (I did not test it).
Am I missing something or is this really not possible in Docker and Docker-compose?
Also posted here: https://github.com/docker/compose/issues/8795
No, container to container networking in docker is one-size-fits-many. When two containers are on the same network, and ICC has not been disabled, container-to-container communication is unrestricted. Given Docker's push into the developer workflow, I don't expect much development effort to change this.
This is handled by other projects like Kubernetes by offloading the networking to a CNI where various vendors support networking policies. This may be iptables rules, eBPF code, some kind of sidecar proxy, etc to implement it. But it has to be done as the container networking is setup, and docker doesn't have the hooks for you to implement anything there.
Perhaps you could hook into docker events and run various iptables commands against containers after they've been created, as sketched below. The application could also be configured to listen on the specific IP address of the network it trusts, but this requires injecting the subnet you trust and then looking up your container's IP in your entrypoint; that is non-trivial to script, and I'm not even sure it would work. Otherwise, this is solved by restructuring the application so that the components that need to be on a less secure network are minimized, by hardening the sensitive ports, or by switching the runtime over to something like Kubernetes with a network policy.
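A very rough, untested sketch of that event-hook idea (the subnet, port, and chain usage are assumptions; a real version would also need to clean up rules when containers stop):
# Watch for container starts and restrict who may reach a sensitive port
docker events --filter event=start --format '{{.ID}}' |
while read -r cid; do
  ips=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' "$cid")
  for addr in $ips; do
    # Only the (assumed) trusted database subnet may reach port 5432
    iptables -I DOCKER-USER -d "$addr" -p tcp --dport 5432 ! -s 10.8.0.0/24 -j DROP
  done
done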
Things that won't help:
Removing exposed ports: this won't help since expose is just documentation. Changing exposed ports doesn't change networking between containers, or between the container and host.
Links: links are a legacy feature that adds entries to the hosts file when the container is created. This was replaced by creating networks with DNS resolution of other containers.
Removing published ports on the host: This doesn't impact container to container communication. The published port with -p creates a port forward from the host to the container, which you do want to limit, but containers can still communicate over a shared network without that published port.
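To make that last point concrete, a quick sketch (the network and image names are invented):
docker network create mynet
docker run -d --network mynet --name db my-db:latest            # no -p: not reachable from outside the host
docker run -d --network mynet --name app -p 8080:80 my-app:latest
# "app" can still reach "db" by name over mynet (e.g. db:5432),
# even though "db" publishes nothing to the host.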
The answer to this for me was to remove the -p option, as that binds the container port to the host and makes it available outside the host.
If you don't specify -p options, the container is still available on all the networks it is connected to, on whichever port or ports the application is listening on.
It seems the -p (or -P) flag is what forces the port onto the host and binds it to the port specified.
In your example, if you don't use -p when starting container1, container1 would be available to the internet, email and database networks on all its ports, but not outside the host.

How to Make Docker Use Specific IP Address for Browser Access from the Host

I'm using Docker for building both a UI and some backend microservices, and using Spring Zuul as the proxy to pass RESTful API calls from the UI to the downstream microservices. My UI project needs an IP address specified in a JS file before the build, and the Zuul project also needs the IP addresses of the downstream microservices. After starting the containers, I can access my application using my docker machine IP, e.g. http://192.168.10.1/myapp, and the RESTful API calls in the browser network tab will be http://192.168.10.1/mymicroservices/getProduct, etc.
I can set all the IPs to my docker machine IP and build them without issues. However, for my colleagues located in other countries, their docker machine IP will be different. How can I make Docker use a specific IP, for example 192.168.10.50, which I can set in the UI project and the Zuul proxy project, so that the Docker IP is the same for everyone, regardless of what their actual docker machine IP is?
What I've tried:
I've tried port forwarding in VirtualBox. It works for the UI; however, the RESTful API calls fail.
I also tried the solution mentioned in this post:
Assign static IP to Docker container
However I can't access the services from the browser using the container IP address.
Do you have any better ideas? Thank you!
First off, to clarify a couple of things:
If you are doing docker run ....., then you are just starting a container on the Docker engine installed on the host machine. There is no way Docker can change the IP of your host machine. So if your other services are running somewhere else, they will have to know something about the Docker host machine: its IP or a DNS name.
Basically, your containers are reachable on 127.0.0.1 if you try them from the Docker host machine itself, or on the host machine's IP from outside of it. Docker doesn't need the host's IP to start.
The other case is if you are doing docker-compose up/start, which means all services are in that docker-compose file. In this case Compose creates a Docker network for all the containers in it. Here you can certainly use fixed IPs for containers, though most often you don't need to, because Docker takes care of name resolution within that network, as sketched below.
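A minimal sketch of that name resolution (the service and image names are invented for illustration):
services:
  web:
    image: my-web:latest
    environment:
      # "api" resolves to the api container via Compose's built-in DNS
      - API_URL=http://api:8080
  api:
    image: my-api:latest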
If you are doing it the k8s way, then that is a third way (the production way), and it is another story.
If it is neither of the above, then please provide more info on how you are doing things.
EDIT:
If you are using docker-compose and need to expose any of your containers to the host machine, you can do it through port mapping:
web:
  image: some image here
  ports:
    - "8181:8080"
The left side is the host machine port; the right side is the container port. Then, in a browser on the host, you can make requests to localhost:8181.
Here is the doc: https://docs.docker.com/compose/compose-file/#ports

Update Prometheus Host/Port in Docker

Question: How can I change a Prometheus container's host address from the default 0.0.0.0:9090 to something like 192.168.1.234:9090?
Background: I am trying to get a Prometheus container to install and start in a production environment on a remote server. Since the server uses an IP other than Prometheus's default (0.0.0.0), I need to update the host address that the Prometheus container uses. If I don't, I can't sign in to the UI and see any of the metrics. The IP of the remote server is provided by the user during the app's installation.
From what I understand from Prometheus's config document and the output of ./prometheus -h, the host address is immutable and therefore needs to be updated using the --web.listen-address= command-line flag. My problem is I don't know how to pass that flag to my Prometheus container; I can't simply run ./prometheus --web.listen-address="<remote-ip>:9090" because that's not a Docker command. And I can't pass it to the docker run ... command because Docker doesn't recognize that flag.
Environment:
Using SaltStack for config management
I cannot use Docker Swarm (i.e. each container must use its own Dockerfile)
You don't need to change the containerized Prometheus' listen address: 0.0.0.0 simply means "listen on all interfaces" inside the container.
By default, it won't even be accessible from your host's network, let alone any surrounding networks (like the internet).
You can map it to a port on a hosts interface though. The command for that looks somewhat like this:
docker run --rm -p 8080:9090 prom/prometheus
which would expose the service at 127.0.0.1:8080 on your host
You can do that with a public (e.g. internet-facing) interface as well, although I'd generally advise against exposing containers like this, due to numerous operational implications which are somewhat beyond the scope of this answer. You should at least consider a reverse-proxy setup, where users are only allowed to talk to a heavy-duty webserver which then communicates with Prometheus, instead of letting them access your backend directly, even if this is just a small development deployment.
For general considerations on productionizing container setups, I suggest this. Despite its clickbaity title, it is a useful read.
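Incidentally, to address the "how do I pass that flag" part of the question: with the prom/prometheus image, anything placed after the image name in docker run is handed to the Prometheus binary as command-line arguments. A sketch (note that supplying any arguments replaces the image's defaults, so the default config path has to be restated; as explained above, changing the listen address is normally unnecessary):
docker run --rm -p 8080:9090 prom/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --web.listen-address=0.0.0.0:9090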

Make docker machine available under host name in Windows

I'm trying to make a docker machine available under a host name on my Windows machine. After creating it like
docker-machine create -d virtualbox mymachine
and setting up a docker container that exposes port 80, how can I give that docker machine a host name such that I can enter "http://mymachine/" into my browser to load the website? When I change "mymachine" to the actual IP address, it works.
There is an answer to this question but I would like to achieve it without an entry in the hosts file. Is that possible?
You might want to refer to the Docker documentation:
https://docs.docker.com/engine/userguide/networking/#exposing-and-publishing-ports
You expose ports using the EXPOSE keyword in the Dockerfile or the --expose flag to docker run. Exposing ports is a way of documenting which ports are used, but does not actually map or open any ports. Exposing ports is optional.
You publish ports using the --publish or --publish-all flag to docker run. This tells Docker which ports to open on the container's network interface. When a port is published, it is mapped to an available high-order port (higher than 30000) on the host machine, unless you specify the port to map to on the host machine at runtime. You cannot specify the port to map to on the host machine when you build the image (in the Dockerfile), because there is no way to guarantee that the port will be available on the host machine where you run the image.
I also suggest reviewing the -P flag as it differs from the -p one.
Also, I suggest you try "Kitematic" for Windows or Mac, https://kitematic.com/ . It's much simpler (but don't forget to commit after any changes!).
Now concerning the network in your company: it has nothing to do with Docker. As long as you're using Docker locally on your computer, it won't matter what configuration your company has set. You don't even have to change any VM network config in order to expose things to your local host; it all works by default if you're using VirtualBox (adapter 1 => NAT & adapter 2 => host-only).
Hope this is what you're looking for.
If the goal is to keep it as simple as possible for multiple developers, localhost will be your best bet. As long as the ports you're exposing and publishing are available on host, you can just use http://localhost in the browser. If it's a port other than 80/443, just append it like http://localhost:8080.
If you really don't want to go the /etc/hosts or localhost route, you could also purchase a domain and have it route to 127.0.0.1. This article lays out the details a little bit more.
Example:
dave-mbp:~ dave$ traceroute yoogle.com
traceroute to yoogle.com (127.0.0.1), 64 hops max, 52 byte packets
1 localhost (127.0.0.1) 0.742 ms 0.056 ms 0.046 ms
Alternatively, if you don't want to purchase your own domain, and all developers are on the same network and you are able to control DHCP/DNS, you can set up your own DNS server to include a private route back to 127.0.0.1. It is a similar concept to the public DNS option, but a little more brittle, since you might allow your devs to work remotely, outside of a controlled network.
Connecting by hostname requires that you go through hostname to IP resolution. That's handled by the hosts file and falls back to DNS. This all happens before you ever touch the docker container, and docker machine itself does not have any external hooks to go out and configure your hosts file or DNS servers.
With newer versions of Docker on windows, you run containers with HyperV and networking automatically maps ports to localhost so you can connect to http://localhost. This won't work with docker-machine since it's spinning up virtualbox VM's without the localhost mapping.
If you don't want to configure your hosts file, DNS, and can't use a newer version of docker, you're left with connecting by IP. What you can do is use a free wildcard DNS service like http://xip.io/ that maps any name you want, along with your IP address, back to that same IP address. This lets you use things like a hostname based reverse proxy to connect to multiple containers inside of docker behind the same port.
One last option is to run your docker host VM with a static IP. Docker-machine doesn't support this directly yet, so you can either rely on luck to keep the same IP from a given range, or use another tool like Vagrant to spin up the docker host VM with a static IP on the laptop. Once you have a static IP, you can modify the hosts file once, create a DNS entry for every dev, or use the same xip.io URL to access the containers each time.
If you're on a machine with Multicast DNS (that's Bonjour on a Mac), then the approach that's worked for me is to fire up an Avahi container in the Docker Machine vbox. This lets me refer to VM services at <docker-machine-vm-name>.local. No editing /etc/hosts, no crazy networking settings.
I use different Virtualbox VMs for different projects for my work, which keeps a nice separation of concerns (prevents port collisions, lets me blow away all the containers and images without affecting my other projects, etc.)
Using docker-compose, I just put an Avahi instance at the top of each project:
version: '2'
services:
  avahi:
    image: 'enernoclabs/avahi:latest'
    network_mode: 'host'
Then if I run a webserver in the VM with a docker container forwarding to port 80, it's just http://machine-name.local in the browser.
You can add a domain name entry in your hosts file:
X.X.X.X mymachine # Replace X.X.X.X by the IP of your docker machine
You could also set up a DNS server on your local network if your app is meant to be reachable by your coworkers at your workplace and if your Windows machine is meant to remain up as a server.
That would require making your VM accessible from the local network, though; port forwarding could then be a simple solution if your app is the only web service running on your Windows host. (Note that you could just as well set up a Linux server to avoid using docker-machine on Windows, but you would still have to set a static IP for this server to ensure that your domain name resolution keeps working.)
You could also buy your own domain name (or get a free one) and assign it your docker-machine's IP if you don't have rights to write to your hosts file.
These solutions may stop working after some time if the app host doesn't have a static IP and your docker-machine IP changes. Not setting up a static IP doesn't imply it will automatically change, though; there should be some persistence as long as you don't erase the machine to create a new one, but that isn't guaranteed either.
Also note that if you set up a DNS server, you'd have to host it on a device with a static IP as well. Your coworkers would then have to configure their machines to use this one.
I suggest nginx-proxy. This is what I use all the time. It comes in especially handy when you are running different containers that are all supposed to answer on the same port (e.g. multiple web services).
nginx-proxy runs separately from your service and listens to Docker events to update its own configuration. After you spin up your service and query the port nginx-proxy is listening on, you will be redirected to your service. Therefore you either need to start nginx-proxy with the DEFAULT_HOST flag or send the desired host as a header param with the request.
As I am running this only with plain Docker, I don't know if it works with docker-machine, though.
If you go for this option, you can decide for a certain domain (e.g. .docker) to be completely resolved to localhost. This can be done either company-wide by DNS, locally with a hosts file, or via an intermediate resolver (the specific solution depends on your OS, of course). If you then try to reach http://service1.docker, nginx-proxy will route to the container that has the env var VIRTUAL_HOST=service1.docker. This is really convenient, because it only needs one-time setup and is dynamic from then on.
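A minimal compose sketch of that setup (the service name service1 and its image are invented; the proxy image and socket mount follow the nginx-proxy README):
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # nginx-proxy watches Docker events through this socket
      - /var/run/docker.sock:/tmp/docker.sock:ro
  service1:
    image: my-service:latest
    environment:
      # nginx-proxy routes requests whose Host header matches this value
      - VIRTUAL_HOST=service1.docker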

Set specific IP or name for my docker machine

Is there any way to set either the IP, or ideally an ID and hostname for my hosts file, in my docker-compose.yml file? At the moment I'm SSH'ing into my Docker DB via SequelPro, but if I start up more than one machine I get different IPs, which I then need to update in SequelPro every time.
Ideally I want to be able to docker-compose up -d and then visit myproject.domain.com straight off, without having to find the allocated IP each time and change my hosts file, or worry about the allocated IP being different.
Is this possible?
You have a few options; which one is best really depends on your particular needs. You say that you are connecting to your container via SSH, but this sounds like a workaround for something: presumably, your Docker container is offering some sort of useful service other than ssh, and that's what you actually need to access.
The easiest solution is simply to expose the network port for that service on your host using the ports directive in your docker-compose.yaml file. If you just need access locally, you can do something like:
ports:
  - "127.0.0.1:8001:8001"
That would expose container port 8001 as port 8001 on your local host. If you need external access to the service (that is, access from some place other than the docker host), you could expose the port on a host interface:
ports:
  - "8001:8001"
In that case, you could access the service as <your_host_name_or_ip>:8001.
If this doesn't meet your needs, there are solutions out there that will register container names in DNS, but I haven't used one recently enough to make a suggestion.
