I have been looking everywhere for this answer. To me it seems like an obvious question; however, the answer has eluded me.
My current setup is this: I have Redis, MongoDB, and two API servers on the same bridge network. The first server is a gateway API that does all the auth and exposes certain API calls. The backend API is the one that handles all the DB interactions and data munging. If I hit the backend (inner) API alone, I am able to see the contents (this API would not be exposed in a real production environment). However, if I make the same request from within the gateway API, I am not able to hit the backend (inner) API, even though it is also on the bridge network I created.
Below is a diagram of the container interactions.
I still use legacy linking myself, but I'm a little familiar with this. I think the problem is that you are trying to hit "localhost" from inside your gateway container. The inner API container cannot be resolved as "localhost" inside the gateway API container. You are able to hit "localhost:8099" from the host machine or externally because of the port mapping, but none of your other containers will be able to resolve that address/port because they 'think' it's a remote machine.
Here's a way to test what I'm thinking. In your host's shell, run docker network inspect on your bridge network. Copy the IP address from Containers.<inner-api-hash>.IPv4Address. Then open a shell in the gateway container with docker exec -it <gateway-id> /bin/bash and use curl or wget to see if you can hit the IP address you copied.
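Concretely, the test would look something like this (a rough sketch; the network name, container ID, and address are placeholders for your own values):

# on the host: find the inner API's address on your bridge network
docker network inspect <your-bridge-network>

# open a shell inside the gateway container
docker exec -it <gateway-id> /bin/bash

# from inside the gateway container, try the copied address and the inner API's own port
curl http://<copied-ip>:<inner-api-port>/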
If my thinking is correct, you will see that you must use your inner-API node's Docker-assigned IP address from the other containers. Amongst other options, you can start containers with a static IP address on a user-defined network.
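For example, something along these lines (an untested sketch; the subnet, network name, and image name are just examples):

# create a user-defined bridge network with a known subnet
docker network create --subnet 172.25.0.0/16 api-net

# start the inner API with a fixed address on that network
docker run -d --name inner-api --net api-net --ip 172.25.0.10 my-inner-api-image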
This is starting to escape the scope of my knowledge, but you can also configure container DNS (see the Docker documentation on container DNS configuration).
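For example, you can point a single container at a specific DNS server when you start it (a sketch; the server address and image name are placeholders):

# use a custom DNS server and search domain for this container only
docker run -d --dns 10.0.0.2 --dns-search example.internal my-inner-api-image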
Related
I've set up an app where a nodejs backend has to communicate with a rasa chatbot backend through a react frontend. All services are running through the same docker-compose. Being a docker beginner, there are some things I'm not sure about:
communication between host and container is done using the container's ip
browser opening the local react server running on localhost:3000 or 172.22.0.1:3000
browser sending a request to the express backend on localhost:4000 172.22.0.2:4000
however, communication between two docker containers is done using the container's name:
rasa server communicating with the rasa custom action server through http://action_server:5055/webhooks
rasa custom action server communicating with the express backend through http://backend_name:4000/users/
my problem is that when I need to contact the rasa backend from my react front end, I need to put in the rasa docker container's IP, which (sometimes) changes upon docker-compose reinitialization. To work around this I run docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app_rasa_1 to get the IP and manually change it in the react frontend.
is there a way to avoid changing the IP altogether by using the container name (or an alias/link), or what would be a way to automate the change of the container's IP in the react frontend (are environment variables updated via a script an option)?
Completely ignore the container-private IP addresses. They're implementation details that have several practical problems, including (as you note) them changing when a container is recreated. (They're also unreachable on non-Linux hosts, or if the browser isn't on the same host as the containers.)
You show the correct patterns in your question. For calls between containers, use the container names as host names (this setup is described in more detail in Networking in Compose). For calls from outside containers, including from browser-based applications, use the host's DNS name or IP address and the first number from the ports: you publish.
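A minimal compose sketch of those two patterns (the service names, ports, and build paths are assumptions based on your description):

version: "3"
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"        # browser: http://<host>:3000
  backend:
    build: ./backend
    ports:
      - "4000:4000"        # browser: http://<host>:4000
  rasa:
    build: ./rasa
    ports:
      - "5005:5005"        # browser: http://<host>:5005
  action_server:
    build: ./actions
    # no ports: needed if only other containers call it,
    # e.g. rasa reaches it at http://action_server:5055/webhooks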
If the browser application needs to contact a back-end server, it needs a path to do this. This could be via published ports:, or one of your other components could proxy the request to the service (maybe using the express-http-proxy middleware).
A dedicated container that only proxies to other backend services is also a useful pattern (Docker Nginx Proxy: how to route traffic to different container using path and not hostname includes some examples), particularly since this will let you use path-only URLs like /api or /rasa in your browser application. If the React application is served from http://localhost:8080/, and the main backend is http://localhost:8080/api, then you can just make HTTP requests to /api and they will be interpreted relative to the page's URL. This avoids the hostname problem completely, so long as your reverse proxy has path-based routes to every container you need to directly contact.
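As a sketch, the proxy-container pattern might look roughly like this in compose (the names and ports are assumptions; the actual path-based routing lives in the nginx config you mount in):

version: "3"
services:
  proxy:
    image: nginx
    ports:
      - "8080:80"          # the only published port
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    # default.conf would proxy /     -> frontend:3000
    #                          /api  -> backend:4000
    #                          /rasa -> rasa:5005
  frontend:
    build: ./frontend
  backend:
    build: ./backend
  rasa:
    build: ./rasa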
How do you access a remote Docker container by its hostname?
I need to access remote Docker containers by their hostnames (or some constant IPs) for development and testing purposes. I have tried:
looking for any DNS approach (have not found any clues),
importing /etc/hosts (probably impossible),
creating tunnels (only this works but it is very time consuming).
It's the same as running any other process on a host, Docker or not Docker: you access it via the host name or IP address of the host and the port the service is listening on (the first port of the docker run -p argument). Docker containers don't have externally visible individual IP addresses any more than non-Docker HTTP or ssh daemons do.
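For example (a sketch; the host name and image are placeholders):

# on the remote host: publish container port 80 as host port 8080
docker run -d --name web -p 8080:80 nginx

# from anywhere that can reach that host: use the host's name and the published port
curl http://remote-host.example.com:8080/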
If you do have DNS infrastructure available to you, you could set up CNAME records to resolve particular service names to the specific hosts that are running them.
One solution that may help you is some sort of service registry; in the past I've used Consul with some success. You can configure Consul with some health checks or other probes ("look for an HTTP service on port 12345 that answers GET / calls"), and it will provide its own DNS service ("okay, http://whatevername.service.consul:12345/ will reach your service on whichever hosts it happens to be running on").
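As a rough illustration of that flow, assuming a Consul agent is already running locally (the service definition here is hypothetical):

# register a service, with an HTTP health check, against the local agent's API
curl -X PUT http://localhost:8500/v1/agent/service/register \
  -d '{"Name": "whatevername", "Port": 12345, "Check": {"HTTP": "http://localhost:12345/", "Interval": "10s"}}'

# resolve it through Consul's DNS interface (port 8600 by default)
dig @127.0.0.1 -p 8600 whatevername.service.consul SRV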
Nothing in the Docker infrastructure specifically helps this. Using /etc/hosts is distinctly not a best practice: the name-to-IP mapping needs to be kept in sync across all machines and you'll start wishing you had a network service to publish it for you, which is exactly what DNS is for.
In order to debug and set up a pair of docker stacks (one is a client and the other a server, along with the private services each requires) using docker compose, I'm running them locally to make sure they're functioning correctly.
They will eventually be communicating across the internet with an nginx server on the server side to act as a reverse proxy. But for now, I'm specifying that the client use the 172.19.0.3:1234 address of the server container.
I'm able to curl/ping both the client container and server container from the host machine, but running an interactive session and trying to curl the server's 172.19.0.3:1234 address just times out.
I feel the 172.x address is being used incorrectly here. Is there some obvious issue with what I've described so far? What is the better approach for what I'm trying to do?
Seems that after doing some searching, I am in a similar situation to this question: Communicating between Docker containers in different networks on the same host.
I've decided to use docker network connect to connect the client to the server's network for my purposes.
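For reference, that looks roughly like this (the network and container names are assumptions based on compose's default <project>_default and <project>_<service>_1 naming):

# attach the client container to the server stack's network
docker network connect server_default client_client_1

# then reach the server by its compose service name instead of 172.19.0.3
docker exec -it client_client_1 curl http://server:1234/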
We're dockerizing our microservices app, and I ran into some discovery issues.
The app is configured as follows:
When a service is started in 'non-local' mode, it uses Consul as its discovery registry.
When a service is started in 'local' mode, it automatically binds an address per service (For example, tcp://localhost:61001, tcp://localhost:61002 and so on. Hard coded addresses)
After dockerizing the app (for local mode only, for now) each service is a container (Docker images orchestrated with docker-compose. And with docker-machine, if that matters)
But one service can not interact with another service since they are not on the same machine and tcp://localhost:61001 will obviously not work.
Using docker-compose with links and specifying localhost as an alias (service:localhost) didn't work. Is there a way for 2 containers to "share" the same localhost?
If not, what is the best way to approach this?
I thought about using specific hostname per service, and then specify the hostname in the links section of the docker-compose. (But I doubt that this is the elegant solution)
Or maybe use a dockerized version of Consul and integrate with it?
This post: How to share localhost between two different Docker containers? provided some insights about why localhost shouldn't be messed with - but I'm still quite puzzled on what's the correct approach here.
Thanks!
But one service can not interact with another service since they are not on the same machine and tcp://localhost:61001 will obviously not work.
Actually, they can. You are right that tcp://localhost:61001 will not work, because using localhost within a container refers to the container itself, similar to how localhost works on any system by default. This means that your services cannot share the same localhost. If you want them to, you can run both services in one container, although that really isn't the best design since it defeats one of the main purposes of Docker Compose.
The ideal way to do it is with docker-compose links; the guide you referenced shows how to define them, but to actually use them you need to use the linked container's name in URLs, as if the linked container's name had an IP mapping defined in the original container's /etc/hosts (with legacy links, Docker does in fact add such an entry). If you want it to be something different from the name of the linked container, you can use a link alias, which is explained in the same guide you referenced.
For example, with a docker-compose.yml file like this:
a:
  expose:
    - "9999"

b:
  links:
    - a
With a listening on 0.0.0.0:9999, b can interact with a by making requests from within b to tcp://a:9999. It would also be possible to shell into b and run
ping a
which would send ping requests to the a container from the b container.
So in conclusion, try replacing localhost in the request URL with the literal name of the linked container (or the link alias, if the link is defined with an alias). That means that
tcp://<container_name>:61001
should work instead of
tcp://localhost:61001
Just make sure you define the link in docker-compose.yml.
Hope this helps
In production, never use docker or docker compose alone. Use an orchestrator (Rancher, Docker Swarm, k8s, ...) and deploy your stack there. The orchestrator will take care of the networking issues. Your containers can link to each other, so you can access them directly by name (don't worry too much about the IP).
On your local host, use docker compose to start up your containers and use links. Do not use a local port but the name of the link: if your container A needs to access container B on port 1234, then link B into A with the alias BBBB and use tcp://BBBB:1234 to access B from A (see the sketch after this list).
If you really want to bind a port to your localhost and use that, access the port via your host IP, not localhost.
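A minimal sketch of that link alias, in the same compose v1-style syntax as the example above (the image names are placeholders):

b:
  image: my-b-image
  expose:
    - "1234"

a:
  image: my-a-image
  links:
    - "b:BBBB"        # inside a, tcp://BBBB:1234 reaches b on port 1234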
If changing the hard-coded addresses is not an option for now, perhaps you could modify the startup scripts of your containers to forward ports from each local container to the required services on other machines.
This would create some complications though, because you would have to setup ssh in each of your containers, and manage the corresponding keys.
Come to think of it, if encryption is not an issue, ssh is not necessary. Using socat or redir would probably be enough.
socat TCP4-LISTEN:61001,fork TCP4:othercontainer:61001
I am writing an application that is composed of a few spring boot based microservices with a zuul based reverse proxy in front.
It works when I start the services on my machine, but for server rollout I'd like to use docker for the services, but this seems not to be possible right now.
Normally you would have a fixed "internal" port and randomized ports on the outside of the container. But the app in the container doesn't know the outside port (and IP).
The Netflix tools match what I would want to write an efficient microservice architecture and conceptually I really like docker.
As far as I can see it would be very troublesome to start the container, gather the outside port on the host and pass it to the app, because you can't simply change the port after the app is started.
Is there any way to use eureka with docker based clients?
[Update]
I guess I did a poor job explaining the problem. So maybe this clarifies it a bit more:
The eureka server itself can run in docker, as I have only one and the outside port doesn't matter. I can use the link feature to access it from the clients.
The problem is the URL that the clients register themselves with.
This is for example https://localhost:8080/ but due to dynamic port assignment it is really only accessible via https://localhost:54321/
So eureka will return the wrong URL for the services.
UPDATE
I have updated my answer below, so have a look there.
I have found a solution myself, which is maybe not the best solution, but it fits for me...
When you start docker with "--net=host" (host networking), you use the host's network stack directly. Then I just use 0 as the port for spring-boot, and spring randomizes the port for me; since it's using the host's networking stack there is no translation to a different port (and IP).
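Roughly like this (the image name is a placeholder; SERVER_PORT is the environment-variable form of Spring Boot's server.port property):

# run the service on the host's network stack and let Spring Boot pick a random free port
docker run -d --net=host -e SERVER_PORT=0 my-spring-service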
There are some drawbacks though:
When you use host networking you can't use the link-feature for these containers as link source or target.
Using the host's network stack leads to less encapsulation of the instance, which may be a problem depending on your project.
I hope it helps
A lot of time has passed and I think I should elaborate this a little bit further:
If you use docker to host your spring application, just don't use a random port! Use a fixed port, because every container gets its own IP anyway, so every service can use the same port. This makes life a lot easier (see the sketch after this list).
If you have a public facing service then you would use a fixed port anyway.
For local starts via maven or for example the command line have a dedicated profile that uses randomized ports so you don't have conflicts (but be aware that there are or have been a few bugs surrounding random ports and service registration)
If for whatever reason you want to or need to use host networking, you can use randomized ports of course, but most of the time you shouldn't!
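A sketch of the fixed-port setup described above (the service and image names are placeholders; the Eureka defaultZone variable assumes Spring Boot's relaxed binding of eureka.client.serviceUrl.defaultZone; the point is only that every container can use the same internal port):

version: "2"
services:
  eureka:
    image: my-eureka-server
    ports:
      - "8761:8761"
  service-a:
    image: my-service-a
    environment:
      - SERVER_PORT=8080          # same fixed port in every container
      - EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://eureka:8761/eureka/
  service-b:
    image: my-service-b
    environment:
      - SERVER_PORT=8080
      - EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://eureka:8761/eureka/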
You can set up a directory for each docker instance and share it between the host and the instance and then write the port and IP address to a file in that directory.
$ instanceName=$(generate random instance name)
$ dirName=/var/lib/docker/metadata/$instanceName
$ mkdir -p "$dirName"
$ docker run --name "$instanceName" -v "${dirName}:/mnt/metadata" ...   # bind-mount the metadata directory into the container
$ echo $(get port number and host IP) > "${dirName}/external-address"
Then you just read /mnt/metadata/external-address from your application and use that information with Eureka.