Can (or should) 2 docker containers interact with each other via localhost? - docker

We're dockerizing our micro services app, and I ran into some discovery issues.
The app is configured as follows:
When a service is started in 'non-local' mode, it uses Consul as its discovery registry.
When a service is started in 'local' mode, it automatically binds an address per service (For example, tcp://localhost:61001, tcp://localhost:61002 and so on. Hard coded addresses)
After dockerizing the app (for local mode only, for now), each service is a container (Docker images orchestrated with docker-compose, and with docker-machine, if that matters).
But one service cannot interact with another service, since they are not on the same machine and tcp://localhost:61001 will obviously not work.
Using docker-compose with links and specifying localhost as an alias (service:localhost) didn't work. Is there a way for 2 containers to "share" the same localhost?
If not, what is the best way to approach this?
I thought about using a specific hostname per service, and then specifying that hostname in the links section of the docker-compose file (but I doubt this is the most elegant solution).
Or maybe use a dockerized version of Consul and integrate with it?
This post, How to share localhost between two different Docker containers?, provided some insight into why localhost shouldn't be messed with - but I'm still quite puzzled about what the correct approach is here.
Thanks!

But one service cannot interact with another service, since they are not on the same machine and tcp://localhost:61001 will obviously not work.
Actually, they can. You are right that tcp://localhost:61001 will not work, because within a container localhost refers to the container itself, just as localhost works on any system by default. This means that your services cannot share the same localhost. If you really want them to, you can run both services in one container, although that really isn't the best design since it defeats one of the main purposes of Docker Compose.
The ideal way to do it is with docker-compose links. The guide you referenced shows how to define them; to actually use one, put the linked container's name in your URLs as if that name had an IP mapping in the original container's /etc/hosts (whether it goes through /etc/hosts or Docker's DNS, the name resolves to the linked container). If you want the name to be something other than the linked container's name, you can use a link alias, which is explained in the same guide you referenced.
For example, with a docker-compose.yml file like this:
a:
  expose:
    - "9999"
b:
  links:
    - a
With a listening on 0.0.0.0:9999, b can interact with a by making requests from within b to tcp://a:9999. It would also be possible to shell into b and run
ping a
which would send ping requests to the a container from the b container.
So in conclusion, try replacing localhost in the request URL with the literal name of the linked container (or the link alias, if the link is defined with an alias). That means that
tcp://<container_name>:61001
should work instead of
tcp://localhost:61001
Just make sure you define the link in docker-compose.yml.
Hope this helps

In production, never use docker or docker-compose alone. Use an orchestrator (Rancher, Docker Swarm, k8s, ...) and deploy your stack there. The orchestrator will take care of the networking: your containers can be linked to each other, so you can access them directly by name (and not care too much about the IP).
On your local host, use docker-compose to start up your containers and use links. Do not use a local port, but the name of the link (if your container A needs to access container B on port 1234, link B to A with the alias BBBB and use tcp://BBBB:1234 to reach B from A).
If you really do want to bind ports to your localhost and use those, access the port via your host's IP, not localhost.
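For illustration, a minimal docker-compose sketch of that link-plus-alias setup (image names are placeholders, and lowercase names are used since hostnames are case-insensitive):

service-a:
  image: myorg/service-a     # placeholder image for container A
  links:
    - service-b:bbbb         # inside service-a, tcp://bbbb:1234 now reaches service-b
service-b:
  image: myorg/service-b     # placeholder image for container B
  expose:
    - "1234"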

If changing the hard-coded addresses is not an option for now, perhaps you could modify the startup scripts of your containers to forward ports from each local container to the required services on other machines.
This would create some complications though, because you would have to set up ssh in each of your containers and manage the corresponding keys.
Come to think of it, if encryption is not an issue, ssh is not necessary. Using socat or redir would probably be enough.
socat TCP4-LISTEN:61001,fork TCP4:othercontainer:61001
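For instance, a hypothetical entrypoint wrapper could start the forwarder before the service itself (socat must be installed in the image; othercontainer is a placeholder for the real target):

#!/bin/sh
# Forward the hard-coded local port to the actual service container, then start the app.
socat TCP4-LISTEN:61001,fork TCP4:othercontainer:61001 &
exec "$@"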

Related

Windows 10 docker public ip address for accessing from several containers

I have a few docker-compose projects running in the background.
I need to connect from a container in one docker-compose project to a container in another.
So when I run curl 10.0.0.3:8080 I get an answer as expected. The problem is that each developer on the team has a different IP address that responds to this curl call.
Once again, there are 2 different docker-compose running, and I want to connect from one to another.
How can I make Docker answer on the same IP address on every PC? (I want to avoid environment variables.)
For example, I want the IP 10.0.0.3 to be valid on each team member's PC.
Is that possible?
Thanks
Using IPs when working with docker is considered bad practice and I strongly discourage it. If you use docker-compose, just use the service name to refer to a service. This way, even if IPs change, you will still be able to connect to your services.
Each instance of docker-compose runs its services in its own network. You can also define a network (docker network create xxxxx) and then configure docker-compose to connect to that network. This way all your services will see each other.
If, however, you decide to go with IPs, there is a way to set a fixed IP for your service: check the ipv4_address / ipv6_address section of the docker-compose reference.
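A rough sketch of both ideas combined, assuming a shared network was created up front with docker network create --subnet 10.0.0.0/24 shared_net (the network name, subnet and image are placeholders):

version: "2.1"
services:
  api:
    image: myorg/api               # placeholder image
    networks:
      shared_net:
        ipv4_address: 10.0.0.3     # same fixed address on every developer's machine
networks:
  shared_net:
    external: true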

Read host's ifconfig in the running Docker container

I would like to read the host's ifconfig output while the Docker container is running, so I can parse it, get the OpenVPN interface (tap0) IP address and process it within my application.
Unfortunately, propagating this value via the environment is not an option in my case, because the IP address can change while the container is running and I don't want to restart my application container each time just to pick up the new value.
The current working solution is a cron job on the host which writes the IP into a file on a shared volume, and the container reads it from there - but I am looking for a better solution, as this feels like a workaround. There was also a plan to create a new container with network: host which would see the host's interfaces - it works, but it also looks like a workaround, as it involves many steps and probably security issues.
My question is: is there any valid and cleaner way to achieve my goal - reading the host's ifconfig inside a Docker container in real time?
A specific design goal of Docker is that containers can't directly access the host's network configuration. The workarounds you've identified are pretty much the only ways to do this.
If you’re trying to modify the host’s network configuration in some way (you’re trying to actually run a VPN, for example) you’re probably better off running it outside of Docker. You’ll still need root permission either way, but you won’t need to disable a bunch of standard restrictions to do what you need.
If you’re trying to provide some address where the service can be reached, using configuration like an environment variable is required. Even if you could access the host’s configuration, this might not be the address you need: consider a cloud environment where you’re running on a cloud instance behind a load balancer, and external clients need the load balancer; that’s not something you can directly know given only the host’s network configuration.
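If you do stay with the shared-volume approach, the cron job could also be replaced by a small sidecar container on the host network, roughly like this (image, interval and paths are placeholders):

version: "3"
services:
  tap0-watcher:
    image: alpine                # placeholder; busybox "ip" is enough for this
    network_mode: host
    volumes:
      - tap0-addr:/shared
    # periodically dump the tap0 address where the app container can read it
    command: sh -c "while true; do ip -4 addr show tap0 > /shared/tap0-addr; sleep 5; done"
  app:
    image: myorg/app             # placeholder; the application parses /shared/tap0-addr
    volumes:
      - tap0-addr:/shared:ro
volumes:
  tap0-addr: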

Docker and Consul by example: need clarification

I'm learning Docker Swarm with Consul and found some issues I don't really understand. Basically, I created a Docker Swarm cluster (node-01 and node-02) with Consul service discovery. I then ran a multi-container application (an Express app with Mongo) and I can see it is running on node-02. In order to reach it, I have to go in, find the IP address of node-02 and then open the browser.
It works fine; it's just that I was expecting to be able to go to some virtual IP (or DNS name) and that Consul (or Swarm) would then translate it to the correct IP address of node-02 in this example.
Next item: when I log into the Consul web UI, I was expecting to see the nodes under the 'nodes' menu, but that seems not to be the case. I was also expecting to get an overview of the 'applications' or 'services' I am running on node-01 and node-02, but that is also not the case.
My questions are:
Can someone explain why I would need to manually find out on which node in the cluster my app is running? I cannot imagine this is done in larger deployments.
Can someone address why I don't see the 'nodes' and 'services' in the Consul UI?
Note: I tried to be as short as possible though I have been documenting the full setup in a blog post (with screenshots) for those who want to see more details. Go to blog post
Question 1
I would like to access the service without having to use the Swarm agent's IP address
Solution
It is feasible; you just need to start up a reverse proxy such as nginx in a container (here are the official nginx images). When starting this container, use the --link option with the name of the application container. That way the application container's IP address is added to the /etc/hosts file of the reverse-proxy container (remember to use --name and --hostname). Run this reverse-proxy container on a specific node.
So the solution to get rid of the IP-address issue is to deploy another container on a specific node (and therefore a specific IP address)? Yes! But using --link keeps this setup scalable ;)
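As a rough illustration of that setup (container and image names are placeholders, and the app is assumed to listen on port 3000):

docker run -d --name webapp my-express-app          # the application container
docker run -d --name edge --hostname edge \
    --link webapp:webapp \
    -p 80:80 \
    -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
    nginx
# nginx.conf can then proxy_pass to http://webapp:3000, because the link added
# "webapp" to the proxy container's /etc/hosts.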
Question 2
I was expecting to see the nodes under the 'nodes' menu, but that seems not to be the case.
What do you mean? What did you expect? Do you need to query the k/v storage DB?
Check this: https://github.com/vmudigal/microservices-sample
Microservices Sample Architecture

Assign domain name to the container

I am looking for a way to assign a domain name to a container when it is started. For example, I want to start a web server container and be able to access its web pages via a domain name. Is there an easy way to do this?
As far as I know, Docker doesn't provide this feature out of the box, but there are several workarounds. Essentially, you need to deploy a DNS server on your host that distinguishes the containers and resolves their domain names to their dynamic IPs. So you could try the following:
1. Deploy one of the Docker-aware DNS solutions (I suggest SkyDNS v1 / SkyDock);
2. Configure your host to use this DNS (by default SkyDNS makes the containers know each other by name, but the host is not aware of it);
3. Run your containers with an explicit --hostname (you will probably use a scheme like container_name.image_name.dev.skydns.local).
You can skip step #2 and run your browser inside container too: it will discover the web application container by hostname.
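For step 3, that could look something like this (the image and the name are only illustrative):

docker run -d --name web --hostname web.nginx.dev.skydns.local nginx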

Use Eureka despite having random external port of docker containers

I am writing an application that is composed of a few Spring Boot based microservices with a Zuul-based reverse proxy in front.
It works when I start the services on my machine, but for server rollout I'd like to use docker for the services, but this seems not to be possible right now.
Normally you would have a fixed "internal" port and randomized ports at the outside of the container. But the app in the container doesn't know the outside port (and IP).
The Netflix tools match what I would want to write an efficient microservice architecture and conceptually I really like docker.
As far as I can see it would be very troublesome to start the container, gather the outside port on the host and pass it to the app, because you can't simply change the port after the app is started.
Is there any way to use eureka with docker based clients?
[Update]
I guess I did a poor job explaining the problem. So maybe this clarifies it a bit more:
The eureka server itself can run in docker, as I have only one and the outside port doesn't matter. I can use the link feature to access it from the clients.
The problem is the URL that the clients register themselves with.
This is for example https://localhost:8080/ but due to dynamic port assignment it is really only accessible via https://localhost:54321/
So eureka will return the wrong URL for the services.
UPDATE
I have updated my answer below, so have a look there.
I have found a solution myself, which is maybe not the best solution, but it fits for me...
When you start Docker with "--net=host" (host networking), you use the host's network stack directly. Then I just use 0 as the port for Spring Boot and Spring randomizes the port for me; since the container uses the host's networking stack, there is no translation to a different port (or IP).
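A rough sketch of what that looks like (the image name is a placeholder; setting SERVER_PORT=0 is one way to tell Spring Boot to pick a random port):

docker run -d --net=host -e SERVER_PORT=0 myorg/eureka-client-service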
There are some drawbacks though:
When you use host networking you can't use the link-feature for these containers as link source or target.
Using the host's network stack leads to less encapsulation of the instance, which may be a problem depending on your project.
I hope it helps
A lot of time has passed and I think I should elaborate on this a little further:
If you use Docker to host your Spring application, just don't use a random port! Use a fixed port, because every container gets its own IP anyway, so every service can use the same port. This makes life a lot easier.
If you have a public-facing service, you would use a fixed port anyway.
For local starts via Maven or the command line, have a dedicated profile that uses randomized ports so you don't get conflicts (but be aware that there are, or have been, a few bugs around random ports and service registration).
If for whatever reason you want or need to use host networking, you can of course use randomized ports, but most of the time you shouldn't!
You can set up a directory for each docker instance and share it between the host and the instance and then write the port and IP address to a file in that directory.
$ instanceName=$(generate random instance name)
$ dirName=/var/lib/docker/metadata/$instanceName
$ mkdir -p $dirName
$ docker run --name $instanceName -v ${dirName}:/mnt/metadata ...
$ echo $(get port number and host IP) > ${dirName}/external-address
Then you just read /mnt/metadata/external-address from your application and use that information with Eureka.
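For example, a hypothetical entrypoint could turn that file into configuration for the application (the file format and the Eureka property names here are only illustrative):

#!/bin/sh
# The host wrote "IP PORT" into the shared metadata directory; read both values.
read HOST_IP HOST_PORT < /mnt/metadata/external-address
# Register with Eureka using the externally visible address instead of the container's own.
exec java -jar app.jar \
    --eureka.instance.hostname="$HOST_IP" \
    --eureka.instance.non-secure-port="$HOST_PORT"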
