Docker Networking - to link or not to link?

We have thousands of Python unit tests, and to run them efficiently we parallelize them in batches. Each batch has its own Docker environment consisting of the core application and a mongo instance. It's set up something like this:
docker network create --driver bridge ut_network-1
docker network create --driver bridge ut_network-2
docker run -d --name mongo-unittest-1 --network ut_network-1 mongo:3.4.2
docker run -d --name mongo-unittest-2 --network ut_network-2 mongo:3.4.2
docker run --rm --network ut_network-{%} --link mongo-unittest-{%}:db untapt_ut python discover.py
The connection string is "mongodb://db:27017/mydb".
{%} is the number associated with the environment, so on ut_network-1 the database would be mongo-unittest-1. Note the alias to 'db'.
This works fine but I read that --link will be deprecated.
I thought the solution would be as simple as removing --link and setting the hostname:
docker run -d --hostname db --network ut_network-1 mongo:3.4.2
docker run -d --hostname db --network ut_network-2 mongo:3.4.2
docker run --rm --network ut_network-{%} untapt_ut python discover.py
However, if I do this then the application cannot find the mongo instance. Further:
- I can't use --name db because Docker would attempt to create multiple containers named 'db', which it obviously cannot do (even though they are on different networks).
- The default hostname of the mongo container is the first few characters of the container id. My unit tests all get the mongo connection string from a secrets file, which assumes the database host is 'db'.
- As I said, if I use --hostname db the core app cannot find the mongo instance. But if I hard-code the container id as the server, the core application finds the mongo instance fine.
I want to keep the alias 'db' so the unit tests can read the mongo connection string from a single source that I never need to change.
The documentation here implies I can use --link.
So am I doing this correctly? If not, how should I configure Docker networking so that I can create multiple networks and give the database a 'static' alias of 'db' on each one?
Any advice would be much appreciated.
Thanks in advance!

Yes, links are being deprecated and should be avoided. For DNS discovery, I thought the hostname would work, but I'm seeing the same results you are. You could use the container name with --name db, but that runs into the unique-name conflict you've already found, so I recommend against it for the same reasons. The best solution is to go directly to the goal with a network-scoped alias, --network-alias db:
docker run -d --network-alias db --network ut_network-1 mongo:3.4.2
docker run -d --network-alias db --network ut_network-2 mongo:3.4.2
docker run --rm --network ut_network-{%} untapt_ut python discover.py
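For example, the whole setup could be driven by a small loop (a sketch; the loop bound is arbitrary, the image names and test command come from the question, and each mongo keeps its unique --name while sharing the alias 'db'):
# bring up N isolated environments; each mongo answers to 'db' on its own network
for i in 1 2 3; do
  docker network create --driver bridge ut_network-$i
  docker run -d --name mongo-unittest-$i --network ut_network-$i --network-alias db mongo:3.4.2
  docker run --rm --network ut_network-$i untapt_ut python discover.py &
done
wait
# every batch connects with the same string: mongodb://db:27017/mydb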

Related

Unable to connect to MongoDB from Docker Container

I am running the MongoDB image with the following command:
docker run -d -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=test -e MONGO_INITDB_ROOT_PASSWORD=password --name=testdb mongo
This created the container, and I'm able to connect to it from robo3T.
Now I run the mongo-express image with the following command, trying to connect to the DB above:
docker run -d -p 8081:8081 -e ME_CONFIG_MONGODB_ADMINUSERNAME=test -e ME_CONFIG_MONGODB_ADMINPASSWORD=password -e ME_CONFIG_MONGODB_SERVER=testdb --name=mongo-ex mongo-express
But I'm getting the following error:
UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [testb:27017] on first connect [Error: getaddrinfo ENOTFOUND testb
If I create a custom bridge network and run these two images on it, it works.
My question is: as the default network is a bridge network, and these containers are created on the default bridge network, why are they not able to communicate? Why does it work with a custom bridge network?
There are two kinds of "bridge network"; if you don't have a docker run --net option then you get the "default" bridge network which is pretty limited. You almost always want to docker network create a "user-defined" bridge network, which has the standard Docker networking features.
# Use modern Docker networking
docker network create myapp
docker run -d --net myapp ... --name testdb mongo
docker run -d --net myapp ... -e ME_CONFIG_MONGODB_SERVER=testdb mongo-express
# Because both containers are on the same --net, the first
# container's --name is usable as a host name from the second
The "default" bridge network that you get without --net by default forbids inter-container communication, and you need a special --link option to make the connection. This is considered obsolete, and the Docker documentation page describing links notes that links "may eventually be removed".
# Use obsolete Docker networking; may stop working at some point
docker run -d ... --name testdb mongo
docker run -d ... -e ME_CONFIG_MONGODB_SERVER=testdb --link testdb mongo-express
# Containers can only connect to each other by name if they use --link
On modern Docker setups you really shouldn't use --link or the equivalent Compose links: option. Prefer to use the more modern docker network create form. If you're using Compose, note that Compose creates a network named default but this is a "user-defined bridge"; in most cases you don't need any networks: options at all to get reasonable inter-container networking.
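If you want to double-check that name resolution works on the user-defined bridge, a one-off lookup is enough (a sketch; busybox is just a convenient small image for this):
# should print an address for testdb; the same lookup fails on the default bridge
docker run --rm --net myapp busybox nslookup testdb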

How to connect containerized flask server to containerized postgres db docker? [duplicate]

I am trying to make a portable solution to having my application container connect to a postgres container. By 'portable' I mean that I can give the user two docker run commands, one for each container, and they will always work together.
I have a postgres docker container running on my local PC, and I run it like this,
docker run -p 5432:5432 -v $(pwd)/datadir:/var/lib/postgresql/data -e POSTGRES_PASSWORD=qwerty -d postgres:11
and I am able to access it from a python flask app, using the address 127.0.0.1:5432.
I put the python app in a docker container as well, and I am having trouble connecting to the postgres container.
Address 127.0.0.1:5432 does not work.
Address 172.17.0.2:5432 DOES work (172.17.0.2 is the address of the docker container running postgres). However I consider this not portable because I can't guarantee what the postgres container IP will be.
I am aware of the --add-host flag, but it also asks for a host IP, which I would want to be localhost (127.0.0.1). Despite several search hits on --add-host, I wasn't able to get it working in a way that lets the final docker run commands be the same on any computer they are run on.
I also tried this: docker container port accessed from another container
My situation is that the postgres and myApp will be containers running on the same computer. I would prefer a non-Docker compose solution.
The comment from Truong had me try that approach (again), and I got it working. Here are my steps in case they help someone else. The crux of the problem was needing one container to address another in a way that was static (didn't change). A user-defined network was the answer, because you can name a container and then reference it by that name from other containers.
My steps,
docker network create mynet
docker run --net mynet --name mydb -v $(pwd)/datadir:/var/lib/postgresql/data -e POSTGRES_PASSWORD=qwerty -d postgres:11
Now the hostname of the postgres database is mydb, and all of this container's ports are reachable by any other container running on this network.
Now add the front end app,
docker run --net mynet -ti -p 80:80 mydockerhubaccount/myapp
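To verify the name-based connection end to end, you could run a one-off psql client on the same network (a sketch; it assumes the default postgres user and database plus the password from the docker run above):
# hypothetical smoke test: resolve 'mydb' by name and run a trivial query
docker run --rm --net mynet postgres:11 psql postgresql://postgres:qwerty@mydb:5432/postgres -c 'SELECT 1;'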

What goes in "some-network" placeholder in dockerized redis cli?

I'm looking at documentation here, and see the following line:
$ docker run -it --network some-network --rm redis redis-cli -h some-redis
What should go in the --network some-network field? The docker run command in the preceding section of the documentation just did default port mapping, i.e. docker run -d -p 6379:6379, etc.
I'm starting my redis server with the default docker network configuration, and see that this is in use:
$ docker container ls
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                    NAMES
abcfa8a32de9   redis   "docker-entrypoint.s…"   19 minutes ago   Up 19 minutes   0.0.0.0:6379->6379/tcp   some-redis
However, using the default bridge network produces:
$ docker run -it --network bridge --rm redis redis-cli -h some-redis
Could not connect to Redis at some-redis:6379: Name or service not known
Ignore the --network bridge option and use:
docker exec -it some-redis redis-cli
Docker includes support for networking containers through the use of network drivers. By default, Docker provides two network drivers for you, the bridge and the overlay drivers. You can also write a network driver plugin so that you can create your own drivers but that is an advanced task.
Read more here.
https://docs.docker.com/engine/tutorials/networkingcontainers/
https://docs.docker.com/v17.09/engine/userguide/networking/
You need to run
docker network create some-network
It doesn't matter what name some-network is, just so long as the Redis server, your special CLI container, and any clients talking to the server all use the same name. (If you're using Docker Compose this happens for you automatically and the network will be named something like directoryname_default; use docker network ls to find it.)
If your Redis server is already running, you can use docker network connect to attach the existing container to the new network. This is one of the few settings you're able to change after you've created a container.
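Putting it together with the placeholder names from the documentation, the full sequence might look like this (a sketch; any names work as long as they match across the commands):
docker network create some-network
docker run -d --network some-network --name some-redis -p 6379:6379 redis
# or, if some-redis is already running on the default bridge:
# docker network connect some-network some-redis
docker run -it --network some-network --rm redis redis-cli -h some-redis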
If you're just trying to run a client to talk to this Redis, you don't need Docker at all. You can install the Redis client tools locally and run redis-cli, pointing at your host's IP address and the first port in the docker run -p option. The Redis wire protocol is simple enough that you can even use primitive tools like nc or telnet.

DNS not working between two linked docker containers - getaddrinfo EAI_AGAIN error

I am attempting to set up a temporary environment where I can execute ATs (acceptance tests) against a web application. To achieve this I have 3 docker containers:
Container 1: Database (mongo_local)
docker build -t mongo_local ./test/AT/mongo
docker run --name mongo_local -d -p 27017:27017 mongo_local
Container 2 (Web application):
docker run --name mywebapp_local -d -p 4431:4431 --link mongo_local -v /applicationdata:/applicationdata mywebapp
Container 3 (Newman test runner):
docker build -t newman_runner ./test/AT/newman
docker run --name newman_runner --link mywebapp_local newman_runner
The web application can access the database successfully using the following connection string: mongodb://mongo_local:27017/mydb. Note that I am able to reference mongo_local; I don't have to specify an IP address for the mongo_local container.
The newman test runner runs postman tests against the web application, and all tests execute successfully when I specify the IP address of the mywebapp_local container (i.e. 10.0.0.4) in the URL. However, if I specify the name mywebapp_local in the URL, it does not work.
Hence https://mywebapp_local/api/v1/method1 does not work but https://10.0.0.4/api/v1/method1 does work.
The error I'm getting is:
getaddrinfo EAI_AGAIN mywebapp_local mywebapp_local:443 at request ...
I've tried using --add-host in the docker run command and it makes no difference. Is there anything obvious that I'm doing wrong?
As you have it set up, the newman_runner container doesn't --link mongo_local and that's why it can't see it.
Docker has been discouraging explicit inter-container links for a while. If you create a Docker-internal network and attach each container to it
docker network create testnet
docker run --net testnet ...
then each container will be able to see all of the other containers on the same network by their --name, without an explicit --link.
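Applied to the three containers in the question, that might look like this (a sketch reusing the names, ports, and volume from the question):
docker network create testnet
docker run -d --net testnet --name mongo_local -p 27017:27017 mongo_local
docker run -d --net testnet --name mywebapp_local -p 4431:4431 -v /applicationdata:/applicationdata mywebapp
docker run --net testnet --name newman_runner newman_runner
# newman can now resolve mywebapp_local by name, and the app keeps mongodb://mongo_local:27017/mydb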

Docker containers connection issue

I have two containers. One of them is my application and the other is Elasticsearch 5.5.3. My application needs to connect to the ES container, but I always get "Connection refused".
I run my application with static port:
docker run -i -p 9000:9000 .....
I run ES with static port:
docker run -i -p 9200:9200 .....
How can I connect them?
You need to link the two containers by using --link.
Start your ES container with the name es:
$ docker run --name es -d -p 9200:9200 .....
Start your application container using --link:
$ docker run --name app --link es:es -d -p 9000:9000 .....
That's all. You should be able to access the ES container with the hostname es from the application container (i.e. app).
Try curl -I http://es:9200/ from inside the application container, and you should be able to reach the ES service running in the es container.
Ref - https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/#communication-across-links
I suggest one of the following:
1) use docker links to link your containers together.
2) use docker-compose to run your containers.
Solution 1 is considered deprecated, but it may be the easier one to get started with.
First, run your elasticsearch container giving it a name by using the --name=<your chosen name> flag.
Then, run your application container adding --link <your chosen name>:<your chosen name>.
Then, you can use <your chosen name> as the hostname to connect from the application to your elasticsearch.
Do you have a --network set on your containers? If they are both on the same --network, they can talk to each other over that network. So in the example below, the myapplication container would reference http://elasticsearch:9200 in its connection string to post to Elasticsearch.
docker run --name elasticsearch -p 9200:9200 --network=my_network -d elasticsearch:5.5.3
docker run --name myapplication --network=my_network -d myapplication
Learn more about Docker networks here: https://docs.docker.com/engine/userguide/networking/
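A quick way to verify the wiring (a sketch; it assumes curl is available inside the application image):
docker exec -it myapplication curl http://elasticsearch:9200/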
