Linking Docker Containers - docker

I have a Node.js app I'm trying to run in a Docker container. My app uses MongoDB and Redis.
I've pulled down both a mongo and redis container and dockerized my app.
I started up my mongo and redis containers like this:
docker run -i -t --name redis -d redis
docker run -i -t --name mongo -d mongo
Now, I link my nodejs app container to both of these and run the app:
docker run -i -t --name myapp --link mongo:mongo --link redis:redis mseay/myapp node /myapp/server.js
When I run my app, it fails with the error
Error: Redis connection to localhost:6379 failed - connect ECONNREFUSED
My app cannot connect to either the redis or the mongo container, even though docker ps shows both are running:
CONTAINER ID   IMAGE          COMMAND                 CREATED          STATUS          PORTS       NAMES
8709c818014a   redis:latest   "/entrypoint.sh redi    12 minutes ago   Up 12 minutes   6379/tcp    redis
6d87364ad4c9   mongo:latest   "/entrypoint.sh mong    12 minutes ago   Up 12 minutes   27017/tcp   mongo
Any ideas?

Make sure that you are connecting to your MongoDB and Redis instances like so:
Note that I have made some changes to how you link your containers. The names are important, as they are referred to later.
docker run -i -t --name myapp --link mongo:MONGODB --link redis:REDIS mseay/myapp node /myapp/server.js
For connecting to MongoDB:
IP = process.env.MONGODB_PORT_27017_TCP_ADDR
PORT = process.env.MONGODB_PORT_27017_TCP_PORT
var mongoUrl = 'mongodb://' + IP + ':' + PORT + '/';
or you can simply use:
var mongoUrl = 'mongodb://MONGODB:27017/';
Similarly, connect to the Redis database by using REDIS as its hostname (or the REDIS_PORT_6379_TCP_ADDR and REDIS_PORT_6379_TCP_PORT environment variables).
Explanation:
When you create a docker container and link other docker containers via the --link parameter, Docker modifies your container's /etc/hosts file and inserts the IPs of the linked containers against the names you chose (--link=container_name:NAME_OF_YOUR_CHOICE).
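For illustration, a hedged look inside the new container (the addresses are examples and the exact /etc/hosts layout varies by Docker version):
cat /etc/hosts
# ...
# 172.17.0.2    MONGODB
# 172.17.0.3    REDIS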
Hence, if you open a bash in your new container and try to run
ping MONGODB
ping REDIS
you can see that both are reachable, and hence if you try connecting to them, it works (assuming you have the mongodb and redis client tools installed in the new container, and that your redis and mongodb instances are running on their default ports):
mongo --host=MONGODB
redis-cli -h REDIS

If you are using the official repo for redis
https://registry.hub.docker.com/_/redis/, run the command
docker run --name redis -d redis
instead of
docker run -i -t --name redis -d redis
-i -t opens an interactive session, while
-d runs the container as a detached daemon process, so the two should not be used together.
The linking command seems appropriate.
To check if the container is linked properly with your app,
go into your app container using /bin/bash and run the env command. You should be able to see two environment variables giving the Redis host and the Redis port.
This worked for me. Please let us know if it works for you as well.
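For example, a hedged check assuming the container names used in the question:
docker exec -it myapp /bin/bash
env | grep REDIS
# with --link redis:redis you should see variables such as
# REDIS_PORT_6379_TCP_ADDR and REDIS_PORT_6379_TCP_PORT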

Your error message says that you're trying to connect to localhost to get to redis. But you started your container with --link redis:redis, so you should be looking for Redis at hostname redis.
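A quick hedged check, assuming the container names from the question:
docker exec -it myapp cat /etc/hosts    # should show an entry mapping "redis" to the redis container's IP
Your connection code should then use redis as the host (or the REDIS_PORT_6379_TCP_ADDR environment variable) rather than localhost.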

Another cause of "connection refused" can be the Redis config not allowing anything else but 127.0.0.1 to connect. This is for example the default setting if you installed Redis using apt-get install redis-server.
Since the container linking to Redis will have a different originating IP address, you will get "Connection refused" when trying to connect.
One solution is to put a hash character in front of the line bind 127.0.0.1 in redis.conf.
This will however allow any host or container to connect to your Redis container, so it is only recommended if you have control over the host and can add firewall filters there. Also, make sure that you trust all other containers executing on the host, otherwise they will be able to connect to your Redis container. Note that Redis also supports requiring a password upon connecting, which makes things safer even though you are sharing the host environment with other people's containers.
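A hedged sketch for a Redis installed on the host via apt-get (paths and the service manager may differ on your system):
# comment out the bind line so Redis listens on all interfaces
sudo sed -i 's/^bind 127.0.0.1/# bind 127.0.0.1/' /etc/redis/redis.conf
# optionally require a password, as mentioned above, by adding a line like:
#   requirepass some-long-secret
sudo service redis-server restart
On newer Redis versions you may additionally need to disable protected-mode or set a password before non-loopback clients are accepted.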

Related

How to connect containerized flask server to containerized postgres db docker? [duplicate]

I am trying to make a portable solution to having my application container connect to a postgres container. By 'portable' I mean that I can give the user two docker run commands, one for each container, and they will always work together.
I have a postgres docker container running on my local PC, and I run it like this,
docker run -p 5432:5432 -v $(pwd)/datadir:/var/lib/postgresql/data -e POSTGRES_PASSWORD=qwerty -d postgres:11
and I am able to access it from a python flask app, using the address 127.0.0.1:5432.
I put the python app in a docker container as well, and I am having trouble connecting to the postgres container.
Address 127.0.0.1:5432 does not work.
Address 172.17.0.2:5432 DOES work (172.17.0.2 is the address of the docker container running postgres). However I consider this not portable because I can't guarantee what the postgres container IP will be.
I am aware of the --add-host flag, but it also asks for a host IP, which I want to be localhost (127.0.0.1). Despite several search hits on --add-host, I wasn't able to get it to work so that the final docker run commands could be the same on any computer they are run on.
I also tried this: docker container port accessed from another container
My situation is that the postgres and myApp will be containers running on the same computer. I would prefer a non-Docker compose solution.
The comment from Truong had me try that approach (again) and I got it working. Here are my steps in case they help someone else. The crux of the problem was needing one container to address another container in a way that was static (didn't change). Using a user-defined network was the answer, because you can name a container and thus reference it by that name.
My steps,
docker network create mynet
docker run --net mynet --name mydb -v $(pwd)/datadir:/var/lib/postgresql/data -e POSTGRES_PASSWORD=qwerty -d postgres:11
Now the hostname of the postgres database is mydb, and all the ports of this container are reachable from any other container running on this network.
Now add the front end app,
docker run --net mynet -ti -p 80:80 mydockerhubaccount/myapp
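A hedged way to verify the setup, reusing the names and password from the commands above (psql is included in the postgres:11 image):
docker run --rm --net mynet postgres:11 \
  psql postgresql://postgres:qwerty@mydb:5432/postgres -c 'SELECT 1;'
The app's connection string would likewise use mydb as the host instead of 127.0.0.1 or a hard-coded container IP.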

What goes in "some-network" placeholder in dockerized redis cli?

I'm looking at documentation here, and see the following line:
$ docker run -it --network some-network --rm redis redis-cli -h some-redis
What should go in the --network some-network field? My docker run command from the step before used default port mapping (docker run -d -p 6379:6379, etc.).
I'm starting my redis server with default docker network configuration, and see this is in use:
$ docker container ls
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                    NAMES
abcfa8a32de9   redis   "docker-entrypoint.s…"   19 minutes ago   Up 19 minutes   0.0.0.0:6379->6379/tcp   some-redis
However, using the default bridge network produces:
$ docker run -it --network bridge --rm redis redis-cli -h some-redis
Could not connect to Redis at some-redis:6379: Name or service not known
Ignore the --network bridge command and use:
docker exec -it some-redis redis-cli
Docker includes support for networking containers through the use of network drivers. By default, Docker provides two network drivers for you, the bridge and the overlay drivers. You can also write a network driver plugin so that you can create your own drivers but that is an advanced task.
Read more here.
https://docs.docker.com/engine/tutorials/networkingcontainers/
https://docs.docker.com/v17.09/engine/userguide/networking/
You need to run
docker network create some-network
It doesn't matter what name some-network is, just so long as the Redis server, your special CLI container, and any clients talking to the server all use the same name. (If you're using Docker Compose this happens for you automatically and the network will be named something like directoryname_default; use docker network ls to find it.)
If your Redis server is already running, you can use docker network connect to attach the existing container to the new network. This is one of the few settings you're able to change after you've created a container.
If you're just trying to run a client to talk to this Redis, you don't need Docker for this at all. You can install the Redis client tools locally and run redis-cli, pointing at your host's IP address and the first port in the docker run -p option. The Redis wire protocol is simple enough that you can also use primitive tools like nc or telnet as well.
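Putting it together, a hedged end-to-end sketch using the names from the question:
docker network create some-network
docker run -d --network some-network --name some-redis redis
docker run -it --network some-network --rm redis redis-cli -h some-redis
# if some-redis is already running on the default bridge:
# docker network connect some-network some-redis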

Cannot connect to redis from different docker container

Currently I have a personal docker image uploaded to DockerHub, and I am making changes to connect this application to Redis (which is running in another docker container as well).
Now, I get the following message from my app when it tries to connect to redis:
Error saving to redis: dial tcp 127.0.0.1:6379: connect: connection refused
I understand that they're not on the same network, so my app is pointing to 127.0.0.1:6379 where nothing is listening. I am looking for the best way to connect those containers without making my app depend on the IP where Redis is hosted. From my local machine I can use Redis, but not from another docker container. Briefly, what I did was:
docker run --name redis_server -p 6379:6379 -d redis
sudo docker run -d --restart=always -p 10000:10000 --link redis_server:redis --name my_app repo/my_app
So, I am looking for solutions on how to make Redis reachable from my_app. I don't use docker-compose, by the way.
The easiest solution would be to run Docker with "--network=host", which binds the containers to the host network. This is a perfectly fine solution for testing; however, it will expose the ports and services directly on the host (and, without a firewall, to the internet), which may or may not be acceptable security-wise.
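A hedged sketch of that option, reusing the image and names from the question (note that -p mappings are ignored with host networking, since the containers share the host's ports directly):
docker run --network=host --name redis_server -d redis
sudo docker run -d --restart=always --network=host --name my_app repo/my_app
# the app really can reach Redis at 127.0.0.1:6379 now, but both services
# are listening on the host itself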
Another way of doing this would be to define a network in the following way:
docker network create -d bridge my-net
And then to run your containers on that same network by running the docker run command with --network=my-net. This way you can reference each container by its name. For example in your case you can ping my_app from the Redis container and vice-versa.
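Concretely, a hedged sketch adapting the commands from the question:
docker network create -d bridge my-net
docker run --net my-net --name redis_server -d redis
sudo docker run -d --restart=always --net my-net -p 10000:10000 --name my_app repo/my_app
# inside my_app, connect to host redis_server on port 6379 instead of 127.0.0.1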

How do I link two running docker containers together?

I'm wondering how to link docker containers that are already running. Is this possible?
For example if I have 2 app (app1 and app2) images and a single running mongo container, I can link them pretty easily when I run them by doing the following:
docker run -d --name app1 --link mongo:mongo -p 8080:8080 app1
docker run -d --name app2 --link mongo:mongo -p 8081:8081 app2
This works great. However, suppose I have already run app1 and app2 without linking them to the mongo container at the start; how do I go about linking the applications' containers to the running mongo container?
You need to expose the ports from your containers to the host; then the containers can reach each other via your HOST_IP (from inside a container, the default bridge gateway is typically 172.17.0.1).
Example: your app is running on 8080 and your mongo is running on 8000 (the port published to the host).
Exec into your app container and get the $HOST_IP using ifconfig.
After that, try to reach your mongo service:
curl $HOST_IP:8000 (I'm not sure about this exact command; if it doesn't work, search for an alternative)
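A hedged sketch of that approach, using ip route rather than ifconfig to read the default gateway (assuming ip, awk and curl are available in the app1 image):
HOST_IP=$(ip route | awk '/default/ {print $3}')   # typically 172.17.0.1
curl -v "$HOST_IP:8000"
# any response at all shows that the mongo port published on the host is
# reachable from inside the container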

Docker container cannot connect to linked containers services

I'm using Docker version 1.9.1 build a34a1d5 on an Ubuntu 14.04 server host and I have 4 containers: redis (based on alpine linux 3.2), mongodb (based on alpine linux 3.2), postgres (based on ubuntu 14.04) and the one that will run the application that connects to these other containers (based on alpine linux 3.2). All of the db containers expose their corresponding ports in the Dockerfile.
I made modifications to the database containers so their services don't bind only to the localhost IP but to all addresses. This way I should be able to connect to all of them from the app container.
For the sake of testing, I first ran the database containers and then the app one with a command like the following:
docker run --rm --name app_container --link mongodb_container --link redis_container --link postgres_container -t localhost:5000/app_image
I enter the terminal of the app container and verify that its /etc/hosts file contains the IPs and names of the other containers. I am then able to ping all the db containers, but I cannot connect to the ports of any of them.
A simple telnet mongodb_container 27017 just sits and waits forever, and the same happens if I try to connect to the other db containers. If I run the application, it also complains that it cannot connect to the specified db services.
Important note: I am able to telnet the corresponding ports of all the db containers from the host.
What might be happening?
EDIT: I'll include the run commands for the db containers:
docker run --rm --name mongodb_container -t localhost:5000/mongodb_image
docker run --rm --name redis_container -t localhost:5000/redis_image
docker run --rm --name postgres_container -t localhost:5000/postgres_image
Well, the problem with telnet seems to be related to the telnet client on alpine linux since the following two commands showed me that the ports on the containers were open:
nmap -p27017 172.17.0.3
nc -vz 172.17.0.3 27017
Because I was focused mainly on the telnet command I issued, I believed that the problem was the ports being closed or something like that; I had overlooked the configuration file the app was using to connect to the services (it had the wrong filename), my bad.
All works fine now.
