I am working on a microservice architecture where we have many different projects, all of which connect to the same Redis instance. I want to move this architecture to Docker for the development environment. Since all of the projects live in separate repositories, I cannot simply use one docker-compose.yml file to connect them all. After doing some research I figured out that I can create a shared external network to connect all of the projects, so I started by creating a network:
docker network create common_network
I created a separate project for common services such as MongoDB, Redis, and RabbitMQ (the services that are used by all projects). Here is the sample docker-compose file of this project:
version: '3'
services:
  redis:
    image: redis:latest
    container_name: test_project_redis
    ports:
      - "6379:6379"
    networks:
      - common_network
networks:
  common_network:
    external: true
Now when I run docker-compose build and docker-compose up -d it works like a charm, and I can connect to Redis from my local machine using 127.0.0.1:6379. But there is a problem when I try to connect to this Redis container from another container.
Here is another sample docker-compose.yml for a different project, which runs Node.js (I am not including the Dockerfile since it is irrelevant to this issue):
version: '3'
services:
  api:
    build: .
    container_name: sample_project_api
    networks:
      - common_network
networks:
  common_network:
    external: true
There is no problem when I build and run this docker-compose either, but the Node.js project gets a CONNREFUSED 127.0.0.1:6379 error, i.e. it cannot connect to the Redis server over 127.0.0.1.
So I opened a shell in the api container (docker exec -i -t sample_project_api /bin/bash) and installed redis-tools to run some tests.
When I run redis-cli ping, it returns Could not connect to Redis at 127.0.0.1:6379: Connection refused.
I checked the external network to see whether all of the containers were connected to it properly, using docker network inspect common_network. There was no problem: all of the containers were listed under Containers, and from there I noticed that the test_project_redis container had an IP address of 192.168.16.3.
As a last resort I tried using the internal IP address of the Redis container:
From the sample_project_api container I ran redis-cli -h 192.168.16.3 ping and it returned PONG, so that worked.
So my problem is that I cannot connect to the Redis server from other containers using the IP address 127.0.0.1 or 0.0.0.0, but I can connect using 192.168.16.3, which changes every time I restart the Docker container. What is the reason behind this?
Containers run in their own network namespace. Each container has its own loopback interface, plus one IP per network you attach it to. Therefore loopback, or 127.0.0.1, in one container refers to that container itself, not to the Redis container. To connect to Redis, use the service name in your connection settings, and Docker's embedded DNS will resolve it to the IP of the container running Redis:
redis:6379
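For example, from inside the api container (with redis-tools installed, as in the question) a quick check against the service name from the first compose file would look like this:

# run inside sample_project_api; "redis" is the service name, which Docker DNS resolves
redis-cli -h redis -p 6379 ping
# expected output: PONG

In the Node.js project, the connection settings would likewise point at redis:6379 (for example redis://redis:6379) instead of 127.0.0.1:6379.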
Related
I am running Docker on Mac and am trying to set up a development environment for an Angular project in a Docker container.
My docker-compose setup currently looks like this:
version: '3.7'
services:
  dev:
    build:
      context: .
      dockerfile: "${DOCKERFILE_DIR}"
    working_dir: "${CONTAINER_DIR}"
    ports:
      - "3000:4200"
      - "3001:8080"
    volumes:
      - "nfsmount:${CONTAINER_DIR}"
    tty: true
volumes:
  nfsmount:
    driver: local
    driver_opts:
      type: nfs
      o: addr=host.docker.internal,rw,nolock,hard,nointr,nfsvers=3
      device: ":/System/Volumes/Data/${SOURCE_DIR}"
The thing is, when I run ng serve inside the Docker container, it binds to localhost:4200 inside the container, not to an interface that the published ports can reach. This means that the port mapping "3000:4200" is not enough for me to reach localhost:4200 in my Docker container from localhost:3000 on my host machine.
Sure, an easy solution would be to serve on 0.0.0.0:4200 in my Docker container by using ng serve --host 0.0.0.0 instead. However, I am trying to mimic my development environment as closely as possible, so I was wondering if there was any other way to connect localhost:4200 to 0.0.0.0:4200 inside my Docker container (or better still, to connect localhost:3000 on my host machine directly to localhost:4200 in my Docker container).
Any help is greatly appreciated!
It is not possible unless you use host networking for the container. When you create a container, it gets its own network namespace. The loopback interface is accessible only to processes running in the same network namespace, i.e., the processes inside the container, and hence cannot be reached from the host.
Instead of running your container in a separate network namespace, you can run it on the host network using the network_mode: "host" parameter in the compose file. This should work for your use case if you insist on not binding to 0.0.0.0 inside the container. See https://docs.docker.com/compose/compose-file/#network_mode
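A minimal sketch of that variant, adapted from the compose file above; note that published port mappings are ignored under host networking, so the ports section is dropped (and be aware that host networking has historically been limited on Docker Desktop for Mac, since containers run inside a VM):

version: '3.7'
services:
  dev:
    build:
      context: .
      dockerfile: "${DOCKERFILE_DIR}"
    working_dir: "${CONTAINER_DIR}"
    # share the host's network stack; ng serve on localhost:4200 is then
    # reachable as localhost:4200 on the host, with no port mapping involved
    network_mode: "host"
    tty: true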
I am trying to connect an app that is a docker-compose service to a MongoDB instance running in a separate Docker container, using its own network, on the same host machine. What URL should the app use to connect over that external network?
My steps...
Created a network and started MongoDB on that network:
docker network create my_app_mongo_db
docker run --name db-mongo -d --network=my_app_mongo_db -p 27017:27017 mongo
Created a docker-compose.yaml like so:
version: "3"
services:
my_app:
image: my_app
container_name: my_app
networks:
- default
- my_app_mongo_db
networks:
default:
my_app_mongo_db:
external: true
docker-compose up starts fine and docker network inspect my_app_mongo_db shows that the service is connected to the external network.
Next I try to connect to MongoDB using this URL, similar to how I would connect to the DB if it were running as a compose service:
mongodb://my_app_mongo_db:27017
Yet this approach doesn't work over an external network. What does work is using host.docker.internal:
mongodb://host.docker.internal:27017
But according to this Docker doc:
This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
Any ideas how to resolve this when running on a server?
You need to connect using the container name, not the network name. Try this:
mongodb://db-mongo:27017
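The URL in the question used the network name (my_app_mongo_db), which Docker's DNS does not resolve; on a shared network, names resolve for containers and their aliases, not for the network itself. A quick sanity check from inside the app container, assuming getent is available in the image:

docker exec -it my_app getent hosts db-mongo
# should print db-mongo's IP address on the my_app_mongo_db network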
I have two services defined in my docker-compose file:
version: '3'
services:
  celery:
    build:
      context: .
      dockerfile: ./docker/celery/Dockerfile
    command: celery -A api.tasks worker -l info
  rabbitmq:
    image: "rabbitmq:3-management"
    ports:
      - "5672:5672"
      - "15672:15672"
    hostname: "0.0.0.0"
I can start the first service:
docker-compose run --service-ports rabbitmq
And everything works well. I can ping and connect to port 5672 from the host OS:
$ curl 0.0.0.0:5672
AMQP
However, the second service cannot see that port. The following command errors because it cannot connect to 0.0.0.0:5672:
docker-compose run --service-ports celery
How do I set up two Docker containers so that they can see each other?
From inside a Docker container, 0.0.0.0 (like the loopback address 127.0.0.1) refers to the container itself, not to other containers. To reach a port published by another container, you can use your host's IP address.
Here's an extensive explanation of how to reach your host from inside a container, and of the various network modes Docker offers: https://stackoverflow.com/a/24326540/417866
Another approach would be to create a Docker network, connect both of your containers to it, and then they'll be able to resolve each other on that network, as sketched below. Details here: https://docs.docker.com/engine/reference/commandline/network_create/
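A hedged sketch of that approach; the network name and container names are illustrative, not taken from the question:

docker network create app_net
docker network connect app_net rabbitmq_container
docker network connect app_net celery_container
# each container can now resolve the other by container name on app_net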
So the easy answer is to refer to each other by name. In your compose file you define two services:
- rabbitmq
- celery
If you use docker-compose up -d (or just docker-compose up), it will create the containers on a newly created network that they share. Docker Compose then registers both services with that network's DNS via automatic aliases.
So from celery you could ping rabbitmq via:
ping rabbitmq
and from rabbitmq you could ping celery via:
ping celery
This applies to all network communication, as it's just name resolution. You could accomplish all of this manually by creating a new network, assigning the containers to it, and then registering aliases, but docker-compose does the hard work for you.
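Applied to the question above, the celery container should address the broker as rabbitmq:5672 rather than 0.0.0.0:5672. A quick check from inside the celery container, mirroring the host-side test from the question (assuming curl is available in the celery image):

curl rabbitmq:5672
# AMQP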
I have the following docker-compose file and run it locally:
version: '3.4'
services:
  testservice.api:
    image: testservice.api
    build:
      context: .
      dockerfile: Services/.../Dockerfile
    ports:
      - "5101:80"
  sql.data:
    image: postgres.jchem
    build:
      context: ../db/postgres
      dockerfile: Dockerfile
    ports:
      - "5432:5432"
      - "9090:9090"
Now from within the sql.data container I try to execute curl http://localhost:5101/...
But I get exit code (7), Connect failed.
When I try to connect via curl http://testservice.api/... it works.
Why can't I connect with localhost:port? And how can I accomplish to connect from within a docker container to another with curl localhost:port?
Why can't I connect with localhost:port?
That's because each container gets its own network interface, hence its own 'localhost'.
And how can I accomplish to connect from within a docker container to another with curl localhost:port?
What you can do is use network_mode: "host" in each compose service, so that every container uses the host's network interface. However, I recommend adapting your apps to be configurable, so they receive the URLs of their service dependencies as parameters (for example, as shown below).
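A hedged sketch of the configurable approach; the environment variable name is hypothetical, and the app inside the container would read it instead of hard-coding localhost:

services:
  sql.data:
    image: postgres.jchem
    environment:
      # hypothetical variable consumed by whatever runs inside sql.data;
      # note the container port (80), not the published host port (5101)
      API_BASE_URL: http://testservice.api:80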
When you say localhost inside a container, it refers to the container itself. For a container to communicate with other containers, you need to make sure those containers are connected to the same network; then you can access them via their DNS name (their service name) or their container IP.
I'm running one docker-compose stack exposing Logstash on port 5044 (using docker-elk). I'm able to make requests to the service on localhost:5044 on my host, so the port is exposed correctly.
I'm then running another docker-compose stack (Filebeat), but from there I cannot connect to localhost:5044. Here is the docker-compose file:
version: '2'
services:
  filebeat:
    build: filebeat/
    networks:
      - elk
networks:
  elk:
    driver: bridge
Any clue why localhost:5044 is not accessible from this docker-compose stack?
First of all, the compose file you linked exposes port 5000, but you say you're trying to connect to port 5044.
Secondly, exposing port 5044 (or 5000) will make the port available to the host machine, not to other containers launched with other compose files.
The way I see it, you can either:
- keep the first service as it is and, instead of localhost:port, have the second service use your_ip:port, where your_ip can be retrieved from ifconfig -a or something similar and should look like 192.168.x.x; or
- connect both services to an externally created network, like so:
First create the network with docker network create test_network, then link the services to the external network in each compose file:
networks:
  test_network:
    external: true
Then change the Logstash reference from localhost:port to logstash:port (see the sketch below).
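A minimal sketch of the Filebeat side under that approach, assuming the Logstash service in the other compose project is named logstash and is attached to the same external network:

version: '2'
services:
  filebeat:
    build: filebeat/
    networks:
      - test_network
networks:
  test_network:
    external: true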
Good luck