I have the following docker-compose file and I run it locally:
version: '3.4'
services:
  testservice.api:
    image: testservice.api
    build:
      context: .
      dockerfile: Services/.../Dockerfile
    ports:
      - "5101:80"
  sql.data:
    image: postgres.jchem
    build:
      context: ../db/postgres
      dockerfile: Dockerfile
    ports:
      - "5432:5432"
      - "9090:9090"
Now within the sql.data container I try to execute curl http://localhost:5101/...
But I get curl exit code (7): Connect failed.
When I try to connect via curl http://testservice.api/... it works.
Why can't I connect with localhost:port? And how can I connect from within one docker container to another using curl localhost:port?
Why can't I connect with localhost:port?
That's because each container gets its own network interface, hence its own 'localhost'.
And how can I connect from within one docker container to another using curl localhost:port?
What you can do is use network_mode: "host" in each compose service so that every container uses the same host network interface. That said, I recommend making your apps configurable so they receive the URLs of their service dependencies as parameters (environment variables, for example).
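As a rough sketch of that approach (untested, and note that host networking behaves this way mainly on Linux hosts), each service would drop its ports: mapping, since published ports are ignored in host mode:

version: '3.4'
services:
  testservice.api:
    image: testservice.api
    build:
      context: .
      dockerfile: Services/.../Dockerfile
    network_mode: "host"
  sql.data:
    image: postgres.jchem
    build:
      context: ../db/postgres
      dockerfile: Dockerfile
    network_mode: "host"

With both services on the host network the API listens directly on the host (port 80 here, since the 5101:80 mapping no longer applies), so curl http://localhost:80/... from sql.data would reach it.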
When you say localhost inside a container, it refers to the container itself. For one container to communicate with another, you need to make sure both containers are connected to the same network; then you can reach them via DNS (their service name) or via their container IP.
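A couple of quick checks, as a sketch (the network name below is a placeholder for the default network compose derives from the project directory; the paths are the elided ones from the question):

# inside sql.data: use the service name and the container port (80), not the published 5101
curl http://testservice.api:80/...

# on the host: list the containers and IPs attached to the shared network
docker network inspect myproject_default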
I am running docker on mac and am trying to set up a development environment for an angular project in a docker container.
My docker-compose setup currently looks like this:
version: '3.7'
services:
  dev:
    build:
      context: .
      dockerfile: "${DOCKERFILE_DIR}"
    working_dir: "${CONTAINER_DIR}"
    ports:
      - "3000:4200"
      - "3001:8080"
    volumes:
      - "nfsmount:${CONTAINER_DIR}"
    tty: true
volumes:
  nfsmount:
    driver: local
    driver_opts:
      type: nfs
      o: addr=host.docker.internal,rw,nolock,hard,nointr,nfsvers=3
      device: ":/System/Volumes/Data/${SOURCE_DIR}"
The thing is, when I run ng serve inside the docker container, it serves on localhost:4200 of the docker container and not on the container's exposed ports. This means the port mapping "3000:4200" is not enough to connect localhost:3000 on my host machine to localhost:4200 in my docker container.
Sure, an easy solution would just be to serve to 0.0.0.0:4200 of my docker container by using ng serve --host 0.0.0.0 instead. However, I am trying to mimic my development environment as much as possible, so I was wondering if there was any other way to connect localhost:4200 to 0.0.0.0:4200 inside my docker container (or better still, connect localhost:3000 of my host machine directly to localhost:4200 of my docker container).
Any help is greatly appreciated!
It is not possible unless you use host networking for the container. When you create a container, it gets its own network namespace. The loopback interface is accessible only to processes running in the same network namespace, i.e. the processes inside the container, and hence cannot be reached from the host.
Instead of running your container in a separate network namespace, you can run it on the host network using the network_mode: "host" parameter in docker-compose. This should work for your use case if you insist on not binding to 0.0.0.0 inside the container. See https://docs.docker.com/compose/compose-file/#network_mode
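A minimal sketch of that for the dev service from the question (assuming a Linux host; ports: mappings are ignored with host networking, so they are dropped here):

services:
  dev:
    build:
      context: .
      dockerfile: "${DOCKERFILE_DIR}"
    working_dir: "${CONTAINER_DIR}"
    network_mode: "host"
    volumes:
      - "nfsmount:${CONTAINER_DIR}"
    tty: true

Be aware that host networking has historically not worked the same way on Docker Desktop for Mac as on Linux, so binding ng serve to 0.0.0.0 and keeping the port mapping is usually the more portable choice.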
I am trying to connect some app that is a docker-compose service to a MongoDB running in a separate docker container using its own network on the same host machine. What URL should be used by the app to connect to that external network?
My steps...
Created a network and started MongoDB on that network:
docker network create my_app_mongo_db
docker run --name db-mongo -d --network=my_app_mongo_db -p 27017:27017 mongo
Created a docker-compose.yaml like so:
version: "3"
services:
my_app:
image: my_app
container_name: my_app
networks:
- default
- my_app_mongo_db
networks:
default:
my_app_mongo_db:
external: true
docker-compose up starts fine and docker network inspect my_app_mongo_db shows that the service is connected to the external network.
Next I am trying to connect to MongoDB using this URL, similar to how I would connect to the DB if it were running as a compose service:
mongodb://my_app_mongo_db:27017
Yet this approach doesn't work with an external network. What does work is using host.docker.internal:
mongodb://host.docker.internal:27017
But according to this Docker doc:
This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
Any ideas how to resolve this when running on a server?
You need to connect using the container name, not the network name. Try this:
mongodb://db-mongo:27017
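To double-check the name resolution first, something like this from the host should work (assuming ping is available in the my_app image):

docker exec -it my_app sh -c "ping -c 1 db-mongo"

The container name works here because both containers sit on the my_app_mongo_db network, so Docker's embedded DNS resolves db-mongo to the mongo container's IP.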
I have two docker containers, nginx and php, from which I want to access mysql server running on host machine and sql server on remote machine.
I have tried changing the network type from "bridge" to "host", but that returns errors.
version: '2'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - /var/www/:/code
      - ./site.conf:/etc/nginx/conf.d/default.conf
    networks:
      - mynetwork
  php:
    image: php:fpm
    volumes:
      - /var/www/:/code
    networks:
      - mynetwork
networks:
  mynetwork:
    driver: bridge
I'm expecting the PHP code running in my containers to be able to connect to those two databases.
Note: I'm not using docker run to start the containers; I'm using docker-compose up -d, so I just want to edit the docker-compose.yml file.
Just make sure the container can reach the external databases over the network; both the "bridge" and "host" network types can do this.
First, you need to make sure you have a correct MySQL grant rule, such as allowing connections from %.
1. You can use the host's IP to access the MySQL instance on the host from inside the container.
2. Other MySQL instances on the same LAN as the host can likewise be accessed from the container using their LAN IPs.
Make sure the ping works; otherwise your Docker installation may have problems, for example with iptables.
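As a rough illustration of those points (the database, user, password, and 192.168.1.10 below are placeholders; use your host's actual LAN IP and adapt the SQL to your MySQL version):

# on the host: allow the application user to connect from other addresses
mysql -u root -p -e "CREATE USER 'appuser'@'%' IDENTIFIED BY 'secret'; GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'%'; FLUSH PRIVILEGES;"
# also check that MySQL is not bound only to 127.0.0.1 (bind-address in my.cnf)

# the PHP code in the containers would then use 192.168.1.10 (the host's LAN IP)
# as the database server, never localhost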
In your php service declaration you have to add something like:
extra_hosts:
  - "local_db:host_ip"
Where local_db is the name you will configure in your database connection string and host_ip is the IP of your host on the local network.
You have to make sure that your php code does not try to connect to "localhost" because that will not work. You need to use the server name "local_db" (in my example).
You do the same thing for the remote database, just make sure the IP is reachable.
You can remove the network declaration because it is not needed.
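Put together, the php service might look roughly like this (192.168.1.10 and 10.0.0.25 are placeholders for the host's LAN IP and the remote SQL server's IP):

  php:
    image: php:fpm
    volumes:
      - /var/www/:/code
    extra_hosts:
      - "local_db:192.168.1.10"
      - "remote_db:10.0.0.25"

The PHP connection strings would then use local_db and remote_db as the server names.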
For docker containers to have access to each other, you should link them. Docker's --link switch adds the ID and IP of one container to the /etc/hosts file of another.
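For reference, the legacy form of that is the links: key in compose (or --link with docker run); note it only covers container-to-container traffic, not a database running on the host machine or a remote one:

services:
  web:
    image: nginx:latest
    links:
      - php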
I have two services defined in my docker-compose file:
version: '3'
services:
  celery:
    build:
      context: .
      dockerfile: ./docker/celery/Dockerfile
    command: celery -A api.tasks worker -l info
  rabbitmq:
    image: "rabbitmq:3-management"
    ports:
      - "5672:5672"
      - "15672:15672"
    hostname: "0.0.0.0"
I can start the first service
docker-compose run --service-ports rabbitmq
And everything works well. I can ping and connect to port 5672 from the host OS.
$ curl 0.0.0.0:5672
AMQP
However, the second service cannot see that port. The following command errors because it cannot connect to 0.0.0.0:5672.
docker-compose run --service-ports celery
How do I setup two docker containers, such that they can see each other?
From inside a Docker container, 0.0.0.0 and the loopback address 127.0.0.1 refer to the container itself, not the host. Use your host's IP address to reach the other container.
Here's an extensive explanation on how to reach your host from inside a container and various network modes that Docker offers: https://stackoverflow.com/a/24326540/417866
Another approach would be to create a Docker Network, connect both your containers to it, and then they'll be able to resolve each other on that network. Details here: https://docs.docker.com/engine/reference/commandline/network_create/
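A manual version of that network approach, as a sketch (the container names are whatever docker ps shows for your two services):

docker network create my_shared_net
docker network connect my_shared_net <rabbitmq-container-name>
docker network connect my_shared_net <celery-container-name>
# each container can now reach the other by name on my_shared_net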
So the easy answer is to refer to each other by name. In your compose file you reference two services:
rabbitmq
celery
If you use docker-compose up -d (or just docker-compose up), it will create the new containers on a newly created network they share. Docker Compose then registers both services with the DNS service for that network via an automatic alias.
So from celery, you could ping rabbitmq via:
ping rabbitmq
and from rabbitmq, you could ping celery via:
ping celery
This applies to all network communication, as it's just name resolution. You can accomplish all of this manually by creating a new network, attaching the containers to it, and registering aliases, but docker-compose does all that hard work for you.
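Concretely, the celery service should point its broker URL at the rabbitmq service name rather than 0.0.0.0. A sketch, assuming the default guest credentials and that the app reads its broker URL from an environment variable (the variable name is an assumption):

  celery:
    build:
      context: .
      dockerfile: ./docker/celery/Dockerfile
    command: celery -A api.tasks worker -l info
    environment:
      - CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//
    depends_on:
      - rabbitmq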
I am working on a micro-service architecture where we have many different projects and all of them connect to the same redis instance. I want to move this architecture to Docker for the development environment. Since all of the projects have separate repositories, I cannot simply use one docker-compose.yml file to connect them all. After doing some research I figured that I can create a shared external network to connect all of the projects, so I started by creating a network:
docker network create common_network
I created a separate project for common services such as mongodb, redis, rabbitmq (The services that is used by all projects). Here is the sample docker-compose file of this project:
version: '3'
services:
  redis:
    image: redis:latest
    container_name: test_project_redis
    ports:
      - "6379:6379"
    networks:
      - common_network
networks:
  common_network:
    external: true
Now when I run docker-compose build and docker-compose up -d it works like a charm and I can connect to redis from my local machine using 127.0.0.1:6379. But there is a problem when I try to connect to this redis container from another container.
Here is another sample docker-compose.yml for a second project which runs Node.js (I am not including the Dockerfile since it is irrelevant to this issue):
version: '3'
services:
  api:
    build: .
    container_name: sample_project_api
    networks:
      - common_network
networks:
  common_network:
    external: true
There is no problem when I build and run this docker-compose either, but the Node.js project gets a CONNREFUSED 127.0.0.1:6379 error, which means it cannot connect to the Redis server over 127.0.0.1.
So I opened a shell inside the api container (docker exec -i -t sample_project_api /bin/bash) and installed redis-tools to run some tests.
When I try redis-cli ping it returns Could not connect to Redis at 127.0.0.1:6379: Connection refused.
I checked the external network to see if all of the containers are connected to it properly, using docker network inspect common_network. There was no problem; all of the containers were listed under Containers, and from there I noticed that the test_project_redis container had an IP address of 192.168.16.3.
As a final solution I tried to use the internal IP address of the redis container:
From the sample_project_api container I ran redis-cli -h 192.168.16.3 ping and it returned PONG, so it worked.
So my problem is that I cannot connect to the redis server from other containers using the IP address 127.0.0.1 or 0.0.0.0, but I can connect using 192.168.16.3, which changes every time I restart the docker container. What is the reason behind this?
Containers have namespaced networking. Each container has its own loopback interface and gets an IP per network you attach it to. Therefore loopback, or 127.0.0.1, in one container is that container itself, not the redis IP. To connect to redis, use the service name in your commands, which docker will resolve to the IP of the container running redis:
redis:6379
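For example, from inside the sample_project_api container (after installing redis-tools as in the question), connecting by name instead of 127.0.0.1 should work; the service name is registered as a network-scoped alias on common_network, and the explicit container_name is shown as a fallback:

redis-cli -h redis -p 6379 ping
redis-cli -h test_project_redis -p 6379 ping

The same applies in the Node.js code: use redis://redis:6379 (or redis://test_project_redis:6379) instead of 127.0.0.1.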