How do I use external_links to connect docker-compose to a common service?

I am currently using docker-compose to set up a series of microservices, which I want to link to a common error-logging service created outside the compose project:
docker run -d --name errorHandler
Then I run the compose (summarized):
version: '2'
services:
  my-service:
    build: ../my-service
    external_links:
      - errorHandler
I am using the hostname alias ('errorHandler') within my application but can't seem to get them connected. How do I check whether the service is even discoverable within the compose network?

Rather than links, use a shared docker network. Place the "errorHandler" container on a network in Docker, using something like docker network create errorNet and docker network connect errorNet errorHandler. Then define that network in your compose file for "my-service" with:
version: '2'
networks:
  errorNet:
    external: true
services:
  my-service:
    build: ../my-service
    networks:
      - errorNet
This uses docker's internal DNS to connect containers together.
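To answer the "how do I check" part: you can verify discovery with a couple of quick commands (a sketch; whether ping or nslookup is available depends on what's installed in your image):

# list every container attached to the shared network
docker network inspect errorNet

# resolve the alias from inside the composed service's container
docker-compose exec my-service ping -c 1 errorHandler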

Related

AWS ECS: how to launch containers in a private bridge network

How do I get docker automatic service discovery working in an AWS ECS EC2 based cluster service?
I have this corresponding docker-compose.yml (which I'm mapping over to an ECS-compatible task-definition.json file):
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.2
    environment:
      - discovery.type=single-node
  mongo:
    image: mongo:4.0.12
  redis:
    image: redis:5.0.3
  api:
    image: api
    build: .
    command: api.py
    depends_on:
      - elasticsearch
      - mongo
      - redis
    ports:
      - 5000:5000
If I launch this with docker-compose up, docker-compose will create a new private bridge network. Within this private network, "automatic service discovery" is on, and service names resolve to service IP addresses. So, for example, the api can find mongo without knowing its IP by doing a DNS lookup for "mongo". The network is also isolated from other, unrelated containers. You can do this manually via docker too, like this:
docker network create api-net
docker run -d --name elasticsearch --net api-net docker.elastic.co/elasticsearch/elasticsearch:6.2.2
docker run -d --name mongo --net api-net mongo:4.0.12
...
But I can't figure out how I can achieve the same via an AWS ECS multi-container service defined with a task-definition.json file. If I define multiple services with "bridge" networking, all containers are launched into the default bridge network, and automatic service discovery does not work. I can manually log into the ECS EC2 container instance and set up a private network, but that's obviously not a workable solution.
You need to use:
"links": ["name:internalName", ...]
See the ECS task definition parameters documentation under Network Settings. Please also note this:
Important
Containers that are collocated on a single container instance may be able to communicate with each other without requiring links or host port mappings. Network isolation is achieved on the container instance using security groups and VPC settings.
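For the compose file above, a hypothetical task-definition.json fragment using links might look like this (only the networking-relevant fields are shown; the memory values are placeholders):

{
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "mongo",
      "image": "mongo:4.0.12",
      "memory": 512,
      "essential": true
    },
    {
      "name": "api",
      "image": "api",
      "memory": 512,
      "essential": true,
      "links": ["mongo:mongo"],
      "portMappings": [{ "containerPort": 5000, "hostPort": 5000 }]
    }
  ]
}

With this, the api container can reach mongo by the hostname "mongo", mirroring what compose's DNS gave you locally.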

Access a container created by docker-compose on the same network

As shown in the picture, there are 4 containers. spider is created by:
docker run xxx --net=dockernet
And app_xx-server_1/2/3 are created by docker-compose up, run from a directory named app, with this docker-compose file:
version: '3'
services:
  xx-server:
    image: xx-server
networks:
  default:
    external:
      name: dockernet
And when I start spider and app_xx, I specify the same docker network explicitly (check the IP addresses in the picture).
Now I want to access the app_xx-server_1/2/3 from spider by http like this:
http://app_xx-server
It does not work.
How to fix that?
You need to explicitly define that the container (xx-server) has the network too:
xx-server:
  networks:
    - dockernet
Otherwise it won't be reachable from the other container.
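Putting the pieces together, a compose file along these lines should work (a sketch, assuming the dockernet network already exists, since spider was started with --net=dockernet):

version: '3'
services:
  xx-server:
    image: xx-server
    networks:
      - dockernet
networks:
  dockernet:
    external: true

spider can then reach the replicas through the compose service name, e.g. http://xx-server.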

How to get two docker containers, one running a Flask service and one a Golang service, to talk to each other?

I have a Flask service running through docker-compose on port 5000. Similarly, I have a Go service running through another docker-compose on port 8000. The Go service needs to call a Flask API running on 5000, and I am having trouble getting it to do so. I have tried adding a docker network but failed. What are the pros and cons of running both services through different docker-compose files as compared to a single docker-compose file? (I have not been able to successfully run them in a single docker-compose, btw.) docker ps shows both containers running.
Flask Docker compose
version: '3' # version of compose format
services:
  bidders:
    build:
      dockerfile: Dockerfile
      context: .
    volumes:
      - .:/usr/src/bidders # mount point
    ports:
      - 5000:5000 # host:container
Go Docker Compose
version: '3'
services:
  auctions:
    container_name: auctions
    build: .
    command: go run main.go
    volumes:
      - .:/go/src/auctions
    working_dir: /go/src/auctions
    ports:
      - "8000:8000"
Third network docker-compose.yml
#docker-compose.yml
version: '3'
networks:
  - second_network
networks:
  second_network:
    driver: bridge
With a single docker-compose.yml it will be easier to have both services inside the same network. So what was the issue you hit while doing this? Also make sure that your Flask and Go applications both bind to 0.0.0.0 in the code itself, and not 127.0.0.1, so they can be reached from outside their containers.
With two docker-compose.yml files you have two options:
Create a network through one of the files and have the container defined in the other file join it as an external network.
Create a network using docker network create and define an external network in both files for your containers.
There is a similar question, with an example included, whose answer you can check here.
You can check Networking in Compose for more information.
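For instance, with the two compose files above, the second option could look like this (a sketch; the network name is arbitrary). First create the shared network:

docker network create service_network

Then attach the service and declare the network in both files:

services:
  bidders:          # "auctions" in the Go file
    # ... existing settings ...
    networks:
      - service_network
networks:
  service_network:
    external: true

The Go service can then call the Flask API at http://bidders:5000 instead of localhost.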

Service accessing another service on 127.0.0.1?

I'd like my web Docker container to access Redis on 127.0.0.1:6379 from within the web container. I've set up my Docker Compose file as follows, but I get ECONNREFUSED:
version: "3"
services:
  web:
    build: .
    ports:
      - 8080:8080
    command: ["test"]
    links:
      - redis:127.0.0.1
  redis:
    image: redis:alpine
    ports:
      - 6379
Any ideas?
The short answer to this is "don't". Docker containers each get their own loopback interface, 127.0.0.1, that is separate from the host loopback and from that of other containers. You can't redefine 127.0.0.1, and if you could, that would almost certainly break other things.
There are technically possible ways to do it. One is to run all containers directly on the host, with:
network_mode: "host"
However, that removes the docker network isolation that you'll want with containers.
You can also attach one container to the network of another container (so they have the same loopback interface) with:
docker run --net container:$container_id ...
but I'm not sure if there's a syntax to do this in docker-compose, and it's not available in swarm mode since containers may run on different nodes. The main use I've had for this syntax is attaching network debugging tools like nicolaka/netshoot.
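For example, this is how you'd attach a throwaway debugging container to the web container's network stack (the container name "web" is an assumption; inside it, 127.0.0.1 is the same loopback the web container sees):

docker run --rm -it --net container:web nicolaka/netshoot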
What you should do instead is make the location of the redis database a configuration parameter to your webapp container. Pass the location in as an environment variable, config file, or command line parameter. If the web app can't support this directly, update the configuration with an entrypoint script that runs before you start your web app. This would change your compose yml file to look like:
version: "3"
services:
  web:
    # you should include an image name
    image: your_webapp_image_name
    build: .
    ports:
      - 8080:8080
    command: ["test"]
    environment:
      - REDIS_URL=redis:6379
    # no need to link, it's deprecated; use DNS and the network docker creates
    #links:
    #  - redis:127.0.0.1
  redis:
    image: redis:alpine
    # no need to publish the port if you don't need external access
    #ports:
    #  - 6379
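If the web app can only read its settings from a static file, the entrypoint-script approach could look roughly like this (purely illustrative; the config path and placeholder token are assumptions):

#!/bin/sh
# entrypoint.sh: render the Redis location into the app's config before starting.
# REDIS_URL comes from the compose "environment" section above.
sed -i "s|__REDIS_URL__|${REDIS_URL}|g" /app/config.json
# hand off to the container's command (e.g. ["test"] above)
exec "$@"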

Connect two instances of docker-compose [duplicate]

This question already has answers here: Communication between multiple docker-compose projects.
I have a dockerized application with a few services running using docker-compose. I'd like to connect this application with ElasticSearch/Logstash/Kibana (ELK) using another docker-compose application, docker-elk. Both of them are running in the same docker machine in development. In production, that will probably not be the case.
How can I configure my application's docker-compose.yml to link to the ELK stack?
Update Jun 2016
The answer below is outdated starting with docker 1.10. See this other similar answer for the new solution.
https://stackoverflow.com/a/34476794/1556338
Old answer
Create a network:
$ docker network create --driver bridge my-net
Reference that network as an environment variable (${NETWORK}) in the docker-compose.yml files. E.g.:
pg:
  image: postgres:9.4.4
  container_name: pg
  net: ${NETWORK}
  ports:
    - "5432"
myapp:
  image: quay.io/myco/myapp
  container_name: myapp
  environment:
    DATABASE_URL: "http://pg:5432"
  net: ${NETWORK}
  ports:
    - "3000:3000"
Note that pg in http://pg:5432 will resolve to the IP address of the pg service (container). No need to hardcode IP addresses; an entry for pg is automatically added to the /etc/hosts file of the myapp container.
Call docker-compose, passing it the network you created:
$ NETWORK=my-net docker-compose up -d -f docker-compose.yml -f other-compose.yml
I've created a bridge network above which only works within one node (host). Good for dev. If you need to get two nodes to talk to each other, you need to create an overlay network. Same principle though. You pass the network name to the docker-compose up command.
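For the multi-node case, only the creation step changes, to the overlay driver (a sketch; in this era of docker, overlay networks also required swarm or an external key-value store):

$ docker network create --driver overlay my-net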
You could also create a network with docker outside your docker-compose:
docker network create my-shared-network
And in your docker-compose.yml:
version: '2'
services:
  pg:
    image: postgres:9.4.4
    container_name: pg
    expose:
      - "5432"
networks:
  default:
    external:
      name: my-shared-network
And in your second docker-compose.yml:
version: '2'
services:
  myapp:
    image: quay.io/myco/myapp
    container_name: myapp
    environment:
      DATABASE_URL: "http://pg:5432"
    expose:
      - "3000"
networks:
  default:
    external:
      name: my-shared-network
And both instances will be able to see each other without opening ports on the host; you just need to expose the ports, and the containers will see each other through the network "my-shared-network".
If you set a predictable project name for the first composition, you can use external_links to reference external containers by name from a different compose file.
In the next docker-compose release (1.6) you will be able to use user defined networks, and have both compositions join the same network.
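As a sketch of the external_links approach: if the first project was started with docker-compose -p myproject up, its pg container gets a predictable name like myproject_pg_1, which the second compose file can then reference (the project name here is an assumption):

myapp:
  image: quay.io/myco/myapp
  external_links:
    - myproject_pg_1:pg
  environment:
    DATABASE_URL: "http://pg:5432"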
Take a look at multi-host docker networking
Networking is a feature of Docker Engine that allows you to create virtual networks and attach containers to them so you can create the network topology that is right for your application. The networked containers can even span multiple hosts, so you don’t have to worry about what host your container lands on. They seamlessly communicate with each other wherever they are – thus enabling true distributed applications.
I didn't find any complete answer, so I decided to explain it in a complete and simple way.
To connect two docker-compose projects, you need a network, and both projects must be put on that network. You can create the network with docker network create name-of-network, or you can simply put a network declaration in the networks option of the docker-compose file; when you run docker-compose up, the network will be created automatically.
Put the lines below in both docker-compose files:
networks:
  net-for-alpine:
    name: test-db-net
Note: net-for-alpine is the internal name of the network; it is used inside the docker-compose file and can differ between the two files. test-db-net is the external name of the network and must be the same in both docker-compose files.
Assume we have docker-compose.db.yml and docker-compose.alpine.yml
docker-compose.alpine.yml would be:
version: '3.8'
services:
  alpine:
    image: alpine:3.14
    container_name: alpine
    networks:
      - net-for-alpine
    # these two options keep the alpine container running
    stdin_open: true # docker run -i
    tty: true        # docker run -t
networks:
  net-for-alpine:
    name: test-db-net
docker-compose.db.yml would be:
version: '3.8'
services:
  db:
    image: postgres:13.4-alpine
    container_name: psql
    networks:
      - net-for-db
networks:
  net-for-db:
    name: test-db-net
To test the network, go inside the alpine container:
docker exec -it alpine sh
Then you can check the network with the following commands:
# if it exits with status 0 and you see no output, the network is established
nc -z psql 5432   # psql is the container name
or
ping psql
