I'm trying to set up an application environment with two different docker-compose.yml files. The first one creates its services in the default network elastic-apm-stack_default. To let the services from both docker-compose files reach each other, I declared that network as external in the second docker-compose file. Both files look like this:
# elastic-apm-stack/docker-compose.yml
services:
apm-server:
image: docker.elastic.co/apm/apm-server:6.2.4
build: ./apm_server
ports:
- 8200:8200
depends_on:
- elasticsearch
- kibana
...
# sockshop/docker-compose.yml
services:
front-end:
...
...
networks:
- elastic-apm-stack_default
networks:
elastic-apm-stack_default:
external: true
Now the front-end service in the second file needs to send data to the apm-server service in the first file. Therefore I used the URL http://apm-server:8200 in the source code of the front-end service, but I always get a connectionRefused error. If I define all services in a single docker-compose file it works, but I want to keep the docker-compose files separate.
Could anyone help me? :)
Docker's default bridge network uses 172.17.0.1 as the host gateway address.
So you may use the URL
http://172.17.0.1:8200
to reach your apm-server container via the port it publishes on the host.
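Note that the gateway address is not always 172.17.0.1; you can check the actual value on your host with:
docker network inspect bridge
and look for the Gateway entry under IPAM.Config in the output.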
Related
We have a development and a production system that use symfony 5 + nginx + MySQL services running in a Docker environment.
At the moment the nginx webserver runs in the same container as the symfony service because of the following issue:
In our development environment we are able to mount the symfony source code into the Docker container (via a docker-compose file).
In our production environment we need to deliver containers that contain all the source code, because we must not put our source code on the server. So there is no folder on the server from which we can mount our source code.
Unfortunately, nginx needs the source code as well to make its routing decisions, so we decided to put the symfony and nginx services together in one container.
Now we want to clean this up and get a better solution by running every service in its own container:
version: '3.5'
services:
php:
image: docker_sandbox
build: ../.
...
volumes:
- docker_sandbox_src:/var/www/docker_sandbox # <== VOLUME
networks:
- docker_sandbox_net
...
nginx:
image: nginx:1.19.0-alpine
...
volumes:
- ./nginx/server.conf:/etc/nginx/conf.d/default.conf:ro
- docker_sandbox_src:/var/www/docker_sandbox # <== VOLUME
...
networks:
- docker_sandbox_net
depends_on:
- php
mysql:
...
volumes:
docker_sandbox_src:
networks:
docker_sandbox_net:
driver: bridge
One possible solution is to use a named volume that connects the nginx service with the symfony service. The problem with that is that on an update of our symfony image the volume keeps the old content, so nothing is updated until we manually delete the volume.
Is there a better way to handle this issue? Maybe a volume that overwrites its content when a new image is deployed, or an nginx config that does not require the symfony source code in its own container.
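One workaround that is sometimes used for the stale-volume problem, sketched here under the assumption that the php image ships its source at /opt/src while the shared volume stays mounted at /var/www/docker_sandbox, is an entrypoint in the php image that refreshes the volume on every container start:
#!/bin/sh
# Hypothetical entrypoint for the php image: copy the source baked into the
# image (at /opt/src) over the shared volume, so nginx always sees the version
# that belongs to the currently deployed image.
rm -rf /var/www/docker_sandbox/*
cp -a /opt/src/. /var/www/docker_sandbox/
exec "$@"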
I'm new to Docker, and I composed a web service and the other services it uses in a docker-compose file. I wonder if it's possible for the web service to access the other services (e.g. the API service) via container_name,
like http://container_name:8080. container_name is specified in the docker-compose file, and currently the web service reaches the other services via http://localhost:port. I want to replace localhost with the container name; can Docker do this mapping via some configuration? I tried depends_on and links and neither of them works.
Part of my docker-compose.yml:
version: "3.7"
services:
mywebservice:
container_name: mywebservice
ports:
- "8080:80"
depends_on:
- myapiservice
myapiservice:
container_name: myapiservice
ports:
- "8081:80"
You can resolve the container name to the container IP via the hosts file, for example:
192.168.10.10 mywebservice
You can keep this file in your application source and have Docker copy it to /etc when building the image.
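Alternatively, Compose itself can add such an entry to the container's /etc/hosts via extra_hosts; a minimal sketch (the IP address is just a placeholder and would need to match the actual target container):
mywebservice:
  extra_hosts:
    - "myapiservice:192.168.10.10"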
Having the following architecture:
Microservice 1 + DB (microservice1/docker-compose.yml)
Microservice 2 + DB (microservice2/docker-compose.yml)
Proxy (proxy/docker-compose.yml)
Which of the following options would be the best to deploy in the production environment?
Docker Compose overriding. Have a docker-compose file for each microservice and another one for the proxy. When the production deployment is done, all the docker-compose files would be merged into a single configuration (with docker-compose -f microservice1/docker-compose.yml -f microservice2/docker-compose.yml -f proxy/docker-compose.yml up). In this way the proxy container, for example nginx, would have access to the microservices and could redirect to one or the other depending on the request.
Shared external network. Have a docker-compose file for each microservice and another one for the proxy. First, an external network would have to be created to link the proxy container with the microservices: docker network create nginx_network. Then, in each docker-compose file, this network would be referenced in the necessary containers so that the proxy has visibility of the microservices and can thus use them in its configuration. An example is in the following link: https://stackoverflow.com/a/48081535/6112286.
The first option is simple, but offers little flexibility when configuring many microservices or applications, since the docker-compose files of all applications would need to be merged to generate the final configuration. The second option uses networks, which are a fundamental pillar of Docker, and it does not require all the docker-compose files to be merged.
Of these two options, given the scenario of having several microservices and needing a single proxy to configure access, which would be the best? Why?
Thanks in advance.
There is a third approach, for example documented in https://www.bogotobogo.com/DevOps/Docker/Docker-Compose-Nginx-Reverse-Proxy-Multiple-Containers.php and https://github.com/Einsteinish/Docker-compose-Nginx-Reverse-Proxy-II/. The gist of it is to have the proxy join all the other networks. Thus, you can keep the other compose files, possibly from a software distribution, unmodified.
docker-compose.yml
version: '3'
services:
proxy:
build: ./
networks:
- microservice1
- microservice2
ports:
- 80:80
- 443:443
networks:
microservice1:
external:
name: microservice1_default
microservice2:
external:
name: microservice2_default
Proxy configuration
The proxy will refer to the hosts by their names microservice1_app_1 and microservice2_app_1, assuming the services are called app in directories microservice1 and microservice2.
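For illustration, the proxy's nginx configuration could then reference those names directly; a rough sketch (the listen ports of the app services are assumptions):
server {
    listen 80;

    # Requests under /service1/ go to the first microservice,
    # requests under /service2/ to the second one.
    location /service1/ {
        proxy_pass http://microservice1_app_1:8080/;
    }

    location /service2/ {
        proxy_pass http://microservice2_app_1:8080/;
    }
}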
docker-compose is designed to orchestrate multiple containers from a single file. I do not know the content of your docker-compose files, but the right way is to write one single docker-compose.yml that could contain:
version: '3.7'
services:
microservice1_app:
image: ...
volumes: ...
networks:
- service1_app
- service1_db
microservice1_db:
image: ...
volumes: ...
networks:
- service1_db
microservice2_app:
image: ...
volumes: ...
networks:
- service2_app
- service2_db
microservice2_db:
image: ...
volumes: ...
networks:
- service2_db
nginx:
image: ...
volumes: ...
networks:
- default
- service1_app
- service2_app
volumes:
...
networks:
service1_app:
service1_db:
service2_app:
service2_db:
default:
name: proxy_frontend
driver: bridge
In this way the nginx container is able to communicate with the microservice1_app container through the microservice1_app hostname. If other hostnames are needed, they can be configured with the aliases subsection within a service's networks section, as sketched below.
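For example, giving microservice1_app an extra hostname would look roughly like this (the alias name is arbitrary, and the networks section has to use the mapping form instead of the plain list):
  microservice1_app:
    networks:
      service1_app:
        aliases:
          - app1.internal   # nginx can now also reach this service as http://app1.internal
      service1_db: {}       # other networks stay attached as before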
Security Bonus
In the above configuration, microservice1_db is only visible to microservice1_app (same for microservice2), and nginx is only able to see microservice1_app and microservice2_app and is reachable from outside of Docker (bridge mode).
I am new to Docker and I am trying to dockerize an application I have written in Golang. It is a simple web server that interacts with RabbitMQ and MongoDB.
It takes the credentials from a TOML file and loads them into a config struct before starting the application server on port 3000. These are the credentials:
mongo_server = "localhost"
database = "collect_db"
rabbitmq_server = "amqp://guest:guest@localhost:5672/"
If it can't connect to these URLs it fails with an error. The following is my docker-compose.yml:
version: '3'
services:
rabbitmq:
image: rabbitmq
ports:
- 5672:5672
mongodb:
image: mongo
ports:
- 27017:27017
web:
build: .
image: palash2504/collect
container_name: collect_service
ports:
- 3000:3000
depends_on:
- rabbitmq
- mongodb
links: [rabbitmq, mongodb]
But it fails to connect to rabbitmq on the URL used for local development, i.e. amqp://guest:guest@localhost:5672/.
I realise that the rabbitmq container might be running at a different address than the one provided in the config file.
I would like to know the correct way to set any environment credentials so that my app can connect to rabbitmq.
Also, what approach would be best for changing my application code to initialize connections to external services? I was thinking about ditching the config.toml file and using os.Getenv and os.Setenv to get the URLs for the connections.
Localhost addresses are resolved, well, locally. They thus will not work inside containers, since they will look for a local address (i.e. inside the container).
Services can access each other by using service names as an address. So in the web container you can target mongodb for example.
You might give this a shot:
mongo_server = "mongodb"
database = "collect_db"
rabbitmq_server = "amqp://guest:guest@rabbitmq/"
It is advisable to set service target environment variables in the compose file itself:
#docker-compose.yml
#...other stuff...
web:
#...other stuff...
environment:
RABBITMQ_SERVER: rabbitmq
MONGO_SERVER: mongodb
depends_on:
- rabbitmq
- mongodb
This gives you a single place to make adjustments to the configuration.
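On the application side, a small helper for reading these variables with a fallback could look like this in Go (only a sketch: the variable names match the compose snippet above, the localhost defaults and port numbers are assumptions):
package main

import (
	"fmt"
	"os"
)

// getenv returns the value of the environment variable key,
// or fallback if the variable is not set.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func main() {
	mongoServer := getenv("MONGO_SERVER", "localhost")
	rabbitServer := getenv("RABBITMQ_SERVER", "localhost")

	mongoURL := fmt.Sprintf("mongodb://%s:27017", mongoServer)
	amqpURL := fmt.Sprintf("amqp://guest:guest@%s:5672/", rabbitServer)

	fmt.Println("connecting to", mongoURL, "and", amqpURL)
	// ... pass these URLs to the mongo and amqp client libraries ...
}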
As a side note, to me it seems that links: [rabbitmq, mongodb] can be removed. And I would advise not to alter the container name (remove container_name: collect_service unless it is necessary)
I'm working on getting two different services running inside a single docker-compose.yml to communicate with each other within docker-compose.
The two services are regular NodeJS servers (app1 & app2). app1 receives POST requests from an external source and should then send a request to the other NodeJS server, app2, with information based on the initial POST request.
The challenge I'm facing is how to make the two NodeJS containers communicate with each other without hardcoding a specific container name. The only way I can get the two containers to communicate currently is to hardcode a URL like http://myproject_app2_1, which will then direct the POST request from app1 to app2 correctly, but due to the way Docker increments container names, it doesn't scale very well nor does it handle container crashes etc.
Instead I'd prefer to send the POST request to something along the lines of http://app2, or a similar way to handle and alias a number of containers, so that no matter how many instances of the app2 container exist, Docker will pass the request to one of the running app2 containers.
Here's a sample of my docker-compose.yml file:
version: '2'
services:
app1:
image: 'mhart/alpine-node:6.3.0'
container_name: app1
command: npm start
app2:
image: 'mhart/alpine-node:6.3.0'
container_name: app2
command: npm start
# databases [...]
Thanks in advance.
OK, this is two questions.
First: how to avoid hardcoding container names.
You can use environment variables, like this:
Node.js file:
const http = require('http');
const app2Address = process.env.APP2_ADDRESS;
const response = http.request(app2Address);
docker compose file:
app1:
image: 'mhart/alpine-node:6.3.0'
container_name: app1
command: npm start
environment:
- APP2_ADDRESS=${app2_address}
app2:
image: 'mhart/alpine-node:6.3.0'
container_name: app2
command: npm start
environment:
- HOSTNAME=${app2_address}
and .env file like:
app2_address=myapp2.com
You can also put a wildcard placeholder in the application config file and substitute the real hostname when the container starts.
For this you need to create an entrypoint.sh and use sed, like:
sed -i "s/APP2_HOSTNAME_WILDCARD/${APP2_ADDRESS}/g" /app1/config.js
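Put together, entrypoint.sh could look roughly like this (the config path and the start command are assumptions):
#!/bin/sh
# Replace the placeholder hostname in the app config with the real address
# passed in via the APP2_ADDRESS environment variable, then start the app.
sed -i "s/APP2_HOSTNAME_WILDCARD/${APP2_ADDRESS}/g" /app1/config.js
exec npm start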
Second: how to do transparent load balancing.
You need to use an HTTP load balancer, like:
haproxy
nginx as load balancer
There is a hello-world tutorial on how to do load balancing with Docker.
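As a rough illustration, an nginx load balancer in front of two app2 replicas could be configured like this (the container names and port 3000 are assumptions based on the question):
upstream app2_backend {
    # Spread requests across the app2 replicas
    server myproject_app2_1:3000;
    server myproject_app2_2:3000;
}

server {
    listen 80;

    location / {
        proxy_pass http://app2_backend;
    }
}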
When you run two containers from one compose file, Docker automatically sets up an internal DNS that allows containers to reference each other by the service name defined in the compose file (assuming they are on the same network). So referencing http://app2 from the first service should just work.
See this example proxying requests from proxy to the backend whoamiapp by just using the service name.
default.conf
server {
listen 80;
location / {
proxy_pass http://whoamiapp;
}
}
docker-compose.yml
version: "2"
services:
proxy:
image: nginx
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf:ro
ports:
- "80:80"
whoamiapp:
image: emilevauge/whoami
Run it using docker-compose up -d and try running curl <dockerhost>.
This sample uses the default network with docker-compose file version 2. You can read more about how networking with docker-compose works here: https://docs.docker.com/compose/networking/
Perhaps your configuration of the container_name property somehow interferes with this behaviour? You should not need to define it yourself.
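To double-check that the built-in DNS is doing its job, you can resolve the service name from inside the proxy container, for example (assuming the compose project above and that getent is available in the image):
docker-compose exec proxy getent hosts whoamiapp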