I have the following configuration in my docker-compose.yml file:
version: '3.3'
services:
service-1:
container_name: 'service-1'
build: './service-1'
depends_on:
- 'mongo'
- 'consul'
networks:
backend:
aliases:
- service-1
service-2:
build: './service-2'
ports:
- '8825:8825'
- '8835:8835'
networks:
frontend:
backend:
aliases:
- service-2
depends_on:
- 'mongo'
- 'consul'
consul:
image: 'consul:latest'
networks:
backend:
aliases:
- consul
mongo:
image: 'mongo:latest'
networks:
backend:
aliases:
- mongo
networks:
frontend:
backend:
internal: true
When my containers start, they are not able to communicate with each other using hostnames.
Most of the containers use the mongo DB container, but they cannot even reach it, and I get the following error:
Error connecting to mongo : no reachable servers
Please help me solve this problem; I'm stuck.
Thanks.
You've got a lot of unneeded settings in the compose file; here's a stripped-down version that would work just as well:
version: '3.3'
services:
service-1:
build: './service-1'
networks:
- backend
service-2:
build: './service-2'
ports:
- '8825:8825'
- '8835:8835'
networks:
- frontend
- backend
consul:
image: 'consul:latest'
networks:
- backend
mongo:
image: 'mongo:latest'
networks:
- backend
networks:
frontend:
backend:
internal: true
Each container automatically gets an alias matching its service name, so there's no need to duplicate that. You also lose the ability to scale a service if you give it a container name. I'd also recommend moving the build step out of the compose file and using an image name for the apps you're building locally, as sketched below.
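For example (the registry path and tag are illustrative), the compose entry would then pull a prebuilt image instead of building it:
service-1:
  image: registry.example.com/service-1:1.0  # built and pushed separately, e.g. by CI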
Now for the likely issue: you have a depends_on in your compose file. At best, this will not do what you're looking for. All it checks is that the other container has been created and started, not that the application inside is ready to serve traffic, and a DB may take time to become available. At worst, you'll get an error that it's unsupported if you try to move this into swarm mode.
Instead of depending on docker for this, update your application entrypoint to check for the external dependencies and wait a minute or two for them to become available before failing. A very simple example tool for this is wait-for-it, which is written as a bash shell script.
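As a minimal sketch of the same idea (the service name mongo and port 27017 are assumptions based on your compose file; adjust for your stack):
#!/bin/sh
# entrypoint.sh: block until mongo accepts TCP connections, then start the app
i=0
until nc -z mongo 27017; do
  i=$((i+1))
  if [ "$i" -ge 60 ]; then
    echo "mongo still unreachable after 2 minutes, giving up" >&2
    exit 1
  fi
  echo "waiting for mongo..."
  sleep 2
done
exec "$@"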
Related
I'm trying to do a basic configuration with Portainer, but I can't get my volumes to connect properly. It's a relatively simple configuration, as you can see, but I get an error when I try this:
version: '3'
networks:
frontend:
backend:
services:
app:
image: webdevops/apache:alpine
container_name: app
volumes:
- "/my/host/absolute/path/data: /var/www"
networks:
- frontend
php:
image: php:fpm-alpine
container_name: php
networks:
- backend
db:
image: mariadb
container_name: db
volumes:
- "/my/host/absolute/path/storage: /var/lib/mysql"
networks:
- backend
Could you give me a hand making this configuration work? It would give me a good little starting point for learning to set up the rest correctly.
All I can find in the documentation is how to mount named volumes, but I don't see how to map them to a folder on my local computer, so I'm not really getting anywhere with this information...
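For reference, the bind-mount syntax in compose is host-path:container-path with no space after the colon; a minimal sketch reusing the placeholder paths from the question:
services:
  app:
    image: webdevops/apache:alpine
    volumes:
      - "/my/host/absolute/path/data:/var/www"  # host path on the left, container path on the right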
I already have a docker-compose.yml file like this:
version: "3.1"
services:
memcached:
image: memcached:alpine
container_name: dl-memcached
redis:
image: redis:alpine
container_name: dl-redis
mysql:
image: mysql:5.7.21
container_name: dl-mysql
restart: unless-stopped
working_dir: /application
environment:
- MYSQL_DATABASE=dldl
- MYSQL_USER=docker
- MYSQL_PASSWORD=docker
- MYSQL_ROOT_PASSWORD=docker
volumes:
- ./../:/application
ports:
- "8007:3306"
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: dl-phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=dl-mysql
- PMA_PORT=3306
- MYSQL_USER=docker
- MYSQL_PASSWORD=docker
- MYSQL_ROOT_PASSWORD=docker
restart: always
ports:
- 8002:80
volumes:
- /application
links:
- mysql
elasticsearch:
build: phpdocker/elasticsearch
container_name: dl-es
volumes:
- ./phpdocker/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
ports:
- "8003:9200"
webserver:
image: nginx:alpine
container_name: dl-webserver
working_dir: /application
volumes:
- ./../:/application:delegated
- ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
- ./logs:/var/log/nginx:delegated
ports:
- "9003:80"
php-fpm:
build: phpdocker/php-fpm
container_name: dl-php-fpm
working_dir: /application
volumes:
- ./../:/application:delegated
- ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
- ./../docker/php-fpm/certs/store_stock/:/usr/local/share/ca-certificates/
- ./logs:/var/log:delegated # nginx logs
- /application/var/cache
environment:
XDEBUG_CONFIG: remote_host=host.docker.internal
PHP_IDE_CONFIG: "serverName=dl"
node:
build:
dockerfile: dl/phpdocker/node/Dockerfile
context: ./../
container_name: dl-node
working_dir: /application
ports:
- "8008:3000"
volumes:
- ./../:/application:cached
tty: true
My goal is to have two isolated environments working at the same time on the same server with the same docker-compose file. I wonder if that's possible?
I want to be able to stop and update one environment while the other one is still running and receiving traffic.
Maybe I need another approach in my case?
There are a couple of problems with what you're trying to do. If your goal is to put things behind a load balancer, then rather than trying to start multiple instances of your project, a better solution would be to use the scaling features available in docker-compose. In particular, even if some services go behind a load balancer, you probably don't want multiple instances of things like your database.
If you combine this with a dynamic front-end proxy like Traefik, you can make the configuration largely automatic.
Consider a very simple example consisting of a backend container running a simple webserver and a traefik frontend:
---
version: "3"
services:
webserver:
build:
context: web
labels:
traefik.enable: true
traefik.port: 80
traefik.frontend.rule: "PathPrefix:/"
frontend:
image: traefik
command:
- --api
- --docker
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
ports:
- "80:80"
- "127.0.0.1:8080:8080"
If I start it like this, I get a single backend and a single frontend:
docker-compose up
But I can also ask docker-compose to scale out the backend:
docker-compose up --scale webserver=3
In this case, I get a single frontend and three backend servers. Traefik will automatically discover the backends and will round-robin connections between them. You can download this example and try it out.
Caveats
There are a few aspects of your configuration that would need to change in order to make this work (and in fact, you would need to change them even if you were to create multiple instances of your project as you have proposed in your question).
Conflicting paths
Take for example the configuration of your webserver container:
volumes:
- ./logs:/var/log/nginx:delegated
If you start two instances of this service, both containers will mount ./logs on /var/log/nginx. If they both attempt to write to /var/log/nginx/access.log, you're going to have problems.
The easiest solution here is to avoid bind mounts for things like log directories (and any other directories to which you will be writing), and instead use named docker volumes.
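For example, the webserver's log mount could become a named volume (the name nginx-logs is illustrative); compose prefixes named volumes with the project name, so separate deployments won't collide:
services:
  webserver:
    image: nginx:alpine
    volumes:
      - nginx-logs:/var/log/nginx  # named volume instead of a host bind mount

volumes:
  nginx-logs: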
Hardcoding container names
In some places, you are hardcoding the container name, like this:
mysql:
image: mysql:5.7.21
container_name: dl-mysql
This will cause problems if you attempt to start multiple instances of this project or multiple instances of the mysql container. Don't statically set the container name.
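The fix is simply to drop the attribute and let compose generate unique names:
mysql:
  image: mysql:5.7.21
  # no container_name: compose names the container itself (e.g. <project>_mysql_1),
  # so multiple project instances or scaled replicas won't collide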
Deprecated links syntax
Your configuration is using the deprecated links syntax:
links:
- mysql
Don't do that. In modern docker, containers on the same network can simply refer to each other by name. In other words, if your compose configuration has:
mysql:
image: mysql:5.7.21
restart: unless-stopped
working_dir: /application
environment:
- MYSQL_DATABASE=dldl
- MYSQL_USER=docker
- MYSQL_PASSWORD=docker
- MYSQL_ROOT_PASSWORD=docker
volumes:
- ./../:/application
ports:
- "8007:3306"
Other containers in your compose stack can simply use the hostname mysql to refer to this service.
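For example, a hypothetical app service in the same stack could reference it by that hostname:
app:
  image: my-app:latest   # illustrative image
  environment:
    - DB_HOST=mysql      # the service name resolves to the mysql container's address
    - DB_PORT=3306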
You won't be able to run the same compose file twice on one host without changing the port mappings, because that would cause a port conflict. I'd recommend creating a base compose file and using extends to override the port mappings for different environments, as sketched below.
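As a sketch of the same idea (file names and ports are illustrative; merging multiple -f files is an alternative to extends that works the same way here):
# docker-compose.yml (base): no host port published
version: '3'
services:
  webserver:
    image: nginx:alpine

# docker-compose.env2.yml: environment-specific port mapping
version: '3'
services:
  webserver:
    ports:
      - "8081:80"

# run a second instance under its own project name
docker-compose -f docker-compose.yml -f docker-compose.env2.yml -p env2 up -d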
Currently I have a rabbitmq message broker and multiple celery workers that need to be containerized. My problem is: how can I fire up containers using different docker-compose.yml files? My goal is to start the rabbitmq once and for all, and never touch it again.
Currently I have a docker-compose.yml for the rabbitmq:
version: '2'
services:
rabbit:
hostname: rabbit
image: rabbitmq:latest
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=mypass
ports:
- "5672:5672"
expose:
- "5672"
And another docker-compose.yml for celery workers:
version: '2'
services:
worker:
build:
context: .
dockerfile: dockerfile
volumes:
- .:/app
environment:
- CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672
links:
- rabbit
However, when I do docker-compose up for celery workers, I keep getting the following error:
[ERROR/MainProcess] consumer: Cannot connect to
amqp://admin:**@rabbit:5672//: failed to resolve broker hostname.
Can anyone take a look and see if there is anything wrong with my code? Thanks.
The hostname rabbit in your second docker-compose.yml file does not resolve because there is no service with that name in that file.
As stated in the comments, one solution is to put both the rabbit service and the worker service in the same docker-compose.yml file. In such a setup, all containers started for those services would join the same docker network, and those service names could be resolved to the IP addresses of their containers.
Since having a single docker-compose.yml file is not convenient in your case, you have to find another way to have the containers originating from different docker-compose.yml files join the same docker network.
To do so, you need to create a dedicated docker network for that purpose:
docker network create rabbitNetwork
Then, in each docker-compose.yml file, you need to refer to this network in the service definitions. First, the rabbitmq file:
version: '2'
services:
rabbit:
hostname: rabbit
image: rabbitmq:latest
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=mypass
# ports:
# - "5672:5672" # there is no need to publish ports on the docker host anymore
expose:
- "5672"
networks:
- rabbitNet
networks:
rabbitNet:
external:
name: rabbitNetwork
And the celery workers' file:
version: '2'
services:
worker:
build:
context: .
dockerfile: dockerfile
volumes:
- .:/app
environment:
- CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672
networks:
- rabbitNet
networks:
rabbitNet:
external:
name: rabbitNetwork
You can use any file name for the service definition.
docker-compose.yml is the default file name, but any other name can be passed using the -f argument:
docker-compose -f rabbit-compose.yml COMMAND
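Putting it all together (the file names are illustrative):
docker network create rabbitNetwork
docker-compose -f rabbit-compose.yml up -d
docker-compose -f worker-compose.yml up -d
docker network inspect rabbitNetwork  # both containers should be listed here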
I have a docker-compose setup with a container that runs Nginx. The site hosted is just a .test domain, like example.test.
Nginx also has a location block that proxies to example.test:8000, but it can't connect to that address because the backend is actually hosted by a different container on the same system (all bridged networks).
How can I let the containers communicate using the example.test domain?
Or, if I can't get them to communicate via example.test, how can I link them so they can use their docker-compose service names, such as api or frontend?
Docker compose:
version: '3'
services:
db:
image: postgres
ports:
- "5432:5432"
django:
build: ./api
command: ["./docker_up.sh"]
restart: always
volumes:
- ./api:/app/api
- api-static:/app/api/staticfiles
ports:
- "8000:8000"
depends_on:
- db
environment:
- MODE=DEV
volumes:
frontend-build:
api-static:
certificates:
2nd compose file (run together):
version: '3'
services:
django:
environment:
- MODE=PROD
#links:
# - hosting
hosting:
build: ./hosting
restart: always
network_mode: bridge
volumes:
- frontend-build:/var/www
ports:
- "80:80"
- "443:443"
environment:
- MODE=PROD
#links:
# - django
volumes:
frontend-build:
With these current settings I get an error when I run it:
ERROR: for 92b89f848637_opensrd_hosting_1 Cannot start service hosting: Cannot link to /opensrd_django_1, as it does not belong to the default network
Edit: Altered docker-compose.prod.yml:
networks:
app_net:
driver: bridge
ipam:
driver: default
config:
-
subnet: 172.16.238.0/24
services:
django:
environment:
- MODE=PROD
networks:
app_net:
ipv4_address: 172.16.238.10
But this gives me an error.
ERROR: The Compose file './docker-compose.prod.yml' is invalid because:
networks.app_net value Additional properties are not allowed ('config' was unexpected)
networks.app_net.ipam contains an invalid type, it should be an object
So I tried the options given by @trust512 and @DimaL, and those didn't work.
However, after deleting the networks and links from my compose files, and removing the existing default network and built containers, it worked, and I can now refer to containers by their service names db, django, and hosting.
The only other thing I changed was the compose file version, from 3 to 3.5.
These are the final files for anyone interested:
version: '3.5'
services:
db:
image: postgres
ports:
- "5432:5432"
django:
build: ./api
command: ["./docker_up.sh"]
restart: always
volumes:
- ./api:/app/api
- api-static:/app/api/staticfiles
ports:
- "8000:8000"
depends_on:
- db
environment:
- MODE=DEV
volumes:
frontend-build:
api-static:
docker-compose.prod.yml:
version: '3.5'
services:
django:
environment:
- MODE=PROD
hosting:
build: ./hosting
restart: always
volumes:
- frontend-build:/var/www
ports:
- "80:80"
- "443:443"
environment:
- MODE=PROD
volumes:
frontend-build:
You can use external_links (https://docs.docker.com/compose/compose-file/#external_links) or try to put all containers on the same virtual network.
As far as I understand, you just want them (django and nginx) to be linked across compose files?
Then a native solution would be to use external_links, as documented here.
Use it like this:
services:
[...]
hosting:
[...]
external_links:
- django_1:example
[...]
Here django_1 stands for the container name created by the compose file you provided, and example is the alias under which that container will be visible inside the hosting container.
Alternatively, you can just point the example.test domain to a specific address by editing your /etc/hosts (provided you work on Linux/Mac),
for example by adding a record like:
172.16.238.10 example.test
where the address above points to your django application (container).
The same can be achieved without altering your /etc/hosts by using compose's native extra_hosts option, documented here.
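A minimal sketch of extra_hosts (the IP assumes the static-address network from the listing below):
services:
  hosting:
    extra_hosts:
      - "example.test:172.16.238.10"  # makes example.test resolve inside the hosting container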
Additionally, if you stick with the /etc/hosts or extra_hosts solution and prefer a static IP address for your django/nginx containers, you can utilize another native compose feature that sets up a static IP for a chosen service, properly exampled here.
An adjusted listing from the linked documentation:
services:
[...]
django:
[...]
networks:
app_net:
ipv4_address: 172.16.238.10
networks:
app_net:
driver: bridge
ipam:
driver: default
config:
-
subnet: 172.16.238.0/24
I'm learning docker, and I see two terms that confuse me: links and networks. For example, here is a docker-compose file that defines two services, redis and web-app.
services:
redis:
container_name: redis
image: redis:latest
ports:
- "6379:6379"
networks:
- lognet
app:
container_name: web-app
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- ".:/webapp"
links:
- redis
networks:
- lognet
networks:
lognet:
driver: bridge
This docker-compose file defines a bridge network named lognet, and all services connect to that network. As I understand it, this lets the services see each other. So why does the app service still need to link to the redis service in the above case?
Thanks
Links have been replaced by networks. Docker describes them as a legacy feature that you should avoid using. You can safely remove the link, and the two containers will be able to refer to each other by their service name (or container_name).
With compose, links do have a side effect of creating an implied dependency. You should replace this with a more explicit depends_on section so that the app doesn't attempt to run without redis or before redis starts.
As an aside, I'm not a fan of hard-coding container_name unless you are certain that this is the only container that will exist with that name on the host and you need to refer to it from the docker cli by name. Without the container name, docker-compose will give it a less intuitive name, but it will also give it an alias of redis on the network, which is exactly what you need for container-to-container networking. So the end result with these suggestions is:
version: '2'
# do not forget the version line, this file syntax is invalid without it
services:
redis:
image: redis:latest
ports:
- "6379:6379"
networks:
- lognet
app:
container_name: web-app
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- ".:/webapp"
depends_on:
- redis
networks:
- lognet
networks:
lognet:
driver: bridge