Create multiple containers of the same service with Docker Compose - docker

I am just getting started with Docker Compose, and I want to know how to create multiple containers of the same service, 'Redis'. I have already tried docker-compose up --scale redis=3, but it gives an error. I have searched Google for a solution but could not find one.
Thank you in advance.
Here is my docker-compose.yml
version: '3'
services:
  redis:
    container_name: redis
    hostname: redis
    image: redis
  redis-commander:
    container_name: redis-dbms
    hostname: redis-commander
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"
And here is the error it gives me.
docker-compose up --scale redis=3
Creating network "ex2_default" with the default driver
WARNING: The "redis" service is using the custom container name "redis". Docker requires each container to have a unique name. Remove the custom name to scale the service.
Creating redis-dbms ...
Creating redis ... done
Creating redis ...
Creating redis ...
Creating redis-dbms ... done
ERROR: for redis Cannot create container for service redis: Conflict. The container name "/redis" is already in use by container "d4c93ae4ca68da0b6430e5eddc657d9dda0f40002c7a81c89368535df05eae24". You have to remove (or rename) that container to be able to reuse that name.
ERROR: for redis Cannot create container for service redis: Conflict. The container name "/redis" is already in use by container "d4c93ae4ca68da0b6430e5eddc657d9dda0f40002c7a81c89368535df05eae24". You have to remove (or rename) that container to be able to reuse that name.

The error is raised for the reason documented here:
Link: https://docs.docker.com/compose/compose-file/compose-file-v3/#container_name
Because Docker container names must be unique, you cannot scale a service beyond 1 container if you have specified a custom name. Attempting to do so results in an error.
You can read more here: https://github.com/docker/compose/issues/3722
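A sketch of the corrected file from this question, with container_name: (and hostname:, which also pins a single identity) removed from the service being scaled:

```yaml
version: '3'
services:
  redis:
    image: redis
  redis-commander:
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"
```

With that change, docker-compose up --scale redis=3 can create three replicas under generated names such as ex2_redis_1 through ex2_redis_3.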

Related

Docker-compose expected container is up to date

I created a docker-compose YAML file to run a Docker image after it has been pulled locally. The file also contains other services (MySQL and phpMyAdmin), so when I run docker-compose up -d I get a conflict creating a container, since the name is already in use by another container. I expected it to tell me that the container is already running and up to date. I know that a container must be removed or renamed before a new one with the same name can be created, but what I am aiming for is to get the newer version of the image, check whether the MySQL and phpMyAdmin services are up, report them as up to date if they are, and otherwise create them, as in the other environment below.
docker-compose.yml
version: '3.3'
services:
  app-prod:
    image: prod:1.0
    container_name: app-prod
    ports:
      - "81:80"
    links:
      - db-prod
    depends_on:
      - db-prod
      - phpmyadmin-prod
  db-prod:
    image: mysql:8
    container_name: db-prod
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=laravel
      - MYSQL_USER=user
      - MYSQL_PASSWORD=secret
    volumes:
      - db-prod:/var/lib/mysql
  phpmyadmin-prod:
    image: phpmyadmin/phpmyadmin:5.0.1
    restart: always
    environment:
      PMA_HOST: db-prod
    container_name: phpmyadmin-prod
    ports:
      - "5001:80"
volumes:
  db-prod:
Error
Creating phpmyadmin-prod ... error
Creating db-prod ... error
ERROR: for phpmyadmin-prod Cannot create container for service phpmyadmin: Conflict. The container name "/phpmyadmin-prod" is already in use by container "5a52b27b64f7302bccb1c3a0eaeca8a33b3dfee5f1a279f6a809695f482500a9". You have to remove (or rename) that container to be able to reuse that name.
ERROR: for db-prod Cannot create container for service db: Conflict. The container name "/db-prod" is already in use by container "5d01c21efafa757008d1b4dbcc8d09b4341ac1457a0ca526ee57873cd028cf2b". You have to remove (or rename) that container to be able to reuse that name.
ERROR: for phpmyadmin Cannot create container for service phpmyadmin: Conflict. The container name "/phpmyadmin-prod" is already in use by container "5a52b27b64f7302bccb1c3a0eaeca8a33b3dfee5f1a279f6a809695f482500a9". You have to remove (or rename) that container to be able to reuse that name.
ERROR: for db Cannot create container for service db: Conflict. The container name "/db-prod" is already in use by container "5d01c21efafa757008d1b4dbcc8d09b4341ac1457a0ca526ee57873cd028cf2b". You have to remove (or rename) that container to be able to reuse that name.
ERROR: Encountered errors while bringing up the project.
Error: No such container: app-prod
Error: No such container: app-prod
While using another docker-compose file for the test environment, I got the output I expected:
db-stage is up-to-date
phpmyadmin-stage is up-to-date
Creating app-stage ... done
The docker-compose run command will create new containers. But in your case, two of them are already running, so you can instead use
docker-compose up -d
That specific error is easy to fix. You're trying to manually specify container_name: for every container, but the error message says you have existing containers with those same names already. Left to its own, Compose will assign non-conflicting names, and you can almost always just delete container_name: from the Compose file.
version: '3.8'
services:
  app:
    image: prod:1.0
    ports:
      - "81:80"
    depends_on: [db, phpmyadmin]
    # no container_name: or links:
  db: { ... }
  phpmyadmin: { ... }
The other obvious point of conflict for running the same Compose file in multiple places is ports:; only one container or host process can listen on a given host (first) port number. If you're trying to run the same Compose file multiple times on the same system, you might need some way to replace the port numbers. This could be a place where using multiple Compose files fits in well: define a base docker-compose.yml that declares the services but not any of the ports, and an override file that assigns specific host ports.
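As a sketch of that split (the file names and port value are illustrative, reusing the app service from this question):

```yaml
# docker-compose.yml -- base file: services only, no host ports
version: '3.8'
services:
  app:
    image: prod:1.0

# docker-compose.override.yml -- one per deployment: host ports only
# (Compose merges docker-compose.override.yml automatically; other
# files can be stacked explicitly with -f.)
# services:
#   app:
#     ports:
#       - "81:80"
```

A second deployment on the same host would then carry its own override file with a different host port, brought up with something like docker-compose -f docker-compose.yml -f docker-compose.second.yml up -d.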
As I have several docker-compose files, I ran docker-compose -f <compose_file_path> -p <project_name> up -d and then tried to run docker-compose up -d in the same location without specifying the project name with -p <project_name>. That brought the compose file up under a different project name but with the same container names, which is what caused the conflict.

Two docker services depends_on the same service

I am having two docker-compose files
/cfacing/docker-compose.yml
app-customer-facing:
  build: .
  depends_on:
    - mysql-db
mysql-db:
  container_name: staging-mysql-db
  image: mysql:5.6
/afacing/docker-compose.yml
app-admin-facing:
  build: .
  depends_on:
    - mysql-db
mysql-db:
  container_name: staging-mysql-db
  image: mysql:5.6
I want both the customer-facing container and the admin-facing container to depend on the same mysql-db container. This currently does not work: app-customer-facing starts along with mysql-db, but app-admin-facing fails to start, throwing:
ERROR: for mysql-db Cannot create container for service mysql-db: Conflict. The container name "/staging-mysql-db" is already in use by container "fe63e1ab0c1fd19236551bfc5930544cb31e649a4c18421c05959dc1274eb600". You have to remove (or rename) that container to be able to reuse that name.
The error is that you are duplicating your mysql-db service, not reusing it. That is why you get an error telling you there is already a container named staging-mysql-db.
To resolve your use case, you will have to extend your docker-compose file.
You can see an example here. See the block ADMINISTRATIVE TASKS, which is basically what you are trying to do.
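One way to share a single database between the two projects, sketched with an assumed network name: create a network once with docker network create shared-net, define mysql-db in only one of the two files, and have the other project join the same network as external.

```yaml
# /cfacing/docker-compose.yml -- owns the database
services:
  app-customer-facing:
    build: .
    depends_on:
      - mysql-db
    networks:
      - shared-net
  mysql-db:
    image: mysql:5.6
    networks:
      - shared-net
networks:
  shared-net:
    external: true

# /afacing/docker-compose.yml -- reuses it over the shared network
# services:
#   app-admin-facing:
#     build: .
#     networks:
#       - shared-net
# networks:
#   shared-net:
#     external: true
```

app-admin-facing then reaches the database by the DNS name mysql-db on shared-net, and only one project ever creates that container.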

Service access another service on 127.0.0.1?

I'd like my web Docker container to access Redis on 127.0.0.1:6379 from within the web container. I've set up my Docker Compose file as follows, but I get ECONNREFUSED:
version: "3"
services:
  web:
    build: .
    ports:
      - 8080:8080
    command: ["test"]
    links:
      - redis:127.0.0.1
  redis:
    image: redis:alpine
    ports:
      - 6379
Any ideas?
The short answer to this is "don't". Docker containers each get their own loopback interface, 127.0.0.1, that is separate from the host loopback and from that of other containers. You can't redefine 127.0.0.1, and if you could, that would almost certainly break other things.
There is a technically possible way to do it by running all containers directly on the host network with:
network_mode: "host"
However, that removes the Docker network isolation that you will want with containers.
You can also attach one container to the network of another container (so they have the same loopback interface) with:
docker run --net container:$container_id ...
but I'm not sure if there is a syntax for this in docker-compose, and it is not available in swarm mode since containers may run on different nodes. The main use I have had for this syntax is to attach network debugging tools like nicolaka/netshoot.
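For what it's worth, the Compose file format does have a per-service equivalent, network_mode: "service:<name>" (still unavailable in swarm mode). A sketch using the web and redis services from this question:

```yaml
version: "3"
services:
  web:
    build: .
    # web shares redis's network namespace, so 127.0.0.1:6379 inside
    # web really is the Redis container
    network_mode: "service:redis"
  redis:
    image: redis:alpine
    # with a shared namespace, any published ports for the pair must
    # be declared on the service that owns the network (redis here)
    ports:
      - "8080:8080"
```

Note that web can no longer have its own ports: or networks: entries in this mode, which is part of why the environment-variable approach below is usually preferable.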
What you should do instead is make the location of the redis database a configuration parameter to your webapp container. Pass the location in as an environment variable, config file, or command line parameter. If the web app can't support this directly, update the configuration with an entrypoint script that runs before you start your web app. This would change your compose yml file to look like:
version: "3"
services:
  web:
    # you should include an image name
    image: your_webapp_image_name
    build: .
    ports:
      - 8080:8080
    command: ["test"]
    environment:
      - REDIS_URL=redis:6379
    # no need to link, it's deprecated; use DNS and the network docker creates
    #links:
    #  - redis:127.0.0.1
  redis:
    image: redis:alpine
    # no need to publish the port if you don't need external access
    #ports:
    #  - 6379
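If the web app cannot read the variable directly, a small entrypoint wrapper can translate it before startup. A minimal sketch; REDIS_URL is the assumed variable name from the file above, and the final exec line is what would hand off to the real app:

```shell
#!/bin/sh
# Read the Redis location from the environment, falling back to the
# Compose service name and default port if unset.
REDIS_URL="${REDIS_URL:-redis:6379}"
REDIS_HOST="${REDIS_URL%%:*}"   # text before the first colon
REDIS_PORT="${REDIS_URL##*:}"   # text after the last colon
echo "using redis at host=$REDIS_HOST port=$REDIS_PORT"
# exec "$@"   # then start the real web app
```

The wrapper could also rewrite a config file in place before the exec, if the app only reads its settings from disk.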

rationale behind docker compose "links" order

I have a Redis - Elasticsearch - Logstash - Kibana stack in Docker which I am orchestrating using Docker Compose.
Redis will receive the logs from a remote location, forward them to Logstash, and then on to the customary Elasticsearch and Kibana.
In the docker-compose.yml, I am confused about the order of "links":
Elasticsearch links to no one, while Logstash links to both redis and elasticsearch:
elasticsearch:
redis:
logstash:
  links:
    - elasticsearch
    - redis
kibana:
  links:
    - elasticsearch
Is this order correct? What is the rationale behind choosing the "link" direction?
Why don't we say elasticsearch is linked to logstash?
Instead of using the Legacy container linking method, you could instead use Docker user defined networks. Basically you can define a network for your services and then indicate in the docker-compose file that you want the container to run on that network. If your containers all run on the same network they can access each other via their container name (DNS records are added automatically).
1) Create a user-defined network
docker network create pocnet
2) Update the docker-compose file
You want to add your containers to the network you just created. Your docker-compose file would look something along the lines of this:
version: '2'
services:
  elasticsearch:
    image: elasticsearch
    container_name: elasticsearch
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  redis:
    image: redis
    container_name: redis
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  logstash:
    image: logstash
    container_name: logstash
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  kibana:
    image: kibana
    container_name: kibana
    ports:
      - "5601:5601"
    networks:
      - pocnet
networks:
  pocnet:
    external: true
3) Start the services
docker-compose up
Note: you might want to open a new shell window to run step 4.
4) Test
Go into the Kibana container and see if you can ping the elasticsearch container.
your__Machine:/ docker exec -it kibana bash
kibana#123456:/# ping elasticsearch
First of all, links in Docker are unidirectional.
More info on links:
There are legacy links, and links in user-defined networks.
The legacy link provided four major functionalities on the default bridge network:
- name resolution
- a name alias for the linked container using --link=CONTAINER-NAME:ALIAS
- secured container connectivity (in isolation via --icc=false)
- environment variable injection
Comparing those four functionalities with non-default user-defined networks, without any additional config a Docker network provides:
- automatic name resolution using DNS
- an automatic secured, isolated environment for the containers in a network
- the ability to dynamically attach to and detach from multiple networks
- support for the --link option to provide a name alias for the linked container
In your case, automatic DNS on a user-defined network will help. First create a new network:
docker network create ELK -d bridge
With this approach you don't need to link containers on the same user-defined network; just put your ELK stack and Redis containers in the ELK network and remove the link directives from the compose file.
Your order looks fine to me. If you have any problem regarding the order, or need to wait for services to come up in dependent containers, you can use something like the following:
version: "2"
services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      - "db"
    entrypoint: ./wait-for-it.sh db:5432
  db:
    image: postgres
This will make the web container wait until it can connect to the db.
You can get the wait-for-it script from here.
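The linked wait-for-it.sh is the real tool; as a sketch of the idea, the core is just a retry loop around a probe command (wait_for here is a hypothetical helper, not part of the real script):

```shell
#!/bin/sh
# Retry a probe command until it succeeds or the attempt budget runs
# out; returns 0 on success, 1 on giving up.
wait_for() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 1
  done
  return 0
}

# In a container you might probe the database port with a command like
# `nc -z db 5432`; `true` here just demonstrates the flow.
wait_for 3 true && echo "dependency is up"
```

The real script adds a timeout, host:port parsing, and exec-ing the wrapped command once the port is reachable.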

Passing a docker container's generated name to another container in docker-compose

Container names in docker-compose are generated, and I need to pass this name to another container so it can make a connection.
My scenario is: from within a container that talks to the host's Docker daemon, I want to execute something in a sibling container as a second process within it.
So how can I have one container's name inside another?
That became easy with docker-compose 1.6.1 and the addition of network-scoped aliases in docker-compose issue 2829.
The --alias option can be used to resolve the container by another name on the network being connected to.
That means your first container can assume the existence of container 'x' as long as, later, another container starts with the network-scoped alias 'x'.
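In a Compose file, the same idea appears as network-scoped aliases. A sketch with assumed service, network, and alias names:

```yaml
version: '2'
services:
  consumer:
    image: alpine
    command: ping x
    networks:
      - backend
  worker:
    image: alpine
    command: sleep 3600
    networks:
      backend:
        aliases:
          - x
networks:
  backend: {}
```

consumer resolves the hostname x to whichever container currently carries that alias, regardless of the generated container name.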
You would need to link them; at least from your explanation, that is what you need.
Example below.
rabbitmq:
  container_name: rabbitmq
  image: million12/rabbitmq:latest
  restart: always
  ports:
    - "5672:5672"
    - "15672:15672"
  environment:
    - RABBITMQ_PASS=my_pass
haproxy:
  container_name: haproxy
  image: million12/haproxy
  restart: always
  command: -n 1
  ports:
    - "80:80"
  links:
    - rabbitmq:rabbitmq.server
  volumes:
    - /etc/haproxy:/etc/haproxy
Now those two containers are linked and can connect to each other.
You can ping rabbitmq container from haproxy:
ping rabbitmq.server
There is no way to pass the generated container name directly. Your best option is to set a project name with COMPOSE_PROJECT_NAME and pass that into the container with an environment: entry.
Then you can predict the container name using <project name>_<service name>_1.
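A sketch of that prediction; the pattern <project>_<service>_<index> is the Compose v1 default (newer Compose v2 joins the parts with dashes instead of underscores), and the fallback project name here is illustrative:

```shell
# Build the expected container name for the first replica of a service
# from the project name passed into the container.
project="${COMPOSE_PROJECT_NAME:-myproject}"
service="redis"
container="${project}_${service}_1"
echo "$container"
```

A connecting process inside the container can then use $container as the target of docker exec or docker network connect against the host's Docker socket.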
Another option is to tail the event stream from docker-compose events. That should provide all the information you need.
