Eliminate attribute from inherited container? - docker

So, I have two containers. One of them (named celery) should inherit everything from the other one, but it should have no port (I don't want it to be exposed). The problem is that the first one has a port set already, and the second one will inherit it.
How can I make the second container inherit everything from the first one, but have no ports exposed?
Thanks.
django: &django
  container_name: ${COMPOSE_PROJECT_NAME}_django.dev
  ports:
    - "8000:8000"

celery:
  <<: *django
  container_name: ${COMPOSE_PROJECT_NAME}_celery.dev
  command: "python -m celery -A app worker"

Overriding the inherited ports in celery should work, i.e. setting it to an empty array: the merge key copies everything from django, and the explicit ports: [] then replaces the inherited value.
django: &django
  container_name: ${COMPOSE_PROJECT_NAME}_django.dev
  ports:
    - "8000:8000"

celery:
  <<: *django
  container_name: ${COMPOSE_PROJECT_NAME}_celery.dev
  command: "python -m celery -A app worker"
  ports: []
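As a sanity check, rendering the merged configuration with docker compose config should show the celery service without any published ports. A sketch of the expected output for celery (illustrative only, assuming the file above):

celery:
  container_name: ${COMPOSE_PROJECT_NAME}_celery.dev
  command: python -m celery -A app worker
  ports: []   # the empty array has replaced the inherited "8000:8000"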


Docker compose stop dependent service when done [duplicate]

(Closed as a duplicate of: How to stop all containers when one container stops with docker-compose?)
I have this docker-compose.yml which runs a node script that depends on Redis.
version: "3.9"
services:
redis:
image: "redis:alpine"
# restart: always
ports:
- "127.0.0.1:6379:6379"
volumes:
- ./docker/redis:/data
node:
image: "node:17-alpine"
user: "node"
depends_on:
- redis
environment:
- NODE_ENV=production
- REDIS_HOST_ENV=redis
volumes:
- ./docker/node/src:/home/node/app
- ./docker/node/log:/home/node/log
expose:
- "8081"
working_dir: /home/node/app
command: "npm start"
When starting this with docker compose up, both services will start. However, when the node service is finished, the redis service keeps running. Is there a way to define that the redis service should stop when the node service is done?
I have examined the documentation for the Compose Spec but have not found anything that allows you to immediately stop a container based on the state of another one. Perhaps there really is a way, but you can always control the behaviour of the redis service with a healthcheck:
services:
  redis:
    image: "redis:alpine"
    # restart: always
    ports:
      - "127.0.0.1:6379:6379"
    volumes:
      - ./docker/redis:/data
    healthcheck:
      test: ping -c 2 mynode || kill 1
      interval: 5s
      retries: 1
      start_period: 20s
  node:
    image: "node:17-alpine"
    container_name: mynode
    user: "node"
    depends_on:
      - redis
    environment:
      - NODE_ENV=production
      - REDIS_HOST_ENV=redis
    volumes:
      - ./docker/node/src:/home/node/app
      - ./docker/node/log:/home/node/log
    expose:
      - "8081"
    working_dir: /home/node/app
    command: "npm start"
For the node service, I have added container_name: mynode, which the redis service needs in order to contact it. The container name also becomes the hostname, if not specified with the hostname property.
The redis service has a healthcheck that pings the node container every 5 seconds, starting 20 seconds after the container starts (per start_period). If the ping is successful, the redis container is labeled as healthy; otherwise its init process is killed, which stops the container.
This solution might work in your case, but it has some downsides:
The healthcheck feature is abused here; besides, what if you needed a real healthcheck as well?
You cannot always kill the init process, because it is protected by default. There are some discussions about this, and the most popular solution seems to be using tini as the init process. Fortunately, in the image you are using, it is possible.
The redis service contacts the node service via its hostname, which means they must be in the same network. Currently that is the default bridge network, which should be avoided most of the time; I suggest declaring a custom bridge network instead (see the sketch after this list).
This solution is based on polling the node container, which is not very elegant, not least because you have to hope that the time-based parameters in the healthcheck section are good enough.
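A minimal sketch of the custom bridge network suggested above, assuming the service names from the file earlier (the network name backend is an illustrative choice): both services join an explicitly declared bridge network instead of the default one.

services:
  redis:
    image: "redis:alpine"
    networks:
      - backend
  node:
    image: "node:17-alpine"
    container_name: mynode
    networks:
      - backend

networks:
  backend:
    driver: bridge   # declared in the compose file, so the default bridge is not used

With this in place, redis can still resolve mynode for the healthcheck ping, but over the dedicated network.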

Docker image working on pull but not on pull image directive in yml file?

I have a Docker image on a GitLab registry.
When I run (after logging in on the target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear are working. When I enter the container, everything looks fine.
But I don't have any services running. So I had the idea to create a yml file for docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then it fails, exiting with code 0 and no further message.
If I add commands to my yml like php artisan config:clear, the error becomes even less clear to me: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem is that I left a volume directive in, which overwrites my entire application with an empty directory.
You can just leave that out.
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application  ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the network of the containers by listing the networks with docker network ls,
then, when the list is shown, inspecting the compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach the other container.
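If they really are on different networks, a minimal sketch of a fix (the network name backend is an illustrative assumption) is to attach both services to one explicitly declared bridge network, after which the application can reach the database at the hostname mysql:

version: '3'
services:
  mysql:
    image: mysql:5.7
    networks:
      - backend
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    depends_on:
      - mysql
    networks:
      - backend

networks:
  backend:
    driver: bridge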

docker-compose.yml container_name and hostname

What is the use of container_name in a docker-compose.yml file? Can I use it as a hostname, which is otherwise just the service name in the docker-compose.yml file?
Also, when I explicitly set hostname under a service, does it override the hostname derived from the service name?
hostname: just sets what the container believes its own hostname is. In the unusual event you got a shell inside the container, it might show up in the prompt. It has no effect on anything outside, and there’s usually no point in setting it. (It has basically the same effect as hostname(1): that command doesn’t cause anything outside your host to know the name you set.)
container_name: sets the actual name of the container when it runs, rather than letting Docker Compose generate it. If this name is different from the name of the block in services:, both names will be usable as DNS names for inter-container communication. Unless you need to use docker to manage a container that Compose started, you usually don’t need to set this either.
If you omit both of these settings, one container can reach another (provided they're in the same Docker Compose file and have compatible networks: settings) using the name of the services: block and the port the service inside the container is listening on.
version: '3'
services:
  redis:
    image: redis
  db:
    image: mysql
    ports: ["6033:3306"]
  app:
    build: .
    ports: ["12345:8990"]
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
      MYSQL_HOST: db
      MYSQL_PORT: 3306
The easiest answer is the following:
container_name: This is the container name that you see from the host machine when listing the running containers with the docker container ls command.
hostname: The hostname of the container. Actually, the name that you define here goes into the container's /etc/hosts file:
$ docker exec -it myserver /bin/bash
bash-4.2# cat /etc/hosts
127.0.0.1 localhost
172.18.0.2 myserver
That means you can ping machines by those names within a Docker network.
I highly suggest setting these two parameters to the same value to avoid confusion.
An example docker-compose.yml file:
version: '3'
services:
database-server:
image: ...
container_name: database-server
hostname: database-server
ports:
- "xxxx:yyyy"
web-server:
image: ...
container_name: web-server
hostname: web-server
ports:
- "xxxx:xxxx"
- "5101:4001" # debug port
You can customize the image name to build and the container name during docker-compose up. For this, you need to specify them as below in the docker-compose.yml file.
It will create an image and a container with custom names.
version: '3'
services:
  frontend_dev:
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    build:
      context: .
      dockerfile: Dockerfile.dev
    image: "mycustomname/sample:v1"
    container_name: mycustomname_sample_v1
    ports:
      - '3000:3000'
    volumes:
      - /app/node_modules
      - .:/app

Docker shared container within multiple docker compose projects

I have a docker-compose.yml which gets 2 services up (I have left out all irrelevant data from it).
app:
  build:
    context: .
    dockerfile: ./docker/app/Dockerfile
  image: ...
  container_name: app-${ENV}
  depends_on:
    - db
  expose:
    - 80

db:
  image: ...
  container_name: my-cool-db
  ports:
    - "3306:3306"
The point to see here is that app gets a container name that depends on the ENV parameter. So basically I have a shell script running the following:
ENV=$1 docker-compose -p $1 up -d
So in short, whatever I pass as the parameter, a new app container should be brought up. For example, if I run sh initializer.sh first, I will get an app-first container. The -p parameter is specified so I can have multiple instances of the same container, classified as different projects.
If I have a single container this works great, and I end up with say:
app-first
app-second
app-third
What I would like to achieve is to have all containers use the same DB. But when I run docker-compose, the DB container still wants to be brought up, regardless of the fact that it already exists.
Is the issue that it tries to create the DB under a different project name, but with the same container name, so it causes a collision?
Can this be done without bringing up 2 separate DB containers?
A hacky solution:
change your compose file to
services:
  app:
    image: ...
    container_name: app-${ENV}
    networks:
      - shared
    expose:
      - 80
  db:
    image: ...
    container_name: my-cool-db
    networks:
      - shared
    ports:
      - "3306:3306"

networks:
  shared:
    external: true
Then first create the network: docker network create shared
Bring up db: docker-compose up -d db
First app: ENV=first docker-compose -p first up -d app
Second app: ENV=second docker-compose -p second up -d app
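A less hacky variant of the same idea (a sketch, assuming you can split the files; the file name docker-compose.db.yml is illustrative): keep the shared db in its own compose file and project, so the per-ENV app projects never try to create it.

# docker-compose.db.yml -- brought up once with:
#   docker-compose -f docker-compose.db.yml -p shareddb up -d
services:
  db:
    image: ...
    container_name: my-cool-db
    networks:
      - shared
    ports:
      - "3306:3306"

networks:
  shared:
    external: true

The app compose file then keeps only the app service, attached to the same external shared network.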

Docker healthcheck in compose file

I'm trying to integrate the new healthcheck feature into my Docker setup, but I don't really know how to do it the right way :/
The problem is that my database container needs more time to start up and initialize the database than the container that runs my main application.
As a result, the main container won't start correctly, because of the missing database connection.
I wrote a healthcheck.sh script to check the database container for connectivity, so the main container starts booting only after connectivity is available. But I don't know how to integrate it correctly into the Dockerfile and my docker-compose.yml.
healthcheck.sh is like:
#!/bin/bash
COUNTER=0
while [[ $COUNTER = 0 ]]; do
    mysql --host=HOST --user="user" --password="password" --database="databasename" --execute="SELECT 1"
    if [[ $? -ne 0 ]]; then  # any non-zero exit code means the DB is not reachable yet
        sleep 1
        echo "Let's sleep again"
    else
        COUNTER=1
        echo "OK, let's go!"
    fi
done
mysql container Dockerfile:
FROM repository/mysql-5.6:latest
MAINTAINER Me
... some copies, chmod and so on
VOLUME ["/..."]
EXPOSE 3306
CMD [".../run.sh"]
HEALTHCHECK --interval=1s --timeout=3s CMD ./healthcheck.sh
docker-compose.yml like:
version: '2'
services:
  db:
    image: db image
    restart: always
    dns:
      - 10.
    ports:
      - "${MYSQL_EXTERNAL_PORT}:${MYSQL_INTERNAL_PORT}"
    environment:
      TZ: Europe/Berlin
  data:
    image: data image
  main application:
    image: application image
    restart: always
    dns:
      - 10.
    ports:
      - "${..._EXTERNAL_PORT}:${..._INTERNAL_PORT}"
    environment:
      TZ: Europe/Berlin
    volumes:
      - ${HOST_BACKUP_DIR}:/...
    volumes_from:
      - data
      - db
What do I have to do to integrate this healthcheck into my docker-compose.yml file so that it works?
Or is there any other way to delay the startup of my main container?
Thx, Markus
I believe this is similar to Docker Compose wait for container X before starting Y
Your db_image needs to support curl.
To do that, create your own db_image as:
FROM base_image:latest
RUN apt-get update
RUN apt-get install -y curl
EXPOSE 3306
Then all you should need is a docker-compose.yml that looks like this:
version: '2.1'  # depends_on conditions require compose file format 2.1 or later
services:
  db:
    image: db_image
    restart: always
    dns:
      - 10.
    ports:
      - "${MYSQL_EXTERNAL_PORT}:${MYSQL_INTERNAL_PORT}"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:${MYSQL_INTERNAL_PORT}"]
      interval: 30s
      timeout: 10s
      retries: 5
    environment:
      TZ: Europe/Berlin
  main_application:
    image: application_image
    restart: always
    depends_on:
      db:
        condition: service_healthy
    links:
      - db
    dns:
      - 10.
    ports:
      - "${..._EXTERNAL_PORT}:${..._INTERNAL_PORT}"
    environment:
      TZ: Europe/Berlin
    volumes:
      - ${HOST_BACKUP_DIR}:/...
    volumes_from:
      - data
      - db
In general your application should be able to cope with unavailable resources, but there are also some cases when starting up where it is pretty convenient to have one container waiting for another to be "fully available". Docker itself doesn't handle that for you, but there are ways to handle the startup in the resource-using container by delaying the actual command with some script.
There is a good example of a postgresql startup check that can be used in any container that needs to wait for the database to be "fully started". Please see the sample code in the docker docs: https://docs.docker.com/compose/startup-order/
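A condensed sketch of that pattern in compose form (assumptions, not taken verbatim from the linked page: a postgres service named db, the pg_isready client available in the app image, and npm start as the real command):

services:
  app:
    build: .
    depends_on:
      - db
    # loop until the database accepts connections, then start the app
    command: sh -c 'until pg_isready -h db; do sleep 1; done; exec npm start'
  db:
    image: postgres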
Since docker-compose 1.10.0 you can specify healthchecks in your compose file: https://github.com/docker/docker.github.io/blob/master/compose/compose-file.md#healthcheck
It makes use of https://docs.docker.com/engine/reference/builder/#/healthcheck which was introduced with Docker 1.12.
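For a MySQL container like the one in this question, a healthcheck based on mysqladmin ping is a common alternative to the curl probe above (a sketch, not from the original answers; the image tag and credentials are illustrative):

services:
  db:
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: example
    healthcheck:
      # mysqladmin ping exits 0 once the server accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-pexample"]
      interval: 10s
      timeout: 5s
      retries: 5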
