This question already has an answer here:
Docker-compose depends on not waiting until depended on service isn't fully started
(1 answer)
Closed 3 years ago.
I am dealing with a Docker Compose project. Here is the compose file:
version: '3.3'
services:
  tomcatserver:
    build: ./mavenServer
    depends_on:
      - db
    ports:
      - "8010:8080"
  db:
    image: mariadb
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "true"
      MYSQL_ROOT_PASSWORD: "root"
      MYSQL_DATABASE: "tpformation"
      MYSQL_USER: "tomcat"
      MYSQL_PASSWORD: "tomcat"
    expose:
      - "3306"
When I start my stack with docker-compose up, it is always the maven container that starts first. However, it should be the db one. Could you help, please?
Docker Compose used to have a condition configuration under depends_on. This is no longer the case.
From this page:
depends_on does not wait for db and redis to be “ready” before starting web - only until
they have been started. If you need to wait for a service to be ready, see Controlling
startup order for more on this problem and strategies for solving it.
Version 3 no longer supports the condition form of depends_on.
The simplest way to handle this issue is to avoid it. Run docker-compose up -d db, and then you can start and restart anything that depends on it (docker-compose up tomcatserver).
In most cases, dependencies "become healthy" before any other container needs them, so you can just docker-compose up your-app and never notice the issue.
Of course, all the above statements are valid assuming you are not using the same docker-compose file in production.
If, however, you are still interested in hardening the startup sequence, you can take a look at the Control startup and shutdown order page in the Docker manual for some possibilities.
Relevant snippet from this page:
version: "2"
services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      - "db"
    command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
  db:
    image: postgres
Personally, I recommend against such tricks wherever possible.
I have a docker-compose.yml on my local machine, like below:
version: "3.3"
services:
  api:
    build: ./api
    volumes:
      - ./api:/api
    ports:
      - 3000:3000
    links:
      - mysql
    depends_on:
      - mysql
  app:
    build: ./app
    volumes:
      - ./app:/app
    ports:
      - 80:80
  mysql:
    image: mysql:8.0.27
    volumes:
      - ./mysql:/var/lib/mysql
    tty: true
    restart: always
    environment:
      MYSQL_DATABASE: db
      MYSQL_ROOT_PASSWORD: qwerty
      MYSQL_USER: db
      MYSQL_PASSWORD: qwerty
    ports:
      - '3306:3306'
The api is a NestJS app; app and mysql are Angular and MySQL respectively. I need to work with this locally.
How can I make it so that any change I make is applied without rebuilding the containers every time?
You don't have to build an image with your sources in it for a development environment. For NestJS, and since you're using Docker (I deliberately point this out because other container runtimes exist), you can simply run a Node.js image from the main Docker registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$PWD/app":/app node:12-alpine node /app/index.js
N.B.: I chose 12-alpine for the example. I assume the file that starts your app is index.js; replace it with yours. Note that docker run needs an absolute host path for the bind mount, hence "$PWD/app".
Note that you must install the Node dependencies yourself, and they must be in the ./app directory.
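One way to do that without installing Node on the host is a throwaway container (a sketch; the paths assume the ./app layout from above):

```shell
# One-off container: install the dependencies into the bind-mounted
# ./app directory on the host, so node_modules ends up next to your
# sources and is visible to the dev container started later.
docker run --rm -v "$PWD/app":/app -w /app node:12-alpine npm install
```

The --rm flag removes the container once npm finishes, so nothing lingers.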
For docker-compose, it could look like this:
version: "3.3"
services:
  app:
    image: node:12-alpine
    command: ["node", "/app/index.js"]
    volumes:
      - ./app:/app
    ports:
      - "80:80"
Same way for your API project.
For a production image, it is still suggested to build the image with the sources in it.
Say you're working on your front-end application (app). It needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself:
docker-compose up -d api
Update your application configuration for this different environment; if you would have proxied to http://api:3000 before, for example, you need to change this to http://localhost:3000 to connect to the container's published ports:.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings that will, well, differ per environment. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or update the front-end's back-end URL to point at the host system.
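As a minimal sketch of that idea (API_URL is an invented variable name, not something from your project): the front-end reads its back-end base URL from the environment and falls back to the host-side default, so the same code runs inside or outside Docker.

```shell
# Hypothetical settings snippet: API_URL is an assumed variable name.
# Inside Docker you might export API_URL=http://api:3000; outside
# Docker the fallback points at the container's published port.
API_URL="${API_URL:-http://localhost:3000}"
echo "proxying API requests to $API_URL"
```

Your dev-server configuration (a proxy entry, for example) would then read this variable instead of a hard-coded hostname.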
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.
I am trying to take a very difficult, multi-step Docker setup and turn it into an easy docker-compose. I haven't really used docker-compose before. I am used to using a Dockerfile to build an image, then running something like
docker run --name mysql -v ${PWD}/sql-files:/docker-entrypoint-initdb.d ... -h mysql -d mariadb:10.4
Then running the web app in the same manner after building its simple Dockerfile. Trying to combine these into a docker-compose.yml file seems to be quite difficult. I'll post my docker-compose.yml file (edited to remove passwords and such) and the error I am getting; hopefully someone can figure out why it's failing, because I have no idea...
docker-compose.yml
version: "3.7"
services:
  mysql:
    image: mariadb:10.4
    container_name: mysql
    environment:
      MYSQL_ROOT_PASSWORD: passwordd1
      MYSQL_ALLOW_EMPTY_PASSWORD: "true"
    volumes:
      - ./sql-files:/docker-entrypoint-initdb.d
      - ./ssl:/var/lib/mysql/ssl
      - ./tls.cnf:/etc/mysql/conf.d/tls.cnf
    healthcheck:
      test: ["CMD", "mysqladmin ping"]
      interval: 10s
      timeout: 2s
      retries: 10
  web:
    build: ./base/newdockerfile
    container_name: web
    hostname: dev.website.com
    volumes:
      - ./ssl:/etc/httpd/ssl
      - ./webfiles:/var/www
    depends_on:
      mysql:
        condition: service_healthy
    ports:
      - "8443:443"
      - "8888:80"
    entrypoint:
      - "/sbin/httpd"
      - "-D"
      - "FOREGROUND"
The error I get when running docker-compose up in the terminal window is...
Service 'web' depends on service 'mysql' which is undefined.
Why would mysql be undefined? It's the first service defined in the yml file, and it has a health check associated with it. Also, it fails very quickly, within a few seconds, so there's no way all the healthchecks ran and failed, and I'm not getting any other errors in the terminal window. I run docker-compose up and within a couple of seconds I get that error. Any help would be greatly appreciated. Thank you.
According to this documentation:
depends_on does not wait for db and redis to be “ready” before
starting web - only until they have been started. If you need to wait
for a service to be ready, see Controlling startup order for more on
this problem and strategies for solving it.
Designing your application to be resilient when the database is not available or not yet set up is something we all have to deal with. Note also that version 3 of the compose file format no longer supports the condition form of depends_on, which is likely why your file is rejected with that error. A healthcheck doesn't guarantee the database is ready before the next stage. The best way is probably to write a wait-for-it script (or use wait-for) and run it after depends_on:
depends_on:
  - "db"
command: ["./wait-for-it.sh"]
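A minimal sketch of what such a wait script does, with the network probe replaced by a stub so it runs anywhere (a real script would run something like nc -z db 3306 as the readiness check):

```shell
#!/bin/sh
# Sketch of the wait-for pattern: retry a readiness probe with a cap,
# then hand off to the real command. probe() is a stand-in that
# "succeeds" on the third attempt; in a real script it would be a
# port check against the database container.
PROBE_COUNT=0
probe() {
  PROBE_COUNT=$((PROBE_COUNT + 1))
  [ "$PROBE_COUNT" -ge 3 ]    # pretend the DB answers on attempt 3
}

attempts=0
until probe; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 30 ]; then
    echo "timed out waiting for db" >&2
    exit 1
  fi
  echo "db not ready yet (attempt $attempts), retrying..."
  sleep 0.1
done
echo "db is up - executing command"
# a real script would now run: exec "$@"
```

The cap on attempts matters: without it, a dependency that never comes up would hang the container forever instead of failing visibly.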
I have several containers which are described in a docker-compose-<service>.yaml file each, and which I start with
docker-compose -f docker-compose-<service>.yaml up -d
I then see via docker ps the container running.
I expected that I could stop that container via
docker-compose -f docker-compose-<service>.yaml down
The container is, however, not stopped. Nor is it when I use the command above with stop instead of down.
Doing a docker kill <service> stops the container.
My question: since all my services started with docker-compose are effectively one container for each docker-compose-<service>.yaml file, can I use the bare docker command to stop it?
Or more generally speaking: is docker-compose simply a helper for underlying docker commands which means that using docker is always safe (from a "consistency in using different commands" perspective)?
My question: since all my services started with docker-compose are effectively one container for each docker-compose-<service>.yaml file, can I use the bare docker command to stop it?
Actually, docker-compose uses the Docker engine; you can try it locally.
For example, docker-compose.yaml:
version: "3"
services:
  # Database
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    networks:
      - wpsite
  # phpmyadmin
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    restart: always
    ports:
      - '9090:80'
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: wordpress
    networks:
      - wpsite
networks:
  wpsite:
You can now interact with them through the Docker engine if needed:
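For example (a sketch: the container names below assume a project, i.e. directory, named "wp", since Compose derives names from the project and service names). Compose labels every container it creates, so the plain docker CLI can find and manage them:

```shell
# List the containers Compose created for this project via its label.
docker ps --filter "label=com.docker.compose.project=wp"

# Stop or inspect them with the ordinary docker commands.
docker stop wp_db_1 wp_phpmyadmin_1
docker logs wp_phpmyadmin_1
```

So yes: anything docker-compose starts is a normal container underneath, and the bare docker commands are safe to use on it.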
More globally, docker-compose is a kind of orchestrator (I prefer the term composer). If you need to define a stack of containers that depend on each other (like the previous phpmyadmin/mysql example), it is perfect for testing in a dev environment. In my view, for better resilience, HA, and service management of a container stack in a production environment, you should strongly consider a real orchestrator such as Docker Swarm, Kubernetes, or OpenShift.
Here is some documentation explaining the difference: https://linuxhint.com/docker_compose_vs_docker_swarm/
You can also see: What is the difference between `docker-compose build` and `docker build`?
To build a web server, I'm trying to understand how containers are attached to each other, and I really need some quick answers.
So, if we take this docker-compose.yml file as an example:
version: '2'
services:
  # APP
  nginx:
    build: docker/nginx
    volumes_from:
      - php
    links:
      - php
    depends_on:
      - php
  php:
    build: docker/php
    volumes:
      - ${SYMFONY_APP_PATH}:/symfony
    links:
      - mysql
      - faye
      - rabbitmq
      - elasticsearch
  client:
    image: node:8.9.4
    volumes_from:
      - php
    working_dir: /symfony
    user: 1000:1000
    command: "npm run dev"
    ports:
      - "${LIVERELOAD_PORT}:35729"
    environment:
      LIVERELOAD_PORT: ${LIVERELOAD_PORT}
  mysql:
    build: docker/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - ${SYMFONY_APP_PATH}:/symfony
      - "mysql:/var/lib/mysql"
  rabbitmq:
    image: rabbitmq:3.4-management
    volumes:
      - "rabbitmq:/var/lib/rabbitmq"
  elasticsearch:
    volumes:
      - "elasticsearch5:/usr/share/elasticsearch/data"
      - ${SYMFONY_APP_PATH}:/symfony
volumes:
  mysql: ~
  elasticsearch5: ~
  rabbitmq: ~
What is the difference between volumes_from, links, and depends_on ?
If the idea is to attach each container to the others, why don't we use only links?
Why in my example is nginx dependent on/linked to the php container? Why not the opposite? At the file footer, there's a volume configuration (volumes: mysql: ~ elasticsearch5: ~ rabbitmq: ~), but I think we already defined the volumes of each container, so what is the main reason for that config?
And why don't we use only one web container that runs php, nginx, and mysqld? Why do we separate them?
What is the difference between volumes_from, links, and depends_on?
Both links and depends_on express relationships between containers: links gives one container a network alias for another (and implies startup order), while depends_on only controls startup order.
links is a legacy feature and will be deprecated in the future, so avoid using it whenever possible.
volumes_from is used for a different purpose: it mounts all the volumes of another container, and it has nothing to do with links or depends_on.
Why in my example is nginx dependent on/linked to the php container? Why not the opposite?
depends_on defines order of services starting. In your example, you're using Nginx as a proxy server to the PHP service. So you might want the PHP service to start before Nginx.
And why don't we use only one web container that runs php, nginx, and mysqld? Why do we separate them?
One of Docker's best practices is to keep each container simple enough to do only one job. Much like Unix's "do one thing and do it well" philosophy.
Single responsibility principle is a good thing, embrace it.
I have a Dockerfile set up to run a service that requires some subsequent commands be run in order to initialize properly. I have created a startup script with the following structure and set it as my entrypoint:
Set environment variables for service, generate certificates, etc.
Run service in background mode.
Run configuration commands to finish initializing service.
Obviously, this does not work, since the service was started in the background and the entrypoint script will exit with code 0. How can I keep this container running after the configuration has been done? Is it possible to do so without running a busy loop?
How can I keep this container running after the configuration has been done? Is it possible to do so without a busy loop running?
Among your many options:
Use something like sleep inf, which is not a busy loop and does not consume CPU time.
You could use a process supervisor like supervisord to start your service and start the configuration script.
You could run your configuration commands in a separate container after the service container has started.
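A sketch of the general shape of such an entrypoint, with sleep standing in for the real service and an echo for the real configuration step (both are placeholders):

```shell
#!/bin/sh
# Entrypoint sketch: start the service in the background, run the
# one-time configuration against it, then block on the service PID so
# the container stays alive without a busy loop.
# `sleep 2` is a placeholder for the real service process.
set -e

sleep 2 &                      # placeholder for: my-service &
SERVICE_PID=$!

echo "running post-start configuration..."   # placeholder config step

# wait blocks without consuming CPU; the container exits when the
# service process exits, which is the behavior you usually want.
wait "$SERVICE_PID"
echo "service exited"
```

Unlike a busy loop, wait costs no CPU, and the container's lifetime stays tied to the service's lifetime rather than to an unrelated sleep.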
You can look at this GitHub issue, and this specific comment in particular:
https://github.com/docker-library/wordpress/issues/205#issuecomment-278319730
To summarize, you do something like this:
version: '2.1'
services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    image: wordpress:latest
    volumes:
      - "./wp-init.sh:/usr/local/bin/apache2-custom.sh"
    depends_on:
      db:
        condition: service_started
    ports:
      - 80:80
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
    command:
      - apache2-custom.sh
wp-init.sh is where you write the code to execute.
Note the command yml tag:
command:
  - apache2-custom.sh
Because we bound the two together in the volumes tag, it will actually run the code in wp-init.sh inside your container.
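For illustration, wp-init.sh could follow this shape; the initialization lines are placeholders, and in the real script the final line would be exec apache2-foreground, the WordPress image's normal foreground command (here replaced with an echo so the sketch runs anywhere):

```shell
#!/bin/bash
# Sketch of wp-init.sh: run one-time setup, then exec the image's
# normal foreground process so PID 1 keeps the container alive.
echo "running one-time WordPress initialization..."  # placeholder steps
# e.g. wait for the DB, run wp-cli commands, fix file permissions, ...

# exec replaces this script with the long-running process; in the real
# script this line would be: exec apache2-foreground
exec echo "apache2-foreground takes over here"
```

The exec is the important part: without it, Apache would run as a child of the script and signals (like docker stop's SIGTERM) would not reach it cleanly.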