How can I start one container before another? - docker

I need to start backend-container after database-container has started. How can I do that with docker-compose?

Use a depends_on clause on your backend-container. Something like this:
version: "3.7"
services:
  web:
    build: .
    depends_on:
      - db
  db:
    image: postgres
Here is the documentation about it.
Have fun!

You should look into the depends_on configuration for docker compose.
In short, you should be able to do something like:
services:
  database-container:
    # configuration
  backend-container:
    depends_on:
      - database-container
    # configuration

The depends_on field will work with docker-compose, but you will find it is not supported if you upgrade to swarm mode. It also guarantees the database container is created, but not necessarily ready to receive connections.
For that, there are several options:
Let the backend container fail and configure a restart policy. This is ugly, leads to false errors being reported, but it's also the easiest to implement.
Perform a connection from your app with a retry loop, a sleep between retries, and a timeout in case the database doesn't start in a timely fashion. This is usually my preferred method, but it requires a change to your application.
Use an entrypoint script with a command like wait-for-it.sh that waits for a remote resource to become available, and once that command succeeds, launches your app. This doesn't cover all the scenarios that a complete client connection would, but it can be less intrusive to implement, since it only requires changes to an entrypoint script rather than to the app itself.
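The second option, a retry loop with a sleep and a timeout, can be sketched with nothing but the standard library; this is a minimal illustration assuming a plain TCP reachability check (the host, port, and timing values are placeholders, not anything prescribed above):

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0, delay=2.0):
    """Retry a TCP connection until `timeout` seconds have passed.

    Returns True once the port accepts connections, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Attempt a real connection; success means the port is open.
            with socket.create_connection((host, port), timeout=delay):
                return True
        except OSError:
            time.sleep(delay)  # database not ready yet; sleep and retry
    return False

# Illustrative usage at app startup:
# if not wait_for_port("db", 5432):
#     raise SystemExit("database did not become ready in time")
```

Note that an open TCP port only shows the server process is listening; the database may still refuse logins while initializing, so a full client connection check is stricter.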

Related

Gitlab-runner + Docker-compose deploying scheme: how to properly restart containers after reboot of host server

Suppose I have repository on Gitlab and following deploying scheme:
Setup docker and gitlab-runner with docker executor on host server.
In .gitlab-ci.yml setup docker-compose to build and up my service together with dependencies.
Set up the pipeline to be triggered by pushing commits to the production branch.
Suppose docker-compose.yml has two services: app (with restart: always) and db (without restarting rule). app depends on db so docker-compose up starts db and then app.
It works perfectly until the host server reboots. After that, only the app container restarts.
Workarounds I've found, and their cons:
Add restart: always to the db service. But app can start before db and hence fail.
Use docker-compose on the host machine and set docker-compose up to run automatically. But in that case I would have to set up docker-compose, deploy SSH keys, and clone the code somewhere on the host server and keep it updated. That seems to violate the DRY principle and overcomplicate the scheme.
Trigger the pipeline after reboot. The only way I've found is to trigger it through the API with a trigger token. But then I have to set up a trigger token, which is not as bad as the previous option but still violates the DRY principle and overcomplicates the scheme.
How can one improve this deployment scheme so that docker restarts the containers in the right order after a reboot?
P.S. Configs are as following:
.gitlab-ci.yml:
image:
  name: docker/compose:latest
services:
  - docker:dind
stages:
  - deploy
deploy:
  stage: deploy
  only:
    - production
  script:
    - docker image prune -f
    - docker-compose build --no-cache
    - docker-compose up -d
docker-compose.yml:
version: "3.8"
services:
  app:
    build: .
    container_name: app
    depends_on:
      - db
    ports:
      - "80:80"
    restart: always
  db:
    image: postgres
    container_name: db
    ports:
      - "5432:5432"
When you add restart: always to the db service, your app can start before db and fail. But your app should then be restarted because of the restart: always policy; if that doesn't happen, your failed app is probably exiting with the wrong exit code.
So you can add a healthcheck and restart the app after a delay within which you expect it to be working.
A simple check of port 80 can help.
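A sketch of such a healthcheck on the app service (the curl-based test and the intervals are illustrative, and assume curl exists in the image):

```yaml
services:
  app:
    build: .
    restart: always
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80/"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 40s
```

Be aware that plain Docker does not restart a container merely for being unhealthy; the health status has to be acted on, e.g. by the process exiting on failure or by an external watcher such as willfarrell/autoheal.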
This basically happens because your app fails fast when the database is unavailable.
That can be useful in some cases, but for your use case you can implement the app so that it retries the connection when it fails. Ideally a backoff strategy would be implemented so that you don't overload your database in case of a real issue.
Losing the connection to the database can happen, but does it make sense to kill your app when the database is unavailable? Can you implement a fallback, e.g. "Sorry, we have an issue but we are working on it"? From a user's perspective, being told that there is an issue you are working to fix is a much better experience than the app simply not opening.
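The backoff strategy suggested above can be sketched generically; this is a minimal illustration in which the `connect` callable, the attempt count, and the delay values are all assumptions for the example:

```python
import random
import time

def retry_with_backoff(connect, attempts=5, base=0.5, cap=30.0):
    """Call `connect` until it succeeds, sleeping with exponential
    backoff plus jitter between attempts, so a struggling database is
    not hammered by many clients reconnecting in lockstep."""
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the real error
            delay = min(cap, base * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay))  # add jitter
```

In a real app, `connect` would be whatever opens your database session, and the exception type would match your database driver's connection error.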

I want to fail docker compose if my first task fails

I want docker compose to fail if my first task fails, and not move on to the other tasks mentioned in the compose file.
Docker-compose is not a script; it's the definition of an environment. The environment is described in the docker-compose file by defining each service that makes up the environment.
Of course, it is sometimes necessary to have a service fail during initialization.
This can be done by, for example:
defining a Dockerfile for the service, which performs initialization during image creation, or
defining an entrypoint script, which does initialization when starting a container based on the image.
The point is, docker-compose shouldn't fail. The services should fail.
If you absolutely need to fail on startup, you can define a dependency between services by using depends_on to define the order that containers are started; like this:
version: "3.7"
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres
Please note that the dependent services will start as soon as the parent services have started; Compose will not wait for them to be in a ready state.
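If you are on a recent version of Compose (and not deploying to swarm mode), the long form of depends_on can express that wait via a healthcheck; the pg_isready command and the intervals below are illustrative assumptions:

```yaml
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
```

With this, web is not started until db's healthcheck reports healthy.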

Make docker image as base for another image

Now I have built a simple GET API to access this database: https://github.com/ghusta/docker-postgres-world-db
This API will get a country code and get the full record of the country of this country from the database.
The structure is that the API is in a separate docker image, and the database is in another one.
So when the API's image tries to start, I need it to start the database's image first, and then start running itself on top of it.
So how to do that?
You can use Docker Compose, specifically its depends_on directive. This causes Docker to start all dependencies before starting the dependent container.
Unfortunately there is no way to make it wait for the dependency to go live before starting any dependents. You'll have to manage that yourself with a wait script or similar.
The most common solution is to use docker compose along with a third-party script.
For Example your docker compose file might look like:
services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      - "db"
    command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
  db:
    image: postgres
Where ./wait-for-it.sh is a third-party script you can get from https://github.com/vishnubob/wait-for-it.
You can also use the script from https://github.com/Eficode/wait-for.
I would recommend tweaking the script to your needs if you want to (I did that).
P.S:
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.

docker-compose conditionally build containers

Our team is new to running a micro-service ecosystem, and I am curious how one would conditionally load docker containers from a compose file or another variable-based script.
An example use case: doing front-end development that depends on a few different services, which we will label DockerA through DockerD.
Dependency Matrix
Feature1 - DockerA
Feature2 - DockerA and DockerB
Feature3 - DockerA and DockerD
I would like to be able to run something like the following
docker-compose --feature1
or
magic-script -service DockerA -service DockerB
Basically, I would like to run the command to conditionally start the APIs that I need.
I am already aware of using various mock servers for UI development, but want to avoid them.
Any thoughts on how to configure this?
You can stop all services after creating them and then selectively start them one by one. E.g.:
version: "3"
services:
  web1:
    image: nginx
    ports:
      - "80:80"
  web2:
    image: nginx
    ports:
      - "8080:80"
$ docker-compose up -d
Creating network "composenginx_default" with the default driver
Creating composenginx_web2_1 ... done
Creating composenginx_web1_1 ... done
$ docker-compose stop
Stopping composenginx_web1_1 ... done
Stopping composenginx_web2_1 ... done
Now any service can be started individually, e.g.:
$ docker-compose start web2
Starting web2 ... done
Also, with linked services, there is the scale command, which can change the number of running containers for a service (it can add containers without a restart).
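Another option, if you are on a recent version of Compose, is service profiles, which start a service only when its profile is explicitly requested; the image and profile names below are illustrative, mapped onto the dependency matrix in the question:

```yaml
services:
  DockerA:
    image: example/docker-a   # no profile: always started
  DockerB:
    image: example/docker-b
    profiles: ["feature2"]
  DockerD:
    image: example/docker-d
    profiles: ["feature3"]
```

Then docker compose --profile feature2 up starts DockerA and DockerB, while docker compose --profile feature3 up starts DockerA and DockerD, which is close to the magic-script invocation the question asks for.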

How to control docker-compose build order?

I have three services to build: A, B and C. A should be built first, because B and C depend on A (they import A as an image). I thought they would be built in order, but I found out they are built in some random order.
So, how do I control build order in docker-compose.yml?
The accepted answer above does not seem correct.
As seen in the documentation
There are several things to be aware of when using depends_on:
depends_on does not wait for db and redis to be "ready" before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
Version 3 no longer supports the condition form of depends_on.
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
Update:
In recent practice, I have found that my answer only pertains to run ordering.
Refer to the answer by Quinten Scheppermans and the comment by Authur Weborg about dobi.
You can control build order using depends_on directive.
services:
  ...
  B:
    depends_on:
      - A
  C:
    depends_on:
      - A
If you need to run one service before another, you can use docker-compose run bar after setting depends_on: - foo in your docker-compose.yml file.
For example:
# docker-compose.yml
services:
  foo:
    . . .
  bar:
    . . .
    depends_on:
      - foo
then run,
# terminal
$ docker-compose run bar
However, per the docs, this only controls the order in which processes start, not the order in which they finish. That can often lead to situations in which dependent services do not start properly. Potential solutions include wait-for-it, wait-for, or dockerize. I recommend reading the docs before implementing any of these solutions, as there are caveats.
Since Compose v2, BuildKit is enabled by default and builds images in parallel.
By disabling BuildKit, you can build images in the order they appear in your docker-compose.yml file:
DOCKER_BUILDKIT=0 docker-compose build
If you still want to build images in parallel, you can consider defining the services in multiple docker-compose.yml files for building, then composing up with another docker-compose.yml file for deployment.
As stated in https://github.com/docker/compose/blob/e9220f45df07c1f884d5d496507778d2cd4a1687/compose/project.py#L182-L183
Preserves the original order of self.services where possible, reordering as needed to resolve dependencies.
So for me it worked to manually sort the services: the services that others depend on first, then the services that use them, and the most dependent ones last.
Example
version: '3'
services:
  mydb:
    image: mydb
  serviceA:
    build:
      dockerfile: DockerfileA
    depends_on:
      - mydb
  serviceB:
    build:
      dockerfile: DockerfileB  # which contains 'FROM serviceA'
    depends_on:
      - mydb
Source: https://github.com/docker/compose/issues/5228#issuecomment-565469948
