I have a Docker Compose file with multiple services that depend on each other. If an earlier service fails, I still want the next service to be executed. How can I handle this kind of scenario?
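For reference, Compose's long-form depends_on controls exactly this behavior. A minimal sketch (service names are made up): condition: service_started lets the next service run even if the dependency later fails, whereas condition: service_completed_successfully would gate it on a clean exit.

services:
  first:
    image: alpine
    command: sh -c "exit 1"           # stands in for a service that fails
  second:
    image: alpine
    command: echo "second still runs"
    depends_on:
      first:
        condition: service_started    # only waits for first to start, not to succeed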
I'm trying to set up a docker-compose structure that roughly mimics AWS Lambda: I have my main API, an AWS Lambda service emulator, and several Lambda function images.
services:
  api:
    # ...
  lambda_server:
    # ...
  lambda_f_1:
    # ...
  lambda_f_2:
    # ...
  lambda_f_3:
    # ...
  lambda_f_4:
    # ...
The problem with the above is that each lambda_f_n is quite heavy; I can't have them all running at once.
The idea is that api talks to lambda_server, which spins up a lambda_f_x, which then returns an output back to lambda_server and stops executing. So each lambda_f_x is run dynamically and has an ephemeral lifecycle.
My current solution just doesn't have any lambda_f_x in the compose definition; they are run by lambda_server using the Docker HTTP API. That works, with one very annoying caveat: when you compose down, they keep running, making local development hell.
Is there a way I can start/stop services inside a docker compose setup dynamically? Or dynamically add containers to a docker compose group so that they all stop together?
docker compose is just a simple way to run several predefined containers together; it is not a fully featured container orchestration tool.
You may want to check out Docker Swarm or Kubernetes, which are capable of starting and stopping containers dynamically. Or, in your current setup, add a shutdown hook that removes all related containers on shutdown.
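One way to implement such a hook: have lambda_server tag every container it creates with a custom label, then force-remove everything carrying that label on shutdown. A minimal sketch (the label name is a made-up example, not anything Docker predefines):

# lambda_server sets this label on every container it starts via the API;
# the shutdown hook then removes all of them in one sweep
docker ps -aq --filter "label=lambda.dynamic=true" | xargs -r docker rm -f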
I'm specifically trying to get files from services (Docker containers) in a GitLab CI job to the runner container. I could provide more details on exactly what I'm trying to do, but I'd like to keep this question fairly generic and platform/language-agnostic.
Essentially I have the following .gitlab-ci.yml:
image: customselfhostedimage-redacted
services:
  - postgres:latest
  - selenium/standalone-chrome
...
There are files being downloaded in one of the service containers (selenium) which I need to access from the main container run by the GitLab runner. Unfortunately, I cannot seem to find any method to create a volume mount or share of some sort between service containers and the host (※ NOTE: This was not true; see the accepted answer.). Adding commands to specifically copy files from within service containers is also not an option for me. I'm aware of multiple issues requesting such functionality, such as this one:
https://gitlab.com/gitlab-org/gitlab-runner/-/issues/3207
The existence of these open issues indicates to me there is not currently a solution.
I have tried to specify volumes in config.toml, as has been suggested in comments on various GitLab CI issues related to this subject, but this does not seem to create volume mounts on service containers.
Is there any way to create volume mounts inside service containers that are accessible to the runner container, or, if not, is there any simple solution to make files accessible from (and possibly between) service containers?
※ NOTE: This is NOT a docker-compose question, and it is NOT a docker-in-docker question.
If you self-host your runners, you can add volumes to the runner configuration, which applies to services and job containers alike.
Per the documentation:
GitLab Runner 11.11 and later mount the host directory for the defined services as well.
For example:
[runners.docker]
# ...
volumes = ["/path/to/bind/from/host:/path/to/bind/in/container:rw"]
This path will be available both in the job and in the containers defined in services:. But keep in mind that the data (and changes) are visible and persisted across all jobs that use this runner.
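For illustration, a job can then pick up whatever a service wrote into the shared path. A hedged sketch (the job name and commands are made-up examples; the path matches the volumes entry above):

e2e_tests:
  services:
    - selenium/standalone-chrome
  script:
    # files the selenium service downloads into the bind path are visible
    # here, because both containers mount the same host directory
    - ls /path/to/bind/in/container
    - cp -r /path/to/bind/in/container ./downloads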
If I change something in docker-compose.yml, or add a new service, I need to run docker-compose up --build. How can I avoid downtime for the services that were not changed?
You can specify the service you want to build/start. The other services won't be restarted/rebuilt, even if their configuration has changed:
$ docker-compose up --build <service-name>
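If the service declares depends_on dependencies, you can also pass --no-deps so those linked services are left alone, plus -d to run in the background (standard docker-compose flags, but verify against your Compose version):

$ docker-compose up -d --no-deps --build <service-name>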
You would need to implement a load-balanced setup where you have more than one instance of the same service running. Your load balancer, e.g. nginx, would have to know about the various nodes/instances and split traffic between them.
For example, you would duplicate your my-api service with the same configuration as my-api-1 and my-api-2. Then you would rebuild your main image and restart one service at a time.
Doing this allows one service instance to continue serving traffic while the other instance(s) restart with the new image.
There are other implementation details to consider when implementing load-balanced solutions (e.g. health checks), but this is the high-level solution.
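A minimal sketch of such a layout, assuming nginx as the load balancer (service names and the nginx config file are illustrative, not a complete production setup):

services:
  my-api-1:
    image: my-api:latest
  my-api-2:
    image: my-api:latest
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      # nginx.conf (not shown) defines an upstream listing my-api-1 and my-api-2
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

A rolling restart is then, e.g., docker-compose up -d --no-deps --build my-api-1, wait until it is healthy, then the same for my-api-2.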
Using docker stack deploy, I can see the following message:
Ignoring unsupported options: restart
Does it mean that restart policies are not in place?
Do they have to be specified outside the compose file?
You can see this message, for example, with the Joomla compose file available at the bottom of that page.
To deploy the compose file as a stack:
sudo docker swarm init
sudo docker stack deploy -c stackjoomla.yml joomla
A Compose YAML file is used both by the docker-compose tool, for local (single-host) dev and test scenarios, and by Swarm Stacks, for production multi-host concerns.
There are many settings in the Compose file which only work in one tool or the other (docker-compose up vs. docker stack deploy), because some settings are specific to dev and others are specific to production clusters. It's OK that they are there, and you'll see warnings in either tool when settings are included that the specific tool will ignore. This is commonly seen for build: settings (which are docker-compose only) and deploy: settings (which are Swarm Stacks only).
The whole goal here is a single file you can use in both tools, and the relevant sections of the compose file are used in that scenario, while the rest are ignored.
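For example, a single file can carry sections for both tools side by side (a sketch; the image name is a placeholder):

services:
  web:
    build: .            # docker-compose only; docker stack deploy ignores it
    image: myorg/web    # used by both tools
    deploy:             # Swarm Stacks only; docker-compose up ignores it
      replicas: 2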
All of this can be looked up per individual setting in the compose file documentation. If you're often working in Compose YAML, I recommend always having a tab open on this page; I've referenced it almost daily for years, since the spec keeps changing (we're on 3.4+ now).
docker-compose does not restart containers by default, but it can if you set the single restart: setting, as documented here. That setting doesn't work for Swarm Stacks, however. It will show up as a warning in a docker stack deploy to remind you that the setting will not take effect in a Swarm Stack.
Swarm Stacks use restart_policy: under the deploy: setting, which gives finer control with multiple sub-settings. Like all Stack settings, the defaults don't have to be specified in the compose file, and you'll see their default values documented on that docs page.
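As a sketch of those sub-settings (the values here are illustrative, not the defaults):

services:
  joomla:
    image: joomla
    deploy:
      restart_policy:
        condition: on-failure   # none | on-failure | any
        delay: 5s               # wait between restart attempts
        max_attempts: 3
        window: 60s             # how long to wait before deciding a restart succeeded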
There is a list on that page of the settings that won't work in a Swarm Stack, but it looks incomplete, as the restart: setting should be there too. I'll submit a PR to fix that.
Also, in the Joomla example you pointed us to, that README seems out of date as well: it includes links: in the compose example, which are deprecated as of Compose version 2 and not needed anymore (because all containers on a custom virtual network can reach each other now).
If you docker-compose up your application on a Docker host in standalone mode, all that Compose will do is start containers. It will not monitor the state of these containers once they are created.
So it is up to you to ensure that your application will still work if a container dies. You can do this by setting a restart policy.
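In standalone mode, that is the top-level restart: setting; a minimal sketch (the image and service name are placeholders):

services:
  web:
    image: nginx
    restart: unless-stopped   # other values: "no" (the default), always, on-failure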
If you deploy an application into a Docker swarm with docker stack deploy, things are different.
A stack is created that consists of service specifications.
Docker Swarm then makes sure that for each service in the stack, the specified number of instances is running at all times. If a container fails, Swarm will spawn a new instance in order to match the service specification again. In this context, a restart policy does not make any sense, and the corresponding setting in the compose file is ignored.
If you want to stop the containers of your application in swarm mode, you either have to undeploy the whole stack with docker stack rm <stack-name> or scale the service to zero with docker service scale <service-name>=0.
Is there a way to define all the volume bindings either in the Dockerfile or in another configuration file that I am not aware of?
Since volume bindings are used when you create a container, you can't define them in the Dockerfile (which is used to build your Docker image, not the container).
If you want a way to define the volume bindings without having to type them every time, you have the following options:
Create a script that runs the docker command and includes all of the volume options.
If you want to run more than one container, you can also use Docker Compose and define the volume bindings in the docker-compose.yaml file: https://docs.docker.com/compose/compose-file/#/volumes-volumedriver
Out of the two, I prefer Docker Compose, since it includes lots of other useful functionality, e.g. allowing you to define port bindings, links between containers, etc. You could do all of that in a script as well, but as soon as you use more than one container at a time for the same application (e.g. a web server container talking to a database container), Docker Compose makes a lot of sense, since you have the configuration in one place and can start/stop all of your containers with a single command.
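As a sketch of what that looks like (the images, paths, and ports are illustrative):

services:
  web:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - ./site:/usr/share/nginx/html:ro    # bind mount from the host
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume managed by Docker

volumes:
  db-data:

With this in place, docker-compose up -d creates both containers with all their bindings, and docker-compose down removes them, with no long docker run commands to retype.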