Our team is new to running a microservice ecosystem, and I am curious how one would conditionally start Docker containers from a Compose file, or from some other variable-driven script.
An example use case:
Doing front-end development that depends on a few different services. We will label those DockerA through DockerD.
Dependency Matrix
Feature1 - DockerA
Feature2 - DockerA and DockerB
Feature3 - DockerA and DockerD
I would like to be able to run something like the following:
docker-compose --feature1
or
magic-script -service DockerA -service DockerB
Basically, I would like to run one command that conditionally starts only the APIs I need.
I am already aware of using various mock servers for UI development, but I want to avoid them.
Any thoughts on how to configure this?
You can stop all services after creating them and then selectively start them one by one. E.g.:
version: "3"
services:
web1:
image: nginx
ports:
- "80:80"
web2:
image: nginx
ports:
- "8080:80"
docker-compose up -d
Creating network "composenginx_default" with the default driver
Creating composenginx_web2_1 ... done
Creating composenginx_web1_1 ... done
docker-compose stop
Stopping composenginx_web1_1 ... done
Stopping composenginx_web2_1 ... done
Now any service can be started individually, e.g.:
docker-compose start web2
Starting web2 ... done
Also, for linked services, there's the scale command, which can change the number of running containers for a service (containers can be added without a restart).
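If your docker-compose version supports service profiles (roughly 1.28+ with the 3.9 file format), they map even more directly onto the feature flags you describe. A minimal sketch, where the image names are hypothetical placeholders for your real services:

version: "3.9"
services:
  dockerA:
    image: service-a-image            # hypothetical image
    profiles: ["feature1", "feature2", "feature3"]
  dockerB:
    image: service-b-image            # hypothetical image
    profiles: ["feature2"]
  dockerD:
    image: service-d-image            # hypothetical image
    profiles: ["feature3"]

Then docker-compose --profile feature2 up -d starts only dockerA and dockerB, which is close to the docker-compose --feature1 syntax you sketched. Alternatively, plain docker-compose up -d dockerA dockerB starts only the named services (plus their depends_on dependencies) without any profile machinery.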
Given this compose file:
version: '3.8'
services:
  whoami1:
    image: containous/whoami
    depends_on:
      - whoami2
  whoami2:
    image: containous/whoami
when deployed to Docker Swarm with docker stack deploy -c docker-compose.yaml test, the services whoami1 and whoami2 seem to start in random order and ignore the depends_on condition:
docker stack deploy -c docker-compose.yaml test
Creating network test_default
Creating service test_whoami1
Creating service test_whoami2
Does docker swarm support service startup sequencing via dependencies?
No, at least not built in.
Even with depends_on, whoami2 may not yet be ready to interact with whoami1, because it may need time to boot itself:
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running. There’s a good reason for this.
https://docs.docker.com/compose/startup-order/
The docs hint at two possibilities for checking whether whoami2 is ready:
Use a tool such as wait-for-it, dockerize, or sh-compatible wait-for. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
And depends_on is indeed ignored for docker swarm:
There are several things to be aware of when using depends_on:
(...)
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
https://docs.docker.com/compose/compose-file/#depends_on
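Applying the wait-for-it suggestion to the example above, a minimal sketch might look like the following. It assumes a custom whoami1 image that has wait-for-it.sh baked in, and that whoami2 listens on port 80 (the default for containous/whoami):

version: '3.8'
services:
  whoami1:
    # assumption: a custom Dockerfile COPYs wait-for-it.sh into the image
    build: ./whoami1-with-wait-for-it
    # block the real command until whoami2 accepts TCP connections
    entrypoint: ["wait-for-it.sh", "whoami2:80", "--"]
    command: ["/whoami"]
  whoami2:
    image: containous/whoami

Since depends_on is ignored in swarm mode anyway, the TCP poll in the entrypoint is what actually provides the ordering.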
I am writing a docker-compose file for my web app. If I use links to connect services with each other, do I also need to include ports? And is depends_on an alternative to links? What is the best way to connect the services in a compose file to one another?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is listening on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number, and the remapping has no effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example. If you look at the "Starting..." messages that Compose prints at startup, this forces server to begin starting before client does, but it is not a guarantee that server is up and ready to serve requests (a very common problem with database containers). And if you start only part of the stack with docker-compose up client, this also causes server to start with it.
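For reference, the client stanza from the minimal example with that change applied:

  client:
    image: busybox
    command: wget -O- http://server/
    depends_on:
      - server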
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt, for example) but has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason not to use it.
I am fairly new to Docker.
My problem is: I have a multi-container application (zookeeper, kafka, producer, consumer) in which I want to run the producer and consumer containers only after the zookeeper and kafka containers are up and running. How do I edit the docker-compose file to achieve that? Thanks in advance.
The depends_on instruction only lets you manage the startup order.
It does not wait for your service to be up and running before triggering the next one.
If you want to start a service only after its dependency is up, you'll have to look at tools like wait-for-it or dockerize.
You can find more info in the official Docker documentation.
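As a sketch of what that could look like with wait-for-it, assuming the image names, the Kafka port, and the producer/consumer commands shown here (they are placeholders, not a tested setup):

version: "3.7"
services:
  zookeeper:
    image: zookeeper
  kafka:
    image: wurstmeister/kafka        # assumption: a Kafka image listening on 9092
    depends_on:
      - zookeeper
  producer:
    build: ./producer                # assumption: Dockerfile COPYs wait-for-it.sh in
    # poll kafka:9092 until it accepts TCP connections, then run the producer
    entrypoint: ["wait-for-it.sh", "kafka:9092", "--"]
    command: ["./run-producer"]      # hypothetical producer command
    depends_on:
      - kafka
  consumer:
    build: ./consumer                # same assumption as the producer
    entrypoint: ["wait-for-it.sh", "kafka:9092", "--"]
    command: ["./run-consumer"]      # hypothetical consumer command
    depends_on:
      - kafka

depends_on still keeps the start order sane, while the wait-for-it entrypoint provides the actual readiness check.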
Check the depends_on directive in the docker-compose documentation.
From the docs:
depends_on: Express dependency between services. Service dependencies cause the following behaviors:
docker-compose up starts services in dependency order. In the following example, db and redis are started before web.
docker-compose up SERVICE automatically includes SERVICE's dependencies. In the following example, docker-compose up web also creates and starts db and redis.
docker-compose stop stops services in dependency order. In the following example, web is stopped before db and redis.
Simple example:
version: "3.7"
services:
web:
build: .
depends_on:
- db
- redis
redis:
image: redis
db:
image: postgres
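To see those behaviors with this file, you could run:

docker-compose up web      # also creates and starts db and redis
docker-compose stop        # stops web before db and redis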
I have an application that performs processing over a data feed. The process is divided into tasks, so I structured my docker-compose.yml file like this:
task1-service:
  image: task1-image
task2-service:
  image: task2-image
task3-service:
  image: task3-image
Each task service is triggered by the end of the previous one and triggers the next, after which it can exit, so there's no point in keeping every service running.
I wonder if there's a solution that keeps them all stopped and starts each service on demand when needed.
I don't know if docker-compose is the correct tool for this, but I like the idea of keeping the whole system described in one yml file. Other solutions are appreciated as well.
Thanks
It is possible to approach this in different ways, and one of them is with docker-compose.
First, you can start one specific service (taskX-service) using docker-compose up -d <service_name>. There are more details in "docker-compose up for only certain containers".
Second, docker-compose also lets you configure dependencies between containers. If you want to run them in order, you can specify that with the depends_on: structure.
For example, to execute tasks 1, 2, and 3 in order you could use:
task1-service:
  image: task1-image
task2-service:
  image: task2-image
  depends_on:
    - task1-service
task3-service:
  image: task3-image
  depends_on:
    - task2-service
Furthermore, this docker-compose.yml is compatible with the first point:
docker-compose up -d task1-service
docker-compose up -d task2-service (also launches task1-service)
docker-compose up -d task3-service (also launches task1-service and task2-service)
If you don't specify any service, docker-compose down stops all the containers in the compose file.
I hope it's useful for you.
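If the tasks are one-shot jobs rather than long-running services, docker-compose run may fit the "start on demand" idea even better: it starts a single service (plus its depends_on dependencies) as a one-off container and lets it exit:

docker-compose run --rm task1-service    # run task1 once, remove its container afterwards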
I'm running a simple Rails app in Docker using docker-compose (formerly fig) like this:
docker-compose.yml
db:
  image: postgres
  volumes:
    - pgdata:/var/lib/postgresql/data
web:
  build: .
  command: bundle exec rails s -b 0.0.0.0
  volumes:
    - .:/usr/src/app
  ports:
    - "3011:3000"
  links:
    - db
Dockerfile
FROM rails:onbuild
I need to run some periodic maintenance scripts, such as database backups, pinging sitemaps to search engines, etc.
I'd prefer not to use cron on my host machine, since I want to keep the application portable; my idea is to use docker-compose to link in an image such as https://registry.hub.docker.com/u/hamiltont/docker-cron/.
The official rails image does not have SSH enabled, so I cannot just have the cron container SSH into the web container and run the scripts.
Does docker-compose have a way for a container to gain a shell into a linked container to execute some commands?
What do you actually want to do with your containers? If you need to access some files from a container's file system, you should just mount the volume into the ancillary container (consider the --volumes-from option).
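With the compose file above, that would look roughly like this (the cron image is the one linked in the question; whether its scripts can use the mounted volumes as-is is an assumption):

cron:
  image: hamiltont/docker-cron
  volumes_from:
    - web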
Any SSH interaction between containers is considered bad practice (at least since Docker 1.3, when docker exec was implemented). Running more than one process inside a container (i.e., anything besides postgres or rails in your case) results in significant overhead: in order to run sshd alongside rails you would have to deploy something like supervisord.
But if you really need some kind of nonstandard interaction between the containers, and you're sure that you really need it, I would suggest using one of the full-featured Docker client libraries (like docker-py), which will let you launch docker exec in a programmable way.
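For example, even without a client library, docker exec already lets you run a maintenance task inside the running web container without SSH (the rake task name here is hypothetical):

# resolve the web service's container ID, then run the task inside it
docker exec $(docker-compose ps -q web) bundle exec rake db:backup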