Docker: docker compose file for "docker stack deploy"

I have a docker-compose.yml file which works with docker-compose up --build. My app works and everything is fine.
version: '3'
services:
  myapp:
    container_name: myapp
    restart: always
    build: ./myapp
    ports:
      - "8000:8000"
    command: /usr/local/bin/gunicorn -w 2 -b :8000 flaskplot:app
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - myapp
But when I use docker stack deploy -c docker-compose.yml myapp, I get the following error:
Ignoring unsupported options: build, restart
Ignoring deprecated options:
container_name: Setting the container name is not supported.
Creating network myapp_default
Creating service myapp_myapp
failed to create service myapp_myapp: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
Any hints on how I should "translate" the docker-compose.yml file to make it compatible with docker stack deploy?

To run containers in swarm mode, you do not build them on each swarm node individually. Instead, you build the image once, typically on a CI server, push it to a registry server (often locally hosted, or you can use Docker Hub), and specify the image name inside your compose file with an "image" section for each service.
Doing that will get rid of the hard error. You'll likely remove the build section of the compose file since it no longer applies.
Specifying "container_name" is unsupported because it would break the ability to scale or perform updates (a container name must be unique within the docker engine). Let swarm name the containers and reference your app on the docker network by it's service name.
Specifying "depends_on" is not supported because containers may be started on different nodes, and rolling updates/failure recovery may remove some containers providing a service after the app started. Docker can retry the failing app until the other service starts up, or preferably you configure an entrypoint that waits for the dependencies to become available with some kind of ping for a minute or two.
Without seeing your Dockerfile, I'd also recommend setting up a healthcheck on each image. Swarm mode uses this to control rolling updates and recover from application failures.
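As a sketch, a compose-level healthcheck under the myapp service might look like the following; it assumes curl is installed in the image and that the Flask app answers on /:

    healthcheck:
      # mark the task unhealthy if the app stops answering on port 8000
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]
      interval: 30s
      timeout: 5s
      retries: 3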
Lastly, consider adding a "deploy" section to your compose file. This tells swarm mode how to deploy and update your service, including how many replicas, constraints on where to run, memory and CPU limits and requirements, and how fast to update the service. You can define a restart policy there as well, but I recommend against it since I've seen docker engines restart containers in ways that conflict with swarm mode deploying containers on other nodes, or even a new container on the same node.
You can see the full compose file documentation with all of these options here: https://docs.docker.com/compose/compose-file/
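For reference, a minimal swarm-ready sketch of your file could look like this; the registry and image names are assumptions, and you would build and push those images yourself beforehand:

version: '3'
services:
  myapp:
    # image built and pushed ahead of time, e.g. with docker build and docker push
    image: registry.example.com/myapp:1.0
    ports:
      - "8000:8000"
    command: /usr/local/bin/gunicorn -w 2 -b :8000 flaskplot:app
    deploy:
      replicas: 1
  nginx:
    image: registry.example.com/nginx:1.0
    ports:
      - "80:80"
    deploy:
      replicas: 1

Deploy it with docker stack deploy -c docker-compose.yml myapp as before; nginx reaches the app at myapp:8000 via the service name on the stack's default overlay network.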

Related

Docker Compose network_mode - adding argument causes local testing to fail

I'm trying to build an application that is able to use local integration testing via Docker Compose with Google Cloud emulator containers, while also being able to run that same Docker Compose configuration on a Docker-based CI/CD tool (Google Cloud Build).
The kind of docker-compose.yml configuration I'm using is:
version: '3.7'
services:
  main-application:
    build:
      context: .
      target: dev
    image: main-app-dev
    container_name: main-app-dev
    network_mode: $DOCKER_NETWORK
    environment:
      - MY_ENV=my_env
    command: ["sh", "-c", "PYTHONPATH=./ python app/main.py"]
    volumes:
      - ~/.config:/home/appuser/.config
      - ./app:/home/appuser/app
      - ./tests:/home/appuser/tests
    depends_on:
      - firestore
  firestore:
    image: google/cloud-sdk
    container_name: firestore
    network_mode: $DOCKER_NETWORK
    environment:
      - GCP_PROJECT_ID=dummy-project
    command: ["sh", "-c", "gcloud beta emulators firestore start --project=$$GCP_PROJECT_ID --host-port=0.0.0.0:9000"]
    ports:
      - "9000:9000"
I added the network_mode argument so the configuration can use the "cloudbuild" network that is available in the CI/CD pipeline, which is currently working perfectly. However, this network is not available to local Docker, which is why I've tried to use an environment variable to switch between the local and Cloud Build CI/CD environments.
Before I added these network_mode params/args for the CI/CD, the local testing was working just fine. However, since I added them, my application either can't run, or can't connect to its accompanying services, like the firestore one specified in the YAML above.
I have tried the following valid Docker network modes with no success:
"bridge" - runs the service, but doesn't allow connection between containers
"host" - doesn't allow the service to run because of not being compatible with assigning ports
"none" - doesn't allow the service to connect externally
"service" - doesn't allow the service to run due to invalid mode/service
Anyone able to provide advice on what I'm missing here?
I would assume one of these network modes would be what Docker Compose would be using if the network_mode is not assigned, so I'm not sure why all of them are failing.
I want to avoid having a separate cloud build file for the remote and local configurations, and would also like to avoid the hassle of setting up my own docker network locally. Ideally, if there were some way of applying network_mode only remotely, that would work best in my case, I think.
TL;DR:
Specifying network_mode does not give me the same result as not specifying it when running docker-compose up locally.
Due to running the configuration in the cloud I can't avoid specifying it.
Found a solution thanks to this thread and the comment by David Maze.
As far as I understand it, when Docker Compose is not given a specific network_mode for the containers, it creates its own private default network, named (by default) after the folder in which the docker-compose.yml file lives.
Specifying a network mode like the default "bridge" network, instead of this custom network created by Docker Compose, means container discovery between services isn't possible: main-application couldn't find the firestore:9000 container.
Basically, all I had to do was set the network_mode variable to myapplication_default (given a folder named "MyApplication" containing the docker-compose.yml), to force both services to use the same custom network that docker-compose up sets up.
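Since docker-compose supports default values in variable substitution, one sketch of this (assuming the project folder is named "MyApplication", so the default network is myapplication_default) is to let the CI pipeline set DOCKER_NETWORK while local runs fall back to the compose default network:

services:
  main-application:
    # CI sets DOCKER_NETWORK=cloudbuild; locally it falls back to the compose default network
    network_mode: ${DOCKER_NETWORK:-myapplication_default}
  firestore:
    network_mode: ${DOCKER_NETWORK:-myapplication_default}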

docker stack deploy depends_on

Given compose file
version: '3.8'
services:
  whoami1:
    image: containous/whoami
    depends_on:
      - whoami2
  whoami2:
    image: containous/whoami
When deployed to docker swarm with docker stack deploy -c docker-compose.yaml test, the services whoami1 and whoami2 seem to start in random order and ignore the depends_on condition.
docker stack deploy -c docker-compose.yaml test
Creating network test_default
Creating service test_whoami1
Creating service test_whoami2
Does docker swarm support service startup sequencing via dependencies?
No, at least not built in.
Even with depends_on, whoami2 may not yet be ready to interact with whoami1, because it may need time to boot itself:
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running. There’s a good reason for this.
https://docs.docker.com/compose/startup-order/
They hint at two possibilities to check whether whoami2 is ready.
Use a tool such as wait-for-it, dockerize, or sh-compatible wait-for. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
And depends_on is indeed ignored for docker swarm:
There are several things to be aware of when using depends_on:
(...)
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
https://docs.docker.com/compose/compose-file/#depends_on
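As a sketch of the wrapper-script approach (the image name is hypothetical: containous/whoami ships no shell, so you would need to build an image that adds bash and wait-for-it.sh on top of your app):

version: '3.8'
services:
  whoami1:
    image: mywhoami-with-waitforit   # assumed custom image containing wait-for-it.sh
    # block until whoami2 accepts TCP connections on port 80, then start the app
    entrypoint: ["/wait-for-it.sh", "whoami2:80", "--timeout=60", "--", "/whoami"]
  whoami2:
    image: containous/whoami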

Spinning docker image into multiple containers

I am working on building an automated CI/CD pipeline for a LAMP application using Docker.
I want the image to be spun up into 5 containers, so that 5 different developers can work on their code. Can this be attained? I tried it using replicas, but it didn't work out.
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:80"
    deploy:
      mode: replicated
      replicas: 4
The error which I get:
#!/bin/bash -eo pipefail
docker-compose up
ERROR: The Compose file './docker-compose.yml' is invalid because:
Additional properties are not allowed ('jobs' was unexpected)
You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1. For more on the Compose file format versions, see docs.docker.com/compose/compose-file
Exited with code 1
Also, from the different containers, can developers push, pull and commit to git? Will work done in one container get lost if the image is rebuilt or rerun?
What things should I actually take care of while building this pipeline?
First of all, build your image separately using a Dockerfile with docker build -t <image name>:<version/tag> ., then use the following compose file with docker stack deploy to deploy your stack.
version: '3'
services:
  web:
    image: <image name>:<version/tag>
    ports:
      - "8080:80"
    deploy:
      mode: replicated
      replicas: 4
The deploy attribute should be inside a service because it describes the number of replicas that service must have; it is not a top-level attribute like services. That seems to be the only problem in your compose file, and docker-compose up is complaining about it when run from the pipeline.
Update
You cannot run multiple replicas with a single docker-compose command. To run multiple replicas from a compose.yml, create a swarm by executing docker swarm init on your machine.
Afterward, simply replace docker-compose up with docker stack deploy -c docker-compose.yml <stack name>; docker-compose simply ignores the deploy attribute.
For details on differences between docker-compose up and docker stack deploy <stack name> refer to this article: https://nickjanetakis.com/blog/docker-tip-23-docker-compose-vs-docker-stack

Docker compose/swarm 3: docker file path, build, container name, links, migration

I have a project with a docker-compose file and want to migrate to V3, but when I deploy with
docker stack deploy --compose-file=docker-compose.yml vertx
it does not understand the build path, links, container names...
My file is located here:
https://github.com/armdev/vertx-spring/blob/master/docker-compose.yml
version: '3'
services:
  eureka-node:
    image: eureka-node
    build: ./eureka-node
    container_name: eureka-node
    ports:
      - '8761:8761'
    networks:
      - vertx-network
  postgres-node:
    image: postgres-node
    build: ./postgres-node
    container_name: postgres-node
    ports:
      - '5432:5432'
    networks:
      - vertx-network
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: socnet
      POSTGRES_DB: socnet
  vertx-node:
    image: vertx-node
    build: ./vertx-node
    container_name: vertx-node
    links:
      - postgres-node
      - eureka-node
    ports:
      - '8585:8585'
    networks:
      - vertx-network
networks:
  vertx-network:
    driver: overlay
When I run docker-compose up, it works, but with stack deploy it does not.
How do I define the path for the Dockerfile?
docker stack deploy works only with images, not with builds.
This means that you will have to push your images (created with the build process) to an image registry; docker stack deploy will then download the images and run them.
Here you have an example of how it was done for a PHP application.
You have to pay attention to parts 1, 3 and 4.
The articles are about PHP, but can easily be applied to any other language.
The swarm mode "docker service" interface has a few fundamental differences in how it manages containers. You are no longer directly running containers like with "docker run", and it is assumed that you will be doing this in a distributed environment more often than not.
I'll break down the answer by these specific things you listed.
It does not understand build path, links, container names...
Links
The link option has been deprecated for quite some time in favor of the network service discovery feature introduced alongside the "docker network" feature. You no longer need to specify specific links to/from containers. Instead, you simply need to ensure that all containers are on the same network, and then they can discover each other by container name or "network alias".
docker-compose will put all your containers onto the same network by default, and it sets up the compose service name as an alias. That means if you have a service called 'postgres-node', you can reach it via DNS by the name 'postgres-node'.
Container Names
The "docker service" interface allows you to declare a desired state. "I want x number of identical services". Since the interface must support x number of instances of a service, it doesn't allow you to choose the specific container name. Instead, you get to choose the service name. In the case of 'docker stack deploy', the service name defined under the services key in your docker-compose.yml file will be used, but it will also prepend the stack name to the service name.
In most cases, I would argue that overriding the container name in a docker-compose.yml file is unnecessary, even when using regular containers via docker-compose up.
If you need a different name for network service discovery purposes, add a different alias or use the service name alias that you get when using docker-compose or docker stack deploy.
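For example, if other services should also reach postgres-node under a shorter name such as db, you can add a network alias rather than forcing a container name (the alias here is just an illustration):

version: '3'
services:
  postgres-node:
    image: postgres-node
    networks:
      vertx-network:
        aliases:
          - db   # extra DNS name on this network, in addition to the service name
networks:
  vertx-network:
    driver: overlay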
Build Path
Because swarm mode was built to be a distributed system, building an image in place locally isn't something that "docker stack deploy" was meant to do. Instead, you should build and push your image to a registry that all nodes in your cluster can access.
In the case where you are using a single node swarm "cluster", you should be able to use the docker-compose build option to get the images built locally, and then use docker stack deploy.
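A common sketch for that single-node case is to keep both build and image in the same file, so docker-compose build produces the tags that docker stack deploy then looks up locally (the tag names are just examples):

version: '3'
services:
  eureka-node:
    build: ./eureka-node
    image: eureka-node:latest   # tag created by docker-compose build, reused by docker stack deploy
  postgres-node:
    build: ./postgres-node
    image: postgres-node:latest
  vertx-node:
    build: ./vertx-node
    image: vertx-node:latest

Run docker-compose build first, then docker stack deploy --compose-file=docker-compose.yml vertx; swarm will warn that it ignores the build keys but will find the locally tagged images.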

How to link multiple Docker containers and encapsulate the result?

I have a Node.js web-application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (using also a Neo4j Docker container), but I'm a docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would imply running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search for 'dind' online and discover why in most cases it is a bad idea, but as it is not the main object of your question I won't expand on this subject any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at the Using Supervisor with Docker article from the Docker documentation website, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e. running one and only one app within a Docker container) will make your life easier in the long term.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of:
having your neo4j container expose the port its service listens on
running your node.js container with the --link <neo4j container name>:<alias> option
within the node.js application configuration, setting the neo4j host to the <alias> hostname; docker will take care of forwarding that connection to the IP it assigned to the neo4j container
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links.
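For reference, in the old v1-style compose format used later in this answer, that looks roughly like:

web:
  build: .
  links:
    - db   # legacy linking, deprecated in favor of networks
db:
  image: neo4j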
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define any alias for the db. As a result, you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory that contains your docker-compose.yml file, the service name as found in the yml file, and a number.
For instance, the following docker-compose.yml file, if within a directory named "foo" would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
when started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
when started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from the GitHub project aseemk/node-neo4j-template, which uses nodejs and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
be based on the official node docker image
clone the nodeJS example project from Github
change the working directory to the directory containing the git clone
run the npm install command to download and install the nodeJS app dependencies
instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well people took care of that for us already. We will use the official Docker image for neo4j which can be found on the Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. Setting this variable to none will disable authentication altogether.
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container with the NEO4J_AUTH environment variable set to none.
The web service will produce a container named at Docker Compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container with the environment variable NEO4J_URL set to http://my-neo4j-db:7474 (note how we use the name of the neo4j container, my-neo4j-db, here). Furthermore, docker compose will instruct the Docker engine to publish the web container's port 3000 on docker host port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker compose will read the docker-compose.yml file, figure out it has to first build a docker image for the web service, then create and start both containers and finally will provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is ready and you can point your web browser to http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, docker compose will create a neo4j-data directory and mount it into the container at /data.
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid name conflict for the neo4j container and the port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.
