I have a Docker Swarm cluster (currently 5 machines) where I run everything as a Docker Stack, deploying from the manager1 host:
$ docker stack deploy -c docker-compose.yml mystack
I use Traefik as a reverse proxy.
I wanted to add a Syncthing container to share some data between nodes, so I want it to run on each node. This is achieved thanks to the option:
deploy:
  mode: global
This properly creates the containers I want, one per node.
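For context, a minimal sketch of the full service definition (the image name is a placeholder; only the deploy block is the part that matters here):

version: "3"
services:
  syncthing:
    image: syncthing/syncthing   # placeholder image name
    deploy:
      mode: global               # one task per swarm node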
I then want to access each Syncthing instance through Traefik, with unique URLs like this:
frontend: manager1.syncthing.mydomain.com --> backend: syncthing container on host manager1
frontend: worker1.syncthing.mydomain.com --> backend: syncthing container on host worker1
frontend: worker2.syncthing.mydomain.com --> backend: syncthing container on host worker2
...
I fail to find the proper configuration for this (is it even possible?).
I thought about substituting a variable in the docker-compose file like so:
deploy:
  ...
  labels:
    ...
    - "traefik.frontend.rule=Host:${HOSTNAME}.syncthing.mydomain.com"
Even though $HOSTNAME is defined on all nodes (including the manager), this fails; Traefik creates a useless route: ".syncthing.mydomain.com". Some research has shown that this should at least not substitute ${HOSTNAME} with "" (pull/30781) but with "manager1". Anyway, I think this can safely be expected not to work, as the substitution would probably be done on the manager1 node where the docker stack deploy command is run.
As a workaround, I can define one service per node and use placement constraints to pin each one to its node (a sketch follows); but this does not scale, as new nodes would have to be added manually.
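A rough sketch of that workaround (the image is a placeholder and other Traefik labels are omitted):

services:
  syncthing_manager1:
    image: syncthing/syncthing          # placeholder image
    deploy:
      placement:
        constraints:
          - node.hostname == manager1   # pin this copy to manager1
      labels:
        - "traefik.frontend.rule=Host:manager1.syncthing.mydomain.com"
  syncthing_worker1:
    image: syncthing/syncthing
    deploy:
      placement:
        constraints:
          - node.hostname == worker1
      labels:
        - "traefik.frontend.rule=Host:worker1.syncthing.mydomain.com"
  # ...one more block per node, which is what does not scale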
Any help would be greatly appreciated.
PS:
I run everything as arm on raspberry pi.
Docker version 17.05.0-ce, build 89658be
docker-compose version 1.9.0, build 2585387
traefik:cancoillotte
Related
I have a docker-compose file that exposes 2 services, a master service and a slave service. I want to be able to scale the slave service to some number of instances using
docker-compose up --scale slave=N
However, one of the options I must specify in the command run by the master service is the number of slave instances to expect. E.g. if I scale slave=10, I need to set --num-slaves=10 in the command on the master service.
Is there a way to determine the number of instances of a given service either from the docker-compose file itself, or from a customized entrypoint shellscript?
The problem I'm facing is that since there is no way I've yet found to specify the number of scaled instances from within the docker-compose file format itself, I'm relying on the person running the command to enter the scale factor consistently and to have that value align with the value I need to tell the master node to expect. And trusting users to do the right thing is a recipe for disaster. If I could continue to let the user specify the scale value on the command line, I need a way to determine what that value is at runtime.
scale is not supported with up in Compose file version 3, but you may use replicas:
version: "3.7"
services:
redis:
image: redis:latest
deploy:
replicas: 1
and run it using:
docker-compose --compatibility up -d
docker-compose 1.20.0 introduces a new --compatibility flag designed
to help developers transition to version 3 more easily. When enabled,
docker-compose reads the deploy section of each service’s definition
and attempts to translate it into the equivalent version 2 parameter.
Currently, the following deploy keys are translated:
resource limits and memory reservations
replicas
restart_policy condition and max_attempts
but:
Do not use this in production!
We recommend against using --compatibility mode in production. Because
the resulting configuration is only an approximation using non-Swarm
mode properties, it may produce unexpected results.
see this
PS:
Docker container names must be unique; you cannot scale a service beyond 1 container if you have specified a custom name. Attempting to do so results in an error.
Unfortunately, there is no way to define replicas for docker-compose: it only works for Docker Swarm, as the documentation specifies (link):
Tip: Alternatively, in Compose file version 3.x, you can specify replicas under the deploy key as part of a service configuration for Swarm mode. The deploy key and its sub-options (including replicas) only works with the docker stack deploy command, not docker-compose up or docker-compose run.
So if you have the deploy section in the yaml, but run it with docker-compose, then it will not take any effect.
version: "3.3"
services:
alpine1:
image: alpine
container_name: alpine1
command: ["/bin/sleep", "10000"]
deploy:
replicas: 4
alpine2:
image: alpine
container_name: alpine2
command: ["/bin/sleep", "10000"]
deploy:
replicas: 2
So the only way to scale up in docker compose is by running the scale command manually.
docker-compose scale alpine1=3
Note: I had a job where they loved docker-compose, so we had bash scripts to perform operations such as the ones you describe. For example, we would have something like ./controller-app.sh scale test_service=10, and it would run docker-compose scale test_service=10.
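A rough sketch of such a wrapper (controller-app.sh is the name used above; the script body itself is an assumption):

#!/bin/sh
# controller-app.sh -- thin wrapper around docker-compose (illustrative only)
case "$1" in
  scale)
    shift
    # e.g. ./controller-app.sh scale test_service=10
    docker-compose scale "$@"
    ;;
  *)
    echo "usage: $0 scale <service>=<count>" >&2
    exit 1
    ;;
esac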
UPDATE
To check the number of replicas, you can mount the docker socket into your container and then run docker ps --format '{{ .Names }}' | grep $YOUR_CONTAINER_NAME.
Here is how you would mount the socket.
docker run -v /var/run/docker.sock:/var/run/docker.sock -it alpine sh
Install docker
apk update
apk add docker
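Putting that together, a hedged sketch of counting replicas from inside the container (the name pattern in $YOUR_CONTAINER_NAME is whatever your service's containers are called):

# count running containers whose names match the service pattern
REPLICAS=$(docker ps --format '{{.Names}}' | grep -c "$YOUR_CONTAINER_NAME")
echo "detected $REPLICAS replicas"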
I have the below cassandra-compose.yml file which I am trying to deploy on a docker swarm with the below command.
docker stack deploy --compose-file=cassandra-compose.yml cassandra-service
Issue:
The service is getting created but no replicas are running. When I inspected the issue, I found that the mounted folder, i.e. ~/user/cassandraBackup, is not present, and that is why the containers aren't started.
I tried to run it using docker-compose and it got executed successfully.
Can somebody tell me how to run it using stack deploy?
cassandra-compose.yml:
version: '3.1'
services:
  multinode:
    image: cassandra:3.9
    deploy:
      replicas: 2
    volumes:
      - ~/user/cassandraBackup/:/var/lib/cassandra/data
    ports:
      - 7000:7000
      - 7001:7001
      - 7199:7199
      - 9042:9042
      - 9160:9160
Try using this command:
docker stack deploy --compose-file=cassandra-compose.yml -f cassandra-compose.yml cassandra-service
It will get your issue resolved
From this answer https://stackoverflow.com/a/48972713/2138959:
Docker Swarm BIND MOUNTS
If you bind mount a host path into your service’s containers, the path must exist on every swarm node. The Docker swarm mode scheduler can schedule containers on any machine that meets resource availability requirements and satisfies all constraints and placement preferences you specify.
So you have to make sure the path is available on every node in the cluster that the task can be scheduled to. But even with this approach, a container can be scheduled to a node with no data or outdated data, so it is better to pin services to specific nodes.
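For example, a sketch of pinning the service with a placement constraint based on a node label (the label name is an assumption):

# label the nodes that hold the Cassandra data first:
#   docker node update --label-add cassandra_data=true <node-name>
services:
  multinode:
    image: cassandra:3.9
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.labels.cassandra_data == true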
I started a Flask API service on a Docker Swarm cluster with 1 master and 3 worker nodes. I deployed the task using the following docker-compose file:
version: '3'
services:
  xgboost-model-api:
    image: xgboost-model-api
    ports:
      - "5000:5000"
    deploy:
      mode: global
    networks:
      - xgboost-net
networks:
  xgboost-net:
I deployed the task using the following docker swarm command,
docker stack deploy --compose-file docker-compose.yml xgboost-swarm
However, the task was started only on my master node and not on any worker node.
$ docker service ls
ID             NAME                             MODE        REPLICAS  IMAGE
pgd8cktr4foz   viz                              replicated  1/1       dockersamples/visualizer
twrpr4av4c7f   xgboost-swarm_xgboost-model-api  global      1/4       xgboost-model-api
xxrfn1w7eqw6   dockercloud-server-proxy         global      1/1       dockercloud/server-proxy
Dockerfile being used is here. Any thoughts on why this behavior occurs would be appreciated.
As stated in this thread (duplicate?):
If you are using a private registry, it's important to share the login and credentials with the worker nodes by using
docker stack deploy --with-registry-auth
---- UPDATE
From your compose file it doesn't look like you are using a private registry. Generally speaking, if containers can't start successfully on the workers, they will end up on the manager.
Some possible reasons for this are:
Can't access private registry (fix with --with-registry-auth)
Application requires some change on the host to run (like elasticSearch requires vm.max_map_count=262144)
Healthcheck fails on the other nodes because of a poorly written healthcheck
Network setting issues preventing pulling an image
Try removing your stack and running it again. Then run docker service ps --no-trunc {serviceName}; this might show you the tasks that should run the service on another node and why they failed.
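For example (stack and service names taken from the docker service ls output above):

docker stack rm xgboost-swarm
docker stack deploy --compose-file docker-compose.yml xgboost-swarm
# show all tasks for the service, including the error message for the ones that failed
docker service ps --no-trunc xgboost-swarm_xgboost-model-api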
Check out this SO thread for more troubleshooting tips.
I'm absolutely new to Docker and Jenkins. I have a question about the configuration of the Dockerfile and docker-compose.yml file. I tried to use the simplest configuration to be able to set up these files correctly. Building and pushing work correctly, but the Jenkins application is not running on my localhost (127.0.0.1).
If I understand it correctly, it should now be running on port 50000 by default (ARG agent_port=50000 in the "official" Jenkins Dockerfile). I tried using 50000, 8080 and 80 as well; nothing is working. Do you have any advice, please? I'm using these files: https://github.com/fdolsky321/Jenkins_Docker
The second question is: what's the best way to handle crashes of the container? Let's say that if the container crashes, I want to recreate a new container with the same settings. Is the best way just to create a new shell file like "crash.sh" that holds the information needed to create a new container with the same settings? As mentioned here: https://blog.codeship.com/ensuring-containers-are-always-running-with-dockers-restart-policy/
Thank you for any advice.
docker-compose for Jenkins
docker-compose.yml
version: '2'
services:
  jenkins:
    image: jenkins:latest
    ports:
      - 8080:8080
      - 50000:50000
    # uncomment for docker in docker
    privileged: true
    volumes:
      # enable persistent volume (warning: make sure that the local jenkins_home folder is created)
      - /var/wisestep/data/jenkins_home:/var/jenkins_home
      # mount docker sock and binary for docker in docker (only works on linux)
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
Replace the ports 8080 and 50000 as needed on your host.
To recreate a new container with the same settings
The volume mounted at jenkins_home is the place where you store all your jobs, settings, etc.
Take a backup of the mounted jenkins_home volume whenever you create a job, or in whatever way suits you.
Whenever there is a crash, run Jenkins with the same docker-compose file and replace the jenkins_home folder with the backup.
Rerun/restart Jenkins again.
List the container
docker ps -a
Restart container
docker restart <Required_Container_ID_To_Restart>
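Putting the backup/restore steps together, a rough sketch (the backup location is an assumption; jenkins_home path as in the compose file above):

# back up the mounted jenkins_home folder
tar czf /var/backups/jenkins_home.tar.gz -C /var/wisestep/data jenkins_home

# after a crash: restore the backup, then bring Jenkins back up
tar xzf /var/backups/jenkins_home.tar.gz -C /var/wisestep/data
docker-compose up -d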
I've been using a docker-compose.yml that looks like the following:
version: '3.2'
volumes:
  jenkins-home:
services:
  jenkins:
    image: jenkins-docker
    build: .
    restart: unless-stopped
    ports:
      - target: 8080
        published: 8080
        protocol: tcp
        mode: host
    volumes:
      - jenkins-home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: jenkins-docker
My image is a locally built Jenkins image, based off of jenkins/jenkins:lts, that adds in some other components like docker itself, and I'm mounting the docker socket to allow me to run commands on the docker host. This may not be needed for your use case. The important parts for you are the ports being published, which for me is only 8080, and the volume for /var/jenkins_home to preserve the Jenkins configuration between image updates.
To recover from errors, I have restart: unless-stopped inside the docker-compose.yml to configure the container to automatically restart. If you're running this in swarm mode, that would be automatic.
I typically avoid defining a container name, but in this scenario, there will only ever be one jenkins-docker container, and I like to be able to view the logs with docker logs jenkins-docker to gather things like the initial administrator login token.
My Dockerfile and other dependencies for this image are available at: https://github.com/bmitch3020/jenkins-docker
Hyper-V with Docker for Windows.
In that case, you must be sure you port-forward any published port (like 5000).
Open Hyper-V Manager and right-click on the machine defined there: you will be able to add port-forwarding rules so that localhost:5000 reaches your VM:5000.
I have a Node.js web-application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (using also a Neo4j Docker container), but I'm a docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would imply running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search for 'dind' online and discover why it is a bad idea in most cases, but as it is not the main object of your question I won't expand on this subject any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at Using Supervisor with Docker on the Docker documentation website, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e. running one and only one app within a Docker container) will make your life easier in the long term.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of the following (a minimal sketch follows the list):
having your neo4j container expose the port its service listens on
running your node.js container with the --link <neo4j container name>:<alias> option
within the node.js application configuration, set the neo4j host to the <alias> hostname; Docker will take care of forwarding that connection to the IP it assigned to the neo4j container
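For example, a minimal sketch of those steps (image and container names are placeholders):

# the official neo4j image already exposes the ports its service listens on
docker run --detach --name myneo4j neo4j
# --link makes the hostname "db" resolve to the neo4j container inside the app container
docker run --detach --name mynodejs --link myneo4j:db <your nodejs image>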
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links.
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define any alias for the db. As a result you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory which contains your docker-compose.yml file, the service name as found in the yml file and a number.
For instance, the following docker-compose.yml file, if within a directory named "foo" would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
when started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
when started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from github project aseemk/node-neo4j-template which uses nodejs and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
be based on the official node docker image
clone the nodeJS example project from Github
change the working directory to the directory containing the git clone
run the npm install command to download and install the nodeJS app dependencies
instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well people took care of that for us already. We will use the official Docker image for neo4j which can be found on the Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. Setting this variable to none disables authentication altogether.
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container setting up the NEO4J_AUTH environment variable to none.
The web service will produce a container named at Docker Compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container, setting the environment variable NEO4J_URL to http://my-neo4j-db:7474 (note how we use here the name of the neo4j container, my-neo4j-db). Furthermore, docker compose will instruct the Docker engine to expose the web container's port 3000 on the docker host's port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker compose will read the docker-compose.yml file, figure out it has to first build a docker image for the web service, then create and start both containers and finally will provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is ready and you can point your browser to http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, docker compose will create a neo4j-data directory and mount it into the container at /data.
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid name conflict for the neo4j container and the port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.