I have a Docker Swarm running (spread over three nodes), pulling images from my private Docker Hub repo.
So that I can run multiple web applications on the same swarm, I'm using Traefik as a proxy.
For each web application, I have a docker-compose.yml and launch it to the stack:
docker stack deploy -c example.yml example
docker stack deploy -c exampletwo.yml exampletwo
The actual *.yml files are all stored in the home folder of one specific swarm manager node.
I don't feel like this is best practice, as I'm going to end up with:
/home/user/example.yml
/home/user/exampletwo.yml
/home/user/examplethree.yml
etc
which seems messy: the files all sit together in one place, open to mistakes and loss. All the documentation I read refers to just using docker-compose.yml, but that doesn't work when I have multiple (totally separate) web applications (I don't want to put all the services into one main yml, as they are fundamentally unrelated).
Is there a best-practice way of doing this? I wonder if I'm missing something fundamental about how to make the yml file available on the node for docker stack deploy.
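One variation I've come across (not sure whether it counts as best practice) is to keep each yml in its own version-controlled directory off the manager entirely and deploy remotely over SSH, something like this (the host name is just an example, and it assumes a reasonably recent Docker client):

DOCKER_HOST=ssh://user@manager1 docker stack deploy -c example.yml example

But I'd still like to know what the intended workflow is.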
Related
I have been reading various articles about migrating my Docker application to a different machine. All the articles talk about "docker commit" or "export/import", but these only cover a single container, which is first converted to an image and then started with "docker run" on the new machine.
But my application is usually made up of several containers, because I am following the best practice of segregating different services.
The question is: how do I migrate or move all the containers that have been configured to join together and run as one? I don't know whether "swarm" is the correct term for this.
The alternative I see is to simply copy the docker-compose file and Dockerfile to the new machine and do a fresh setup of the architecture, then copy over all the application files. It runs fine.
My proposal is of course not the only solution, but it works quite nicely:
1. Create your Docker images on one machine (this is where you need your Dockerfile).
2. Upload the images to a Docker registry (you can use your own Docker Hub account, a Nexus instance, or whatever you prefer).
2.1. It's also recommended to tag your images with a version, and to protect against overwriting an image of the same version with different code.
3. Use docker-compose to deploy (docker-compose up is like several docker run commands, but easier to maintain). It's recommended to define a Docker network for all the containers that have to interact with each other.
4. You can deploy on several machines just by using the same docker-compose.yml, as long as each machine has access to your registry (see the sketch after this list).
4.1. Deployment can be done on a single host, a swarm, Kubernetes... (for Kubernetes you'd have to translate your docker-compose.yml into Kubernetes manifests).
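A minimal shell sketch of steps 1-4 (the registry, image name, and version are just examples):

docker build -t registry.example.com/myapp:1.2.3 .      # step 1: build on the build machine
docker push registry.example.com/myapp:1.2.3            # step 2: upload to your registry
docker-compose pull                                     # steps 3-4: on each target machine,
docker-compose up -d                                    # using the same docker-compose.yml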
I agree with the docker-compose suggestion, and with storing your images in a registry or on your local machine. Your Compose file is split into sections, one per service, and each service is written in YAML format.
You are going to want version 3 of the YAML format, I believe. From there you write something like the example below, where each service uses the image built from your Dockerfile, pulled from your registry or found locally in your folder.
version: '3'
services:
  drupal:
    image:
    # ...ports, volumes, etc.
  postgres:
    image:
    # ...ports, volumes, etc.
Disclosure: I took a Docker Course from Bret Fisher on Udemy.
There's Docker Swarm (now built into Docker) and Docker-Compose. People seem to use Docker-Compose when running containers on a single node only. However, Docker-Compose doesn't support any of the deploy config values (see https://docs.docker.com/compose/compose-file/#deploy), which include memory and CPU limits, things that seem nice/important to be able to set.
So maybe I should use Docker Swarm instead, even though I'm deploying on a single node only? Also, the installation instructions would then be simpler for other people to follow (they wouldn't need to install Docker-Compose).
But maybe there are reasons why I should not use Swarm on a single node?
I'm posting an answer below, but I'm not sure if it's correct.
Edit: Please note that this is not an opinion based question. If you have a look at the answer below, you'll see that there are "have-to" and "cannot-do" facts about this.
For development, use Docker-Compose, because only Docker-Compose is able to read your Dockerfiles and build images for you; Docker Stack needs pre-built images instead. Also, with Docker-Compose you can easily stop and start single containers, with docker-compose kill ... and ... start .... This is useful during development (in my experience), for example to see how the app server reacts if you kill the database; in that case you don't want Swarm to auto-restart the database right away.
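For example (the service name db is just a placeholder):

docker-compose up -d       # builds images from your Dockerfiles and starts everything
docker-compose kill db     # simulate the database dying
docker-compose start db    # bring it back when you're done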
In production, use Docker Swarm (unless: see below), so you can configure memory limits. Docker-Compose has less functionality than Docker Swarm (no memory or CPU limits, for example) and doesn't have anything that Swarm does not have (right?). So there's no reason to use Compose in production (except maybe if you already know how Compose works and don't want to spend time reading about the new Swarm commands).
Docker Swarm doesn't, however, support .env files like Docker-Compose does. So you cannot have e.g. IMAGE_VERSION=1.2.3 in an .env file and then have image: name:${IMAGE_VERSION} in the docker-compose.yml file. See https://github.com/moby/moby/issues/29133; instead you'll need to set the env vars "manually": IMAGE_VERSION=SOMETHING docker stack deploy ... (This is actually what made me stick with Docker-Compose, plus the fact that I couldn't quickly figure out how to view a container's logs via Swarm; Swarm seemed more complicated.)
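One commonly suggested workaround (a sketch only; the file and stack names are examples) is to export the variables from .env into the shell yourself before calling docker stack deploy, which does substitute ${...} from the environment:

set -a; . ./.env; set +a                             # export everything defined in .env
docker stack deploy -c docker-compose.yml mystack    # ${IMAGE_VERSION} is now substituted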
In addition to @KajMagnus's answer, I should note that Docker Swarm still doesn't support Linux capabilities (cap_add/cap_drop) the way Docker [Compose] does. You can learn about this issue and dive into the Docker community discussions here.
I'm learning about using Docker Compose to deploy applications in multiple containers across multiple hosts, and I have come across two configuration files: the stack file and the Compose file.
The Cloud stack file YAML reference states that a stack file is a file in YAML format that defines one or more services, similar to a docker-compose.yml file but with a few extensions.
And this post states that stacks are very similar to docker-compose, except that they define services while docker-compose defines containers.
They look very similar, so I am wondering when I would use the stack file, and when to use the Compose file?
Conceptually, both files serve the same purpose - deployment and configuration of your containers on docker engines.
The docker-compose tool was created first, and its purpose is "defining and running multi-container Docker applications" on a single Docker engine (see the Docker Compose overview).
You use docker-compose up to create/update your containers, networks, volumes and so on.
Docker Stack, on the other hand, is used with Docker Swarm (Docker's orchestration and scheduling tool) and therefore has additional configuration parameters (e.g. deploy, replicas, placement) that are not needed on a single Docker engine.
The stack file is interpreted by the docker stack command, which can only be invoked from a Docker Swarm manager.
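In practice, the same Compose-format file can often be fed to both tools; a rough sketch (file and stack names are just examples):

docker-compose -f app.yml up -d          # containers on a single Docker engine
docker swarm init                        # once, to turn that engine into a swarm manager
docker stack deploy -c app.yml myapp     # swarm services from the same file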
You can convert a docker-compose.yml to a docker-cloud.yml and back. However, as stated in your question, you must pay attention to the differences. You also need to keep in mind that there are different versions of the Compose file format; at the time of writing, the latest is version 3 (https://docs.docker.com/compose/compose-file/).
Edit: An interesting blog, that might help to understand the differences, can be found here https://blog.nimbleci.com/2016/09/14/docker-stacks-and-why-we-need-them/
Note: The question guesses that the Docker Cloud reference is the go-to for understanding stack, and it is useful, but that isn't the authoritative source on stack vs compose -- instead that is a guide that is specific to Docker's hosted service: "Docker Cloud provides a hosted registry service with build and testing facilities." For file documentation, see the Compose file version 3 format -- although it is named "Compose", this is the authoritative place for which features work with both compose and swarm/stack, and how.
You can specify a group of Docker containers to configure and deploy in two ways:
Docker compose (docker-compose up)
Docker swarm (docker swarm init; docker stack deploy --compose-file docker-stack.yml mystack)
Both take a YAML file written in the Docker Compose file version 3 format. That reference is the primary source documenting both docker-compose and docker swarm/stack configuration.
However, there are specific differences between what you can do in the two yml files -- specific options, and specific naming conventions:
Options
The available service configuration options are documented on the Compose file reference page -- usually with a note at the bottom of an option entry describing it as ignored either by docker stack deploy or by docker-compose up.
For example, the following options are ignored when deploying a stack in swarm mode with a (version 3) Compose file:
build, cap_add, cap_drop, cgroup_parent, container_name, depends_on, devices, external_links, links, network_mode, restart, security_opt, stop_signal, sysctls, tmpfs (version 3-3.5), userns_mode
...while some options are ignored by docker-compose, yet work with docker stack deploy, such as:
deploy, restart_policy
When run from the command line, docker stack deploy will print warnings about which options it is ignoring:
Ignoring unsupported options: links
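Putting the two lists above together, a sketch like the following (names are placeholders) shows the split: build is used by docker-compose up but ignored by docker stack deploy, while the deploy block is honored by docker stack deploy and ignored by docker-compose up:

version: '3'
services:
  web:
    image: myorg/web:1.0        # placeholder image name
    build: .                    # used by docker-compose up, ignored by docker stack deploy
    deploy:                     # used by docker stack deploy, ignored by docker-compose up
      replicas: 2
      resources:
        limits:
          cpus: '0.50'
          memory: 256M
      restart_policy:
        condition: on-failure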
File naming
For docker-compose up the default file name is docker-compose.yml if no alternate file name is specified using -f (see the compose reference). It is common to use this default name and run the command without an argument.
For docker stack deploy there is no default file given in the docker stack deploy reference. You can use whatever name you want; however, here are three conventions:
use docker-stack.yml, as used in the official Docker for Beginners Ch.3: Deploying an app to a Swarm.
use docker-cloud.yml, as used in the Docker Cloud Stack YML reference for the Docker Cloud service.
use docker-compose.yml -- the old default name for the Compose file format.
I am working with Docker and Jenkins, and I'm trying to do two main tasks:
Control and manage docker images and containers (run/start/stop) with jenkins.
Set up a development environment in a docker image then build and test my application which is in the container using jenkins.
While I was surfing the net, I found several solutions:
Run jenkins as container and link it with other containers.
Run jenkins as service and use the jenkins plugins provided to support docker.
Run jenkins inside the container which contain the development environment.
So my question is: what is the best solution, or can you suggest another approach?
One more question: I heard about running a container inside a container. Is that a good practice, or is it better to avoid it?
Running Jenkins as a containerized service is not a difficult task. There are many images out there that allow you to do just that. It took me just a couple of minutes to get Jenkins 2.0-beta-1 running in a container, compiling it from source (the image can be found here). I particularly like this approach; you just have to make sure to use a data volume or a data container as jenkins_home so that your data persists.
Things become a little bit trickier when you want to use this Jenkins - in a container - to build and manage containers itself. To achieve that, you need to implement something called docker-in-docker, because you'll need a docker daemon and client available inside the Jenkins container.
There is a very good tutorial explaining how to do it: Docker in Docker with Jenkins and Supervisord.
Basically, you will need to make the two processes (Jenkins and Docker) run in the container, using something like supervisord. It's doable and claims to offer good isolation, etc., but it can get really tricky, because the Docker daemon itself has dependencies that need to be present inside the container as well. So only using supervisord and running both processes is not enough; you'll need to use the DinD project itself to make it work... AND you'll need to run the container in privileged mode... AND you'll need to deal with some strange DNS problems...
For my personal taste, that sounded like too many workarounds to make something simple work, and having two services running inside one container seems to break Docker good practices and the principle of separation of concerns, something I'd like to avoid.
My opinion got even stronger when I read this: Using Docker-in-Docker for your CI or testing environment? Think twice. It's worth mentioning that this last post is from the DinD author himself, so he deserves some attention.
My final solution is: run Jenkins as a containerized service, yes, but treat the Docker daemon as part of the provisioning of the underlying server, not least because your Docker cache and images are data that you'll probably want to persist, and they are fully owned and controlled by the daemon.
With this setup, all you need to do is mount the docker daemon socket in your Jenkins image (which also needs the docker client, but not the service):
$ docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock -v local/folder/with/jenkins_home:/var/jenkins_home namespace/my-jenkins-image
Or with a docker-compose volumes directive:
---
version: '2'
services:
  jenkins:
    image: namespace/my-jenkins-image
    ports:
      - '8080:8080'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - local/folder/with/jenkins_home:/var/jenkins_home
  # other services ...
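Since the Jenkins image in this setup needs the Docker client but not the daemon, a minimal sketch of such an image could look like this (the base image, client version, and download URL are assumptions on my part, not part of the setup above):

FROM jenkins/jenkins:lts
USER root
# Install only the static Docker CLI binary; the daemon stays on the host and is
# reached through the socket mounted at /var/run/docker.sock
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates \
 && curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-24.0.7.tgz \
    | tar -xz --strip-components=1 -C /usr/local/bin docker/docker \
 && rm -rf /var/lib/apt/lists/*
# Note: the jenkins user also needs permission on the mounted socket
# (e.g. by matching the host's docker group GID)
USER jenkins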
Does anyone know what the best practice is for sharing a database between containers in Docker?
What I mean is: I want to create multiple containers in Docker, and these containers will execute CRUD operations on the same database with the same identity.
So far, I have two ideas. One is to create a separate container that runs only the database. The other is to install the database directly on the host machine where Docker is installed.
Which one is better? Or is there any other best practice for this requirement?
Thanks
It is hard to answer a 'best practice' question, because it's a matter of opinion. And opinions are off topic on Stack Overflow.
So I will give a specific example of what I have done in a serious deployment.
I'm running ELK (Elasticsearch, Logstash, Kibana). It's containerised.
For my data stores, I have storage containers. These storage containers contain a local filesystem pass-through:
docker create -v /elasticsearch_data:/elasticsearch_data --name ${HOST}-es-data base_image /bin/true
I'm also using etcd and confd, to dynamically reconfigure my services that point at the databases. etcd lets me store key-values, so at a simplistic level:
CONTAINER_ID=`docker run -d --volumes-from ${HOST}-es-data elasticsearch-thing`
ES_IP=`docker inspect $CONTAINER_ID | jq -r .[0].NetworkSettings.Networks.dockernet.IPAddress`
etcdctl set /mynet/elasticsearch/${HOST}-es-0 ${ES_IP}
Because we register it in etcd, we can then use confd to watch the key-value store, monitor it for changes, and rewrite and restart our other container services.
I'm using haproxy for this sometimes, and nginx when I need something a bit more complicated. Both of these let you specify sets of hosts to 'send' traffic to, and they have some basic availability/load-balancing mechanisms.
That means I can be pretty lazy about restarting/moving/adding Elasticsearch nodes, because the registration process updates the whole environment. A mechanism similar to this is what's used for OpenShift.
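A hedged sketch of that confd watch step (the etcd endpoint is an assumption, and confd also needs template resources under /etc/confd that say which config files to rewrite and which reload command to run):

confd -backend etcd -node http://127.0.0.1:2379 -watch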
So to specifically answer your question:
DB is packaged in a container, for all the same reasons the other elements are.
Volumes for DB storage are storage containers passing through local filesystems.
'finding' the database is done via etcd on the parent host, but otherwise I've minimised my install footprint. (I have a common 'install' template for docker hosts, and try and avoid adding anything extra to it wherever possible)
It is my opinion that the advantages of docker are largely diminished if you're reliant on the local host having a (particular) database instance, because you've no longer got the ability to package-test-deploy, or 'spin up' a new system in minutes.
(The above example - I have literally rebuilt the whole thing in 10 minutes, and most of that was the docker pull transferring the images)
It depends. A useful thing to do is to keep the database URL and password in an environment variable and provide that to Docker when running the containers. That way you will be free to connect to a database wherever it may be located. E.g. running in a container during testing and on a dedicated server in production.
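For example (the variable name, credentials, and image are placeholders):

docker run -d -e DATABASE_URL=postgres://app:secret@db.example.com:5432/appdb myorg/myapp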
The best practice is to use Docker Volumes.
Official doc: Manage data in containers. This doc details how to deal with databases and containers. The usual way of doing so is to keep the DB data in a volume (sometimes managed through a dedicated data container) that the database container uses; the other containers can then access this DB container to CRUD (or do more with) the data.
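A minimal sketch of that pattern (the names and the Postgres image are just examples):

docker network create appnet
docker volume create dbdata
docker run -d --name appdb --network appnet -v dbdata:/var/lib/postgresql/data postgres
docker run -d --name app --network appnet -e DB_HOST=appdb myorg/myapp   # hypothetical app image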
Random article on "Understanding Docker Volumes"
Edit: I won't go into much more detail, as the other answer is well made.