Docker Swarm - Deploying stack with shared code base across hosts

I have a question about best practices for deploying applications to production with Docker Swarm.
To simplify the discussion around this question/issue, let's consider the following scenario:
Our swarm contains:
6 servers (different hosts)
on each of these servers, we will have one service
each service will have only one task/replica (one container) running
Memcached1 and Memcached2 use public images from Docker Hub
"Recycle data 1" and "Recycle data 2" use a custom image from a private repository
"Client 1" and "Client 2" use a custom image from a private repository
So in the end, for our example application, we have 6 containers running across 6 different servers: 2 of them run memcached, and the other 4 are clients that communicate with memcached.
"Client 1" and "Client 2" insert data into memcached based on some set of rules. "Recycle data 1" and "Recycle data 2" update or delete data in memcached based on some set of rules. Simple as that.
The applications that communicate with memcached are custom ones written by us, and their code resides on GitHub (or any other repository). What is the best way to deploy this application to production:
Build images that contain the code copied into the image itself, and deploy those images to the swarm, or
Build an image that mounts a volume where the code resides outside of the image.
Given that I am deploying a swarm to production for the first time, I can see a lot of issues with option 1. Baking the code into the images seems illogical to me, since 99% of the time the updates will be code changes. This would require building a new image every time the code running in a specific container is updated (no matter how small that change is).
Option 2 seems much more logical to me, but at this point I am not sure whether it is even possible. So there are a number of questions here:
What is the best approach when we are going to host multiple containers that run the same code?
Is it possible in Docker Swarm to have one central host/server (a manager, or anywhere) where we clone our repositories and share them as volumes across the swarm? (In our example, all 4 custom services would mount the volume where our code is hosted.)
If this is possible, what is the docker-compose.yml implementation for it?

After digging deeper and working with Docker and Docker Swarm mode for the last 3 months, these are the answers to the questions above:
Answer 1: In general, you should consider your Docker image as the "compiled" version of your program. The image should contain either the code base or the compiled program (depending on the programming language you are using), and that specific image represents that version of the app. Every time you want to deploy the next version, you generate a new image.
This is probably the best approach for 99% of apps hosted with Docker (the exceptions being development environments and apps where you really want to shell in and control things directly from the container itself).
Answer 2: It is possible, but it is an extremely bad approach. As mentioned in Answer 1, the best option is to copy the app code directly into the image and treat the image (running container) as the app itself.
I was not able to wrap my head around this concept at the beginning, because it does not allow you to simply go to the server (or wherever you are hosting your containers), change the app, and restart the container (obviously, after a restart the container starts from the same image, i.e. the same code base you deployed with that image). Any change SHOULD and NEEDS to be deployed as a different image with a different version. That is what Docker is all about.
Additionally, the initial idea of sharing the same code base across multiple swarm services is possible, but it completely defeats the purpose of versioning across the swarm.
Consider having 3 services used for redundancy (failover), where you want to run a new version on one of them as a beta test. That is not possible with a shared code base.
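To make Answer 1 concrete, here is a minimal stack-file sketch for the scenario above; the registry address, image names, tags and node labels (registry.example.com, client:1.0.3, node.labels.*) are placeholders, not the actual project:

```yaml
# docker-stack.yml -- sketch only; registry, image names and node labels are hypothetical
version: "3.3"

services:
  memcached1:
    image: memcached:1.5                        # public image from Docker Hub
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.cache == one]

  client1:
    image: registry.example.com/client:1.0.3    # custom image, code baked in at build time
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.client == one]

  recycle1:
    image: registry.example.com/recycle:1.0.3   # custom image from the private repository
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.recycle == one]

# memcached2, client2 and recycle2 would follow the same pattern on their own nodes.
```

Deploying a code change then means building and pushing e.g. client:1.0.4 and re-running docker stack deploy -c docker-stack.yml mystack (or docker service update --image ...), and beta-testing a new version on one of several redundant services is just a matter of pointing that single service at the new tag.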

Related

Container with app + DB

I've been running some tests with Docker, and so far I'm wondering why it's considered good practice to separate the DB and the app into two containers.
Having two containers seems cumbersome to manage, and I don't really see the value in it, whereas I like the idea of a self-sustaining container per app.
One reason is the separation of data storage and application. If you put each in its own container, you can update them independently. In my experience this is a common need, because usually the application evolves faster than the underlying database.
It also frees you to run the containers in different places, which might be a constraint in your operations. Or to run multiple containers from the same database image with different applications.
Often it is also good to be able to scale the UI from one instance to multiple instances, all connected to the same database (or cache instance, or HTTP backend). This is mentioned briefly in the Docker best practices.
I also understand the urge to run multiple processes in one container. That's why so many minimalist init systems/supervisors like s6 have come up lately. I prefer this for demos of applications that require a couple of things, like nginx for the frontend, a database and maybe a Redis instance. But you could also write a basic docker-compose file and run the demo with multiple containers.
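For instance, such a demo could be a small compose file with one container per process rather than a supervised multi-process container (a sketch only; the application image example/demo-app is hypothetical):

```yaml
# docker-compose.yml -- demo sketch: one container per process
version: "2"

services:
  frontend:
    image: nginx:alpine          # serves / proxies the UI
    ports:
      - "8080:80"
    depends_on:
      - app

  app:
    image: example/demo-app      # hypothetical application image
    depends_on:
      - db
      - cache

  db:
    image: postgres:9.6
  cache:
    image: redis:alpine
```

A single docker-compose up brings up the whole demo while keeping each process in its own container.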
It depends on what you consider your "DB": the database application, or the content?
The latter is easy: the content needs to be persisted beyond the lifetime of the application. The convention used to be to have a "data" container, which simplified linking it with the application (e.g. using the --volumes-from parameter of the Docker Engine create command). Since Docker 1.9 there is a new volume API which has superseded the concept of "data" containers. But you should never store your data in the container's overlay filesystem (if not for persistence reasons, then for performance).
If you are referring to a database application, you really enter a semi-religious debate with the microservices crowd. Docker is built to run a single process. It is built for 12-factor apps. It is built for microservices. It is definitely possible to run more than one process in a container, but then you have to consider the additional complexity of managing/monitoring these processes (e.g. using an init process like supervisord), dealing with logging, etc.
I've delivered both. If you are managing the container deployment (e.g. you are hosting the app), it is actually less work to use multiple containers. This lets you use Docker's abstraction layers for networking and persistent storage, and it provides maximum portability as you scale the application (you might consider convoy or flocker volume drivers, or an overlay network, for hosting containers across multiple servers). If you are developing a product for distribution, it is more convenient to deliver a single Docker repository (with one image), as this minimizes the support costs as you guide customers through deployment.
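As a sketch of the named-volume approach that superseded "data" containers (service, image and volume names here are arbitrary), the database content lives in a volume managed by the Docker volume API instead of in the container's overlay filesystem:

```yaml
# sketch: app and database in separate containers, content in a named volume
version: "2"

services:
  app:
    image: example/webapp        # hypothetical application image
    depends_on:
      - db

  db:
    image: postgres:9.6
    volumes:
      - dbdata:/var/lib/postgresql/data   # survives container removal and recreation

volumes:
  dbdata: {}                     # named volume managed by Docker (docker volume ls/inspect)
```

Removing and recreating the db container (for example to upgrade the postgres image) leaves dbdata untouched, which is exactly the independent update of application and data storage described above.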

How to upgrade an application inside a Docker image

I am very new to this Docker thing, and as such I might not have been able to frame my search well enough to find this answer. I am trying to build a test image which would contain a few test applications,
but I see a problem there.
If I commit them all to one image and then need to upgrade one of the applications, I would need to rebuild the entire image and then redistribute it to all remotes (is this correct?).
Should I instead use data containers for my applications and just have a base Linux image?
Regards
You should split your single container into multiple containers, each running one microservice.
Microservices is an approach to application development in which a large application is built as a suite of modular services. Each module supports a specific business goal and uses a simple, well-defined interface to communicate with other modules.
In your case, you can start by putting each application into its own container.
Example:
You have a web application; the first step would be having one container for the web app and another for the database.
Volumes are used for persistent data, like the database files you want to keep after removing the container. It is not good practice to put your entire app in these volumes.
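A minimal sketch of that split, with hypothetical image names: each application gets its own image, so upgrading one of them only means rebuilding and redeploying that single image instead of one big image containing everything.

```yaml
# sketch: one image per application instead of one combined image
version: "2"

services:
  webapp:
    image: example/webapp:2.1    # upgrade the web app by rebuilding/redeploying only this image
    ports:
      - "8000:8000"
    depends_on:
      - db

  reports:
    image: example/reports:1.4   # untouched when the web app is upgraded
    depends_on:
      - db

  db:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql    # persistent data only, not application code

volumes:
  dbdata: {}
```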

docker-compose per microservice in local and production environments?

I want to be able to develop microservices locally, but also to 'push' them into production with minimal configuration changes. I used to put all microservices into one docker-compose file locally, but I am starting to see that this might not be practical.
The new idea is to have a single docker-compose file per service. This does not mean it will run only one container; it might have more inside (like a datastore behind it, etc.).
From that new point of view, let's take a look at the well-known Docker voting app example, which consists of 5 components:
(P) Python webapp which lets you vote between two options
(R) Redis queue which collects new votes
(J) Java worker which consumes votes and stores them in…
(S) Postgres database backed by a Docker volume
(N) Node.js webapp which shows the results of the voting in real time
Let's say you want to push this example into production (so having just one docker-compose file is not an option). Don't forget that more infrastructure-related components may be added on top of it (like Kibana, Prometheus, ...). And we want to be able to scale what we need; we use e.g. Swarm.
The question is:
How do we organize this example: in a single docker-compose file or in many?
What microservices do we have here? In other words, which components would you combine into a single docker-compose file? For example, J and S?
If the services are not in a single docker-compose file, do we add them to the same overlay network to use the swarm DNS feature?
and so on...
(I don't need details on how to install stuff, this question is about top-level organization)
Docker Compose is mostly for defining different containers, configuring them, and making them available with a single command (it also handles sequencing). So it is best suited for local development, integration testing, and as part of your Continuous Integration process.
While not ruling out that Docker Compose can be used in a production environment, I think this would be a good case for using Kubernetes, which gives more control over scaling and managing multiple containers.
This blog has some example scenarios to try out (and many other resources which can be helpful)
https://renzedevries.wordpress.com/2016/05/31/deploying-a-docker-container-to-kubernetes-on-amazon-aws/comment-page-1/#comment-10
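On the shared-overlay-network question: one common arrangement (a sketch only; the network name votenet and the vote images are placeholders) is to create an attachable overlay network once, e.g. with docker network create --driver overlay --attachable votenet, and then declare it as external in each per-service compose file:

```yaml
# vote/docker-compose.yml -- the (P) Python webapp together with its (R) Redis queue
version: "3"

services:
  vote:
    image: example/vote-python   # hypothetical image for the Python webapp
    networks: [votenet]
  redis:
    image: redis:alpine
    networks: [votenet]

networks:
  votenet:
    external: true               # created once, shared by all the per-service files
```

The worker/Postgres pair (J, S) and the Node.js results app (N) would each get an analogous file attached to the same external votenet network, so the pieces can reach each other over swarm DNS even though they are defined and deployed separately.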

Docker application deployment DEV vs TEST

I'm really struggling to grasp the workflow of Docker. The issue is: where exactly are the deliverables?
One would expect the developer's image to be the same as the one used for testing and production.
But how can one develop with auto-reload and the like (probably via some shared volumes) without building the image again and again?
The image for testers should be just "fire it up and you are ready to go". How are the images split?
I heard something about data containers, which probably hold the app deliverables. So does it mean that I will have one container for the DB, one for the app server, and one versioned image for my code itself?
The issue is: where exactly are the deliverables?
Static deliverables (which never change) are copied directly into the image.
Dynamic deliverables (which are generated during a docker run session, or which get updated) live in volumes (either a host-mounted volume or a data-container volume), in order to have persistence across the container life cycle.
Does it mean that I will have one container for the DB, one for the app server?
Yes, in addition to your application container (which is what Docker primarily is about: putting applications in containers), you would have a data container in order to isolate the data that needs persistence.
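A common way to keep the very same image from dev through test to production while still getting auto-reload during development (a sketch; image names and paths such as example/app:1.2 and ./src are placeholders) is a compose override file that bind-mounts the working copy over the code baked into the image:

```yaml
# docker-compose.yml -- the image used everywhere; the code inside it is the static deliverable
version: "2"

services:
  app:
    image: example/app:1.2       # built once, promoted unchanged from dev to test to prod
    depends_on:
      - db
  db:
    image: postgres:9.6
    volumes:
      - dbdata:/var/lib/postgresql/data   # dynamic deliverables stay in a volume

volumes:
  dbdata: {}
```

The override file, which docker-compose picks up automatically next to the base file, is used only by developers and adds the bind mount for live editing:

```yaml
# docker-compose.override.yml -- developers only: mount the working copy for auto-reload
version: "2"

services:
  app:
    volumes:
      - ./src:/usr/src/app       # host-mounted code shadows what was copied into the image
```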

Understanding docker from a layman point of view

I am just one day old to Docker, so it is very new to me.
I read docker.io but could not get answers to a few basic questions. Here is what they are:
1. Docker is basically a tool which allows you to make use of images and spin up your own customised images by installing software, which you can then use to create something like VMs.
Is this what Docker is all about from a 10,000 ft bird's-eye point of view?
2. What exactly is the meaning of a container? Is it a synonym for image?
3. I remember reading somewhere that it allows you to deploy applications. Is this correct? In other words, will it behave like IIS for deploying .NET applications?
Please answer my questions above, so that I can understand it better and take it forward.
1) What is Docker all about from a 10,000 ft bird's-eye point of view?
From the website: Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere.
Drill down a little bit more for a thorough explanation of what Docker addresses and why:
https://www.docker.io/the_whole_story/
https://www.docker.io/the_whole_story/#Why-Should-I-Care-(For-Developers)
https://www.docker.io/the_whole_story/#Why-Should-I-Care-(For-Devops)
Further depth can be found in the technology documentation:
http://docs.docker.io/introduction/technology/
2) What exactly is the meaning of a container? Is it a synonym for image?
An image is the set of layers that are built up and can be moved around. Images are read-only.
http://docs.docker.io/en/latest/terms/image/
http://docs.docker.io/en/latest/terms/layer/
A container is an active (or inactive if exited) stateful instantiation of an image.
http://docs.docker.io/en/latest/terms/container/
See also: In Docker, what's the difference between a container and an image?
3) I remember reading somewhere that it allows you to deploy applications. Is this correct? In other words, will it behave like IIS for deploying .NET applications?
Yes, Docker can be used to deploy applications. You can deploy single components of the application stack or multiple components within a container. It depends on the use case. See the First steps with Docker page here: http://docs.docker.io/use/basics/
See also:
http://docs.docker.io/examples/nodejs_web_app/
http://docs.docker.io/examples/python_web_app/
http://docs.docker.io/examples/running_redis_service/
http://docs.docker.io/examples/using_supervisord/
So.
It's about providing the separation of processes that you get with virtualisation, without the overhead. Of course this doesn't come for free; the largest cost in this case is that your Docker containers will all be running under the same kernel.
A container is roughly a chroot (with better process encapsulation) plus some ethernet virtualisation. The image is the filesystem (plus a few bits) that is mounted to provide the root filesystem.^1
Deploy is just the term Docker uses for spinning up a container instance.
Effectively, each running instance of a container thinks that it is the only thing^2 running on that machine (much like a cloud appliance is typically designed). It provides more separation of processes than running on the host OS would provide, and allows for easily spinning up multiple separate copies of the container as needed; while providing much, much lower overheads than using full virtualisation would need.
^1: Actually there may be several layers of file-system sandwiched together to form the root file system.
^2: Docker does support multiple processes running within a single instance, but that is generally considered to be somewhat advanced usage.
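To make the image/container distinction concrete, here is a tiny compose sketch (the choice of nginx:alpine and the count are arbitrary) where one read-only image gives rise to several independent container instances:

```yaml
# sketch: one image, several containers
version: "2.2"

services:
  web:
    image: nginx:alpine   # the image: a read-only stack of filesystem layers
    scale: 3              # the containers: three independent, stateful instances of that image
```

docker-compose up -d starts three containers from the same nginx:alpine image; stopping or removing any of them never changes the image itself.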
