Docker: Level of separation between database, client and API

I always separate client, database and API into completely separate scopes (i.e. they are designed to run on separate servers). Should this separation also be reflected when you set up your Docker Compose? I feel relatively confident in separating the client (for example a Vue or React project) into its own scope with its own docker-compose file.
But I have some doubt as to how I should handle the API and the database. Are they expected to be two completely separate scopes with their own Dockerfiles and docker-compose files? Or are they expected to be in the same docker-compose?
I understand that both are possible - I'm interested in what's considered the best practice :-)
I hope the question makes sense. Thanks in advance

Usually, when talking about the same overall application, you'd use one docker-compose.yml file and a separate Dockerfile for each service you build yourself.
If you check out this commercial application's docker-compose file, you can see their entire stack is defined in a single compose file.
If you're interested in best practices, maybe check out Kubernetes or its simple single-machine (single-node) personal-use counterpart, minikube. Generally, if you don't run these services in production, a compose file is just fine; but if you care about redundancy and minimizing downtime (i.e. you have 3 machines, one becomes unavailable, but the other two still have the other app instances running on them), then Kubernetes might be a better option.
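To make that concrete, here is a minimal, hypothetical docker-compose.yml for the kind of stack described in the question; the service names, images, ports and credentials are placeholders, not a prescription:

```yaml
version: "3.8"

services:
  client:                # Vue/React front end, built from its own Dockerfile in ./client
    build: ./client
    ports:
      - "8080:80"
    depends_on:
      - api

  api:                   # API layer, built from its own Dockerfile in ./api
    build: ./api
    environment:
      DATABASE_URL: postgres://app:example@db:5432/app   # "db" resolves via the compose network
    depends_on:
      - db

  db:                    # database from a stock image, no Dockerfile needed
    image: postgres:15
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: example
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

Each service still keeps its own Dockerfile and can be deployed separately later; the compose file only describes how the pieces fit together on one host.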

Related

Use a single docker-compose.yml file with both Docker Compose and Stack, pro and cons?

I'm new to Docker; I'm exploring Docker Compose for development and Docker Stack on Swarm for production.
I'm having trouble comparing their capabilities, specifically with regard to the use of secrets, which are not supported without Swarm; this also implies a difference in how environment variables are declared. This leads me to a question:
What are the pros and cons of sharing the same docker-compose.yml file between Compose and Stack, and what are the best patterns?
The pro I see is that I can maintain a single file that would otherwise share a lot of configuration with a "twin" docker-stack.yml.
The con I see is that the two files support different specific features and, similarities aside, their scopes are fundamentally different. Even more so if in the future I move to a different orchestration system, where the stack file would probably be replaced entirely.
Is it recommended to keep these files separate, or is it acceptable to unify them into a single file with specific techniques?

How common is library sharing between containers and host?

I am researching shared libraries between containers from a security point of view.
In many security resources, the shared-library scenario, where containers share a dependency reference (i.e. the same file), is discussed. I can come up with two scenarios:
The commonly discussed scenario - where some lib directory is mounted from the host machine into the container
An invented scenario - where a shared volume is created for different containers (different services, replicas of the same container, or both) and is populated with libraries that are shared between all of the containers.
Despite the discussions, I was not able to find this kind of behavior in the real world, so the question is: how common is this approach?
A reference to an official, well-known image that uses this technique would be great!
This is generally not considered a best practice at all. A single Docker image is intended to be self-contained and include all of the application code and libraries it needs to run. SO questions aside, I’ve never encountered a Docker image that suggests using volumes to inject any sort of code into containers.
(The one exception is the node image; there’s a frequent SO question about using an anonymous volume for node_modules directories [TL;DR: changes in package.json never update the volume] but even then this is trying to avoid sharing the library tree with other contexts.)
One helpful technique is to build an intermediate base image that contains some set of libraries, and then build an application image on top of that. At a mechanical level, for a particular version of the ubuntu:18.04 image, I think all other images based on it share the physically same libc.so.6, but from a Docker point of view this is an implementation detail.
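As a sketch of that base-image approach (the image name mycompany/base-libs and the package list are made up for illustration): two Dockerfiles, where the second builds FROM the first, so every application image carries its own baked-in copy of the libraries and nothing is shared through volumes at runtime.

```dockerfile
# Dockerfile.base: intermediate image with the shared libraries baked in
FROM ubuntu:18.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends libxml2 libcurl4 \
 && rm -rf /var/lib/apt/lists/*
```

```dockerfile
# Dockerfile (application): builds on the intermediate image pushed as
# mycompany/base-libs:1.0, so the libraries ship inside the image itself
FROM mycompany/base-libs:1.0
COPY ./app /opt/app
CMD ["/opt/app/run"]
```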

Is it feasible to have one docker image for an already existing application with multiple dependencies

I am new to Docker and want to learn the ropes with real-life challenges.
I have an application hosted on IIS that depends on SQL Express and SOLR.
I want to understand the following:
1. Is it possible to have my whole set-up, including enabling IIS, SQL, SOLR and my application, in one single container?
2. If point 1 is feasible, how should I start with it?
Sorry if my questions are basic.
It is feasible, just not a good practice. You want to isolate the software stack to improve maintainability (easier to deploy updates), modularity (you can reuse a certain component in a different project, and even have multiple projects reusing the same image) and security (a software vulnerability in one component of the stack will hardly be able to reach a different component).
So, instead of putting everything together into the same image, I recommend using Docker Compose to have a separate image for each component of the stack (you can even pull generic, up-to-date images from Docker Hub) and assemble them from the Compose file, so that with a single command you can fire up all the components needed for your application to work.
That being said, it is feasible to put the whole stack into the same Dockerfile, but it will be quite a mess. You'll need a Dockerfile that installs all the required software, which will make it bulky and hard to maintain. If you're really up for this, you'll have to start from a basic OS image (maybe Windows Server Core with IIS) and from there install all the other software manually. If there are Dockerfiles for the other components you need and they share the same base image or a compatible one, you can copy-paste their contents straight into your Dockerfile, at the cost of said maintainability.
Also, you should definitely use volumes to keep your data safe, especially if you take this monolithic approach, since you otherwise risk losing data from the database.
TL;DR: yes, you can, but you really don't want to, since there are much better alternatives that take about the same effort.
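For illustration only, a compose file for that split might look roughly like the sketch below. The images and credentials are placeholders, and note that an IIS-based web image is a Windows container while the SQL Server and Solr images shown here are Linux containers, so you would need to pick images that match what your host can actually run:

```yaml
version: "3.8"

services:
  web:                   # the IIS-hosted application, built from its own Dockerfile
    build: .
    ports:
      - "80:80"
    depends_on:
      - db
      - solr

  db:                    # SQL Server, Express edition selected via MSSQL_PID
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_PID: Express
      SA_PASSWORD: "ChangeMe_123"
    volumes:
      - mssql-data:/var/opt/mssql   # keeps database files if the container is recreated

  solr:
    image: solr:8
    volumes:
      - solr-data:/var/solr

volumes:
  mssql-data:
  solr-data:
```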

what is a docker image? what is it trying to solve?

I understand that it is software shipped in some sort of binary format. In simple terms, what exactly is a Docker image? What problem is it trying to solve? And how is it better than other distribution formats?
As a student, I don't completely get the picture of how it works and why so many companies are opting for it. Even many open-source libraries are shipped as Docker images.
To understand Docker images, you should first understand the main element of the Docker mechanism: UnionFS.
Unionfs is a filesystem service for Linux, FreeBSD and NetBSD which
implements a union mount for other file systems. It allows files and
directories of separate file systems, known as branches, to be
transparently overlaid, forming a single coherent file system.
Docker images consist of several layers (levels). Each layer is a read-only filesystem; every instruction in the Dockerfile creates its own layer, which is placed on top of the layers already created. When docker run or docker create is invoked, a writable layer is added on top (it also does a lot of other things). This approach to distributing containers is very good, in my opinion.
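As a small illustration (a made-up Dockerfile, nothing from the question): each instruction below ends up as its own read-only layer, which docker history will list after the build, and docker run then adds a temporary writable layer on top for that container.

```dockerfile
# Base image layers are pulled from the registry as-is
FROM alpine:3.18

# New read-only layer: package installation
RUN apk add --no-cache curl

# New read-only layer: application files copied into the image
COPY app.sh /usr/local/bin/app.sh

# Metadata-only change, recorded as an empty layer
CMD ["/bin/sh", "/usr/local/bin/app.sh"]
```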
Disclaimer:
This is my opinion, based on things I found elsewhere; feel free to correct me if I'm wrong.

Docker: Development environments [closed]

Closed 7 years ago. This question is opinion-based and is not currently accepting answers.
I am coding in a few different languages/technologies. Actually to be honest, I am only messing around, playing with golang, node.js, ruby on rails, etc.
But now I want to jump on the Docker bandwagon as well, but I am not sure what the benefits would be and if I should put in the effort.
What is the best practice in using Docker for development environments? Do I set up a separate container for each language or technology I dabble with? Or are containers overkill, and should I just set up one VM (a Linux VM on a Windows host) where I do all the development?
How do you guys use Docker for development work?
You should definitely go ahead and do that, as it is the best approach to follow, even if you share volumes between containers. Avoid setting up separate VMs if you have the necessary hardware power in your workstation and do not need to distribute your environment across different workstations.
At my current company, I'm the guy responsible for setting up all the development environments among other things. We have a few monolithic applications but we're quickly decoupling multiple functionalities into separate micro-services.
The way we're starting to manage that is that every micro-service code repository is self-contained: docker-compose files, a Makefile for automation, tests, etc.
Developers just have to install Docker Toolbox on their Mac OS X machines, clone the repo and type make. That will start Docker Compose with all the links between the containers and all the necessary bits and pieces (DBs, caches, queues).
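As a generic, hypothetical sketch (the targets and the "app" service name are invented here, not taken from the linked repository), such a Makefile usually just wraps a handful of docker-compose commands:

```makefile
.PHONY: all build up down test

all: build up

build:
	docker-compose build

up:
	docker-compose up -d      # start the whole stack (app, DBs, caches, queues) in the background

down:
	docker-compose down

test: up
	docker-compose run --rm app ./run_tests.sh
```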
Direct link to the Makefile: https://github.com/marclop/prometheus-demo/blob/master/Makefile.
Also if you want to avoid setting up all the containers there's a few alternatives out there, for example Phusion's one: https://github.com/phusion/baseimage-docker.
I hope this answers your questions.
You shouldn't use Docker for your development environments; use regular VMs like VirtualBox for that if you want complete separation.
Docker is more suited for delivering finished code somewhere, e.g. to a staging environment.
And the reason is that Docker containers are not ideal for persisted state unless you mess around with sharing volumes.
The answer to this is inherently subjective and tied to how you like to do development. It will also be tied to how you want to deploy these in a testing scenario.
Jonas is correct: the primary purpose of Docker is to deliver finished code to a staging/production environment. HOWEVER, I have used it for development, and indeed it may be preferable depending on your situation.
To wit: let's say you have a single virtual server and you want to minimize the amount of space your environment uses. A core point of Docker is that containers share the host's Linux kernel and reuse base image layers across instances, rather than each carrying a full OS copy. You also minimize the RAM and CPU spent on the base Linux "pieces" by running the Docker containers on top of Linux.
Probably the most compelling reason (in your case) to use Docker is that it makes finding the base setup you want easier. There are plenty of pre-made Docker images that you can use to build your test/dev environment, and deploying your code to a different machine after you are finished is WAY easier using Docker than VMware or VirtualBox (yes, you can create an OVF and that would work, but Docker is IMHO much easier).
I personally used Project Photon when I was playing around with this, which provided a very easy way to set up a base Docker installation in a VMware environment.
https://blogs.vmware.com/cloudnative/introducing-photon/
The last time I used Docker was for an assignment in one of my classes where I had to play around with MongoDB on a local instance. Setting up MongoDB would have been trivial on either (or both) Windows or Linux, but I felt the opportunity to learn Docker was too good to pass up. In the end, I feel much more comfortable with Docker now :)
There are ways of backing up Containers, which can (as Jonas pointed out) get kind of messy, but it isn't outside the realm of a reasonably technical individual.
Good Luck either way you go! Again, either way will obviously work - and I don't see anything inherently wrong with either approach.
