How common is library sharing between containers and host? - docker

I am researching shared libraries between containers from a security point of view.
Many security resources discuss the shared-library scenario, where containers share a dependency by reference (i.e. the same file on disk). I can come up with two scenarios:
The de-facto discussed scenario, where some library directory is mounted from the host machine into the container.
An invented scenario, where a shared volume is created for different containers (different services, replicas of the same container, or both) and populated with libraries shared between all of them.
Despite the discussions, I was not able to find this kind of behavior in the real world, so the question is: how common is this approach?
A reference to an official and known image which uses this technique would be great!

This is generally not considered a best practice at all. A single Docker image is intended to be self-contained and include all of the application code and libraries it needs to run. SO questions aside, I’ve never encountered a Docker image that suggests using volumes to inject any sort of code into containers.
(The one exception is the node image; there’s a frequent SO question about using an anonymous volume for node_modules directories [TL;DR: changes in package.json never update the volume] but even then this is trying to avoid sharing the library tree with other contexts.)
One helpful technique is to build an intermediate base image that contains some set of libraries, and then build an application image on top of it. At a mechanical level, for a particular version of the ubuntu:18.04 image, I believe all other images based on it physically share the same libc.so.6, but from a Docker point of view this is an implementation detail.
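The intermediate-base-image technique can be sketched with two Dockerfiles (the image name myorg/base and the package list are made up for illustration):

```dockerfile
# base.Dockerfile -- hypothetical shared base containing common libraries
FROM ubuntu:18.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends libxml2 libcurl4 \
 && rm -rf /var/lib/apt/lists/*
```

```dockerfile
# app.Dockerfile -- each application builds FROM the shared base
FROM myorg/base:1.0
COPY ./app /opt/app
CMD ["/opt/app/run"]
```

Each final image still contains everything it needs to run; the sharing happens at the image-layer level during build and pull, not via volumes at runtime.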

Related

Docker: Level of separation between database, client and API

I always separate the client, database and API into completely separate scopes (i.e. they are designed to run on separate servers). Should this separation also be reflected when you set up your Docker Compose? I feel relatively confident separating the client (for example, a Vue or React project) into its own scope with its own docker-compose file.
But I have some doubt as to how I should handle the API and the database. Are they expected to be two completely separate scopes with their own Docker and docker-compose files? Or are they expected to be in the same docker-compose?
I understand that both are possible - I'm interested in what's considered the best practice :-)
I hope the question makes sense. Thanks in advance.
Usually, when talking about the same overall application, you'd use one docker-compose.yml file, with a different Dockerfile for each component if you're building them yourself.
If you check out this commercial application docker-compose file you can see their entire stack is defined in a single compose file.
If you're interested in best practices, maybe check out Kubernetes or minikube, its simple single-machine (single-node) product for personal use. Generally, if you don't run these services in production, a compose file is just fine; but if you care about redundancy and minimizing downtime (i.e. you have three machines, one becomes unavailable, but the other two still have app instances running on them), then Kubernetes might be a better option.
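As a sketch, a single docker-compose.yml covering both the API and the database might look like this (service names, images, and ports are illustrative, not taken from any particular project):

```yaml
version: "3.8"
services:
  api:
    build: ./api          # API built from its own Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:13    # off-the-shelf database image from Docker Hub
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # keep data across restarts
volumes:
  db-data:
```

The client could live in the same file or in its own compose project; keeping the API and database together is convenient because they share a network and lifecycle.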

Use a single docker-compose.yml file with both Docker Compose and Stack, pro and cons?

I'm new to Docker. I'm exploring Docker Compose for development, and Docker Stack on Swarm for production.
I'm running into issues comparing their capabilities, specifically with regard to secrets, which are not supported without Swarm; this implies a difference in how environment variables are declared. This leads me to a question:
What are the pros and cons of sharing the same docker-compose.yml file between Compose and Stack, and what are the best patterns?
What I see as a pro is that I can maintain a single file, which would otherwise share a lot of configuration with a "twin" docker-stack.yml.
What I see as a con is that the two files support different specific features, and aside from their similarities, their scopes are fundamentally different. Even more so if in the future I use a different orchestration system, where the stack file would probably be replaced entirely.
Is it recommended to keep these files separate, or is it acceptable to unify them into a single file with specific techniques?
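One common pattern (a sketch; check the Compose and Stack documentation for your Docker version) is to keep the shared configuration in a base file and layer environment-specific settings on top via multiple file flags, rather than maintaining two full twins:

```yaml
# docker-compose.yml -- shared base used by both Compose and Stack
version: "3.7"
services:
  web:
    image: myapp:latest   # hypothetical application image
    ports:
      - "80:80"
```

```yaml
# docker-stack.yml -- Swarm-only additions (secrets, deploy settings)
version: "3.7"
services:
  web:
    secrets:
      - db_password
secrets:
  db_password:
    external: true
```

Then, roughly: `docker-compose up` in development uses only the base file, while `docker stack deploy -c docker-compose.yml -c docker-stack.yml myapp` merges both for production; Compose similarly accepts multiple `-f` flags.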

Is it feasible to have one docker image for an already existing application with multiple dependencies

I am new to Docker and want to learn the ropes with real-life challenges.
I have an application hosted on IIS that has dependencies on SQL Express and SOLR.
I want to understand the following:
Is it possible to have my whole set-up, including enabling IIS, SQL, SOLR and my application, in one single container?
If point 1 is feasible, how should I start with it?
Sorry if my questions are basics.
It is feasible, just not a good practice. You want to isolate the software stack to improve maintainability (easier to deploy updates), modularity (you can reuse a certain component in a different project, and even have multiple projects reusing the same image) and security (a software vulnerability in one component of the stack will hardly be able to reach a different component).
So, instead of putting everything together into the same image, I recommend using Docker Compose to have a separate image for each component of the stack (you can even pull generic, up-to-date images from Docker Hub) and wiring them together in the Compose file, so that with a single command you can bring up all the components your application needs.
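As a rough sketch of that layout (the app service is a placeholder for your own image; note that an IIS application requires Windows containers, while the mssql and solr images below are Linux-based, so mixing them on one host needs care):

```yaml
version: "3.8"
services:
  app:
    build: .               # your IIS application image (placeholder)
    depends_on:
      - db
      - solr
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Change_Me123"   # example only; use a secret in practice
    volumes:
      - db-data:/var/opt/mssql      # persist the database files
  solr:
    image: solr:8
    volumes:
      - solr-data:/var/solr         # persist the search index
volumes:
  db-data:
  solr-data:
```

Each component can then be updated, scaled, or reused independently, which is the maintainability and modularity benefit described above.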
That being said, it is feasible to put the whole stack into the same Dockerfile, but it will be quite a mess. You'll need a Dockerfile that installs all the required software, which will make the image bulky and hard to maintain. If you're really up for this, you'll have to start from a basic OS image (maybe Windows Server Core with IIS) and from there install all the other software manually. If there are Dockerfiles for the other components you need, and they share the same base image or a compatible one, you can copy-paste their contents straight into your Dockerfile, at the cost of said maintainability.
Also, you should definitely use volumes to keep your data safe, especially if you take this monolithic approach, since you risk losing data from the database otherwise.
TL;DR: yes, you can, but you really don't want to, since there are much better alternatives that are hardly any more difficult.

Do OS providers make special / custom made OS for docker?

I am trying to understand Docker and its related core concepts. I came to know that there is a concept of images, which form the basis of containers where applications run isolated.
I also came to know that we can download official images from Docker Hub (https://hub.docker.com).
My question is:
Do the respective companies create special/custom-made OSes (minimal ones, for example the ubuntu image) for Docker? If so, what benefit do these companies get from creating these custom-made images for Docker?
One could call them custom images; however, they are just bare base images intended as a starting point for your application.
They are mostly built by people who work at Docker, and they try to ensure some guarantee of quality.
They are stripped of unnecessary packages in order to keep the image size to a minimum.
To find out more you could read this Docker documentation page or this blog post.
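For example, an application image typically starts from one of these stripped-down official bases and adds only what it needs (the package and script names below are illustrative):

```dockerfile
FROM ubuntu:18.04                  # minimal official base image
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*    # keep the layer small
COPY myscript.sh /usr/local/bin/myscript.sh
CMD ["myscript.sh"]
```

The benefit for the publishing companies is largely that their OS remains the default platform developers build on.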

what is a docker image? what is it trying to solve?

I understand that it is software shipped in some sort of binary format. In simple terms, what exactly is a docker-image? And what problem is it trying to solve? And how is it better than other distribution formats?
As a student, I don't completely get the picture of how it works and why so many companies are opting for it? Even many open source libraries are shipped as docker images.
To understand Docker images, you should first understand the main element of the Docker mechanism: UnionFS.
Unionfs is a filesystem service for Linux, FreeBSD and NetBSD which implements a union mount for other file systems. It allows files and directories of separate file systems, known as branches, to be transparently overlaid, forming a single coherent file system.
Docker images consist of several layers (levels). Each layer is a write-protected filesystem; every instruction in the Dockerfile creates its own layer, which is placed on top of the previously created ones. When docker run or docker create is invoked, it adds a layer on top with write permission (it also does a lot of other things). This approach to distributing containers is very good, in my opinion.
Disclaimer:
This is my opinion, based on things I've read; feel free to correct me if I'm wrong.
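To illustrate the layering described above, each instruction in a Dockerfile like this produces its own read-only layer (file names are illustrative):

```dockerfile
FROM ubuntu:18.04                                   # base layers pulled from the registry
RUN apt-get update && apt-get install -y python3    # one new layer with the installed packages
COPY app.py /app/app.py                             # another layer with the application file
CMD ["python3", "/app/app.py"]                      # metadata only, no filesystem change
```

Running `docker history <image>` on the built image lists these layers; `docker run` then adds a thin writable layer on top for the container.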