Does the Union File System (UFS) concept exist only in Docker?

I'm a beginner with Docker. When I read about Docker's union file system, I wondered: does the UFS concept exist only in Docker?

Union mounts of various kinds have been around for a long time; they are really not a novel idea.
UnionFS is just one implementation of this idea and was developed independently of Docker.
Docker actually uses various implementations of the idea, depending on what's available and configured: AUFS, OverlayFS, and others.
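You can even build a union mount by hand, entirely outside of Docker. Below is a minimal sketch using OverlayFS on a modern Linux kernel (directory names are illustrative; the mount itself needs root):

    # Create the branches: a read-only lower dir, a writable upper dir,
    # kernel scratch space (work), and the unified view (merged).
    mkdir -p lower upper work merged
    echo "from the read-only branch" > lower/a.txt

    sudo mount -t overlay overlay \
        -o lowerdir="$PWD/lower",upperdir="$PWD/upper",workdir="$PWD/work" \
        "$PWD/merged"

    # Writes to the merged view land in 'upper'; 'lower' stays untouched --
    # the same mechanism that keeps Docker image layers read-only.
    echo "new file" > merged/b.txt
    ls upper/    # b.txt shows up here, not in lower/

    sudo umount "$PWD/merged"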

Related

Use a single docker-compose.yml file with both Docker Compose and Stack: pros and cons?

I'm new to Docker. I'm exploring Docker Compose for development, and Docker Stack on Swarm for production.
I'm running into issues comparing their capabilities, specifically with regard to secrets, which are not supported without Swarm; this implies a difference in how environment variables are declared. This leads me to a question:
What are the pros and cons of sharing the same docker-compose.yml file between Compose and Stack, and what are the best patterns?
The pro I see is that I can maintain a single file, which would otherwise share a lot of configuration with a "twin" docker-stack.yml.
The con I see is that the two files support different specific features and, similarities aside, their scopes are fundamentally different. Even more so if in the future I move to a different orchestration system, where a stack file would probably be replaced entirely.
Is it recommended to keep these files separate, or is it acceptable to unify them into a single file with specific techniques?
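One pattern that comes up in this situation, shown as a hedged sketch below (file names are illustrative), is Compose's multi-file merging: keep the shared configuration in one base file and the Compose- or Swarm-specific parts in small overrides, then render a merged file for the stack:

    # Development: Compose merges the base file with a dev-only override.
    docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

    # Production: render the merged configuration once, then deploy it
    # to the Swarm as a stack.
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml config > docker-stack.yml
    docker stack deploy -c docker-stack.yml myapp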

How common is library sharing between containers and host?

I am researching shared libraries between containers from a security point of view.
Many security resources discuss the shared-library scenario, where containers share a dependency reference (i.e. the same file). I can come up with two scenarios:
The de facto discussed scenario, where some lib directory is mounted from the host machine into the container.
An invented scenario, where a shared volume is created for different containers (different services, replicas of the same container, or both) and populated with libraries that are shared between all of the containers.
Despite the discussions, I was not able to find this kind of behavior in the real world, so the question is: how common is this approach?
A reference to an official and well-known image which uses this technique would be great!
This is generally not considered a best practice at all. A single Docker image is intended to be self-contained and include all of the application code and libraries it needs to run. SO questions aside, I’ve never encountered a Docker image that suggests using volumes to inject any sort of code into containers.
(The one exception is the node image; there’s a frequent SO question about using an anonymous volume for node_modules directories [TL;DR: changes in package.json never update the volume] but even then this is trying to avoid sharing the library tree with other contexts.)
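For reference, that node_modules pattern looks roughly like this (a hedged sketch with hypothetical paths; the anonymous volume shadows the bind-mounted subdirectory):

    # Bind-mount the project into /app, but mask /app/node_modules with an
    # anonymous volume so the host's node_modules (if any) is never used
    # inside the container. With a custom image, the volume is seeded from
    # the image's own node_modules when it is first created.
    docker run \
        -v "$PWD":/app \
        -v /app/node_modules \
        -w /app \
        node:18 npm start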
One helpful technique is to build an intermediate base image that contains some set of libraries, and then build applications on top of it. At a mechanical level, for a particular version of the ubuntu:18.04 image, I think all other images based on it use physically the same libc.so.6, but from a Docker point of view this is an implementation detail.
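A sketch of that technique (the image name mycorp/libs-base and the chosen package are hypothetical):

    # Intermediate base image that carries the shared libraries.
    cat > Dockerfile.base <<'EOF'
    FROM ubuntu:18.04
    RUN apt-get update \
     && apt-get install -y --no-install-recommends libpq5 \
     && rm -rf /var/lib/apt/lists/*
    EOF
    docker build -t mycorp/libs-base -f Dockerfile.base .

    # Each application image builds on top of it; all of them physically
    # share the base layers (and thus the same library files) on disk.
    touch app    # placeholder binary, just for the sketch
    cat > Dockerfile.app <<'EOF'
    FROM mycorp/libs-base
    COPY app /usr/local/bin/app
    CMD ["/usr/local/bin/app"]
    EOF
    docker build -t mycorp/app -f Dockerfile.app .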

What is a Docker image? What is it trying to solve?

I understand that it is software shipped in some sort of binary format. In simple terms, what exactly is a Docker image? What problem is it trying to solve? And how is it better than other distribution formats?
As a student, I don't completely get the picture of how it works and why so many companies are opting for it. Even many open source libraries are shipped as Docker images.
To understand Docker images, you should first understand the main element of the Docker mechanism: the union file system (UnionFS).
Unionfs is a filesystem service for Linux, FreeBSD and NetBSD which implements a union mount for other file systems. It allows files and directories of separate file systems, known as branches, to be transparently overlaid, forming a single coherent file system.
Docker images consist of several layers (levels). Each layer is a read-only filesystem; for every instruction in the Dockerfile, a new layer is created and placed on top of the ones already created. When docker run or docker create is invoked, a writable layer is added on top (along with a lot of other setup). This approach to distributing containers is very good in my opinion.
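A small sketch of this (hypothetical file names; assumes Docker on a Linux host). Each instruction in the Dockerfile below yields its own read-only layer, which docker history then lists:

    echo 'echo hello from the demo' > app.sh

    cat > Dockerfile <<'EOF'
    # Base image: its layers are pulled from the registry.
    FROM ubuntu:18.04
    # Adds a layer containing the refreshed package index.
    RUN apt-get update
    # Adds a layer containing just the copied file.
    COPY app.sh /usr/local/bin/app.sh
    # Metadata-only instruction: contributes an empty layer.
    CMD ["sh", "/usr/local/bin/app.sh"]
    EOF

    docker build -t layer-demo .
    # Shows the stack of layers, one per instruction, with their sizes.
    docker history layer-demo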
Disclaimer:
This is my understanding, pieced together from things I've read; feel free to correct me if I'm wrong.

Docker Container compared with Unikernel

I recently deployed a tiny Haskell app with Docker, using "scratch-haskell" as a base image.
Then I read about Unikernels and HaLVM. And I got a little confused.
My Docker container is about 6 MB. A Unikernel (with the same Haskell app) would be roughly the same size, I guess.
The Unikernel runs directly on the Xen hypervisor, whereas the Docker image (or LXC containers in general) runs on a normal Linux distribution, which itself runs on bare metal.
Now I have the "choice" of running Linux with multiple minimal containers OR a Xen machine with multiple small Unikernels.
But what are the advantages and disadvantages of those two solutions? Is one more secure than the other? And are there any significant performance differences between them?
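For context, the tiny image described above is typically produced with the "scratch" pattern: an empty base plus a single statically linked binary (a sketch with hypothetical paths; it assumes ./app has already been built as a static executable):

    # The image is nothing but an empty filesystem plus one static binary.
    cat > Dockerfile <<'EOF'
    FROM scratch
    COPY app /app
    ENTRYPOINT ["/app"]
    EOF

    docker build -t tiny-app .
    docker images tiny-app    # image size is roughly the size of the binary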
From http://wiki.xenproject.org/wiki/Unikernels:
What do Unikernels Provide?
Unikernels normally generate a singular runtime environment meant to enable single applications built solely with that environment. Generally, this environment lacks the ability to spawn subprocesses, execute shell commands, create multiple threads, or fork processes. Instead, they provide a pure incarnation of the language runtime targeted, be it OCaml, Haskell, Java, Erlang, or some other environment.
Unikernels Versus Linux Containers
Much has been made recently of the advantages of Linux Container solutions over traditional VMs. It is said by container advocates that their lightweight memory footprint, quick boot time, and ease of packaging makes containers the future of virtualization. While these aspects of containers are certainly notable, they do not spell the end of the world of the hypervisor. In fact, Unikernels may reduce the long-term usefulness of containers.
Unikernels facilitate the very same desirable attributes described by the container proponents, with the addition of an absolutely splendid security story which few other solutions can match.
So if you just want to run a Haskell application, Unikernels may work for you, and they should have even less overhead than Docker (and Docker's overhead is very small anyway). But if your application needs a prepared environment, or needs to communicate with non-Unikernel software, Docker is a better choice. I guess it is too early to say whether Unikernels will become useful or widespread; only time will tell.
Unikernels are great for things that are stateless. When you start needing disk access, you are better off using Docker.
That's why all the "killer" apps for unikernels are statically configured kernels, like static web pages or software-defined networking stacks.
There are many good explanations; here's a simple one:
Unikernels are VMs, but specialized and optimized for a particular application.

Docker best practices on base images and host OS

I have a question about best practices for using Docker in production.
In my company we use SLES12 as the host OS. Should we also use SLES as the base for our Docker containers?
In my opinion, the SLES image is too big to follow the Docker recommendation of small base images.
My question is: does anyone have experience running Docker in production with different host and container OSes? Are there any disadvantages if we use a small Debian/Ubuntu base image for our containers (overhead, security, ...)?
I agree with your assessment that for dockerized applications, smaller base images are preferred. This saves disk space, reduces network transfer, and offers a smaller software surface for security vulnerabilities and general complexity. To my knowledge, differing host/container distributions are the norm, and when they align it's more of a coincidence than an intentional design. Since the ways you interact with the host OS and the container are so very different, even if they were identical, your procedures for keeping things patched would differ. That said, depending on your staff's skill set, sticking to the same package-manager ecosystem (rpm vs. deb) may have some benefit in terms of familiarity with tooling, so finding a small RPM-based base distro might be a good choice.
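As a quick sanity check, you can pull a few candidate base images and compare their sizes locally (a sketch; tags are illustrative and sizes vary by tag and architecture):

    docker pull opensuse/leap
    docker pull debian:stable-slim
    docker pull alpine
    # List the pulled images side by side with their on-disk sizes.
    docker images --format 'table {{.Repository}}:{{.Tag}}\t{{.Size}}'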
