What is the Docker cross-platform architecture?

Docker isn't a VM so it only runs apps native to the OS, right? Does that mean Docker for Windows only runs Windows .exe files? So Docker containers for Windows and Linux, what do they have in common, if anything? Are containers reusable on different operating systems in any way?

"Docker isn't a VM"
Correct. Containers should be thought of as processes running in a sandbox. If you look into how this isolation takes place on Linux, you'll quickly run into namespaces & cgroups. One definition of containers I've seen lately states that:
"containers are processes born from tarballs, anchored to namespaces and controlled by cgroups"
(Photo: slide by Dan Mayer, #LeadDevLondon, June 2018)
You can also find some interesting material regarding Linux containers here: Anatomy of a Container: Namespaces, cgroups & Some Filesystem Magic - LinuxCon by Jérôme Petazzoni
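To see the "processes in a sandbox" idea concretely on a Linux host, here is a minimal sketch (the container name and image are arbitrary) that inspects a running container's namespaces and cgroups straight from /proc:

    # Start a throwaway container and find its PID on the host
    docker run -d --name ns-demo alpine sleep 300
    PID=$(docker inspect -f '{{.State.Pid}}' ns-demo)

    # Each namespace the container lives in is a link under /proc
    sudo ls -l /proc/$PID/ns
    # Compare with the namespaces of your own shell:
    ls -l /proc/$$/ns

    # The cgroups controlling the process are visible the same way
    cat /proc/$PID/cgroup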
Docker for Windows only runs Windows .exe files?
No. Consider that a developer with a Windows PC might work on Linux-based containers that are later deployed to the cloud. Docker for Windows brings this flexibility, BUT if you run Linux containers, they will be running in some kind of virtualization environment. Initially, Docker Toolbox used Oracle VirtualBox; now Docker for Windows uses Hyper-V.
I don't know much about how the isolation takes place inside the Windows OS, but I think the logic is similar to Linux's. Some info about Windows containers:
Windows Container Types
Windows Containers include two different container types, or runtimes.
Windows Server Containers – provide application isolation through process and namespace isolation technology. A Windows Server Container shares a kernel with the container host and all containers running on the host. These containers do not provide a hostile security boundary and should not be used to isolate untrusted code. Because of the shared kernel space, these containers require the same kernel version and configuration.
Hyper-V Isolation – expands on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine. In this configuration, the kernel of the container host is not shared with other containers on the same host. These containers are designed for hostile multitenant hosting with the same security assurances of a virtual machine. Since these containers do not share the kernel with the host or other containers on the host, they can run kernels with different versions and configurations (within supported versions) - for example all Windows containers on Windows 10 use Hyper-V isolation to utilize the Windows Server kernel version and configuration.
Running a container on Windows with or without Hyper-V Isolation is a runtime decision. You may elect to create the container with Hyper-V isolation initially and later at runtime choose to run it instead as a Windows Server container.
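To illustrate that runtime decision: on a Windows host the isolation mode is just a flag on docker run (a sketch; the image tag must match a kernel version your host supports):

    # Run as a Windows Server Container (shared kernel)
    docker run --isolation=process mcr.microsoft.com/windows/nanoserver:ltsc2019 cmd /c ver

    # Run the same image inside a lightweight Hyper-V utility VM instead
    docker run --isolation=hyperv mcr.microsoft.com/windows/nanoserver:ltsc2019 cmd /c ver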
Windows and Linux, what do they have in common, if anything?
In general, I would answer that containers on both platforms serve the same idea: microservices, separation of concerns, do one thing and do it well.
Are containers reusable on different operating systems in any way?
Yes and no. You may face limitations. For example, if you have an application built FROM ubuntu:latest and want to make it work on a Raspberry Pi, you will have to rebuild it from a base image made for the ARM architecture. Docker is not an abstraction that will take any container and make it work on any architecture or OS. You have to know what you are trying to achieve and carefully decide what you finally use.
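As a hedged sketch of what "rebuilding for ARM" can look like (image and tag names are illustrative; docker buildx needs a recent Docker with BuildKit enabled):

    # Option 1: change the base image in the Dockerfile to an ARM variant,
    # e.g. FROM arm32v7/ubuntu:latest instead of FROM ubuntu:latest

    # Option 2: cross-build the same Dockerfile for the target architecture
    docker buildx build --platform linux/arm/v7 -t myapp:armv7 .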

Related

Does Docker on Windows need Hyper-V enabled?

Does Docker require Hyper-V to be enabled on Windows? If yes, why?
What is the role of Hyper-V in this case?
I'm using Windows 10 Home. What is the alternative to Hyper-V for installing Docker?
If you use Windows 10 Professional and your BIOS supports hardware virtualization, I suggest you enable Hyper-V.
When you run a Linux container on Windows 10, it still needs a Linux system as the Docker host, because a Linux container cannot share a kernel with Windows.
If you enable Hyper-V, Docker for Windows will automatically set up a MobyLinuxVM in Hyper-V that acts as the Docker host machine. Compared to the traditional solution of installing Linux in VirtualBox, Hyper-V has much better performance, because it is a type-1 hypervisor that runs directly on the hardware rather than on top of the Windows OS, much like VMware ESX.
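For reference, enabling Hyper-V is a one-liner in an elevated PowerShell (a standard Windows feature command; a reboot follows):

    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All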
Finally, if you use the Home edition of Windows 10, you have to install VirtualBox as the Docker host machine and use Docker Toolbox; for details on this legacy desktop solution, see https://docs.docker.com/toolbox/overview/.
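In either setup you can verify that the real Docker host is a Linux VM (a small check using standard docker info template fields):

    docker info --format '{{.OperatingSystem}} ({{.OSType}}), kernel {{.KernelVersion}}'
    # reports a Linux kernel even though the docker client runs on Windows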
Update: some additional points you may want to know:
a) Linux containers:
A Docker container has to share a kernel with its host, and there is no Linux kernel on Windows, so in all cases you need a virtual machine running Linux as the Docker host: either Hyper-V, or VirtualBox if there is no Hyper-V support.
b) Windows containers:
In theory, a Windows container can share the Windows kernel, so no virtual machine is needed.
But Microsoft added container support much later than Linux did, so different hosts use different solutions; see the "Windows Container Types" excerpt from the Microsoft site quoted earlier: Windows Server Containers share the host kernel, while Hyper-V Isolation runs each container in a highly optimized virtual machine.

Run Docker without a host OS?

Is it possible to run Docker without any host OS, i.e. natively? I believe that would be a performance boost, if it is possible.
Suppose I have a tool that runs on the Linux kernel. I create a Docker container with some extra dependencies. Now I share that container with another person who has Linux and can run it.
But I want to run that container without a host OS, since a container on top of a host OS seems like a double layer of OS.
Docker itself is not a VM, so there is no double layer of OS. Docker is a tool to run applications with settings that isolate them from other applications running on the same OS kernel. Docker does include a VM with Docker for Windows and Docker for Mac to run the Linux kernel so you can run Linux containers. There is an option to run native Windows containers with Server 2016, but if you are looking for minimal size and efficiency, I would suggest looking elsewhere.
The closest things to what you are looking for are:
Unikernels: these are applications compiled into a kernel with everything else removed, designed to run inside a VM for a very specialized task, often security-related. They are still early in their development, but Docker does use some of their technology inside its project.
LinuxKit (part of the Moby Project): this is how Docker creates the VMs for Docker for Windows and Docker for Mac. It is a container-based Linux operating system that you can custom-compile with only the containers you want to run. Most of the focus is still on VMs, but bare metal is an option.
Scratch base image: if you statically compile your application to remove all of the library dependencies, you can have a container without any shell or other OS tools. This is often seen in Go binaries shipped as Docker containers to do a single task with a very small attack surface. As a Docker container, it still requires the underlying Linux OS to run the binary.
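As a hedged sketch of that scratch pattern (file names are illustrative; assumes a main.go with no dependencies outside the standard library):

    # Build stage: produce a fully static binary (CGO disabled, so no libc)
    FROM golang:1.21 AS build
    WORKDIR /src
    COPY main.go .
    RUN CGO_ENABLED=0 go build -o /hello main.go

    # Final stage: no shell, no package manager, just the binary
    FROM scratch
    COPY --from=build /hello /hello
    ENTRYPOINT ["/hello"]

The resulting image holds a single file, yet as noted above it still needs a Linux kernel underneath to execute.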

Is Docker Toolbox or Docker for Mac beneficial over virtualization solutions?

Initially, Docker for Linux leveraged the namespace and cgroup primitives to provide a containerization solution on the Linux platform. It used LXC, and later runC, to jail Docker processes. While they are extending support for Docker on Mac/Windows, it seems they are taking an inelegant workaround that defeats the whole purpose of using containerization over virtualization.
Docker Toolbox used boot2docker Linux (based on a stripped-down version of Tiny Core) to host Docker containers. boot2docker runs on Oracle VirtualBox.
Docker for Mac runs Alpine Linux on OS X Yosemite's native virtualization, the Hypervisor framework. The interfacing is realized through HyperKit, built on top of xhyve (an OS X port of bhyve).
Docker for Windows runs on Hyper-V virtualization framework on Windows 10.
The reason for using Docker (and containers in general) over traditional VMs is negligible overhead and near-native performance. Containers have to be lightweight to be useful.
How do containers compare to virtual machines?
They are complementary. VMs are best used to allocate chunks of hardware resources. Containers operate at the process level, which makes them very lightweight and perfect as a unit of software delivery.
As both Docker for Mac and Docker for Windows rely on some virtualization technology behind the scenes, does using Docker on these platforms still retain its relevance? Doesn't using virtualization to emulate containerization defeat the whole purpose of switching to Docker? Just as a side note, this article, too, supports my viewpoint.
As both Docker for Mac and Docker for Windows rely on some virtualization technology behind the scenes, does using Docker on these platforms still retain its relevance?
Of course. Pending full native container support on those platforms, you still benefit from the main advantages of Docker: service discovery, orchestration (Kubernetes/Swarm), and monitoring.
Those services are easier to scale as containers than they would be as individual VMs.
Doesn't using virtualization to emulate containerization defeat the whole purpose of switching to Docker?
No, because without Docker you would be left with one VM in which all your services would have to live, without the benefit of isolation and individual upgrades.

Can a Linux machine with Docker deploy a Windows container?

I have a Linux server with 16 GB of RAM and a Docker host installed. I would like to deploy a Windows Server container on it. Is it possible? Has anyone tried this solution?
Update 2019
As noted by duct_tape_coder in the comments:
Microsoft has improved the network options for containers and now allows multiple containers per pod with improved namespace.
In theory (original answer Oct 2015):
There is no "Windows container" running on a Linux host.
And a Linux container would not run directly on a Windows server, since it relies on system calls to a Linux kernel.
You certainly can run those Linux containers on any Windows machine through a VM.
That is what Docker Toolbox will install.
There will be support for Docker on Windows soon, but that will be for Windows containers, not Linux containers.
Update 2017: yes, LinuxKit allows running a Linux container through a Hyper-V isolation wrapper on a Windows platform, using a minimal Linux OS built from LinuxKit.
That is still the same idea: Linux running inside a VM on Windows.
That is not a Linux container deployed directly on a Windows server: it is deployed inside a Linux server running in a VM on Windows.
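With that LinuxKit/LCOW setup, running a Linux image from a Windows host looked roughly like this (a sketch; at the time the flag required Docker's experimental mode):

    docker run --rm --platform linux alpine uname -a
    # prints a Linux kernel version: the container actually runs inside
    # the minimal LinuxKit VM, not on the Windows kernel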
Actually... (update Dec. 2016)
See "Linux and Windows, living together, total chaos! (OK, Kubernetes 1.5)"
Kubernetes 1.5 includes alpha support for both Windows Server Containers, a shared kernel model similar to Docker, and Hyper-V Containers, a single-kernel model that provides better isolation for multi-tenant environments (at the cost of greater latency).
The end result is the ability to create a single Kubernetes cluster that includes not just Linux nodes running Linux containers or Windows nodes running Windows containers, but both side by side, for a truly hybrid experience.
For example, a single service can have PODs using Windows Server Containers and other PODs using Linux containers.
But:
Though it appears fully functional, there do appear to be some limitations in this early release, including:
The Kubernetes master must still run on Linux due to dependencies in how it’s written. It’s possible to port to Windows, but for the moment the team feels it’s better to focus their efforts on the client components.
There is no native support for network overlays for containers in Windows, so networking is limited to L3. (There are other solutions, but they're not natively available.)
The Kubernetes Windows SIG is working with Microsoft to solve these problems, however, and they hope to have made progress by Kubernetes 1.6’s release early next year.
Networking between Windows containers is more complicated because each container gets its own network namespace, so it’s recommended that you use single-container pods for now.
Applications running in Windows Server Containers can run in any language supported by Windows. You CAN run .NET applications in Linux containers, but only if they're written in .NET Core. .NET Core is also supported by the Nano Server operating system, which can be deployed on Windows Server Containers.
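As a sketch of what that hybrid scheduling looked like in the Kubernetes 1.5 era (the node label and image are illustrative of that time):

    apiVersion: v1
    kind: Pod
    metadata:
      name: iis-demo
    spec:
      nodeSelector:
        beta.kubernetes.io/os: windows   # steer this Pod onto a Windows node
      containers:
      - name: iis
        image: microsoft/iis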

Docker relationship to VMs and LXC

My understanding of Linux Containers (LXC) is that it provides a native hypervisor for Linux systems, similar to Windows' Hyper-V introduced in Windows 8. By "native hypervisor", I mean, the ability for the Linux system to host guest VMs inside of it without having to install any kind of specialized virtualization software.
My understanding of Docker is that it somehow builds on top of LXC, and allows application developers to define:
The exact app stack of a VM/node, including the OS, the exact configuration and tuning of the OS, and any tools or applications installed/configured/deployed to that OS; and
The exact resource requirements for running this VM/node, including CPU requirements, memory/disk/network requirements, load balancing and replication requirements, etc. Docker then figures out what nodes to run the container on, using these declared requirements as its baseline.
So first off, if my understanding of LXC or Docker is mislead at all, please begin by correcting me!
Assuming I'm more or less correct in my understanding, I ask:
What is the relationship between Docker and, say, VMware or Xen VMs? Does Docker "sit on top" of the virtualization layer? In other words, are there "Docker bindings" for different virtualization platforms (VMware, Xen, KVM, etc.), such that I could take a Docker container for myapp and deploy it to any Docker-ified platform?
What is the relationship between LXC and Docker? Does Docker simply extend LXC, or is it a similar (but completely separate) concept altogether? If it's an extension of LXC, then in what way?
Relationship between LXC and Docker: Docker originally used LXC, but since Docker 0.9 it uses libcontainer and no longer relies on lxc-start to start containers. Compared to LXC, Docker offers a REST API, lets you push and pull images to and from a registry, and lets you build images using Dockerfiles...
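To make the contrast concrete, here is a minimal sketch of the workflow Docker layers on top of the container runtime (image and registry names are illustrative):

    # A Dockerfile declaratively describes the image
    printf 'FROM ubuntu:latest\nCMD ["echo", "hello"]\n' > Dockerfile

    docker build -t myapp .            # build an image from the Dockerfile
    docker run --rm myapp              # run a container from that image
    docker tag myapp myrepo/myapp      # images move to/from a registry
    docker push myrepo/myapp

    # The REST API underneath, exposed here over the local Unix socket:
    curl --unix-socket /var/run/docker.sock http://localhost/version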
