Docker's relationship to VMs and LXC

My understanding of Linux Containers (LXC) is that it provides a native hypervisor for Linux systems, similar to Windows' Hyper-V introduced in Windows 8. By "native hypervisor" I mean the ability for the Linux system to host guest VMs inside of it without having to install any kind of specialized virtualization software.
My understanding of Docker is that it somehow builds on top of LXC, and allows application developers to define:
The exact app stack of a VM/node, including the OS, the exact configuration and tuning of the OS, and any tools or applications installed/configured/deployed to that OS; and
The exact resource requirements for running this VM/node, including CPU requirements, memory/disk/network requirements, load balancing and replication requirements, etc. Docker then figures out what nodes to run the container on, using these declared requirements as its baseline.
So first off, if my understanding of LXC or Docker is mistaken at all, please begin by correcting me!
Assuming I'm more or less correct in my understanding, I ask:
What is the relationship between Docker and, say, VMware or Xen VMs? Does Docker "sit on top" of the virtualization layer? In other words, are there "Docker bindings" for different virtualization platforms (VMware, Xen, KVM, etc.), so that I could take a Docker container for myapp and deploy it to any Docker-ified platform?
What is the relationship between LXC and Docker? Does Docker simply extend LXC, or is it a similar (but completely separate) concept altogether? If it's an extension of LXC, then in what way?

Regarding the relationship between LXC and Docker: Docker originally used LXC, but since Docker 0.9 it uses libcontainer and no longer relies on lxc-start to start containers. Compared to LXC, Docker offers a REST API, lets you push and pull images to and from a registry, lets you build images from Dockerfiles, and so on.
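As a rough illustration of those Docker-specific additions (the registry hostname and image name below are made up), the everyday workflow of building from a Dockerfile and moving the result through a registry looks like this:

```
# Describe the image in a Dockerfile, build it, then push/pull it via a registry.
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y nginx
CMD ["nginx", "-g", "daemon off;"]
EOF

docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
docker pull registry.example.com/myapp:1.0
```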

What is the Docker cross-platform architecture?

Docker isn't a VM, so it only runs apps native to the OS, right? Does that mean Docker for Windows only runs Windows .exe files? So what do Docker containers for Windows and Linux have in common, if anything? Are containers reusable on different operating systems in any way?
"Docker isn't a VM"
Correct; containers should be thought of as processes running in a sandbox. If you look into how this isolation is implemented in Linux, you'll definitely run into namespaces and cgroups. One definition of containers I've seen lately states that:
"containers are processes born from tarballs, anchored to namespaces and controlled by cgroups"
(Dan Mayer, #LeadDevLondon, June 2018)
You can also find some interesting material on Linux containers here: Anatomy of a Container: Namespaces, cgroups & Some Filesystem Magic - LinuxCon, by Jérôme Petazzoni
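As a small illustration of the "anchored to namespaces" part (the container name demo is just an example), you can find a container's main process on the host and list the namespaces it lives in:

```
# Assumes a Linux host with Docker installed; "demo" is an illustrative name.
docker run -d --name demo nginx

# Look up the host PID of the container's main process...
pid=$(docker inspect --format '{{.State.Pid}}' demo)

# ...and list the namespaces (net, pid, mnt, uts, ipc, ...) that isolate it.
sudo ls -l /proc/$pid/ns
```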
Docker for Windows only runs Windows .exe files?
No. Consider that a developer with a Windows PC might work on Linux-based containers that are later deployed to the cloud. Docker for Windows brings this flexibility, BUT if you run Linux containers, these will be running inside some kind of virtualized environment. Initially, Docker Toolbox used Oracle VirtualBox; now Docker for Windows uses Hyper-V.
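If you're ever unsure which kind of daemon your client is currently talking to, the daemon reports its operating system; a quick check (output depends on your setup):

```
# Prints "linux" or "windows", depending on which daemon the client is connected to.
docker info --format '{{.OSType}}'
```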
I don't know much about how the isolation takes place inside the Windows OS but I think the logic is similar to Linux. Some info about Windows containers:
Windows Container Types
Windows Containers include two different container types, or runtimes.
Windows Server Containers – provide application isolation through process and namespace isolation technology. A Windows Server Container shares a kernel with the container host and all containers running on the host. These containers do not provide a hostile security boundary and should not be used to isolate untrusted code. Because of the shared kernel space, these containers require the same kernel version and configuration.
Hyper-V Isolation – expands on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine. In this configuration, the kernel of the container host is not shared with other containers on the same host. These containers are designed for hostile multitenant hosting with the same security assurances of a virtual machine. Since these containers do not share the kernel with the host or other containers on the host, they can run kernels with different versions and configurations (within supported versions) - for example, all Windows containers on Windows 10 use Hyper-V isolation to utilize the Windows Server kernel version and configuration.
Running a container on Windows with or without Hyper-V Isolation is a runtime decision. You may elect to create the container with Hyper-V isolation initially and later at runtime choose to run it instead as a Windows Server container.
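A sketch of that runtime decision on a Windows host; the --isolation flag selects the mode per container, and the image tag below is just an example:

```
# Process isolation (Windows Server container): shares the host kernel.
docker run --isolation=process mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c echo hello

# Hyper-V isolation: the same image, but run inside a lightweight utility VM.
docker run --isolation=hyperv mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c echo hello
```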
Windows and Linux, what do they have in common, if anything?
In general, I would answer that containers serve the idea of microservices and separation of concerns: do one thing and do it well.
Are containers reusable on different operating systems in any way?
Yes and no. You may face limitations. For example, if you have an image that starts FROM ubuntu:latest and you want to make it work on a Raspberry Pi, you will have to build a new image from a base image made for the ARM architecture. Docker is not an abstraction that will take any container and make it work on any architecture or OS. You have to know what you are trying to achieve and make your decisions carefully about what you finally choose to use.
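For the Raspberry Pi case, one common approach is to build the image explicitly for the target architecture; a hedged sketch using docker buildx (the repository and tag are made up):

```
# Build and push a multi-architecture image (amd64 for the PC, arm64 for a 64-bit Pi).
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myrepo/myapp:latest \
  --push .
```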

Is "Docker" an implementation of "Container"?

I'm trying to learn Docker and came across the term "container". I'm a bit confused: in most of the online materials I referred to in order to understand Docker, the term "container" appears somewhere.
Can anyone help me understand the difference between Docker and a container, and whether Docker is one implementation of the container concept?
Thanks in advance.
A container is essentially a package (with an embedded application) that can be run anywhere. A container allows a developer to package an application with all of the dependencies it needs and ship it as one unit.
At a lower level, a container is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel. LXC (Linux Containers) combines the kernel's cgroups with support for isolated namespaces to provide an isolated environment for applications.
Docker is a tool that was designed to make it much easier to create, deploy, and run applications in containers, rather than using LXC (Linux Containers) directly.
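To make that contrast concrete, here is roughly what the two workflows look like; the container names are illustrative, and the exact lxc-create template options vary between distributions:

```
# Using LXC directly: create a container from a template, then start it and attach to it.
lxc-create -n web -t download -- --dist ubuntu --release jammy --arch amd64
lxc-start -n web
lxc-attach -n web

# Using Docker: pull an image and run a container from it in a single step.
docker run -d --name web nginx
```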
All Docker containers are containers, but not all containers are Docker containers;
Docker containers are a strict subset of containers.
"Container" is a generic term; as you said, Docker is a particular implementation of a container system.

VMware VIC (Photon OS + Docker) vs CoreOS + Docker

Could you please help me understand how VMware VIC actually works?
I'm familiar with Docker and have a very basic overview of CoreOS. Docker is your application container environment, which increases your app's portability, whereas CoreOS is a very lightweight Linux system that has the bare minimum needed to launch Docker containers.
On the other hand, there are lots of virtual machine platforms (e.g. VMware) which are so heavy that humanity had to invent Docker. The only benefit a VM has over Docker is that it's more secure.
Questions:
So why try to put Docker inside a virtual machine? In other words, why do you need VIC?
How can a virtual machine be "small"? Isn't it a container then?
Why do you need an additional layer like Photon OS? Why not just start Docker instances directly from the VM? Docker inside an OS, inside a VM, sounds like overhead.
I've played with VIC for some time, so I'll try to answer your questions.
Think of VIC as a Docker daemon you can send commands to, such as ps, run, etc. Usually VIC has lots of resources assigned to it. When VIC receives a run command, it spawns a new VM with the requested profile; you specify how much memory and CPU should be assigned via the usual docker arguments. The container runs in this small VM, which is spawned exclusively for that container, so it is guaranteed that each container runs in its own VM. When you stop the container, the VM is shut down as well. VIC has implemented most Docker features so far (e.g. volumes, networking), with the exception of the exec command.
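A hedged sketch of what that looks like from the client side; VIC exposes a Docker-compatible endpoint, so the standard docker client is used (the address is made up, and whether every resource flag maps one-to-one onto a VM profile is an assumption here):

```
# Point the standard docker client at the VIC endpoint and request a sizing profile;
# VIC then spawns a dedicated VM with roughly 2 GB of RAM and 2 CPUs for this container.
docker -H vic.example.com:2376 --tls run -d -m 2g --cpuset-cpus 0,1 nginx
```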
Well, it's a design that can be considered an overhead: VIC creates its own VM for each container, and that VM runs the container. I believe this is "a hack" to bring the old-fashioned VMware tooling to the Docker ecosystem.
Definitely, this is overhead, but nothing significant, I suppose. You can have a look at Photon Controller, which should be the product without the additional layer, but it does not support the VMware tools yet.
I would say it has some pros and cons:
PROS:
VIC spawns a new VM automatically with the desired CPU and memory profile
VIC can be controlled via native docker calls
VIC integrates with other VMware products: monitoring, storage, networking
CONS:
VIC has some bugs in its Docker implementation, or it does not behave the same way as native Docker, so it's hard to integrate with other systems like Mesos and Marathon.
VIC supports only Docker API version 1.23
VIC doesn't support the exec command

Docker vs Vagrant

Every Docker image, as I understand it, is based on a base image - for example, Ubuntu.
And if I want to isolate some process, should I deploy the Ubuntu Docker base image (what is the difference from Vagrant here?) and then create the necessary sub-image on top of the Ubuntu image?
So, if Ubuntu is launched via Vagrant and via Docker, what is the practical difference?
And if I use the Docker provider in Vagrant, what is the difference between Vagrant and Docker then?
And, in Docker, is it possible to isolate processes on some PC without a base image and without sharing it to another PC?
Vagrant is a utility to help you automate setting up VMs. Docker is a utility that helps you use containerization in Linux.
A virtual machine runs a whole system, and emulates hardware. Containers section off processes in a single running kernel without emulating hardware.
Both a VM and a Docker image may be Ubuntu 14.04, but with the Docker image you don't need to run the whole OS.
For example, if I want to run an nginx container based on Ubuntu, I'd end up with only the nginx process running. No upstart/systemd/init is needed. A VM would run an init system, manage its own networking, and run other services as well. The container image approach that uses a Linux distro base is mostly for convenience.
It is entirely possible to run Docker containers with very minimal images. A statically compiled binary alone in an image is all you'd need to run a container.
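A sketch of that minimal-image idea (all names here are illustrative; myapp stands for any statically linked binary you have built):

```
# Write a minimal Dockerfile: no distro, no init system, just one static binary.
cat > Dockerfile <<'EOF'
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]
EOF

docker build -t myapp-minimal .
docker run --rm myapp-minimal
```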
Vagrant: Vagrant is a project that helps with spawning virtual machines. It started as a command-line wrapper around VirtualBox, something similar to a Gemfile for VMs. You can choose the base image to start with, the network, IP, and shared folders, and put it all in a file that anyone can reuse to spawn an identically configured machine. Vagrant has different extensions, provisioning options, and VM providers. You can run VirtualBox or VMware, and it is extensible enough to create instances on EC2.
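For example, a typical Vagrant workflow looks like this (the box name is just a common public example):

```
vagrant init ubuntu/trusty64   # writes a Vagrantfile describing the base box
vagrant up                     # boots the VM with the configured provider (VirtualBox by default)
vagrant ssh                    # logs into the running VM
vagrant destroy                # tears the VM down again
```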
Docker: Docker lets you package an application with all of its dependencies into a standardized unit of software, which reduces friction between development, QA, and testing. It lets you change your application dynamically, adding new capabilities and scaling services out to address problem areas quickly. Docker is positioning itself in an exciting place as the interface to PaaS, covering networking and service discovery so that applications don't have to care about the underlying infrastructure. Yes, there are still issues with Docker in production, but hopefully we'll see solutions to those problems, as the Docker team and contributors are working hard on them. The Docker volume driver mechanism allows third-party container data management solutions to provide data volumes for containers that operate on data, such as databases, key-value stores, and other stateful applications. The latest version comes with much more flexibility, built-in orchestration, advanced networking, secrets management, etc. One example, rexray (emccode/rexray), is a volume plugin that provides advanced storage functionality. We're finally starting to agree on more than just images and runtime.

Resources