I have installed Docker on Windows 10 to work with a single-node Hadoop cluster and enabled the Hyper-V feature for it. Now I don't need it, but I don't want to drop my Docker containers. Can I disable the Hyper-V feature for some time and re-enable it when I need to work with Docker again? Or might that somehow affect the existing containers?
Docker is built on Linux kernel features that Windows does not provide (such as cgroups and namespaces), which is why Docker on Windows 10 can use one of two backends: Hyper-V, or WSL 2, which is in turn also based on Hyper-V.
It is possible, although not recommended, to run Windows containers on a Windows host without Hyper-V, relying on Windows process isolation. That seems irrelevant to your case, though, since you ask about a Hadoop cluster, and Hadoop appears to be supported only on Linux.
So even if you manage to set up Docker to work without Hyper-V, setting up a Hadoop cluster will be impossible.
What is your concern? Is it about performance? Why would you like to turn Hyper-V off?
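If you do decide to toggle the feature, it can be done from an elevated PowerShell. A minimal sketch follows; each change requires a reboot, and while your images and the Docker VM's virtual disk stay on disk and should survive the toggle, back up anything important first:

```powershell
# Turn the Hyper-V feature off (requires a reboot)
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All

# ...and back on when you need Docker again (requires another reboot)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
```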
Related
Does Docker require Hyper-V enabled in Windows? If yes, why?
What is the role of Hyper-V in this case?
I'm using Windows 10 Home. What is the alternative to Hyper-V for installing Docker, please?
If you use Windows 10 Professional and your BIOS supports hardware virtualization, I suggest you enable Hyper-V.
When you run a Linux container on Windows 10, it still needs a Linux system as the Docker host, because a Linux container cannot share a kernel with Windows.
If Hyper-V is enabled, Docker for Windows automatically sets up a MobyLinuxVM in Hyper-V, a virtual machine that acts as the Docker host. Compared to the traditional solution of installing Linux in VirtualBox, Hyper-V has much better performance, because it does not run on top of the Windows OS; it is a type-1 hypervisor that sits directly on the hardware, much like VMware ESX.
Finally, if you use the Home edition of Windows 10, you have to install VirtualBox as the Docker host machine and use Docker Toolbox; see https://docs.docker.com/toolbox/overview/ for this legacy desktop solution.
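With Docker Toolbox, that VirtualBox host VM is managed through docker-machine. A minimal sketch (the VM name "default" is just the conventional choice):

```powershell
# Create a VirtualBox VM to act as the Docker host
docker-machine create --driver virtualbox default

# Point the docker client at the engine inside that VM
docker-machine env --shell powershell default | Invoke-Expression

# Sanity check: the client here talks to the engine in the VM
docker run --rm hello-world
```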
Update: some additional points you may want to know:
a) Linux containers:
A Docker container has to share a kernel with its host, and there is no Linux kernel on Windows, so in every case you need a virtual machine running Linux as the Docker host: either in Hyper-V, or in VirtualBox if there is no Hyper-V support.
b) Windows containers:
In theory, a Windows container can share the Windows kernel, so no virtual machine is needed.
But Microsoft added container support much later than Linux, so different hosts use different solutions; see this excerpt from the Microsoft website:
Windows Containers include two different container types, or runtimes.
Windows Server Containers – provide application isolation through process and namespace isolation technology. A Windows Server Container shares a kernel with the container host and all containers running on the host. These containers do not provide a hostile security boundary and should not be used to isolate untrusted code. Because of the shared kernel space, these containers require the same kernel version and configuration.
Hyper-V Isolation – expands on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine. In this configuration, the kernel of the container host is not shared with other containers on the same host. These containers are designed for hostile multitenant hosting with the same security assurances of a virtual machine. Since these containers do not share the kernel with the host or other containers on the host, they can run kernels with different versions and configurations (within supported versions); for example, all Windows containers on Windows 10 use Hyper-V isolation to utilize the Windows Server kernel version and configuration.
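Either way, you can check which operating system your Docker engine is actually running on. A quick check (the sample output is what I'd expect with the Hyper-V backend, not a guarantee):

```powershell
# Ask the engine which OS/kernel it lives on
docker info --format '{{.OperatingSystem}} ({{.OSType}})'
# With the Hyper-V/Linux backend this reports the Linux VM, e.g. "Docker Desktop (linux)"
```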
Docker isn't a VM, so it only runs apps native to the OS, right? Does that mean Docker for Windows only runs Windows .exe files? So Docker containers for Windows and Linux: what do they have in common, if anything? Are containers reusable on different operating systems in any way?
"Docker isn't a VM"
Correct. Containers should be considered processes running in a sandbox. If you search for how this isolation takes place in Linux, you'll definitely run into namespaces & cgroups. One definition of containers I've seen lately states that:
"containers are processes born from tarballs, anchored to namespaces and controlled by cgroups"
(slide by Dan Mayer, #LeadDevLondon, June 2018)
You can also find some interesting stuff regarding Linux containers here: Anatomy of a Container: Namespaces, cgroups & Some Filesystem Magic, a LinuxCon talk by Jérôme Petazzoni.
Docker for Windows only runs Windows .exe files?
No. Consider that a developer with a Windows PC might work on Linux-based containers that are later deployed to the cloud. Docker for Windows brings this flexibility, BUT if you run Linux containers, these will be running inside some kind of virtualization environment. Initially, Docker Toolbox used Oracle VirtualBox; now Docker for Windows uses Hyper-V.
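You can see this for yourself. A tiny check, assuming the default Linux-container mode of Docker for Windows:

```powershell
# A Linux image running on a Windows PC: uname reports the kernel of the
# hidden Linux VM, not Windows
docker run --rm alpine:latest uname -a
```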
I don't know much about how the isolation takes place inside the Windows OS but I think the logic is similar to Linux. Some info about Windows containers:
Windows Container Types
Windows Containers include two different container types, or runtimes.
Windows Server Containers – provide application isolation through process and namespace isolation technology. A Windows Server Container shares a kernel with the container host and all containers running on the host. These containers do not provide a hostile security boundary and should not be used to isolate untrusted code. Because of the shared kernel space, these containers require the same kernel version and configuration.
Hyper-V Isolation – expands on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine. In this configuration, the kernel of the container host is not shared with other containers on the same host. These containers are designed for hostile multitenant hosting with the same security assurances of a virtual machine. Since these containers do not share the kernel with the host or other containers on the host, they can run kernels with different versions and configurations (within supported versions); for example, all Windows containers on Windows 10 use Hyper-V isolation to utilize the Windows Server kernel version and configuration.
Running a container on Windows with or without Hyper-V Isolation is a runtime decision. You may elect to create the container with Hyper-V isolation initially and later at runtime choose to run it instead as a Windows Server container.
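In practice that runtime decision is just a flag on docker run. A sketch; the nanoserver tag is only an example, and process isolation additionally requires the image's kernel version to match the host's:

```powershell
# Hyper-V isolation: each container gets its own lightweight utility VM
docker run --rm --isolation=hyperv mcr.microsoft.com/windows/nanoserver:1809 cmd /c ver

# Process isolation: same image sharing the host kernel instead
# (Windows Server, or a client build that matches the image)
docker run --rm --isolation=process mcr.microsoft.com/windows/nanoserver:1809 cmd /c ver
```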
Windows and Linux, what do they have in common, if anything?
In general, I would answer that containers serve the idea of microservices and separation of concerns: do one thing and do it well.
Are containers reusable on different operating systems in any way?
Yes and no; you may face limitations. For example, if you have an application that builds FROM ubuntu:latest and you want to make it work on a Raspberry Pi, you will have to build a new container from a base image made for the ARM architecture. Docker is not an abstraction that will take any container and make it work on any architecture or OS. You have to know what you are trying to achieve and carefully make your decisions on what you finally choose to use.
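One way to make the architecture mismatch visible; a small check, where arm32v7/ubuntu is the ARM variant of the official base image:

```powershell
# Pull both images and compare the architecture baked into each
docker pull ubuntu:latest
docker image inspect --format '{{.Architecture}}' ubuntu:latest         # amd64 on a typical PC

docker pull arm32v7/ubuntu:latest
docker image inspect --format '{{.Architecture}}' arm32v7/ubuntu:latest # arm
```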
Practically, I want to play with .NET Core within Docker.
As I understand it from this post, to give myself the best flexibility I would install "Docker for Windows". That means I can ultimately deploy my .NET Core app to either a Windows or a Linux container; however, the Linux container is still a Hyper-V-managed Linux container.
1) Is there a way to instead use the Windows Subsystem for Linux (WSL) to do this in the Windows 10 Creators Update? It seems like less overhead than having Windows/Docker manage a separate Linux VM for me.
No, running Docker containers in WSL is not supported (link mine):
The docker engine is not a supported scenario in the short term. I would suggest hitting our User Voice page and upvoting Docker if you're looking to run the docker engine.
The docker client, however, should be running in build 14342. I have been able to run the docker client and connect to a docker engine running in a VM.
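That client-only setup works because the docker CLI just talks to the engine over a socket. A sketch, assuming you have enabled the Docker for Windows option to expose the daemon on tcp://localhost:2375 without TLS:

```powershell
# Point any docker client, including one inside WSL, at the engine in the VM
$env:DOCKER_HOST = "tcp://localhost:2375"
docker version   # the client runs here; the Server section answers from the VM
```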
As to why it's not supported:
WSL is a clean-room reimplementation of the Linux kernel interface. So it can't, for both technical and legal reasons, simply take the kernel features Docker depends on and "make them work". The team would need to reverse-engineer years of ongoing kernel development and reimplement it (or take some other nontrivial approach).
Initially, Docker on Linux leveraged the namespace and cgroup primitives to provide a containerization solution on the Linux platform. It used LXC, and later runC, to jail Docker processes. In extending Docker support to Mac/Windows, they seem to be taking an inelegant workaround that defeats the whole purpose of using containerization over virtualization.
Docker Toolbox used boot2docker Linux (based on a stripped-down version of Tiny Core Linux) to host Docker containers. boot2docker runs on Oracle VirtualBox.
Docker for Mac runs Alpine Linux on OS X Yosemite's native virtualization, the Hypervisor framework. The interfacing is realized through HyperKit, built on top of xhyve (an OS X port of bhyve).
Docker for Windows runs on the Hyper-V virtualization framework on Windows 10.
The reason for using Docker (and containers in general) over traditional VMs is negligible overhead and near-native performance. Containers have to be lightweight to be useful.
How do containers compare to virtual machines?
They are complementary. VMs are best used to allocate chunks of hardware resources. Containers operate at the process level, which makes them very lightweight and perfect as a unit of software delivery.
As both Docker for Mac and Docker for Windows rely on some virtualization technology behind the scenes, does using Docker on these platforms still retain its relevance? Doesn't using virtualization to emulate containerization defeat the whole purpose of switching to the Docker framework? Just as a side note, this article, too, supports my viewpoint.
As both Docker for Mac and Docker for Windows rely on some virtualization technology behind the scenes, does using Docker on these platforms still retain its relevance?
Of course. Pending full native container support on those platforms, you still benefit from the main advantages of Docker: service discovery, orchestration (Kubernetes/Swarm) and monitoring.
Those services are easier to scale as containers than they would be as individual VMs.
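For instance, scaling a containerized service with Docker's built-in swarm mode is a one-liner. A minimal sketch, with nginx:alpine standing in for any service image:

```powershell
docker swarm init                                           # turn this engine into a one-node swarm
docker service create --name web --replicas 2 nginx:alpine  # two replicas of the service
docker service scale web=5                                  # five replicas, no new VMs provisioned
```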
Doesn't using virtualization to emulate containerization defeat the whole purpose of switching to the Docker framework?
No, because without the Docker framework you would be left with one VM in which all your services would have to live, without the benefit of isolation and individual upgrades.
My understanding of Linux Containers (LXC) is that it provides a native hypervisor for Linux systems, similar to Windows' Hyper-V introduced in Windows 8. By "native hypervisor" I mean the ability for the Linux system to host guest VMs inside of it without having to install any kind of specialized virtualization software.
My understanding of Docker is that it somehow builds on top of LXC, and allows application developers to define:
The exact app stack of a VM/node, including the OS, the exact configuration and tuning of the OS, and any tools or applications installed/configured/deployed to that OS; and
The exact resource requirements for running this VM/node, including CPU requirements, memory/disk/network requirements, load balancing and replication requirements, etc. Docker then figures out what nodes to run the container on, using these declared requirements as its baseline.
So first off, if my understanding of LXC or Docker is mistaken at all, please begin by correcting me!
Assuming I'm more or less correct in my understanding, I ask:
What is the relationship between Docker and, say, VMware or Xen VMs? Does Docker "sit on top" of the virtualization layer? In other words, are there "Docker bindings" for different virtualization platforms (VMware, Xen, KVM, etc.) such that I could take a Docker container for myapp and deploy it to any Docker-ified platform?
What is the relationship between LXC and Docker? Does Docker simply extend LXC, or is it a similar (but completely separate) concept altogether? If it's an extension of LXC, then in what way?
Regarding the relationship between LXC and Docker: Docker started out using LXC, but since Docker 0.9 it uses libcontainer and no longer relies on lxc-start to start containers. Compared to LXC, Docker offers a REST API, lets you move images to and from a registry, lets you build images with Dockerfiles, and so on.
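Those last points are the day-to-day workflow that plain LXC does not give you out of the box. A short illustration, where myrepo/myapp is a placeholder image name:

```powershell
docker build -t myrepo/myapp:1.0 .   # build an image from the Dockerfile in this directory
docker push myrepo/myapp:1.0         # move it to a registry
docker pull myrepo/myapp:1.0         # retrieve it on any other Docker host
```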