I have a Linux server with 16 GB of RAM running as a Docker host. I would like to deploy a Windows Server container on it. Is this possible? Has anyone tried this?
Update 2019
As noted by duct_tape_coder in the comments:
Microsoft has improved the network options for containers and now allows multiple containers per pod with improved namespace handling.
In theory (original answer Oct 2015):
There is no "Windows container" running on a Linux host.
And a Linux container would not run directly on a Windows server, since it relies on system calls to a Linux kernel.
You certainly can run those Linux containers on any Windows machine through a VM.
That is what the Docker Toolbox installs.
There will be support for Docker on Windows soon, but that will be for Windows containers, not Linux containers.
Update 2017: yes, LinuxKit allows running a Linux container through a Hyper-V isolation wrapper on a Windows platform, using a minimal Linux OS built from LinuxKit.
That is still the same idea: linux running inside a VM on Windows.
That is not a Linux container deployed directly on a Windows server: it is deployed inside a Linux OS running in a VM on Windows.
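As a quick illustration of that kernel dependency (a minimal sketch; the output depends on your installation), you can ask the Docker daemon which OS it actually serves:

    # Ask the daemon which OS/architecture it runs containers for.
    # On a Linux host, or inside the Linux VM that Docker Toolbox /
    # Docker Desktop starts, this prints "linux"; a Windows daemon prints "windows".
    docker version --format '{{.Server.Os}}/{{.Server.Arch}}'

    # "docker info" exposes the same information as OSType.
    docker info --format '{{.OSType}}'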
Actually... (update Dec. 2016)
See "Linux and Windows, living together, total chaos! (OK, Kubernetes 1.5)"
Kubernetes 1.5 includes alpha support for both Windows Server Containers, a shared kernel model similar to Docker, and Hyper-V Containers, a single-kernel model that provides better isolation for multi-tenant environments (at the cost of greater latency).
The end result is the ability to create a single Kubernetes cluster that includes not just Linux nodes running Linux containers or Windows nodes running Windows containers, but both side by side, for a truly hybrid experience.
For example, a single service can have PODs using Windows Server Containers and other PODs using Linux containers.
But:
Though it appears fully functional, there do appear to be some limitations in this early release, including:
The Kubernetes master must still run on Linux due to dependencies in how it’s written. It’s possible to port to Windows, but for the moment the team feels it’s better to focus their efforts on the client components.
There is no native support for network overlays for containers in Windows, so networking is limited to L3. (There are other solutions, but they're not natively available.)
The Kubernetes Windows SIG is working with Microsoft to solve these problems, however, and they hope to have made progress by Kubernetes 1.6’s release early next year.
Networking between Windows containers is more complicated because each container gets its own network namespace, so it’s recommended that you use single-container pods for now.
Applications running in Windows Server Containers can run in any language supported by Windows. You CAN run .NET applications in Linux containers, but only if they're written in .NET Core. .NET Core is also supported by the Nano Server operating system, which can be deployed on Windows Server Containers.
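To make the hybrid-cluster idea concrete (a hedged sketch, not from the article; current clusters label nodes with kubernetes.io/os, while the early releases around 1.5/1.6 used beta.kubernetes.io/os):

    # List all nodes with their OS image and kernel version.
    kubectl get nodes -o wide

    # Show only the Windows nodes (use the beta.kubernetes.io/os label
    # on the older releases discussed above).
    kubectl get nodes -l kubernetes.io/os=windows

    # Pods are then steered to the right OS by adding a matching
    # nodeSelector (kubernetes.io/os: windows or kubernetes.io/os: linux)
    # to their spec.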
Related
My production instance is running under Ubuntu 16 while my local machine runs under Windows 10.
In order to have a setup close to my production, I use VMs (Vagrant, VirtualBox, Homestead). By the way, my application is a Laravel app, so Homestead is the route to go per its documentation.
Since I have multiple applications with different specifications (different OS versions, different app versions), I need to set up multiple VMs as well. Since VMs are resource-heavy, they tend to slow down my machine over time.
That's when I came across Docker. Will Docker for Windows, creating containers and images based on my apps' specifications, suffice, or do I still need a VM and then create Docker containers inside it?
Below is a diagram
Windows running Docker for Windows
Windows running Ubuntu VM with Docker
Docker Desktop will, by default, start and run a Linux VM in the background on your Windows system.
https://docs.docker.com/docker-for-windows/install/
Hyper-V and Containers Windows features must be enabled.
You can also use WSL 2, which is basically the same thing.
https://docs.docker.com/docker-for-windows/wsl/
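As a quick check of which backend your Docker Desktop is actually using (a sketch; the distribution names can vary between versions):

    # With the WSL 2 backend, Docker Desktop registers its own WSL
    # distributions (typically "docker-desktop" and "docker-desktop-data").
    wsl -l -v

    # Confirm that the daemon you are talking to is a Linux daemon.
    docker info --format '{{.OSType}} / {{.OperatingSystem}}'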
Jens
From what I understand, the container includes all dependencies to run, but all containers running on the same platform whether it's a VM, or bare-metal will share the underlying kernel.
I believe I read somewhere that in order to run Linux containers on Windows, the Docker client spins up a Linux-based VM and runs the container in that.
But now I see that Docker for Windows runs Linux containers natively (i.e., without Hyper-V).
My question is: How can an image that was built to run on linux run on a system that has a windows kernel?
This is the original source that my question arose from:
https://www.hanselman.com/blog/DockerAndLinuxContainersOnWindowsWithOrWithoutHyperVVirtualMachines.aspx
With the latest version of Windows 10 (or 10 Server) and the beta of Docker for Windows, there's native Linux Container support on Windows. That means there's no Virtual Machine or Hyper-V involved (unless you want), so Linux Containers run on Windows itself using Windows 10's built in container support.
I saw some similar questions, but they explained how a Linux container runs on a Windows platform by utilising a VM/Hyper-V:
How docker desktop runs linux containers on Windows machine
Does "Docker On Windows" launch a linux virtual machine?
Perhaps I didn't understand their answers, but from what I understood, it still seems like the Linux container is sitting on top of the Windows kernel.
This is the magic of LCOW (https://github.com/linuxkit/lcow).
You are right: to run a container, the base kernel should be the same, since a container is just an abstraction. So to run a Linux container on Windows there are two options:
either use Moby Linux on Hyper-V and run the containers there,
or use LCOW to run a lightweight Linux VM for each container.
https://learn.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/linux-containers
With WSL in Windows, we might get a third method in the future; I don't know if someone is already working on it.
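For illustration (a sketch; LCOW was an experimental feature, so the exact flags and required configuration depend on your Docker version), a Windows daemon with LCOW enabled can be asked for a Linux image explicitly:

    # Run a Linux image on a Windows daemon with LCOW enabled (experimental).
    # Each such container gets its own lightweight Linux VM and kernel,
    # while Windows images keep running side by side on the same daemon.
    docker run --rm --platform linux alpine uname -a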
Docker isn't a VM, so it only runs apps native to the OS, right? Does that mean Docker for Windows only runs Windows .exe files? So what do Docker containers for Windows and Linux have in common, if anything? Are containers reusable on different operating systems in any way?
"Docker isn't a VM"
Correct, containers should be considered as processes running in a sandbox. If you search about how this isolation takes place in Linux, you'll definitely run into namespaces & cgroups. One definition of containers I've seen lately states that:
"containers are processes born from tarballs, anchored to namespaces and controlled by cgroups"
[Image: photo by Dan Mayer, #LeadDevLondon, June 2018]
You can also find some interesting stuff regarding linux containers here: Anatomy of a Container: Namespaces, cgroups & Some Filesystem Magic - LinuxCon by Jérôme Petazzoni
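To see that "a container is a process in a sandbox" from the host side (a minimal sketch assuming a Linux host with Docker installed; the container name nsdemo is just an example):

    # Start a throwaway container and inspect the kernel namespaces of its
    # main process from the host: the container is "just" a sandboxed process.
    docker run -d --name nsdemo alpine sleep 300
    sudo ls -l /proc/$(docker inspect --format '{{.State.Pid}}' nsdemo)/ns
    docker rm -f nsdemo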
Docker for Windows only runs Windows .exe files?
No. Consider that a developer with a Windows PC might work on Linux-based containers that are later deployed to the cloud. Docker for Windows brings this flexibility, BUT if you run Linux containers, these will be running on some kind of virtualization environment. Initially, Docker Toolbox was using Oracle VirtualBox; now Docker for Windows uses Hyper-V.
I don't know much about how the isolation takes place inside the Windows OS but I think the logic is similar to Linux. Some info about Windows containers:
Windows Container Types
Windows Containers include two different container types, or runtimes.
Windows Server Containers – provide application isolation through process and namespace isolation technology. A Windows Server Container shares a kernel with the container host and all containers running on the host. These containers do not provide a hostile security boundary and should not be used to isolate untrusted code. Because of the shared kernel space, these containers require the same kernel version and configuration.
Hyper-V Isolation – expands on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine. In this configuration, the kernel of the container host is not shared with other containers on the same host. These containers are designed for hostile multitenant hosting with the same security assurances of a virtual machine. Since these containers do not share the kernel with the host or other containers on the host, they can run kernels with different versions and configurations (within supported versions) - for example, all Windows containers on Windows 10 use Hyper-V isolation to utilize the Windows Server kernel version and configuration.
Running a container on Windows with or without Hyper-V Isolation is a runtime decision. You may elect to create the container with Hyper-V isolation initially and later at runtime choose to run it instead as a Windows Server container.
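For example (a minimal sketch; the nanoserver tag has to match a Windows version your host actually supports), the same Windows image can be started under either isolation mode:

    # Process isolation: shares the host kernel (Windows Server hosts).
    docker run --rm --isolation=process mcr.microsoft.com/windows/nanoserver:1809 cmd /c ver

    # Hyper-V isolation: the container gets its own utility-VM kernel.
    docker run --rm --isolation=hyperv mcr.microsoft.com/windows/nanoserver:1809 cmd /c ver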
Windows and Linux, what do they have in common, if anything?
In general, I would answer that containers serve the idea of Microservices, separation of concerns, do one thing & do it well.
Are containers reusable on different operating systems in any way?
Yes and no. You may face limitations. For example, if you have an application that starts FROM ubuntu:latest and want to make it work on a Raspberry Pi, you will have to build a new container from a base image made for the ARM architecture. Docker is not an abstraction that will take any container and make it work on any architecture or OS. You have to know what you are trying to achieve and carefully make your decisions on what you finally choose to use.
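As a sketch of what cross-architecture reuse looks like in practice (assuming a recent Docker with Buildx; myregistry/myapp is a hypothetical image name):

    # Build the same Dockerfile for amd64 and arm64 and push a
    # multi-architecture image; each platform still gets its own image,
    # the manifest list just ties them together.
    docker buildx build --platform linux/amd64,linux/arm64 -t myregistry/myapp:latest --push .

    # A Raspberry Pi then pulls the matching ARM variant automatically.
    docker pull myregistry/myapp:latest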
I am using Win 10 Pro N (Version 1709) as a development machine and Windows Server 2016 Standard (Version 1607) as production server.
I am currently developing an ASP.NET Core 2 application with MongoDb as database.
A couple days ago I first stumbled over the idea, to run MongoDb as a Docker image.
I don't have any experience with Docker so far, but I managed to switch from Linux containers (default) to Windows containers on Windows machines.
Was this a good decision? Or is there any reason why I should use Linux containers instead of Windows containers in my scenario?
What if, for example, I should decide to deploy my application to a Linux server at some point? In that case, would it be wiser to start with Linux containers right from the beginning?
Docker is not about virtualization but more about isolation.
A Windows container will run on a Windows host.
A Linux container will run on a Linux host.
Then some people wanted to run Linux containers on Windows:
First you needed to create a Linux VM on Windows to run the container.
Now you can use LinuxKit to run the container, but it's still a light VM.
Then some people wanted to run Windows containers on Linux:
First you needed to create a Windows VM on Linux to run the container.
As of today, there is still nothing better than that.
So the best bet is to start with containers aimed at your production servers.
If you want to deploy to Linux, I would advise using Linux containers, since you then test a more similar setup and are more likely to find issues that will also show up in your final deployment.
Other than that, Linux container technology is more mature and better supported than Windows containers.
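For the MongoDB part of the question specifically, a minimal sketch (the official mongo image is Linux-based; the tag, port mapping and volume name are just examples):

    # Run MongoDB as a Linux container, expose the default port and keep
    # the data in a named volume, so the ASP.NET Core app can reach it
    # on localhost:27017.
    docker run -d --name mongodb -p 27017:27017 -v mongodata:/data/db mongo:4.2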
I've read that:
Docker is a system for management and deployment of application containers, not operating system containers.
However, in several resources (e.g. around 1:20 into https://www.youtube.com/watch?v=pGYAg7TMmp0) it gives an example of "problems" you might encounter if you've developed a web application on a Windows PC or Mac, and are deploying it to a Linux server.
So, how does Docker help in this situation? If we take a web application, I understand Docker could help you make a container with the source and, say, a specific version of PHP. But could you specify a target OS for it to run on, if it's different from the server that Docker is running on?
The Docker FAQ (https://docs.docker.com/engine/faq/) says
You can run both Linux and Windows programs and executables in Docker containers.
Does this mean you need Docker installed on a Linux and Windows machine separately to do this, or is it possible to specify any OS within your Docker image and have any machine run it?
Please can someone explain how - or if - Docker deals with specifying a particular OS for your application?
Docker started as a way to run containers on Linux hosts, and this remains the dominant target for Docker containers. Developer environments include an embedded VM to run Linux under the covers on Mac and Windows. Originally this was VirtualBox, but newer releases use xhyve and Hyper-V. The OS inside all of these VMs is Linux, so you are not building your image on one OS and running it on another.
Since that start, Docker has expanded its target OSes. This requires that you have a Docker installation for that OS, and it requires that your image be designed to run on that architecture/OS. This started with other Linux architectures, like arm64, and now zLinux. The Microsoft partnership is a rather large rewrite, partially in Windows itself, but also in the Docker code, and especially in the images designed to run natively on Windows. To run these, you have to change the settings on Docker for Windows to run Windows containers instead of Linux containers; you cannot run both concurrently on the same host. At present, running Windows binaries can only be done on a Windows host, since Microsoft isn't shipping free Windows VMs for Linux hosts. And as a new target platform, it still lags behind the Linux hosts in features.
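To make that concrete (a sketch; the image names are just examples, and the DockerCli.exe path assumes a default Docker Desktop install), every image records the OS and architecture it was built for, and the daemon only runs images that match its own mode:

    # Inspect which OS/architecture an image was built for.
    docker image inspect --format '{{.Os}}/{{.Architecture}}' nginx
    # -> linux/amd64 for the official nginx image

    docker image inspect --format '{{.Os}}/{{.Architecture}}' mcr.microsoft.com/windows/nanoserver:1809
    # -> windows/amd64 (only pullable while the daemon is in Windows containers mode)

    # Docker Desktop switches between its Linux and Windows daemons; from a
    # shell this is exposed via (assumption: default install path):
    # "C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon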