Is it possible to run Docker without any host OS, i.e. run it natively? I believe that would be a performance boost, if it is possible.
Suppose I have a tool which runs on the Linux kernel. I create a Docker container with some extra dependencies. Now I share that container with another person who has Linux, so they can run it.
But I want to run that container without a host OS, since host OS plus container seems like a double layer of OS.
Docker itself is not a VM, so there is no double layer of OS. Docker is a tool that runs applications with settings that isolate them from other applications running on the same OS kernel. Docker does include a VM with Docker for Windows and Docker for Mac to provide the Linux kernel so you can run Linux containers. There is also an option to run native Windows containers with Server 2016, but if you are looking for minimalism and efficiency, I would suggest looking elsewhere.
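A quick way to convince yourself there is no second OS layer: the kernel seen inside a container is the host's own kernel. A sketch, assuming a Linux host with Docker installed and the alpine image:

    # on the host
    uname -r
    # inside a container: prints the exact same kernel version
    docker run --rm alpine uname -r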
The closest things to what you are looking for are:
Unikernels: applications compiled into a kernel with everything else removed, designed to run inside a VM for a very specialized, often security-related, task. These are still early in their development, but Docker uses some of their technology inside its project.
LinuxKit (part of the Moby Project): this is how Docker builds the VMs for Docker for Windows and Docker for Mac. It is a container-based Linux operating system that you can custom-compile with only the containers you want to run. Most of its focus is still on VMs, but bare metal is an option.
Scratch base image: if you statically compile your application to remove all of its library dependencies, you can have a container without any shell or other OS tools. This is often seen with Go binaries shipped as Docker images that do a single task with a very small attack surface. As a Docker container, it still requires the underlying Linux OS to run the binary.
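As an illustration of the scratch approach, here is a minimal sketch, with a hypothetical hello-world Go program and file names, that statically compiles a binary and ships it in an image containing nothing else:

    cat > main.go <<'EOF'
    package main

    import "fmt"

    func main() { fmt.Println("hello from scratch") }
    EOF
    cat > Dockerfile <<'EOF'
    # build stage: CGO disabled so the binary has no libc dependency
    FROM golang:1.21 AS build
    WORKDIR /src
    COPY main.go .
    RUN CGO_ENABLED=0 go build -o /app main.go

    # final stage: the image contains only the binary
    FROM scratch
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]
    EOF
    docker build -t hello-scratch .
    docker run --rm hello-scratch

The resulting image is a few megabytes: just the binary, with no shell or package manager.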
Related
I just got started with Docker. To my understanding, a Docker container runs a discrete process on the host machine and shares the host machine's system resources with that process. As we know, code built for Linux may not be able to run on macOS, and vice versa. My question is: can a Docker image built on one OS platform be deployed to another OS, like macOS to Linux, or Ubuntu to CentOS?
If the answer is no, how come there is only one official mysql image in the Docker repositories, rather than separate ones for Mac, Ubuntu, RHEL, and so on?
Docker on Mac works by creating a Linux virtual machine. So a Docker image built on a Mac is in fact built on a Linux virtual machine and can be freely exchanged with most other Docker systems, including most Docker on Windows installations.
There is a Windows version of Docker that is not Linux-based. Those images are not interchangeable.
In fact, a container built from any Linux-based image can be run (without a VM as an additional layer) on any Linux distribution, because containers use the host's kernel rather than shipping their own.
That means a container built from, e.g., a SuSE image can then be run on Fedora, Ubuntu, Debian, etc. without any restrictions.
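For example, an openSUSE userland runs unmodified on any other distribution's kernel. A quick check, assuming the opensuse/leap image from Docker Hub:

    # reports openSUSE, regardless of which distribution the host runs
    docker run --rm opensuse/leap cat /etc/os-release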
Short form: yes it can, but I think it will depend on the setup, notably the user/group configuration in the docker-compose file.
Recently I had some issues with work docker-compose files that were set up without a user specified. These worked fine when building on a Mac, which had an app user, but when run on my Linux machine the user defaulted to root and the build failed. So it depends on the quality of the config.
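A sketch of the fix described above, with hypothetical service and image names: pin the user in the compose file so the container does not silently fall back to root:

    cat > docker-compose.yml <<'EOF'
    services:
      app:
        image: example/app:latest   # hypothetical image
        # run as an explicit non-root uid:gid instead of the image default
        user: "1000:1000"
    EOF
    docker compose up -d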
Docker images do not depend on the host's operating system distribution. The first thing a Dockerfile declares is the base image it pulls FROM, and that determines the operating system environment in which the containers will run.
Using the MySQL 8 Dockerfile as an example:
https://github.com/docker-library/mysql/blob/223f0be1213bbd8647b841243a3114e8b34022f4/8.0/Dockerfile
FROM debian:stretch-slim
This means the image, and thus any containers started from it, will be based on Debian Linux, even if the host machine is macOS.
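You can verify this from any host: the container reports its base image's distribution, not the host's. (Note that newer mysql tags may have moved to a different base than the Dockerfile linked above.)

    # prints the base distribution's /etc/os-release, even on a Mac host
    docker run --rm mysql:8.0 cat /etc/os-release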
Docker isn't a VM, so it only runs apps native to the OS, right? Does that mean Docker for Windows only runs Windows .exe files? And what do Docker containers for Windows and Linux have in common, if anything? Are containers reusable on different operating systems in any way?
"Docker isn't a VM"
Correct; containers should be considered processes running in a sandbox. If you search for how this isolation takes place in Linux, you'll definitely run into namespaces and cgroups. One definition of containers I've seen lately states that:
"containers are processes born from tarballs, anchored to namespaces and controlled by cgroups"
[image: photo by Dan Mayer, #LeadDevLondon, June 2018]
You can also find some interesting stuff regarding linux containers here: Anatomy of a Container: Namespaces, cgroups & Some Filesystem Magic - LinuxCon by Jérôme Petazzoni
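A small demonstration of that isolation: inside a container's PID namespace, only the container's own processes are visible. Assuming the alpine image:

    # on the host, ps shows every process; in the container it shows only
    # PID 1 (the container's command) and ps itself
    docker run --rm alpine ps aux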
Docker for Windows only runs Windows .exe files?
No. Consider that a developer with a Windows PC might work on Linux-based containers that are later deployed to the cloud. Docker for Windows brings this flexibility, but if you run Linux containers, they will be running in some kind of virtualization environment. Initially, Docker Toolbox used Oracle VirtualBox; now Docker for Windows uses Hyper-V.
I don't know much about how the isolation takes place inside the Windows OS, but I think the logic is similar to Linux's. Some info about Windows containers:
Windows Container Types
Windows Containers include two different container types, or runtimes.
Windows Server Containers – provide application isolation through process and namespace isolation technology. A Windows Server Container shares a kernel with the container host and all containers running on the host. These containers do not provide a hostile security boundary and should not be used to isolate untrusted code. Because of the shared kernel space, these containers require the same kernel version and configuration.
Hyper-V Isolation – expands on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine. In this configuration, the kernel of the container host is not shared with other containers on the same host. These containers are designed for hostile multitenant hosting with the same security assurances of a virtual machine. Since these containers do not share the kernel with the host or other containers on the host, they can run kernels with different versions and configurations (within supported versions) - for example, all Windows containers on Windows 10 use Hyper-V isolation to utilize the Windows Server kernel version and configuration.
Running a container on Windows with or without Hyper-V Isolation is a runtime decision. You may elect to create the container with Hyper-V isolation initially and later at runtime choose to run it instead as a Windows Server container.
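On a Windows host, that runtime decision is the --isolation flag on docker run. A sketch, assuming a Windows Server host and a servercore tag whose kernel version is compatible with the host in process mode:

    # process isolation: shares the host kernel
    docker run --rm --isolation=process mcr.microsoft.com/windows/servercore:ltsc2022 cmd /c ver
    # hyperv isolation: runs inside a lightweight utility VM
    docker run --rm --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2022 cmd /c ver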
Windows and Linux, what do they have in common, if anything?
In general, I would answer that containers serve the idea of microservices and separation of concerns: do one thing and do it well.
Are containers reusable on different operating systems in any way?
Yes and no; you may face limitations. For example, if you have an application that starts FROM ubuntu:latest and you want to make it work on a Raspberry Pi, you will have to rebuild it from a base image made for the ARM architecture. Docker is not an abstraction that will take any container and make it work on any architecture or OS; you have to know what you are trying to achieve and carefully make your decisions on what you finally choose to use.
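As a sketch of working around the architecture limitation, docker buildx can cross-build an image for another CPU architecture (assuming buildx with QEMU emulation is set up, and a hypothetical Dockerfile in the current directory):

    # build an arm64 variant so the image can run on a Raspberry Pi
    docker buildx build --platform linux/arm64 -t example/app:arm64 .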
I am aware of this question (Can Windows Containers be hosted on linux?), but it doesn't really answer my question.
I am new to Docker, but my question is this: if I take any Windows application and put it inside a Docker container, can it then run on Linux, and vice versa?
Confluent claims that it can run only on Linux, but my colleague installed it on Windows using Docker. So if you can install it with Docker, should the whole application be regarded as cross-platform?
I think I am missing some important point here.
Docker is not a VM; it's a way to run applications on a shared kernel such that those applications are isolated from each other. Windows binaries don't run on a Linux kernel, and vice versa (ignoring the Windows Subsystem for Linux for the time being). So if you build a container with your Windows application, it will only run if you did so with Docker's Windows runtime and a Windows base image. It won't run on a Linux host.
What Docker does provide is an embedded VM running Linux (originally VirtualBox, but current versions use Hyper-V). With Docker for Windows, this VM is used by default and you are only running Linux containers, so your Windows application would not even run inside the container. To run Windows binaries, you need to toggle Docker for Windows over to the Windows runtime; presently that's a toggle, and you can't run both the Linux and Windows runtimes concurrently on the same host.
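That toggle can also be flipped from the command line; a sketch, assuming the default Docker Desktop install path on Windows:

    # switch Docker for Windows between the Linux and Windows daemons
    "C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon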
There is also no Windows VM packaged with Docker's Linux install. You would need to install your own copy of Windows inside a VM on the Linux host (along with the licensing, which is why Docker doesn't ship this) and run your containers inside that VM if you need Windows support.
I've read that:
Docker is a system for management and deployment of application containers, not operating system containers.
However, several resources (e.g. around 1:20 into https://www.youtube.com/watch?v=pGYAg7TMmp0) give examples of "problems" you might encounter if you've developed a web application on a Windows PC or Mac and are deploying it to a Linux server.
So, how does Docker help in this situation? Taking a web application as an example, I understand Docker could help you make a container with the source and, say, a specific version of PHP. But could you specify a target OS for it to run on, if it's different from the OS of the server Docker is running on?
The Docker FAQ (https://docs.docker.com/engine/faq/) says
You can run both Linux and Windows programs and executables in Docker containers.
Does this mean you need Docker installed on a Linux and Windows machine separately to do this, or is it possible to specify any OS within your Docker image and have any machine run it?
Please can someone explain how - or if - Docker deals with specifying a particular OS for your application?
Docker started as a way to run containers on Linux hosts, and this remains the dominant target for Docker containers. Developer environments include an embedded VM to run Linux under the covers on Mac and Windows; originally this was VirtualBox, but newer releases use xhyve and Hyper-V. The host OS in all of these is Linux, so you are not building your image on one OS and running it on another.
Since that start, Docker has expanded its target OSes. This requires a Docker installation for that OS, and it requires that your image be designed to run on that architecture/OS. It started with other Linux architectures like arm64, and now zLinux. The Microsoft partnership was a rather large rewrite, partially in Windows itself, but also in the Docker code, and especially in the images designed to run natively on Windows. To run these, you have to change the settings in Docker for Windows to run Windows containers instead of Linux containers; you cannot run both concurrently on the same host. At present, running Windows binaries can only be done on a Windows host, since Microsoft isn't shipping free Windows VMs for Linux hosts. And as a newer target platform, it still lags behind the Linux hosts in features.
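You can check which OS and architecture an image targets with a quick inspect (using the mysql image as an example):

    # prints e.g. linux/amd64; a native Windows image would print windows/amd64
    docker image inspect --format '{{.Os}}/{{.Architecture}}' mysql:8.0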
I am running a Docker daemon on my guest OS, which is CentOS. I want to install software services on top of that in an isolated manner, and I do not need another OS image inside my Docker container.
I want to have a Docker container with just the additional binaries and libraries for the software application I am going to install.
Is there a "whiteglove/blank" base image in Docker I can use ? I want a very lean container that uses as a starting point what my GUEST OS has to offer. Is that possible ?
What you're asking for isn't possible out-of-the-box with Docker. Each Docker image has its own root filesystem, which needs to have some sort of OS installed.
Your options are:
Use a minimal base image, such as the BusyBox image. This will give you the absolute minimum you need to get a container running.
Use the CentOS base image, in which case your container will be running the same or very similar OS.
The reason Docker images are like this is because they're meant to be portable. Any Docker image is meant to run anywhere Docker is running, regardless of the operating system. This means that the Docker image must contain an entire root filesystem and OS installation.
What you can do, if you need things from the host OS, is share a directory into the container using Docker volumes. However, this is generally meant for mounting data directories, and it still requires the Docker image to have an OS.
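A minimal sketch of that volume approach, with a hypothetical host path:

    # expose a host directory inside the container, read-only
    docker run --rm -v /srv/data:/data:ro busybox ls /data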
That said, if you have a statically-linked binary that has absolutely no dependencies, it becomes easy to create a very minimal image. This is called a "microcontainer", and Go in particular is well-suited to producing these. Here is some further reading on microcontainers and how to produce them.
One other option you could look into, if all you want is the resource-management part of containers, is lxc-execute, as described in this answer. But then you lose all the other nice Docker features. Unfortunately, what you're trying to do is just not what Docker is built for.
As I understand Docker, when you use a base image, you do not really install an additional OS.
It is just a directory structure with preinstalled programs; in other words, the filesystem of the base image's OS.
In most cases, Docker itself (the Docker engine) runs in a Linux VM when used on Mac and Windows.
If you are confused about virtualization: there is no virtualization inside a Docker container. Containers run in user space on top of the host operating system's kernel, so the containers and the host OS share the same kernel.
So, to summarize:
Consider the host OS to be Windows or Mac.
Docker, when installed, runs inside a Linux VM on that host OS.
The base Linux images inside Docker containers then use this Linux VM as their host OS, not the native Windows or Mac.
On Linux, the base Linux images inside Docker containers directly use the host OS, which is Linux itself, without any virtualization.
The base image inside a Docker container is just a snapshot of that Linux distribution's programs and tools.
The base image makes use of the host kernel (which, in all three cases, is Linux).
Hence, there is no virtualization inside a container, but Docker can use a single parent Linux virtual machine to run itself (the Docker engine) inside.
Conclusion:
When you use a base image inside Docker, no additional OS is installed in the container; only a copy of a filesystem with minimal programs and tools is created.
From Docker's best practices:
Whenever possible, use current Official Repositories as the basis for your image. We recommend the Debian image since it’s very tightly controlled and kept extremely minimal (currently under 100 mb), while still being a full distribution.
What you're asking for goes against the idea of using Docker containers. You don't want any dependency on your guest OS; if you do, your image won't be portable.
When you create a container, you want it to run on any machine that runs Docker, be it CentOS, Ubuntu, Mac, or Microsoft Azure :)
Ideally, your container's base OS shouldn't have anything to do with your host OS.
For any container, you need at least a root filesystem; that is why you need a base image that provides one. Your idea is not completely against the container usage paradigm: as opposed to VMs, we want containers to be minimal, without duplicating elements that they can leverage from the underlying OS.
Following Rohan Singh's links, I found some related info that doesn't generally contradict the above, but relates to the core idea of the question:
The base image for all Docker images is the scratch image. It has essentially nothing in it. This may sound useless, but you can actually use it to create the smallest possible image for your application, if you can compile your application to a static binary with zero dependencies, as you can with Go or C.
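A hedged sketch of the same idea in C, with hypothetical file names: statically link the binary, then build an image whose only content is that binary:

    cat > hello.c <<'EOF'
    #include <stdio.h>
    int main(void) { puts("hello from scratch"); return 0; }
    EOF
    cat > Dockerfile <<'EOF'
    FROM gcc:13 AS build
    COPY hello.c .
    # -static removes the runtime dependency on shared libc
    RUN gcc -static -o /hello hello.c

    FROM scratch
    COPY --from=build /hello /hello
    ENTRYPOINT ["/hello"]
    EOF
    docker build -t hello-scratch-c .
    docker run --rm hello-scratch-c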