Are containers specific to a host OS?

Are containers specific to a particular host OS? For instance, if a container is created on Windows with particular dependencies (e.g., DLL files), can it run in a setup where the host OS is Linux? I initially assumed that a container must be specific to a particular host OS.
But the following two excerpts suggest that I may not have understood the mechanics correctly. So my question is: are containers built on top of the Docker engine, so that once the dependencies are included they depend only on the Docker engine, and the underlying host OS does not matter?
(1) From IBM:
Containerization allows developers to create and deploy applications faster and more securely. With traditional methods, code is developed in a specific computing environment which, when transferred to a new location, often results in bugs and errors. For example, when a developer transfers code from a desktop computer to a virtual machine (VM) or from a Linux to a Windows operating system. Containerization eliminates this problem by bundling the application code together with the related configuration files, libraries, and dependencies required for it to run. This single package of software or “container” is abstracted away from the host operating system, and hence, it stands alone and becomes portable—able to run across any platform or cloud, free of issues. [https://www.ibm.com/cloud/learn/containerization]
(2) From Docker:
Does Docker run on Linux, macOS, and Windows?
You can run both Linux and Windows programs and executables in Docker containers. The Docker platform runs natively on Linux (on x86-64, ARM and many other CPU architectures) and on Windows (x86-64).
Docker Inc. builds products that let you build and run containers on Linux, Windows and macOS.
What does Docker technology add to just plain LXC?
Docker technology is not a replacement for LXC. “LXC” refers to capabilities of the Linux kernel (specifically namespaces and control groups) which allow sandboxing processes from one another, and controlling their resource allocations. On top of this low-level foundation of kernel features, Docker offers a high-level tool with several powerful functionalities:
Portable deployment across machines. Docker defines a format for bundling an application and all its dependencies into a single object called a container. This container can be transferred to any Docker-enabled machine. The container can be executed there with the guarantee that the execution environment exposed to the application is the same in development, testing, and production. LXC implements process sandboxing, which is an important pre-requisite for portable deployment, but is not sufficient for portable deployment. If you sent me a copy of your application installed in a custom LXC configuration, it would almost certainly not run on my machine the way it does on yours. The app you sent me is tied to your machine’s specific configuration: networking, storage, logging, etc. Docker defines an abstraction for these machine-specific settings. The exact same Docker container can run - unchanged - on many different machines, with many different configurations.

The host OS, or more precisely the kernel it provides, still matters. That's why you can't run Windows containers on Linux. You can run Linux containers on Windows thanks to Hyper-V and WSL2, and on macOS thanks to the Hypervisor framework, but that's it. If the provided kernel is compatible (it doesn't have to be identical), which usually means a similar version and the same architecture (remember, there are x64, ARM64, etc.), or at least virtualization is available (x64 containers can run on an M1, which is ARM64, via emulation), then you can just run the container. There is no need to worry about DLLs, because they're supposed to be included either in the base image you start from or in the image you build.
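You can see this in practice with Docker's --platform flag (a quick sketch; the image tags are just examples). On a host with emulation support, an amd64 image runs on an ARM64 machine, while a Windows image is rejected outright on a Linux host:

    # Run an amd64 Linux image on an Apple Silicon (ARM64) host via emulation
    # (requires Docker Desktop or QEMU/binfmt support):
    docker run --rm --platform linux/amd64 ubuntu:22.04 uname -m
    # prints: x86_64

    # A Windows image on a Linux host fails, since no Windows kernel is available:
    docker run --platform windows/amd64 mcr.microsoft.com/windows/nanoserver:ltsc2022
    # fails with an error like: image operating system "windows" cannot be used on this platform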

Related

Still confused about docker

I've taken an app and built a Docker image for Windows Server 2016 using the microsoft/aspnetcore:2.0 base image.
My question is: what machines/OSes will I be able to run the container on?
I know it can't run on Linux, but could it run on (e.g.) ANY version of Windows Server 2016? How about Windows Server 2019?
The architecture is AMD64. Does that mean the container will only run on machines with that exact architecture?
I'm trying to figure out why containers are considered beneficial.
I don't have any experience with Docker Windows containers, but I have a ton of experience with Docker containers in general, and the concepts between Windows and Linux containers should be mostly the same.
When you run your built app, no matter if you run it on Windows Server 2016, Windows Server 2019, or even Windows 10 Pro, the app should function exactly the same. Under the covers, Docker provides an isolated application environment. From your application's perspective, it only knows/experiences/sees itself and the Windows kernel that it's running on. If you had, say, an IIS instance also running on that server, your app would have no idea. The point here is that Docker provides a means to:
Run multiple versions of an app on the same machine, in complete isolation.
Have a more clean running environment for every app.
Be much more resource-efficient than running discrete VMs.
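As a sketch of the first point (the image name and tags are hypothetical), two versions of the same app can run side by side, each fully isolated, differing only in the published host port:

    docker run -d --name myapp-v1 -p 8081:80 myapp:1.0
    docker run -d --name myapp-v2 -p 8082:80 myapp:2.0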
Another huge benefit of Docker is that it provides a means to create ephemeral environments, which means you should expect the exact same behavior from an app running on machine #1 as on machine #2. It eliminates the "works on my machine" mentality, especially when some 3rd-party dependency would otherwise be missing or forgotten, because these dependencies are bundled into the container as part of the build.
Lastly, about architecture: the app you built is designed to run against the architecture of the Windows kernel it was built with. In your case that's AMD64, which refers to the x86_64 architecture. This means your container will run on any 64-bit x86 machine (AMD or Intel). It will not run on any other architecture: 32-bit x86, ARM, ARM64, etc. In the case of Windows this isn't as pressing an issue, because the vast majority of the time you're running on x86_64. But with Linux you end up with everything from SPARC to ARM, so the architecture distinction is important.
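You can check what OS/architecture an image was built for with docker inspect (the template fields below are standard inspect output; the image is the one from the question):

    docker image inspect --format '{{.Os}}/{{.Architecture}}' microsoft/aspnetcore:2.0
    # prints something like: windows/amd64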
I too had a lot of the same questions when I started using Docker. While the product "Docker" has been hit-or-miss on occasion, the concept of containers and the benefits they provide when used correctly are very powerful, and I use them for almost every project I work on.

What's the point of running an OS (Ubuntu) in Docker?

I have trouble understanding this concept. I know a little bit about how Docker works and what the benefits are, and while I understand running web servers, databases and development environments in containers, I don't understand the point of running an OS like Ubuntu in Docker.
Can someone explain why you would want to do that and also the benefits of an entire OS in a container?
The OS is essentially the runtime environment required to run your app. If your app is compiled to run on Linux, it relies on Linux libraries (libc, glib, and so on) that must be present in the executing environment, regardless of its type. Docker is no exception to this.
So an Ubuntu application requires an Ubuntu image in order to run correctly.
Note that a Docker container does not include or run an entire OS, only the minimum set of libraries that allow your app to run. In particular, it never contains or executes a kernel, as it runs under the host's kernel.
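A minimal sketch of what the "OS image" actually contributes (app.py is a hypothetical script): the FROM line pulls in the Ubuntu userland (libc, apt, coreutils), not a kernel, and the app runs under whatever kernel the host provides:

    FROM ubuntu:22.04                # Ubuntu userland only; no kernel inside
    RUN apt-get update && apt-get install -y --no-install-recommends python3
    COPY app.py /app.py
    CMD ["python3", "/app.py"]       # executes under the HOST kernel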
Docker doesn't have its own OS; it is installed on a machine, and this allows it to share the host operating system's resources. There will be only one OS, and all the containers will be using that OS.
Most applications are meaningless without an OS, since it is required for I/O, hardware calls, etc.
Each Docker container may have different packages (Java, Python, JBoss, etc.) and applications installed.

what programs can be installed in a docker container

I am a Windows user.
I have looked at the official Docker tutorial "Get Started". The example focuses on a Python app. I don't know Python, and I guess a Docker container can have many programs installed as an environment, not just Python.
Is Docker good for testing a program I download from the internet in an isolated environment (like a sandbox in firewalls or antivirus software)?
How, for example, can I make a container that has an environment containing installed programs like Visual Studio, VLC player, Office, etc.?
Thanks,
Abe
Yes; you can have an isolated environment with Docker. You can set your desired configurations, download from the internet, install software, and do whatever you would do in a virtual machine.
Yes, you can. What your container contains depends on the base image you create it FROM and the packages you install inside it.
Tips
You can build your container from a bare OS image (e.g. ubuntu), configure the OS, and download/install/configure/run whatever you want.
You can create a base image which derives FROM a suitable OS image, then install any common application (e.g. Firefox) that you may use in a lot of containers. Then you push it to a registry (e.g. Docker Hub or the GitHub Container Registry). After that, you can use it as a base image for other containers, so your new containers have those applications installed by default; there is no need to install them again. This reduces complexity and repetition across Dockerfiles.
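A hedged sketch of that pattern (the registry path ghcr.io/you/base is a placeholder):

    # base.Dockerfile: shared OS plus common applications, built and pushed once
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y firefox

    # Build and publish:
    #   docker build -t ghcr.io/you/base:1.0 -f base.Dockerfile .
    #   docker push ghcr.io/you/base:1.0

    # app.Dockerfile: new images derive FROM the base, so Firefox is already there
    FROM ghcr.io/you/base:1.0
    RUN apt-get update && apt-get install -y vlc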

what is the difference between vagrant, docker, virtualenv or just a virtual machine?

I develop websites in Python with the Django framework, and I like to get things done fast.
I used to develop in a virtual machine or directly on the local host machine, and recently moved to Vagrant. I am not sure whether there are other technologies that would help keep the process fast?
I could use some tips and pointers.
- Docker
It is great at building and sharing disk images with others through the Docker Index (now Docker Hub)
Docker is a manager for infrastructure (today's bindings are for Linux containers, but future bindings may include KVM, Hyper-V, Xen, etc.)
Docker is a great image distribution model for server templates built with configuration managers (like Chef, Puppet, SaltStack, etc.)
Docker uses a copy-on-write filesystem (e.g. btrfs) to keep track of filesystem diffs, which can be committed and collaborated on with other users (like git)
Docker has a central repository of disk images (public and private) that allows you to easily run different operating systems (Ubuntu, CentOS, Fedora, even Gentoo)
- virtualenv
It isolates the Python interpreter and the Python dependencies on one machine so you can install multiple Python projects alongside each other, each with its own dependencies. But for the rest of the machine, virtualenv does nothing:
you still have global dependencies/packages that are installed using your macOS/Linux package manager, and these are shared between the virtualenvs.
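The typical virtualenv workflow, for reference (venv ships with Python 3; the Django install is just an example):

    python3 -m venv .venv          # per-project interpreter and site-packages
    source .venv/bin/activate
    pip install django             # installed into .venv only, not system-wide
    deactivate                     # back to the global interpreter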
- A virtual machine (VM)
It is a software program or operating system that not only exhibits the behavior of a separate computer, but is also capable of performing tasks such as running applications and programs like a separate computer.
A virtual machine, usually known as a guest, is created within another computing environment referred to as a "host."
Multiple virtual machines can exist within a single host at one time.
- Vagrant
often used to programmatically configure virtual machines
specifies the whole machine: it allows you to specify the Linux distribution, packages to be installed and actions to be taken to install the project.
So if you want to launch a Vagrant box with multiple Python projects on that machine you'd still use virtualenv to keep the Python dependencies separate.
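The typical Vagrant workflow looks like this (ubuntu/focal64 is one of the public Ubuntu boxes; any box works):

    vagrant init ubuntu/focal64    # writes a Vagrantfile describing the VM
    vagrant up                     # downloads the box and boots the VM
    vagrant ssh                    # shell into the guest
    vagrant destroy                # throw the whole machine away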

Can I use Docker like this ...?

My work laptop is running LinuxMint as the base OS, plus VirtualBox to run Windows 7, which is the actual work environment, usually plus an additional VirtualBox VM to run a different Windows installation in which I do my client project work (I have one VM per client, to avoid messing up my main OS).
But I'm wondering if it's feasible and beneficial to switch to using Docker for the client project stuff? That is, I'd like to keep LinuxMint (to preserve my sanity), and keep Windows ('cause I have to use some MS products), but then instead of that series of "client VMs" use Docker containers?
I'm not entirely clear on how containers are useful. Can I, for instance, have a container in which I've installed .NET and MS SQL, another container where I've installed Azure PowerShell, and a third container where I've installed Java and Eclipse, and then decide which of these "sets" of software is available on the same common base OS (Windows, with VPN and Outlook and Notepad++)?
This post makes me think I'm asking for a solution from the wrong tool?
Or should I perhaps attack the root problem from a different angle, and ask the following over at Workplace.SE: How to work as a consultant without "cluttering up" one's (Windows) OS with more or less temporary installations of all sorts of software necessary for client projects?
AFAIK there is no Windows OS ready to be run INSIDE a Docker container locally, but they have been announced. See www.docker.com/microsoft and the MSDN windowscontainers documentation.
What you can do is run Linux OSes in Docker containers within Windows. But in your case you should run the Docker engine on your Mint Linux host.
Not really an answer, more like several comments, though it's too long to fit within a comment.
First of all I would not run Mint, but that's beside the question.
Then, it is probably worth taking a look at How is Docker different from a normal virtual machine?.
Also, as you linked, Docker does not aim (at all) to run several programs in one container. Indeed, the policy is CaaS: Containers as a Service, so basically one program per container. That said, you can probably run Wine within a container and run one application per container (on top of Wine).
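A hedged sketch of that Wine idea (myapp.exe is a placeholder; package names are Ubuntu's; a GUI app would additionally need an X server shared from the host):

    FROM ubuntu:22.04
    # wine64 runs 64-bit Windows executables; 32-bit apps would additionally
    # need the i386 architecture enabled plus the wine32 package
    RUN apt-get update && apt-get install -y --no-install-recommends wine64
    COPY myapp.exe /myapp.exe
    CMD ["wine64", "/myapp.exe"]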
Have fun!
