I am running a RHEL 7 server and deploying containers with docker. Since RHEL servers and containers need to be registered with RHN, I am now thinking of using CentOS 7 docker images rather than RHEL 7 ones, to avoid the RHN hassles.
Can anybody see any downside to doing it this way?
Since the kernel is the same, you can use any available distro: see Why docker has ability to run different linux distribution?.
For example, many projects are moving to Alpine Linux because it gives you the ability to build very small images: see Docker Official Images are Moving to Alpine Linux.
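As a rough illustration (the tags below are just examples), you can pull a few base images and compare their sizes - the Alpine base is typically a few megabytes versus a couple of hundred for CentOS or Debian:

# pull a few base images and list their sizes
docker pull alpine:3.12
docker pull centos:7
docker pull debian:buster
docker images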
What I mean is: can I run, for example, the official DEBIAN docker image and, on top of that,
run the official NGINX docker image, with both built for the same supported architecture, e.g. Linux x86-64?
Will it work the same way as installing the NGINX package on a DEBIAN operating system in the non-docker way?
I'm asking because, while learning docker, I've come across the fact that the official NGINX image is built and run from the official NGINX repository for the DEBIAN OS, on top of the official DEBIAN docker image.
Is that a clue that docker images are not cross-platform compatible?
I've also come across this helpful question.
If by cross-platform you mean whether a docker image built on an x86_64 machine will run on a ppc64le machine, then the answer is no (there are ways around it by using an emulator, but generally speaking the answer is no).
If you mean whether an Ubuntu container can be run on a Debian host, then yes (provided the host kernel version is compatible, which it will be, since you were able to install docker).
As for the question of why the official NGINX image is Debian-based, the developers might have their own reasons. In fact, the official repo has an Alpine-flavoured image as well. You can modify the Dockerfile to use an Ubuntu base image, make the necessary modifications (such as the Ubuntu version of the installer) and build it on a Debian host. It will produce an Ubuntu-based image which will run as an Ubuntu container on any Linux, Unix, MacOS or Windows host (via a Linux VM). You can also build that Dockerfile as-is on an Ubuntu host and it will create the same nginx:latest image as you would download from Docker Hub. This can be verified using the checksum.
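As a minimal sketch of that idea (this is illustrative, not the official nginx Dockerfile - it installs nginx from Ubuntu's own repositories rather than the nginx.org packages):

# Hypothetical Dockerfile: nginx on an Ubuntu base instead of Debian
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends nginx && rm -rf /var/lib/apt/lists/*
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Built on a Debian (or any other Linux) host, this still produces an Ubuntu-based image, because the userspace inside the image comes entirely from the FROM line.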
I just got started with docker. To my understanding, a docker container runs a discrete process on the host machine and shares the host machine's system resources with that process, and, as we know, code built for Linux may not be able to run on MacOS, and vice versa. My question is: can a docker image built on one OS platform be deployed to another OS, like MacOS to Linux, or Ubuntu to CentOS?
If the answer is no, how come there is only one official mysql image on the docker repositories, rather than multiple - one for Mac, one for Ubuntu, one for RHEL?
Docker on Mac works by creating a Linux virtual machine. So a docker image built on Mac is in fact built on a Linux virtual machine and can be freely exchanged with most other docker systems - including most docker installations on Windows.
There is also a Windows-native version of Docker that is not Linux based. Those images are not interchangeable.
In fact, a container built from any Linux-based image can be run (without a VM as an additional layer) on any Linux distribution with a compatible kernel.
That means a container built from, e.g., a SuSE image can then be run on Fedora/Ubuntu/Debian/etc. without any restrictions.
Short answer - yes it can, but I think it will depend on the setup - notably the user/group specified in the docker-compose file.
Recently I had some issues with work docker-compose files that were set up without a user specified; these worked OK when building on a Mac, as it had an app user, but when run on my Linux machine the user defaulted to root and the build was not successful. So it depends on the quality of the config.
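For example, pinning the user in the Compose file avoids relying on whatever default the host gives you (the service name, image and IDs below are made up):

# docker-compose.yml (illustrative)
version: "3"
services:
  app:
    image: myorg/myapp:latest   # hypothetical image
    user: "1000:1000"           # run as a fixed non-root uid:gid instead of defaulting to root
    volumes:
      - ./src:/app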
Docker images are agnostic to the host's operating system. The first thing a Dockerfile declares is the base image it pulls FROM, and that determines the operating system in which the containers will run.
Using the MySQL 8 Dockerfile as an example:
https://github.com/docker-library/mysql/blob/223f0be1213bbd8647b841243a3114e8b34022f4/8.0/Dockerfile
FROM debian:stretch-slim
This means the image, and thus any containers started from it, will be based on Debian Linux...even if the host machine is MacOS.
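You can see this for yourself by asking that base image what it is; the container reports a Debian userspace regardless of what the host is running (the tag matches the Dockerfile above):

# works the same whether the host is Linux or MacOS
docker run --rm debian:stretch-slim cat /etc/os-release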
How do I run Datalab locally when it requires Docker (and Docker Toolbox is not supported as documented here: https://cloud.google.com/datalab/docs/quickstarts/quickstart-local)? The Docker website says Docker requires Windows 10 Professional or Enterprise 64-bit, and most corporate environments don't run Windows 10.
Docker is highly preferred over Docker Toolbox, as it's a simpler, self-contained installation with simpler configuration (since you don't have additional virtualization software to deal with, as you do with Docker Toolbox - namely boot2docker and its underlying functionality). However, if you have a setup to run docker on your end, you should theoretically be able to use it to run the Datalab docker container by adapting the instructions.
You do have the option of running everything on a GCE VM.
I was facing the same problem; what I found most comfortable in the end was to install Ubuntu on VirtualBox. This is free and fairly easy, and from the virtual machine you can use Docker and the Google guide to run Datalab locally.
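For reference, the local setup boils down to a single docker run inside that VM; the image name and port mapping below are taken from the quickstart linked in the question, and the volume mount is illustrative, so double-check against the current docs:

# run Datalab locally, publishing its UI on localhost:8081
docker run -it -p "127.0.0.1:8081:8080" \
  -v "${HOME}/datalab:/content" \
  gcr.io/cloud-datalab/datalab:local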
I have an OpenVZ VPS which is CentOS 7 but with a 2.6 kernel. I know this is not compatible with docker. I have another KVM VPS which has docker on it. Is there any way to access docker on the KVM remotely from my OpenVZ VPS? Basically I want my OpenVZ box to be my dev machine and I'll deploy to docker on the KVM. What would be an ideal setup for this?
You say the host has a 2.6.x kernel, but that covers a couple of different releases. I have made docker work in an OpenVZ VPS on a host with a 2.6.32 kernel (derived from RedHat el6), but it would probably not work for kernels 2.6.18 or 2.6.9 (you really should upgrade if you have 2.6.9, as that is based on the RedHat el4 kernel; 2.6.18 should be fine until 3/2017). You can find instructions to make it work with a compatible kernel on the OpenVZ wiki. WARNING: docker does not perform very well in this configuration (2.6.32 kernel, CentOS 7 VPS), as you do not get any of the fancy filesystem layering functionality since you are forced to use the "vfs" storage engine. Each layer of the docker container will be a full copy of its underlying filesystem, grossly ballooning disk usage for images with lots of layers.
If you are not running a docker-compatible kernel, you will not be able to run the docker daemon at all, so your options are limited. If you still want to develop docker containers on your VPS to move to your KVM, you could use chroot and yum/rpm to construct your container's filesystem, make a ${docker_image}.tgz file on your VPS, then copy that to your KVM and import it into docker.
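A rough sketch of that workflow, assuming a CentOS-style rootfs built with yum on the VPS (the package list, paths and host names below are illustrative):

# on the OpenVZ VPS: build a minimal rootfs into a chroot directory and tar it up
mkdir -p /tmp/myimage-rootfs
yum --installroot=/tmp/myimage-rootfs --releasever=7 -y install centos-release bash coreutils
tar -C /tmp/myimage-rootfs -czf /tmp/myimage.tgz .

# copy it to the KVM box and import it as a docker image there
scp /tmp/myimage.tgz user@kvm-host:/tmp/
ssh user@kvm-host 'docker import /tmp/myimage.tgz myimage:latest'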
Hope that helps.
Have been trying to learn Docker and one thing that puzzles me is how a different flavour of Linux (to the host OS) actually runs in the Docker container.
If we assume my Docker host is running RedHat and I start a container from an Ubuntu image then are the following true?:
logically speaking, if the Ubuntu image footprint is around 550MB then will the Docker Daemon actually download (from an image registry) 550MB worth of Ubuntu image in order to create the Container?
is the instance of Ubuntu running in the container essentially no different than if I had downloaded and installed the same version manually?
I'm aware that the Docker container shares the kernel used by the host OS and that one of the fundamental points of Docker is the efficiency gained from the container using the underlying OS. So I'm a bit confused about what actually happens when you start a container created from a different Linux version than the host's.
I think this previous post may help you understand it a little more - Docker container isolation, does it care about underlying Linux OS?.
The crux of the matter is that if the host OS is RedHat then it is the RedHat kernel which will be used by whatever build of Linux you run in your Docker container, i.e. Ubuntu in your example.
This comes down to understanding what the difference is between a Linux OS and a Linux Image. You will not be running a full Ubuntu OS inside the Docker Container but an image of Ubuntu.
For the purpose of your question, think:
OS = kernel + filesystem/libraries
Image = filesystem/libraries
The Ubuntu image running inside your Docker container is just the Ubuntu filesystem/libraries - it will not contain the Ubuntu kernel. This partly explains the efficiencies you get from a Docker container which is leveraging the Kernel (among other things) of the underlying Host.
The Ubuntu image running inside the Docker container runs in what is called the user space for that container. This image can make kernel system calls to the RedHat host OS kernel (as part of transferring control from user space to kernel space for some user operations). Since the core kernel is common technology, the system calls are expected to be compatible even when the call is made from Ubuntu user-space code to RedHat kernel code. This compatibility makes it possible to share the kernel across containers which may all have different base OS images.
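A quick way to see this in practice on a RedHat host (the Ubuntu tag is just an example):

# the kernel reported inside the container is the host's kernel...
uname -r                                   # e.g. 3.10.0-1160.el7.x86_64 on the RHEL host
docker run --rm ubuntu:20.04 uname -r      # prints the same kernel version
# ...while the filesystem/libraries come from the Ubuntu image
docker run --rm ubuntu:20.04 cat /etc/os-release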