Does a Docker image include an OS?

I have the Dockerfile below:
FROM openjdk:12.0.2
EXPOSE 8080
ADD ./build/libs/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
The resulting Docker image encapsulates a Java program. When I deploy this image to Windows Server or Linux, does it always include an OS, like Linux, that runs on top of the host OS (Windows Server or Linux)?
To put it another way: if the Docker image were a physical box containing other boxes (one of them being openjdk), would it also contain a Linux OS box that I could pull out (assuming that were possible) and install as the Linux OS on an empty machine?

That depends on what you call the "OS". The image will always contain material from the distribution image it is built on.
For example, a Debian-based image will include apt and other Debian-specific tools, but most of what you would need on a "complete" (non-container) machine has been removed to keep the image as small as possible.
It will not contain the kernel: the container runs on the host machine and is controlled by the host's kernel.
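You can see that split for yourself. A rough illustration, assuming the openjdk:12.0.2 tag from the question is still available to pull:
docker run --rm openjdk:12.0.2 cat /etc/os-release   # prints the distribution the image was built from
uname -r                                             # the kernel version on the host
docker run --rm openjdk:12.0.2 uname -r              # prints the same version: the kernel comes from the host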

The "official" OpenJDK images from the Docker Hub are available in variants based on a number of different Linux distributions. There is a cut-down Debian, an Alpine, and others. There are advantages and disadvantages to each.
The image will need to contain enough operating system dependencies to allow the JVM to run. It may also include basic diagnostic and management tools -- enough to carry out rudimentary troubleshooting in the container, anyway. You can expect all the images to contain at least basic console shell tools like "cp" and "cat", although they differ in implementation. For example, the Alpine variant gets these utilities from BusyBox, not from a conventional GNU/Linux installation.
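A hedged way to see that difference yourself (tags are illustrative and may change on Docker Hub):
docker run --rm openjdk:12.0.2 sh -c 'cp --version | head -n 1'   # a GNU coreutils cp
docker run --rm alpine sh -c 'ls -l /bin/cp'                      # a symlink into /bin/busybox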
It's possible to create a Docker image that contains no platform dependencies at all, but there's little incentive to be that minimal -- you'd just have to build more stuff into the application program itself.
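For completeness, a minimal sketch of such an image, assuming hello is a statically linked Linux binary you built beforehand (the name is hypothetical):
FROM scratch
COPY hello /hello
ENTRYPOINT ["/hello"]
Anything the binary expects at runtime, such as shared libraries or CA certificates, would have to be copied in explicitly.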

It doesn't include the entire operating system, but the image is still tied to either Linux or Windows; you can't build an image from one Dockerfile that runs on both.
The reason for the dependency is that a Docker container shares resources with its host machine in a carefully fenced-off way, and this mechanism differs between Windows and Linux (though to you, as a Docker user, the difference is invisible).
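The target OS (and CPU architecture) is recorded in every image's metadata, which you can inspect; the tag here is just an example:
docker image inspect --format '{{.Os}}/{{.Architecture}}' openjdk:12.0.2   # e.g. linux/amd64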

Related

Does Docker always need an operating system as a base image?

I have heard that Docker doesn't need a separate OS on Linux because it shares the host's, but on Windows it can run Linux containers because Hyper-V provides a Linux virtual machine for the Linux software to run in.
But I am confused about the FROM stage in the Dockerfile. All the guides look like this:
FROM ubuntu:18.04
COPY . /usr/local/bin
RUN make
CMD /usr/local/bin/youapp
I can understand these steps: first you need an OS, then you deploy your application, and finally you run it.
But what does the FROM stage really mean?
Does it always need an OS? Does the nginx Docker image have an OS in it?
If I want to build my own app, I write it, compile it, and run it; but does my own app need an OS? If not, what should I write in the FROM stage?
I also saw a picture saying that a Docker container does not need an OS but uses the host's OS, yet every docker build seems to start from an OS.
The containers on a host share the (host's) kernel but each container must provide (the subset of) the OS that it needs.
In Windows, there's a 1:1 mapping of kernel:OS but, with Linux, the kernel is bundled into various OSs: Debian, Ubuntu, Alpine, SuSE, CoreOS etc.
The FROM statement often references an operating system, but it need not; it is often neither necessary nor a good idea to bundle an operating system in a container. The container should include only what it needs.
The NGINX image uses Debian (Dockerfile).
In some cases the container process has no dependencies beyond the kernel. In these cases a special FROM scratch may be used, which adds nothing at all: it is an empty image.
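You can check an image's base for yourself; for example, for NGINX (hedged: the official image's base may change between releases):
docker run --rm nginx cat /etc/os-release   # PRETTY_NAME shows the Debian release
docker image history nginx                  # lists the layers the image was assembled from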
No, it's not like that. To create any Docker image from a Dockerfile, you need to start with a base image, and that base image can be anything, including an empty image. In the Dockerfile in your example, the FROM line names ubuntu, which means Ubuntu is taken as the base image. The base image does not always need to be an OS.
Follow this link: https://linuxhint.com/create_docker_image_from_scratch/
It should clear up your doubts about base images.
Now I have the answer:
the FROM stage imports the software, but not an OS with a kernel.
It just provides a platform for your application; the ubuntu, debian, or centos you write in the FROM stage is just software, and it has no relationship with the real kernel.
So if your application can run on its own, like hello-world, it is just a binary that doesn't rely on any other library. But mostly you will need an OS image, because it carries the libraries you need.
No, the FROM stage is not providing the operating system to the image. The kernel is always provided by the host system where you are running the container. The FROM stage provides the initial file system, i.e., files, directories, pre-installed software, etc., for the new image. You can also start FROM scratch, which is like a blank slate.
The FROM line need not point to any OS at all:
It can be any other container image, or it can be FROM scratch.
Containers on a host share the host's kernel, so you can think of a container as a process utilizing the host kernel.
Generally you see HTTPD, NGINX, etc. using Debian as the container OS, since that Debian image is very thin, serves the purpose of isolation, and runs as an independent server.
You can even create an HTTPD or NGINX image without using any OS and tag it with your own version :-)

Can I run a Docker container doing an x86 build on an IBM Power system?

Our build setup is baked into a large Docker container (basically a 2 GB image containing a complete x86 Linux in itself).
We have two ways to actually build: the official approach is a Jenkins environment (running on x86 hardware). But we also have a little "side x86 server" running RH 7; developers can log into that RH server and kick off specific builds (using said Docker images) themselves.
Those RH servers will be shut down at some point, to be replaced by IBM Power8 machines (running RH 7 Little Endian for Power).
I am simply wondering: is there a chance that our existing build setup and Docker images will simply work on Power8? Or are there fundamental technical issues that make it unlikely and not even worth trying?
You can probably use your existing build methodology and scripts close to unchanged, but you'll need to rebuild the actual images.
You can't directly run x86 binaries on Power (at a very low level, the bytes of machine code are just different). Docker doesn't contain any sort of virtualization layer; it does a bunch of setup to isolate the container from the host, but then runs the binaries in an image directly.
If your Jenkins setup has enough parameters for image names and version tags, then you should be able to run the x86 and Power setups side-by-side; you need to encode the architecture somewhere in the built image name or tag; for instance, repo.example.com/app/build:20180904-power. (I don't know that one or the other is considered better if you control all of the machinery.) If you have a private repo, you could encode it earlier in the path, winding up with image names like repo.example.com/power/build:20180904.
You'd need to double-check that everywhere that has a Docker image reference has it correctly parameterized (which is a good practice anyways). That would include any direct docker run commands; any Docker Compose or Kubernetes YAML files or similar artifacts; and the FROM line of any Dockerfiles.
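As a sketch of that parameterization (assuming Docker 17.05+, which allows ARG before FROM; image names are illustrative):
# Dockerfile
ARG BASE_IMAGE=debian:stretch
FROM ${BASE_IMAGE}
# ... the rest of the build steps are unchanged ...

# On the Power8 machine, pick a ppc64le base and encode the architecture in the tag:
docker build --build-arg BASE_IMAGE=ppc64le/debian:stretch \
    -t repo.example.com/app/build:20180904-power .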
Existing build setup? Not sure!
Docker images? NO, don’t even try.
Docker images are actually multiple layers, stored on the filesystem through the corresponding storage driver and backing filesystem (shown in the output of docker info).
If the storage driver or backing filesystem changes, which is likely when the OS changes, older Docker images may no longer be valid, meaning they must be rebuilt for sure.

Why provide a Linux distro as a Dockerfile base when my host has all the software I need installed?

I want to start writing a Docker image. I have a .NET Core 2.0 Web API service that I have deployed to an Amazon Linux machine. It runs fine, but I would like to automate the build and deployment process a bit.
As far as I am concerned, there is no need for a parent image for the image I need to build. I might grab some files from a location, run some dotnet CLI commands, and run the service using Apache as a reverse proxy. I don't really see the need for a parent image in any of that.
I am asking this question because most of the examples I have seen include a base image. Most of the time it's something very generic, like "FROM ubuntu". I have read that most images will include a parent image. According to Docker's documentation:
A parent image is the image that your image is based on. It refers to the contents of the FROM directive in the Dockerfile. Each subsequent declaration in the Dockerfile modifies this parent image. Most Dockerfiles start from a parent image, rather than a base image. However, the terms are sometimes used interchangeably.
What exactly is the point of inheriting from Ubuntu? Even the Docker docs suggest using Debian "since it’s very tightly controlled and kept minimal". Does that just ensure that your Linux machine has an Ubuntu distribution? Does it even matter if I am using Amazon Linux but use the Debian image as my base?
A Docker image runs in a set of filesystem namespaces which are unconnected from the host's except where you've chosen to bind-mount a volume. This means that tools installed on the host are unavailable to the container: Just because the host runs Amazon Linux doesn't mean that the userspace commands Amazon Linux provides (and the libraries those commands use to run) are available to the guests.
Without a Linux distro available inside the container, you wouldn't have a package management tool (yum, apt-get, etc.) with which to install the tools you need to download files or run software (software that presumably needs to be linked against a libc, a copy of OpenSSL, or other shared components). There are also runtime parts of a working Linux system, such as the resolver, that are provided in userland by your distro and not shared from the host in a Docker install.
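That's why typical Dockerfiles lean on the base distribution's package manager; a minimal sketch, assuming a Debian base:
FROM debian:stretch-slim
# apt-get exists only because the Debian base image provides it
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates \
 && rm -rf /var/lib/apt/lists/*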
Using a base image ensures that you have tools available inside your container -- and it ensures that that container will work consistently on any Linux system with a compatible kernel and hardware architecture.
It's possible in theory to bind-mount many of the tools from the host (as by exposing all of /usr as a volume), but doing so would defeat many of the advantages Docker offers in portability.

What should be packed as a Docker image?

I just got started with Docker.
I'm really confused about what should be packed as a Docker image.
On Docker Hub, I can find complete OSes as Docker images: ubuntu, centos...
as well as popular platforms and databases like nodejs and mongodb.
It seems to me Docker Hub is just like a software repository.
Should everything be packed as an image? What about command-line tools like ls, cd, git? They are also software; what qualifies something to be a Docker image?
Please help clarify.
Docker provides some isolation between the programs in the image and your host environment. In a Docker image, one may package anything from a single binary to a full environment (everything but the Linux kernel).
See it as a convenience: it provides a convenient way to deploy programs that require a complex environment which may conflict with the programs installed on the host. For instance, if you're trying to package a webapp (e.g. some blog software), the Docker container allows you to ship your application code along with a tested version of its interpreter (PHP, Python, etc.), a compatible webserver, and maybe a database environment, all together. From the perspective of the user installing your container/app, nothing other than the container is needed to run the app. It's all self-contained, and it's simpler than setting up a virtual machine.
If your binaries in the image depend on an ls command, then you'd include that as well. Generally, in the image, you include a binary (the entry point), as well as all its dependencies.
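For a dynamically linked binary, the linker can tell you what those dependencies are (a hedged illustration; the exact list varies by system):
ldd /bin/ls   # prints libc and the other shared libraries that would also have to be in the image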
If you're familiar with chroots, you may see a docker image as a fancy chroot where the network, and process address space are also isolated, in addition to the file system.
You can think of Docker Hub as an app store.

Can a docker image based on Ubuntu run in Redhat?

I read some slide decks, and it seems that one container can run on different Linux vendors' distributions. Is this true?
Yes. That's the main idea of docker.
It creates a "static container" in a chrooted environment that is able to run on any Linux, because all the needed user-land dependencies are included in the image.
Since Linux (the kernel) maintains backward compatibility of system calls and their calling conventions, the idea works across versions and even across different distributions of Linux.
Of course, the binary architecture (say amd64) needs to be the same on the source and target system.
Yes, for most applications this works. The kernel is whatever you are really running on (RedHat in your example) while the userspace is supplied by the container (Ubuntu).
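A quick way to observe that split on a Red Hat host (hedged: file names and output vary by version):
cat /etc/redhat-release                      # the host's userland says Red Hat
docker run --rm ubuntu:16.04 cat /etc/issue  # the container's userland says Ubuntu
docker run --rm ubuntu:16.04 uname -r        # but the kernel version reported is the host's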
Most Linux kernel variants are sufficiently similar that applications will not notice. However if the code relies on something specific in the kernel that is not there, Docker can't help you.
Docker itself relies on certain minimum kernel features, version 3.8 at the time of writing. https://docs.docker.com/engine/installation/binaries/
