I have heard that Docker doesn't need a separate OS on Linux, because it shares the kernel with the host OS, but on Windows with Hyper-V it can run Linux containers because Hyper-V spins up a Linux virtual machine and runs the Linux software inside it.
But I get confused about the FROM stage in the Dockerfile; all the guides show something like this:
FROM ubuntu:18.04
COPY . /usr/local/bin
RUN make
CMD /usr/local/bin/youapp
I can understand these steps: first you need an OS, then you deploy your application, and finally you run your app.
But what does the FROM stage really mean?
Does it always need an OS? Does the nginx Docker image have an OS in it?
If I want to build my own app, I write it, I compile it, I run it; but does my own app need an OS? If not, what should I write in the FROM stage?
I saw a picture saying that a Docker container does not need an OS but uses the host OS, yet docker build always seems to need an OS.
The containers on a host share the (host's) kernel but each container must provide (the subset of) the OS that it needs.
In Windows, there's a 1:1 mapping of kernel:OS but, with Linux, the kernel is bundled into various OSs: Debian, Ubuntu, Alpine, SuSE, CoreOS etc.
The FROM statement often references an operating system, but it need not; it is often unnecessary (and not a good idea) to bundle an entire operating system into a container. The container should include only what it needs.
The NGINX image uses Debian (Dockerfile).
In some cases, the container process has no dependencies beyond the kernel. In these cases, a special FROM scratch may be used that adds nothing else. It's an empty image (link).
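A minimal sketch of that case (the binary name hello is a placeholder; it must be statically compiled, since scratch contains no libc or any other libraries):
FROM scratch
# copy in a statically linked executable; scratch provides no shell, libc, or package manager
COPY hello /hello
# exec form is required here, because there is no shell in the image to interpret a command string
CMD ["/hello"]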
No, it's not like that. To create any Docker image using a Dockerfile, you need to start with a base image. That base image can be anything, even an empty image. In the Dockerfile in your example, the FROM line says ubuntu, meaning Ubuntu is used as the base image. It's not always necessary to have an OS as the base image.
Follow this link - https://linuxhint.com/create_docker_image_from_scratch/
It will clear your doubts about base images.
Now I've got the answer.
The FROM stage imports the userland software, but not an OS with its own kernel.
It just provides a platform for your application; the ubuntu, debian, or centos you write in the FROM stage is just software, and the real kernel has no relationship with them.
So if your application can run standalone, like hello-world, as a single binary that doesn't rely on any other library, it needs no OS image. But mostly you do need an OS, because it has the libraries you need.
No, the FROM stage is not providing the operating system to the image. The kernel is always provided by the host system where you are running the container. The FROM stage provides the initial file system, i.e. files, directories, pre-installed software, etc. for the new image. You can also start FROM scratch, which is like a blank slate.
The FROM line need NOT necessarily point to an OS:
It can be any other image, or it can be FROM scratch.
Containers on a host share the kernel, so you can think of a container as a process utilizing the host kernel.
Generally you'll see HTTPD, NGINX, etc. using Debian as the container OS, since the Debian image is very thin, serves the purpose of isolation, and runs as an independent server.
You can even create an HTTPD or NGINX image without using any OS and tag it with your own version :-)
I am using Windows, but to use Docker I run a CentOS VM in Oracle VM VirtualBox. I have seen a Dockerfile where centos is used as the base image. The first line of my Dockerfile is
FROM centos
If I check the Dockerfile of CentOS on Docker Hub, the first line is
FROM scratch
scratch is used to build an explicitly empty image, especially for building base images. From this I understand that if I keep traversing upward along the "FROM" lines, I will eventually end up at the "scratch" image. I can see that scratch can be used to create a minimal container.
Question: If I want to create some bigger applications using web server, database etc, then is it necessary to add a base OS image?
I have searched for mysql and tomcat and noticed that each ultimately uses an OS image.
My understanding of Container was that I can "just bundle the required software and my service" in the container. Please clarify.
Your understanding is correct; however, "just bundle the required software and my service" may be cumbersome, especially if you also have shell scripts that make use of other support programs.
Using a base image that already contains all the necessary stuff is more convenient. You can share the same base image between several services, and thanks to Docker's layered images there is no extra overhead in disk space.
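For instance (a sketch; the service names and binaries are hypothetical), two Dockerfiles that start from the same base share its layers on disk:
# service-a/Dockerfile
FROM ubuntu:18.04
COPY service-a /usr/local/bin/service-a
CMD ["/usr/local/bin/service-a"]
# service-b/Dockerfile
FROM ubuntu:18.04
COPY service-b /usr/local/bin/service-b
CMD ["/usr/local/bin/service-b"]
Both images reference the same ubuntu:18.04 layers, so the base is downloaded and stored only once, no matter how many services build on it.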
I want to start writing a Docker image. I have a .net Core 2.0 Web Api service that I have deployed to an Amazon Linux machine. It runs fine, but I would like to automate the build and deployment process a bit.
As far as I am concerned, there is no need for a parent image for the image I need to build. I might grab some files from a location, run some dotnet CLI commands, and run the service using Apache as a reverse proxy. I don't really see the need for a parent image in any of that.
I am asking because most of the examples I have seen include a base image, and most of the time it's something very generic, like FROM ubuntu. I have read that most images will include a parent image. According to Docker's documentation:
A parent image is the image that your image is based on. It refers to the contents of the FROM directive in the Dockerfile. Each subsequent declaration in the Dockerfile modifies this parent image. Most Dockerfiles start from a parent image, rather than a base image. However, the terms are sometimes used interchangeably.
What exactly is the point of inheriting from Ubuntu? Even the Docker docs suggest using Debian "since it’s very tightly controlled and kept minimal". Does that just ensure that your Linux machine has an Ubuntu distribution? Does it even matter if I am using Amazon Linux but use the Debian image as my base?
A Docker image runs in a set of filesystem namespaces which are unconnected from the host's except where you've chosen to bind-mount a volume. This means that tools installed on the host are unavailable to the container: Just because the host runs Amazon Linux doesn't mean that the userspace commands Amazon Linux provides (and the libraries those commands use to run) are available to the guests.
Without a Linux distro available inside the container, you wouldn't have a package management tool (yum, apt-get, etc) with which to install the tools you need to download a file, run software (that presumably needs to be linked to a libc, a copy of OpenSSL, or other shared components). There are also runtime parts of a working Linux system such as the resolver that are provided in userland by your distro and not shared from the host in a Docker install.
Using a base image ensures that you have tools available inside your container -- and it ensures that that container will work consistently on any Linux system with a compatible kernel and hardware architecture.
It's possible in theory to bind-mount many of the tools from the host (as by exposing all of /usr as a volume), but doing so would defeat many of the advantages Docker offers in portability.
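As a concrete sketch (the image tag, port, and DLL name are assumptions for a .NET Core 2.0 app, not a definitive recipe), a Dockerfile for such a service would typically start from Microsoft's runtime image rather than a bare distro:
# assumes `dotnet publish -c Release -o publish` was run beforehand
FROM microsoft/dotnet:2.0-runtime
WORKDIR /app
COPY publish .
# adjust to whatever port Kestrel is configured to listen on
EXPOSE 5000
# MyWebApi.dll is a hypothetical assembly name
CMD ["dotnet", "MyWebApi.dll"]
Here the parent image supplies the Linux userland plus the .NET runtime, which is exactly what makes the container run the same on Amazon Linux, Ubuntu, or any other compatible host.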
This might seem a stupid question, but here I am:
I'm running Ubuntu 16.04 and managed to install Windows 10 in dual boot.
Having run Docker exclusively on Linux so far, I decided to give it a try on Windows 10.
As I have already downloaded several Docker images on my Linux system, I would like to have a shared development environment. I must admit it would be a waste of time and disk space to re-download, on my fresh Windows install, Docker images I have already downloaded on Linux.
So my question is simple: can I use my Linux images/containers on Windows? I'm thinking of something like a global path variable, configured in Docker on Windows, pointing to my Linux images.
Any idea if this is possible, and if yes, what are the pros, cons, and caveats?
Thanks for helping me on this one.
Well, I would suggest creating a local registry, pushing these images there, and then pulling them into your Windows Docker.
Sonatype Nexus (an artifact storage repository) can also be used to store your Docker images. Check if this helps.
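A minimal sketch of that workflow (registry-host is a placeholder for wherever the registry runs; without TLS you may need to configure it as an insecure registry on both daemons):
# on the Linux side: run a registry and push an image to it
docker run -d -p 5000:5000 --name registry registry:2
docker tag ubuntu:18.04 localhost:5000/ubuntu:18.04
docker push localhost:5000/ubuntu:18.04
# on the Windows side: pull it back
docker pull registry-host:5000/ubuntu:18.04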
I guess it's not possible to share the same folder (to reduce disk usage), since the stored files are totally different:
Under Windows the file is:
C:\Users\Public\Documents\Hyper-V\Virtual hard disks\MobyLinuxVM.vhdx
The vhdx extension is specific to MS systems.
Under Linux it consists of 2 files:
/var/lib/docker/devicemapper/devicemapper/data
/var/lib/docker/devicemapper/devicemapper/metadata
See here for details:
Where are Docker images stored on the host machine?
The technology underneath relies on a filesystem layout specific to Docker. Even if both used the same filesystem storage, sharing it wouldn't be a good idea IMHO.
If the purpose is only to save time when reinstalling, just dump the list of images from one system and re-pull them on the other one:
docker images --format "{{.Repository}}:{{.Tag}}" > image-list.txt
then loop over the list on the other OS:
while read p; do
  docker pull "$p"
done < image-list.txt
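For images that aren't available from any registry (e.g. ones you built locally), docker save and docker load can move them directly; myimage:latest is a placeholder name:
docker save -o myimage.tar myimage:latest
# copy myimage.tar to the other system (USB stick, shared partition, scp ...), then:
docker load -i myimage.tar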
I am totally new to Docker and have only, so far, used the images available on Docker repos.
I have already been using Docker for some aspects of my daily activities and it works great for me, but in some specific cases I need a Linux image with graphics support (X in Ubuntu or CentOS), and so far the images I have found on Docker repos don't have X support by default.
Should I in this case use a standard VirtualBox or VMware image? Is it possible to run a graphical version of Linux in a Docker container? I haven't tried it yet.
If you run your containers in privileged mode they can access the host's resources (and anything else for that matter), so in essence it is possible, though I'd be willing to bet it turns out to be more trouble than it's worth, because the containers won't be as portable as ones that don't require such outside resources.
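A common sketch for running a single GUI app without full privileged mode is to bind-mount the host's X socket (assumes an X server running on the host and an image with an X client such as xterm installed; the image name is a placeholder):
# allow local connections to the X server (run on the host)
xhost +local:
# run the container with access to the host's display
docker run --rm \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  some-image-with-xterm xterm
For a full desktop environment rather than a single app, a VirtualBox or VMware VM is usually the simpler choice.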