When using Docker, we start with a base image. We boot it up, create changes and those changes are saved in layers forming another image.
So eventually I have an image for my PostgreSQL instance and an image for my web application, changes to which keep on being persisted.
What is a container?
An instance of an image is called a container. You have an image, which is a set of layers as you describe. If you start this image, you have a running container of this image. You can have many running containers of the same image.
You can see all your images with docker images whereas you can see your running containers with docker ps (and you can see all containers with docker ps -a).
So a running instance of an image is a container.
From my article on Automating Docker Deployments (archived):
Docker Images vs. Containers
In Dockerland, there are images and there are containers. The two are closely related, but distinct. For me, grasping this dichotomy has clarified Docker immensely.
What's an Image?
An image is an inert, immutable file that's essentially a snapshot of a container. Images are created with the build command, and they'll produce a container when started with run. Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite large, images are designed to be composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network.
Local images can be listed by running docker images:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu 13.10 5e019ab7bf6d 2 months ago 180 MB
ubuntu 14.04 99ec81b80c55 2 months ago 266 MB
ubuntu latest 99ec81b80c55 2 months ago 266 MB
ubuntu trusty 99ec81b80c55 2 months ago 266 MB
<none> <none> 4ab0d9120985 3 months ago 486.5 MB
Some things to note:
IMAGE ID is the first 12 characters of the true identifier for an image. You can create many tags of a given image, but their IDs will all be the same (as above).
VIRTUAL SIZE is virtual because it's adding up the sizes of all the distinct underlying layers. This means that the sum of all the values in that column is probably much larger than the disk space used by all of those images.
The value in the REPOSITORY column comes from the -t flag of the docker build command, or from docker tag-ing an existing image. You're free to tag images using a nomenclature that makes sense to you, but know that docker will use the tag as the registry location in a docker push or docker pull.
The full form of a tag is [REGISTRYHOST/][USERNAME/]NAME[:TAG]. For ubuntu above, REGISTRYHOST is inferred to be registry.hub.docker.com. So if you plan on storing your image called my-application in a registry at docker.example.com, you should tag that image docker.example.com/my-application.
The TAG column is just the [:TAG] part of the full tag. This is unfortunate terminology.
The latest tag is not magical, it's simply the default tag when you don't specify a tag.
You can have untagged images only identifiable by their IMAGE IDs. These will get the <none> TAG and REPOSITORY. It's easy to forget about them.
More information on images is available from the Docker documentation and glossary.
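To make the tag form above concrete, here is a minimal sketch, reusing the placeholder names from the paragraph above (docker.example.com and my-application):
# build and tag the image for a private registry
docker build -t docker.example.com/my-application:1.0 .
# push it there; docker uses the REGISTRYHOST part of the tag as the destination
docker push docker.example.com/my-application:1.0
# later, pull it on another host
docker pull docker.example.com/my-application:1.0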
What's a container?
To use a programming metaphor, if an image is a class, then a container is an instance of a class—a runtime object. Containers are hopefully why you're using Docker; they're lightweight and portable encapsulations of an environment in which to run applications.
View local running containers with docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f2ff1af05450 samalba/docker-registry:latest /bin/sh -c 'exec doc 4 months ago Up 12 weeks 0.0.0.0:5000->5000/tcp docker-registry
Here I'm running a dockerized version of the docker registry, so that I have a private place to store my images. Again, some things to note:
Like IMAGE ID, CONTAINER ID is the true identifier for the container. It has the same form, but it identifies a different kind of object.
docker ps only outputs running containers. You can view all containers (running or stopped) with docker ps -a.
NAMES can be used to identify a started container via the --name flag.
How to avoid image and container buildup
One of my early frustrations with Docker was the seemingly constant buildup of untagged images and stopped containers. On a handful of occasions this buildup resulted in maxed out hard drives slowing down my laptop or halting my automated build pipeline. Talk about "containers everywhere"!
We can remove all untagged images by combining docker rmi with the recent dangling=true query:
docker images -q --filter "dangling=true" | xargs docker rmi
Docker won't be able to remove images that are behind existing containers, so you may have to remove stopped containers with docker rm first:
docker rm `docker ps --no-trunc -aq`
These are known pain points with Docker and may be addressed in future releases. However, with a clear understanding of images and containers, these situations can be avoided with a couple of practices (see also the newer cleanup commands after this list):
Always remove a useless, stopped container with docker rm [CONTAINER_ID].
Always remove the image behind a useless, stopped container with docker rmi [IMAGE_ID].
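As a side note, newer Docker releases have since added dedicated cleanup commands, so on a current engine the two practices above can be approximated with:
docker container prune   # removes all stopped containers
docker image prune       # removes all dangling (untagged) images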
While it's simplest to think of a container as a running image, this isn't quite accurate.
An image is really a template that can be turned into a container. To turn an image into a container, the Docker engine takes the image, adds a read-write filesystem on top and initialises various settings including network ports, container name, ID and resource limits. A running container has a currently executing process, but a container can also be stopped (or exited in Docker's terminology). An exited container is not the same as an image, as it can be restarted and will retain its settings and any filesystem changes.
Maybe explaining the whole workflow can help.
Everything starts with the Dockerfile. The Dockerfile is the source code of the image.
Once the Dockerfile is created, you build it to create the image of the container. The image is just the "compiled version" of the "source code" which is the Dockerfile.
Once you have the image of the container, you should redistribute it using the registry. The registry is like a Git repository -- you can push and pull images.
Next, you can use the image to run containers. A running container is very similar, in many aspects, to a virtual machine (but without the hypervisor).
Dockerfile → (Build) → Image → (Run) → Container.
Dockerfile: contains a set of Docker instructions that provision your operating system the way you like and install/configure all your software.
Image: compiled Dockerfile. Saves you time from rebuilding the Dockerfile every time you need to run a container. And it's a way to hide your provision code.
Container: the virtual operating system itself. You can ssh into it and run any commands you wish, as if it's a real environment. You can run 1000+ containers from the same Image.
Workflow
Here is the end-to-end workflow showing the various commands and their associated inputs and outputs. That should clarify the relationship between an image and a container.
+------------+ docker build +--------------+ docker run -dt +-----------+ docker exec -it +------+
| Dockerfile | --------------> | Image | ---------------> | Container | -----------------> | Bash |
+------------+ +--------------+ +-----------+ +------+
^
| docker pull
|
+--------------+
| Registry |
+--------------+
To list the images you could run, execute:
docker image ls
To list the containers you could execute commands on:
docker ps
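To make the diagram concrete, here is a minimal sketch of that workflow (my-image and my-container are placeholder names; the last step assumes the image ships bash):
docker build -t my-image .                     # Dockerfile -> Image
docker run -dt --name my-container my-image    # Image -> Container
docker exec -it my-container bash              # Container -> Bash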
I couldn't understand the concept of image and layer in spite of reading all the questions here and then eventually stumbled upon this excellent documentation from Docker (duh!).
The example there is really the key to understand the whole concept. It is a lengthy post, so I am summarising the key points that need to be really grasped to get clarity.
Image: A Docker image is built up from a series of read-only layers
Layer: Each layer represents an instruction in the image’s Dockerfile.
Example: The below Dockerfile contains four commands, each of which creates a layer.
FROM ubuntu:15.04
COPY . /app
RUN make /app
CMD python /app/app.py
Importantly, each layer is only a set of differences from the layer before it.
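You can see those per-instruction layers for yourself with docker history; a minimal sketch, assuming the Dockerfile above has been built and tagged with the placeholder name my-app:
docker build -t my-app .
docker history my-app   # one row per layer, newest first, with the instruction that created it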
Container.
When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the “container layer”. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.
Hence, the major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.
Understanding images and containers from a size-on-disk perspective
To view the approximate size of a running container, you can use the docker ps -s command. You get size and virtual size as two of the outputs:
Size: the amount of data (on disk) that is used for the writable layer of each container
Virtual Size: the amount of data used for the read-only image data used by the container. Multiple containers may share some or all read-only image data, so these values are not additive; i.e., you can't add up all the virtual sizes to calculate how much disk space is used by the images.
Another important concept is the copy-on-write strategy
If a file or directory exists in a lower layer within the image, and another layer (including the writable layer) needs read access to it, it just uses the existing file. The first time another layer needs to modify the file (when building the image or running the container), the file is copied into that layer and modified.
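A small sketch to observe the writable container layer and copy-on-write in action (cow-demo is a placeholder container name):
docker run -dit --name cow-demo ubuntu:15.04 bash
docker exec cow-demo sh -c 'echo hello > /changed.txt'   # write into the container layer
docker diff cow-demo   # shows 'A /changed.txt'; the image layers below are untouched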
I hope that helps someone else like me.
Simply said, if an image is a class, then a container is an instance of that class, a runtime object.
A container is just an executable binary that is to be run by the host OS under a set of restrictions that are preset using an application (e.g., Docker) that knows how to tell the OS which restrictions to apply.
The typical restrictions are process-isolation related, security related (like using SELinux protection) and system-resource related (memory, disk, CPU, and networking).
Until recently, only kernels in Unix-based systems supported the ability to run executables under strict restrictions. That's why most container talk today involves mostly Linux or other Unix distributions.
Docker is one of those applications that knows how to tell the OS (Linux mostly) what restrictions to run an executable under. The executable is contained in the Docker image, which is just a tarfile. That executable is usually a stripped-down version of a Linux distribution's User space (Ubuntu, CentOS, Debian, etc.) preconfigured to run one or more applications within.
Though most people use a Linux base as the executable, it can be any other binary application as long as the host OS's kernel can run it (see creating a simple base image using scratch). Whether the binary in the Docker image is an OS User space or simply an application, to the OS host it is just another process, a contained process ruled by preset OS boundaries.
Other applications that, like Docker, can tell the host OS which boundaries to apply to a process while it is running, include LXC, libvirt, and systemd. Docker used to use these applications to indirectly interact with the Linux OS, but now Docker interacts directly with Linux using its own library called "libcontainer".
So containers are just processes running in a restricted mode, similar to what chroot used to do.
IMO, what sets Docker apart from any other container technology is its repository (Docker Hub) and its management tools, which make working with containers extremely easy.
See Docker (software).
The core concept of Docker is to make it easy to create "machines" which in this case can be considered containers. The container aids in reusability, allowing you to create and drop containers with ease.
Images depict the state of a container at every point in time. So the basic workflow is (sketched in commands after this list):
create an image
start a container
make changes to the container
save the container back as an image
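A minimal sketch of that loop using docker commit (work and my-snapshot are placeholder names):
docker run -it --name work ubuntu bash   # start a container from a base image
# ...make changes inside the container, then exit...
docker commit work my-snapshot           # save the container back as a new image
docker run -it my-snapshot bash          # start fresh containers from the new image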
As many answers have pointed out: you build a Dockerfile to get an image, and you run an image to get a container.
However, the following steps helped me get a better feel for what a Docker image and a container are:
1) Build Dockerfile:
docker build -t my_image dir_with_dockerfile
2) Save the image to a .tar file
docker save -o my_file.tar my_image_id
my_file.tar will store the image. Open it with tar -xvf my_file.tar, and you will get to see all the layers. If you dive deeper into each layer you can see what changes were added in each layer. (They should be pretty close to commands in the Dockerfile).
3) To take a look inside of a container, you can do:
sudo docker run -it my_image bash
and you can see that it is very much like an OS.
It may help to think of an image as a "snapshot" of a container.
You can make images from a container (new "snapshots"), and you can also start new containers from an image (instantiate the "snapshot"). For example, you can instantiate a new container from a base image, run some commands in the container, and then "snapshot" that as a new image. Then you can instantiate 100 containers from that new image.
Other things to consider:
An image is made of layers, and layers are snapshot "diffs"; when you push an image, only the "diff" is sent to the registry.
A Dockerfile defines some commands on top of a base image, that creates new layers ("diffs") that result in a new image ("snapshot").
Containers are always instantiated from images.
Image tags are not just tags. They are the image's "full name" ("repository:tag"). If the same image has multiple names, it shows multiple times when doing docker images.
An image is equivalent to a class definition in OOP, and layers are the different methods and properties of that class.
A container is the actual instantiation of the image, just like how an object is an instantiation or an instance of a class.
I think it is better to explain from the beginning.
Suppose you run the command docker run hello-world. What happens?
It calls the Docker CLI, which is responsible for taking Docker commands and transforming them into calls to the Docker server. As soon as the Docker server gets a command to run an image, it checks whether the image cache holds an image with that name.
Suppose hello-world does not exist. The Docker server goes to Docker Hub (Docker Hub is just a free repository of images) and asks: hey Hub, do you have an image called hello-world?
The Hub responds: yes, I do. Then give it to me, please. And the download process starts. As soon as the Docker image is downloaded, the Docker server puts it in the image cache.
So before we explain what Docker images and Docker containers are, let's start with an introduction about the operating system on your computer and how it runs software.
When you run, for example, Chrome on your computer, it calls the operating system; the operating system itself calls the kernel and asks, hey, I want to run this program. The kernel manages running files from your hard disk.
Now imagine that you have two programs, Chrome and Node.js. Chrome requires Python version 2 to run and Node.js requires Python version 3 to run. If you only have Python v2 installed on your computer, only Chrome will run.
To make both cases work, somehow you need to use an operating system feature known as namespacing. A namespace is a feature which gives you the opportunity to isolate processes, hard drive, network, users, hostnames and so on.
So, when we talk about an image we actually talk about a file system snapshot. An image is a physical file which contains directions and metadata to build a specific container. The container itself is an instance of an image; it isolates the hard drive using namespacing which is available only for this container. So a container is a process or set of processes which groups different resources assigned to it.
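If you are on Linux, you can get a feel for namespacing without Docker at all; a rough sketch using util-linux's unshare (requires root):
sudo unshare --pid --fork --mount-proc bash   # start a shell in its own PID namespace
ps aux   # inside the new namespace, the shell sees itself as PID 1
exit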
A Docker image packs up the application and environment required by the application to run, and a container is a running instance of the image.
Images are the packing part of Docker, analogous to "source code" or a "program". Containers are the execution part of Docker, analogous to a "process".
In the question, only the "program" part is referred to and that's the image. The "running" part of Docker is the container. When a container is run and changes are made, it's as if the process makes a change in its own source code and saves it as the new image.
In programming terms:
An image is source code.
When source code is compiled and built, it is called an application.
Similarly, when an instance is created from the image, it is called a "container".
I would like to fill the missing part here between docker images and containers. Docker uses a union file system (UFS) for containers, which allows multiple filesystems to be mounted in a hierarchy and to appear as a single filesystem. The filesystem from the image has been mounted as a read-only layer, and any changes to the running container are made to a read-write layer mounted on top of this. Because of this, Docker only has to look at the topmost read-write layer to find the changes made to the running system.
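Assuming the overlay2 storage driver, you can even see where that topmost read-write layer lives on disk (web is a placeholder container name):
docker inspect --format '{{ .GraphDriver.Data.UpperDir }}' web
# UpperDir is the container's writable layer; LowerDir (also under .GraphDriver.Data) lists the read-only image layers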
I would state it with the following analogy:
+-----------------------------+-------+-----------+
| Domain | Meta | Concrete |
+-----------------------------+-------+-----------+
| Docker | Image | Container |
| Object oriented programming | Class | Object |
+-----------------------------+-------+-----------+
Docker Client, Server, Machine, Images, Hub, and Compose are all projects, tools, and pieces of software that come together to form a platform, an ecosystem around creating and running something called containers. If you run the command docker run redis, the Docker CLI reaches out to Docker Hub and downloads a single file called an image.
Docker Image:
An image is a single file containing all the dependencies and all the configuration required to run a very specific program; for example, the redis image that you just downloaded (by running docker run redis) contains everything needed to run Redis.
This is a single file that gets stored on your hard drive, and at some point in time you can use this image to create something called a container.
A container is an instance of an image, and you can think of it as a running program with its own isolated set of hardware resources: its own little space of memory, its own little space of networking, and its own little slice of hard drive space as well.
Now let's examine what happens when you run the command below:
sudo docker run hello-world
The command above starts up the Docker client, or Docker CLI. The Docker CLI is in charge of taking commands from you, doing a little bit of processing on them, and then communicating the commands over to something called the Docker server, which is in charge of the heavy lifting.
When we ran docker run hello-world, that meant we wanted to start up a new container using the image named hello-world. The hello-world image has a tiny little program inside it whose sole job is to print out the message that you see in the terminal.
When we ran that command and it was issued over to the Docker server, a series of actions quickly occurred in the background. The Docker server saw that we were trying to start up a new container using an image called hello-world.
The first thing the Docker server did was check whether it already had a local copy (a copy on your personal machine) of the hello-world image. So the Docker server looked into something called the image cache.
Because you and I just installed Docker on our personal computers, that image cache is currently empty; we have no images that have been downloaded before.
Because the image cache was empty, the Docker server decided to reach out to a free service called Docker Hub. Docker Hub is a repository of free public images that you can freely download and run on your personal computer. So the Docker server reached out to Docker Hub, downloaded the hello-world image, and stored it on your computer in the image cache, where it can be reused very quickly in the future without being re-downloaded from Docker Hub.
After that, the Docker server used it to create an instance of a container, and we know that a container is an instance of an image whose sole purpose is to run one very specific program. So the Docker server essentially took that image file from the image cache, loaded it into memory, created a container out of it, and then ran a single program inside it. And that single program's purpose was to print out the message that you see.
What a container is:
First of all an image is a blueprint for how to create a container.
A container is a process or a set of processes that has a grouping of resources specifically assigned to it. Any time we think about a container, we have some running process that sends a system call to the kernel; the kernel looks at that incoming system call and directs it to a very specific portion of the hard drive, the RAM, the CPU, or whatever else it might need, and a portion of each of these resources is made available to that singular process.
An image is to a class as a container to an object.
A container is an instance of an image as an object is an instance of a class.
In Docker, an image is an immutable file that holds the source code and information needed for a Docker app to run. It can exist independently of a container.
Docker containers are virtualized environments created at runtime, and they require images to run. The Docker website has a picture that shows this relationship.
Just as an object is an instance of a class in an object-oriented programming language, so a Docker container is an instance of a Docker image.
For a rough programming analogy, you can think of Docker as having an abstract ImageFactory which holds image factories that come from a store.
Then, once you want to create an app out of that ImageFactory, you will have a new container, and you can modify it as you want. DotNetImageFactory will be immutable, because it acts as an abstract factory class that only delivers the instances you desire.
IContainer newDotNetApp = ImageFactory.DotNetImageFactory.CreateNew(appOptions);
newDotNetApp.ChangeDescription("I am making changes on this instance");
newDotNetApp.Run();
In short:
A container is a (virtual) division in a kernel which shares a common OS and runs an image (a Docker image).
A container is a self-sustainable application that will have packages and all the necessary dependencies together to run the code.
A Docker container is running an instance of an image. You can relate an image with a program and a container with a process :)
A Dockerfile is like a Bash script that produces a tarball (the Docker image).
A Docker container is like an extracted version of the tarball. You can have as many copies as you like in different folders (the containers).
An image is the blueprint from which containers (running instances) are built.
Long story short.
Docker Images:
The read-only file system and configuration of an application, used to create containers.
Docker Containers:
The major difference between a container and an image is the top writable layer. Containers are running instances of Docker images with top writable layer. Containers run the actual applications. A container includes an application and all of its dependencies. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.
Other important terms to notice:
Docker daemon:
The background service running on the host that manages the building, running and distributing Docker containers.
Docker client:
The command line tool that allows the user to interact with the Docker daemon.
Docker Store:
The Store is, among other things, a registry of Docker images. You can think of the registry as a directory of all available Docker images.
A picture from this blog post is worth a thousand words.
Summary:
Pull an image from Docker Hub or build one from a Dockerfile => gives a Docker image (not editable).
Run the image (docker run image_name:tag_name) => gives a running image, i.e. a container (editable).
An image is like a class and a container is like an object of that class, and so you can have an infinite number of containers behaving like the image. A class is a blueprint which isn't doing anything on its own. You have to create instances of the object in your program to do anything meaningful, and so it is with an image and a container. You define your image and then create containers running that image. It isn't exactly the same, because an object is an instance of a class, whereas a container is something like an empty, hollow place, and you use the image to build up a running host with exactly what the image says.
An image, or a container image, is a file which contains your application code, application runtime, configurations, and dependent libraries. The image basically wraps all of these into a single, secure, immutable unit. The appropriate docker command is used to build the image. The image has an image ID and an image tag. The tag is usually in the format of <docker-user-name>/image-name:tag.
When you start running your application using the image you actually start a container. So your container is a sandbox in which you run your image. Docker software is used to manage both the image and container.
Image is a secured package which contains your application artifact, libraries, configurations and application runtime. Container is the runtime representation of your image.
Why does docker have to create an image from a dockerfile then create a container from the image instead of creating a container directly from a Dockerfile?
What is the purpose/benefit of creating the image first from the Dockerfile then from that create a container?
-----EDIT-----
The question "What is the difference between a Docker image and a container?" does not answer my question.
My question is: Why do we need to create a container from an image and not a dockerfile? What is the purpose/benefit of creating the image first from the Dockerfile then from that create a container?
the Dockerfile is the recipe to create an image
the image is a virtual filesystem
the container is a running process on a host machine
You don't want every host to build its own image based on the recipe. It's easier for some hosts to just download an image and work with that.
Creating an image can be very expensive. I have complicated Dockerfiles that may take hours to build, may download 50 GB of data, yet still only create a 200 MB image that I can send to different hosts.
Spinning up a container from an existing image is very cheap.
If all you had was the Dockerfile in order to spin up image-containers, the entire workflow would become very cumbersome.
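A sketch of the build-once, run-anywhere split (registry.example.com/myapp is a placeholder tag):
# on the build machine: pay the (possibly hours-long) build cost once
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
# on every other host: skip the build entirely
docker pull registry.example.com/myapp:1.0
docker run -d registry.example.com/myapp:1.0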
Images and Containers are two different concepts.
Basically, images are like a snapshot of a filesystem, along with some meta-data.
A container is one or several processes that are actually running (and which are based on an image). As soon as the processes end, your container does not exist anymore (well, it is stopped, to be exact).
You can view the image as the base that you will make your container run on.
Thus, your Dockerfile will create an image (which is static) that you can store locally or push to a repository, to be able to use it later.
The container cannot be "stored" because it is a "living" thing.
You can think of Images vs Containers similar to Classes vs Objects or the Definition vs Instance. The image contains the filesystem and default settings for creating the container. The container contains the settings for a specific instance, and when running, the namespaces and running process.
As for why you'd want to separate them: efficiency and portability. Since we have separate images, we also have inheritance, where one image extends another. The key detail of that inheritance is that filesystem layers in the image are not copied for each image. Those layers are static, and you can extend them by creating a new image with new layers. Using the overlay filesystem (or one of the other union filesystem drivers) we can append additional changes to that filesystem with our new image. Containers do the same when they run the image. That means you can have a 1 Gig base image, extend it with a child image with 100 Megs of changes, and run 5 containers that each write 1 Meg of files, and the overall disk space used on the docker host is only 1.105 Gigs rather than 7.6 Gigs.
The portability part comes into play when you use registries, e.g. Docker Hub. The image is the part of the container that is generic, reusable, and transferable. It's not associated with an instance on any host. So you can push and pull images, but containers are tightly bound to the host they are running on, named volumes on that host, networks defined on that host, etc.
I'm totally new to Docker so I appreciate your patience.
I'm looking for a way to deploy multiple containers with the same image, however I need to pass in a different config (file) to each?
Right now, my understanding is that once you build an image, that's what gets deployed, but the problem for me is that I don't see the point in building multiple images of the same application when it's only the config that is different between the containers.
If this is the norm, then I'll have to deal with it however if there's another way then please put me out of my misery! :)
Thanks!
I think looking at examples which are easy to understand could give you the best picture.
What you want to do is perfectly valid, an image should be anything you need to run, without the configuration.
To provide the configuration, you can use one of the following approaches:
a) volume mounts
Use volumes and mount the file during container start: docker run -v $(pwd)/my.ini:/etc/mysql/my.ini percona (and similar with docker-compose). Note that bind mounts require an absolute host path.
Be aware, you can repeat this as often as you like, so you can mount several configs into your container (the runtime version of the image).
You will create those configs on the host before running the container, and you need to ship those files alongside the container, which is the downside of this approach (portability).
b) entry-point based configuration (generation)
Most of the advanced Docker images provide a so-called entrypoint script which consumes ENV variables you pass when starting the image, to create the configuration(s) for you, like https://github.com/docker-library/percona/blob/master/5.7/docker-entrypoint.sh
So when you run this image, you can do docker run -e MYSQL_DATABASE=myapp percona and this will start Percona and create the database myapp for you.
This is all done by
adding the entry-point script here https://github.com/docker-library/percona/blob/master/5.7/Dockerfile#L65
do not forget to copy the script during image build https://github.com/docker-library/percona/blob/master/5.7/Dockerfile#L63
Then during the image-startup, your ENV variable will cause this to trigger: https://github.com/docker-library/percona/blob/master/5.7/docker-entrypoint.sh#L91
Of course, you can do whatever you like with this. E.g. this configures a general Portus image: https://github.com/EugenMayer/docker-rancher-extra-catalogs/blob/master/templates/registry-slim/11/docker-compose.yml
which has this entrypoint https://github.com/EugenMayer/docker-image-portus/blob/master/build/startup.sh
So you see, the entrypoint strategy is very common and very powerful, and I would suggest going this route whenever you can.
c) Derived images
Maybe for "completeness", there is the derived-image strategy: you have your base image called "myapp", and for installation X you create a new image
FROM myapp
COPY my.ini /etc/mysql/my.ini
COPY application.yml /var/app/config/application.yml
And call this image myapp:x. The obvious issue with this is that you end up having a lot of images; on the other hand, compared to a), it is much more portable.
Hope that helps
Just run the same image as many times as needed. New containers will be created, and they can then be started and stopped, each one saving its own configuration. For your convenience, it is better to give each of your containers a name with "--name".
For instance:
docker run --name MyContainer1 <same image id>
docker run --name MyContainer2 <same image id>
docker run --name MyContainer3 <same image id>
That's it.
$ docker ps
CONTAINER ID IMAGE CREATED STATUS NAMES
a7e789711e62 67759a80360c 12 hours ago Up 2 minutes MyContainer1
87ae9c5c3f84 67759a80360c 12 hours ago Up About a minute MyContainer2
c1524520d864 67759a80360c 12 hours ago Up About a minute MyContainer3
After that you have your containers created forever and you can start and stop them like VMs.
docker start MyContainer1
Each container runs with the same RO image but with a RW container-specific filesystem layer. The result is that each container can have its own files that are distinct from every other container.
You can pass in configuration on the CLI, as an environment variable, or as a unique volume mount. It's a very standard use case for Docker.
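For example (my-web-app, APP_ENV, and the config paths are placeholders; the app inside the image is assumed to read them):
# same image, different environment variables
docker run -d --name app-staging -e APP_ENV=staging my-web-app
docker run -d --name app-prod -e APP_ENV=production my-web-app
# same image, different config file mounted into each container
docker run -d --name app-a -v /configs/a.yml:/etc/app/config.yml my-web-app
docker run -d --name app-b -v /configs/b.yml:/etc/app/config.yml my-web-app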
My understanding is that Docker creates an image layer at every stage of a dockerfile.
If I have X containers running on the same machine (where X >=2) and every container has a common underlying image layer (ie. debian), will docker keep only one copy of the base image on that machine, or does it have multiple copies for each container?
Is there a point this breaks down, or is it true for every layer in the dockerfile?
How does this work?
Does Kubernetes affect this in any way?
Docker's Understand images, containers, and storage drivers documentation details most of this.
From Docker 1.10 onwards, all the layers that make up an image have an SHA256 secure content hash associated with them at build time. This hash is consistent across hosts and builds, as long as the content of the layer is the same.
If any number of images share a layer, only one copy of that layer will be stored and used by all images on that instance of the Docker engine.
A tag like debian can refer to multiple SHA256 image hashes over time as new releases come out. Two images that are built with FROM debian don't necessarily share layers, only if the SHA256 hashes match.
Anything that runs the Docker Engine underneath will use this storage setup.
This sharing also works in the Docker Registry (>2.2 for the best results). If you were to push images with layers that already exist on that registry, the existing layers are skipped. Same with pulling layers to your local engine.
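One way to observe the sharing on your own engine (a sketch; child-a is a placeholder tag for an image built FROM debian):
docker pull debian
docker build -t child-a .   # a Dockerfile starting with FROM debian
docker history child-a      # lists the inherited debian layers plus your new ones
docker system df -v         # the SHARED SIZE column shows space reused across images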