Kubernetes can't delete docker images from other nodes - docker

I need to understand how to delete a Docker image using Kubernetes. I'm using Jenkins for pipeline automation, and Jenkins only has access to the master node, not the slaves. When I run the deploy everything works fine: the deploy makes the slaves pull from the repository and everything starts up.
But if Jenkins kills the deploy and tries to remove the image, it only deletes the image on the master node, not on the other slaves, and I don't want to delete the images manually.
Is there a way to delete images on slave nodes from the master node?

Kubernetes is responsible for deleting images: it is the kubelet that performs garbage collection on nodes, including image deletion, and it is customizable.
Deleting images by external methods is not recommended, as these tools can potentially break the behavior of the kubelet by removing containers it expects to exist.
The kubelet checks whether the storage available for images is more than 85% full, and in that case deletes some images to make room. The min and max thresholds can be customized in the file
/var/lib/kubelet/config.yaml
imageGCHighThresholdPercent is the percent of disk usage after which image garbage collection is always run.
imageGCLowThresholdPercent is the percent of disk usage before which image garbage collection is never run, i.e. the lowest disk usage to garbage collect to.
The default values are:
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
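For example, a /var/lib/kubelet/config.yaml that tightens both thresholds could look like this (the percentages here are illustrative, not recommendations):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 70  # GC always runs above 70% disk usage
imageGCLowThresholdPercent: 60   # GC frees images until usage drops below 60%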

The kubelet does eventually garbage collect old unused images when disk usage gets beyond 85% (the default setting). Beyond that, Kubernetes doesn't provide a remote interface for deleting images.
Doing it yourself would require something like SSHing into each node, or running a DaemonSet that has permissions to manage the underlying container runtime (a sketch follows below). See Mr.Axe's answer for the Docker/SSH variant.
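As a sketch of the DaemonSet variant (assuming a Docker-based container runtime; the image tag and names below are hypothetical), you could mount each node's Docker socket and prune dangling images periodically:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-pruner
spec:
  selector:
    matchLabels:
      app: image-pruner
  template:
    metadata:
      labels:
        app: image-pruner
    spec:
      containers:
      - name: pruner
        image: docker:cli
        # prune dangling images on every node once an hour
        command: ["sh", "-c", "while true; do docker image prune -f; sleep 3600; done"]
        volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock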

From the master node, you can write a shell script or Ansible playbook to SSH into the slave nodes and remove dangling images:
docker images -f "dangling=true" -q | xargs --no-run-if-empty docker rmi
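For instance, a minimal sketch that loops over the slave nodes (the hostnames node1 and node2 are placeholders for your own):
for node in node1 node2; do
  ssh "$node" 'docker images -f "dangling=true" -q | xargs --no-run-if-empty docker rmi'
done
The single quotes make the pipeline run on each remote node rather than on the master.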

Related

Docker container image vs container [duplicate]


Is there any way to configure Skaffold to build images on my local Docker daemon and not on the minikube's one?

I use minikube with the Docker driver on Linux. For a manual workflow I can enable the registry addon in minikube, push my images there, and refer to them in the deployment config file simply as localhost:5000/anything. They are then pulled into minikube's environment by its Docker daemon and the deployments start successfully. As a result, all the base images are saved only on my local device (since I build my images using my local Docker daemon), and minikube's environment gets cluttered only by the images its Docker daemon pulls.
Can I implement the same workflow when using Skaffold? By default Skaffold uses minikube's environment both for building images and for running containers out of them, and it also duplicates (sometimes even triplicates) my images inside minikube (I don't know why).
Skaffold builds directly to Minikube's Docker daemon as an optimization so as to avoid the additional retrieve-and-unpack required when pushing to a registry.
I believe your duplicates are like the following:
$ (eval $(minikube docker-env); docker images node-example)
REPOSITORY     TAG                                                                IMAGE ID       CREATED      SIZE
node-example   bb9830940d8803b9ad60dfe92d4abcbaf3eb8701c5672c785ee0189178d815bf   bb9830940d88   3 days ago   92.9MB
node-example   v1.17.1-38-g1c6517887                                              bb9830940d88   3 days ago   92.9MB
Although these images have different tags, those tags are just pointers to the same Image ID so there is a single image being retained.
Skaffold normally cleans up left-over images from previous runs. So you shouldn't see the minikube daemon's space continuously growing.
An aside: even if those Image IDs were different, an image is made up of multiple layers, and those layers are shared across images. So Docker's reported image sizes may not match the actual disk space consumed.
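If you want to see how much disk space is actually consumed inside minikube once layer sharing is taken into account, docker system df reports totals with shared data deduplicated, following the same pattern as above:
$ (eval $(minikube docker-env); docker system df)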

Docker images disappearing over time

I loaded some Docker images by running
docker load --input <file>
I can then see these images when executing
docker image ls
After a while, images start disappearing; every few minutes there are fewer and fewer images listed. I have not run any of the images yet. What could be the cause of this issue?
EDIT: This issue arises with Docker inside the minikube VM.
Since you've mentioned that the Docker daemon runs inside the minikube VM, I assume you might be hitting the Kubernetes garbage collection mechanism, which keeps system utilization at an appropriate level and reduces the number of unused containers (built from images) according to specific thresholds.
These eviction thresholds are fully managed by the kubelet, the Kubernetes node agent, which cleans up unused images and containers according to the parameters (flags) propagated in the kubelet configuration file.
Therefore, you can investigate the eviction behavior by looking at the relevant thresholds in the kubelet config file, which the minikube bootstrapper generates at /var/lib/kubelet/config.yaml.
As mentioned in mk_sta's answer, to fix the issue you need to either:
Create or edit /var/lib/kubelet/config.yaml with
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  imagefs.available: "5%"
(the default value is 15%), then restart minikube:
minikube stop
minikube start --extra-config=kubelet.config=/var/lib/kubelet/config.yaml
Or free more space on the Docker partition.
https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/#create-the-config-file
https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#hard-eviction-thresholds

How to remove an image across all nodes in a Docker swarm?

On the local host, I can remove an image using either docker image rm or docker rmi.
What if my current host is a manager node in a Docker swarm and I wish to cascade this operation throughout the swarm?
When I first created the Docker service, the image was pulled down on each node in the swarm. Removing the service did not remove the image and all nodes retain a copy of the image.
It feels natural that if there's a way to "push" an image out to all the nodes, there should be an equally natural way to remove them without having to SSH into every single machine :'( Plus, this is a real problem: sooner or later the nodes are bound to run out of disk space!
AFAIK there is no such option as of now; each node is responsible for its own cleanup. There is a command, docker system prune -f, that you can use to remove stopped containers, dangling images, and unused networks.
Tagged images, however, can only be deleted with docker rmi. See the issue below:
https://github.com/moby/moby/issues/24079
This is doable. Create host entries in /etc/hosts on your manager node, like this
1.1.1.1 node01
1.1.1.2 node02
1.1.1.3 node03
Then run
for i in {01..03}; do ssh node$i 'docker rmi $(docker images -q)'; done
The single quotes matter: they make $(docker images -q) expand on each remote node instead of on the manager.
Warning: this command will remove all images on all of the nodes listed in /etc/hosts.
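If you only need to remove one particular image rather than wiping everything, the same pattern works with an explicit name (my-image:tag is a placeholder):
for i in {01..03}; do ssh node$i 'docker rmi my-image:tag'; done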

What is the difference between a Docker image and a container?

When using Docker, we start with a base image. We boot it up, create changes and those changes are saved in layers forming another image.
So eventually I have an image for my PostgreSQL instance and an image for my web application, changes to which keep on being persisted.
What is a container?
An instance of an image is called a container. You have an image, which is a set of layers as you describe. If you start this image, you have a running container of this image. You can have many running containers of the same image.
You can see all your images with docker images whereas you can see your running containers with docker ps (and you can see all containers with docker ps -a).
So a running instance of an image is a container.
From my article on Automating Docker Deployments (archived):
Docker Images vs. Containers
In Dockerland, there are images and there are containers. The two are closely related, but distinct. For me, grasping this dichotomy has clarified Docker immensely.
What's an Image?
An image is an inert, immutable file that's essentially a snapshot of a container. Images are created with the build command, and they'll produce a container when started with run. Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite large, images are designed to be composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network.
Local images can be listed by running docker images:
REPOSITORY   TAG      IMAGE ID       CREATED        VIRTUAL SIZE
ubuntu       13.10    5e019ab7bf6d   2 months ago   180 MB
ubuntu       14.04    99ec81b80c55   2 months ago   266 MB
ubuntu       latest   99ec81b80c55   2 months ago   266 MB
ubuntu       trusty   99ec81b80c55   2 months ago   266 MB
<none>       <none>   4ab0d9120985   3 months ago   486.5 MB
Some things to note:
IMAGE ID is the first 12 characters of the true identifier for an image. You can create many tags of a given image, but their IDs will all be the same (as above).
VIRTUAL SIZE is virtual because it's adding up the sizes of all the distinct underlying layers. This means that the sum of all the values in that column is probably much larger than the disk space used by all of those images.
The value in the REPOSITORY column comes from the -t flag of the docker build command, or from docker tag-ing an existing image. You're free to tag images using a nomenclature that makes sense to you, but know that docker will use the tag as the registry location in a docker push or docker pull.
The full form of a tag is [REGISTRYHOST/][USERNAME/]NAME[:TAG]. For ubuntu above, REGISTRYHOST is inferred to be registry.hub.docker.com. So if you plan on storing your image called my-application in a registry at docker.example.com, you should tag that image docker.example.com/my-application.
The TAG column is just the [:TAG] part of the full tag. This is unfortunate terminology.
The latest tag is not magical, it's simply the default tag when you don't specify a tag.
You can have untagged images only identifiable by their IMAGE IDs. These will get the <none> TAG and REPOSITORY. It's easy to forget about them.
More information on images is available from the Docker documentation and glossary.
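For example, building and pushing to the hypothetical registry mentioned above would look like:
docker build -t docker.example.com/my-application .
docker push docker.example.com/my-application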
What's a container?
To use a programming metaphor, if an image is a class, then a container is an instance of a class—a runtime object. Containers are hopefully why you're using Docker; they're lightweight and portable encapsulations of an environment in which to run applications.
View local running containers with docker ps:
CONTAINER ID   IMAGE                            COMMAND                CREATED        STATUS        PORTS                    NAMES
f2ff1af05450   samalba/docker-registry:latest   /bin/sh -c 'exec doc   4 months ago   Up 12 weeks   0.0.0.0:5000->5000/tcp   docker-registry
Here I'm running a dockerized version of the docker registry, so that I have a private place to store my images. Again, some things to note:
Like IMAGE ID, CONTAINER ID is the true identifier for the container. It has the same form, but it identifies a different kind of object.
docker ps only outputs running containers. You can view all containers (running or stopped) with docker ps -a.
NAMES can be used to identify a started container via the --name flag.
How to avoid image and container buildup
One of my early frustrations with Docker was the seemingly constant buildup of untagged images and stopped containers. On a handful of occasions this buildup resulted in maxed out hard drives slowing down my laptop or halting my automated build pipeline. Talk about "containers everywhere"!
We can remove all untagged images by combining docker rmi with the recent dangling=true query:
docker images -q --filter "dangling=true" | xargs docker rmi
Docker won't be able to remove images that are behind existing containers, so you may have to remove stopped containers with docker rm first:
docker rm `docker ps --no-trunc -aq`
These are known pain points with Docker and may be addressed in future releases. However, with a clear understanding of images and containers, these situations can be avoided with a couple of practices:
Always remove a useless, stopped container with docker rm [CONTAINER_ID].
Always remove the image behind a useless, stopped container with docker rmi [IMAGE_ID].
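Newer Docker releases also bundle these two practices into dedicated cleanup commands:
docker container prune   # removes all stopped containers
docker image prune       # removes all dangling images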
While it's simplest to think of a container as a running image, this isn't quite accurate.
An image is really a template that can be turned into a container. To turn an image into a container, the Docker engine takes the image, adds a read-write filesystem on top and initialises various settings including network ports, container name, ID and resource limits. A running container has a currently executing process, but a container can also be stopped (or exited in Docker's terminology). An exited container is not the same as an image, as it can be restarted and will retain its settings and any filesystem changes.
Maybe explaining the whole workflow can help.
Everything starts with the Dockerfile. The Dockerfile is the source code of the image.
Once the Dockerfile is created, you build it to create the image of the container. The image is just the "compiled version" of the "source code" which is the Dockerfile.
Once you have the image of the container, you should redistribute it using the registry. The registry is like a Git repository -- you can push and pull images.
Next, you can use the image to run containers. A running container is very similar, in many aspects, to a virtual machine (but without the hypervisor).
Dockerfile → (Build) → Image → (Run) → Container.
Dockerfile: contains a set of Docker instructions that provisions your operating system the way you like, and installs/configures all your software.
Image: compiled Dockerfile. Saves you time from rebuilding the Dockerfile every time you need to run a container. And it's a way to hide your provision code.
Container: the virtual operating system itself. You can ssh into it and run any commands you wish, as if it's a real environment. You can run 1000+ containers from the same Image.
Workflow
Here is the end-to-end workflow showing the various commands and their associated inputs and outputs. That should clarify the relationship between an image and a container.
+------------+  docker build  +--------------+  docker run -dt  +-----------+  docker exec -it  +------+
| Dockerfile | -------------> |    Image     | ---------------> | Container | ----------------> | Bash |
+------------+                +--------------+                  +-----------+                   +------+
                                      ^
                                      | docker pull
                                      |
                              +--------------+
                              |   Registry   |
                              +--------------+
To list the images you could run, execute:
docker image ls
To list the containers you could execute commands on:
docker ps
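As a concrete run-through of the diagram (my-app and my-app-1 are placeholder names, assuming a Dockerfile in the current directory whose image includes bash):
docker build -t my-app .                # Dockerfile -> Image
docker run -dt --name my-app-1 my-app   # Image -> Container
docker exec -it my-app-1 bash           # Container -> Bash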
I couldn't understand the concept of image and layer in spite of reading all the questions here, and then eventually stumbled upon this excellent documentation from Docker (duh!).
The example there is really the key to understanding the whole concept. It is a lengthy post, so I am summarising the key points that need to be grasped to get clarity.
Image: A Docker image is built up from a series of read-only layers
Layer: Each layer represents an instruction in the image’s Dockerfile.
Example: The below Dockerfile contains four commands, each of which creates a layer.
FROM ubuntu:15.04
COPY . /app
RUN make /app
CMD python /app/app.py
Importantly, each layer is only a set of differences from the layer before it.
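You can see those per-instruction layers by building the Dockerfile above and inspecting its history (the tag layers-demo is a placeholder):
docker build -t layers-demo .
docker history layers-demo   # one row per layer, newest first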
Container: When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the “container layer”. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.
Hence, the major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.
Understanding images and containers from a size-on-disk perspective
To view the approximate size of a running container, you can use the docker ps -s command. You get size and virtual size as two of the outputs:
Size: the amount of data (on disk) that is used for the writable layer of each container
Virtual Size: the amount of data used for the read-only image data used by the container. Multiple containers may share some or all read-only image data, so these figures are not additive; i.e. you can't add up all the virtual sizes to calculate how much disk space the images use.
Another important concept is the copy-on-write strategy
If a file or directory exists in a lower layer within the image, and another layer (including the writable layer) needs read access to it, it just uses the existing file. The first time another layer needs to modify the file (when building the image or running the container), the file is copied into that layer and modified.
I hope that helps someone else like me.
Simply said, if an image is a class, then a container is an instance of that class, a runtime object.
A container is just an executable binary that is to be run by the host OS under a set of restrictions that are preset using an application (e.g., Docker) that knows how to tell the OS which restrictions to apply.
The typical restrictions are process-isolation related, security related (like using SELinux protection) and system-resource related (memory, disk, CPU, and networking).
Until recently, only kernels in Unix-based systems supported the ability to run executables under strict restrictions. That's why most container talk today involves mostly Linux or other Unix distributions.
Docker is one of those applications that knows how to tell the OS (Linux mostly) what restrictions to run an executable under. The executable is contained in the Docker image, which is just a tarfile. That executable is usually a stripped-down version of a Linux distribution's User space (Ubuntu, CentOS, Debian, etc.) preconfigured to run one or more applications within.
Though most people use a Linux base as the executable, it can be any other binary application as long as the host OS's kernel can run it (see creating a simple base image using scratch). Whether the binary in the Docker image is an OS User space or simply an application, to the OS host it is just another process, a contained process ruled by preset OS boundaries.
Other applications that, like Docker, can tell the host OS which boundaries to apply to a process while it is running, include LXC, libvirt, and systemd. Docker used to use these applications to indirectly interact with the Linux OS, but now Docker interacts directly with Linux using its own library called "libcontainer".
So containers are just processes running in a restricted mode, similar to what chroot used to do.
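You can observe this on a Linux host with the default runtime: a process started in a container shows up in the host's ordinary process table (the names below are placeholders):
docker run -d --name sleeper alpine sleep 300
ps aux | grep 'sleep 300'   # the container's process, visible from the host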
IMO, what sets Docker apart from any other container technology is its repository (Docker Hub) and its management tools, which make working with containers extremely easy.
See Docker (software).
The core concept of Docker is to make it easy to create "machines", which in this case can be considered containers. The container also aids reusability, allowing you to create and drop containers with ease.
Images depict the state of a container at every point in time. So the basic workflow is:
create an image
start a container
make changes to the container
save the container back as an image (see the docker commit sketch below)
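In commands, that last step is docker commit (my-snapshot is a placeholder name):
docker run -it ubuntu bash                 # start a container and make changes inside it
docker commit [CONTAINER_ID] my-snapshot   # save the container back as a new image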
As many answers have pointed out: you build a Dockerfile to get an image, and you run an image to get a container.
However, the following steps helped me get a better feel for what a Docker image and container are:
1) Build Dockerfile:
docker build -t my_image dir_with_dockerfile
2) Save the image to .tar file
docker save -o my_file.tar my_image_id
my_file.tar will store the image. Open it with tar -xvf my_file.tar, and you will see all the layers. If you dive deeper into each layer you can see what changes were added in that layer. (They should be pretty close to the commands in the Dockerfile.)
3) To take a look inside of a container, you can do:
sudo docker run -it my_image bash
and you can see that it is very much like an OS.
It may help to think of an image as a "snapshot" of a container.
You can make images from a container (new "snapshots"), and you can also start new containers from an image (instantiate the "snapshot"). For example, you can instantiate a new container from a base image, run some commands in the container, and then "snapshot" that as a new image. Then you can instantiate 100 containers from that new image.
Other things to consider:
An image is made of layers, and layers are snapshot "diffs"; when you push an image, only the "diff" is sent to the registry.
A Dockerfile defines some commands on top of a base image, that creates new layers ("diffs") that result in a new image ("snapshot").
Containers are always instantiated from images.
Image tags are not just tags. They are the image's "full name" ("repository:tag"). If the same image has multiple names, it shows multiple times when doing docker images.
An image is the equivalent of a class definition in OOP, and layers are the different methods and properties of that class.
A container is the actual instantiation of the image, just like an object is an instantiation or an instance of a class.
I think it is better to explain from the beginning.
Suppose you run the command docker run hello-world. What happens?
It calls the Docker CLI, which is responsible for taking Docker commands and transforming them into calls to the Docker server. As soon as the Docker server gets a command to run an image, it checks whether the image cache holds an image with that name.
Suppose hello-world does not exist there. The Docker server goes to Docker Hub (Docker Hub is just a free repository of images) and asks: hey Hub, do you have an image called hello-world?
The Hub responds: yes, I do. Then give it to me, please. And the download process starts. As soon as the Docker image is downloaded, the Docker server puts it in the image cache.
So before we explain what Docker images and Docker containers are, let's start with an introduction about the operating system on your computer and how it runs software.
When you run, for example, Chrome on your computer, it calls the operating system, the operating system calls the kernel and asks, hey, I want to run this program, and the kernel runs files from your hard disk.
Now imagine that you have two programs, Chrome and Node.js. Chrome requires Python version 2 to run and Node.js requires Python version 3 to run. If you have only installed Python v2 on your computer, only Chrome will run.
To make both cases work, you need to use an operating system feature known as namespacing. A namespace is a feature which gives you the opportunity to isolate processes, hard drive, network, users, hostnames, and so on.
So, when we talk about an image we actually talk about a file system snapshot. An image is a physical file which contains directions and metadata to build a specific container. The container itself is an instance of an image; it isolates the hard drive using namespacing which is available only for this container. So a container is a process or set of processes which groups different resources assigned to it.
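Namespacing is not Docker-specific; on a Linux host you can experiment with it directly using the unshare utility:
sudo unshare --pid --fork --mount-proc bash   # start a shell in a new PID namespace
ps aux                                        # inside it, only this shell and ps are visible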
A Docker image packs up the application and environment required by the application to run, and a container is a running instance of the image.
Images are the packing part of Docker, analogous to "source code" or a "program". Containers are the execution part of Docker, analogous to a "process".
In the question, only the "program" part is referred to and that's the image. The "running" part of Docker is the container. When a container is run and changes are made, it's as if the process makes a change in its own source code and saves it as the new image.
In programming terms:
An image is the source code.
When the source code is compiled and built, it is called an application.
Similarly, when an instance is created for the image, it is called a container.
I would like to fill the missing part here between docker images and containers. Docker uses a union file system (UFS) for containers, which allows multiple filesystems to be mounted in a hierarchy and to appear as a single filesystem. The filesystem from the image has been mounted as a read-only layer, and any changes to the running container are made to a read-write layer mounted on top of this. Because of this, Docker only has to look at the topmost read-write layer to find the changes made to the running system.
I would state it with the following analogy:
+-----------------------------+-------+-----------+
| Domain                      | Meta  | Concrete  |
+-----------------------------+-------+-----------+
| Docker                      | Image | Container |
| Object oriented programming | Class | Object    |
+-----------------------------+-------+-----------+
Docker Client, Server, Machine, Images, Hub, and Compose are all projects and tools, pieces of software that come together to form a platform, an ecosystem around creating and running something called containers. If you run the command docker run redis, the Docker CLI reaches out to Docker Hub and downloads a single file called an image.
Docker Image:
An image is a single file containing all the dependencies and all the configuration required to run one very specific program; for example, redis is the image you just downloaded (by running docker run redis).
This is a single file that gets stored on your hard drive, and at some point in time you can use this image to create something called a container.
A container is an instance of an image, and you can think of it as a running program with its own isolated set of hardware resources: its own little space of memory, its own little space of networking, and its own little space of hard drive as well.
Now let's examine what happens when you run the command below:
sudo docker run hello-world
The command above starts up the Docker client, or Docker CLI. The Docker CLI is in charge of taking commands from you, doing a little bit of processing on them, and then communicating them over to something called the Docker server, which is in charge of the heavy lifting. When we ran the command docker run hello-world,
that meant we wanted to start up a new container using the image with the name hello-world; the hello-world image has a tiny little program inside of it whose sole job is to print out the message that you see in the terminal.
Now, when we ran that command and it was issued over to the Docker server, a series of actions occurred very quickly in the background. The Docker server saw that we were trying to start up a new container using an image called hello-world.
The first thing the Docker server did was check whether it already had a local copy, a copy on your personal machine, of the hello-world image or that hello-world file. So the Docker server looked into something called the image cache.
Because you and I just installed Docker on our personal computers, that image cache is currently empty; we have no images that have been downloaded before.
So, because the image cache was empty, the Docker server decided to reach out to a free service called Docker Hub. Docker Hub is a repository of free public images that you can download and run on your personal computer. The Docker server reached out to Docker Hub, downloaded the hello-world file, and stored it on your computer in the image cache, where it can be re-run at some point in the future very quickly without having to re-download it from Docker Hub.
After that, the Docker server used it to create an instance of a container; and we know that a container is an instance of an image whose sole purpose is to run one very specific program. So the Docker server essentially took that image file from the image cache, loaded it up into memory, created a container out of it, and then ran a single program inside of it. And that single program's purpose was to print out the message that you see.
What a container is:
First of all an image is a blueprint for how to create a container.
A container is a process or a set of processes that has a grouping of resources specifically assigned to it. Any time we think about a container, we have some running process that sends a system call to the kernel; the kernel looks at that incoming system call and directs it to a very specific portion of the hard drive, the RAM, the CPU, or whatever else it might need, and a portion of each of these resources is made available to that singular process.
An image is to a class as a container is to an object.
A container is an instance of an image as an object is an instance of a class.
In Docker, an image is an immutable file that holds the source code and information needed for a Docker app to run. It can exist independently of a container.
Docker containers are virtualized environments created during runtime, and they require images to run. The Docker website has a diagram that shows this relationship:
Just as an object is an instance of a class in an object-oriented programming language, so a Docker container is an instance of a Docker image.
For a rough programming analogy, you can think of Docker as having an abstract ImageFactory which holds image factories that come from a store.
Then, once you want to create an app out of that ImageFactory, you will have a new container, and you can modify it as you want. The DotNetImageFactory will be immutable, because it acts as an abstract factory class, where it only delivers the instances you desire:
IContainer newDotNetApp = ImageFactory.DotNetImageFactory.CreateNew(appOptions);
newDotNetApp.ChangeDescription("I am making changes on this instance");
newDotNetApp.Run();
In short:
A container is a (virtual) division in a kernel which shares a common OS and runs an image (a Docker image).
A container is a self-sustaining application that has the packages and all the necessary dependencies together to run the code.
A Docker container is running an instance of an image. You can relate an image with a program and a container with a process :)
A Dockerfile is like your Bash script that produces a tarball (the Docker image).
A Docker container is like an extracted version of the tarball. You can have as many copies as you like in different folders (the containers).
An image is the blueprint from which containers (running instances) are built.
Long story short.
Docker Images:
The file system and configuration (read-only) of an application, used to create containers.
Docker Containers:
The major difference between a container and an image is the top writable layer. Containers are running instances of Docker images with top writable layer. Containers run the actual applications. A container includes an application and all of its dependencies. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.
Other important terms to notice:
Docker daemon:
The background service running on the host that manages the building, running and distributing Docker containers.
Docker client:
The command line tool that allows the user to interact with the Docker daemon.
Docker Store:
The Store is, among other things, a registry of Docker images. You can think of the registry as a directory of all available Docker images.
A picture from this blog post is worth a thousand words.
Summary:
Pull an image from Docker Hub or build from a Dockerfile => gives a Docker image (not editable).
Run the image (docker run image_name:tag_name) => gives a running image, i.e. a container (editable).
An image is like a class and a container is like an object of that class, so you can have an infinite number of containers behaving like the image. A class is a blueprint which isn't doing anything on its own; you have to create instances of the object in your program to do anything meaningful. And so is the case with an image and a container: you define your image and then create containers running that image. It isn't exactly similar, because an object is an instance of a class, whereas a container is something like an empty, hollow place and you use the image to build up a running host with exactly what the image says.
An image, or a container image, is a file which contains your application code, application runtime, configurations, and dependent libraries. The image basically wraps all of these into a single, secure, immutable unit. The appropriate docker command is used to build the image. The image has an image ID and an image tag. The tag is usually in the format <docker-user-name>/image-name:tag.
When you start running your application using the image, you actually start a container. So your container is a sandbox in which you run your image. The Docker software is used to manage both the image and the container.
An image is a secured package which contains your application artifact, libraries, configurations, and application runtime. A container is the runtime representation of your image.
