Why are the Atomic Host CentOS automatic partitions like they are? - docker

I've been tasked with working with the CentOS Atomic Host distribution, which comes preinstalled with Docker. I can pull from a hosted registry without a problem (though I don't know where the image ends up being stored), but what I'd really like to do is just scp/sftp an image over to my client PC and "docker load" the image.
If I choose auto installation, I get a "cah-docker--pool_tdata" that's 48.9 GB and a "cah-docker--pool_tmeta", and /dev/mapper/cah-root is only 3 GB, which can't hold an image.
Where should I be transferring the files to and could anyone give me a rundown on why these partitions are like this? I couldn't find anything in the docs about it.

Atomic Host's only purpose is to run Docker. From the Project Atomic website:
Atomic Host is a lightweight, immutable platform, designed with the sole purpose of running containerized applications.
The only place where such a host needs a lot of disk space is the Docker pool. Therefore, the installer gives the pool as much disk space as possible and keeps the rest of the partitions very small.
To solve your problem, try to pipe the output of docker save directly into docker load. From the host where you want to copy the image from:
docker save <image> | ssh <target_host> 'docker load'
This can be improved by following this answer by kolypto:
docker save <image> | bzip2 | pv | \
ssh <target_host> 'bunzip2 | docker load'
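If you prefer to transfer an actual file (the scp/sftp approach from the question), you can save the image to a tarball first and load it on the other side. A minimal sketch, assuming SSH access to the target host; the image name and paths are placeholders, and you should pick a destination directory with enough free space:
docker save -o myimage.tar <image>
scp myimage.tar <target_host>:/var/tmp/
ssh <target_host> 'docker load -i /var/tmp/myimage.tar'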

Related

Docker container image vs container [duplicate]

When using Docker, we start with a base image. We boot it up, create changes and those changes are saved in layers forming another image.
So eventually I have an image for my PostgreSQL instance and an image for my web application, changes to which keep on being persisted.
What is a container?
An instance of an image is called a container. You have an image, which is a set of layers as you describe. If you start this image, you have a running container of this image. You can have many running containers of the same image.
You can see all your images with docker images whereas you can see your running containers with docker ps (and you can see all containers with docker ps -a).
So a running instance of an image is a container.
From my article on Automating Docker Deployments (archived):
Docker Images vs. Containers
In Dockerland, there are images and there are containers. The two are closely related, but distinct. For me, grasping this dichotomy has clarified Docker immensely.
What's an Image?
An image is an inert, immutable file that's essentially a snapshot of a container. Images are created with the build command, and they'll produce a container when started with run. Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite large, images are designed to be composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network.
Local images can be listed by running docker images:
REPOSITORY    TAG       IMAGE ID       CREATED        VIRTUAL SIZE
ubuntu        13.10     5e019ab7bf6d   2 months ago   180 MB
ubuntu        14.04     99ec81b80c55   2 months ago   266 MB
ubuntu        latest    99ec81b80c55   2 months ago   266 MB
ubuntu        trusty    99ec81b80c55   2 months ago   266 MB
<none>        <none>    4ab0d9120985   3 months ago   486.5 MB
Some things to note:
IMAGE ID is the first 12 characters of the true identifier for an image. You can create many tags of a given image, but their IDs will all be the same (as above).
VIRTUAL SIZE is virtual because it's adding up the sizes of all the distinct underlying layers. This means that the sum of all the values in that column is probably much larger than the disk space used by all of those images.
The value in the REPOSITORY column comes from the -t flag of the docker build command, or from docker tag-ing an existing image. You're free to tag images using a nomenclature that makes sense to you, but know that docker will use the tag as the registry location in a docker push or docker pull.
The full form of a tag is [REGISTRYHOST/][USERNAME/]NAME[:TAG]. For ubuntu above, REGISTRYHOST is inferred to be registry.hub.docker.com. So if you plan on storing your image called my-application in a registry at docker.example.com, you should tag that image docker.example.com/my-application.
The TAG column is just the [:TAG] part of the full tag. This is unfortunate terminology.
The latest tag is not magical, it's simply the default tag when you don't specify a tag.
You can have untagged images only identifiable by their IMAGE IDs. These will get the <none> TAG and REPOSITORY. It's easy to forget about them.
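To make the tag format above concrete, tagging an existing local image for the hypothetical docker.example.com registry and pushing it might look like this:
docker tag my-application docker.example.com/my-application:1.0
docker push docker.example.com/my-application:1.0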
More information on images is available from the Docker documentation and glossary.
What's a container?
To use a programming metaphor, if an image is a class, then a container is an instance of a class—a runtime object. Containers are hopefully why you're using Docker; they're lightweight and portable encapsulations of an environment in which to run applications.
View local running containers with docker ps:
CONTAINER ID   IMAGE                            COMMAND                CREATED        STATUS        PORTS                    NAMES
f2ff1af05450   samalba/docker-registry:latest   /bin/sh -c 'exec doc   4 months ago   Up 12 weeks   0.0.0.0:5000->5000/tcp   docker-registry
Here I'm running a dockerized version of the docker registry, so that I have a private place to store my images. Again, some things to note:
Like IMAGE ID, CONTAINER ID is the true identifier for the container. It has the same form, but it identifies a different kind of object.
docker ps only outputs running containers. You can view all containers (running or stopped) with docker ps -a.
NAMES can be used to identify a started container via the --name flag.
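As an illustration of --name (using the official registry:2 image rather than the one shown above), a named container can be started and then managed by that name:
docker run -d --name docker-registry -p 5000:5000 registry:2
docker logs docker-registry
docker stop docker-registry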
How to avoid image and container buildup
One of my early frustrations with Docker was the seemingly constant buildup of untagged images and stopped containers. On a handful of occasions this buildup resulted in maxed out hard drives slowing down my laptop or halting my automated build pipeline. Talk about "containers everywhere"!
We can remove all untagged images by combining docker rmi with the recent dangling=true query:
docker images -q --filter "dangling=true" | xargs docker rmi
Docker won't be able to remove images that are behind existing containers, so you may have to remove stopped containers with docker rm first:
docker rm `docker ps --no-trunc -aq`
These are known pain points with Docker and may be addressed in future releases. However, with a clear understanding of images and containers, these situations can be avoided with a couple of practices:
Always remove a useless, stopped container with docker rm [CONTAINER_ID].
Always remove the image behind a useless, stopped container with docker rmi [IMAGE_ID].
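On newer Docker releases (1.13 and later), the same cleanup is also available through the built-in prune commands:
docker container prune    # remove all stopped containers
docker image prune        # remove dangling (untagged) images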
While it's simplest to think of a container as a running image, this isn't quite accurate.
An image is really a template that can be turned into a container. To turn an image into a container, the Docker engine takes the image, adds a read-write filesystem on top and initialises various settings including network ports, container name, ID and resource limits. A running container has a currently executing process, but a container can also be stopped (or exited in Docker's terminology). An exited container is not the same as an image, as it can be restarted and will retain its settings and any filesystem changes.
Maybe explaining the whole workflow can help.
Everything starts with the Dockerfile. The Dockerfile is the source code of the image.
Once the Dockerfile is created, you build it to create the image of the container. The image is just the "compiled version" of the "source code" which is the Dockerfile.
Once you have the image of the container, you should redistribute it using the registry. The registry is like a Git repository -- you can push and pull images.
Next, you can use the image to run containers. A running container is very similar, in many aspects, to a virtual machine (but without the hypervisor).
Dockerfile → (Build) → Image → (Run) → Container.
Dockerfile: contains a set of Docker instructions that provisions your operating system the way you like and installs/configures all your software.
Image: compiled Dockerfile. Saves you time from rebuilding the Dockerfile every time you need to run a container. And it's a way to hide your provisioning code.
Container: the virtual operating system itself. You can ssh into it and run any commands you wish, as if it's a real environment. You can run 1000+ containers from the same Image.
Workflow
Here is the end-to-end workflow showing the various commands and their associated inputs and outputs. That should clarify the relationship between an image and a container.
+------------+  docker build  +--------------+  docker run -dt  +-----------+  docker exec -it  +------+
| Dockerfile | -------------> |    Image     | ---------------> | Container | ----------------> | Bash |
+------------+                +--------------+                  +-----------+                   +------+
                                     ^
                                     |  docker pull
                                     |
                              +--------------+
                              |   Registry   |
                              +--------------+
To list the images you could run, execute:
docker image ls
To list the containers you could execute commands on:
docker ps
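Putting the diagram above into concrete commands, a minimal end-to-end run might look like the following; the image and container names are made up for illustration:
docker build -t myapp:1.0 .                 # Dockerfile -> Image
docker run -dt --name myapp1 myapp:1.0      # Image -> Container
docker exec -it myapp1 bash                 # shell inside the running container
docker image ls                             # list images
docker ps                                   # list running containers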
I couldn't understand the concept of image and layer in spite of reading all the questions here and then eventually stumbled upon this excellent documentation from Docker (duh!).
The example there is really the key to understand the whole concept. It is a lengthy post, so I am summarising the key points that need to be really grasped to get clarity.
Image: A Docker image is built up from a series of read-only layers
Layer: Each layer represents an instruction in the image’s Dockerfile.
Example: The below Dockerfile contains four commands, each of which creates a layer.
FROM ubuntu:15.04
COPY . /app
RUN make /app
CMD python /app/app.py
Importantly, each layer is only a set of differences from the layer before it.
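You can see those layers for yourself with docker history; for instance, after building the Dockerfile above (the tag my-python-app is just an example name):
docker build -t my-python-app .
docker history my-python-app    # one line per layer, newest first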
Container.
When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the “container layer”. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.
Hence, the major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.
Understanding images and containers from a size-on-disk perspective
To view the approximate size of a running container, you can use the docker ps -s command. You get size and virtual size as two of the outputs:
Size: the amount of data (on disk) that is used for the writable layer of each container
Virtual Size: the amount of data used for the read-only image data used by the container. Multiple containers may share some or all read-only image data, so these values are not additive; you can't add up all the virtual sizes to calculate how much disk space the images use.
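To see this in practice, the following commands are useful (docker system df requires Docker 1.13 or later):
docker ps -s        # per-container size (writable layer) and virtual size
docker system df    # overall space used by images, containers and local volumes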
Another important concept is the copy-on-write strategy
If a file or directory exists in a lower layer within the image, and another layer (including the writable layer) needs read access to it, it just uses the existing file. The first time another layer needs to modify the file (when building the image or running the container), the file is copied into that layer and modified.
I hope that helps someone else like me.
Simply said, if an image is a class, then a container is an instance of that class, a runtime object.
A container is just an executable binary that is to be run by the host OS under a set of restrictions that are preset using an application (e.g., Docker) that knows how to tell the OS which restrictions to apply.
The typical restrictions are process-isolation related, security related (like using SELinux protection) and system-resource related (memory, disk, CPU, and networking).
Until recently, only kernels in Unix-based systems supported the ability to run executables under strict restrictions. That's why most container talk today involves mostly Linux or other Unix distributions.
Docker is one of those applications that knows how to tell the OS (Linux mostly) what restrictions to run an executable under. The executable is contained in the Docker image, which is just a tarfile. That executable is usually a stripped-down version of a Linux distribution's User space (Ubuntu, CentOS, Debian, etc.) preconfigured to run one or more applications within.
Though most people use a Linux base as the executable, it can be any other binary application as long as the host OS's kernel can run it (see creating a simple base image using scratch). Whether the binary in the Docker image is an OS User space or simply an application, to the OS host it is just another process, a contained process ruled by preset OS boundaries.
Other applications that, like Docker, can tell the host OS which boundaries to apply to a process while it is running, include LXC, libvirt, and systemd. Docker used to use these applications to indirectly interact with the Linux OS, but now Docker interacts directly with Linux using its own library called "libcontainer".
So containers are just processes running in a restricted mode, similar to what chroot used to do.
IMO, what sets Docker apart from any other container technology is its repository (Docker Hub) and their management tools which makes working with containers extremely easy.
See Docker (software).
The core concept of Docker is to make it easy to create "machines" which in this case can be considered containers. The container aids in reusability, allowing you to create and drop containers with ease.
Images depict the state of a container at every point in time. So the basic workflow is:
create an image
start a container
make changes to the container
save the container back as an image
As many answers have pointed out: you build a Dockerfile to get an image, and you run an image to get a container.
However, the following steps helped me get a better feel for what a Docker image and a container are:
1) Build Dockerfile:
docker build -t my_image dir_with_dockerfile
2) Save the image to a .tar file:
docker save -o my_file.tar my_image_id
my_file.tar will store the image. Open it with tar -xvf my_file.tar, and you will get to see all the layers. If you dive deeper into each layer you can see what changes were added in each layer. (They should be pretty close to commands in the Dockerfile).
3) To take a look inside of a container, you can do:
sudo docker run -it my_image bash
and you can see that it is very much like an OS.
It may help to think of an image as a "snapshot" of a container.
You can make images from a container (new "snapshots"), and you can also start new containers from an image (instantiate the "snapshot"). For example, you can instantiate a new container from a base image, run some commands in the container, and then "snapshot" that as a new image. Then you can instantiate 100 containers from that new image.
Other things to consider:
An image is made of layers, and layers are snapshot "diffs"; when you push an image, only the "diff" is sent to the registry.
A Dockerfile defines some commands on top of a base image, that creates new layers ("diffs") that result in a new image ("snapshot").
Containers are always instantiated from images.
Image tags are not just tags. They are the image's "full name" ("repository:tag"). If the same image has multiple names, it shows multiple times when doing docker images.
Image is an equivalent to a class definition in OOP and layers are different methods and properties of that class.
Container is the actual instantiation of the image just like how an object is an instantiation or an instance of a class.
I think it is better to explain from the very beginning.
Suppose you run the command docker run hello-world. What happens?
It calls the Docker CLI, which is responsible for taking Docker commands and translating them into calls to the Docker server. As soon as the Docker server gets a command to run an image, it checks whether the image cache holds an image with that name.
Suppose hello-world does not exist. The Docker server goes to Docker Hub (Docker Hub is just a free repository of images) and asks: hey Hub, do you have an image called hello-world?
The Hub responds: yes, I do. Then give it to me, please. And the download process starts. As soon as the Docker image is downloaded, the Docker server puts it in the image cache.
So before we explain what Docker images and Docker containers are, let's start with an introduction about the operating system on your computer and how it runs software.
When you run, for example, Chrome on your computer, it calls the operating system, the operating system itself calls the kernel and asks: hey, I want to run this program. The kernel then runs the program's files from your hard disk.
Now imagine that you have two programs, Chrome and Node.js. Chrome requires Python version 2 to run and Node.js requires Python version 3 to run. If you only have installed Python v2 on your computer, only Chrome will be run.
To make both cases work, somehow you need to use an operating system feature known as namespacing. A namespace is a feature which gives you the opportunity to isolate processes, hard drive, network, users, hostnames and so on.
So, when we talk about an image we actually talk about a file system snapshot. An image is a physical file which contains directions and metadata to build a specific container. The container itself is an instance of an image; it isolates the hard drive using namespacing which is available only for this container. So a container is a process or set of processes which groups different resources assigned to it.
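A quick way to see that isolation in practice, using the small public busybox image:
docker run --rm busybox hostname    # prints the container's own hostname, not the host's
docker run --rm busybox ps          # shows only the container's processes (PID 1 and ps itself)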
A Docker image packs up the application and environment required by the application to run, and a container is a running instance of the image.
Images are the packing part of Docker, analogous to "source code" or a "program". Containers are the execution part of Docker, analogous to a "process".
In the question, only the "program" part is referred to and that's the image. The "running" part of Docker is the container. When a container is run and changes are made, it's as if the process makes a change in its own source code and saves it as the new image.
As in the programming aspect,
Image is source code.
When source code is compiled and built, it is called an application.
Similar to that "when an instance is created for the image", it is called a "container".
I would like to fill the missing part here between docker images and containers. Docker uses a union file system (UFS) for containers, which allows multiple filesystems to be mounted in a hierarchy and to appear as a single filesystem. The filesystem from the image has been mounted as a read-only layer, and any changes to the running container are made to a read-write layer mounted on top of this. Because of this, Docker only has to look at the topmost read-write layer to find the changes made to the running system.
I would state it with the following analogy:
+-----------------------------+-------+-----------+
| Domain | Meta | Concrete |
+-----------------------------+-------+-----------+
| Docker | Image | Container |
| Object oriented programming | Class | Object |
+-----------------------------+-------+-----------+
Docker Client, Server, Machine, Images, Hub and Compose are all projects and tools, pieces of software that come together to form a platform, an ecosystem around creating and running something called containers. When you run the command docker run redis, the Docker CLI reaches out to the Docker Hub and downloads a single file called an image.
Docker Image:
An image is a single file containing all the dependencies and all the configuration required to run one very specific program; for example, the redis image you just downloaded (by running docker run redis) contains everything needed to run Redis.
This single file gets stored on your hard drive, and at some point in time you can use this image to create something called a container.
A container is an instance of an image. You can think of it as a running program with its own isolated set of hardware resources: its own little space of memory, its own little slice of networking, and its own little portion of hard drive space as well.
Now let's examine what happens when you run the command below:
sudo docker run hello-world
The above command starts up the Docker client, or Docker CLI. The Docker CLI is in charge of taking commands from you, doing a little bit of processing on them, and then communicating them over to something called the Docker server, which is in charge of the heavy lifting.
Running docker run hello-world meant that we wanted to start up a new container using the image with the name hello-world. The hello-world image has a tiny little program inside of it whose sole job is to print out the message that you see in the terminal.
When we ran that command and it was issued over to the Docker server, a series of actions occurred very quickly in the background. The Docker server saw that we were trying to start up a new container using an image called hello-world.
The first thing the Docker server did was check whether it already had a local copy (a copy on your personal machine) of the hello-world image, so it looked into something called the image cache.
Because you and I just installed Docker on our personal computers, that image cache is currently empty; we have no images that have already been downloaded.
Since the image cache was empty, the Docker server reached out to a free service called Docker Hub. Docker Hub is a repository of free public images that you can download and run on your personal computer. The Docker server downloaded the hello-world file and stored it on your computer in the image cache, where it can be re-run at some point in the future very quickly, without having to be re-downloaded from Docker Hub.
After that, the Docker server used it to create an instance of a container, and we know that a container is an instance of an image whose sole purpose is to run one very specific program. So the Docker server essentially took that image file from the image cache, loaded it into memory, created a container out of it, and ran a single program inside of it. And that single program's purpose was to print out the message that you see.
What a container is:
First of all, an image is a blueprint for how to create a container.
A container is a process or a set of processes that has a grouping of resources specifically assigned to it. Any time we think about a container, we have some running process that sends a system call to the kernel; the kernel looks at that incoming system call and directs it to a very specific portion of the hard drive, the RAM, the CPU, or whatever else it might need, and a portion of each of these resources is made available to that singular process.
An image is to a class as a container is to an object.
A container is an instance of an image as an object is an instance of a class.
In Docker, an image is an immutable file that holds the source code and information needed for a Docker app to run. It can exist independently of a container.
Docker containers are virtualized environments created at runtime, and they require images to run. The Docker website has a diagram that illustrates this relationship.
Just as an object is an instance of a class in an object-oriented programming language, so a Docker container is an instance of a Docker image.
For a rough programming analogy, you can think of Docker as having an abstract ImageFactory which holds image factories that come from a store.
Then, once you want to create an app out of that ImageFactory, you will have a new container, and you can modify it as you want. DotNetImageFactory will be immutable, because it acts as an abstract factory class that only delivers the instances you desire.
IContainer newDotNetApp = ImageFactory.DotNetImageFactory.CreateNew(appOptions);
newDotNetApp.ChangeDescription("I am making changes on this instance");
newDotNetApp.Run();
In short:
A container is a (virtual) division within a kernel which shares a common OS and runs an image (a Docker image).
A container is a self-contained application that has its packages and all the necessary dependencies bundled together so the code can run.
A Docker container is running an instance of an image. You can relate an image with a program and a container with a process :)
A Dockerfile is like a Bash script that produces a tarball (the Docker image).
Docker containers are like extracted versions of that tarball. You can have as many copies as you like in different folders (the containers).
An image is the blueprint from which containers (running instances) are built.
Long story short.
Docker Images:
The read-only filesystem and configuration of the application, which are used to create containers.
Docker Containers:
The major difference between a container and an image is the top writable layer. Containers are running instances of Docker images with top writable layer. Containers run the actual applications. A container includes an application and all of its dependencies. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.
Other important terms to notice:
Docker daemon:
The background service running on the host that manages the building, running and distributing Docker containers.
Docker client:
The command line tool that allows the user to interact with the Docker daemon.
Docker Store:
Store is, among other things, a registry of Docker images. You can think of the registry as a directory of all available Docker images.
A picture from this blog post is worth a thousand words.
Summary:
Pull an image from Docker Hub or build from a Dockerfile => gives a Docker image (not editable).
Run the image (docker run image_name:tag_name) => gives a running image, i.e. a container (editable).
An image is like a class, and a container is like an object of that class, so you can have an infinite number of containers behaving like the image. A class is a blueprint which isn't doing anything on its own. You have to create instances of the object in your program to do anything meaningful, and the same is the case with an image and a container. You define your image and then create containers running that image. It isn't an exact parallel, because an object is an instance of a class, whereas a container is more like an empty, hollow place, and you use the image to build up a running host with exactly what the image says.
An image, or container image, is a file which contains your application code, application runtime, configuration and dependent libraries. The image basically wraps all of these into a single, secure, immutable unit. The appropriate docker command is used to build the image. The image has an image ID and an image tag; the tag is usually in the format <docker-user-name>/image-name:tag.
When you start running your application using the image, you actually start a container. So your container is a sandbox in which you run your image. The Docker software is used to manage both the image and the container.
An image is a secured package which contains your application artifact, libraries, configuration and application runtime. A container is the runtime representation of your image.

Shared Docker devicemapper lvm thinpool in a multiboot setup

I'm developing using Docker on a multiboot setup under both Fedora and Ubuntu on my laptop. I need this to rule out issues with SELinux and/or AppArmor, so my build will work for both Red Hat (and friends) and Debian (and friends).
I'm using devicemapper in thin pool lvm configuration as storage backend. This was configured using docker-storage-setup tool under Fedora.
I would like to share my Docker images and containers between the Fedora environment (/ is formatted as ext4 on LVM) and the Ubuntu environment (/ is formatted as btrfs, also on LVM) to save space.
However after one Docker system has started and taken over the docker thinpool, the other Docker system could not use the same docker thinpool.
This is the error:
Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool ("my docker thin pool") that already has used data blocks
Based on that it seems to have this limitation by design. In that case would anyone elaborate on my particular use case and is there another way to share my docker devicemapper thin pool with several linux systems so I can save space and not have duplicate images/containers?
In a bug report Eric Paris says:
IF you are using device mapper (instead of loopback) /var/lib/docker contains metadata informing docker about the contents of the device mapper storage area. If you delete /var/lib/docker that metadata is lost. Docker is then able to detect that the thin pool has data but docker is unable to make use of that information. The only solution is to delete the thin pool and recreate it so that both the thin pool and the metadata in /var/lib/docker will be empty.
So syncing parts of /var/lib/docker may be a solution.
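As a purely speculative, untested sketch of that idea (it assumes the other system's root is mounted at /mnt/other, that Docker is stopped on both systems before syncing, and that these paths match your devicemapper setup):
systemctl stop docker
rsync -a --delete /var/lib/docker/devicemapper/metadata/ /mnt/other/var/lib/docker/devicemapper/metadata/
rsync -a --delete /var/lib/docker/image/ /mnt/other/var/lib/docker/image/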

Usage of loopback devices is strongly discouraged for production use

I want to test docker in my CentOS 7.1 box, I got this warning:
[root@docker1 ~]# docker run busybox /bin/echo Hello Docker
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Hello Docker
I want to know the reason and how to suppress this warning.
The CentOS instance is running in virtualbox created by vagrant.
The warning message occurs because your Docker storage configuration is using a "loopback device" -- a virtual block device such as /dev/loop0 that is actually backed by a file on your filesystem. This was never meant as anything more than a quick hack to get Docker up and running quickly as a proof of concept.
You don't want to suppress the warning; you want to fix your storage configuration such that the warning is no longer issued. The easiest way to do this is to assign some local disk space for use by Docker's devicemapper storage driver and use that.
If you're using LVM and have some free space available on your volume group, this is relatively easy. For example, to give docker 100G of space, first create a data and metadata volume:
# lvcreate -n docker-data -L 100G /dev/my-vg
# lvcreate -n docker-metadata -L1G /dev/my-vg
And then configure Docker to use this space by editing /etc/sysconfig/docker-storage to look like:
DOCKER_STORAGE_OPTIONS=-s devicemapper --storage-opt dm.datadev=/dev/my-vg/docker-data --storage-opt dm.metadatadev=/dev/my-vg/docker-metadata
If you're not using LVM or don't have free space available on your VG, you could expose some other block device (e.g., a spare disk or partition) to Docker in a similar fashion.
There are some interesting notes on this topic here.
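The warning itself points at --storage-opt dm.thinpooldev, which newer Docker versions recommend over dm.datadev/dm.metadatadev. A rough sketch of creating an LVM thin pool for that purpose (the volume group name my-vg and the sizes are placeholders; double-check against your distribution's documentation):
lvcreate --wipesignatures y -n thinpool -l 90%VG my-vg
lvcreate --wipesignatures y -n thinpoolmeta -l 1%VG my-vg
lvconvert -y --zero n -c 512K --thinpool my-vg/thinpool --poolmetadata my-vg/thinpoolmeta
and then in /etc/sysconfig/docker-storage:
DOCKER_STORAGE_OPTIONS="--storage-driver=devicemapper --storage-opt dm.thinpooldev=/dev/mapper/my--vg-thinpool"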
Thanks. This was driving me crazy. I thought bash was outputting this message. I was about to submit a bug against bash. Unfortunately, none of the options presented are viable on a laptop or such where disk is fully utilized. Here is my answer for that scenario.
Here is what I used in the /etc/sysconfig/docker-storage on my laptop:
DOCKER_STORAGE_OPTIONS="--storage-opt dm.no_warn_on_loop_devices=true"
Note: I had to restart the docker service for this to have an effect. On Fedora the command for that is:
systemctl stop docker
systemctl start docker
There is also just a restart command (systemctl restart docker), but it is a good idea to check to make sure stop really worked before starting again.
If you don't mind disabling SELinux in your containers, another option is to use overlay. Here is a link that describes that fully:
http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/
In summary for /etc/sysconfig/docker:
OPTIONS='--selinux-enabled=false --log-driver=journald'
and for /etc/sysconfig/docker-storage:
DOCKER_STORAGE_OPTIONS=-s overlay
When you change the storage type, restarting Docker will destroy your complete image and container store. You may as well clean everything up in the /var/lib/docker folder when doing this:
systemctl stop docker
rm -rf /var/lib/docker
dnf reinstall docker
systemctl start docker
In RHEL 6.6 any user with docker access can access my private keys, and run applications as root with the most trivial of hacks via volumes. SELinux is the one thing that prevents that in Fedora and RHEL 7. That said, it is not clear how much of the additional RHEL 7 security comes from SELinux outside the container and how much inside the container...
Generally, loopback devices are fine for instances where the 100 GB maximum and slightly reduced performance are not a problem. The only issue I can find is that the Docker store can become corrupt if you hit a disk-full error while running... That can probably be avoided with quotas or other simple solutions.
However, for a production instance it is definitely worth the time and effort to set this up correctly.
100G may be excessive for your production instance. Containers and images are fairly small. Many organizations are running Docker containers within VMs as an additional measure of security and isolation. If so, you might have a fairly small number of containers running per VM, in which case even 10G might be sufficient.
One final note. Even if you are using direct LVM, you probably want an additional filesystem for /var/lib/docker. The reason is that the command "docker load" will create an uncompressed version of the images being loaded in this folder before adding them to the data store. So if you are trying to keep it small and light, then explore options other than direct LVM.
@Igor Ganapolsky and @Mincă Daniel Andrei:
Check this:
systemctl edit docker --full
If the EnvironmentFile directive is not listed in the [Service] block, then no luck (I also have this problem on CentOS 7), but you can extend the standard systemd unit like this:
systemctl edit docker
and add the following under the [Service] section:
[Service]
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=
ExecStart=/usr/bin/dockerd $OPTIONS
And create a file /etc/sysconfig/docker with content:
OPTIONS="-s overlay --storage-opt dm.no_warn_on_loop_devices=true"

Limit disk size and bandwidth of a Docker container

I have a physical host machine with Ubuntu 14.04 running on it. It has 100G disk and 100M network bandwidth. I installed Docker and launched 10 containers. I would like to limit each container to a maximum of 10G disk and 10M network bandwidth.
After going though the official documents and searching on the Internet, I still can't find a way to allocate specified size disk and network bandwidth to a container.
I think this may not be possible in Docker directly, maybe we need to bypass Docker. Does this mean we should use something "underlying", such as LXC or cgroups? Can anyone give some suggestions?
Edit:
@Mbarthelemy, your suggestion seems to work but I still have some questions about disk:
1) Is it possible to allocate other size (such as 20G, 30G etc) to each container? You said it is hardcoded in Docker so it seems impossible.
2) I use the command below to start the Docker daemon and container:
docker -d -s devicemapper
docker run -i -t training/webapp /bin/bash
then I use df -h to view the disk usage, it gives the following output:
Filesystem                    Size  Used  Avail  Use%  Mounted on
/dev/mapper/docker-longid     9.8G  276M  9.0G     3%  /
/dev/mapper/Chris--vg-root     27G  5.5G   20G    22%  /etc/hosts
From the above, I think the maximum disk a container can use is still larger than 10G. What do you think?
I don't think this is possible right now using Docker default settings. Here's what I would try.
About disk usage: You could tell Docker to use the DeviceMapper storage backend instead of AuFS. This way each container would run on a block device (Devicemapper dm-thin target) limited to 10GB (this is a Docker default, luckily enough it matches your requirement!).
According to this link, it looks like the latest versions of Docker now accept advanced storage backend options. Using the devicemapper backend, you can now change the default container rootfs size using --storage-opt dm.basesize=20G (that would be applied to any newly created container).
To change the storage backend: use the --storage-driver=devicemapper Docker option. Note that your previous containers won't be seen by Docker anymore after the change.
About network bandwidth: you could tell Docker to use LXC under the hood: use the -e lxc option.
Then, create your containers with a custom LXC directive to put them into a traffic class:
docker run --lxc-conf="lxc.cgroup.net_cls.classid = 0x00100001" your/image /bin/stuff
Check the official documentation about how to apply bandwidth limits to this class.
I've never tried this myself (my setup uses a custom OpenVswitch bridge and VLANs for networking, so bandwidth limitation is different and somewhat easier), but I think you'll have to create and configure a different class.
Note: the --storage-driver=devicemapper and -e lxc options are for the Docker daemon, not for the Docker client you're using when running docker run ...
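For completeness, a rough, untested sketch of the host-side tc rules that would match the 0x00100001 classid above (which corresponds to tc class 10:1; eth0 and the 10mbit rate are placeholders):
tc qdisc add dev eth0 root handle 10: htb
tc class add dev eth0 parent 10: classid 10:1 htb rate 10mbit
tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup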
Newer releases have --device-read-bps and --device-write-bps.
You can use:
docker run --device-read-bps=/dev/sda:10mb <image>
More info here:
https://blog.docker.com/2016/02/docker-1-10/
If you have access to the containers you can use tc for bandwidth control within them.
e.g., in your entrypoint script you can add:
tc qdisc add dev eth0 root tbf rate 240kbit burst 300kbit latency 50ms
to have a bandwidth of 240kbps, burst 300kbps and 50 ms latency.
You also need to pass the --cap-add=NET_ADMIN to the docker run command if you are not running the containers as root.
1) Is it possible to allocate other size (such as 20G, 30G etc) to each container? You said it is hardcoded in Docker so it seems impossible.
To answer this question, please refer to Resizing Docker containers with the Device Mapper plugin.

What is the difference between a Docker image and a container?

When using Docker, we start with a base image. We boot it up, create changes and those changes are saved in layers forming another image.
So eventually I have an image for my PostgreSQL instance and an image for my web application, changes to which keep on being persisted.
What is a container?
An instance of an image is called a container. You have an image, which is a set of layers as you describe. If you start this image, you have a running container of this image. You can have many running containers of the same image.
You can see all your images with docker images whereas you can see your running containers with docker ps (and you can see all containers with docker ps -a).
So a running instance of an image is a container.
From my article on Automating Docker Deployments (archived):
Docker Images vs. Containers
In Dockerland, there are images and there are containers. The two are closely related, but distinct. For me, grasping this dichotomy has clarified Docker immensely.
What's an Image?
An image is an inert, immutable, file that's essentially a snapshot of a container. Images are created with the build command, and they'll produce a container when started with run. Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite large, images are designed to be composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network.
Local images can be listed by running docker images:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu 13.10 5e019ab7bf6d 2 months ago 180 MB
ubuntu 14.04 99ec81b80c55 2 months ago 266 MB
ubuntu latest 99ec81b80c55 2 months ago 266 MB
ubuntu trusty 99ec81b80c55 2 months ago 266 MB
<none> <none> 4ab0d9120985 3 months ago 486.5 MB
Some things to note:
IMAGE ID is the first 12 characters of the true identifier for an image. You can create many tags of a given image, but their IDs will all be the same (as above).
VIRTUAL SIZE is virtual because it's adding up the sizes of all the distinct underlying layers. This means that the sum of all the values in that column is probably much larger than the disk space used by all of those images.
The value in the REPOSITORY column comes from the -t flag of the docker build command, or from docker tag-ing an existing image. You're free to tag images using a nomenclature that makes sense to you, but know that docker will use the tag as the registry location in a docker push or docker pull.
The full form of a tag is [REGISTRYHOST/][USERNAME/]NAME[:TAG]. For ubuntu above, REGISTRYHOST is inferred to be registry.hub.docker.com. So if you plan on storing your image called my-application in a registry at docker.example.com, you should tag that image docker.example.com/my-application.
The TAG column is just the [:TAG] part of the full tag. This is unfortunate terminology.
The latest tag is not magical, it's simply the default tag when you don't specify a tag.
You can have untagged images only identifiable by their IMAGE IDs. These will get the <none> TAG and REPOSITORY. It's easy to forget about them.
More information on images is available from the Docker documentation and glossary.
What's a container?
To use a programming metaphor, if an image is a class, then a container is an instance of a class—a runtime object. Containers are hopefully why you're using Docker; they're lightweight and portable encapsulations of an environment in which to run applications.
View local running containers with docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f2ff1af05450 samalba/docker-registry:latest /bin/sh -c 'exec doc 4 months ago Up 12 weeks 0.0.0.0:5000->5000/tcp docker-registry
Here I'm running a dockerized version of the docker registry, so that I have a private place to store my images. Again, some things to note:
Like IMAGE ID, CONTAINER ID is the true identifier for the container. It has the same form, but it identifies a different kind of object.
docker ps only outputs running containers. You can view all containers (running or stopped) with docker ps -a.
NAMES can be used to identify a started container via the --name flag.
How to avoid image and container buildup
One of my early frustrations with Docker was the seemingly constant buildup of untagged images and stopped containers. On a handful of occasions this buildup resulted in maxed out hard drives slowing down my laptop or halting my automated build pipeline. Talk about "containers everywhere"!
We can remove all untagged images by combining docker rmi with the recent dangling=true query:
docker images -q --filter "dangling=true" | xargs docker rmi
Docker won't be able to remove images that are behind existing containers, so you may have to remove stopped containers with docker rm first:
docker rm `docker ps --no-trunc -aq`
These are known pain points with Docker and may be addressed in future releases. However, with a clear understanding of images and containers, these situations can be avoided with a couple of practices:
Always remove a useless, stopped container with docker rm [CONTAINER_ID].
Always remove the image behind a useless, stopped container with docker rmi [IMAGE_ID].
While it's simplest to think of a container as a running image, this isn't quite accurate.
An image is really a template that can be turned into a container. To turn an image into a container, the Docker engine takes the image, adds a read-write filesystem on top and initialises various settings including network ports, container name, ID and resource limits. A running container has a currently executing process, but a container can also be stopped (or exited in Docker's terminology). An exited container is not the same as an image, as it can be restarted and will retain its settings and any filesystem changes.
Maybe explaining the whole workflow can help.
Everything starts with the Dockerfile. The Dockerfile is the source code of the image.
Once the Dockerfile is created, you build it to create the image of the container. The image is just the "compiled version" of the "source code" which is the Dockerfile.
Once you have the image of the container, you should redistribute it using the registry. The registry is like a Git repository -- you can push and pull images.
Next, you can use the image to run containers. A running container is very similar, in many aspects, to a virtual machine (but without the hypervisor).
Dockerfile → (Build) → Image → (Run) → Container.
Dockerfile: contains a set of Docker instructions that provisions your operating system the way you like, and installs/configure all your software.
Image: compiled Dockerfile. Saves you time from rebuilding the Dockerfile every time you need to run a container. And it's a way to hide your provision code.
Container: the virtual operating system itself. You can ssh into it and run any commands you wish, as if it's a real environment. You can run 1000+ containers from the same Image.
Workflow
Here is the end-to-end workflow showing the various commands and their associated inputs and outputs. That should clarify the relationship between an image and a container.
+------------+ docker build +--------------+ docker run -dt +-----------+ docker exec -it +------+
| Dockerfile | --------------> | Image | ---------------> | Container | -----------------> | Bash |
+------------+ +--------------+ +-----------+ +------+
^
| docker pull
|
+--------------+
| Registry |
+--------------+
To list the images you could run, execute:
docker image ls
To list the containers you could execute commands on:
docker ps
I couldn't understand the concept of image and layer in spite of reading all the questions here and then eventually stumbled upon this excellent documentation from Docker (duh!).
The example there is really the key to understand the whole concept. It is a lengthy post, so I am summarising the key points that need to be really grasped to get clarity.
Image: A Docker image is built up from a series of read-only layers
Layer: Each layer represents an instruction in the image’s Dockerfile.
Example: The below Dockerfile contains four commands, each of which creates a layer.
FROM ubuntu:15.04
COPY . /app
RUN make /app
CMD python /app/app.py
Importantly, each layer is only a set of differences from the layer before it.
Container.
When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the “container layer”. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.
Hence, the major difference between a container and an image is
the top writable layer. All writes to the container that add new or
modify existing data are stored in this writable layer. When the
container is deleted, the writable layer is also deleted. The
underlying image remains unchanged.
Understanding images cnd Containers from a size-on-disk perspective
To view the approximate size of a running container, you can use the docker ps -s command. You get size and virtual size as two of the outputs:
Size: the amount of data (on disk) that is used for the writable layer of each container
Virtual Size: the amount of data used for the read-only image data used by the container. Multiple containers may share some or all read-only image data. Hence these are not additive. I.e. you can't add all the virtual sizes to calculate how much size on disk is used by the image
Another important concept is the copy-on-write strategy
If a file or directory exists in a lower layer within the image, and another layer (including the writable layer) needs read access to it, it just uses the existing file. The first time another layer needs to modify the file (when building the image or running the container), the file is copied into that layer and modified.
I hope that helps someone else like me.
Simply said, if an image is a class, then a container is an instance of a class is a runtime object.
A container is just an executable binary that is to be run by the host OS under a set of restrictions that are preset using an application (e.g., Docker) that knows how to tell the OS which restrictions to apply.
The typical restrictions are process-isolation related, security related (like using SELinux protection) and system-resource related (memory, disk, CPU, and networking).
Until recently, only kernels in Unix-based systems supported the ability to run executables under strict restrictions. That's why most container talk today involves mostly Linux or other Unix distributions.
Docker is one of those applications that knows how to tell the OS (Linux mostly) what restrictions to run an executable under. The executable is contained in the Docker image, which is just a tarfile. That executable is usually a stripped-down version of a Linux distribution's User space (Ubuntu, CentOS, Debian, etc.) preconfigured to run one or more applications within.
Though most people use a Linux base as the executable, it can be any other binary application as long as the host OS's kernel can run it (see creating a simple base image using scratch). Whether the binary in the Docker image is an OS User space or simply an application, to the OS host it is just another process, a contained process ruled by preset OS boundaries.
Other applications that, like Docker, can tell the host OS which boundaries to apply to a process while it is running, include LXC, libvirt, and systemd. Docker used to use these applications to indirectly interact with the Linux OS, but now Docker interacts directly with Linux using its own library called "libcontainer".
So containers are just processes running in a restricted mode, similar to what chroot used to do.
IMO, what sets Docker apart from any other container technology is its repository (Docker Hub) and their management tools which makes working with containers extremely easy.
See Docker (software).
The core concept of Docker is to make it easy to create "machines" which in this case can be considered containers. The container aids in reusability, allowing you to create and drop containers with ease.
Images depict the state of a container at every point in time. So the basic workflow is:
create an image
start a container
make changes to the container
save the container back as an image
As many answers pointed this out: You build Dockerfile to get an image and you run image to get a container.
However, following steps helped me get a better feel for what Docker image and container are:
1) Build Dockerfile:
docker build -t my_image dir_with_dockerfile
2) Save the image to .tar file
docker save -o my_file.tar my_image_id
my_file.tar will store the image. Open it with tar -xvf my_file.tar, and you will get to see all the layers. If you dive deeper into each layer you can see what changes were added in each layer. (They should be pretty close to commands in the Dockerfile).
3) To take a look inside of a container, you can do:
sudo docker run -it my_image bash
and you can see that is very much like an OS.
It may help to think of an image as a "snapshot" of a container.
You can make images from a container (new "snapshots"), and you can also start new containers from an image (instantiate the "snapshot"). For example, you can instantiate a new container from a base image, run some commands in the container, and then "snapshot" that as a new image. Then you can instantiate 100 containers from that new image.
Other things to consider:
An image is made of layers, and layers are snapshot "diffs"; when you push an image, only the "diff" is sent to the registry.
A Dockerfile defines some commands on top of a base image, that creates new layers ("diffs") that result in a new image ("snapshot").
Containers are always instantiated from images.
Image tags are not just tags. They are the image's "full name" ("repository:tag"). If the same image has multiple names, it shows multiple times when doing docker images.
Image is an equivalent to a class definition in OOP and layers are different methods and properties of that class.
Container is the actual instantiation of the image just like how an object is an instantiation or an instance of a class.
I think it is better to explain at the beginning.
Suppose you run the command docker run hello-world. What happens?
It calls Docker CLI which is responsible to take Docker commands and transform to call Docker server commands. As soon as Docker server gets a command to run an image, it checks weather the images cache holds an image with such a name.
Suppose hello-world do not exists. Docker server goes to Docker Hub (Docker Hub is just a free repository of images) and asks, hey Hub, do you have an image called hello-world?
Hub responses - yes, I do. Then give it to me, please. And the download process starts. As soon as the Docker image is downloaded, the Docker server puts it in the image cache.
So before we explain what Docker images and Docker containers are, let's start with an introduction about the operation system on your computer and how it runs software.
When you run, for example, Chrome on your computer, it calls the operating system, the operating system itself calls the kernel and asks, hey I want to run this program. The kernel manages to run files from your hard disk.
Now imagine that you have two programs, Chrome and Node.js. Chrome requires Python version 2 to run and Node.js requires Python version 3 to run. If you only have installed Python v2 on your computer, only Chrome will be run.
To make both cases work, somehow you need to use an operating system feature known as namespacing. A namespace is a feature which gives you the opportunity to isolate processes, hard drive, network, users, hostnames and so on.
So, when we talk about an image we actually talk about a file system snapshot. An image is a physical file which contains directions and metadata to build a specific container. The container itself is an instance of an image; it isolates the hard drive using namespacing which is available only for this container. So a container is a process or set of processes which groups different resources assigned to it.
A Docker image packs up the application and environment required by the application to run, and a container is a running instance of the image.
Images are the packing part of Docker, analogous to "source code" or a "program". Containers are the execution part of Docker, analogous to a "process".
In the question, only the "program" part is referred to and that's the image. The "running" part of Docker is the container. When a container is run and changes are made, it's as if the process makes a change in its own source code and saves it as the new image.
As in the programming aspect,
Image is source code.
When source code is compiled and build, it is called an application.
Similar to that "when an instance is created for the image", it is called a "container".
I would like to fill the missing part here between docker images and containers. Docker uses a union file system (UFS) for containers, which allows multiple filesystems to be mounted in a hierarchy and to appear as a single filesystem. The filesystem from the image has been mounted as a read-only layer, and any changes to the running container are made to a read-write layer mounted on top of this. Because of this, Docker only has to look at the topmost read-write layer to find the changes made to the running system.
I would state it with the following analogy:
+-----------------------------+-------+-----------+
| Domain | Meta | Concrete |
+-----------------------------+-------+-----------+
| Docker | Image | Container |
| Object oriented programming | Class | Object |
+-----------------------------+-------+-----------+
Docker Client, Server, Machine, Images, Hub, Composes are all projects tools pieces of software that come together to form a platform where ecosystem around creating and running something called containers, now if you run the command docker run redis something called docker CLI reached out to something called the Docker Hub and it downloaded a single file called an image.
Docker Image:
An image is a single file containing all the dependencies and all the configuration required to run a very specific program. For example, redis is the image you just downloaded (by running docker run redis), and it exists to run the redis program.
This is a single file that gets stored on your hard drive, and at some point in time you can use this image to create something called a container.
A container is an instance of an image. You can think of it as a running program with its own isolated set of hardware resources: its own little space of memory, its own little space of networking, and its own little space of hard drive space as well.
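That "little space" of each resource can even be limited explicitly when you start the container. A hedged example (the limits and the container name are arbitrary):
docker run -d --name limited --memory=256m --cpus=1 nginx   # cap this container's slice of RAM and CPU
docker stats limited                                        # watch only this container's resource usage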
Now let's examine what happens when you run the command below:
sudo docker run hello-world
The command above starts up the Docker client, or Docker CLI. The Docker CLI is in charge of taking commands from you, doing a little bit of processing on them, and then communicating them to something called the Docker server. The Docker server is in charge of the heavy lifting when we run docker run hello-world.
That command means we want to start up a new container using the image named hello-world. The hello-world image has a tiny little program inside of it whose sole job is to print out the message that you see in the terminal.
When we ran that command and it was issued over to the Docker server, a series of actions occurred very quickly in the background. The Docker server saw that we were trying to start up a new container using an image called hello-world.
The first thing the Docker server did was check whether it already had a local copy (a copy on your personal machine) of the hello-world image. To do that, the Docker server looked into something called the image cache.
Because you and I just installed Docker on our personal computers, that image cache is currently empty; we have no images that have been downloaded before.
So, because the image cache was empty, the Docker server decided to reach out to a free service called Docker Hub. Docker Hub is a repository of free public images that you can download and run on your personal computer. The Docker server reached out to Docker Hub, downloaded the hello-world file, and stored it on your computer in the image cache, where it can be re-run at some point in the future very quickly without having to be re-downloaded from Docker Hub.
After that, the Docker server used the image to create an instance of a container; and we know that a container is an instance of an image whose sole purpose is to run one very specific program. So the Docker server essentially took that image file from the image cache, loaded it into memory, created a container out of it, and then ran a single program inside of it. And that single program's purpose was to print out the message that you see.
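You can verify the caching behaviour yourself. A small sketch of the expected flow (output omitted, since it varies by version):
sudo docker run hello-world   # first run: the image is pulled from Docker Hub
sudo docker images            # hello-world now sits in the local image cache
sudo docker run hello-world   # second run: starts almost instantly from the cache
sudo docker ps -a             # the exited hello-world containers are still listed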
What a container is:
First of all an image is a blueprint for how to create a container.
A container is a process or a set of processes that have a grouping of resources specifically assigned to them. Picture it like this: any time we think about a container, there is some running process that sends a system call to the kernel; the kernel looks at that incoming system call and directs it to a very specific portion of the hard drive, the RAM, the CPU, or whatever else it might need, and a portion of each of those resources is made available to that single process.
An image is to a class as a container to an object.
A container is an instance of an image as an object is an instance of a class.
In Docker, an image is an immutable file that holds the source code and information needed for a Docker app to run. It can exist independently of a container.
Docker containers are virtualized environments created at runtime and require images to run. The Docker website has a diagram that shows this relationship:
Just as an object is an instance of a class in an object-oriented programming language, so a Docker container is an instance of a Docker image.
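And just as you can create many objects from one class, you can start many containers from one image. A minimal sketch, assuming the public nginx image (the container names are made up):
docker run -d --name web1 nginx   # first "object" of the nginx "class"
docker run -d --name web2 nginx   # second, independent instance of the same image
docker ps                         # two running containers, one underlying image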
For a rough programming analogy, you can think of Docker as an abstract ImageFactory which holds ImageFactories that come from a store.
Then, once you want to create an app out of that ImageFactory, you will have a new container, and you can modify it as you want. The DotNetImageFactory will be immutable, because it acts as an abstract factory class that only delivers the instances you desire:
IContainer newDotNetApp = ImageFactory.DotNetImageFactory.CreateNew(appOptions);
newDotNetApp.ChangeDescription("I am making changes on this instance");
newDotNetApp.Run();
In short:
A container is a (virtual) division in the kernel which shares a common OS and runs an image (a Docker image).
A container is a self-contained application that bundles all the packages and dependencies necessary to run the code.
A Docker container is running an instance of an image. You can relate an image with a program and a container with a process :)
A Dockerfile is like a Bash script that produces a tarball (the Docker image).
Docker containers are like extracted copies of the tarball. You can have as many copies as you like in different folders (the containers).
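To make the analogy concrete, here is a hypothetical, minimal Dockerfile (the file contents and image name are invented for illustration):
# Dockerfile -- the "script" that bakes the image
FROM alpine:3.18
RUN echo "hello from the image" > /greeting.txt
CMD ["cat", "/greeting.txt"]
And the corresponding build and run steps:
docker build -t my-greeting .   # produce the image (the "tarball")
docker run my-greeting          # "extract" one copy of it as a container
docker run my-greeting          # and another, completely independent copy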
An image is the blueprint from which containers (running instances) are built.
Long story short.
Docker Images:
The file system and configuration (read-only) of the application, which is used to create containers.
Docker Containers:
The major difference between a container and an image is the top writable layer. Containers are running instances of Docker images with a writable layer on top. Containers run the actual applications. A container includes an application and all of its dependencies. When the container is deleted, the writable layer is also deleted; the underlying image remains unchanged.
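A quick sketch of that lifecycle, using the stock ubuntu image (the container name and file path are arbitrary):
docker run --name demo ubuntu bash -c "echo data > /new-file"   # write into the container's writable layer
docker rm demo                                                  # deleting the container discards that layer
docker run --rm ubuntu cat /new-file                            # fails: the underlying image was never changed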
Other important terms to notice:
Docker daemon:
The background service running on the host that manages the building, running and distributing Docker containers.
Docker client:
The command line tool that allows the user to interact with the Docker daemon.
Docker Store:
The Store is, among other things, a registry of Docker images. You can think of the registry as a directory of all available Docker images.
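Interacting with such a registry from the command line looks roughly like this (redis is just a convenient public example):
docker search redis   # browse public images in the registry
docker pull redis     # download an image from the registry into the local cache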
A picture from this blog post is worth a thousand words.
Summary:
Pull an image from Docker Hub or build one from a Dockerfile => gives a Docker image (not editable).
Run the image (docker run image_name:tag_name) => gives a running image, i.e. a container (editable), as shown in the commands sketched below.
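In command form, roughly (the image name, tag and container name are placeholders):
docker pull nginx:latest                      # pull an image from Docker Hub, or ...
docker build -t myapp:1.0 .                   # ... build one from a Dockerfile => an image (not editable)
docker run --name myapp-instance myapp:1.0    # run the image => a container (editable)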
An image is like a class and a container is like an object of that class, so you can have any number of containers behaving like the image. A class is a blueprint which doesn't do anything on its own; you have to create instances (objects) in your program to do anything meaningful. The same goes for an image and a container: you define your image and then create containers running that image. It isn't an exact parallel, because an object is an instance of a class, whereas a container is more like an empty, isolated environment that you build up, using the image, into a running host with exactly what the image specifies.
An image (or container image) is a file which contains your application code, application runtime, configuration and dependent libraries. The image basically wraps all of these into a single, secure, immutable unit. The appropriate docker command is used to build the image. The image has an image id and an image tag; the tag is usually in the format <docker-user-name>/image-name:tag.
When you start running your application from the image, you actually start a container. So your container is a sandbox in which your image runs. The Docker software is used to manage both images and containers.
An image is a secured package which contains your application artifact, libraries, configuration and the application runtime. A container is the runtime representation of your image.
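Putting that naming convention into practice might look like this (the user name, image name and tag are hypothetical):
docker build -t alice/webapp:1.0 .   # tag follows <docker-user-name>/image-name:tag
docker images alice/webapp           # shows the image id and tag
docker run -d alice/webapp:1.0       # starting the image creates a container (the sandbox)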

Resources