Docker "Sharing Dependencies" - docker

While reading about Docker, I stumbled a couple of times over the claim that Docker containers not only share the host kernel, but, where possible, also share common binaries and libraries.
What I understand from that is: if I'm running the same Docker image twice on the same host, and this image uses some files x, y, z (say libraries, binaries, anything), these files will also be shared between the two launched containers? What's more, if I'm running two different images, they could still share these common dependencies. I'm asking for just two things:
1. Verification/explanation: is that true or false, and how does it happen?
2. If true: is there a practical example where I can run two containers (of the same or different images) and verify they see the same files/libraries?
I hope my question is clear and someone has an answer :)

Yes, the answer is "true" to both questions.
If you start 2 (or more) containers on the same host, all using the same base image, the whole content of the base image will be shared.
What is called an "image" is, in fact, multiple images called "layers", stacked together with parent-child relationships.
Now, if you start multiple containers with different images, it may happen that these images share some common layers, depending on how they were built.
At the system level, Docker mounts each image layer on top of the other, up to the final/top image; each layer overwrites its parent's content where it overlaps. To do that, it uses what is called a "union filesystem" (such as AUFS or OverlayFS), or even volume snapshots, depending on the storage driver.
The image layers are never modified; they are read-only. On top of the last/upper image, an extra writeable layer is added; it will contain the changes/additions made by the running container.
That means that this writeable layer can also be turned into an image layer, and you can start other containers from this new image.
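On a host using the overlay2 storage driver you can see these stacked mounts directly. A minimal sketch (exact paths differ per machine; the output shown in the comments is abridged and illustrative):
mount | grep overlay
# overlay on /var/lib/docker/overlay2/<id>/merged type overlay (rw,lowerdir=...,upperdir=...,workdir=...)
# "lowerdir" lists the shared read-only image layers; "upperdir" is the container's writeable layer.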
To see layer sharing "with your own eyes", just run the following examples:
docker run ubuntu:trusty /bin/bash
Then:
docker run ubuntu-upstart:trusty /bin/bash
Docker will tell you that it already has some of the layers and thus will not download them all again.
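If you want to verify the sharing more directly, you can compare the layer digests of the two images. A small sketch reusing the images above (any two related images will do):
docker image inspect --format '{{json .RootFS.Layers}}' ubuntu:trusty
docker image inspect --format '{{json .RootFS.Layers}}' ubuntu-upstart:trusty
# Digests that appear in both lists are layers stored only once on disk and shared by both images.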
Check the documentation about writing a Dockerfile (image build script), that should give you a good vision about how all this works.

Related

Docker - workflow for updating container

I'm just getting to grips with Docker. I need to update a base image for my image.
Questions:
Do I need to completely recreate all the changes I made on top of the new base image and save it as a new image?
What do people do to remember the changes they've made to their image?
Do I need to completely recreate all the changes I made on top of the new base image and save it as a new image?
You don't. It is up to you whether you want to completely rebuild the image or to use your old one as a new base. But unless we are talking about a generic base image, such as one where you just preinstall things that you want available to all the derived images, it is probably better to rebuild the image from scratch; otherwise you might end up cluttering images with stuff they don't need, which is never a good thing (from the perspective of both size and security).
What do people do to remember the changes they've made to their image?
Right out of the box you can use the history command to see what went into the image:
docker image history <image>
which lists the image's filesystem layers.
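For example, to print just the instruction that produced each layer (a sketch; <image> is a placeholder):
docker image history --no-trunc --format '{{.CreatedBy}}' <image>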
Personally, when I build images, I copy the Dockerfile into the image so that I can quickly cat it from a running container.
docker exec <container> cat Dockerfile
It is more convenient for me than scrolling through the history output (I don't include anything sensitive in a Dockerfile, and all the information it holds is already available within the container if someone breaks in).
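The Dockerfile side of that trick is a one-liner; a minimal sketch (the destination path is just the convention I'm assuming):
# In the Dockerfile itself, so the build recipe ships with the image:
COPY Dockerfile /Dockerfile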

Is `FROM` clause required in Dockerfile?

For all the Dockerfiles I've come across thus far (which admittedly is not many), all of them have used a FROM clause to base off an existing image, even if it's FROM scratch.
Is this clause required? Is it possible to have a Dockerfile with no FROM clause? Would a container created this way be able to do anything?
EDIT
I read
A Dockerfile with no FROM directive has no parent image, and is called a base image.
https://docs.docker.com/glossary/?term=parent%20image
But I think this may be an error.
Based on the official documentation it's required:
The FROM instruction initializes a new build stage and sets the Base Image for subsequent instructions. As such, a valid Dockerfile MUST start with a FROM instruction. The image can be any valid image – it is especially easy to start by pulling an image from the Public Repositories.
https://docs.docker.com/engine/reference/builder/#from
Short answer is yes, the FROM clause is required. But it's easier to come to this conclusion if you think of the image building process a bit.
A Dockerfile is just a way to describe a sequence of commands to be executed by the Docker build subsystem to create an image. And an image is just a bunch of regular files, most notably the user-land files of a particular Linux distribution, possibly with some extra files on top. Every Docker image is based on a parent image and adds its own files to the parent's set. Every image has to start FROM something, i.e. specify its parent. And the parent of all parents is the scratch image, defined as a no-op, i.e. an empty set of files.
Take a look at busybox image:
FROM scratch
ADD busybox.tar.xz /
CMD ["sh"]
It starts from scratch, i.e. an empty set of files, and adds (i.e. copies) to this set a bunch of files from busybox.tar.xz archive.
Now, if you want to create your own image, you can start from busybox image and describe what files (and how) you are going to add:
FROM busybox:latest
ADD myfile.txt /
But every new image has to start FROM something.
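To try that last snippet out, a quick sketch (it assumes a file myfile.txt exists next to the Dockerfile):
docker build -t myimage .
docker run --rm myimage cat /myfile.txt
# prints the contents of myfile.txt from inside the new image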
Yes, it is. It defines the layers on which the image you are building is based.
If you want to start an image from nothing, Docker offers an image called scratch.
The documentation also says:
A parent image is the image that your image is based on
also
A base image has FROM scratch in its Dockerfile.
Refer to the base images documentation.

Is it good practice to commit docker container frequently?

I'm running WebSphere Liberty inside the container. Since WebSphere Liberty requires frequent XML editing, which is impractical with Dockerfile commands, I have to docker commit the container from time to time, for others to make use of my images.
The command is like:
docker commit -m "updated sa1" -a "Song" $id company/wlp:v0.1
Colleagues are doing similar things to the image; they continue to docker commit the container several times every day.
One day we're going to deploy the image on production.
Q1: Is the practice of frequent docker-committing advised?
Q2: Does it leave any potential problem behind?
Q3: Does it create an extra layer? I read the docker commit documentation, which didn't mention whether it creates another layer, so I assume it doesn't.
I wouldn't use docker commit.
It seems like a really good idea, but you can't reproduce the image at will like you can with a Dockerfile, and you can't change the base image once this is done either, so it makes it very hard to apply, say, a security patch to the underlying OS base image.
If you go the full Dockerfile approach, you can re-run docker build and you'll get the same image again, and you are able to change the base image.
So my rule of thumb is: if you are creating a temporary tool and you don't care about reuse or reproducing the image at will, then commit is convenient to use.
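For the XML-editing case in the question, the reproducible alternative could look something like this sketch (websphere-liberty is the official image; the /config/server.xml path follows its documented convention, but treat both as assumptions for your setup):
FROM websphere-liberty:latest
# Keep the server configuration in source control and bake it in at build time,
# instead of editing XML in a running container and committing.
COPY server.xml /config/server.xml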
As I understand Docker, every running container has two parts: a group of read-only image layers making up the bulk of the filesystem, and a thin writeable layer on top where any changes are recorded.
When you run commit, Docker creates a new, distinct image: the base image's layers plus the changes you made in the container's thin writeable layer, which is frozen into a new read-only layer on top. So yes, each commit does add an extra image layer holding your deltas.
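You can check this yourself with a minimal sketch (names are placeholders):
docker run -d --name demo ubuntu sleep infinity
docker exec demo touch /newfile
docker commit -m "added /newfile" demo demo-image:v1
docker image history demo-image:v1
# the commit shows up as a new top layer in the history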
Don't just take my word for it; take Red Hat's advice.
For clarity, that article in step 5 says:
5) Don’t create images from running containers – In other terms, don’t use “docker commit” to create an image. This method to create an image is not reproducible and should be completely avoided. Always use a Dockerfile or any other S2I (source-to-image) approach that is totally reproducible, and you can track changes to the Dockerfile if you store it in a source control repository (git).

Docker Build and Push in Detail

Many people know what docker build and docker push do in general, on a high level, but what exactly do they do on a low level?
Let's say we have a Dockerfile like this:
FROM alpine:latest
RUN touch ~/tmp
RUN touch ~/tmp2
This will create a delta filesystem (only the changes) for each layer in /var/lib/docker/overlay2:
layer 1 contains a whole filesystem (the alpine base image)
layer 2 contains the file ~/tmp
layer 3 contains the file ~/tmp2
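You can peek at these deltas on disk; a rough sketch (paths vary per host and storage driver, and <layer-id> is a placeholder):
docker build -t layer-demo .
docker image inspect --format '{{json .GraphDriver.Data}}' layer-demo
# or list an individual layer's "diff" directory:
sudo ls /var/lib/docker/overlay2/<layer-id>/diff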
Open questions
What is the actual link between the layers? Is there a JSON file containing all the image info, including an ordered list of layers?
What kind of deliverable is generated and sent to the Docker registry when performing docker push? Is it a tar.gz, similar to docker save?
From my point of view:
Each layer is transferred to the Docker registry as a delta (only added, updated, and deleted files).
Each layer knows its parent layer.
So when you pull a child layer, it pulls the whole parent hierarchy up to the top (base) layer.
Each layer is identified by a sha256 digest, not by a name.
Any change in the hierarchy results in a different sha256 digest for all the child image layers (even if there is no change in those layers themselves).
Feel free to add or to suggest improvements.
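On the first open question: yes, the registry keeps a JSON manifest per image, listing the config blob and the ordered layer digests. A sketch for viewing it (requires a reasonably recent Docker CLI):
docker manifest inspect alpine:latest
# the output contains a "layers" array: ordered sha256 digests of the compressed layer tarballs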

Where do untagged Docker images come from?

I'm creating some very simple Docker containers. I understand that after each step a new container is created. However, when using other Dockerfiles from the Hub I don't wind up with untagged images. So where do they come from? After browsing around online I have found out how to remove them, but I want to gain a better understanding of where they come from. Ideally I would like to prevent them from ever being created.
From their documentation
This will display untagged images that are the leaves of the images tree (not intermediary layers). These images occur when a new build of an image takes the repo:tag away from the IMAGE ID, leaving it untagged. A warning will be issued if trying to remove an image when a container is presently using it. By having this flag it allows for batch cleanup.
I don't quite understand this. Why are builds taking the repo:tag away from the IMAGE ID?
Whenever you assign a tag that is already in use to a new image (say, by building image foo, making a change in its Dockerfile, and then building foo again), the old image will lose that tag but will still stay around, even if all of its tags are deleted. These older versions of your images are the untagged entries in the output of docker images, and you can safely delete them with docker rmi <IMAGE HASH> (though the deletion will be refused if there's an extant container still using that image).
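You can reproduce (and clean up) this behaviour with a short sketch (foo is a placeholder tag):
docker build -t foo .     # first build: foo:latest points at image A
# ...edit the Dockerfile, then rebuild:
docker build -t foo .     # foo:latest moves to image B; image A becomes <none>:<none>
docker images             # the old build now shows up untagged (dangling)
docker image prune        # removes dangling images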
Docker historically used a filesystem called AUFS, a union filesystem (modern installations use OverlayFS instead). Pretty much each instruction in a Dockerfile creates a new image layer, and when you stack them all on top of each other you get your final Docker image. This is essentially a way of caching, so if you change only the 9th line of your Dockerfile it won't rebuild the entire image set. (Well, it depends on what commands are in your Dockerfile; if a COPY or ADD pulls in changed files, nothing after that point is cached, for example.)
The final image gets tagged with whatever label you give it, but all these intermediary images are necessary in order to create the final image, so it doesn't make sense to delete them or prevent them from being created. Hope that makes sense.
