Can you share Docker containers?

I have been trying to figure out why one might choose to add every "step" of their setup to a Dockerfile, which will create your container in a certain state.
The alternative in my mind is to just create a container from a simple base image like ubuntu and then (via shell input) configure your container the way you'd like.
But can you share containers? If you can only share images with Docker then I'd understand why one would want every step of their container setup listed in a Dockerfile.
The reason I ask is that I imagine there is some amount of headache involved in porting shell commands, config file changes, etc. to correct Dockerfile syntax and getting them to work correctly. But as a novice with Docker I could be overestimating the difficulty of that task.
EDIT: I suppose another valid reason for having the Dockerfile with each setup step is documentation of the initial state of the container, as opposed to being given a container in a certain state without necessarily having a way to know everything that was done to it since its base image.

But can you share containers? If you can only share images with Docker then I'd understand why one would want every step of their container setup listed in a Dockerfile.
Strictly speaking, no. However, you can create a new image from an existing container using the docker commit command:
$ docker commit <container-name> <image-name>
This command will create a new image from the existing container, which you can then push to and pull from registries, export and import, and use to create new containers.
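For example (the container, image, and registry names here are placeholders, not from the question), the workflow might look like this:
$ docker commit my-container myregistry.example.com/my-image:v1
$ docker push myregistry.example.com/my-image:v1
$ docker run -d myregistry.example.com/my-image:v1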
The reason I ask is that I imagine there is some amount of headache involved in porting shell commands, config file changes, etc. to correct Dockerfile syntax and getting them to work correctly. But as a novice with Docker I could be overestimating the difficulty of that task.
If you're already using some other mechanism for automated configuration, you can simply integrate your existing automation into the Docker build. For instance, if you are already configuring your machines using shell scripts, simply add a build step to your Dockerfile that copies your install script into the image and executes it. In theory, this also works with configuration management utilities like Puppet, Salt and others.
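As a minimal sketch (the base image tag and the script name install.sh are assumptions, not something from the question):
FROM ubuntu:22.04
# copy an existing provisioning script into the image and run it at build time
COPY install.sh /usr/local/bin/install.sh
RUN chmod +x /usr/local/bin/install.sh && /usr/local/bin/install.sh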
EDIT: I suppose another valid reason for having the Dockerfile with each setup step is documentation of the initial state of the container, as opposed to being given a container in a certain state without necessarily having a way to know everything that was done to it since its base image.
True. As mentioned in the comments, there are clear advantages to having an automated and reproducible build of your image. If you build your containers manually and then create an image with docker commit, you don't necessarily know how to re-build the image at a later point in time (which may become necessary when you want to release a new version of your application or rebuild it on top of an updated base image).

Related

How to instruct docker or docker-compose to automatically build image specified in FROM

When processing a Dockerfile, how do I instruct docker build to build the image specified in FROM locally using another Dockerfile if it is not already available?
Here's the context. I have a large Dockerfile that starts from the base Ubuntu image, installs Apache, then PHP, then some custom configuration on top of that. Whether this is a good idea is another matter; let's assume the build steps cannot be changed. The problem is that every time I change anything in the config, everything has to be rebuilt from scratch, and this takes a while.
I would like to have a hierarchy of Dockerfiles instead:
my-apache : based on stock Ubuntu
my-apache-php: based on my-apache
final: based on my-apache-php
The first two images would be relatively static and can be uploaded to Docker Hub, but I would like to retain the option to build them locally as part of the same build process. Only one container will exist, based on the final image. Thus, putting all three as "services" in docker-compose.yml is not a good idea.
The only solution I can think of is to have a manual build script that, for each image, checks whether it is available on Docker Hub or locally, and if not, invokes docker build.
Are there better solutions?
I have found this article on automatically detecting dependencies between Dockerfiles and building them in the proper order:
https://philpep.org/blog/a-makefile-for-your-dockerfiles
The actual Makefile from Philippe's Git repo provides even more functionality:
https://github.com/philpep/dockerfiles/blob/master/Makefile
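The linked Makefile is more general; as a hand-rolled sketch of the same idea applied to the three-image hierarchy above (the directory names are assumptions, it always rebuilds locally in dependency order, and recipe lines must be indented with a tab):
all: final

my-apache:
	docker build -t my-apache ./my-apache

my-apache-php: my-apache
	docker build -t my-apache-php ./my-apache-php

final: my-apache-php
	docker build -t final ./final

.PHONY: all my-apache my-apache-php final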

Persisting changes to Windows Registry between restarts of a Windows Container

Given a Windows application running in a Docker Windows container that makes changes to the Windows Registry while it runs, is there a Docker switch/command that allows those registry changes to be persisted, so that when the container is restarted the changed values are retained?
As a comparison, file changes can be persisted between container restarts by exposing mount points e.g.
docker volume create externalstore
docker run -v externalstore:C:\data microsoft/windowsservercore
What is the equivalent feature for Windows Registry?
I think you're after dynamic changes (each run of the container produces different user keys that you want to save for the next run), like a roaming profile, rather than a static set of registry settings, but I'm answering for the static case as it's the easier and more likely answer.
It's worth noting the distinction between a container and an image.
Images are static templates.
Containers are started from images, and while they can be stopped and restarted, in most enterprise designs (such as with Kubernetes) you usually throw them away entirely after each execution.
If you wish to run a docker container like a VM (not generally recommended), stopping and starting it, your registry settings should persist between runs.
It's possible to convert a container to an image by using the docker commit command. In this method, you would start the container, make the needed changes, then commit the container to an image. New containers would be started from the new image. While this is possible, it's not really recommended for the same reason that cloning a machine or upgrading an OS is not. You will get extra artifacts (files, settings, logs) that you don't really want in the image. If this is done repeatedly, it'll end up like a bad photocopy.
A better way to make a static change is to build a new image using a Dockerfile. You'll need to read up on that (beyond the scope of this answer), but essentially you're writing a build script that applies changes on top of an existing Docker image and saves the result as a new image (done with docker build). The advantage of this is that it's cleaner, more repeatable, and each step of the build process is layered. Layers are advantageous for space savings: an image made from a windowsservercore base plus an application layer, when copied to another machine that already has the windowsservercore base, only takes up the additional space of the application layer.
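A minimal sketch of that approach (the registry key, value name, and data are placeholders; the base image is the one from the question):
FROM microsoft/windowsservercore
# bake a static registry setting into the image at build time
RUN reg add "HKLM\Software\MyApp" /v SomeSetting /t REG_SZ /d SomeValue /f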
If you want to repeatedly create containers and apply consistent settings to them but without building a new image, you could do a couple things:
Mount a volume with a script and set the entrypoint of the container/image to run that script. The script could import the registry settings and then kick off whatever application you were originally using as the entrypoint; note that the script would need to be a continuous loop. The MS SQL Developer image is a good example: https://github.com/Microsoft/mssql-docker/tree/master/windows/mssql-server-windows-developer. The script could also export the settings you want. I'm not sure there's an easy way to detect "shutdown" and have it run at that point, but you could easily set it to run in a loop writing continuously to the mounted volume (see the sketch after this list).
Leverage a control system such as Docker Compose or Kubernetes to handle the settings for you (not sure offhand how practical this is for registry settings)
Have the application set the registry settings
Open ports to the container which allow remote management of the container (not recommended for security reasons)
Mount a volume where the registry files are located in the container (I'm not certain where these are or if this will work correctly)
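As a rough sketch of the first option above, a hypothetical PowerShell wrapper used as the container's entrypoint (all paths, key names, the interval, and the application command are assumptions):
# entrypoint.ps1 - restore saved settings from the mounted volume, start the app,
# then periodically export the keys back to the volume
if (Test-Path 'C:\data\settings.reg') {
    reg import 'C:\data\settings.reg'
}
Start-Process -FilePath 'C:\app\myapp.exe'
while ($true) {
    reg export 'HKLM\Software\MyApp' 'C:\data\settings.reg' /y
    Start-Sleep -Seconds 60
}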
TL;DR: You should make a new image using a dockerfile for static changes. For dynamic changes, you will probably need to use some clever scripting.

Docker multiple inheriting entry_point execution

I have the following Dockerfile:
FROM gitlab-registry.foo.ru/project/my_project
FROM aerospike/aerospike-server
Both the first and the second base image above have an ENTRYPOINT.
As is known, only one ENTRYPOINT will be executed. Is there a way to run all of the parents' ENTRYPOINTs?
Is it correct that I can use Docker Compose for tasks like this?
From the comments above, there's a fundamental misunderstanding of what Docker is doing. A container is an isolated process. When you start a Docker container, it starts a process for your application, and when that process exits, the container exits. A good best practice is one application per container. Even though there are ways to launch multiple programs, I wouldn't recommend them, as they complicate health checks, upgrades, signal handling, logging, and failure detection.
There is no clean way to merge multiple images together. In the Dockerfile you listed, you defined a multi-stage build that could have been used to copy files from the first stage into the final stage. The resulting image will be the last FROM section, not a merge of the two images. The typical use of multi-stage builds is to replace separate compile images or external build processes: a single build command uses a compiler image and a runtime image and outputs the application inside the runtime image. This is very different from what you're looking for.
The preferred method to run multiple applications in Docker is as multiple containers from different images, using Docker networking to connect them together. You'll want to start with a docker-compose.yml, which can be used either by docker-compose on a standalone Docker engine or with docker stack deploy to use the features of swarm mode.
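For example, a minimal docker-compose.yml along these lines (the service names and the published port are assumptions; the image names come from your Dockerfile) runs each image as its own container on a shared network:
version: "3"
services:
  my_project:
    image: gitlab-registry.foo.ru/project/my_project
  aerospike:
    image: aerospike/aerospike-server
    ports:
      - "3000:3000"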
The simple answer is no.
Your Dockerfile uses Docker multi-stage builds, which are used to transfer dependencies from one image to another. The last FROM statement is the base image for the resulting image.
Only the ENTRYPOINT from that base image is inherited. You need to explicitly set the ENTRYPOINT if you want a different one from the one specified in the base image coming from the last FROM instruction.

Should I create multiple Dockerfiles for parts of my webapp?

I can't quite grasp the idea of connecting parts of a webapp via Dockerfiles.
Say I need a Postgres server, a Golang compiler, an nginx instance, and something else.
I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
Is it correct that I can put everything in one Dockerfile or should I create a separate Dockerfile for each dependency?
If I need to create a Dockerfile for each dependency, what's the correct way to create a merged image from them all and make all the parts work inside one container?
The current best practice is to have a single container perform one function. This means that you would have one container for nginx and another for your app. Each could be defined by its own Dockerfile. Then, to tie them all together, you would use docker-compose to define the dependencies between them.
A Dockerfile describes a Docker image: one Dockerfile for each image you build and push to a Docker registry. There are no rules as to how many images you manage, but it does take effort to manage an image.
You shouldn't need to build your own Docker images for things like Postgres, Nginx, Golang, etc., as there are many official images already published. They are configurable, easy to consume, and can often be run with just a CLI command.
Go to the page for a Docker image and read the documentation. It usually explains what mounts it supports, what ports it exposes, and what you need to do to get it running.
Here's nginx:
https://hub.docker.com/_/nginx/
You use docker-compose to connect multiple Docker images together. It makes it easy to docker-compose up an entire server stack with one command.
Explaining how to use docker-compose is like explaining how to use Docker: it's a big topic, but I'll address the key point of your question.
Say I need a Postgres server, a Golang compiler, an nginx instance, and something else. I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
No, you don't describe those things with a Dockerfile. Here's the problem with trying to answer your question: you might not need a Dockerfile at all!
Without knowing the specific details of what you're trying to build we can't tell you if you need your own docker images or how many.
You can, for example, deploy a running LAMP server using nothing but published images from Docker Hub; you would just mount the folder with your PHP source code and you're done.
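As a rough sketch of that idea (the image tags, the host source path, and the password are placeholders only):
version: "3"
services:
  web:
    image: php:8.2-apache        # official image from Docker Hub
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html      # mount the folder with your PHP source code
  db:
    image: mysql:8.0             # official image from Docker Hub
    environment:
      MYSQL_ROOT_PASSWORD: example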
So the key here is that you need to learn how to use docker-compose. Only after learning what it cannot do will you know what work is left for you to fill in the gaps.
It's better to come back to Stack Overflow with specific questions like "how do I run the Golang compiler on the command line via Docker".

Recreating Docker images instead of reusing - for microservices

One microservice lives in one Docker container. Now, let's say that I want to upgrade the microservice - for example, some configuration has changed and I need to re-run it.
I have two options:
I can try to re-use the existing image by having a script that runs on container startup and updates the microservice by reading the new config (if there is one) from some shared volume. After the update, the script runs the microservice.
I can simply drop the existing image and container and create a new image (with a new name) and a new container with the updated configuration/code.
Solution #2 seems more robust to me. There is no 'update' procedure, just single container creation.
However, what bothers me is whether this re-creation of the image has bad side effects, like a lot of dangling images or something similar. Imagine that this may happen very often while the user plays with the app - for example, a developer trying something out wants to play with different configurations of the microservice and will restart it often. But once it is configured, this will not change. Also, when I say configuration I don't mean just config files, but also user code etc.
For production changes you'll want to deploy a new image for any change to the files. This ensures your process is repeatable.
However, developing by making a new image every time you write a new line of code would be a nightmare. The best option is to run your Docker container with the source directory from your file system mounted into the container. That way, when you make changes in your editor, the code in the container updates too.
You can achieve this like so:
docker run -v /Users/me/myapp:/src myapp_image
That way you only have to build myapp_image once and can easily make changes thereafter.
Now, if you have a running container with no mount and you want to make changes to the files, you can do that too. It's not recommended, but it's easy to see why you might want to.
If you run:
docker exec -it <my-container-id> bash
This will put you into the container and you can make changes in vim/nano/editor of your choice while you're inside.
Your option #2 is definitely preferable for a production environment. Ideally you should have some automation around this process, typically to perform something like a blue-green deployment where you replace containers based on the old image, one by one, with those from the new image, testing as you go. Only when you are satisfied with the new deployment do you clean up the containers from the previous version and remove the old image; that way you can quickly roll back to the previous version if needed.
In a development environment you may want to take a different approach, where you bind mount the application into the container at runtime, allowing you to make updates dynamically without having to rebuild the image. There is a nice example in the Docker Compose docs that illustrates how you can have a common base Compose YAML file and then extend it so that you get different behavior in development and production scenarios.
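A condensed sketch of that pattern (the service name, image tag, and paths are placeholders): the base docker-compose.yml defines the image, and a docker-compose.override.yml, which Compose merges in automatically when present, adds the bind mount for local development. A production deployment would instead pass an explicit -f file list that omits the override.
# docker-compose.yml (base)
services:
  app:
    image: myapp:latest

# docker-compose.override.yml (development only)
services:
  app:
    volumes:
      - ./src:/app/src    # bind mount the source for live edits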
