Docker multiple inherited ENTRYPOINT execution

I have the following Dockerfile:
FROM gitlab-registry.foo.ru/project/my_project
FROM aerospike/aerospike-server
Both of the parent images above define an ENTRYPOINT. As I understand it, only one ENTRYPOINT will be executed. Is there a way to run all of the parents' ENTRYPOINTs?
Is it correct that I can use Docker Compose for tasks like this?

From the comments above, there's a fundamental misunderstanding of what docker is doing. A container is an isolated process. When you start a docker container, it starts a process for your application, and when that process exits, the container exits. A good best practice is one application per container. Even though there are ways to launch multiple programs, I wouldn't recommend them, as it complicates health checks, upgrades, signal handling, logging, and failure detection.
There is no clean way to merge multiple images together. In the Dockerfile you listed, you defined a multi-stage build that could have been used to copy files from the first stage into the final stage. The resulting image will be the last FROM stage, not a merge of the two images. The typical use of multi-stage builds is to replace separate compile images or external build processes: a single build command uses a compiling stage to produce the application and outputs it inside the runtime image. This is very different from what you're looking for.
The preferred method to run multiple applications in docker is as multiple containers from different images, and using docker networking to connect them together. You'll want to start with a docker-compose.yml which can be used by either docker-compose on a standalone docker engine, or with docker stack deploy to use the features of swarm mode.
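For example, a minimal docker-compose.yml along these lines (the service names are illustrative, not taken from your images) starts both images as separate containers on a shared network, each keeping its own ENTRYPOINT:

version: "3"
services:
  my_project:
    image: gitlab-registry.foo.ru/project/my_project
  aerospike:
    image: aerospike/aerospike-server

The containers can then reach each other by service name over the default compose network.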

Simple answer is No.
Your Dockerfile uses Docker Multi-Stage builds which are used to transfer dependencies from one image to another. The last FROM statement is the base image for the resulting image.
Only the ENTRYPOINT from the base image, i.e. the one coming from the last FROM instruction, will be inherited. You need to set the ENTRYPOINT explicitly if you want a different one from the one specified in that base image.
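If what you actually need is files from the first image inside the final one, a multi-stage Dockerfile along these lines is the usual pattern (the copied path and the ENTRYPOINT value are placeholders, not something taken from these images):

FROM gitlab-registry.foo.ru/project/my_project AS source

FROM aerospike/aerospike-server
# copy whatever artifacts you need out of the first image (path is hypothetical)
COPY --from=source /app /app
# only this ENTRYPOINT runs; set it explicitly only if you want something other
# than the one defined by aerospike/aerospike-server
ENTRYPOINT ["/my-entrypoint.sh"]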

Related

Should `docker run` be avoided, to execute an image created with `docker-compose`?

I've checked on SO but couldn't find an exhaustive answer.
My docker-compose.yml defines a few things, including volumes:
app:
  volumes:
    - "./:/app"
  ...
If I use docker run to start a container from the image, then I will need to specify again the same volumes already specified in docker-compose.yml:
docker run -v "./:/app"
That might be desirable for some use cases, but in general having the same definition specified in two different places is not really maintainable (or obvious for future devs). I'd like to avoid defining the same config in different locations (one for docker-compose and one as arguments for docker run).
Can it be stated that, if volumes (or other parameters) are configured inside docker-compose.yml, then, in order to have them applied, the image should be run via docker-compose up rather than docker run -v redundant:volume:specification?
Note: I am asking about best practices more than personal opinions.
You should think of the docker-compose.yml as not unlike a very specialized shell script that runs docker run for you. It's not a bad idea to try to minimize the number of required mounts and options (for example, don't bind-mount code over the code in your image), but it's also not especially a best practice to say "this is only runnable via this docker-compose.yml file".
Also consider that there are other ways to run a container, with different syntaxes. Both Kubernetes and Hashicorp's Nomad have very different syntaxes, and can't reuse the docker-compose.yml. If you're running the image in a different context, you'll basically have to rewrite this configuration anyways.
In limited scopes – "for this development project, in this environment, in this specific repository" – it's reasonable enough to say "the standard way to run this with standard options is via docker-compose up", but it's still possible to run the image a different way if you need to.
In general, once you start using docker-compose you should rely on it, since just relying on plain docker <cmd> might miss some configuration and give unexpected results (especially for someone freshly landing on the project without much confidence in it).
Executing the images with docker run leads to the following disadvantages:
- having to remember to add any extra parameters on each run, parameters that are instead implicit with docker-compose;
- even when remembering them, or having a bash script calling docker run with the right parameters, future changes to those parameters will need to be reflected in two different places, which is not very maintainable and is error-prone;
- other related containers will not be started, and one has to remember to run them manually, or add them to a script, ending up again with definitions in two different places.
However, for a broader view considering other runners (k8s) check David Maze's answer.
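To make the duplication concrete, here is a rough sketch of the two ways to start the same container (the image name is a placeholder, and note that docker run wants an absolute host path where compose accepts the relative ./):

# one source of truth: the compose file
docker-compose up app

# the same container by hand: every option has to be repeated
docker run -v "$(pwd):/app" <image-name>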

Intro to Docker for FreeBSD Jail User - How and should I start the container with systemd?

We're currently migrating our room server to the cloud for reliability, but our provider doesn't have a FreeBSD option. Although I'm prepared to pay and upload a custom system image for deployment, I nonetheless want to learn how to start an application system instance using Docker.
In a FreeBSD jail, what I did was extract an entire base.txz directory hierarchy as system content into /usr/jail/app and run pkg -r /usr/jail/app install apache24 php perl; then I configured /etc/jail.conf to start the /etc/rc script in the jail.
I followed the official FreeBSD Handbook, and this is generally what I've worked out so far.
But Docker is another world entirely.
To build a Docker image, there are two options: a) import from a tarball, or b) use a Dockerfile. The latter lets you specify a CMD, which is the default command to run, but:
Q1. Why isn't it available from a)?
Q2. Where is information like CMD and ENV stored? In the image? In the container?
Q3. How do I start a GNU/Linux system in a container? Do I just run systemd and let it figure out the rest from its configuration? Do I need to pass it some special arguments or environment variables?
You should think of a Docker container as a packaging around a single running daemon. The ideal Docker container runs one process and one process only. Systemd in particular is so heavyweight and invasive that it's actively difficult to run inside a Docker container; if you need multiple processes in a container then a lighter-weight init system like supervisord can work for you, but that's usually an exception more than a standard packaging.
Docker has an official tutorial on building and running custom images which is worth a read through; this is a pretty typical use case for Docker. In particular, best practice is to write a Dockerfile that describes how to build an image and check it into source control. Containers should avoid having persistent data if they can (storing everything in an external database is ideal); if you change an image, you need to delete and recreate any containers based on it. If local data is unavoidable then either Docker volumes or bind mounts will let you keep data "outside" the container.
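As a rough, untested translation of the jail setup into a Dockerfile (the Debian package names, the foreground invocation and the ./site directory are assumptions to adapt):

FROM debian:bookworm-slim
# install roughly what pkg installed in the jail
RUN apt-get update \
 && apt-get install -y --no-install-recommends apache2 php libapache2-mod-php \
 && rm -rf /var/lib/apt/lists/*
# hypothetical directory with your site content
COPY ./site/ /var/www/html/
# run the daemon in the foreground; there is no rc or systemd inside the container
CMD ["apache2ctl", "-D", "FOREGROUND"]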
While Docker has several other ways to create containers and images, none of them are as reproducible. You should avoid the import, export, and commit commands; and you should only use save and load if you can't use or set up a Docker registry and are forced to move images between systems via a tar file.
On your specific questions:
Q1. I suspect the reason the non-docker-build paths for creating images don't easily let you specify things like CMD is just an implementation detail: if you look at the docker history of an image, you'll see that the CMD winds up being its own layer. Don't worry about it and use a Dockerfile.
Q2. The default CMD, any set ENV variables, and other related metadata are stored in the image alongside the filesystem tree. (Once you launch a container, it has a normal Unix process tree, with the initial process being pid 1.)
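If you want to see that metadata for yourself, docker image inspect prints the image configuration; for example (a stock image is used here purely as an illustration):

# Cmd and Env live under the image's Config section
docker image inspect --format '{{json .Config.Cmd}}' debian:bookworm
docker image inspect --format '{{json .Config.Env}}' debian:bookworm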
Q3. You don't "start a system in a container". Generally you run one process or service per container, and manage their lifecycles independently.

Should I create multiple Dockerfiles for parts of my webapp?

I cannot get my head around the idea of connecting the parts of a webapp via Dockerfiles.
Say, I need Postgres server, Golang compiler, nginx instance and something else.
I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
Is it correct that I can put everything in one Dockerfile or should I create a separate Dockerfile for each dependency?
If I need to create a Dockerfile for each dependency, what's the correct way to create a merged image from them all and make all the parts work inside one container?
The current best practice is to have a single container perform one function. This means that you would have one container for nginx and another for your app. Each can be defined by its own Dockerfile. Then, to tie them all together, you would use docker-compose to define the dependencies between them.
A Dockerfile describes one Docker image: one Dockerfile for each image you build and push to a Docker registry. There are no rules as to how many images you manage, but each image does take effort to maintain.
You shouldn't need to build your own Docker images for things like Postgres, nginx, Golang, etc., as there are many official images already published. They are configurable, easy to consume and can often be run with just a CLI command.
Go to the page for a Docker image and read the documentation. It usually explains what mounts it supports, what ports it exposes and what you need to do to get it running.
Here's nginx:
https://hub.docker.com/_/nginx/
You use docker-compose to connect together multiple docker images. It makes it easy to docker-compose up an entire server stack with one command.
Explaining how to use docker-compose is like trying to explain how to use docker: it's a big topic, but I'll address the key point of your question.
Say, I need Postgres server, Golang compiler, nginx instance and something else. I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
No, you don't describe those things with a Dockerfile. Here's the problem in trying to answer your question: you might not need a Dockerfile at all!
Without knowing the specific details of what you're trying to build we can't tell you if you need your own docker images or how many.
You can, for example, deploy a running LAMP server using nothing but published docker images from the Docker Hub. You would just mount the folder with your PHP source code and you're done.
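A hedged sketch of that LAMP setup might look like this (the image tags, the ./src mount and the throwaway password are assumptions to adapt):

version: "3"
services:
  web:
    image: php:8.2-apache          # published image with Apache and PHP built in
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html        # mount the folder with your PHP source code
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example # for local experiments only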
So the key here is that you need to learn how to use docker-compose. Only after learning what it cannot do will you know what work is left for you to fill in the gaps.
It's better to come back to Stack Overflow with specific questions like "how do I run the Golang compiler on the command line via docker".

How should I create Dockerfile to run multiple services through docker-compose?

I'm new to Docker. I wanted to create a Dockerfile to start services like RabbitMQ, an FTP server and Elasticsearch, but I'm not able to figure out where I should start.
I have asked a similar question here : How should I create a Dockerfile to run more than one services in one instance?
There I got to know that I should create different containers: one for RabbitMQ, one for the FTP server and another for Elasticsearch, and run them using a docker-compose file. You'll find my Dockerfile code there.
It will be great if someone can help me out with this thing. Thanks!
They are correct. Each container, and by extension each image, should be responsible for one concern, and that typically maps to a single process. So if you need to run more than one thing (or more than one process, generally speaking, not strictly), then you most probably need to build separate images. One of the easiest and recommended ways of creating an image is writing a Dockerfile. This is expected to be an extremely simple process, and most of it will be a copy-paste of the same commands you would have used to install that component.
Once you write the Dockerfiles for each service, you build them using the docker build command, which results in the images.
When you run an image you get what is known as a container. Think of it roughly like this: an ISO file is the image, and the actual VM or running machine is the container.
Now you can use docker-compose to orchestrate these various containers so they can communicate with (or be isolated from) each other. A docker-compose.yml file is a plain text file in YAML format that describes the relationship between the different components within the app. Apps can be made up of several services: web server, app server, search engine, database server, cache engine and so on. Each of these is a service and runs as a container, but it is also not necessary to run everything as a container; some can remain running in the traditional way, on VMs or on bare-metal servers.
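As a hedged starting point for the services you named, a docker-compose.yml might look roughly like this (the image tags, the single-node Elasticsearch setting and the ./ftp build context are assumptions to adapt, since there is no official FTP image):

version: "3"
services:
  rabbitmq:
    image: rabbitmq:3-management     # official image; management UI on port 15672
    ports:
      - "5672:5672"
      - "15672:15672"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.10
    environment:
      - discovery.type=single-node   # a single node is enough for development
    ports:
      - "9200:9200"
  ftp:
    build: ./ftp                     # build your FTP server from your own Dockerfile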
I'll check your other post and add if there is anything needed. But I hope this helps you get started at least.

Can you share Docker containers?

I have been trying to figure out why one might choose to add every "step" of their setup to a Dockerfile, which will create their container in a certain state.
The alternative in my mind is to just create a container from a simple base image like ubuntu and then (via shell input) configure your container the way you'd like.
But can you share containers? If you can only share images with Docker then I'd understand why one would want every step of their container setup listed in a Dockerfile.
The reason I ask is because I imagine there is some amount of headache involved with porting shell commands, file changes for configs, etc. to correct Dockerfile syntax and have them work correctly? But as a novice with Docker I could be overestimating the difficulty of that task.
EDIT: I suppose another valid reason for having the Dockerfile with each setup step is for documentation as to the initial state of the container. As opposed to being given a container in a certain state, but not necessarily having a way to know what all was done from the container's image base state.
But can you share containers? If you can only share images with Docker then I'd understand why one would want every step of their container setup listed in a Dockerfile.
Strictly speaking, no. However, you can create a new image from an existing container using the docker commit command:
$ docker commit <container-name> <image-name>
This command will create a new image from the existing container that you can push and pull from/to registries, export and import and create new containers from.
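For example (the container, image and registry names here are placeholders):
$ docker commit my-container my-image
$ docker tag my-image registry.example.com/team/my-image:1.0
$ docker push registry.example.com/team/my-image:1.0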
The reason I ask is because I imagine there is some amount of headache involved with porting shell commands, file changes for configs, etc. to correct Dockerfile syntax and have them work correctly? But as a novice with Docker I could be overestimating the difficulty of that task.
If you're already using some other mechanism for automated configuration, you can simply integrate your existing automation into the Docker build. For instance, if you are already configuring your images using shell scripts, simply add a build step to your Dockerfile that copies your install script into the container and executes it. In theory, this can also work with configuration management utilities like Puppet, Salt and others.
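For example, the integration can be as small as this sketch (the base image and the setup.sh script name are placeholders for whatever you already maintain):

FROM ubuntu:22.04
# reuse the shell script you already maintain instead of rewriting it as RUN lines
COPY setup.sh /tmp/setup.sh
RUN sh /tmp/setup.sh && rm /tmp/setup.sh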
EDIT: I suppose another valid reason for having the Dockerfile with each setup step is for documentation as to the initial state of the container. As opposed to being given a container in a certain state, but not necessarily having a way to know what all was done from the container's image base state.
True. As mentioned in the comments, there are clear advantages to having an automated and reproducible build of your image. If you build your containers manually and then create an image with docker commit, you don't necessarily know how to rebuild this image at a later point in time (which may become necessary when you want to release a new version of your application or rebuild the image on top of an updated base image).
