I'd like to build and operate containers using only docker-compose. Is it possible to do this while still using all the features of a Dockerfile, but without actually having one?
IMHO no, but I'm not sure whether that's true.
For example: are there any replacements for the Dockerfile's RUN, ADD, and COPY instructions in docker-compose.yaml? I can't find any.
In general, no: the Dockerfile says how to build an image, and the docker-compose.yml says how to run it. There are various things that can only be done in one place or the other.
In some cases you can simulate things a Dockerfile might do with docker-compose.yml directives (for example, volumes: can mount content into a running container, which looks similar to COPYing that content into an image), but they are not the same, and you generally can't use exclusively one of the two unless you can describe your application using only prebuilt images.
I'd tend to recommend the Dockerfile COPY and CMD directives over trying to use only the docker-compose.yml equivalents. Environment variables, on the other hand, are often runtime configuration ("what is the host name of the database server?"), and those make more sense to specify in the docker-compose.yml or other runtime configuration.
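As a rough sketch of that split (the base image, file names, and the DB_HOST variable are just placeholders, not anything from your project), build-time steps go in the Dockerfile:

FROM python:3.11
WORKDIR /app
# build-time: bake the code and its dependencies into the image
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]

and runtime configuration goes in the docker-compose.yml:

services:
  app:
    build: .
    environment:
      # runtime configuration: where is the database?
      DB_HOST: database
  database:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example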
I've checked on SO but couldn't find an exhaustive answer.
My docker-compose.yml defines a few things, including volumes:
app:
  volumes:
    - "./:/app"
  ...
If I use docker run to start a container from the image, I need to specify the same volumes again that are already in docker-compose.yml.
docker run -v "./:/app"
That might be desirable for some use cases, but in general having the same definition specified in 2 different places is not really maintainable (or obvious for future devs). I'd like to avoid defining the same config in different locations (one for docker-compose and one as arguments for docker run).
Can it be stated that, if volumes (or other parameters) are configured inside docker-compose.yml, then the image should be run via docker-compose up rather than docker run -v redundant:volume:specification in order for them to be applied?
Note: I am asking about best practices more than personal opinions.
You should think of the docker-compose.yml as not unlike a very specialized shell script that runs docker run for you. It's not a bad idea to try to minimize the number of required mounts and options (for example, don't bind-mount code over the code in your image), but it's also not especially a best practice to say "this is only runnable via this docker-compose.yml file".
Also consider that there are other ways to run a container, with different syntaxes. Both Kubernetes and Hashicorp's Nomad have very different syntaxes, and can't reuse the docker-compose.yml. If you're running the image in a different context, you'll basically have to rewrite this configuration anyways.
In limited scopes – "for this development project, in this environment, in this specific repository" – it's reasonable enough to say "the standard way to run this with standard options is via docker-compose up", but it's still possible to run the image a different way if you need to.
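To make the "specialized shell script" analogy concrete, the app: service from the question corresponds roughly to a single docker run command like the following (the image and container names are hypothetical, and docker-compose also sets up networks and other details not shown here):

docker run -d \
  --name myproject_app_1 \
  -v "$(pwd):/app" \
  myproject_app

Note that docker run needs an absolute host path for the bind mount, hence $(pwd) rather than ./.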
In general, once you start using docker-compose you should keep relying on it, since plain docker <cmd> invocations might miss some configuration and give unexpected results (especially for someone freshly landing on the project without much confidence in it yet).
Running the images with docker run has the following disadvantages:
having to remember to add any extra parameters on each run, whereas with docker-compose they are implicit
even if you remember, or have a bash script calling docker run with the right parameters, future changes to those parameters will need to be reflected in two different places, which is hard to maintain and error prone
any other related containers will not be started, and you have to remember to run them manually or add them to a script, again ending up with definitions in two different places
However, for a broader view considering other runners (k8s) check David Maze's answer.
I want to deploy a docker application in a production environment (single host) using a docker-compose file provided by the application creator. The docker based solution is being used as a drop-in replacement for a monolithic binary installer.
The application ships with a default configuration but with an expectation that the administrator will want to apply moderate configuration changes.
There appear to be a few ways to apply custom configuration to the services that are defined in the docker-compose.yml file, however I am not sure which is considered best practice. The two I am considering at the moment are:
Bake the configuration into a new image. Here, I would add a build step for each service defined in the docker-compose file and create a minimal Dockerfile which uses COPY to replace the existing configuration files in the image with my custom config files. sed and echo in CMD statements could also be used to change configuration inline without replacing the files wholesale (see the sketch after this list).
Use a bind mount with configuration stored on the host. In this case, I would store all custom configuration files in a directory on the host machine and define bind mounts in the volumes parameter for each service in the docker-compose file.
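For illustration, option 1 for a given service would be something like this minimal Dockerfile (the image and file names here are placeholders, not the real ones from the application):

FROM vendor/app-service:1.0
# replace the default configuration shipped in the image with my custom one
COPY custom-app.conf /etc/app/app.conf

The corresponding service in docker-compose.yml would then switch from an image: entry to a build: entry pointing at the directory containing that Dockerfile.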
The first option seems the cleanest to me as the application is completely self-contained, however I would need to rebuild the image if I wanted to make any further configuration changes. The second option seems the easiest as I can make configuration changes on the fly (restarting services as required in the container).
Is there a recommended method for injecting custom configuration into Docker services?
Given your context, I think using a bind mount would be better.
A Docker image is supposed to be reusable in different contexts, and building an entire image solely for a specific configuration (i.e. environment) would defeat that purpose:
instead of the generic configuration provided by the base image, you create an environment-specific image
every time you need to change the configuration you'll need to rebuild the entire image, whereas with a bind mount a simple restart or a re-read of the configuration file by the application will be sufficient
The Docker documentation recommends:
Dockerfile best practice
You are strongly encouraged to use VOLUME for any mutable and/or
user-serviceable parts of your image.
Good use cases for bind mounts
Sharing configuration files from the host machine to containers.
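In practice the bind-mount approach could look something like this in the docker-compose.yml (the service, image and path names are placeholders):

services:
  app:
    image: vendor/app-service:1.0
    volumes:
      # custom configuration kept on the host, mounted read-only over the default
      - ./config/app.conf:/etc/app/app.conf:ro

Editing ./config/app.conf on the host and restarting the service is then enough to apply a change, with no image rebuild.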
I have the following Dockerfile:
FROM gitlab-registry.foo.ru/project/my_project
FROM aerospike/aerospike-server
Both the first and the second image have an ENTRYPOINT.
As is known, only one ENTRYPOINT will be executed. Is there a way to run the ENTRYPOINTs of all the parent images?
Is it correct that I can use Docker Compose for tasks like this?
From the comments above, there's a fundamental misunderstanding of what docker is doing. A container is an isolated process. When you start a docker container, it starts a process for your application, and when that process exits, the container exits. A good best practice is one application per container. Even though there are ways to launch multiple programs, I wouldn't recommend them, as it complicates health checks, upgrades, signal handling, logging, and failure detection.
There is no clean way to merge multiple images together. In the Dockerfile you listed, you defined a multi-stage build that could have been used to copy files from the first stage into the final stage. The resulting image will be the last FROM section, not a merge of the two images. The typical use of multi-stage builds is to replace separate compile images or external build processes: a single build command compiles the application in one stage and outputs it inside the runtime image. This is very different from what you're looking for.
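For reference, a multi-stage build that actually uses the first stage would copy something out of it, roughly like this (the copied path is purely hypothetical):

FROM gitlab-registry.foo.ru/project/my_project AS project
FROM aerospike/aerospike-server
# pull a single artifact out of the first stage into the final image
COPY --from=project /app/some-artifact /opt/some-artifact

The final image still has only the aerospike ENTRYPOINT; nothing from the first image runs.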
The preferred method to run multiple applications in docker is as multiple containers from different images, and using docker networking to connect them together. You'll want to start with a docker-compose.yml which can be used by either docker-compose on a standalone docker engine, or with docker stack deploy to use the features of swarm mode.
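A minimal sketch of that approach, using the two images from the question as separate services (the service names and the version pin are just illustrative):

version: "3"
services:
  my_project:
    image: gitlab-registry.foo.ru/project/my_project
  aerospike:
    image: aerospike/aerospike-server

Each container keeps its own ENTRYPOINT, and the services can reach each other by service name on the default network.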
The simple answer is no.
Your Dockerfile uses Docker multi-stage builds, which are used to transfer dependencies from one image to another. The last FROM statement is the base image for the resulting image.
Only the entrypoint from that base image (the one in the last FROM instruction) is inherited. You need to explicitly set the entrypoint if you want a different one.
I can't grasp the idea of connecting parts of a web app via Dockerfiles.
Say, I need Postgres server, Golang compiler, nginx instance and something else.
I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
Is it correct that I can put everything in one Dockerfile or should I create a separate Dockerfile for each dependency?
If I need to create a Dockerfile for each dependency, what's the correct way to create a merged image from them all and make all the parts work inside one container?
The current best practice is to have a single container perform one function. This means that you would have one container for nginx and another for your app. Each could be defined by its own Dockerfile. Then, to tie them all together, you would use docker-compose to define the dependencies between them.
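A rough sketch of that layout, with one container per function (the app's build context, the ports, and the credentials are placeholders):

services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
  app:
    build: .           # your Go application, built from its own Dockerfile
    depends_on:
      - db
  nginx:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - app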
A Dockerfile describes your Docker image: one Dockerfile for each image you build and push to a Docker registry. There are no rules as to how many images you manage, but it does take effort to manage each one.
You shouldn't need to build your own Docker images for things like Postgres, Nginx, Golang, etc., as there are many official images already published. They are configurable, easy to consume and can often be run with just a CLI command.
Go to the page for a Docker image and read the documentation. It usually explains what mounts it supports, what ports it exposes and what you need to do to get it running.
Here's nginx:
https://hub.docker.com/_/nginx/
You use docker-compose to connect together multiple docker images. It makes it easy to docker-compose up an entire server stack with one command.
Explaining how to use docker-compose is like trying to explain how to use docker. It's a big topic, but I'll address the key point of your question.
Say, I need Postgres server, Golang compiler, nginx instance and something else. I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
No, you don't describe those things with a Dockerfile. Here's the problem with trying to answer your question: you might not need a Dockerfile at all!
Without knowing the specific details of what you're trying to build we can't tell you if you need your own docker images or how many.
You can, for example, deploy a running LAMP server using nothing but published images from Docker Hub. You would just mount the folder with your PHP source code and you're done.
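For instance, a minimal LAMP-style stack built only from published images might look like this (paths and credentials are placeholders):

services:
  web:
    image: php:apache
    ports:
      - "80:80"
    volumes:
      # your PHP source code, mounted from the host
      - ./src:/var/www/html
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example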
So the key here is that you need to learn how to use docker-compose. Only after learning what it cannot do will you know what work is left for you to fill in the gaps.
It's better to come back to Stack Overflow with specific questions like "how do I run the Golang compiler on the command line via docker".
I've built a simple Docker Compose project as a development environment. I have PHP-FPM, Nginx, MongoDB, and Code containers.
Now I want to automate the process and deploy to production.
The docker-compose.yml can be extended and can define multiple environments. See https://docs.docker.com/compose/extends/ for more information.
However, there are also Dockerfiles for my containers, and the dev environment needs more packages than production does.
The main question is: should I use separate Dockerfiles for dev and prod and manage them via docker-compose.yml and production.yml?
Separate Dockerfiles are an easy approach, but there is code duplication.
The other solution is to use environment variables and handle them somehow from a bash script (maybe as an entrypoint?).
I am searching for other ideas.
According to the official docs:
... you’ll probably want to define a separate Compose file, say production.yml, which specifies production-appropriate configuration.
Note: The extends keyword is useful for maintaining multiple Compose files which re-use common services without having to manually copy and paste.
In docker-compose version >= 1.5.0 you can use environment variables; maybe this suits you?
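For example, with variable substitution in the compose file (the APP_IMAGE_TAG variable here is made up), the same file can point at different images per environment:

web:
  image: "myapp:${APP_IMAGE_TAG}"

APP_IMAGE_TAG=dev docker-compose up would then use the development build, and APP_IMAGE_TAG=latest the production one.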
If the packages needed for development aren't too heavy (i.e. the image size isn't significantly bigger), you could just create Dockerfiles that include all the components and then decide whether to activate them based on the value of an environment variable in the entrypoint.
That way you could have the main docker-compose.yml providing the production environment, while development.yml would just add the correct environment variable value where needed.
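A rough sketch of that idea (the ENABLE_DEV_TOOLS flag, the entrypoint script, and the start-dev-tools command are all hypothetical):

#!/bin/sh
# entrypoint.sh -- set as the image's ENTRYPOINT
if [ "$ENABLE_DEV_TOOLS" = "1" ]; then
    # start file watchers, debuggers, etc. (placeholder command)
    start-dev-tools &
fi
exec "$@"

# development.yml -- layered on top of the main file with
# docker-compose -f docker-compose.yml -f development.yml up
web:
  environment:
    - ENABLE_DEV_TOOLS=1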
In this situation it might be worth considering using an "onbuild" image to handle the commonalities among environments, then using separate images to handle the specifics. Some official images have onbuild versions, e.g., Node. Or you can create your own.
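A sketch of the onbuild idea, loosely mirroring the old node:onbuild pattern (the image names are made up): a shared base image declares ONBUILD steps that run whenever an environment-specific image is built FROM it.

# Dockerfile for the shared base image, pushed as e.g. myorg/app-base:onbuild
FROM node:8
WORKDIR /usr/src/app
ONBUILD COPY package.json .
ONBUILD RUN npm install
ONBUILD COPY . .
CMD ["npm", "start"]

# Dockerfile.dev -- only the development-specific extras live here
FROM myorg/app-base:onbuild
RUN npm install --global nodemon    # dev-only tooling, as an example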