How to skip executing a single service with docker-compose?

I have a docker-compose.yml file with a set of services inside. In some cases, for development, I don't want docker to start one of the services (a helper one) when I run docker-compose up.
Is this possible, and if so, how?

As far as I know this is not implemented yet; see the discussion here.
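That said, a common workaround is to name the services you do want on the command line: docker-compose up starts only the listed services (plus anything they depend_on), so the helper is simply left out. A sketch, assuming your services are called web, db and helper:

# start everything except the helper service (service names are placeholders)
docker-compose up web db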

Related

Is there a way to specify the time duration for which services should run using a compose file?

I have deployed services using docker stack deploy on two nodes. I would like to know if there is a way to specify the duration of time for which each service should run, either in a compose file or in some other way (maybe an option while deploying the stack). Basically, the stack should be removed after the specified time.
Thank you in advance.
Using Docker Compose features alone, no. Docker Compose is intended to run and manage services; you cannot tell it "remove this stack after X time".
However, you can use another tool (such as a cron job or an external script) to call docker-compose down or stop on your stack after it has run for the specified time, or a more complex task manager that leverages Compose itself.
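For example, a minimal wrapper script along those lines might look like this (the stack name and duration are placeholders):

# deploy the stack, keep it up for an hour, then remove it
docker stack deploy -c docker-compose.yml mystack
sleep 3600
docker stack rm mystack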

Doubts about docker

I'm new to docker and have some doubts.
In a dev environment (not a server), is it better to use just one container with Apache, PHP and MySQL, for example, built from a single Dockerfile, or is it better to use one container per service and tie them together with docker-compose?
I have done it here with docker-compose, but I don't know if it is the best way; it seems like unnecessary complexity to me, but I'm a newbie.
I have the following situation: I work with Magento, and it is a common need to have a clean installation for isolating modules and testing. So I want to create my Magento 2 docker environment, which has just a clean Magento plus some easy way of putting my module files inside for testing; on shutdown, the environment should go back to a clean Magento 2 installation, without my files. What is the best way to get this environment?
Thanks in advance.
I'd certainly recommend using a docker stack (defined in a docker-compose.yml) rather than trying to spin up a whole application stack inside a single container. Generally, you should have one service per container.
I believe what you are looking for in the second part of your question is a deployment orchestration tool. Docker does not replace deployment orchestration, but you can run shell scripts that do application setup in the Dockerfiles that build the images used in your stack.
As for access to files inside your containers, I'd look into docker volumes.
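For example, a bind-mounted volume keeps your module source on the host, so removing the container brings you back to the clean base image. A minimal sketch, assuming a hypothetical my-magento2-base image and illustrative paths:

services:
  magento:
    image: my-magento2-base   # placeholder: your clean Magento 2 image
    ports:
      - "8080:80"
    volumes:
      # host module dir mounted into Magento's code path
      - ./my-module:/var/www/html/app/code/MyVendor/MyModule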

Should I create multiple Dockerfiles for parts of my webapp?

I cannot get my head around the idea of connecting parts of a webapp via Dockerfiles.
Say, I need a Postgres server, a Golang compiler, an nginx instance and something else.
I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
Is it correct that I can put everything in one Dockerfile or should I create a separate Dockerfile for each dependency?
If I need to create a Dockerfile for each dependency, what's the correct way to create a merged image from them all and make all the parts work inside one container?
The current best practice is to have a single container perform one function. This means you would have one container for nginx and another for your app. Each can be defined by its own Dockerfile. Then, to tie them all together, you would use docker-compose to define the dependencies between them.
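A minimal sketch of such a docker-compose.yml, with illustrative service names:

services:
  app:
    build: ./app        # built from the app's own Dockerfile
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - app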
A Dockerfile describes a single docker image: one Dockerfile for each image you build and push to a docker registry. There are no rules as to how many images you manage, but it does take effort to manage an image.
You shouldn't need to build your own docker images for things like Postgres, nginx, Golang, etc., as there are many official images already published. They are configurable, easy to consume and can often be run with just a CLI command.
Go to the page for a docker image and read the documentation. It often explains what mounts it supports, what ports it exposes and what you need to do to get it running.
Here's nginx:
https://hub.docker.com/_/nginx/
You use docker-compose to connect together multiple docker images. It makes it easy to docker-compose up an entire server stack with one command.
Explaining how to use docker-compose is like trying to explain how to use docker: it's a big topic, but I'll address the key point of your question.
Say, I need a Postgres server, a Golang compiler, an nginx instance and something else. I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
No, you don't describe those things with a Dockerfile. Here's the problem with trying to answer your question: you might not need a Dockerfile at all!
Without knowing the specific details of what you're trying to build we can't tell you if you need your own docker images or how many.
You can, for example, deploy a running LAMP server using nothing but published docker images from Docker Hub. You would just mount the folder with your PHP source code and you're done.
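A sketch of that idea, using only published images (the password and paths are placeholders):

services:
  web:
    image: php:8-apache
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html   # your PHP source code, mounted from the host
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example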
So the key here is that you need to learn how to use docker-compose. Only after learning what it cannot do will you know what work is left for you to fill in the gaps.
It's better to come back to Stack Overflow with specific questions like "how do I run the Golang compiler on the command line via docker".

How should I create a Dockerfile to run multiple services through docker-compose?

I'm new to Docker. I wanted to create a Dockerfile to start services like RabbitMQ, an ftp server and Elasticsearch, but I'm not sure where to start.
I have asked a similar question here: How should I create a Dockerfile to run more than one service in one instance?
There I learned to create separate containers: one for RabbitMQ, one for the ftp server and another for Elasticsearch, and to run them using a docker-compose file. You'll find my Dockerfile code there.
It would be great if someone could help me out with this. Thanks!
They are correct. Each container, and by extension each image, should be responsible for one concern, and that is typically mapped to a single process. So if you need to run more than one thing (more than one process, generally speaking, not strictly), then you most probably need to build separate images. One of the easiest and most recommended ways of creating an image is writing a Dockerfile. This is an extremely simple process, and most of it will be a copy-paste of the same commands you would have used to install that component.
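For illustration only, a Dockerfile can be as small as this (the base image and package name are placeholders, not a recipe for any of the services above):

FROM ubuntu:22.04
RUN apt-get update && apt-get install -y some-component
# run the component in the foreground as the container's single process
CMD ["some-component"]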
Once you have written the Dockerfile for each service, you build them using the docker build command, which produces the images.
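For example (the paths and image names are illustrative):

docker build -t myuser/ftp-server ./ftp-server
docker build -t myuser/worker ./worker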
When you run an image you get what is known as a container. Think of it roughly like this: the iso file is the image, and the actual running VM is the container.
Now you can use docker-compose to orchestrate these various containers so they can communicate with (or be isolated from) each other. A docker-compose.yml file is a plain text file in the YAML format that describes the relationships between the different components within the app. Apps can be made up of several services, like a webserver, appserver, search engine, database server, cache engine, etc. Each of these is a service and runs as a container, but it is not necessary to run everything as a container; some can remain running in the traditional way, on VMs or on bare-metal servers.
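A sketch of a docker-compose.yml for the services in this question (the image tags are illustrative choices, and the ftp image is assumed to be one you build yourself):

services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"      # management UI
  elasticsearch:
    image: elasticsearch:8.11.1
    environment:
      - discovery.type=single-node
  ftp:
    build: ./ftp           # assumes your own ftp-server Dockerfile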
I'll check your other post and add anything that's needed. But I hope this helps you get started at least.

Running several apps via docker-compose

We are trying to run two apps via docker-compose. These apps are (obviously) in separate folders, each of them having its own docker-compose.yml. On the filesystem it looks like this:
dir/app1/
-...
-docker-compose.yml
dir/app2/
-...
-docker-compose.yml
Now we need a way to compose these together, as they have some nitty-gritty integration via http.
The issue with the default docker-compose behaviour is that it treats all relative paths with respect to the folder it is being run in. So if you go to dir from the example above and run
docker-compose -f app1/docker-compose.yml -f app2/docker-compose.yml up
you'll be out of luck if either of your docker-compose.yml files uses relative paths to env files or anything else.
Here's the list of ways that actually work, but have their drawbacks:
1. Run those apps separately, and use networks.
This is described in full at Communication between multiple docker-compose projects.
I've tested that just now, and it works. Drawbacks:
- you have to mention the network in docker-compose.yml and push that to the repository some day, rendering the entire app un-runnable without the app that publishes the network;
- you have to come up with some clever way for those apps to actually wait for each other.
2. Use absolute paths. Well, it is just bad and does not need any elaboration.
3. Expose the ports you need on the host machine and make the apps talk to the host without knowing a thing about each other. That, too, is obviously meh.
So, the question is: how can one manage this task with just docker-compose?
Thanks to everyone for your feedback. Within our team we have agreed on the following solution:
Use networks & override
Long story short, your original docker-compose.yml files should not change a bit. All you have to do is create a docker-compose.override.yml next to each one, which publishes the network and hooks your services into it.
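A sketch of such an override file (the network and service names are illustrative):

# docker-compose.override.yml
services:
  web:
    networks:
      - shared
networks:
  shared:
    name: apps_shared      # create once with: docker network create apps_shared
    external: true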
So, whoever wants to have a standalone app runs
docker-compose -f docker-compose.yml up
But when you need to run the apps side by side, communicating with each other, you should go with
docker-compose -f docker-compose.yml -f docker-compose.override.yml up
