Currently, I use Docker Compose to start multiple containers in one shot. I have containers started and running already, but when running docker-compose up -d, I want to exclude some containers while bringing the others up or down.
Use the following to exclude specific services:
docker-compose up --scale <service name>=0
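For example (the service names here are assumptions, not from the question), a hypothetical project with web, db and worker services can be brought up without the worker:

```shell
# Assume docker-compose.yml defines three services: web, db, worker.
# Giving "worker" zero replicas means no container is created for it,
# while web and db start normally:
docker-compose up -d --scale worker=0
```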
I think you have to go the "other way" around. You can start single containers from your docker-compose.yml via:
docker-compose up -d --no-deps ServiceName
If you're looking to exclude some containers because they are not related to the Compose project, you might be interested in dobi.
dobi lets you define the images and containers (run resources) used to build and run your application. It also has a compose resource for starting the Compose project.
So using dobi you would only put the containers you want to run together into the docker-compose.yml, and the other containers would be just in the dobi.yml.
I'm new to the world of containers, especially when it comes to Docker Compose. I'm confused about some concepts and can't find information about them.
Basically, I want to know if I can handle the settings in different "docker-compose.yml" files in an isolated manner. To explain better: I would like to know if I can bring up or stop the resources referring to a specific "docker-compose.yml" individually.
PLUS:
To better explain my doubt, I'll show some assumptions about what I'm trying to do.
It seems to me that it is possible to have multiple configurations for Docker Compose using different ".yml" files like the example below...
EXAMPLE
docker-compose -f docker-compose.a.yml -f docker-compose.b.yml up -d
... and that I can also handle each of these settings individually, such as stopping all the resources referring to a specific docker-compose.yml...
EXAMPLE
docker-compose -f docker-compose.b.yml stop
[Ref(s).: https://gabrieltanner.org/blog/docker-compose#using-multiple-docker-compose-files , https://stackoverflow.com/q/29835905/3223785 , https://stackoverflow.com/questions/42287426/docker-multiple-environments , https://runnable.com/docker/advanced-docker-compose-configuration ]
Yes, it is possible. I'm not exactly sure what you are trying to do, but to be able to manage the services using the -f option the way you described, there shouldn't be a service with the same name in multiple files.
For example, if you have a service called db in docker-compose.a.yml and another db service in docker-compose.b.yml, the following command will only build one container for the db service:
docker-compose -f docker-compose.a.yml -f docker-compose.b.yml up -d
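To sketch what happens (the file contents here are hypothetical): when the same service name appears in several -f files, Compose merges the definitions, with values from later files overriding earlier ones, so only one container is created:

```yaml
# docker-compose.a.yml (hypothetical)
services:
  db:
    image: postgres:13

# docker-compose.b.yml (hypothetical)
services:
  db:
    image: postgres:14

# With both -f flags, the two db definitions are merged; the later
# file wins, so a single db container is created from postgres:14.
```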
Take a look at the -p option. It creates a project with the services isolated inside it. You can then manage them using the following commands with the same docker-compose.yml file:
docker-compose -p foo up -d
docker-compose -p foo stop [service_name]
Yes, you can.
It is just a matter of preference, but I usually create a folder for every project I have; each of them has a unique docker-compose.yml file in it with all its dependencies (frontend / database / redis).
Then, to start a specific project, I just go inside its folder and run docker-compose up. It then only starts that project without touching the others.
You can also type this if you only want to start redis:
docker-compose up redis
All docker-compose subcommands (up, stop, down...) are executed against a docker-compose<.SOME_OPT_VAL>.yml file.
This docker-compose.yml file must be in the folder where the docker-compose command is executed, or must be passed via the -f flag. The subcommands are then executed on the "services" (resources) defined in that file.
There is also the possibility of defining the service on which a certain subcommand will be executed...
MODELS
docker-compose <SUBCOMMAND> <OPT_SERVICE_NAME>
docker-compose -f <DOCKER_COMPOSE_YML> <SUBCOMMAND> <OPT_SERVICE_NAME>
EXAMPLES
docker-compose stop api
docker-compose -f docker-compose.yml stop api
I'm new to docker and trying to understand what's best for my project (a webapp).
So far, I understand that I can either:
use docker-compose up -d to start a container defined by a set of rules in a docker-compose.yaml
build an image from a dockerfile and then create a container from this image
If I understand correctly, docker-compose up -d allows me (via volumes) to mount files (e.g. my application) into the container. If I build an image, however, I am able to embed my application natively in it (with a Dockerfile and a COPY instruction).
Is my understanding correct? How should I choose between these two options?
Docker Compose is simply a convenience wrapper around the docker command.
Everything you can do in docker compose, you can do plainly with running docker.
For example, these docker commands:
$ docker build -t temp .
$ docker run -i -p 3000:80 -v $PWD/public:/docroot/ temp
are similar to having this docker compose file:
version: '3'
services:
  web:
    build: .
    image: temp
    ports: ["3000:80"]
    volumes:
      - ./public:/docroot
and running:
$ docker-compose up web
Although Docker Compose's advantages are most obvious when using multiple containers, it can also be used to start a single container.
My advice to you is: Start without docker compose, to understand how to build a simple image, and how to run it using the docker command line. When you feel comfortable with it, take a look at docker compose.
As for the best practice in regards to copying files to the container, or mounting them - the answer is both, and here is why:
When you are in development mode, you do not want to build the image on every code change. This is where the volume mount comes into play. However, your final docker image should contain your code so it can be deployed anywhere else. After all, this is why we use containers right? This is where the COPY comes into play.
Finally, remember that when you mount a volume to the container, it will "shadow" the contents of that folder in the container - this is how using both mount and COPY actually works as you expect it to work.
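A minimal sketch of using both together (the image name and paths are assumptions, modeled on the earlier example):

```
# Dockerfile -- COPY bakes the code into the image for deployment
FROM nginx:alpine
COPY ./public /docroot/

# docker-compose.override.yml -- used only in development; the bind
# mount "shadows" the COPY'd files, so local edits are visible in the
# container without rebuilding the image
services:
  web:
    volumes:
      - ./public:/docroot
```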
Docker Compose is just a container orchestrator.
It provides a simple way to create multiple related containers. The relationship between containers can be volumes, networks, start order, environment variables, etc.
Behind the scenes, docker-compose uses plain docker. So, anything you can do using docker-compose (mounting volumes, using custom networks, scaling) can be done using docker commands (but of course it is harder).
Is there a docker command which works like the vagrant up command?
I'd like to use the arangodb docker image and provide a Dockerfile for my team without forcing my teammates to get educated on the details of its operation; it should 'just work'. Within the project root, I would expect the database to start and stop with a standard docker command. Does this not exist? If so, why not?
Docker Compose could do it.
docker-compose up builds the images (if needed), creates the containers and starts them.
docker-compose stop stops the containers.
docker-compose start starts them again.
docker-compose down stops and removes the containers (and the networks Compose created); with the --rmi flag it also removes the images.
With a Docker Compose file you can configure ArangoDB (expose ports, volume mapping for db initialisation, etc.). Place the compose file in the project root and run the up command.
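A minimal sketch of such a compose file, assuming the official arangodb image (which requires a root password variable such as ARANGO_ROOT_PASSWORD) and ArangoDB's default port 8529:

```yaml
version: '3'
services:
  arangodb:
    image: arangodb:latest
    ports:
      - "8529:8529"                     # ArangoDB's default HTTP port
    environment:
      - ARANGO_ROOT_PASSWORD=changeme   # placeholder; pick a real secret
    volumes:
      - arangodb_data:/var/lib/arangodb3
volumes:
  arangodb_data:
```

Teammates then only need docker-compose up -d to start the database and docker-compose down to stop it.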
I already have a running container for both postgres and redis in use for various things. However, I started those from the command line months ago. Now I'm trying to install a new application and the recipe for this involves writing out a docker compose file which includes both postgres and redis as services.
Can the compose file be modified in such a way as to specify the already-running containers? Postgres already does a fine job of siloing any of the data, and I can't imagine that it would be a problem to reuse the running redis.
Should I even reuse them? It occurs to me that I could run multiple containers for both, and I'm not sure there would be any disadvantage to that (other than a cluttered docker ps output).
When I set container_name to the names of the existing containers, I get what I assume is a rather typical error of:
cb7cb3e78dc50b527f71b71b7842e1a1c". You have to remove (or rename) that container to be able to reuse that name.
Followed by a few that complain that the ports are already in use (5432, 6379, etc.).
Other answers here on Stackoverflow suggest that if I had originally invoked these services from another compose file with the exact same details, I could do so here as well and it would reuse them. But the command I used to start them was somehow never written to my bash_history, so I'm not even sure of the details (other than name, ports, and restart always).
Are you looking for docker-compose's external_links keyword?
external_links allows you to reuse already running containers.
According to docker-compose specification:
This keyword links to containers started outside this docker-compose.yml or even outside of Compose, especially for containers that provide shared or common services. external_links follow semantics similar to the legacy option links when specifying both the container name and the link alias (CONTAINER:ALIAS).
And here's the syntax:
external_links:
- redis_1
- project_db_1:mysql
- project_db_1:postgresql
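In context, that would look something like this (the app service and its image are assumptions):

```yaml
version: '2'
services:
  app:
    image: myapp:latest            # assumed image name
    external_links:
      - redis_1                    # existing container, linked under its own name
      - project_db_1:postgresql    # existing container, aliased as "postgresql"
```

Note that with Compose file format version 2 and later, the externally-created containers must share at least one network with the service linking to them.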
You can give your container a name. If there is no container with the given name, it is the first time the image is run. If the named container is found, restart it.
This way, you can reuse the container. Here is my sample script:
containerName="IamContainer"
if docker ps -a --format '{{.Names}}' | grep -Eq "^${containerName}\$"; then
  docker restart ${containerName}
else
  docker run --name ${containerName} -d hello-world
fi
You probably don't want to keep using a container that you don't know how to create. However, the good news is that you should be able to figure out how you can create your container again by inspecting it with the command
$ docker container inspect ID
This will display all settings, the docker-compose specific ones will be under Config.Labels. For container reuse across projects, you'd be interested in the values of com.docker.compose.project and com.docker.compose.service, so that you can pass them to docker-compose --project-name and use them as the service's name in your docker-compose.yaml.
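For example, the labels can be extracted directly with a Go template (my_container is a placeholder name):

```shell
# Print only the Compose-related labels of an existing container:
docker container inspect my_container \
  --format '{{json .Config.Labels}}'

# Keys of interest:
#   com.docker.compose.project  -> value for docker-compose --project-name
#   com.docker.compose.service  -> the service name to use in docker-compose.yaml
```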
Is there any possibility to launch containers of different images simultaneously from a single "Dockerfile"?
There is a misconception here. A Dockerfile is not responsible for launching a container. It's responsible for building an image (which you can then use docker run ... to create a container from). More info can be found on the official Docker documentation.
If you need to run many Docker containers simultaneously, I'd suggest you have a look at Docker Compose, which you can use to run containers based on images either from the Docker registry or custom-built via Dockerfiles.
Also somewhat new to Docker, but my understanding is that the Dockerfile is used to create Docker images, and then you start containers from images.
If you want to run multiple containers, you need to use an orchestrator like Docker Swarm or Kubernetes.
Those have their own configuration files that tell them which images to spin up.