How to rename service in docker compose override?

Let's say I have a couple of services, web1 and web2, which I can spin up for prod or dev:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
Now let's say I do the same for a testing config. If my test yml were only changing the container name, for example,
version: '3.6'
services:
  web1:
    container_name: web1_test
  web2:
    container_name: web2_test
and I had my web services already running, then this would recreate the services, effectively replacing their containers with new ones bearing the new config (in this case a new name). But I'd rather it didn't; it would be nice to spin the test versions up and down without interfering with the originals.
A better experience would be
version: '3.6'
services:
  web1:
    service_name: web1_test
  web2:
    service_name: web2_test
then I could start the test versions and stop them without touching the originals.
docker-compose -f docker-compose.yml -f docker-compose.test.yml up -d web1_test web2_test
Is there any way to leave the original services up and spin up some new test instances with a simple config overlay?
Note: I'm currently using docker-compose run to meet my needs. In practice I'm also modifying env variables and ports, like so:
docker-compose -f Docker/docker-compose.yml -f Docker/docker-compose.dev.yml run -d --name web1_test -e VAR1=web1_test_var -p 5001:5000 web1
so I already know how to get it done; I'm asking more whether I'm missing a better way to accomplish the same thing. It would be nice to have the port, env, and name settings in a config file, wouldn't it?

Instead of using container_name per service, you could use different project names for the same docker-compose.yml via the -p, --project-name NAME flag.
docker-compose -f docker-compose.yml -p foo up -d
foo_web1_1
foo_web2_1
docker-compose -f docker-compose.yml -p bar up -d
bar_web1_1
bar_web2_1
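This also covers the env and port settings from the question: keep them in a test override and choose a separate project name at run time. A minimal sketch, assuming a docker-compose.test.yml like this (port and variable values are illustrative):

version: '3.6'
services:
  web1:
    environment:
      - VAR1=web1_test_var
    ports:
      - "5001:5000"

Then docker-compose -f docker-compose.yml -f docker-compose.test.yml -p test up -d creates test_web1_1 alongside the original containers, and the same set of flags with down instead of up -d removes only the test instances.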

Related

How to run multiple docker containers when running docker-compose up (gitlab-ci)

I need to deploy a new container each time I run "docker-compose up", because the container will run a SQL Server database in a GitLab pipeline for each merge request created in the repository.
Is there a flag that should be passed to do this? I know about --force-recreate, but it recreates the SAME container. I need every call to docker-compose up to create another container with the same configuration.
There is --scale SERVICE=NUM, but it is not what I need. Why? Because when I scale, I cannot control which host port Docker will grab and use.
How do I intend to do this? With an environment variable. Look:
docker-compose file
version: '2'
services:
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    container_name: ${CI_PIPELINE_ID}
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=${DATABASE_PASSWORD}
    ports:
      - "${CI_PIPELINE_ID}:1433"
my gitlab-ci:
stages:
  - database_deploy
  - build_and_test
  - database_stop

database_deploy:
  image: docker:latest
  stage: database_deploy
  services:
    - name: docker
  script:
    - apk add py-pip
    - pip install docker-compose==1.8.0
    - cd ./docker; docker-compose up -d; docker ps

build_and_test:
  image: maven:latest
  stage: build_and_test
  script:
    - mvn test -Dquarkus.test.profile=homolog
    - mvn checkstyle:check
  artifacts:
    paths:
      - target

database_stop: &database_stop
  image: docker:latest
  stage: database_stop
  services:
    - name: docker
  script:
    - docker stop $CI_PIPELINE_ID
    - docker rm -f $CI_PIPELINE_ID
    - docker ps

cleanup_deployment_failure:
  needs: ["build_and_test"]
  when: on_failure
  <<: *database_stop
Docker-compose groups your services in "projects". By default, the project name is the name of the directory that contains your docker-compose.yml file. When you run docker-compose up, docker-compose will create any containers in the project that don't already exist.
Since you want docker-compose up to create new containers every time -- with different configurations -- you need to tell docker-compose that it's running in a different project each time. You can do this with the --project-name (-p) flag.
For example, let's say I have this docker-compose.yml:
version: "3"
services:
web:
image: "alpinelinux/darkhttpd"
ports:
- "${HOSTPORT}:8080"
I can bring up multiple instances of this stack by setting HOSTPORT and specifying a project name for each invocation of docker-compose:
$ HOSTPORT=8081 docker-compose -p project1 up -d
$ HOSTPORT=8082 docker-compose -p project2 up -d
After running those two commands, we see:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
825ea98cca55 alpinelinux/darkhttpd "darkhttpd /var/www/…" 4 seconds ago Up 3 seconds 0.0.0.0:8082->8080/tcp, :::8082->8080/tcp project2_web_1
776c12d38bbb alpinelinux/darkhttpd "darkhttpd /var/www/…" 9 seconds ago Up 8 seconds 0.0.0.0:8081->8080/tcp, :::8081->8080/tcp project1_web_1
And I think that's exactly what you're looking for.
Note that with this configuration, you will need to specify the project name and a value for HOSTPORT every time you run docker-compose.
You can also set the project name using the COMPOSE_PROJECT_NAME environment variable. This means you can actually organize things using environment files.
We can reproduce the above behavior by creating project1.env with:
COMPOSE_PROJECT_NAME=project1
HOSTPORT=8081
And project2.env with:
COMPOSE_PROJECT_NAME=project2
HOSTPORT=8082
And then running:
$ docker-compose --env-file project1.env up -d
$ docker-compose --env-file project2.env up -d
As before, you'll need to provide --env-file every time you run docker-compose.
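Tying this back to the pipeline in the question: the pipeline ID can double as the project name, so every merge request gets its own stack. A sketch of just the script lines, assuming container_name is removed from the compose file (a fixed name would defeat per-pipeline containers):

database_deploy:
  script:
    - cd ./docker
    - docker-compose -p "pipeline-${CI_PIPELINE_ID}" up -d

database_stop:
  script:
    - cd ./docker
    - docker-compose -p "pipeline-${CI_PIPELINE_ID}" down

The host port stays unique per pipeline because the compose file already maps "${CI_PIPELINE_ID}:1433".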

Start particular service from docker-compose

I am new to Docker and have a docker-compose.yml containing many services, and I need to start one particular service. My docker-compose.yml file contains:
version: '2'
services:
  postgres:
    image: ${ARTIFACTORY_URL}/datahub/postgres:${BUILD_NUMBER}
    restart: "no"
    volumes:
      - /etc/passwd:/etc/passwd
    volumes_from:
      - libs
    depends_on:
      - libs
  setup:
    image: ${ARTIFACTORY_URL}/setup:${B_N}
    restart: "no"
    volumes:
      - ${HOME}:/usr/local/
I am able to bring up the whole docker-compose.yml file using the command:
docker-compose -f docker-compose.yml up -d --no-build
But I need to start only the "setup" service from the docker-compose file.
How can I do this?
It's very easy:
docker compose up <service-name>
In your case:
docker compose -f docker-compose.yml up -d setup
To stop the services afterwards, you don't need to specify the service name:
docker compose down
will do.
Little side note: if you are in the directory where the docker-compose.yml file is located, then docker-compose will use it implicitly; there's no need to add it as a parameter.
You need to provide it in the following situations:
the file is not in your current directory
the file name is different from the default one, e.g. myconfig.yml
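One more caveat: docker compose up setup also starts anything listed under the service's depends_on. If you want strictly that one service, Compose supports a --no-deps flag:

docker compose up -d --no-deps setup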
As far as I understand your question, you have multiple services in docker-compose but want to deploy only one.
docker-compose should be used for multi-container Docker applications. From the official docs:
Compose is a tool for defining and running multi-container Docker
applications.
IMHO, you should run your service image separately with docker run command.
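For completeness, a docker run equivalent of the setup service above would be a sketch like this, assuming the same variables are set in your shell:

docker run -d \
    -v "${HOME}":/usr/local/ \
    "${ARTIFACTORY_URL}/setup:${B_N}"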
PS: If you are asking about recreating only the container whose image is changed among the multiple services in your docker-compose file, then docker-compose handles that for you.

Why does docker-compose depends on working directory?

When calling docker-compose in different directories, I get conflict errors and problems with networking:
Problem with conflicts
docker-compose.yml
version: '3'
services:
  redis:
    image: "redis:alpine"
    container_name: redis
I. Create and start the container with docker-compose => OK
$ docker-compose up --force-recreate -d
Creating redis ... done
II. Recreate and start the container with docker-compose => OK
$ docker-compose up --force-recreate -d
Recreating redis ... done
III. Copy docker-compose.yml to another directory, then try to recreate from there => ERROR
$ cp docker-compose.yml red2/
$ cd red2/
$ docker-compose up --force-recreate -d
Creating redis ... error
ERROR: for redis Cannot create container for service redis: Conflict. The container name "/redis" is already in use by container "1ba060b545f716731ac1c5992b680e4d4b3639fc0ffeb291899c712f0839d23a". You have to remove (or rename) that container to be able to reuse that name.
ERROR: Encountered errors while bringing up the project.
Different Networks
Containers created from docker-compose in different directories also do not share the same network.
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
4a4af52e89cd red2_default bridge local
57695428bd9d redis_default bridge local
Use case
My use case for this scenario:
Call docker-compose from different deployment jobs.
Start containers for testing
Questions
Why is there the directory dependency? Is there an option to switch it off?
Does docker ps show which directory was used?
Answer for 1:
The directory name is used as the default project name.
It's better to specify the project name explicitly:
docker-compose -p myproject up --force-recreate -d
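With an explicit project name, both directories resolve to the same project, so the second up recreates the container instead of conflicting. A quick check, assuming the compose file above:

$ docker-compose -p myproject up --force-recreate -d   # run from either directory
$ docker network ls | grep myproject                   # one myproject_default, regardless of cwd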
Question 2 still open

Executing docker run command from config file

I have several arguments in my docker run command like
docker run --rm -v /apps/hastebin/data:/app/data --name hastebin -d -p 7777:7777 -e STORAGE_TYPE=file rlister/hastebin
Can I put all the arguments of this in a default/config file so that I don't have to mention them explicitly in the run command?
You can try docker compose.
With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.
In your case the docker-compose.yml file will look like:
version: '2'
services:
  hastebin:
    image: rlister/hastebin
    ports:
      - "7777:7777"
    volumes:
      - /apps/hastebin/data:/app/data
    environment:
      - STORAGE_TYPE=file
And you can run the service with the command docker-compose up.
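If you also want the fixed name and detached behavior from the original docker run command, container_name covers --name and -d moves to the up call; docker-compose down takes over the cleanup that --rm did. A small sketch:

services:
  hastebin:
    container_name: hastebin   # preserves --name hastebin

docker-compose up -d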

Adding files to standard images using docker-compose

I'm unsure if something obvious escapes me or if it's just not possible but I'm trying to compose an entire application stack with images from docker hub.
One of them is mysql and it supports adding custom configuration files through volumes and to run .sql-files from a mounted directory.
But I have these files on the machine where I'm running docker-compose, not on the host. Is there no way to specify files from the local machine to copy into the container before it runs its entrypoint/cmd? Do I really have to create local images of everything just for this case?
Option A: Include the files inside your image. This is less than ideal since you are mixing configuration files with your image (that should really only contain your binaries, not your config), but satisfies the requirement to use only docker-compose to send the files.
This option is achieved by using docker-compose to build your image, and that build will send over any files from the build directory to the remote docker engine. Your docker-compose.yml would look like:
version: '2'
services:
  my-db-app:
    build: db/.
    image: custom-db
And db/Dockerfile would look like:
FROM mysql:latest
COPY ./sql /sql
The entrypoint/cmd would remain unchanged. You would need to run docker-compose up --build if the image already exists and you need to change the sql files.
Option B: Use a volume to store your data. This cannot be done directly inside of docker-compose. However it's the preferred way to include files from outside of the image into the container. You can populate the volume across the network by using the docker CLI and input redirection along with a command like tar to pack and unpack those files being sent over stdin:
tar -cC sql . | docker run --rm -i -v sql-files:/sql \
    busybox /bin/sh -c "tar -xC /sql"
Run that via a script and then have that same script bounce the db container to reload that config.
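To sanity-check that the files actually landed in the volume, something like this should work:

docker run --rm -v sql-files:/sql busybox ls -l /sql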
Option C: Use some kind of network attached filesystem. If you can configure NFS on the host where you are running your docker CLI, you can connect to those NFS shares from the remote docker node using one of the below options:
# create a reusable volume
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.1,rw \
    --opt device=:/path/to/dir \
    foo

# or from the docker run command
$ docker run -it --rm \
    --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
    foo

# or to create a service
$ docker service create \
    --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
    foo
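The same NFS volume can also be declared directly in a compose file, so your services pick it up without a manual docker volume create. A sketch reusing the address and path from above (the volume name and mount point are illustrative):

version: '3.4'
services:
  my-db-app:
    image: mysql:latest
    volumes:
      - sql-nfs:/sql
volumes:
  sql-nfs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.1,rw
      device: ":/path/to/dir"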
Option D: With swarm mode, you can include files as configs in your image. This allows configuration files, that would normally need to be pushed to any node in the swarm, to be sent on demand to the node where your service is running. This uses a docker-compose.yml file to define it, but swarm mode isn't using docker-compose itself, so this may not fit your specific requirements. You can run a single node swarm mode cluster, so this option is available even if you only have a single node. This option does require that each of your sql files are added as a separate config. The docker-compose.yml would look like:
version: '3.4'
configs:
  sql_file_1:
    file: ./file_1.sql
services:
  my-db-app:
    image: my-db-app:latest
    configs:
      - source: sql_file_1
        target: /sql/file_1.sql
        mode: 0444
Then instead of a docker-compose up, you'd run a docker stack deploy -c docker-compose.yml my-db-stack.
If you cannot use volumes (you want a stateless docker-compose.yml and are using a remote machine), you can have the config file written by the command.
Example for nginx config in official image:
version: "3.7"
services:
nginx:
image: nginx:alpine
ports:
- 80:80
environment:
NGINX_CONFIG: |
server {
server_name "~^www\.(.*)$$" ;
return 301 $$scheme://$$1$$request_uri ;
}
server {
server_name example.com
...
}
command:
/bin/sh -c "echo \"$$NGINX_CONFIG\" > /etc/nginx/conf.d/redir.conf; nginx -g \"daemon off;\""
The environment variable could also be saved in an .env file; you can use Compose's extend feature or load it from the shell environment (wherever you fetched it from):
https://docs.docker.com/compose/compose-file/#env_file
https://docs.docker.com/compose/compose-file/#variable-substitution
To get the original entrypoint command of a container:
docker container inspect [container] | jq --raw-output .[0].Config.Cmd
To investigate which file to modify, this usually works:
docker exec --interactive --tty [container] sh
This is how I'm doing it with volumes:
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - ./shell_scripts:/shell_scripts
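The go.sh above is whatever bootstrap your container needs. A hypothetical version for the mysql image from the question, which stages the mounted .sql files where the stock entrypoint runs them on first init:

#!/bin/sh
# hypothetical go.sh: stage mounted .sql files for the official mysql
# entrypoint to execute on first initialization, then hand control to it
cp /shell_scripts/*.sql /docker-entrypoint-initdb.d/
exec docker-entrypoint.sh mysqld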
I think you just have to do this in a compose file:
volumes:
  - src/file:dest/path
As a more recent update to this question: with a docker swarm hosted on Amazon, for example, you can define a volume that can be shared by services and is available across all nodes of the swarm (using the cloudstor driver, which in turn has AWS EFS underlying for persistence).
version: '3.3'
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - shell_scripts:/shell_scripts
volumes:
  shell_scripts:
    driver: "cloudstor:aws"
With Compose V2 you can simply do (as in the documentation):
docker compose cp src [service:]dest
Before v2, you can use the docker cp workaround explained in the associated issue:
docker cp /path/to/my-local-file.sql "$(docker-compose ps -q mycontainer)":/file-on-container.sql
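For example, with the my-db-app service from Option A, copying one of the SQL files in would look like (paths illustrative):

docker compose cp ./sql/file_1.sql my-db-app:/sql/file_1.sql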
