I am relatively new to using docker-compose and am running a stack with the following command:
docker-compose \
--project-name version-12 \
-f installation/docker-compose-common.yml \
-f installation/docker-compose-erpnext.yml \
--project-directory installation \
up -d
Now, with these non-default docker-compose.yml file names, I can't get docker-compose stop or docker-compose ps to work. I have tried the -f and --project-name flags, but couldn't make it happen.
Can anyone kindly advise how to make this work in such a scenario?
You need to repeat all of the docker-compose options for every command you need to run.
There are two ways around this. One is to write a shell script wrapper that invokes this command:
#!/bin/sh
# I am `docker-compose-erpnext.sh`
# Run me with any normal `docker-compose` options
exec docker-compose \
--project-name version-12 \
-f installation/docker-compose-common.yml \
-f installation/docker-compose-erpnext.yml \
--project-directory installation \
"$#"
Docker Compose also supports environment variables for most of its settings; many of these in turn can also be included in a .env file. You can't specify --project-directory this way, but it's documented to default to the directory of the Compose file.
export COMPOSE_PROJECT_NAME=version-12
export COMPOSE_FILE=installation/docker-compose-common.yml:installation/docker-compose-erpnext.yml
docker-compose up -d
docker-compose ps
You can put these two settings in a file named .env in the directory from which you're running docker-compose (not the installation subdirectory). But if you have multiple deployments you're trying to manage, note that you can't specify an alternate name for that file (there is neither a CLI option nor an environment-variable setting for it).
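For reference, a minimal .env for this question's setup would be:
COMPOSE_PROJECT_NAME=version-12
COMPOSE_FILE=installation/docker-compose-common.yml:installation/docker-compose-erpnext.yml
With that file in place, bare docker-compose ps and docker-compose stop resolve the same project as the original up command.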
Related
Let's say I have a project that I want to run in different environments (dev, staging, prod) using Docker, so I will have three containers for the project, one per environment. I have tried docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d --build, but when I run another environment, it just replaces the existing containers.
How do I achieve this? Note: each docker-compose.{env}.yml has its own .env.{env_name} file.
Compose has the notion of a project name. This is used to identify all of the Docker resources that belong to a specific Compose setup. It defaults to the basename of the current directory; in your setup, this means all three environments have the same project name, so their containers replace each other.
You can use the docker-compose -p option to override the project name for a single docker-compose invocation. Like the -f options you have, you need to provide this option on every Compose invocation (or set the $COMPOSE_PROJECT_NAME environment variable).
# Run the same images in all environments
# (requires a fixed `image:` name for each thing that is `build:`ed)
docker-compose build
# Start the three environments: -p sets an alternate project name,
# the first -f names the base Compose file, the second the
# per-environment overrides
docker-compose \
  -p dev \
  -f docker-compose.yml \
  -f docker-compose.dev.yml \
  up -d
docker-compose -p staging -f docker-compose.yml -f docker-compose.staging.yaml up -d
docker-compose -p prod -f docker-compose.yml -f docker-compose.prod.yaml up -d
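The same flags are then needed for any command that manages one of these environments; for example, to check on and tear down just the staging deployment:
docker-compose -p staging -f docker-compose.yml -f docker-compose.staging.yaml ps
docker-compose -p staging -f docker-compose.yml -f docker-compose.staging.yaml down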
We are trying to store the container name in a Makefile, but I see the error below when executing the build. Can someone please advise? Thanks.
.PHONY: metadata
metadata: .env1
docker pull IMAGE_NAME
docker run $IMAGE_NAME;
ID:= $(shell docker ps --format '{{.Names}}')
#echo ${ID}
docker cp ${ID}:/app/.env .env2
The container name is not captured in the ID variable below when executing the Makefile from Jenkins:
ID:=
/bin/sh: ID:=: command not found
There are a couple of things you can do in terms of pure Docker mechanics to simplify this.
You can specify an alternate command when you docker run an image: anything after the image name is taken as the command to run. For instance, you can cat the file as the main container command, and replace everything you have above with:
.PHONY: getmetadata
getmetadata: .env2
.env2: .env1
docker run --rm \
-e "ARTIFACTORY_USER=${ARTIFACTORY_CREDENTIALS_USR}" \
-e "ARTIFACTORY_PASSWORD=${ARTIFACTORY_CREDENTIALS_PSW}" \
--env-file .env1 \
"${ARTIFACTDATA_IMAGE_NAME}" \
cat /app/.env \
> $@
(It is usually better to avoid docker cp, docker exec, and other imperative-type commands; it is fairly inexpensive and better practice to run a new container when you need to.)
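Stripped of the Make machinery, the core of that recipe is a single command (image name assumed for illustration):
docker run --rm my-image cat /app/.env > .env2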
If you can't do this, you can docker run --name with your choice of name, and then use that container name in the docker cp command.
.PHONY: getmetadata
getmetadata: .env2
.env2: .env1
docker run --name getmetadata ...
docker cp getmetadata:/app/.env $@
docker stop getmetadata
docker rm getmetadata
If you really can't avoid this at all, each line of the Makefile runs in a separate shell. On the one hand this means you need to join together lines if you want variables from one line to be visible in a later line; on the other, it means you have normal shell functionality available and don't need to use the GNU Make $(shell ...) extension (which evaluates when the Makefile is loaded and not when you're running the command).
.PHONY: getmetadata
getmetadata: .env2
.env2: .env1
# Note here:
# $$ escapes $ for the shell
# Multiple shell commands joined together with && \
# Beyond that, pure Bourne shell syntax
ID=$$(docker run -d ...) && \
echo "$$ID" && \
docker cp "$$ID:/app/.env" "$#"
I am using multiple microservices, each with its own database dependencies (some overlap). I have a custom bash script that lets the developer choose which microservices they want to run locally (for testing); it essentially builds a command:
EDIT: thanks to the answer for pointing this out, you do need -f before every Compose .yml file; I do use this, I just didn't originally type it out here.
docker-compose \
  -f <docker-compose.ms1.yml> -f <docker-compose.ms2.yml> \
  -f <docker-compose.dba> -f <docker-compose.dbb> \
  up ms1-container ms2-container \
  dba-container dbb-container
Now this works fine, but traditionally (using a single .yml file and just running docker-compose up), if I wanted to see output logs, I would do docker-compose logs -f, or if I wanted to restart a particular service in the compose file, I would:
docker-compose stop <service_name>
docker-compose rm <service_name>
docker-compose create <service_name>
docker-compose start <service_name>
But now with it all started dynamically, how can I restart a particular docker-compose service, and also how can I tap back into the logs with logs -f?
First, I think your docker-compose command is not valid; it should be:
docker-compose -f docker-compose_1.yaml -f docker-compose_2.yaml up -d
Then everything works the same as when you use a single docker-compose.yaml:
E.g.
docker-compose_1.yaml:
version: '3'
services:
frontend:
image: alpine
command: "tail -f /dev/null"
docker-compose_2.yaml:
version: '3'
services:
backend:
image: alpine
command: "tail -f /dev/null"
You can still use docker-compose -f docker-compose_1.yaml -f docker-compose_2.yaml stop frontend to stop one service:
shubuntu1@shubuntu1:~/77$ docker-compose -f docker-compose_1.yaml -f docker-compose_2.yaml ps
    Name            Command          State     Ports
-----------------------------------------------------
77_backend_1    tail -f /dev/null    Up
77_frontend_1   tail -f /dev/null    Exit 137
For logs, docker-compose -f docker-compose_1.yaml -f docker-compose_2.yaml logs shows all services, while docker-compose -f docker-compose_1.yaml -f docker-compose_2.yaml logs backend shows just one service.
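Spelled out, with logs -f to follow the output the way you're used to:
# all services
docker-compose -f docker-compose_1.yaml -f docker-compose_2.yaml logs -f
# a single service
docker-compose -f docker-compose_1.yaml -f docker-compose_2.yaml logs -f backend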
From the official guide:
You can supply multiple -f configuration files. When you supply multiple files, Compose combines them into a single configuration. Compose builds the configuration in the order you supply the files. Subsequent files override and add to their predecessors.
I am a total Docker newb, so sorry for that.
I have a stand-alone Docker image (some Node app) that I want to run in different environments.
I want to set the env file with docker run --env-file <path>.
However, I want to use the env files that are inside the image (so I can ship a different file per environment), not files on the server; so the path would be inside the image.
Is there any way to do so?
Perhaps something like cp (docker cp [OPTIONS] CONTAINER:<path>), but that doesn't seem to work.
What's the best practice here? Am I making sense?
Thanks!!
Docker bind mounts are a fairly effective way to inject configuration files like this into a running container. I would not try to describe every possible configuration in your built image; instead, let that be configuration that's pushed in from the host.
Pick some single specific file to hold the configuration. For the sake of argument, let's say it's /usr/src/app/env. Set up your application however it's built to read that file at startup time. Either make sure the application can still start up if the file is missing, or build your image with some file there with reasonable default settings.
Now when you run your container, it will always read settings from that known file, but you can supply a host file to be mounted there:
docker run -v "$PWD/env.development:/usr/src/app/env" myimage
Now you can locally have an env.development that specifies extended logging and a local database, and an env.production with minimal logging and pointing at your production database. If you set up a third environment (say a shared test database with some known data in it) you can just run the container with this new configuration, without rebuilding it.
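Concretely, the per-environment invocations differ only in which host file gets mounted (the env.* file names here are assumptions):
docker run -v "$PWD/env.development:/usr/src/app/env" myimage
docker run -v "$PWD/env.production:/usr/src/app/env" myimage
docker run -v "$PWD/env.test:/usr/src/app/env" myimage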
The following is the command to run Docker:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Example
docker run --name test -it debian
Focus on the following switches:
--env , -e        Set environment variables
--env-file        Read in a file of environment variables
You can pass environment variables to your containers with the -e flag.
An example from a startup script:
sudo docker run -d -t -i -e REDIS_NAMESPACE='staging' \
-e POSTGRES_ENV_POSTGRES_PASSWORD='foo' \
-e POSTGRES_ENV_POSTGRES_USER='bar' \
-e POSTGRES_ENV_DB_NAME='mysite_staging' \
-e POSTGRES_PORT_5432_TCP_ADDR='docker-db-1.hidden.us-east-1.rds.amazonaws.com' \
-e SITE_URL='staging.mysite.com' \
-p 80:80 \
--link redis:redis \
--name container_name dockerhub_id/image_name
In case you have many environment variables, and especially if they're meant to be secret, you can use an env file:
$ docker run --env-file ./env.list ubuntu bash
The --env-file flag takes a filename as an argument and expects each line to be in the VAR=VAL format, mimicking the argument passed to --env. Comment lines need only be prefixed with #.
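A hypothetical env.list in that format:
# comment lines start with #
REDIS_NAMESPACE=staging
SITE_URL=staging.mysite.com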
In docker-compose there is a .env file which can hold all the properties used in docker-compose.yml.
Is there an equivalent of that for the docker run command? I have exhausted the docs and forums but couldn't find any answers.
Here is what I am looking for:
Rather than
docker run -v /dir1:/dir1 -v /dir2:/dir2 -p 80:80 repo/image
run docker run -config config.yml repo/image, with a config.yml file holding all the property mappings
One option could be to store the parameters in a file and splice the file's contents into the command line with cat:
docker run $(cat config.file) repo/image
where the content of config.file would be something like:
-v /dir1:/dir1 -v /dir2:/dir2 -p 80:80
Note that the shell word-splits the $(cat ...) expansion, so this only works as long as none of the options contain spaces.
This does seem to be an unfortunate gap. The best workaround I've found is to use docker-compose with a docker-compose.yml file to define the container and all its flags, and then use
docker-compose run your-service-name-here
to run a single one-off container.
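A minimal docker-compose.yml matching the docker run command from the question might look like:
version: '3'
services:
  your-service-name-here:
    image: repo/image
    ports:
      - "80:80"
    volumes:
      - /dir1:/dir1
      - /dir2:/dir2
One design note: docker-compose run does not publish the ports: mappings by default; pass --service-ports if you need them.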
Unfortunately, there is no way to do this exactly as you describe.
However, you can keep several env config files and merge them into one .env, as described in this answer:
$ awk -F= '!a[$1]++' first.env second.env > .env
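That awk one-liner keeps only the first line seen for each variable name (the field before the first =), so earlier files take precedence. For example, with hypothetical contents:
$ cat first.env
A=1
B=2
$ cat second.env
B=3
C=4
$ awk -F= '!a[$1]++' first.env second.env
A=1
B=2
C=4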