I'm new to Docker. I'm writing a docker-compose file that creates two containers, foo and bar, sharing a volume named data:
services:
  foo:
    container_name: foo
    build: ./foo
    volumes:
      - data:/var/lib/
  bar:
    container_name: bar
    build: ./bar
    volumes:
      - data:/var/lib/
    depends_on:
      - foo
volumes:
  data:
Now I want to use the environment variable TAG to tag containers and volumes, in order to specify whether they're for test or production. I expect something like this:
services:
  foo:
    container_name: foo_${TAG}
    build: ./foo
    volumes:
      - data_${TAG}:/var/lib/
  bar:
    container_name: bar_${TAG}
    build: ./bar
    volumes:
      - data_${TAG}:/var/lib/
    depends_on:
      - foo
volumes:
  data_${TAG}:
Obviously, docker-compose is unhappy because of the last line containing data_${TAG}:.
How can I name my volume with the TAG env variable?
If you create your volumes in advance, you can use the variable in an external volume name like this (note that the reference inside the compose file is a fixed name, but it points to a variable external volume name):
$ cat docker-compose.volvar.yml
version: '2'
volumes:
  data:
    external:
      name: test-data-${TAG}
services:
  source:
    image: busybox
    command: /bin/sh -c 'echo From ${TAG} >>/data/common.log && sleep 10m'
    environment:
      - TAG
    volumes:
      - data:/data
  target:
    image: busybox
    command: tail -f /data/common.log
    depends_on:
      - source
    environment:
      - TAG
    volumes:
      - data:/data
Create your volumes in advance with a docker volume create command:
$ docker volume create test-data-dev
test-data-dev
$ docker volume create test-data-uat
test-data-uat
$ docker volume create test-data-stage
test-data-stage
And here's an example of running it. (I didn't use different directories or change the project name, so compose just replaced my containers each time, but I could easily have changed the project name to run them all concurrently with the same results; a sketch of that follows the logs below.)
$ TAG=dev docker-compose -f docker-compose.volvar.yml up -d
Creating test_source_1
Creating test_target_1
$ docker logs test_target_1
From dev
$ TAG=uat docker-compose -f docker-compose.volvar.yml up -d
Recreating test_source_1
Recreating test_target_1
$ docker logs test_target_1
From uat
$ TAG=stage docker-compose -f docker-compose.volvar.yml up -d
Recreating test_source_1
Recreating test_target_1
$ docker logs test_target_1
From stage
$ # just to show that the volumes are saved and unique,
$ # rerunning uat generates a second line
$ TAG=uat docker-compose -f docker-compose.volvar.yml up -d
Recreating test_source_1
Recreating test_target_1
$ docker logs test_target_1
From uat
From uat
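As mentioned, to run the environments concurrently instead of replacing the containers each time, you could vary the compose project name with -p; a minimal sketch, with illustrative project names:
$ TAG=dev docker-compose -p voltest-dev -f docker-compose.volvar.yml up -d
$ TAG=uat docker-compose -p voltest-uat -f docker-compose.volvar.yml up -d
Each project then gets its own source and target containers, while each TAG's data still lands in its own external volume.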
I don't know if it's possible exactly like that, but here is what I do.
I have a docker-compose.yml file like this:
services:
  foo:
    container_name: foo_${TAG}
    build: ./foo
    volumes:
      - /var/lib/
  bar:
    container_name: bar_${TAG}
    build: ./bar
    volumes:
      - /var/lib/
    depends_on:
      - foo
And then I create a file docker-compose.override.yml that contains
services:
  foo:
    volumes:
      - data_dev:/var/lib/
  bar:
    volumes:
      - data_dev:/var/lib/
This way, when you launch docker-compose, it uses the main file and applies the override file on top of it.
You should then have 3 files:
docker-compose.yml
docker-compose.override-prod.yml
docker-compose.override-dev.yml
And then when you build, you have the choice between these two approaches:
(What I do) I copy the override file for the target environment (docker-compose.override-prod.yml or docker-compose.override-dev.yml) to docker-compose.override.yml, and Docker Compose automatically picks up both files.
You can pass the two files explicitly to docker-compose with the -f parameter, as sketched below.
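For example, with the file names listed above:
$ docker-compose -f docker-compose.yml -f docker-compose.override-dev.yml up -d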
I hope this helps.
Related
I have the following two docker compose files:
docker-compose.yml:
version: '2.3'
services:
  # test11 service
  test11:
    build: test11/.
    image: "test11"
and
docker-compose.yml (a file inside the folder named test11, which contains the Dockerfile and the following compose file):
version: '2.3'
networks:
  citrixhoneypot_local:
services:
  # CitrixHoneypot service
  test11:
    build: .
    container_name: test11
    restart: always
    networks:
      - citrixhoneypot_local
    ports:
      - "443:443"
    image: "test11:2006"
    # read_only: true
    volumes:
      - test11:/opt/test11/logs
volumes:
  test11:
    driver: local
When I run docker-compose up --build for the first file, everything seems OK: the container builds and I can get a shell with docker exec -it test11 sh.
But the problem is that the volume isn't created under /var/lib/docker/volumes; I can't find it there.
Also, when I write to /opt/test11/logs from inside the container, no file appears under /var/lib/docker/volumes.
I tried this with a bind path too, with the same problem.
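For checking this kind of thing, the volume CLI helps; note that compose prefixes volume names with the project (folder) name, so the volume, if created, would be listed under a prefixed name rather than test11 alone. An illustrative check (the test11_ prefix is an assumption based on the folder name):
$ docker volume ls
$ docker volume inspect test11_test11
The inspect output's Mountpoint field shows where the data lives under /var/lib/docker/volumes.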
I'm trying to mount a directory on my organization's server (Y:/some/path) into a Docker container. My docker-compose.yml looks like this:
services:
  dagit:
    volumes:
      - type: bind
        source: Y:/some/path
        target: /data
    build:
      context: .
      dockerfile: Dagit.Dockerfile
    ports:
      - "3000:3000"
    container_name: dagit-container
  dagster:
    volumes:
      - type: bind
        source: Y:/some/path
        target: /data
    build:
      context: .
      dockerfile: Daemon.Dockerfile
    container_name: dagster-container
    tty: true
    env_file: .env
I have shared Y:/some/path with Docker. On docker compose up -d --build, I get the following error: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /host_mnt/uC/my_ip/some/path. How can I properly mount such a directory? Also, what would be the correct way to declare this mount in the top level volumes key and reference it in both services so I don't have to duplicate the code?
Take the following docker-compose.yml file:
version: '3.9'
services:
  DockerA:
    image: ubuntu:latest
    container_name: DockerA
    command: ["sleep", "300d"]
    volumes:
      - "./data:/root/data"
  DockerB:
    image: ubuntu:latest
    container_name: DockerB
    command: ["sleep", "300d"]
    volumes:
      - "./data:/root/data"
volumes:
  data:
And next to that file (same directory) create a directory called 'data'.
Now start the containers with:
docker-compose up -d
If you enter the DockerA container and create something in the data directory:
docker exec -it DockerA /bin/bash
# cd /root/data
# echo "Hello World!" > x.txt
then go into DockerB
docker exec -it DockerB /bin/bash
# cd /root/data
# cat x.txt
you will see the same contents of x.txt.
Now back to the host and check the x.txt file in your ./data directory.
Also same contents.
If you edit x.txt on the host, it will immediately be reflected in both DockerA and DockerB.
Assuming your enterprise share is a Windows/Samba (CIFS/SMB) share, add it as a volume directly rather than via a Windows mount.
volumes:
  shared_folder:
    driver_opts:
      type: cifs
      o: username=smbuser,password=smbpass,uid=<uid-for-mount>,gid=<gid-for-mount>,vers=3.0,rw
      device: //hostname_or_ip/folder
The vers value depends on your server type; rw means read-write.
Then map the volume
services:
  dagit:
    volumes:
      - shared_folder:/root/data
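Because shared_folder is declared once at the top level, both services can reference it by name, which also answers the de-duplication part of the question. A minimal sketch (using the /data target from the question rather than the /root/data shown above):
services:
  dagit:
    volumes:
      - shared_folder:/data
  dagster:
    volumes:
      - shared_folder:/data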
I need to create a Docker container with Solr that has a custom configuration.
To create that config on a non-Docker install, I need to do the following:
cp -r /opt/solr/server/solr/configsets/basic_configs /opt/solr/server/solr/configsets/myconf
Then I have to copy my custom schema.xml to that location:
cp conf/schema.xml solr/server/solr/configsets/myconf/conf
And then remove the managed schema:
rm /opt/solr/server/solr/configsets/nutch/conf/managed-schema
I have this docker-compose.yml that I need to modify to do the same as the commands above:
version: '3.3'
services:
  solr:
    image: "solr:7.3.1"
    ports:
      - "8983:8983"
    volumes:
      - ./solr/conf/solr-mapping.xml:/opt/solr/conf/schema.xml
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - mycore
schema.xml can go in the volumes part, but I don't really understand where I should place the first cp commands and the rm command.
Thanks!
You can add your commands to the command of your docker-compose file (https://docs.docker.com/compose/compose-file/#command):
version: '3.3'
services:
  solr:
    image: "solr:7.3.1"
    ports:
      - "8983:8983"
    volumes:
      - ./solr/conf/solr-mapping.xml:/opt/solr/conf/schema.xml
    command: 'bash -e -c "precreate-core mycore; cp /opt/solr/conf/schema.xml /opt/solr/server/solr/mycores/mycore/conf/schema.xml; cp /opt/solr/conf/solrconfig.xml /opt/solr/server/solr/mycores/mycore/conf/solrconfig.xml; rm /opt/solr/server/solr/mycores/mycore/conf/managed-schema; solr-foreground;"'
It works for me.
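To verify that the core actually came up, a quick check against Solr's core admin API works; this is an illustrative check, not part of the original answer:
$ curl 'http://localhost:8983/solr/admin/cores?action=STATUS&core=mycore'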
How can I share files between two different containers? I need to share some YAML settings files with an existing container.
Does someone have a simple example?
You should mount the same folder/volume to all running containers:
Example:
docker-compose.yaml:
version: "3"
services:
first:
image: ubuntu:18.04
command: /bin/sh -c "while sleep 1000; do :; done"
volumes:
- "test:/data"
second:
image: ubuntu:18.04
command: /bin/sh -c "while sleep 1000; do :; done"
volumes:
- "test:/data"
volumes:
test:
Run:
docker-compose up -d
docker-compose exec first /bin/sh -c "echo 'Hello shared folder' > /data/example.txt"
docker-compose exec second /bin/sh -c "cat /data/example.txt"
You will see: Hello shared folder
We share the same volume between the two containers: the first writes the file and the second reads it.
Here is a simple toy example that demonstrates a basic use of volumes in Docker.
docker-compose.yml
version: "3.7"
services:
container1:
image: alpine:latest
volumes:
- type: bind
source: ./mydata
target: /opt/app/static
entrypoint:
- cat
- /opt/app/static/conf.yml
container2:
image: alpine:latest
volumes:
- type: bind
source: ./mydata
target: /opt/app/static2
entrypoint:
- cat
- /opt/app/static2/conf.yml
conf.yml (resides under the mydata folder):
a simple text file
The containers are mounted with the local mydata folder. When running docker-compose up, the containers are created and output the content of the conf.yml file to stdout:
...
container2_1 | a simple text file
container1_1 | a simple text file
The docker-compose file is annotated with version 3.7, but it is compatible with version 2.4+, so the mount can also be written in the short syntax:
volumes:
  - ./mydata:/opt/app/static
docker-compose.yml
version: '2'
services:
  app:
    build:
      context: .
    command: python src/app.py
    restart: on-failure
    depends_on:
      - db
    environment:
      - TJBOT_DB_HOST=db
      - TJBOT_API_KEY
      - TJBOT_AUTO_QUESTION_TIME
    env_file:
      - .env
  db:
    image: mongo:3.0.14
    volumes:
      - mongodbdata:/data/db
volumes:
  mongodbdata:
If I change the .env file, how could I reload the container to use the new environment variables with minimum downtime?
If you are running the yml with docker-compose, you can just run docker-compose up -d again, and it will recreate any containers that have changes and leave all unchanged services untouched.
$ cat docker-compose.env2.yml
version: '2'
services:
  test:
    image: busybox
    # command: env
    command: tail -f /dev/null
    environment:
      - MY_VAR=hello
      - MY_VAR2=world
  test2:
    image: busybox
    command: tail -f /dev/null
    environment:
      - MY_VAR=same ole same ole
$ docker-compose -f docker-compose.env2.yml up -d
Creating network "test_default" with the default driver
Creating test_test_1
Creating test_test2_1
$ vi docker-compose.env2.yml # edit the file to change MY_VAR
$ docker-compose -f docker-compose.env2.yml up -d
Recreating test_test_1
test_test2_1 is up-to-date
If you run the containers with docker stack deploy -c docker-compose.yml and a version 3 file format, you can do a rolling update of the service, which prevents downtime if you have multiple instances of your service running. This functionality is still very new; you'll want 1.13.1 to fix some of the issues with updates, and, as with anything this new, bugs are still being worked out. A sketch of the relevant deploy options follows.
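A minimal sketch of the deploy options involved (the service and the values are illustrative, not from the files above):
version: '3'
services:
  test:
    image: busybox
    command: tail -f /dev/null
    environment:
      - MY_VAR=hello
    deploy:
      replicas: 2
      update_config:
        parallelism: 1   # replace one task at a time
        delay: 10s       # pause between replacements
Deploy with docker stack deploy -c docker-compose.yml mystack; after editing the file, re-running the same command rolls the tasks over one at a time.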