sharing files between two Docker containers

How can I share files between two different containers? I need some YAML settings files for an existing container.
Does anyone have a simple example?

You should mount the same folder/volume to all running containers:
Example:
docker-compose.yaml:
version: "3"
services:
first:
image: ubuntu:18.04
command: /bin/sh -c "while sleep 1000; do :; done"
volumes:
- "test:/data"
second:
image: ubuntu:18.04
command: /bin/sh -c "while sleep 1000; do :; done"
volumes:
- "test:/data"
volumes:
test:
Run:
docker-compose up -d
docker-compose exec first /bin/sh -c "echo 'Hello shared folder' > /data/example.txt"
docker-compose exec second /bin/sh -c "cat /data/example.txt"
You will see: Hello shared folder
We share the same volume between the two containers: the first one writes the file and the second one reads it.
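If you want to see where that shared data actually lives on the host, you can inspect the named volume. A quick sketch (Compose prefixes the volume name with the project name, by default the directory name, so the exact name below is an assumption):
docker volume ls
docker volume inspect myproject_test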

Here is a simple toy example that demonstrates a basic use of volumes in Docker.
docker-compose.yml
version: "3.7"
services:
container1:
image: alpine:latest
volumes:
- type: bind
source: ./mydata
target: /opt/app/static
entrypoint:
- cat
- /opt/app/static/conf.yml
container2:
image: alpine:latest
volumes:
- type: bind
source: ./mydata
target: /opt/app/static2
entrypoint:
- cat
- /opt/app/static2/conf.yml
conf.yml (resides under the mydata folder):
a simple text file
Both containers bind-mount the local mydata folder.
When running docker-compose up, the containers are created and print the content of conf.yml to stdout:
...
container2_1 | a simple text file
container1_1 | a simple text file
The docker-compose file is annotated with version 3.7 but is compatible with version 2.4+, so the volume can also be written with the short syntax:
volumes:
  - ./mydata:/opt/app/static
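To try this toy example, the mydata folder and conf.yml need to exist next to the docker-compose.yml before the containers start; a minimal setup sketch:
mkdir -p mydata
echo "a simple text file" > mydata/conf.yml
docker-compose up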

Related

BUSYBOX script is wrong

I need to start a script in a busybox container that outputs the date and the words "busybox is running".
When I bring my compose file up, I just see this:
busybox_1 | tail: invalid number 'sh ./5sec.sh'
This is my script:
while true; do
sleep 5
date
echo busybox is running
done
This is my Dockerfile:
FROM busybox:noauto
COPY /5sec.sh /5sec.sh
RUN chmod 777 5sec.sh
CMD ./5sec.sh
This is my compose file (just in case):
version: '3'
services:
  nginx:
    image: "nginx:latest"
    env_file: .env
    ports:
      - $HTTP_PORT:80
    volumes:
      - nginx-vol:/var/log/nginx
  busybox:
    image: "busybox:noauto"
    volumes:
      - nginx-vol:/var/log/nginx
volumes:
  nginx-vol:
Please help me. How do I start the script automatically? (Sorry for my bad English.)
I don't know what the busybox:noauto image is (probably a local image that you built), and I guess this is the reason for your problem. It looks like that image runs tail as its entrypoint or command, or something like it.
I propose using a standard busybox image from Docker Hub as your base, for example busybox:1:
FROM busybox:1
COPY /5sec.sh /5sec.sh
RUN chmod 777 5sec.sh
CMD ./5sec.sh
Second, you should use build instead of image in your docker-compose.yaml if you want to build the image yourself from your Dockerfile:
version: '3'
services:
  nginx:
    image: "nginx:latest"
    env_file: .env
    ports:
      - $HTTP_PORT:80
    volumes:
      - ./nginx-vol:/var/log/nginx
  busybox:
    build: .
    volumes:
      - ./nginx-vol:/var/log/nginx
This should solve your problem.
Notes (a corrected sketch follows below):
chmod 777 isn't good practice
the script should start with a shebang, #!/bin/sh in your case
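Putting those notes together, a hedged sketch of the corrected script and Dockerfile (keeping your 5sec.sh name and using a less permissive mode than 777):
5sec.sh:
#!/bin/sh
# print the date and a message every 5 seconds
while true; do
  sleep 5
  date
  echo "busybox is running"
done
Dockerfile:
FROM busybox:1
COPY 5sec.sh /5sec.sh
RUN chmod 755 /5sec.sh
CMD ["/5sec.sh"]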

Copy files inside of docker container before volumes are created (Solr docker image with custom configuration)

I need to create a Docker container with Solr that has a custom configuration.
When Solr is installed outside of a Docker container, I create that config as follows:
cp -r /opt/solr/server/solr/configsets/basic_configs /opt/solr/server/solr/configsets/myconf
Then I have to copy my custom schema.xml to that location:
cp conf/schema.xml solr/server/solr/configsets/myconf/conf
And then remove the managed schema:
rm /opt/solr/server/solr/configsets/nutch/conf/managed-schema
I have this docker-compose.yml that I need to modify to do the same as the commands above:
version: '3.3'
services:
  solr:
    image: "solr:7.3.1"
    ports:
      - "8983:8983"
    volumes:
      - ./solr/conf/solr-mapping.xml:/opt/solr/conf/schema.xml
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - mycore
schema.xml can go in the volumes part, but I don't really understand where I should place the first cp and the rm commands.
Thanks!
You can add your commands to the command of your docker-compose file (https://docs.docker.com/compose/compose-file/#command):
version: '3.3'
services:
  solr:
    image: "solr:7.3.1"
    ports:
      - "8983:8983"
    volumes:
      - ./solr/conf/solr-mapping.xml:/opt/solr/conf/schema.xml
    command: 'bash -e -c "precreate-core mycore; cp /opt/solr/conf/schema.xml /opt/solr/server/solr/mycores/mycore/conf/schema.xml; cp /opt/solr/conf/solrconfig.xml /opt/solr/server/solr/mycores/mycore/conf/solrconfig.xml; rm /opt/solr/server/solr/mycores/mycore/conf/managed-schema; solr-foreground;"'
It works for me.
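If that one-liner gets hard to read, the same command can be written with YAML's folded block scalar (>), which joins the lines with spaces before compose parses them; a sketch using the same paths as above:
    command: >
      bash -e -c "precreate-core mycore;
      cp /opt/solr/conf/schema.xml /opt/solr/server/solr/mycores/mycore/conf/schema.xml;
      cp /opt/solr/conf/solrconfig.xml /opt/solr/server/solr/mycores/mycore/conf/solrconfig.xml;
      rm /opt/solr/server/solr/mycores/mycore/conf/managed-schema;
      solr-foreground;"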

docker-compose run commands after up

I have the following docker-compose file
version: '3.2'
services:
  nd-db:
    image: postgres:9.6
    ports:
      - 5432:5432
    volumes:
      - type: volume
        source: nd-data
        target: /var/lib/postgresql/data
      - type: volume
        source: nd-sql
        target: /sql
    environment:
      - POSTGRES_USER="admin"
  nd-app:
    image: node-docker
    ports:
      - 3000:3000
    volumes:
      - type: volume
        source: ndapp-src
        target: /src/app
      - type: volume
        source: ndapp-public
        target: /src/public
    links:
      - nd-db
volumes:
  nd-data:
  nd-sql:
  ndapp-src:
  ndapp-public:
nd-app contains a migrations.sql and a seeds.sql file. I want to run them once the container is up.
If I ran the commands manually, they would look like this:
docker exec nd-db psql admin admin -f /sql/migrations.sql
docker exec nd-db psql admin admin -f /sql/seeds.sql
When you run docker-compose up with this file, it runs the container entrypoint command for both the nd-db and nd-app containers as part of starting them. In the case of nd-db, this does some prep work and then starts the postgres database.
The entrypoint command is defined in the Dockerfile and combines the configured ENTRYPOINT and CMD. What you might do is override the ENTRYPOINT in a custom Dockerfile, or override it in your docker-compose.yml.
Looking at the postgres:9.6 Dockerfile, it has the following two lines:
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
You could add the following to your nd-db configuration in docker-compose.yml to retain the existing entrypoint but also "daisy-chain" a custom migration-script.sh step.
entrypoint: ["docker-entrypoint.sh", "migration-script.sh"]
The custom script needs only one special behavior: it must do a pass-through exec of any following arguments, so the container continues on to start postgres:
#!/usr/bin/env bash
set -exo pipefail
# run the migrations and seeds against the database
psql admin admin -f /sql/migrations.sql
psql admin admin -f /sql/seeds.sql
# pass through to whatever command follows (e.g. postgres) so the container keeps starting
exec "$@"
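As a side note, the official postgres image also executes any *.sql or *.sh files mounted into /docker-entrypoint-initdb.d/ the first time the data directory is initialized (not on every start). A sketch of that approach, assuming the SQL files live in a local ./sql folder rather than the named volume:
  nd-db:
    image: postgres:9.6
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=admin
    volumes:
      - ./sql/migrations.sql:/docker-entrypoint-initdb.d/01-migrations.sql
      - ./sql/seeds.sql:/docker-entrypoint-initdb.d/02-seeds.sql
Files in that directory are run in alphabetical order, which is why they are numbered here.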
Does docker-compose -f path/to/config.yml exec nd-db psql admin admin -f /sql/migrations.sql work?
I've found that you have to specify the config file and the container when running commands from your laptop.

env file-named volume in docker-compose

I'm new to Docker. I am writing a docker-compose file which creates two containers, foo and bar, sharing a volume named data:
services:
  foo:
    container_name: foo
    build: ./foo
    volumes:
      - data:/var/lib/
  bar:
    container_name: bar
    build: ./bar
    volumes:
      - data:/var/lib/
    depends_on:
      - foo
volumes:
  data:
Now, I want to use the environment variable TAG to tag the containers and volumes, in order to specify whether it's for test or production. I expect something like this:
services:
  foo:
    container_name: foo_${TAG}
    build: ./foo
    volumes:
      - data_${TAG}:/var/lib/
  bar:
    container_name: bar_${TAG}
    build: ./bar
    volumes:
      - data_${TAG}:/var/lib/
    depends_on:
      - foo
volumes:
  data_${TAG}:
Obviously, docker-compose is unhappy about the last line containing data_${TAG}:.
How can I name my volume using the TAG environment variable?
If you create your volumes in advance, you can use the variable in external volume names like this (note that the reference inside of compose is a fixed name, but it points to a variable external volume name):
$ cat docker-compose.volvar.yml
version: '2'
volumes:
  data:
    external:
      name: test-data-${TAG}
services:
  source:
    image: busybox
    command: /bin/sh -c 'echo From ${TAG} >>/data/common.log && sleep 10m'
    environment:
      - TAG
    volumes:
      - data:/data
  target:
    image: busybox
    command: tail -f /data/common.log
    depends_on:
      - source
    environment:
      - TAG
    volumes:
      - data:/data
Create your volumes in advance with a docker volume create command:
$ docker volume create test-data-dev
test-data-dev
$ docker volume create test-data-uat
test-data-uat
$ docker volume create test-data-stage
test-data-stage
And here's an example of running it (I didn't use different directories or change the project name, so compose just replaced my containers each time, but I could have easily changed the project to run them all concurrently with the same results):
$ TAG=dev docker-compose -f docker-compose.volvar.yml up -d
Creating test_source_1
Creating test_target_1
$ docker logs test_target_1
From dev
$ TAG=uat docker-compose -f docker-compose.volvar.yml up -d
Recreating test_source_1
Recreating test_target_1
$ docker logs test_target_1
From uat
$ TAG=stage docker-compose -f docker-compose.volvar.yml up -d
Recreating test_source_1
Recreating test_target_1
$ docker logs test_target_1
From stage
$ # just to show that the volumes are saved and unique,
$ # rerunning uat generates a second line
$ TAG=uat docker-compose -f docker-compose.volvar.yml up -d
Recreating test_source_1
Recreating test_target_1
$ docker logs test_target_1
From uat
From uat
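When you're done with one of these environments, the containers and the external volume are removed separately; a sketch using the uat volume created above:
$ TAG=uat docker-compose -f docker-compose.volvar.yml down
$ docker volume rm test-data-uat
docker-compose down leaves external volumes alone, so the explicit docker volume rm is what actually deletes the data.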
I don't know if this is possible that way, but here is what I do.
I have a docker-compose.yml file like this:
services:
  foo:
    container_name: foo_${TAG}
    build: ./foo
    volumes:
      - /var/lib/
  bar:
    container_name: bar_${TAG}
    build: ./bar
    volumes:
      - /var/lib/
    depends_on:
      - foo
And then I create a file docker-compose.override.yml that contains
services:
  foo:
    volumes:
      - data_dev:/var/lib/
  bar:
    volumes:
      - data_dev:/var/lib/
This way, when you launch docker-compose, it uses the main file and then the override file to replace its values.
You should then have 3 files:
docker-compose.yml
docker-compose.override-prod.yml
docker-compose.override-dev.yml
And then when you build, you have the choice between these two options:
(What I do) Copy the desired docker-compose.override-*.yml (prod or dev) to docker-compose.override.yml; Docker Compose automatically picks up both files.
Pass both files explicitly to docker-compose (I forgot what the parameter is... I guess it's "-f"; see the sketch below).
I hope it helps.
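For the second option, the parameter is indeed -f, and it can be repeated so that later files override earlier ones; a sketch using the dev override from above:
docker-compose -f docker-compose.yml -f docker-compose.override-dev.yml up -d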

How to reload environment variables in docker-compose container with minimum downtime?

docker-compose.yml
version: '2'
services:
  app:
    build:
      context: .
    command: python src/app.py
    restart: on-failure
    depends_on:
      - db
    environment:
      - TJBOT_DB_HOST=db
      - TJBOT_API_KEY
      - TJBOT_AUTO_QUESTION_TIME
    env_file:
      - .env
  db:
    image: mongo:3.0.14
    volumes:
      - mongodbdata:/data/db
volumes:
  mongodbdata:
If I change the .env file, how could I reload the container to use the new environment variables with minimum downtime?
If you are running the yml with docker-compose, you can just run docker-compose up -d and it will recreate any containers that have changes and leave all unchanged services untouched.
$ cat docker-compose.env2.yml
version: '2'
services:
  test:
    image: busybox
    # command: env
    command: tail -f /dev/null
    environment:
      - MY_VAR=hello
      - MY_VAR2=world
  test2:
    image: busybox
    command: tail -f /dev/null
    environment:
      - MY_VAR=same ole same ole
$ docker-compose -f docker-compose.env2.yml up -d
Creating network "test_default" with the default driver
Creating test_test_1
Creating test_test2_1
$ vi docker-compose.env2.yml # edit the file to change MY_VAR
$ docker-compose -f docker-compose.env2.yml up -d
Recreating test_test_1
test_test2_1 is up-to-date
If you run the containers with docker stack deploy -c docker-compose.yml using a version 3 file format, you can do a rolling update of the service, which prevents any downtime as long as you have multiple instances of the service running. This functionality is still very new; you'll want Docker 1.13.1 to fix some of the issues with updates, and as with anything this new, bugs are still being worked out.
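A minimal sketch of the version 3 additions that enable such a rolling update (the service name and values here are illustrative, not taken from the question):
version: '3'
services:
  app:
    image: myapp:latest
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
Redeploying the same stack after changing the image or environment then updates the replicas one at a time:
docker stack deploy -c docker-compose.yml mystack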
