I have the following docker-compose file
version: '3.2'
services:
  nd-db:
    image: postgres:9.6
    ports:
      - 5432:5432
    volumes:
      - type: volume
        source: nd-data
        target: /var/lib/postgresql/data
      - type: volume
        source: nd-sql
        target: /sql
    environment:
      - POSTGRES_USER="admin"
  nd-app:
    image: node-docker
    ports:
      - 3000:3000
    volumes:
      - type: volume
        source: ndapp-src
        target: /src/app
      - type: volume
        source: ndapp-public
        target: /src/public
    links:
      - nd-db
volumes:
  nd-data:
  nd-sql:
  ndapp-src:
  ndapp-public:
nd-app contains a migrations.sql and a seeds.sql file. I want to run them once the container is up.
If I ran the commands manually, they would look like this:
docker exec nd-db psql admin admin -f /sql/migrations.sql
docker exec nd-db psql admin admin -f /sql/seeds.sql
When you run up with this docker-compose file, it will run the container entrypoint command for both the nd-db and nd-app containers as part of starting them up. In the case of nd-db, this does some prep work and then starts the postgres database.
The entrypoint command is defined in the Dockerfile and combines the configured ENTRYPOINT and CMD. What you can do is override the ENTRYPOINT, either in a custom Dockerfile or in your docker-compose.yml.
Looking at the postgres:9.6 Dockerfile, it has the following two lines:
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
You could add the following to your nd-db configuration in docker-compose.yml to retain the existing entrypoint but also "daisy-chain" a custom migration-script.sh step.
entrypoint: ["docker-entrypoint.sh", "migration-script.sh"]
The custom script needs only one special behavior: it must pass through (exec) whatever arguments follow it, so the container continues on to start postgres:
#!/usr/bin/env bash
set -exo pipefail

# run the one-time SQL setup
psql admin admin -f /sql/migrations.sql
psql admin admin -f /sql/seeds.sql

# pass any remaining arguments through so the container can go on to start postgres
exec "$@"
Does docker-compose -f path/to/config.yml exec nd-db psql admin admin -f /sql/migrations.sql work?
I’ve found that you have to specify the config and container when running commands from the laptop.
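For example (a sketch, assuming the compose file path used above and that the /sql volume is populated):

docker-compose -f path/to/config.yml exec nd-db psql admin admin -f /sql/migrations.sql
docker-compose -f path/to/config.yml exec nd-db psql admin admin -f /sql/seeds.sql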
I'm running a Docker Compose environment in which to execute a bunch of Selenium tests. In order to do that, I'm using the images Selenium provides. I run two containers: one as the Selenium Hub and another as a Selenium Firefox Node. I have a third container that is in charge of the test execution (let's call it Tests Node).
I created a Docker volume used by the Selenium Firefox Node and the Tests Node so certain files can be shared by them both. I called the volume selenium_volume and it's mounted for both on /selenium_tests.
I soon faced a problem: that folder is created in both containers owned by root, but the user the Selenium Firefox Node runs as by default is seluser, which has read but not write permission on it, and I need to write there.
I tried to use the following as the container entrypoint, so I make seluser the owner of the directory: bash -c 'sudo chown -R seluser:seluser /selenium_tests && /opt/bin/entry_point.sh', but it's not working.
When I connect to the container after it's launched (docker exec -ti selenium-firefox bash), I see the folder still belongs to root. If I then run sudo chown -R seluser:seluser /selenium_tests && /opt/bin/entry_point.sh from inside the container, the folder ownership is changed and things end up the way I was expecting.
I would like to know why it works when I run the command manually but not when it runs through the entrypoint, before the container's own entrypoint script.
Currently my docker-compose.yml looks similar to the following:
version: '3.8'
services:
  tests-node:
    build: .
    depends_on:
      - selenium-hub
      - selenium-firefox
    # the wait-for-it.sh command makes the container wait until the Selenium containers are ready to work
    entrypoint: ["./wait-for-it.sh", "-t", "15", "selenium-firefox:5555", "--"]
    command: ["execute_tests.sh"]
    volumes:
      - ./remote_reports/:/selenium_tests/reports/
      - type: volume
        source: selenium_volume
        target: /selenium_tests
    networks:
      selenium_net: {}
  selenium-hub:
    image: selenium/hub:3.141.59
    ports:
      - "4444:4444"
    networks:
      selenium_net: {}
  selenium-firefox:
    image: selenium/node-firefox:3.141.59
    depends_on:
      - selenium-hub
    entrypoint: bash -c 'sudo chown -R seluser:seluser /selenium_tests && /opt/bin/entry_point.sh'
    volumes:
      - /dev/shm:/dev/shm
      - type: volume
        source: selenium_volume
        target: /selenium_tests
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
    networks:
      selenium_net: {}
    expose:
      - 5555
volumes:
  selenium_volume:
networks:
  selenium_net:
    driver: bridge
I tried to run the chown command as command after the entrypoint, but it is never reached, I think because the script launched by the entrypoint keeps running in the foreground.
I would like to do everything directly in the docker-compose.yml and avoid creating any Dockerfile, but I don't know if that is possible at this point.
I've found a possible solution. Using this as the entrypoint:
bash -c '( while ! timeout 1 bash -c "echo > /dev/tcp/localhost/5555"; do sleep 1; done ; sudo chown -R seluser:seluser /selenium_tests ) & /opt/bin/entry_point.sh'
It runs the part before the & in the background, allowing entry_point.sh to start without a problem. Once port 5555 is ready, it finally gives ownership of the desired folder to seluser.
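In the docker-compose.yml, that entrypoint might be written like this (the same one-liner, just folded over two lines for readability):

selenium-firefox:
  entrypoint: >
    bash -c '( while ! timeout 1 bash -c "echo > /dev/tcp/localhost/5555"; do sleep 1; done;
    sudo chown -R seluser:seluser /selenium_tests ) & /opt/bin/entry_point.sh'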
The simplest way is to add chmod -R 777 /selenium_tests to execute_tests.sh before starting the tests. In that case, @aorestr, you don't need to change the entrypoint of the selenium-firefox container.
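A sketch of what the top of execute_tests.sh might look like with that change (assuming the tests-node container runs as root, so the chmod is permitted):

#!/bin/bash
set -e

# open up the shared volume so seluser in the Firefox node can write to it
chmod -R 777 /selenium_tests

# ... then run the tests as before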
I am trying to run a Symfony 3 console command inside my Docker container, but I'm not able to get proper output.
docker-compose.yaml
version: '3.4'
services:
  app:
    build:
      context: .
      target: symfony_docker_php
      args:
        SYMFONY_VERSION: ${SYMFONY_VERSION:-}
        STABILITY: ${STABILITY:-stable}
    volumes:
      # Comment out the next line in production
      - ./:/srv/app:rw,cached
      # If you develop on Linux, comment out the following volumes to just use bind-mounted project directory from host
      - /srv/app/var/
      - /srv/app/var/cache/
      - /srv/app/var/logs/
      - /srv/app/var/sessions/
    environment:
      - SYMFONY_VERSION
  nginx:
    build:
      context: .
      target: symfony_docker_nginx
    depends_on:
      - app
    volumes:
      # Comment out the next line in production
      - ./docker/nginx/conf.d:/etc/nginx/conf.d:ro
      - ./public:/srv/app/public:ro
    ports:
      - '80:80'
My console command
docker-compose exec nginx php bin/console
It returns the following response
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
Copied from https://docs.docker.com/compose/reference/exec/:
To disable this behavior, you can either pass the -T flag to disable pseudo-tty allocation:
docker-compose exec -T nginx <command>
Or, set the COMPOSE_INTERACTIVE_NO_CLI value to 1:
export COMPOSE_INTERACTIVE_NO_CLI=1
For php bin/console to work, you need to run it from the app container, like below:
docker-compose exec -T app php bin/console
Following case:
I want to build two containers with docker-compose. One is MySQL, the other is a .war file executed with Spring Boot that depends on MySQL and needs a working DB. After the MySQL container is built, I want to fill the DB with my mysqldump file before the other container is built.
My first idea was to have it in my mysql Dockerfile as
#RUN mysql -u root -p"$MYSQL_ROOT_PASSWORD"' < /appsb.sql
but of course it wants to execute it while building.
I have no idea how to do it in the docker-compose file as a command; maybe that would work. Or do I need to write a script?
docker-compose.yml
version: "3"
services:
mysqldb:
networks:
- appsb-mysql
environment:
- MYSQL_ROOT_PASSWORD=rootpw
- MYSQL_DATABASE=appsb
build: ./mysql
app-sb:
image: openjdk:8-jdk-alpine
build: ./app-sb/
ports:
- "8080:8080"
networks:
- appsb-mysql
depends_on:
- mysqldb
networks:
- appsb-mysql:
Dockerfile for mysqldb:
FROM mysql:5.7
COPY target/appsb.sql /
#RUN mysql -u root -p"$MYSQL_ROOT_PASSWORD"' < /appsb.sql
Dockerfile for the other springboot appsb:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY target/appsb.war /
RUN java -jar /appsb.war
Here is a similar issue (loading a dump.sql at startup) for a MySQL container: Setting up MySQL and importing dump within Dockerfile.
Option 1: import via a command in the Dockerfile.
Option 2: execute a bash script from docker-compose.yml.
Option 3: execute an import command from docker-compose.yml.
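As a concrete example of the Dockerfile route: the official mysql image automatically imports any *.sql file placed in /docker-entrypoint-initdb.d/ the first time the data directory is initialised, so the mysqldb Dockerfile could be reduced to something like this (a sketch based on the files shown in the question):

FROM mysql:5.7
# any *.sql file in this directory is imported automatically
# on the container's first start (empty data directory)
COPY target/appsb.sql /docker-entrypoint-initdb.d/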
I have a docker-compose.yml like this, for example:
version: '3'
services:
  db:
    #build: db
    image: percona:5.7.24-centos
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: pass
      MYSQL_DATABASE: bc
      MYSQL_PASSWORD: pass
    volumes:
      - ./db:/docker-entrypoint-initdb.d
and the script is, for example:
mkdir /home/workdirectory/
There is no sudo in that image.
The default user is mysql.
The initial working directory is just /.
So how can I execute the scripts inside ./db as root on that image?
You can build your own Docker image from percona:5.7.24-centos and switch the user or install sudo, or just create the necessary directories in the Dockerfile.
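A minimal sketch of that Dockerfile approach (the directory name is taken from the question; adjust it to whatever your scripts need):

FROM percona:5.7.24-centos
USER root
# create the directory the init scripts expect and hand it to the default user
RUN mkdir -p /home/workdirectory/ && chown mysql:mysql /home/workdirectory/
USER mysql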
I'd suggest you use the following assuming your script is a bash script and it's placed inside the db folder:
script.sh
#!/bin/bash
mkdir -p /home/workdirectory/some-sub-folder/
Then make sure your container is up and running by executing docker-compose up -d.
After that, use the following command to execute the script inside the container:
docker exec db /docker-entrypoint-initdb.d/script.sh
Link:
https://docs.docker.com/engine/reference/commandline/exec/
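If the script has to run as root rather than the image's default mysql user, both docker exec and docker-compose exec accept a --user flag; a sketch (service name taken from the compose file above):

# run the init script as root inside the db service's container
docker-compose exec -u root db /docker-entrypoint-initdb.d/script.sh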
I'm trying to replicate this docker command in a docker-compose.yml file
docker run --name rails -d -p 80:3000 -v "$PWD"/app:/www -w /www -ti rails
My docker-compose.yml file:
rails:
  image: rails
  container_name: rails
  ports:
    - 80:3000
  volumes:
    - ./app:/wwww
When I run docker-compose up -d, the container is created but it does not start.
When I add tty: true to my docker-compose.yml file, the container starts fine but my volume is not mounted.
How can I replicate my docker command exactly in a docker-compose.yml?
There are some ways to solve your problem.
Solution 1: If you want to use the rails image in your docker-compose.yml, you need to set the command and working directory for it, like this:
rails:
  image: rails
  container_name: rails
  command: bash -c "bundle install && rails server -b 0.0.0.0"
  ports:
    - 80:3000
  volumes:
    - ./app:/www
  working_dir: /www
This will create a new container from the rails image every time you run docker-compose up.
Solution 2: Move your docker-compose.yml to the same directory as your Gemfile, and create a Dockerfile in that directory in order to build a Docker image in advance (to avoid running bundle install every time):
#Dockerfile
FROM rails:onbuild
I use rails:onbuild here for simplicity reasons (about the differences between rails:onbuild and rails:<version>, please see the documentation).
After that, modify the docker-compose.yml to
rails:
  build: .
  container_name: rails
  ports:
    - 80:3000
  volumes:
    - .:/www
  working_dir: /www
Run docker-compose up and this should work!
If you modify your Gemfile, you may also need to rebuild your container by docker-compose build before running docker-compose up.
Thanks for your answer. It helped me find the solution.
It was actually a volume problem: I wanted to mount the volume on the /www directory, but that was not possible.
So I used the directory the rails image uses by default:
/usr/src/app
rails:
  image: rails
  container_name: rails
  ports:
    - 80:3000
  working_dir: /usr/src/app
  volumes:
    - ./app:/usr/src/app
  tty: true
Now my docker-compose up -d command works.