Can I use a RUN command in docker-compose.yml?

Is it possible to RUN a command within the docker-compose.yml file? That is, instead of having a Dockerfile with something like RUN mkdir foo, can I have the same command within the docker-compose.yml file?
services:
  server:
    container_name: nginx
    image: nginx:stable-alpine
    volumes:
      - ./public:/var/www/html/public
    ports:
      - "${PORT:-80}:80"
    ???: 'mkdir foo' # <--- pseudo code
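For reference, there is no RUN key in docker-compose.yml; RUN only exists in a Dockerfile. A rough sketch of a common workaround is to fold the directory creation into the container's command (the paths mirror the question, and wrapping the stock nginx command in sh -c is an assumption about how you want the server started):

services:
  server:
    container_name: nginx
    image: nginx:stable-alpine
    volumes:
      - ./public:/var/www/html/public
    ports:
      - "${PORT:-80}:80"
    # create the folder first, then hand over to the image's usual command
    command: sh -c "mkdir -p /var/www/html/public/foo && nginx -g 'daemon off;'"

The more conventional route is still a one-line Dockerfile containing the RUN, referenced from a build: section in the compose file.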

Related

What is the point of running supervisor on top of a docker container?

I'm inheriting an open-source project where I have this script to deploy two containers (nginx and a Django web app) on a server:
mkdir -p /app
rm -rf /app/* && tar -xf /tmp/project.tar -C /app
sudo docker-compose -f /app/docker-compose.yml build
sudo supervisorctl restart react-wagtail-project
sudo ufw allow port
The docker-compose.yml file is like this :
version: '3.7'

services:
  nginx_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    restart: always
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
    ports:
      - 4000:80
    depends_on:
      - web_sarahmaso
    networks:
      spa_network_sarahmaso:

  web_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    restart: always
    command: /start
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
      - sqlite_sarahmaso:/app/db
    env_file:
      - ./env/prod-sample
    networks:
      spa_network_sarahmaso:

networks:
  spa_network_sarahmaso:

volumes:
  sqlite_sarahmaso:
  staticfiles_sarahmaso:
  mediafiles_sarahmaso:
I'm wondering: what is the point of using sudo supervisorctl restart react-wagtail-project?
If I put restart: always on the two containers I'm running, is it useful to run a supervisor command on top of that to check that they are always up and running?
Or maybe is it there for the possibility of creating logs?
Thank you
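For context, supervisorctl restart react-wagtail-project only restarts a [program:...] block with that name in supervisord's configuration on the host; that config is not shown in the question, but as a purely hypothetical illustration it might look something like this:

[program:react-wagtail-project]
command=docker-compose -f /app/docker-compose.yml up
directory=/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/react-wagtail-project.log

If that is roughly the shape of it, supervisor is supervising the docker-compose process itself (and capturing its logs), while restart: always only tells the Docker daemon to restart individual containers that exit.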

Convert a docker run command to docker-compose - setting directory dependency

I have two docker run commands; the second container needs to be run in a folder created by the first, as below:
docker run -v $(pwd):/projects \
-w /projects \
gcr.io/base-project/mainmyoh:v1 init myprojectname
cd myprojectname
The myprojectname folder above was created by the first container. I need to run the second container in this folder, as below:
docker run -v $(pwd):/project \
-w /project \
-p 3000:3000 \
gcr.io/base-project/myoh:v1
Here is the docker-compose file I have so far:
version: '3.3'
services:
  firstim:
    volumes:
      - '$(pwd):/projects'
    restart: always
    working_dir: /project
    image: gcr.io/base-project/mainmyoh:v1
    command: 'init myprojectname'
  secondim:
    image: gcr.io/base-project/myoh:v1
    working_dir: /project
    volumes:
      - '$(pwd):/projects'
    ports:
      - 3000:3000
What needs to change to achieve this?
You can make the two services use a shared named volume:
version: '3.3'
services:
  firstim:
    volumes:
      - '.:/projects'
      - 'my-project-volume:/projects/myprojectname'
    restart: always
    working_dir: /project
    image: gcr.io/base-project/mainmyoh:v1
    command: 'init myprojectname'
  secondim:
    image: gcr.io/base-project/myoh:v1
    working_dir: /project
    volumes:
      - 'my-project-volume:/projects'
    ports:
      - 3000:3000
volumes:
  my-project-volume:
Also, just an observation: in your example, working_dir: references /project while the volumes mount to /projects. I assume this is a typo and something you will want to fix.
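Assuming firstim only needs to run once to populate the shared volume, one way to sequence the two services from the command line is a sketch like this (service names as above):

docker-compose run --rm firstim
docker-compose up -d secondim

The first command runs the init container as a one-off and removes it when it exits; the second starts the long-running service against the already-populated volume.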
You can build a custom image that does this required setup for you. When secondim runs, you want the current working directory to be /project, you want the current directory's code to be embedded there, and you want the init command to have run. That's easy to express in Dockerfile syntax:
FROM gcr.io/base-project/mainmyoh:v1
WORKDIR /project
COPY . .
RUN init myprojectname
CMD whatever should be run to start the real project
Then you can tell Compose to build it for you:
version: '3.5'
services:
# no build-only first image
secondim:
build: .
image: gcr.io/base-project/mainmyoh:v1
ports:
- '3000:3000'
In another question you ask about running a similar setup in Kubernetes. This Dockerfile-based setup can translate directly into a Kubernetes Deployment/Service, without worrying about questions like "what kind of volume do I need to use" or "how do I copy the code into the cluster separately from the image".
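As a rough sketch of that translation (the resource names here are made up, and the image tag is the one built above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myoh
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myoh
  template:
    metadata:
      labels:
        app: myoh
    spec:
      containers:
        - name: myoh
          image: gcr.io/base-project/mainmyoh:v1   # image built from the Dockerfile above
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: myoh
spec:
  selector:
    app: myoh
  ports:
    - port: 3000
      targetPort: 3000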

Set secret variable when using Docker in TravisCI

I'm building a backend with NodeJS and would like to use TravisCI and Docker to run tests.
In my code, I have a secret env: process.env.SOME_API_KEY
This is my Dockerfile.dev
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
My docker compose:
version: "3"
services:
api:
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- /app/node_modules
- .:/app
ports:
- "3000:3000"
depends_on:
- mongo
mongo:
image: mongo:4.0.6
ports:
- "27017:27017"
And this is my TravisCI config:
sudo: required
services:
  - docker
before_script:
  - docker-compose up -d --build
script:
  - docker-compose exec api npm run test
I also set SOME_API_KEY='xxx' in my Travis settings environment variables. However, it seems that the container doesn't receive SOME_API_KEY.
How can I pass SOME_API_KEY from TravisCI to Docker? Thanks
Containers in general do not inherit the environment from which they are run. Consider something like this:
export SOMEVARIABLE=somevalue
docker run --rm alpine sh -c 'echo $SOMEVARIABLE'
That will never print out the value of $SOMEVARIABLE because there is no magic process to import environment variables from your local shell into the container. If you want a travis environment variable exposed inside your docker containers, you will need to do that explicitly by creating an appropriate environment block in your docker-compose.yml. For example, I use the following docker-compose.yml:
version: "3"
services:
example:
image: alpine
command: sh -c 'echo $SOMEVARIABLE'
environment:
SOMEVARIABLE: "${SOMEVARIABLE}"
I can then run the following:
export SOMEVARIABLE=somevalue
docker-compose up
And see the following output:
Recreating docker_example_1 ... done
Attaching to docker_example_1
example_1 | somevalue
docker_example_1 exited with code 0
So you will need to write something like:
version: "3"
services:
api:
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- /app/node_modules
- .:/app
ports:
- "3000:3000"
depends_on:
- mongo
environment:
SOME_API_KEY: "${SOME_API_KEY}"
mongo:
image: mongo:4.0.6
ports:
- "27017:27017"
I had a similar issue and solved it by passing the environment variable to the container in the docker-compose exec command. If the variable is in the Travis environment, you can do:
sudo: required
services:
  - docker
before_script:
  - docker-compose up -d --build
script:
  - docker-compose exec -e SOME_API_KEY=$SOME_API_KEY api npm run test
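For completeness, docker-compose can also pass a variable straight through from the shell that runs it when the variable is listed under environment: with no value, so a trimmed sketch like this (only the relevant service shown) should behave the same way:

version: "3"
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    environment:
      - SOME_API_KEY   # no value here: taken from the shell running docker-compose

Since SOME_API_KEY has no value in the file, compose picks up whatever value is set in the Travis environment.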

docker and composer install

I am having problems getting the php-fpm container up with composer install using docker-compose.
I have a folder structure like:
docker-compose.yml
containers/
  nginx/
    Dockerfile
  php-fpm/
    Dockerfile
docker-compose.yml:
version: '3'
services:
  nginx:
    build:
      context: ./containers/nginx
    ports:
      - 80:80
  php-fpm:
    build:
      context: ./containers/php-fpm
    tty: true
and in php-fpm/Dockerfile:
FROM php:7.1.5-fpm-alpine
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
WORKDIR /srv/www
ENTRYPOINT composer install --prefer-source --no-interaction --no-autoloader
With the current ENTRYPOINT, it seems that composer install gets stuck at "Generating autoload files": nothing after that is output and the container does not appear in the docker ps list.
How can I keep the above folder structure but still be able to run composer install after build or run (in this case I would need to add if conditions)?
The problem is that you need a composer.json.
Please, follow these steps:
Go to the getting-started section of the Composer website and create a composer.json file.
Put this composer.json file in your directory structure as follows, for example:
docker-compose.yml
containers/
  nginx/
    Dockerfile
  php-fpm/
    Dockerfile
composer/
  composer.json
Mount a volume (by modifying your docker-compose.yml file) to share composer.json into /var/www/html in the php-fpm container, because although you specify /srv/www as the workdir, composer looks for it in /var/www/html.
version: '3'
services:
  nginx:
    build:
      context: ./containers/nginx
    ports:
      - "80:80"
  php-fpm:
    build:
      context: ./containers/php-fpm
    tty: true
    volumes:
      - ./composer/composer.json:/var/www/html/composer.json
I hope it's useful for you.
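A common alternative, sketched here on the assumption that composer.json sits next to the php-fpm Dockerfile and is therefore part of its build context, is to copy it into the image and run composer install at build time rather than in the ENTRYPOINT:

FROM php:7.1.5-fpm-alpine
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
WORKDIR /srv/www
# copy only the composer manifest first so the install layer is cached
COPY composer.json ./
RUN composer install --prefer-source --no-interaction --no-autoloader

With no ENTRYPOINT override, the base image's default command still starts php-fpm, and the container no longer exits as soon as the install step finishes.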

Docker-compose extend not finding environment variables

I have part of a docker-compose file, like so:
docker-compose.yml
pitchjob-fpm01:
  container_name: pitchjob-fpm01
  env_file:
    - env/base.env
  build:
    context: ./pitch
    dockerfile: PitchjobDockerfile
  volumes:
    - "/Sites/pitch/pitchjob/:/Sites"
  restart: always
  depends_on:
    - memcached01
    - memcached02
  links:
    - memcached01
    - memcached02
  extends:
    file: "shared/common.yml"
    service: pitch-common-env
My extended yml file is:
compose.yml
version: '2.0'
services:
  pitch-common-env:
    environment:
      APP_VOL_DIR: Sites
      WEB_ROOT_FOLDER: web
      CONFIG_FOLDER: app/config
      APP_NAME: sony_pitch
In the Dockerfile for pitchjob-fpm01 I have a command like so:
PitchjobDockerfile
# Set folder groups
RUN chown -Rf www-data:www-data /$APP_VOL_DIR
But when I run the command to bring up the stack
docker-compose -f docker-compose-base.yml up --build --force-recreate --remove-orphans
I get the following error
failed to build: The command '/bin/sh -c chown -Rf www-data:www-data
/$APP_VOL_DIR' returned a non-zero code: 1
I'm guessing this is because it doesn't have $APP_VOL_DIR, but why is that so if the docker-compose file extends another compose file that defines the environment: variables?
You can use build-time arguments for that. The environment: block (including values pulled in through extends) is only applied when the container runs; it is not available to RUN instructions while the image is being built.
In the Dockerfile define:
ARG APP_VOL_DIR=app_vol_dir
# Set folder groups
RUN chown -Rf www-data:www-data /$APP_VOL_DIR
Then in docker-compose.yml set app_vol_dir as a build argument:
pitchjob-fpm01:
  container_name: pitchjob-fpm01
  env_file:
    - env/base.env
  build:
    context: ./pitch
    dockerfile: PitchjobDockerfile
    args:
      - app_vol_dir=Sites
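With that in place, docker-compose picks the value up from the args: section automatically; building the image directly with docker would need the flag spelled out (paths taken from the build: section above):

docker-compose build pitchjob-fpm01
docker build --build-arg app_vol_dir=Sites -f ./pitch/PitchjobDockerfile ./pitch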
I think your problem is not with the overrides, but with the way you are trying to do environment variable substitution. From the docs:
Note: Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ].
