Convert a docker run command to docker-compose - setting directory dependency - docker

I have two docker run commands; the second container needs to run in a folder created by the first, as below:
docker run -v $(pwd):/projects \
-w /projects \
gcr.io/base-project/mainmyoh:v1 init myprojectname
cd myprojectname
The myprojectname folder above is created by the first container, and I need to run the second container in that folder, as below:
docker run -v $(pwd):/project \
-w /project \
-p 3000:3000 \
gcr.io/base-project/myoh:v1
Here is the docker-compose file I have so far:
version: '3.3'
services:
  firstim:
    volumes:
      - '$(pwd):/projects'
    restart: always
    working_dir: /project
    image: gcr.io/base-project/mainmyoh:v1
    command: 'init myprojectname'
  secondim:
    image: gcr.io/base-project/myoh:v1
    working_dir: /project
    volumes:
      - '$(pwd):/projects'
    ports:
      - 3000:3000
What needs to change to achieve this?

You can make the two services use a shared named volume:
version: '3.3'
services:
  firstim:
    volumes:
      - '.:/projects'
      - 'my-project-volume:/projects/myprojectname'
    restart: always
    working_dir: /project
    image: gcr.io/base-project/mainmyoh:v1
    command: 'init myprojectname'
  secondim:
    image: gcr.io/base-project/myoh:v1
    working_dir: /project
    volumes:
      - 'my-project-volume:/projects'
    ports:
      - 3000:3000
volumes:
  my-project-volume:
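With this layout you would typically run the init service once and then bring up the app; one way to sequence that (a sketch, not the only option):
docker-compose run --rm firstim
docker-compose up secondim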
Also, just an observation: in your example working_dir: references /project while the volumes point to /projects. I assume this is a typo that you will want to fix.

You can build a custom image that does the required setup for you. When secondim runs, you want the current working directory to be /project, you want the current directory's code to be embedded there, and you want the init command to have already run. That's easy to express in Dockerfile syntax:
FROM gcr.io/base-project/mainmyoh:v1
WORKDIR /project
COPY . .
RUN init myprojectname
CMD whatever should be run to start the real project
Then you can tell Compose to build it for you:
version: '3.5'
services:
  # no build-only first image
  secondim:
    build: .
    image: gcr.io/base-project/myoh:v1
    ports:
      - '3000:3000'
In another question you ask about running a similar setup in Kubernetes. This Dockerfile-based setup can translate directly into a Kubernetes Deployment/Service, without worrying about questions like "what kind of volume do I need to use" or "how do I copy the code into the cluster separately from the image".
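In day-to-day use a single command then rebuilds the image with your latest code and starts the container:
docker-compose up --build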

Related

docker build a custom image from docker-compose.yml

I have a setup where I have a Dockerfile and a docker-compose.yml.
Dockerfile:
# syntax=docker/dockerfile:1
FROM php:7.4
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
RUN docker-php-ext-install mysqli pdo pdo_mysql
RUN apt-get -y update
RUN apt-get -y install git
COPY . .
RUN composer install
YML file:
version: '3.8'
services:
  foo_db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=foo
      - MYSQL_DATABASE=foo
  foo_app:
    image: foo_php
    platform: linux/x86_64
    restart: unless-stopped
    ports:
      - 8000:8000
    links:
      - foo_db
    environment:
      - DB_CONNECTION=mysql
      - DB_HOST=foo_db
      - DB_PORT=3306
      - DB_PASSWORD=foo
    command: sh -c "php artisan serve --host=0.0.0.0 --port=8000"
  foo_phpmyadmin:
    image: phpmyadmin
    links:
      - foo_db
    environment:
      PMA_HOST: foo_db
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
      PMA_USER: root
      PMA_PASSWORD: foo
    restart: always
    ports:
      - 8081:80
To set this up on a new workstation, I first run:
docker build -t foo_php .
As I understand it, this runs the commands in the Dockerfile and creates a new image called foo_php.
Once that is done, I run docker compose up.
Question:
How can I tell docker that I would like my foo_app image to be built automatically, so that I can skip the step of building it first? Ideally I would have a single command, similar to docker compose up, that I could call each time I want to launch my containers. The first time, it would build the images it needs, including the custom image described in my Dockerfile; subsequent calls would just run those images. Does such a method exist?
You can ask docker compose to build the image every time:
docker compose up --build
But you also need to tell docker compose what to build:
foo_app:
  image: foo_php
  build:
    context: .
where context points to the folder containing your Dockerfile.
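If your Dockerfile has a non-default name or location, you can also point Compose at it explicitly; a sketch (Dockerfile.custom is a hypothetical name):
foo_app:
  image: foo_php
  build:
    context: .
    dockerfile: Dockerfile.custom  # hypothetical; defaults to "Dockerfile" in the context folder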

Can't mount volume in Docker Compose right

Firstly, I went through all of that step by step: https://docs.docker.com/language/golang/develop/
Works perfectly.
Then I started to try the same with my Go project. It requires not only the db in a volume but also 'assets' and 'creds' directories, which I was able to provide using a normal Dockerfile and the --mount flag in the 'docker run' command.
So my schema was:
Create a volume 'roach'.
Create a temp container for copying folders:
docker container create --name temp -v roach:/data busybox
docker cp assets/ temp:/data
docker rm temp
Run my container with
docker run -it --rm \
--mount 'type=volume,src=roach,dst=/usr/data' \
--network mynet \
--name postgres-server \
-p 80:8080 \
-e PGUSER=totoro \
-e PGPASSWORD=myfriend \
-e PGHOST=db \
-e PGPORT=26257 \
-e PGDATABASE=mydb \
postgres-server
The Go files have access to /usr/data/my_folders.
BTW, here is the Dockerfile:
# syntax=docker/dockerfile:1
FROM golang:1.18-buster AS build
WORKDIR /app
COPY go.mod .
RUN go mod download
COPY . .
RUN go mod tidy
RUN go build -o /t main/main.go main/inst_list.go
## Deploy
FROM gcr.io/distroless/base-debian10
ENV GO111MODULE=on
ENV GOOGLE_APPLICATION_CREDENTIALS='/usr/data/credentials/creds.json'
WORKDIR /
COPY --from=build /t /t
EXPOSE 8080
USER root:root
ENTRYPOINT ["/t"]
================================================================
Then I started trying to make a docker-compose.yml file like the one at the end of that example.
It has no --mount flags, but I found plenty of ways to specify the mount path.
I tried many more, but left 3 variants of it in the code below (2 of the 3 are commented out):
version: '3.8'
services:
  docker-t-roach:
    depends_on:
      - roach
    build:
      context: .
    container_name: postgres-server
    hostname: postgres-server
    networks:
      - mynet
    ports:
      - 80:8080
    environment:
      - PGUSER=${PGUSER:-totoro}
      - PGPASSWORD=${PGPASSWORD:?database password not set}
      - PGHOST=${PGHOST:-db}
      - PGPORT=${PGPORT:-26257}
      - PGDATABASE=${PGDATABASE-mydb}
    deploy:
      restart_policy:
        condition: on-failure
  roach:
    image: cockroachdb/cockroach:latest-v20.1
    container_name: roach
    hostname: db
    networks:
      - mynet
    ports:
      - 26257:26257
      - 8080:8080
    volumes:
      # - type: volume
      #   source: roach
      #   target: /usr/data
      - roach:/usr/data
      # - "${PWD}/cockroach-data/roach:/usr/data"
    command: start-single-node --insecure
volumes:
  roach:
networks:
  mynet:
    driver: bridge
and it still doesn't work. Moreover, it creates 2 volumes: 'roach' and 'WORKDIRNAME_roach'. I actually tried copying my folders into both of them; it's still not working. The output of the build command is always like this:
postgres-server | STARTED AT
postgres-server | Sep 4 10:43:10
postgres-server | lstat /usr/data/assets/current_batch: no such file or directory
postgres-server | 2022/09/04 10:43:10 lstat /usr/data/assets/current_batch: no such file or directory
(the first 2 lines are produced by my Go files; 'assets' is the folder I'm copying)
I think I'm looking in the wrong place: maybe the way I copy folders doesn't work with this kind of build?
UPDATE:
At the same time, the command
docker run -it --rm -v roach:/data ubuntu ls /data/usr
shows that my folders are there, but the container is stuck in some kind of loop that doesn't let it see them.
Mihai tried to help but at first I didn't understand what he meant. He meant that I had to add the volume to my app service. I did that and now it works. In the example below I gave the db and app volumes different names, just for clarity:
version: '3.8'
services:
  docker-parser:
    depends_on:
      - roach
    build:
      context: .
    container_name: parser
    hostname: parser
    networks:
      - mynet
    ports:
      - 80:8080
    volumes:
      - assets:/data
    environment:
      - PGUSER=${PGUSER:-totoro}
      - PGPASSWORD=${PGPASSWORD:?database password not set}
      - PGHOST=${PGHOST:-db}
      - PGPORT=${PGPORT:-26257}
      - PGDATABASE=${PGDATABASE-mydb}
    deploy:
      restart_policy:
        condition: on-failure
  roach:
    image: cockroachdb/cockroach:latest-v20.1
    container_name: roach
    hostname: db
    networks:
      - mynet
    ports:
      - 26257:26257
      - 8080:8080
    volumes:
      - roach:/db
    command: start-single-node --insecure
volumes:
  assets:
  roach:
networks:
  mynet:
    driver: bridge
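One caveat, based on the duplicate-volume behaviour noted above: Compose prefixes named volumes with the project (directory) name, so the assets volume is actually created as WORKDIRNAME_assets. If you pre-populate it with the temp-container trick, target that prefixed name; a sketch, assuming the project directory is named myproj:
docker container create --name temp -v myproj_assets:/data busybox  # "myproj_assets" assumes the project dir is myproj
docker cp assets/ temp:/data
docker rm temp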

what is the point of running supervisor on top of a docker container?

I'm inheriting from an open-source project where I have this script to deploy two containers (Django and nginx) on a server:
mkdir -p /app
rm -rf /app/* && tar -xf /tmp/project.tar -C /app
sudo docker-compose -f /app/docker-compose.yml build
sudo supervisorctl restart react-wagtail-project
sudo ufw allow port
The docker-compose.yml file looks like this:
version: '3.7'
services:
  nginx_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    restart: always
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
    ports:
      - 4000:80
    depends_on:
      - web_sarahmaso
    networks:
      spa_network_sarahmaso:
  web_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    restart: always
    command: /start
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
      - sqlite_sarahmaso:/app/db
    env_file:
      - ./env/prod-sample
    networks:
      spa_network_sarahmaso:
networks:
  spa_network_sarahmaso:
volumes:
  sqlite_sarahmaso:
  staticfiles_sarahmaso:
  mediafiles_sarahmaso:
I'm wondering: what is the point of using sudo supervisorctl restart react-wagtail-project?
If I put restart: always on the two containers I'm running, is it useful to additionally run a supervisor command to check that they are always up and running?
Or maybe is it for the possibility of creating logs?
Thank you.

How to add docker run param to docker compose file?

I am able to run my application with the following command:
docker run --rm -p 4000:4000 myapp:latest python3.8 -m pipenv run flask run -h 0.0.0.0
I am trying to write a docker-compose file so that I can bring up the app using
docker-compose up. This is not working. How do I add the docker run params to the docker-compose file?
version: '3'
services:
  web:
    build: .
    ports:
      - "4000:4000"
    volumes:
      - .:/code
You need to use command to specify this.
version: '3'
services:
  web:
    build: .
    ports:
      - '4000:4000'
    image: myapp:latest
    command: 'python3.8 -m pipenv run flask run -h 0.0.0.0'
    volumes:
      - .:/code
You should use CMD in your Dockerfile to specify this. Since you'll want the same command every time you run a container based on the image, there's no reason to have to specify it manually each time.
CMD python3.8 -m pipenv run flask run -h 0.0.0.0
Within the context of a Docker container, it's typical to install packages into the "system" Python: it's already isolated from the host Python by virtue of being in a Docker container, and the setup to use a virtual environment is a little bit tricky. That gets rid of the need to run pipenv run.
FROM python:3.8
WORKDIR /code
# pipenv isn't included in the base image, so install it first
RUN pip install pipenv
COPY Pipfile Pipfile.lock ./
RUN pipenv install --deploy --system
COPY . .
CMD flask run -h 0.0.0.0
Since the /code directory is already in your image, you can make your docker-compose.yml shorter by removing the now-unnecessary bind mount:
version: '3'
services:
  web:
    build: .
    ports:
      - "4000:4000"
    # no volumes:
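Conversely, if you do want live code reloading during development, keep the bind mount; the host directory then shadows the /code baked into the image. A sketch of the same service with the mount kept:
version: '3'
services:
  web:
    build: .
    ports:
      - "4000:4000"
    volumes:
      - .:/code  # host code shadows the image's /code at runtime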

Set secret variable when using Docker in TravisCI

I'm building a backend with NodeJS and would like to use TravisCI and Docker to run tests.
In my code, I have a secret env var: process.env.SOME_API_KEY
This is my Dockerfile.dev
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
My docker compose:
version: "3"
services:
api:
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- /app/node_modules
- .:/app
ports:
- "3000:3000"
depends_on:
- mongo
mongo:
image: mongo:4.0.6
ports:
- "27017:27017"
And this is my TravisCI config:
sudo: required
services:
  - docker
before_script:
  - docker-compose up -d --build
script:
  - docker-compose exec api npm run test
I also set SOME_API_KEY='xxx' in my Travis settings variables. However, it seems that the container doesn't receive SOME_API_KEY.
How can I pass SOME_API_KEY from TravisCI to Docker? Thanks.
Containers in general do not inherit the environment from which they are run. Consider something like this:
export SOMEVARIABLE=somevalue
docker run --rm alpine sh -c 'echo $SOMEVARIABLE'
That will never print the value of $SOMEVARIABLE, because there is no magic process that imports environment variables from your local shell into the container. If you want a Travis environment variable exposed inside your Docker containers, you need to do that explicitly by adding an appropriate environment block to your docker-compose.yml. For example, I use the following docker-compose.yml:
version: "3"
services:
example:
image: alpine
command: sh -c 'echo $SOMEVARIABLE'
environment:
SOMEVARIABLE: "${SOMEVARIABLE}"
I can then run the following:
export SOMEVARIABLE=somevalue
docker-compose up
And see the following output:
Recreating docker_example_1 ... done
Attaching to docker_example_1
example_1 | somevalue
docker_example_1 exited with code 0
So you will need to write something like:
version: "3"
services:
api:
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- /app/node_modules
- .:/app
ports:
- "3000:3000"
depends_on:
- mongo
environment:
SOME_API_KEY: "${SOME_API_KEY}"
mongo:
image: mongo:4.0.6
ports:
- "27017:27017"
I had a similar issue and solved it by passing the environment variable to the container in the docker-compose exec command. If the variable is in the Travis environment, you can do:
sudo: required
services:
  - docker
before_script:
  - docker-compose up -d --build
script:
  - docker-compose exec -e SOME_API_KEY=$SOME_API_KEY api npm run test
