Using base images for services in docker-compose with different args

My Setup:
I have three services defined in my docker-compose.yml: frontend, backend, and postgresql. postgresql is pulled from Docker Hub.
frontend and backend are built from their own Dockerfiles. Most of the code in these Dockerfiles is the same; only the EXPOSE, ENTRYPOINT, CMD, and ARG values differ. That is why I wanted to create a 'base Dockerfile' that these two services can "include".
Sadly I found out that I cannot simply "include" one Dockerfile in another; I have to create an image.
So I tried to create a base image for frontend and backend in my docker-compose.yml:
services:
  frontend_base:
    image: frontend_base_image
    build:
      context: ./
      dockerfile: base.dockerfile
      args:
        - WORKDIR=/app/frontend/
        - TOOLSDIR=${PWD}/docker/tools
        - LOCALDIR=${PWD}/app/frontend/client
  backend_base:
    image: backend_base_image
    build:
      context: ./
      dockerfile: base.dockerfile
      args:
        - WORKDIR=/app/backend/
        - TOOLSDIR=${PWD}/docker/tools
        - LOCALDIR=${PWD}/app/backend/api
  frontend:
    depends_on:
      - frontend_base
    # Some more stuff for the service
  backend:
    depends_on:
      - backend_base
    # Some more stuff for the service
My 'base-Dockerfile':
FROM node:18
# Set in docker-compose.yml-file
ARG WORKDIR
ARG TOOLSDIR
ARG LOCALDIR
ENV WORKDIR=${WORKDIR}
# Install dumb-init for the init system
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64
RUN chmod +x /usr/local/bin/dumb-init
WORKDIR ${WORKDIR}
RUN mkdir -p ${WORKDIR}
# Copy package.json to the current workdir (for npm install)
COPY ${LOCALDIR}/package*.json ${WORKDIR}
# Install all packages (referenced in package.json)
RUN npm install
COPY ${TOOLSDIR}/start.sh /usr/local/bin/start.sh
COPY ${LOCALDIR}/ ${WORKDIR}
The Problem I am facing:
My frontend and backend Dockerfiles try to pull the base images from docker.io:
=> ERROR [docker-backend internal] load metadata for docker.io/library/backend_base_image:latest 0.9s
=> ERROR [docker-frontend internal] load metadata for docker.io/library/frontend_base_image:latest 0.9s
=> CANCELED [frontend_base_image internal] load metadata for docker.io/library/node:18
My Research:
I do not know if my approach is possible; I did not find many resources about this (integrated with docker-compose) online, only resources about building the images via shell and then using them in a Dockerfile. I tried that as well and ran into some other issues, where I could not provide the correct arguments to the base Dockerfile.
So first I wanted to find out whether this is possible with docker-compose at all.
I am sorry if this is super obvious and my question is dumb; I am relatively new to Docker.

We can use a multi-stage Containerfile (Dockerfile) to define all three images in a single file:
FROM node:18 AS base
# Set in docker-compose.yml-file
ARG WORKDIR
ARG TOOLSDIR
ARG LOCALDIR
ENV WORKDIR=${WORKDIR}
# Install dumb-init for the init system
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64
RUN chmod +x /usr/local/bin/dumb-init
WORKDIR ${WORKDIR}
RUN mkdir -p ${WORKDIR}
# Copy package.json to the current workdir (for npm install)
COPY ${LOCALDIR}/package*.json ${WORKDIR}
# Install all packages (referenced in package.json)
RUN npm install
COPY ${TOOLSDIR}/start.sh /usr/local/bin/start.sh
COPY ${LOCALDIR}/ ${WORKDIR}
FROM base AS frontend
...
FROM base AS backend
...
In our docker-compose.yml, we can then build a specific stage for the frontend and backend services:
...
  frontend:
    image: frontend
    build:
      context: ./
      target: frontend
      dockerfile: base.dockerfile
...
  backend:
    image: backend
    build:
      context: ./
      target: backend
      dockerfile: base.dockerfile
...
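Build args can still be supplied per service if the two stages need different values, just as in the question's original setup; a sketch, reusing the argument names from the question (paths assumed relative to the build context):
  frontend:
    image: frontend
    build:
      context: ./
      target: frontend
      dockerfile: base.dockerfile
      args:
        - WORKDIR=/app/frontend/
        - TOOLSDIR=./docker/tools
        - LOCALDIR=./app/frontend/client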

If you want a single base image with shared tools, you can do this almost exactly the way you describe; the one caveat is that you can't describe the base image in the docker-compose.yml file. You need to build it separately, outside of Compose:
docker build -t base-image -f base.dockerfile .
I would not try to install any application code in that base Dockerfile. Installing something like an init wrapper that needs to be shared across all of your application images does make sense there, though. I think it's fine to tie a Dockerfile to a specific source tree and image layout, and I don't typically recommend passing filesystem paths as ARGs.
# base.dockerfile
FROM node:18
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64 \
&& chmod +x /usr/local/bin/dumb-init
COPY docker/tools/start.sh /usr/local/bin/
ENTRYPOINT ["dumb-init", "--"]
CMD ["start.sh"]
The per-image Dockerfiles will look pretty similar – and like every other Node Dockerfile – but there's no harm in repeating this, in much the same way that your components probably have similar-looking but self-contained package.json files.
# */Dockerfile
FROM base-image
# WORKDIR also creates the directory
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY ./ ./
RUN npm run build
EXPOSE 3000
# CMD ["npm", "run", "start"] # if the start.sh from the base is wrong
This also gives you some flexibility to change things if the two image setups aren't identical: if you need an additional build step, if you want to run a dev server, or if you want to package the frontend into a lighter-weight Nginx server.
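As an example of that last variation, here is a sketch of a frontend image that packages the static build output into Nginx; the dist output directory and the base-image name are assumptions:
# hypothetical frontend/Dockerfile variant serving static files from Nginx
FROM base-image AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY ./ ./
RUN npm run build

FROM nginx:alpine
# assumes the build writes its static files to /app/dist
COPY --from=build /app/dist /usr/share/nginx/html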
In the Compose file you'd declare these normally with a build: block. Compose isn't aware of the base image and there's no way to tell it about it.
version: '3.8'
services:
  frontend:
    build: ./app/frontend/client
    ports: ['3000:3000']
  backend:
    build: ./app/backend/api
    ports: ['3001:3000']
One thing I've done here, which at least reduces the number of variable references, is to consistently use . as the current directory name. In the Compose file that's the directory containing the docker-compose.yml; on the left-hand side of COPY it's the build: context directory on the host; on the right-hand side of COPY it's the most recent WORKDIR. Using . where appropriate means you don't have to repeat the directory name, so you have a little flexibility if you need to rearrange your source tree or container filesystem.
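To make the three contexts concrete, a small annotated fragment (hypothetical paths):
# docker-compose.yml: '.' is the directory containing the docker-compose.yml
#     build: ./app/backend/api
# Dockerfile: on the left-hand side of COPY, '.' is the build context on the host;
# on the right-hand side, it is the most recent WORKDIR
WORKDIR /app
COPY . .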

Related

How should I dockerize and deploy a NestJS monorepo project?

I have a NestJS monorepo project with the structure below:
...
apps
  app1
  app2
  app3
...
If I got the idea correctly, I have the possibility to run all the applications at the same time, i.e. I run a command and have access to the apps at paths like http://my.domain/app1/, http://my.domain/app2/, http://my.domain/app3/, or in some similar way. And I need to put all the apps in Docker container(s) and run them from there.
I haven't found anything about this process. Did I understand the idea correctly, and where can I learn more about deploying a NestJS monorepo project?
This is how I solved it:
apps
  app1
    Dockerfile
    ...
  app2
    Dockerfile
    ...
  app3
    Dockerfile
    ...
docker-compose.yml
Each Dockerfile does the same:
FROM node:16.15.0-alpine3.15 AS development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM node:16.15.0-alpine3.15 AS production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production --omit=dev
COPY --from=development /usr/src/app/dist ./dist
CMD ["npm", "run", "start-app1:prod"]
The last line starts the application, so adjust it to your project's naming.
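For reference, start-app1:prod is assumed to be a script in package.json; with NestJS's default monorepo output layout (dist/apps/&lt;app&gt;), it might look like this (hypothetical naming):
{
  "scripts": {
    "start-app1:prod": "node dist/apps/app1/main"
  }
}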
Later you should build each of the images in your CI/CD pipeline and deploy them separately. To run docker build from the root folder of the project, you just need to provide the Dockerfile path with the -f parameter, for example:
docker build -f apps/app1/Dockerfile -t app1:version1 .
docker build -f apps/app2/Dockerfile -t app2:version1 .
docker build -f apps/app3/Dockerfile -t app3:version1 .
To run it locally for tests, use the docker-compose.yml:
version: '3.8'
services:
  app1:
    image: app1:version1
    ports:
      - 3000:3000 # set according to your project setup
  app2:
    ...
  app3:
    ...
And start it by calling docker compose up
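Putting it together, a local test run could look like this, with the tags taken from the build commands above:
docker build -f apps/app1/Dockerfile -t app1:version1 .
docker build -f apps/app2/Dockerfile -t app2:version1 .
docker build -f apps/app3/Dockerfile -t app3:version1 .
docker compose up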

Docker: Shared volume when building

I have these files:
docker-compose.yml (shortened):
version: '3.7'
services:
  php-fpm:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
      target: dev
    volumes:
      - .:/app
  frontend:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
      target: frontend
    volumes:
      - .:/app
docker/php/Dockerfile (shortened):
FROM alpine:3.13 AS frontend
WORKDIR /app
COPY . .
RUN apk add npm
RUN npm install
RUN npx webpack -p --color --progress
FROM php:7.4-fpm AS dev
ENTRYPOINT ["docker-php-entrypoint"]
WORKDIR /app
COPY ./docker/php/www-dev.conf /usr/local/etc/php-fpm.d/www.conf
CMD ["php-fpm"]
I want to use everything that is built in frontend (as I understand it, volumes are not available at build time) in the php-fpm container, but I get something like this: file_get_contents(/app/static/frontend.version): failed to open stream.
How can I do this? I don't understand Docker very well, and the only solution I have is to move the build script into the php-fpm container.
You need to delete the volumes: in your docker-compose.yml file. They replace the entire contents of the image's /app directory with content from the host, which means everything that gets done in the Dockerfile gets completely ignored.
The Dockerfile you show uses a setup called a multi-stage build. The important thing you can do with this is build the first part of your image using Node, then COPY --from=frontend the static files into the second part. You do not need to declare a second container in docker-compose.yml to run the first stage; the build sequence runs it automatically. At a minimum this looks like
COPY --from=frontend /app/build ./static
You will also need to COPY the rest of your application code into the image.
If you move the Dockerfile up to the top of your project's source tree, then the docker-compose.yml file becomes as simple as
version: '3.8'
services:
  php-fpm:
    build: . # default Dockerfile, default target (last stage)
    # do not overwrite application code with volumes:
    # no separate frontend: container
But you've put a little bit more logic in the Dockerfile. I might write:
# use a prebuilt Node image
FROM node:lts AS frontend
WORKDIR /app
# install dependencies first to save time on rebuilds
COPY package*.json .
RUN npm install
# (or COPY a more specific subdirectory?)
COPY . .
RUN npx webpack -p --color --progress

FROM php:7.4-fpm AS dev
WORKDIR /app
# (or COPY a more specific subdirectory?)
COPY . .
COPY --from=frontend /app/build ./static
COPY ./docker/php/www-dev.conf /usr/local/etc/php-fpm.d/www.conf
# no need to repeat unmodified ENTRYPOINT/CMD from the base image

Docker error: can't copy a file after building it

I'm trying to copy my ./dist after building my Angular app.
Here is my Dockerfile:
# Create image based off of the official Node 12 image
FROM node:12-alpine
RUN apk update && apk add --no-cache make git
RUN mkdir -p /home/project/frontend
# Change directory so that our commands run inside this new directory
WORKDIR /home/project/frontend
# Copy dependency definitions
COPY package*.json ./
RUN npm cache verify
## installing packages
RUN npm install
COPY ./ ./
RUN npm run build --output-path=./dist
COPY /dist /var/www/front
but when I run docker-compose build dashboard I get this error
Service 'dashboard' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builderxxx/dist: no such file or directory
I don't know why; is there something wrong?
In case you need to check it, here is the docker-compose file as well:
...
  dashboard:
    container_name: dashboard
    build: ./frontend
    image: dashboard
    restart: unless-stopped
    networks:
      - app-network
...
The Dockerfile COPY directive copies content from the build context (the host-system directory in the build: line) into the image. If you're just trying to move around content within the image, you can RUN cp or RUN mv to use the ordinary Linux shell commands instead.
RUN npm run build --output-path=./dist \
&& cp -a dist /var/www/front
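Alternatively, if /var/www/front is meant to be served by a separate web server, the multi-stage COPY --from pattern from the earlier answers works here too; a sketch, assuming an Nginx-based final image:
FROM node:12-alpine AS build
WORKDIR /home/project/frontend
COPY package*.json ./
RUN npm install
COPY ./ ./
RUN npm run build --output-path=./dist

FROM nginx:alpine
# assumes nginx is configured to serve from /var/www/front
COPY --from=build /home/project/frontend/dist /var/www/front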

Setting build args in docker-compose.yml using env_file

I'm trying to use Docker and Docker Compose to create a containerized app. I have a PubNub account, which allows me to use different API keys for different environments (dev, test, prod). To help me build images for this, I am trying to use build args set with an env_file.
It's not working.
WARNING: The PUB_KEY variable is not set. Defaulting to a blank string.
WARNING: The SUB_KEY variable is not set. Defaulting to a blank string.
Questions:
What mistake am I making in setting the build args?
How do I fix it?
Is this a good way to set ENV variables for the containers scan and flask?
Here is the docker-compose.yml content:
version: '3.6'
services:
  scan:
    env_file:
      - sample.env
    build:
      context: .
      dockerfile: Dockerfile
      args:
        pub_key: $PUB_KEY
        sub_key: $SUB_KEY
      target: scan
    image: bt-beacon/scan:v1
  flask:
    env_file:
      - sample.env
    build:
      context: .
      dockerfile: Dockerfile
      args:
        pub_key: $PUB_KEY
        sub_key: $SUB_KEY
      target: flask
    image: bt-beacon/flask:v1
    ports:
      - "5000:5000"
And the Dockerfile:
# --- BASE NODE ---
FROM python:3.6-jessie as base
ARG pub_key
ARG sub_key
RUN test -n "$pub_key"
RUN test -n "$sub_key"
# --- SCAN NODE ---
FROM base as scan
ENV PUB_KEY=$pub_key
ENV SUB_KEY=$sub_key
COPY app/requirements.scan.txt /
RUN apt-get update
RUN apt-get -y install bluetooth bluez bluez-hcidump python-bluez python-numpy python3-dev libbluetooth-dev libcap2-bin
RUN pip install -r /requirements.scan.txt
RUN setcap 'cap_net_raw,cap_net_admin+eip' $(readlink -f $(which python))
COPY app/src /app
WORKDIR /app
CMD ["./scan.py", "$pub_key", "$sub_key"]
# -- FLASK APP ---
FROM base as flask
ENV SUB_KEY=$sub_key
COPY app/requirements.flask.txt /
COPY app/src /app
RUN pip install -r /requirements.flask.txt
WORKDIR /app
EXPOSE 5000
CMD ["flask", "run"]
Finally, sample.env:
# PubNub app keys here
PUB_KEY=xyz1
SUB_KEY=xyz2
env_file can only set environment variables inside a service's container. Variables from an env_file cannot be interpolated into docker-compose.yml itself.
You have these options (described in detail in the Compose documentation):
inject these variables into the shell from which you run docker-compose up
create a .env file containing these variables (syntax identical to your sample.env); see the sketch below
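For the second option, the file must be named .env and live in the directory you run docker-compose from (by default the one containing docker-compose.yml); a minimal sketch, reusing the keys from sample.env:
# .env -- picked up automatically by docker-compose for interpolation
PUB_KEY=xyz1
SUB_KEY=xyz2
Running docker-compose config afterwards prints the interpolated file, so you can verify that the warnings are gone.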
Personally, I would separate the image-building process from the container-launching process (take the image-building responsibility away from docker-compose and move it into an external script; the build process can then be configured easily).
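For example, a hypothetical build.sh that takes over the image-building step (image names and tags taken from the compose file above):
#!/bin/sh
# build.sh -- hypothetical helper; reads the keys from sample.env
set -e
. ./sample.env
docker build --target scan --build-arg pub_key="$PUB_KEY" --build-arg sub_key="$SUB_KEY" -t bt-beacon/scan:v1 .
docker build --target flask --build-arg pub_key="$PUB_KEY" --build-arg sub_key="$SUB_KEY" -t bt-beacon/flask:v1 .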

Keep Docker intermediate layers in multistage build

I'm attempting to have a dev container and a "production" container built from a single Dockerfile. It already "works", but I do not have access to the dev container after the build (multi-stage intermediaries are cached, but not tagged in a useful way).
The Dockerfile is like so:
# See https://github.com/facebook/flow/issues/3649 why here
# is a separate one for a flow using image ... :(
FROM node:8.9.4-slim AS graphql-dev
WORKDIR /graphql-api
ENV PATH /graphql-api/node_modules/.bin:$PATH
RUN apt update && apt install -y libelf1
ADD ./.babelrc /graphql-api/
ADD ./.eslintignore /graphql-api/
ADD ./.eslintrc /graphql-api/
ADD ./.flowconfig /graphql-api/
ADD ./.npmrc /graphql-api/
ADD ./*.json5 /graphql-api/
ADD ./lib/ /graphql-api/lib
ADD ./package.json /graphql-api/
ADD ./schema/ /graphql-api/schema
ADD ./yarn.lock /graphql-api/
RUN yarn install --production --silent && npm install --silent
CMD ["npm", "run", "lint-flow-test"]
# Cleans node_modules etc, see github.com/tj/node-prune
# this container contains no node, etc (golang:latest)
FROM golang:latest AS graphql-cleaner
WORKDIR /graphql-api
ENV PATH /graphql-api/node_modules/.bin:$PATH
COPY --from=graphql-dev graphql-api .
RUN go get github.com/tj/node-prune/cmd/node-prune
RUN node-prune
# Minimal end-container (Alpine 💖)
FROM node:8.9.4-alpine
WORKDIR /graphql-api
ENV PATH /graphql-api/node_modules/.bin:$PATH
COPY --from=graphql-cleaner graphql-api .
EXPOSE 3000
CMD ["npm", "start"]
Ideally I'd be able to start graphql-dev and the final container both with a docker-compose.yml, as so:
version: '3'
services:
  graphql-dev:
    image: graphql-dev
    build: ./Dockerfile
    volumes:
      - ./lib:/graphql-api/lib
      - ./schema:/graphql-api/schema
  graphql-prod:
    image: graphql
    build: ./Dockerfile
The two final steps, the "shrinking" for the final build (it saves over 250 MB for us), are not really required except in the production build.
If I extract the Dockerfile into two, somehow Dockerfile.prod and Dockerfile.dev, then I have to manage dependencies between them, as I can't force prod to always build dev (can I?).
If I were somehow able to specify target on the build in the docker-compose.yml file I could do it; there were some issues about this, and specifying a target under build in my yml file yields an error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.graphql-dev.build contains unsupported option: 'target'
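For reference, the target option used in the first answer on this page was added in compose file format 3.4, so one way past this error is to bump the version line and point each service at its stage; a sketch, assuming the stage names from the Dockerfile above:
version: '3.4'
services:
  graphql-dev:
    image: graphql-dev
    build:
      context: .
      target: graphql-dev
    volumes:
      - ./lib:/graphql-api/lib
      - ./schema:/graphql-api/schema
  graphql-prod:
    image: graphql
    build:
      context: .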
