multi-stage build in docker compose? - docker

How can I specify a multi-stage build within a docker-compose.yml?
For each variant (e.g. dev, prod...) I have a multi-stage build with 2 Dockerfiles:
dev: Dockerfile.base + Dockerfile.dev
or prod: Dockerfile.base + Dockerfile.prod
File Dockerfile.base (common for all variants):
FROM python:3.6
RUN apt-get update && apt-get upgrade -y
RUN pip install pipenv pip
COPY Pipfile ./
# some more common configuration...
File Dockerfile.dev:
FROM flaskapp:base
RUN pipenv install --system --skip-lock --dev
ENV FLASK_ENV development
ENV FLASK_DEBUG 1
File Dockerfile.prod:
FROM flaskapp:base
RUN pipenv install --system --skip-lock
ENV FLASK_ENV production
Without docker-compose, I can build as:
# Building dev
docker build --tag flaskapp:base -f Dockerfile.base .
docker build --tag flaskapp:dev -f Dockerfile.dev .
# or building prod
docker build --tag flaskapp:base -f Dockerfile.base .
docker build --tag flaskapp:prod -f Dockerfile.prod .
According to the compose-file doc, I can specify a Dockerfile to build.
# docker-compose.yml
version: '3'
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
But how can I specify 2 Dockerfiles in docker-compose.yml (for multi-stage build)?

As mentioned in the comments, a multi-stage build involves a single Dockerfile that performs multiple stages. What you have is a common base image.
You could convert these to a non-traditional multi-stage build with a syntax like the following (I say non-traditional because you do not perform any copying between the stages and instead use just the FROM line to pick up from a prior stage):
FROM python:3.6 as base
RUN apt-get update && apt-get upgrade -y
RUN pip install pipenv pip
COPY Pipfile ./
# some more common configuration...
FROM base as dev
RUN pipenv install --system --skip-lock --dev
ENV FLASK_ENV development
ENV FLASK_DEBUG 1
FROM base as prod
RUN pipenv install --system --skip-lock
ENV FLASK_ENV production
Then you can build one stage or another using docker build's --target flag (an example follows the compose file below), or a compose file like:
# docker-compose.yml
version: '3.4'
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile
      target: prod
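For completeness, building a single stage directly with docker build uses that --target flag (tags follow the question's naming):
docker build --target dev -t flaskapp:dev .
docker build --target prod -t flaskapp:prod .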
The biggest downside is that the current build engine will run through every stage until it reaches the target. Build caching can mean that's only a sub-second process. BuildKit, which is coming out of experimental in 18.09 and will need upstream support from docker-compose, will be more intelligent about running only the commands needed to build the desired target.
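For reference, where BuildKit is available it can be enabled per command through environment variables (a sketch; the flaskapp tag follows the question's naming, and COMPOSE_DOCKER_CLI_BUILD applies only to later docker-compose releases):
DOCKER_BUILDKIT=1 docker build --target dev -t flaskapp:dev .
COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build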
All that said, I believe this is trying to fit a square peg in a round hole. The docker-compose developers are encouraging users to move away from doing the build within the compose file itself, since it's not supported in swarm mode. Instead, the recommended solution is to perform builds with a CI/CD build server and push those images to a registry. Then you can run the same compose file with docker-compose, docker stack deploy, or even some k8s equivalents, without needing to redesign your workflow.

You can also combine docker-compose files, each with a dockerfile entry pointing at one of your existing Dockerfiles, and run docker-compose -f docker-compose.yml -f docker-compose.prod.yml build
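A minimal sketch of that layout, assuming the Dockerfile names from the question (values in the second file override those in the first when both are passed with -f):
# docker-compose.yml
version: '3'
services:
  webapp:
    build:
      context: .
      dockerfile: Dockerfile.dev
# docker-compose.prod.yml
version: '3'
services:
  webapp:
    build:
      context: .
      dockerfile: Dockerfile.prod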

Related

How to let a docker container install the latest dependency versions

I have a Docker image that is built from a Dockerfile. It has a pip installation inside with a custom library. Now the image is used in a docker-compose service.
requirements.txt
example-library-1
example-library-2
Dockerfile
FROM python:3.7-alpine
WORKDIR /code
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]
docker build -t example_user/example_image .
docker-compose.yml
version: "3.9"
services:
web:
image: example_user/example_image
ports:
- "8000:5000"
When there is a new version of example-library-1, how do I update the docker-compose service to use the new library?
I tried docker-compose up -d and docker-compose restart web, but they do not check for new library versions.
Is there a way to do this without docker-compose down and docker-compose up -d?
You can add a build key to the compose file.
services:
  web:
    image: example_user/example_image
    build:
      context: path/to/context
      dockerfile: path/to/Dockerfile
Then you run compose with the build flag.
docker compose up --build
That said, compose does not have anything like rolling updates. You need to stop the compose service and run up with the build flag in order to rebuild.
If you need rolling updates, you need to use a different orchestrator, such as Swarm or Kubernetes.
The docker-compose build flag is a good idea.
On top of that, I recommend the --no-cache flag, because BuildKit may cache some layers.
Check the documentation here.
Don't forget to check the DOCKER_BUILDKIT environment variable too.
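Putting those together, a rebuild that bypasses cached layers and then recreates the service might look like this (web is the service name from the compose file above):
docker compose build --no-cache web
docker compose up -d web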

How should I dockerize and deploy a NestJS monorepo project?

I have a NestJS monorepo project with the structure below:
...
apps
  app1
  app2
  app3
...
If I understand the idea correctly, I can run all the applications at the same time, i.e. I run a command and can access the apps at paths like http://my.domain/app1/, http://my.domain/app2/, http://my.domain/app3/, or in some similar way. And I need to put all the apps in a Docker container (or containers) and run them from there.
I haven't found anything about this process. Did I understand the idea correctly, and where can I learn more about deploying a NestJS monorepo project?
This is how I solved it:
apps
  app1
    Dockerfile
    ...
  app2
    Dockerfile
    ...
  app3
    Dockerfile
    ...
docker-compose.yml
Each Dockerfile does the same:
FROM node:16.15.0-alpine3.15 AS development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM node:16.15.0-alpine3.15 AS production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production --omit=dev
COPY --from=development /usr/src/app/dist ./dist
CMD ["npm", "run", "start-app1:prod"]
The last line starts the application, so adjust it to your project's naming.
Later you should build each of the images in your CI/CD pipeline and deploy them separately. To run docker build from the root folder of the project, you just need to provide the Dockerfile path via the -f parameter, for example:
docker build -f apps/app1/Dockerfile -t app1:version1 .
docker build -f apps/app2/Dockerfile -t app2:version1 .
docker build -f apps/app3/Dockerfile -t app3:version1 .
To run it locally for tests, utilize docker-compose.yml
version: '3.8'
services:
  app1:
    image: app1:version1
    ports:
      - 3000:3000 # set according to your project setup
  app2:
    ...
  app3:
    ...
And start it by calling docker compose up

Docker force-build parent image

I'm running a multi-service application with several images. The environments of the images are pretty similar, so, in order to avoid code duplication, a "base" image is created and tagged with the required programs/configuration. Then, this "base" image is used as the parent image for the various "application" images. An (illustrative) example is given below:
dockerfile_base: which I build with docker build -f dockerfile_base -t app_base:latest .
FROM ubuntu:latest
RUN apt-get update && apt-get install -y \
    build-essential
dockerfile_1: which is built with docker build -f dockerfile_1 -t app_1 .
FROM app_base:latest
COPY . .
RUN make test
And finally an example dockerfile_2 which describes a different service based again on "app_base" and is built with docker build -f dockerfile_2 -t app_2 .
FROM app_base:latest
COPY . .
RUN make deploy
Usually, the "base" image is built manually at first. Then, the "app" images are also manually built. Finally, the services (images app_1, app_2, etc.) are run using docker run for tests or docker-compose for demo deployment.
This creates an issue: when working in a new workspace (e.g. on a newcomer's PC) where no Docker images have been created yet, or when something changes in dockerfile_base, running just the docker build commands for the app images will result in an error or in incorrect images. So, the question is: is there a way in Docker to define these chained builds? I guess that's difficult with the docker build command, but would it be possible with docker-compose?
OK, so this is what I came up with which essentially streamlines the whole multi-build multi-image process with just 2 commands. The docker-compose.yaml file was created like this:
version: "3.4"
services:
# dummy service used only for building the images
dummy_app_base:
image: app_base:latest
build:
dockerfile: "${PWD}/dockerfile_base"
context: "${PWD}"
command: [ "echo", "\"dummy_app_base:latest EXIT\"" ]
app_1:
image: app_1:latest
build:
dockerfile: "${PWD}/dockerfile_1"
context: "${PWD}"
app_2:
image: app_2:latest
build:
dockerfile: "${PWD}/dockerfile_2"
context: "${PWD}"
So, to build all the images, I simply run docker-compose build. The build command builds and tags all the images in the order they appear in the docker-compose.yaml file, so by the time app_1 and app_2 are built, the dependency app_base:latest already exists. Then everything is run with docker-compose up. Note: this WILL create a dangling container for the dummy_app_base service, but since its command is overridden with an echo, it simply exits immediately.
edit: even in one command: docker-compose up --build
Multi-stage builds were invented for problems like this. An example might be:
FROM ubuntu:latest as app_base
RUN apt-get update && apt-get install -y build-essential
FROM app_base as app_name
COPY . .
RUN make
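Each image can still be built separately from this single Dockerfile using --target (a sketch assuming the stages live in a file named Dockerfile at the context root; tags follow the question's naming):
# build and tag only the base stage
docker build --target app_base -t app_base:latest .
# build through the final stage
docker build -t app_1 .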

Docker-compose new file to run 3 build commands

My question is related to docker-compose. I need to create a new docker-compose.yml file for only one purpose: to run 3 docker build commands like:
docker build --target node-sdk -f ./Dockerfile.sdk -t casino-node-sdk:12.16.3 .
How can I do this, given that I don't really have services to run, a context, or images?
What I've tried
version: "3.8"
services:
build:
command: docker build --target node-sdk -f ./Dockerfile.sdk -t casino-node-sdk:12.16.3 .
command: docker build --target node-sdk-ssh -f ./Dockerfile.sdk -t casino-node-sdk-ssh:12.16.3 .
command: docker build --target node-run -f ./Dockerfile.sdk -t casino-node-run:12.16.3 .
Error
Service build has neither an image nor a build context specified. At least one must be provided.
Could you please help me with some ideas? I am a beginner at this.
For example, let us assume this is the content of the Dockerfile you have. The stage aliases are base, dev and prod.
Dockerfile
FROM python:3.6 as base
RUN apt-get update && apt-get upgrade -y
RUN pip install pipenv pip
COPY Pipfile ./
# some more common configuration...
FROM base as dev
RUN pipenv install --system --skip-lock --dev
ENV FLASK_ENV development
ENV FLASK_DEBUG 1
FROM base as prod
RUN pipenv install --system --skip-lock
ENV FLASK_ENV production
docker-compose.yml
version: '3.8'
services:
  app1:
    build:
      context: ./dir
      dockerfile: Dockerfile
      target: base
  app2:
    build:
      context: ./dir
      dockerfile: Dockerfile
      target: dev
  app3:
    build:
      context: ./dir
      dockerfile: Dockerfile
      target: prod
Here target names the stage alias used in the Dockerfile; it can be base, dev, or prod.
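All three images can then be built at once with docker-compose build, or one service at a time; for example, to build only the dev-stage image:
docker-compose build app2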

Keep Docker intermediate layers in multistage build

I'm attempting to have a dev container and a "production" container built from a single Dockerfile. It already "works", but I do not have access to the dev container after the build (multi-stage intermediaries are cached, but not tagged in a useful way).
The Dockerfile is as-so:
# See https://github.com/facebook/flow/issues/3649 why here
# is a separate one for a flow using image ... :(
FROM node:8.9.4-slim AS graphql-dev
WORKDIR /graphql-api
ENV PATH /graphql-api/node_modules/.bin:$PATH
RUN apt update && apt install -y libelf1
ADD ./.babelrc /graphql-api/
ADD ./.eslintignore /graphql-api/
ADD ./.eslintrc /graphql-api/
ADD ./.flowconfig /graphql-api/
ADD ./.npmrc /graphql-api/
ADD ./*.json5 /graphql-api/
ADD ./lib/ /graphql-api/lib
ADD ./package.json /graphql-api/
ADD ./schema/ /graphql-api/schema
ADD ./yarn.lock /graphql-api/
RUN yarn install --production --silent && npm install --silent
CMD ["npm", "run", "lint-flow-test"]
# Cleans node_modules etc, see github.com/tj/node-prune
# this container contains no node, etc (golang:latest)
FROM golang:latest AS graphql-cleaner
WORKDIR /graphql-api
ENV PATH /graphql-api/node_modules/.bin:$PATH
COPY --from=graphql-dev graphql-api .
RUN go get github.com/tj/node-prune/cmd/node-prune
RUN node-prune
# Minimal end-container (Alpine 💖)
FROM node:8.9.4-alpine
WORKDIR /graphql-api
ENV PATH /graphql-api/node_modules/.bin:$PATH
COPY --from=graphql-cleaner graphql-api .
EXPOSE 3000
CMD ["npm", "start"]
Ideally I'd be able to start graphql-dev and the final container both with a docker-compose.yml, as so:
version: '3'
services:
  graphql-dev:
    image: graphql-dev
    build: ./Dockerfile
    volumes:
      - ./lib:/graphql-api/lib
      - ./schema:/graphql-api/schema
  graphql-prod:
    image: graphql
    build: ./Dockerfile
The two final steps, the "shrinking" for the final build (which saves over 250MB for us), are not really required except in the production build.
If I extract the Dockerfile into two, say Dockerfile.prod and Dockerfile.dev, then I have to manage dependencies between them, as I can't force prod to always build dev (can I?).
If I were somehow able to specify target under build in the docker-compose.yml file I could do it. There were some issues, and specifying a target under build in my yml file yields an error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.graphql-dev.build contains unsupported option: 'target'
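Note: the target build option exists only in compose file format 3.4 and later (as used in an answer above), which explains the error under version: '3'. A sketch of a working file, assuming the Dockerfile sits next to the compose file:
version: '3.4'
services:
  graphql-dev:
    image: graphql-dev
    build:
      context: .
      target: graphql-dev
    volumes:
      - ./lib:/graphql-api/lib
      - ./schema:/graphql-api/schema
  graphql-prod:
    image: graphql
    build:
      context: .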
