Using docker --squash in docker-compose when building images

Is there a way to use the --squash option in docker-compose when building new docker images? Docker added --squash about six months ago, but I have not seen any docs on how to use it from docker-compose.yml.
Is there a workaround here? (I see an open issue filed requesting this feature.)

Instead of using --squash, you can use Docker multi-stage builds.
Here is a simple example for a Python app that uses the Django web framework. We want to separate the testing dependencies into a different image, so that we do not deploy them to production. Additionally, we want to separate our automated documentation utilities from our test utilities.
Here is the Dockerfile:
# the AS keyword lets us name the stage
FROM python:3.6.7 AS base
WORKDIR /app
RUN pip install django
# base is the stage we defined above
FROM base AS testing
RUN pip install pytest
# same base as above
FROM base AS documentation
RUN pip install sphinx
To build the different images from this file, we need the --target flag for docker build. The argument to --target should be the stage name given after the AS keyword in the Dockerfile.
Build the base image:
docker build --target base --tag base .
Build the testing image:
docker build --target testing --tag testing .
Build the documentation image:
docker build --target documentation --tag documentation .
This lets you build multiple images that branch from the same base stage, which can significantly reduce build time for larger images.
You can also use multi-stage builds in Docker Compose. As of version 3.4 of the docker-compose.yml file format, you can use the target keyword in your YAML.
Here is a docker-compose.yml file that references the Dockerfile above:
version: '3.4'
services:
  testing:
    build:
      context: .
      target: testing
  documentation:
    build:
      context: .
      target: documentation
If you run docker-compose build with this docker-compose.yml, it will build the testing and documentation images defined in the Dockerfile. As with any other docker-compose.yml, you can also add ports, environment variables, runtime commands, and so on.
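For example, the testing service could be extended like this (the port, environment variable, and override command below are hypothetical, shown only to illustrate the shape):
version: '3.4'
services:
  testing:
    build:
      context: .
      target: testing
    ports:
      - "8000:8000"
    environment:
      - DJANGO_SETTINGS_MODULE=myproject.settings.test
    command: pytest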

You can achieve a squash-like result with a trick like this:
FROM oracle AS needs-squashing
ENV NEEDED_VAR some_value
COPY ./giant.zip ./somewhere/giant.zip
RUN echo "install giant in zip"
RUN rm ./somewhere/giant.zip
FROM scratch
COPY --from=needs-squashing / /
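One caveat: FROM scratch starts with an empty image configuration, so metadata such as the ENV NEEDED_VAR set in the first stage is not carried over by COPY --from and must be redeclared in the final stage if you need it. Building the flattened image is then just the usual command (the tag here is made up):
docker build -t oracle-squashed .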

Related

Dockerfile with extension

Is it possible to have multiple Dockerfiles with different extensions, to build some services separately or for other use cases? For example:
Dockerfile.web
Dockerfile.celery
and why?
You may have multiple Dockerfiles, but you can only use one at a time when building an image. By default, docker build looks for a file named Dockerfile. To specify another file, use the -f flag.
docker build .                    # uses Dockerfile
docker build -f Dockerfile .      # does the same as when run without -f
docker build -f Dockerfile.web .  # uses Dockerfile.web
https://docs.docker.com/engine/reference/commandline/build/
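If the goal is to run those services together, a docker-compose.yml can point each service at its own Dockerfile via the dockerfile: key. A minimal sketch (the service names come from the question; everything else is illustrative):
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.web
  celery:
    build:
      context: .
      dockerfile: Dockerfile.celery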

Docker force-build parent image

I'm running a multi-service application with several images. The environments of the images are much the same, so, to avoid code duplication, a "base" image is created and tagged with the required programs/configuration. This "base" image is then used as the parent image for the various "application" images. An (illustrative) example is given below:
dockerfile_base: which I build with docker build -f dockerfile_base -t app_base:latest .
FROM ubuntu:latest
RUN apt-get update && apt-get install -y build-essential
dockerfile_1: which is built with docker build -f dockerfile_1 -t app_1 .
FROM app_base:latest
COPY . .
RUN make test
And finally an example dockerfile_2 which describes a different service based again on "app_base" and is built with docker build -f dockerfile_2 -t app_2 .
FROM app_base:latest
COPY . .
RUN make deploy
Usually, the "base" image is built manually at first. Then, the "app" images are also manually built. Finally, the services (images app_1, app_2, etc.) are run using docker run for tests or docker-compose for demo deployment.
This creates an issue: when working in a new workspace (e.g. a newcomer's PC) where no docker images have been created yet, or when something changes in dockerfile_base, running just the docker build command for the app images will fail or produce images based on an outdated base. So, the question is: is there a way in docker to define these chained builds? I guess that's difficult for the docker build command, but would it be possible with docker-compose?
OK, so this is what I came up with; it streamlines the whole multi-build, multi-image process into just two commands. The docker-compose.yaml file was created like this:
version: "3.4"
services:
# dummy service used only for building the images
dummy_app_base:
image: app_base:latest
build:
dockerfile: "${PWD}/dockerfile_base"
context: "${PWD}"
command: [ "echo", "\"dummy_app_base:latest EXIT\"" ]
app_1:
image: app_1:latest
build:
dockerfile: "${PWD}/dockerfile_1"
context: "${PWD}"
app_2:
image: app_2:latest
build:
dockerfile: "${PWD}/dockerfile_2"
context: "${PWD}"
So, to build all the images, I simply run docker-compose build. The build command builds and tags all the images in the order they appear in the docker-compose.yaml file, so by the time app_1 and app_2 are built, their dependency app_base:latest already exists. Then everything is run with docker-compose up. Note: this WILL create a dangling container for the dummy_app_base service, but since its command is overridden with an echo, it simply exits immediately.
Edit: it even works as a single command: docker-compose up --build
Multi-stage builds were invented for problems like this. An example might be:
FROM ubuntu:latest AS app_base
RUN apt-get update && apt-get install -y build-essential

FROM app_base AS app_name
COPY . .
RUN make
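Each stage of that single Dockerfile can then be built and tagged on its own with --target, so the app images share the base layers (the tag names here are illustrative):
docker build --target app_base -t app_base:latest .
docker build --target app_name -t app_1:latest .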

What is the purpose of building a docker image inside a compose file?

I was reading Quickstart: Compose and Django when I came across "defining a build in a compose file". I've seen it before, but what I'm curious about here is its purpose. I just can't get it.
Why don't we just build the image once (or update it whenever we want) and use it multiple times in different docker-compose files?
Here is the Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
And here is docker-compose.yml:
version: '3'
services:
  web:
    # <<<<
    # Why not build the image and use it here like "image: my/django"?
    # <<<<
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
You might say, "well, do as you wish!" The reason I'm asking is that I think there might be some benefits I'm not aware of.
PS:
I mostly use Docker for bringing up services (DNS, monitoring, etc.); I've never used it for development.
I have already read What is the difference between `docker-compose build` and `docker build`?
There's no technical difference between running docker build yourself and naming the result with image: in the docker-compose.yml file, versus specifying the build: metadata directly in the docker-compose.yml.
The benefits of using docker-compose build to build images are more or less the same as those of using docker-compose up to run containers. If you have a complex set of -f path/Dockerfile --build-arg ... options, you can write those out once in the build: block and not have to repeat them. If you have multiple custom images that need to be built, then docker-compose build can build them all in one shot.
In practice you'll frequently be iterating on your containers, which means you will need to run local unit tests, then rebuild images, then relaunch containers. Being able to drive the Docker end of this via docker-compose down; docker-compose up --build is more convenient than remembering all of the individual docker build commands you need to run.
The one place where this doesn't work well is if you have a custom base image. So if you have a my/base image, and your application image is built FROM my/base, you need to explicitly run
docker build -t my/base base
docker build -t my/app app
docker run ... my/app
Compose doesn't help with the multi-level docker-build sequence; you'll have to explicitly docker build the base image.
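If typing those commands gets old, a small wrapper script is one way around it; a minimal sketch assuming the base/ and app/ build contexts from the example above:
#!/bin/sh
set -e
# build the custom base image first; Compose will not do this for you
docker build -t my/base base
# then let Compose build and start the images that depend on it
docker-compose up --build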

How to change source code without rebuilding image in Docker?

What is the best practice for using a Docker container for dev/prod?
Let's say I want my changes to be applied automatically during development, without rebuilding and restarting images. As far as I understand, I can inject a volume for this when running the container:
docker run -v `pwd`/src:/src --rm -it username/node-web-app0
where `pwd`/src is the directory containing the source code. It's working fine so far.
But how do I deliver code to production? I think it's worse to keep the code alongside the binaries inside the docker container. Do I need to create another, similar Dockerfile that uses COPY instead? Or is it better to deploy the source code separately, as in dev mode, and mount it into docker?
The best practice is to build a new docker image for every version of your code. That has many advantages in production environments, such as faster deployments, independence from other systems, easier rollbacks, exportability, etc.
It is possible to handle both cases within the same Dockerfile, using multi-stage builds.
The following is a simple example for a NodeJS app:
# dev stage: only the runtime is baked in; the code is mounted at run time
FROM node:10 AS dev
WORKDIR /src
CMD ["node", "myapp.js"]

# production stage: extends dev (inheriting WORKDIR and CMD) and bakes the code in
FROM dev
COPY package.json .
RUN npm install
COPY . .
Note that this Dockerfile is only for demo purposes; it can be improved in many ways.
When working in the dev environment, use the following commands to build the dev stage and run your code with a mounted folder:
docker build --target dev -t username/node-web-app0 .
docker run -v `pwd`/src:/src --rm -it username/node-web-app0
And when you're ready for production, just run docker build without the --target argument to build the full image, which contains the code:
docker build -t username/node-web-app0:v0.1 .
docker push username/node-web-app0:v0.1
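On the production host, the versioned image can then be pulled and run as-is (a hypothetical invocation; the published port depends on the app):
docker run -d -p 3000:3000 username/node-web-app0:v0.1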

How to set image name in Dockerfile?

You can set the image name when building a custom image, like this:
docker build -t dude/man:v2 . # Will be named dude/man:v2
Is there a way to define the name of the image in the Dockerfile itself, so I don't have to mention it in the docker build command?
Using -t on invocation
How to build an image with custom name without using yml file:
docker build -t image_name .
How to run a container with custom name:
docker run -d --name container_name image_name
Workaround using docker-compose
Tagging of the image isn't supported inside the Dockerfile. This needs to be done in your build command. As a workaround, you can do the build with a docker-compose.yml that identifies the target image name and then run a docker-compose build. A sample docker-compose.yml would look like
version: '2'
services:
  man:
    build: .
    image: dude/man:v2
That said, there's a push against doing the build with compose since that doesn't work with swarm mode deploys. So you're back to running the command as you've given in your question:
docker build -t dude/man:v2 .
Personally, I tend to build with a small shell script in my folder (build.sh) which passes any args and includes the name of the image there to save typing. And for production, the build is handled by a ci/cd server that has the image name inside the pipeline script.
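Such a build.sh can be tiny (a sketch: the image name is the one from the question, and any extra flags are passed through to docker build):
#!/bin/sh
# build.sh -- keeps the image name in one place
docker build -t dude/man:v2 "$@" .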
Workaround using docker-compose
Here is another version if you have to reference a specific docker file:
version: "3"
services:
nginx:
container_name: nginx
build:
context: ../..
dockerfile: ./docker/nginx/Dockerfile
image: my_nginx:latest
Then you just run
docker-compose build
My Dockerfile-only solution is adding a shebang line:
#!/usr/bin/env -S docker build . --tag=dude/man:v2 --network=host --file
FROM ubuntu:22.04
# ...
Then chmod +x Dockerfile and ./Dockerfile is good to go.
I can even add more docker build command-line arguments, like specifying the host network.
NOTE: env with -S/--split-string support is only available in newer coreutils versions (8.30 and later).
With a specific Dockerfile you could try:
docker build --tag <Docker Image name> --file <specific Dockerfile> .
for example
docker build --tag second --file Dockerfile_Second .
Workaround using Docker (and a Makefile)
Generally in Docker you can't specify, inside the Dockerfile, what you want the image to be tagged as. So what you do is:
Create a Dockerfile
Create a Makefile:
.PHONY: all
all:
	docker build -t image_name .
Use make instead of invoking docker build directly
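A slightly more flexible sketch parameterizes the tag, so plain make uses a default while make IMAGE=dude/man:v2 overrides it (IMAGE is a made-up variable name):
IMAGE ?= image_name

.PHONY: all
all:
	docker build -t $(IMAGE) .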
Or, use buildah
But here is a better idea... Don't build images with Docker! Instead, build them with buildah, the build tool from the podman crew. It uses shell (or any language), makes building in the cloud easy (without needing a separate project like kaniko), and allows rootless building of images! At the end of the build script, just save the image with buildah commit. Here is what it looks like.
#!/bin/sh
# Create a new offline container from the `alpine:3` image, return the id.
ctr=$(buildah from "alpine:3")
# Create a new mount, return the path on the host.
mnt=$(buildah mount "$ctr")
# Copy files to the mount
cp -Rv files/* "$mnt/"
# Do some things or whatever
buildah config --author "Evan Carroll" --env "FOO=bar" -- "$ctr"
# Run a script inside the container
buildah run "$ctr" -- /bin/sh <<EOF
echo "This is just a regular shell script"
echo "Do all the things."
EOF
# Another one, same layer though
buildah run "$ctr" -- /bin/sh <<EOF
echo "Another one!"
echo "No excess layers created as with RUN."
EOF
# Commit this container as "myImageName"
buildah commit -- "$ctr" "myImageName"
Now you don't have to hack around with a Makefile. You have one shell script that does everything, and it is far more powerful than a Dockerfile.
Side note: buildah can also build from Dockerfiles (using buildah bud), but that won't help here, since the shortcoming lies with the Dockerfile itself.
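Because buildah and podman share the same local image storage, the committed image can be run right away as a quick check:
podman run --rm myImageName echo "hello from the committed image"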
