I'm running a multi-service application with several images. The environments of the images are largely the same, so, to avoid duplication, a "base" image is created and tagged with the required programs/configuration. This "base" image is then used as the parent image for the various "application" images. An (illustrative) example is given below:
dockerfile_base: which I build with docker build -f dockerfile_base -t app_base:latest .
FROM ubuntu:latest
RUN apt-get update && apt-get install -y \
    build-essential
dockerfile_1: which is built with docker build -f dockerfile_1 -t app_1 .
FROM app_base:latest
COPY . .
RUN make test
And finally an example dockerfile_2 which describes a different service based again on "app_base" and is built with docker build -f dockerfile_2 -t app_2 .
FROM app_base:latest
COPY . .
RUN make deploy
Usually, the "base" image is built manually at first. Then, the "app" images are also manually built. Finally, the services (images app_1, app_2, etc.) are run using docker run for tests or docker-compose for demo deployment.
This creates an issue: when working in a new workspace (e.g. a newcomer's PC) where no docker images exist yet, or when something changes in dockerfile_base, running just the docker build command for the app images will either fail or produce images built from a stale base. So the question is: is there a way in docker to define these chained builds? I guess that's difficult with the docker build command alone, but would it be possible with docker-compose?
OK, so this is what I came up with which essentially streamlines the whole multi-build multi-image process with just 2 commands. The docker-compose.yaml file was created like this:
version: "3.4"
services:
# dummy service used only for building the images
dummy_app_base:
image: app_base:latest
build:
dockerfile: "${PWD}/dockerfile_base"
context: "${PWD}"
command: [ "echo", "\"dummy_app_base:latest EXIT\"" ]
app_1:
image: app_1:latest
build:
dockerfile: "${PWD}/dockerfile_1"
context: "${PWD}"
app_2:
image: app_2:latest
build:
dockerfile: "${PWD}/dockerfile_2"
context: "${PWD}"
So, to build all the images, I simply run docker-compose build. The build command builds and tags the images in the order they appear in the docker-compose.yaml file, so by the time app_1 and app_2 are built, the dependency app_base:latest already exists. Then everything is run with docker-compose up. Note: this WILL create a stopped container for the dummy_app_base service, but since its command is overridden with an echo, it exits immediately.
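So on a fresh workspace the whole chain boils down to two commands:

docker-compose build   # builds app_base:latest first, then app_1 and app_2
docker-compose up      # starts the services; dummy_app_base echoes and exits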
Edit: it can even be done in one command: docker-compose up --build
Multi-stage builds were invented for problems like this. An example might be:
FROM ubuntu:latest as app_base
RUN apt-get update && apt-get install -y build-essential
FROM app_base as app_name
COPY . .
RUN make
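Each stage can also be built on its own with docker build's --target flag; a quick sketch, using the stage names from the example above:

docker build --target app_base -t app_base:latest .   # stop after the base stage
docker build -t app_name:latest .                     # build the full image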
Related
I am working on a docker app. The purpose of this repo is to output some json into a volume. I am using a Dockerfile, docker-compose and a Makefile; I'll show the contents of each file below. The goal/desired outcome is that when I run make up, the container runs and outputs the json.
Directory looks like this:
docker-compose.yaml
Dockerfile
Makefile
main/ # a directory
Here are the contents of the main/ directory:
example.R
Not sure of the best order to show these files. Throughout my setup I refer to a variable $PROJECTS_DIR, which is set globally on the host:
echo $PROJECTS_DIR
/home/doug/Projects
Here are my files:
docker-compose.yaml:
version: "3.5"
services:
nextzen_ga_extract_marketing:
build:
context: .
environment:
start_date: "2020-11-18"
start_date: "2020-11-19"
volumes:
- ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline:/home/rstudio/Projects/nextzen_google_analytics_extract_pipeline
Dockerfile:
FROM rocker/tidyverse:latest
ADD main main
WORKDIR "/main"
RUN apt-get update && apt-get install -y \
    less \
    vim
ENTRYPOINT ["Rscript", "example.R"]
Makefile:
.PHONY: build
build:
	docker-compose build

.PHONY: up
up:
	docker-compose pull
	docker-compose up -d

.PHONY: restart
restart:
	docker-compose restart

.PHONY: down
down:
	docker-compose down
Here are the contents of example.R, the file inside the main/ directory:
library(jsonlite)
unlink("../output_data", recursive = TRUE) # delete any existing data from previous runs
dir.create('../output_data')
write(toJSON(mtcars), '../output_data/ga_tables.json')
If I navigate into ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline/main and then run sudo Rscript example.R, the file runs and outputs the json to '../output_data/ga_tables.json' as expected.
I am struggling to get the same thing to happen when running the container. If I navigate into ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline/ and then run make up in the terminal, which runs:
docker-compose pull
docker-compose up -d
I then see:
make up
docker-compose pull
docker-compose up -d
Creating network "nextzengoogleanalyticsextractpipeline_default" with the default driver
Creating nextzengoogleanalyticsextractpipeline_nextzen_ga_extract_marketing_1 ...
It 'looks' like everything ran as expected with no errors, yet no output appears in the output_data directory as expected.
I guess I'm misunderstanding or misusing ENTRYPOINT in the Dockerfile with ENTRYPOINT ["Rscript", "example.R"]. My goal is that this file would run when the container is run.
How can I 'run' (if that's the correct terminology) my app so that it outputs json into /output_data/ga_tables.json?
Not sure what other info to provide? Any help much appreciated, I'm still getting to grips with docker.
If you run your application from /main and its output is supposed to go into ../output_data (so effectively /output_data), you need to bind mount this directory to make the output available on the host. I would therefore update your docker-compose.yaml to read something like this:
volumes:
  - /path/to/output_data/on/host:/output_data
Bear in mind, however, that your script will not be able to remove /output_data itself while it is bind-mounted this way, so you might want to change that step to remove the directory's contents rather than the directory itself.
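Applied to the compose file from the question, that could look like the sketch below; the host-side path is just one plausible choice:

services:
  nextzen_ga_extract_marketing:
    build:
      context: .
    volumes:
      - ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline/output_data:/output_data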
In my case, I got this working when I used full paths as opposed to relative paths.
I was reading Quickstart: Compose and Django when I came across "defining a build in a compose file". Well, I've seen it before, but what I'm curious about here is its purpose. I just can't get it.
Why don't we just build the image once (or update it whenever we want) and use it multiple times in different docker-compose files?
Here is the Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
And here is docker-compose.yml:
version: '3'
services:
  web:
    # <<<<
    # Why not build the image and use it here like "image: my/django"?
    # <<<<
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
You might say: "well, do as you wish!" Why I'm asking is because I think there might be some benefits that I'm not aware of.
PS:
I mostly use Docker for bringing up some services (DNS, monitoring, etc.); I've never used it for development.
I have already read this What is the difference between `docker-compose build` and `docker build`?
There's no technical difference between running docker build yourself and pointing at the result with image: in the docker-compose.yml file, versus specifying the build: metadata directly in the docker-compose.yml.
The benefits to using docker-compose build to build images are more or less the same as using docker-compose up to run containers. If you have a complex set of -f path/Dockerfile --build-arg ... options, you can write those out in the build: block and not have to write them repeatedly. If you have multiple custom images that need to be built then docker-compose build can build them all in one shot.
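For example, a one-off command like docker build -f docker/Dockerfile --build-arg ENV=dev . (the paths and build arg here are illustrative) can be written down once in the compose file:

services:
  web:
    build:
      context: .
      dockerfile: docker/Dockerfile
      args:
        ENV: dev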
In practice you'll frequently be iterating on your containers, which means you will need to run local unit tests, then rebuild images, then relaunch containers. Being able to drive the Docker end of this via docker-compose down; docker-compose up --build will be more convenient than remembering all of the individual docker build commands you need to run.
The one place where this doesn't work well is if you have a custom base image. So if you have a my/base image, and your application image is built FROM my/base, you need to explicitly run
docker build -t my/base base
docker build -t my/app app
docker run ... my/app
Compose doesn't help with the multi-level docker-build sequence; you'll have to explicitly docker build the base image.
I am trying to build an image/container with docker compose. The container builds/runs successfully, but the image REPOSITORY and TAG both appear as <none> in the output of docker images, and the container gets an auto-generated name (e.g. eloquent_wiles). I would like it to use the names specified in my config files (in this case the container named 'myservice' and the image tagged 'myservice:v2').
I have the following docker-compose.yml:
version: '3'
services:
  myservice:
    build: .
    image: myservice:v2
    container_name: myservice
    ports:
      - "1337:1337"
This is my Dockerfile:
FROM node:10
WORKDIR /usr/src/myservice
COPY . /usr/src/myservice
EXPOSE 1337/tcp
RUN yarn \
&& yarn transpile \
&& node ./build/grpc-server.js
docker -v gives Docker version 18.09.2, build 6247962
docker-compose -v gives docker-compose version 1.22.0, build f46880fe
And I am running docker-compose build. I get the same results using docker-compose version 2.
I don't suppose anyone can spot what I'm doing wrong?
Build a named image: docker build -t <repo>:<tag> . in the directory where the Dockerfile is.
Deploy a named service: docker stack deploy -c <your_yaml_file> <your_stack> --with-registry-auth in the directory where your YAML is.
That was very silly of me. The issue was with the last line of the Dockerfile where you can see I start the server as part of the build process instead of as an entrypoint, blocking the build from reaching the step where it tags the images.
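A minimal version of the fix, moving the server start from build time (RUN) to container start (CMD), might end like this:

RUN yarn \
 && yarn transpile
CMD ["node", "./build/grpc-server.js"]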
Thanks to @Mihai for pointing out that the <none>:<none> image is an intermediate and not the result of my build.
Is there a way to use the --squash option in docker-compose when building new docker images? Docker implemented --squash about 6 months ago, but I have not seen any docs about how to use it from docker-compose.yml.
Is there a workaround here? (I see an open issue filed requesting this feature.)
Instead of using --squash, you can use Docker multi-stage builds.
Here is a simple example for a Python app that uses the Django web framework. We want to separate out the testing dependencies into a different image, so that we do not deploy the testing dependencies to production. Additionally, we want to separate our automated documentation utilities from our test utilities.
Here is the Dockerfile:
# the AS keyword lets us name the build stage
FROM python:3.6.7 AS base
WORKDIR /app
RUN pip install django
# base is the stage we have defined above
FROM base AS testing
RUN pip install pytest
# same base as above
FROM base AS documentation
RUN pip install sphinx
In order to use this file to build different images, we need the --target flag for docker build. The argument of --target should be the stage name given after the AS keyword in the Dockerfile.
Build the base image:
docker build --target base --tag base .
Build the testing image:
docker build --target testing --tag testing .
Build the documentation image:
docker build --target documentation --tag documentation .
This lets you build images that branch from the same base image, which can significantly reduce build-time for larger images.
You can also use multi-stage builds in Docker Compose. As of version 3.4 of docker-compose.yml, you can use the target keyword in your YAML.
Here is a docker-compose.yml file that references the Dockerfile above:
version: '3.4'
services:
  testing:
    build:
      context: .
      target: testing
  documentation:
    build:
      context: .
      target: documentation
If you run docker-compose build using this docker-compose.yml, it will build the testing and documentation images in the Dockerfile. As with any other docker-compose.yml, you can also add ports, environment variables, runtime commands, and so on.
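You can then build everything or just a single service, as with any compose file:

docker-compose build                 # builds the testing and documentation images
docker-compose build documentation   # builds only the documentation image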
You can achieve a similar squashed result with a trick like the one below, where the final COPY --from= collapses everything from the first stage into a single layer:
FROM oracle AS needs-squashing
ENV NEEDED_VAR some_value
COPY ./giant.zip ./somewhere/giant.zip
RUN echo "install giant in zip"
RUN rm ./somewhere/giant.zip
FROM scratch
COPY --from=needs-squashing / /
You can set the image name when building a custom image, like this:
docker build -t dude/man:v2 . # Will be named dude/man:v2
Is there a way to define the name of the image in Dockerfile, so I don't have to mention it in the docker build command?
Using -t on invocation
How to build an image with a custom name without using a yml file:
docker build -t image_name .
How to run a container with a custom name:
docker run -d --name container_name image_name
Workaround using docker-compose
Tagging of the image isn't supported inside the Dockerfile. This needs to be done in your build command. As a workaround, you can do the build with a docker-compose.yml that identifies the target image name and then run a docker-compose build. A sample docker-compose.yml would look like
version: '2'
services:
  man:
    build: .
    image: dude/man:v2
That said, there's a push against doing the build with compose since that doesn't work with swarm mode deploys. So you're back to running the command as you've given in your question:
docker build -t dude/man:v2 .
Personally, I tend to build with a small shell script (build.sh) in my project folder, which passes along any args and includes the image name, to save typing. And for production, the build is handled by a CI/CD server that has the image name inside the pipeline script.
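For illustration, such a build.sh wrapper might look like this (hypothetical; the image name matches the question):

#!/bin/sh
# build.sh - wraps docker build so the image name lives in the script
# extra arguments (e.g. --no-cache) are passed through to docker build
docker build -t dude/man:v2 "$@" .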
Workaround using docker-compose
Here is another version if you have to reference a specific docker file:
version: "3"
services:
nginx:
container_name: nginx
build:
context: ../..
dockerfile: ./docker/nginx/Dockerfile
image: my_nginx:latest
Then you just run
docker-compose build
My Dockerfile-only solution is to add a shebang line:
#!/usr/bin/env -S docker build . --tag=dude/man:v2 --network=host --file
FROM ubuntu:22.04
# ...
Then chmod +x Dockerfile, and ./Dockerfile is good to go. I even add more docker build command-line arguments, like specifying the host network.
NOTE: env with -S/--split-string support is only available for newer coreutils versions.
With a specific Dockerfile you could try:
docker build --tag <Docker Image name> --file <specific Dockerfile> .
for example
docker build --tag second --file Dockerfile_Second .
Workaround using Docker (and a Makefile)
Generally in Docker you can't specify in the Dockerfile what you want the image to be tagged as. So what you do is:
Create a Dockerfile
Create a Makefile
.PHONY: all
all:
	docker build -t image_name .
Use make instead of invoking docker build directly
Or, use buildah
But here is a better idea... Don't build images with Docker! Instead, build them with buildah, the new build tool from the podman crew. It uses shell (or any language), makes building in the cloud easy (without a separate project like kaniko), and allows rootless building of images. At the end of the build script, just save the image with buildah commit. Here is what it looks like:
#!/bin/sh
# Create a new offline container from the `alpine:3` image, return the id.
ctr=$(buildah from "alpine:3")
# Create a new mount, return the path on the host.
mnt=$(buildah mount "$ctr")
# Copy files to the mount
cp -Rv files/* "$mnt/"
# Do some things or whatever
buildah config --author "Evan Carroll" --env "FOO=bar" -- "$ctr"
# Run a script inside the container
buildah run "$ctr" -- /bin/sh <<EOF
echo "This is just a regular shell script"
echo "Do all the things."
EOF
# Another one, same layer though
buildah run "$ctr" -- /bin/sh <<EOF
echo "Another one!"
echo "No excess layers created as with RUN."
EOF
# Commit this container as "myImageName"
buildah commit -- "$ctr" "myImageName"
Now you don't have to hack around with a Makefile. You have one shell script that does everything, and is far more powerful than a Dockerfile.
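Since buildah shares its container storage with podman, the committed image can be verified right away (assuming podman is installed):

buildah images        # the committed image should be listed
podman images         # podman sees the same storage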
Side note: buildah can also build from Dockerfiles (using buildah bud), but this shortcoming is inherent to the Dockerfile format, so that won't help here.