Tagging image/container with docker compose

I am trying to build an image/container with Docker Compose. The container builds/runs successfully, but the image REPOSITORY and TAG both appear as <none> in the output of docker images, and the container gets an auto-generated name (e.g. eloquent_wiles). I would like it to tag the image/container with the names specified in my config files (in this case I would like the container to be named 'myservice' and the image to be tagged 'v2').
I have the following docker-compose.yml:
version: '3'
services:
  myservice:
    build: .
    image: myservice:v2
    container_name: myservice
    ports:
      - "1337:1337"
This is my Dockerfile:
FROM node:10
WORKDIR /usr/src/myservice
COPY . /usr/src/myservice
EXPOSE 1337/tcp
RUN yarn \
&& yarn transpile \
&& node ./build/grpc-server.js
docker -v gives Docker version 18.09.2, build 6247962
docker-compose -v gives docker-compose version 1.22.0, build f46880fe
And I am running docker-compose build. I get the same results using docker-compose version 2.
I don't suppose anyone can spot what I'm doing wrong?

Build a named image: docker build -t <repo>:<tag> . in the directory where the Dockerfile is.
Deploy a named service: docker stack deploy -c <your_yaml_file> <your_stack> --with-registry-auth in the directory where your YAML is.

That was very silly of me. The issue was the last line of the Dockerfile, where you can see I start the server as part of the build process instead of as an entrypoint, which blocks the build from ever reaching the step where it tags the image.
Thanks to @Mihai for pointing out that the <none>:<none> image is an intermediate and not the result of my build.
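For reference, a corrected version of the Dockerfile moves the server start out of the build and into a CMD, so the build completes and the image gets tagged (a sketch; the yarn/transpile steps are copied from the question as-is):

```dockerfile
FROM node:10
WORKDIR /usr/src/myservice
COPY . /usr/src/myservice
EXPOSE 1337/tcp
# Build-time steps only -- nothing long-running here
RUN yarn \
    && yarn transpile
# Start the server when a container runs, not during the build
CMD ["node", "./build/grpc-server.js"]
```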


How to let docker container install latest dependencies version

I have a Docker image that is built from a Dockerfile. It installs a custom library with pip. The image is then used in a docker-compose service.
requirements.txt
example-library-1
example-library-2
Dockerfile
FROM python:3.7-alpine
WORKDIR /code
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]
docker build -t example_user/example_image .
docker-compose.yml
version: "3.9"
services:
  web:
    image: example_user/example_image
    ports:
      - "8000:5000"
When I have a new version for example-library-1, how do I update the docker-compose service to use the new library?
I have tried docker-compose up -d and docker-compose restart web, but neither checks for a new library version.
Is there a way to do this without docker-compose down and docker-compose up -d?
You can add a build key to the compose file.
services:
  web:
    image: example_user/example_image
    build:
      context: path/to/context
      dockerfile: path/to/Dockerfile
Then you run compose with the build flag.
docker compose up --build
That said, compose does not have anything like rolling updates. You need to stop the compose service and run up with the build flag in order to rebuild.
If you need rolling updates, you need to use a different orchestrator, such as Swarm or Kubernetes.
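For illustration, a rolling update under Swarm might look like this (a sketch; it assumes the rebuilt image is pushed to a registry the nodes can reach, and mystack_web is a hypothetical deployed service name):

```shell
# Rebuild and push a new image version (tag is illustrative)
docker build -t example_user/example_image:v2 .
docker push example_user/example_image:v2

# Swarm replaces the service's tasks one at a time with the new image
docker service update --image example_user/example_image:v2 mystack_web
```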
The docker-compose build flag is a good idea.
After that, I recommend using the --no-cache flag, because Docker BuildKit may cache some layers.
Check the documentation.
Don't forget to check the DOCKER_BUILDKIT environment variable too.
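Putting the two together, a forced fresh rebuild of the web service might look like this (a sketch):

```shell
# Ignore the layer cache so pip re-resolves requirements.txt
docker-compose build --no-cache web

# Recreate the container from the freshly built image
docker-compose up -d web
```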

How to pass current folder as the context to docker-compose.yml and then Dockerfile?

I have a central Dockerfile. It's locate in ~/base/Dockerfile.
Let's say it only builds a simple debian image.
FROM debian
COPY test.js .
I also have a central docker-compose.yml file that uses this Dockerfile.
It is located in ~/base/docker-compose.yml.
version: "3.9"
services:
  test:
    build: ~/base/Dockerfile
    ports:
      - "5000:5000"
I also have a bash file that calls this docker-compose.yml from another directory.
For example:
mkdir temp
cd temp
setup
setup is a bash file that is registered in the /etc/bash.bashrc as a global alias.
It contains these lines:
docker-compose -f ~/base/docker-compose.yml build
docker-compose up -d
docker-compose logs -f test
I can run setup from inside any folder, and it should build a container based on that debian image. And it does.
However, it shows the container name as base_test_1, which is Compose's default naming convention (project directory, service name, index).
This shows that it uses ~/base/ as the context.
How can I pass my current directory as the context?
I created a docker-compose.yml in the same location and added a context whose value is an environment variable.
~/base$ cat docker-compose.yml
version: "2.2"
services:
  test:
    build:
      context: ${contextdir}
      dockerfile: /home/myname/base/Dockerfile
    ports:
      - "5000:5000"
~/base$ cat Dockerfile
FROM python:3.6-alpine
COPY testfile.js .
Before triggering the docker-compose.yml build command, export the current working directory.
~/somehere$ ls
testfile.js
~/somehere$ export contextdir=$(pwd)
~/somehere$ docker-compose -f ~/base/docker-compose.yml build
Building test
Step 1/2 : FROM python:3.6-alpine
---> 815c1103df84
Step 2/2 : COPY testfile.js .
---> Using cache
---> d0cc03f02bdf
Successfully built d0cc03f02bdf
Successfully tagged base_test:latest
My compose file and Dockerfile are located in ~/base/, while testfile.js is located in ~/somehere/ (which I am assuming is the current working directory).
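The same pattern can be collapsed into a single invocation by setting the variable inline (a sketch, reusing the contextdir variable from the answer above):

```shell
# The inline assignment is visible to docker-compose when it expands ${contextdir}
contextdir=$(pwd) docker-compose -f ~/base/docker-compose.yml build
```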

Docker force-build parent image

I'm running a multi-service application with several images. The environment of each image is pretty much similar, so, in order to avoid code duplication, a "base" image is created/tagged with the required programs/configuration. Then, this "base" image is used as a parent image for the various "application" images. An (illustrative) example is given below:
dockerfile_base: which I build with docker build -f dockerfile_base -t app_base:latest .
FROM ubuntu:latest
RUN apt-get update && apt-get install -y \
    build-essential
dockerfile_1: which is built with docker build -f dockerfile_1 -t app_1 .
FROM app_base:latest
COPY . .
RUN make test
And finally an example dockerfile_2 which describes a different service based again on "app_base" and is built with docker build -f dockerfile_2 -t app_2 .
FROM app_base:latest
COPY . .
RUN make deploy
Usually, the "base" image is built manually at first. Then, the "app" images are also manually built. Finally, the services (images app_1, app_2, etc.) are run using docker run for tests or docker-compose for demo deployment.
This creates an issue: when working in a new workspace (e.g. a newcomer's PC) where no Docker images exist yet, or when something changes in dockerfile_base, running just the docker build command for the app images will result in errors or incorrect images. So, the question is: is there a way in Docker to define these chained builds? I guess that's difficult with the docker build command, but would it be possible with docker-compose?
OK, so this is what I came up with which essentially streamlines the whole multi-build multi-image process with just 2 commands. The docker-compose.yaml file was created like this:
version: "3.4"
services:
  # dummy service used only for building the images
  dummy_app_base:
    image: app_base:latest
    build:
      dockerfile: "${PWD}/dockerfile_base"
      context: "${PWD}"
    command: [ "echo", "\"dummy_app_base:latest EXIT\"" ]
  app_1:
    image: app_1:latest
    build:
      dockerfile: "${PWD}/dockerfile_1"
      context: "${PWD}"
  app_2:
    image: app_2:latest
    build:
      dockerfile: "${PWD}/dockerfile_2"
      context: "${PWD}"
So, to build all the images, I simply run docker-compose build. The build command builds and tags all the images in the order they appear in the docker-compose.yaml file, so by the time app_1 and app_2 are built, the dependency app_base:latest already exists. Then everything is run with docker-compose up. Note: this WILL create an extra (exited) container for the dummy_app_base service, but since its command is overridden with an echo, it exits immediately.
edit: even in one command: docker-compose up --build
Multi-stage builds were invented for problems like this. An example might be:
FROM ubuntu:latest as app_base
RUN apt-get update && apt-get install -y build-essential
FROM app_base as app_name
COPY . .
RUN make
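Each application image can then be built from the single multi-stage Dockerfile, optionally tagging the shared stage too (a sketch; stage and image names follow the example above):

```shell
# Optionally build and tag just the shared base stage
docker build --target app_base -t app_base:latest .

# Build the full image; the base stage's layers are reused from cache
docker build -t app_name:latest .
```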

What is the purpose of building a docker image inside a compose file?

I was reading Quickstart: Compose and Django when I came across "defining a build in a compose file". Well I've seen it before but what I'm curious about here is what's the purpose of it? I just can't get it.
Why we just don't build the image once (or update it whenever we want) and use it multiple times in different docker-compose files?
Here is the Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
And here is docker-compose.yml:
version: '3'
services:
  web:
    # <<<<
    # Why not building the image and using it here like "image: my/django"?
    # <<<<
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
You might say: "well, do as you wish!" Why I'm asking is because I think there might be some benefits that I'm not aware of.
PS:
I mostly use Docker for bringing up some services (DNS, monitoring, etc.); I've never used it for development.
I have already read this What is the difference between `docker-compose build` and `docker build`?
There's no technical difference between building an image with docker build and referencing it with image: in the docker-compose.yml file, versus specifying the build: metadata directly in the docker-compose.yml.
The benefits to using docker-compose build to build images are more or less the same as using docker-compose up to run containers. If you have a complex set of -f path/Dockerfile --build-arg ... options, you can write those out in the build: block and not have to write them repeatedly. If you have multiple custom images that need to be built then docker-compose build can build them all in one shot.
In practice you'll frequently be iterating on your containers, which means you will need to run local unit tests, then rebuild images, then relaunch containers. Being able to drive the Docker end of this via docker-compose down; docker-compose up --build will be more convenient than remembering all of the individual docker build commands you need to run.
The one place where this doesn't work well is if you have a custom base image. So if you have a my/base image, and your application image is built FROM my/base, you need to explicitly run
docker build -t my/base base
docker build -t my/app app
docker run ... my/app
Compose doesn't help with the multi-level docker-build sequence; you'll have to explicitly docker build the base image.

How to set image name in Dockerfile?

You can set image name when building a custom image, like this:
docker build -t dude/man:v2 . # Will be named dude/man:v2
Is there a way to define the name of the image in Dockerfile, so I don't have to mention it in the docker build command?
Using -t on invocation
How to build an image with custom name without using yml file:
docker build -t image_name .
How to run a container with custom name:
docker run -d --name container_name image_name
Workaround using docker-compose
Tagging of the image isn't supported inside the Dockerfile. This needs to be done in your build command. As a workaround, you can do the build with a docker-compose.yml that identifies the target image name and then run a docker-compose build. A sample docker-compose.yml would look like
version: '2'
services:
  man:
    build: .
    image: dude/man:v2
That said, there's a push against doing the build with compose since that doesn't work with swarm mode deploys. So you're back to running the command as you've given in your question:
docker build -t dude/man:v2 .
Personally, I tend to build with a small shell script in my folder (build.sh) which passes any args and includes the name of the image there to save typing. And for production, the build is handled by a ci/cd server that has the image name inside the pipeline script.
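Such a build.sh might look like this (a sketch; the image name comes from the question's example, and any extra arguments are forwarded to docker build):

```shell
#!/bin/sh
# Build with a fixed image name; forward any extra arguments,
# e.g. ./build.sh --no-cache
docker build -t dude/man:v2 "$@" .
```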
Workaround using docker-compose
Here is another version if you have to reference a specific docker file:
version: "3"
services:
  nginx:
    container_name: nginx
    build:
      context: ../..
      dockerfile: ./docker/nginx/Dockerfile
    image: my_nginx:latest
Then you just run
docker-compose build
My Dockerfile-only solution is to add a shebang line:
#!/usr/bin/env -S docker build . --tag=dude/man:v2 --network=host --file
FROM ubuntu:22.04
# ...
Then chmod +x Dockerfile and ./Dockerfile is good to go.
I even add more docker build command line arguments like specifying a host network.
NOTE: env with -S/--split-string support is only available in newer coreutils versions.
With a specific Dockerfile you could try:
docker build --tag <Docker Image name> --file <specific Dockerfile> .
for example
docker build --tag second --file Dockerfile_Second .
Workaround using Docker (and a Makefile)
Generally in Docker you can't say what you want the image to be tagged as in the Dockerfile. So what you do is
Create a Dockerfile
Create a Makefile
.PHONY: all
all:
	docker build -t image_name .
Use make instead of invoking docker build directly
Or, use buildah
But here is a better idea... Don't build images with Docker! Instead build them with buildah, the new build tool from the Podman crew. It uses shell (or any language), allows building in the cloud easily (without a separate project like kaniko), and allows rootless building of images. At the end of the build script, just save the image with buildah commit. Here is what it looks like.
#!/bin/sh
# Create a new offline container from the `alpine:3` image, return the id.
ctr=$(buildah from "alpine:3")
# Create a new mount, return the path on the host.
mnt=$(buildah mount "$ctr")
# Copy files to the mount
cp -Rv files/* "$mnt/"
# Do some things or whatever
buildah config --author "Evan Carroll" --env "FOO=bar" -- "$ctr"
# Run a script inside the container
buildah run "$ctr" -- /bin/sh <<EOF
echo "This is just a regular shell script"
echo "Do all the things."
EOF
# Another one, same layer though
buildah run "$ctr" -- /bin/sh <<EOF
echo "Another one!"
echo "No excess layers created as with RUN."
EOF
# Commit this container as "myImageName"
buildah commit -- "$ctr" "myImageName"
Now you don't have to hack around with a Makefile. You have one shell script that does everything, and is far more powerful than a Dockerfile.
Side note: buildah can also build from Dockerfiles (using buildah bud), but the shortcoming is with the Dockerfile itself, so that won't help here.
