How to verify the validity of docker images built from docker-compose? - docker

I am trying to come up with a CI system where I validate the Dockerfile and docker-compose.yaml files that are used to build our images.
I found Google's container-structure-test,
which can be used to verify the structure of Docker images after they are built. This works when the images are built from a Dockerfile.
Is there a way that I can also verify the images with all the configuration that docker-compose adds on top of them?
EDIT:
Maybe I didn't put all the details into the question.
Let's say I have a docker-compose file with the following structure:
version: "3"
services:
  image-a:
    build:
      context: .
      dockerfile: Dockerfile-a
  image-b:
    build:
      context: .
      dockerfile: Dockerfile-b
    ports:
      - '8983:8983'
    volumes:
      - '${DEV_ENV_ROOT}/solr/cores:/var/data/solr'
      - '${DEV_ENV_SOLR_ROOT}/nginx:/var/lib/nginx'
Now that the images would be built from Dockerfile-a and Dockerfile-b, there would be configuration (ports, volumes) made on top of image-b. How can I validate that configuration without running a container from image-b? Would that even be possible?

Assuming you have the following docker-compose.yml file:
version: "3"
services:
  image-a:
    build:
      context: .
      dockerfile: Dockerfile-a
  image-b:
    build:
      context: .
      dockerfile: Dockerfile-b
Build your images by running docker-compose --project-name foo build. This makes all image names start with the prefix foo_, so you end up with the following image names:
foo_image-a
foo_image-b
The trick is to use a unique id (such as your CI job id) instead of foo so you can identify the very images that were just built.
Now that you know the names of your images, you can use:
container-structure-test test --image foo_image-a --config config.yaml
container-structure-test test --image foo_image-b --config config.yaml
If you want to make a generic job which does not know the docker-compose service names, you can use the following command to get the list of images starting with that foo_ prefix:
docker image list --filter "reference=foo_*"
REPOSITORY    TAG     IMAGE ID      CREATED         SIZE
foo_image-a   latest  0c5e1cf8c1dc  16 minutes ago  4.15MB
foo_image-b   latest  d4e384157afb  16 minutes ago  4.15MB
and if you want a script to iterate over this result, add the --quiet option to obtain just the image ids:
docker image list --filter "reference=foo_*" --quiet
0c5e1cf8c1dc
d4e384157afb

Related

Why does building an image with docker compose fail but succeed without it?

I am trying to build an image with docker compose and it fails; however, it works with plain docker. I have read some SO posts saying that the error thrown on failure happens when a file/folder referenced in the Dockerfile cannot be found. The build works with docker build, so I don't know why it wouldn't work with docker-compose. Why is this happening?
The structure for this project is this:
parent_proj
|_ proj
   |_ Dockerfile
|_ docker-compose.yml
Here is my docker-compose file:
version: '3.4'
services:
  integrations:
    build:
      context: .
      dockerfile: proj/Dockerfile
      network: host
    image: int
    ports:
      - "5000:5000"
Here is the Dockerfile inside proj/
FROM openjdk:11
USER root
#RUN apt-get bash
ARG JAR_FILE=target/proj-0.0.1-SNAPSHOT.jar
COPY ${JAR_FILE} /app2.jar
ENTRYPOINT ["java","-jar", "/app2.jar"]
When I'm inside the proj folder, I can run:
docker build . -t proj
The above succeeds and I can subsequently run the container. However, when I am in parent_proj and run docker compose build, it fails with the error message:
failed to compute cache key: failed to walk
/var/lib/docker/tmp/buildkit-mount316454722/target: lstat
/var/lib/docker/tmp/buildkit-mount316454722/target: no such file or
directory
Why does this happen? How can I build successfully with docker-compose without restructuring the project?
thanks
Your Compose build options and the docker build options you show are different. The successful command is (where -f Dockerfile is the default):
docker build ./proj -t proj                 # context: ./proj  image: proj  dockerfile: Dockerfile
But your Compose setup is running:
docker build . -t int -f proj/Dockerfile    # context: .      image: int   dockerfile: proj/Dockerfile
Which one is right? In the Dockerfile, you
COPY target/proj-0.0.1-SNAPSHOT.jar /some/container/path
That target/... source path is always relative to the build-context directory (Compose context: option, the directory parameter to docker build), even if it looks like an absolute path and even if the Dockerfile is in a different directory. If that target directory is a subdirectory of proj then you need the first form.
There's a shorthand Compose build: syntax if the only thing you need to specify is the context directory, and I'd use that here. If you don't specifically care what the image name is (you're not pushing it to a registry) then Compose can pick a reasonable name on its own; you don't need to specify image:.
version: '3.8'
services:
  integrations:
    build: ./proj
    ports:
      - "5000:5000"

Push image to another registry with volume copy

I am running an image in a docker container locally. I pulled it with:
docker pull locustio/locust
and my docker-compose file, which I start with docker-compose up, looks as below:
version: '3'
services:
  locust-service:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py -H http://master:8089
I have my volume, which contains locustfile.py with all the code to test my system. Now I need to push and deploy this image to another, private repository along with the volume contents, that is, the file locustfile.py.
How can I do that with docker-compose push? Or is there any other way I can copy the volume? docker-compose push for the above compose file doesn't seem to work.
Volumes are generally intended to hold data, not application code. You should build your code into a derived Docker image, which then can be pushed.
You can write what you show here into a basic Dockerfile:
FROM locustio/locust
COPY locustfile.py /mnt/locust
# CMD must be a JSON array if it's passing additional options to an ENTRYPOINT
CMD ["-f", "/mnt/locust/locustfile.py", "-H", "http://master:8089"]
Then your docker-compose.yml file only needs to say how to build and run the image, without duplicating any of these options:
version: '3.8'
services:
  locust-service:
    build: .
    image: my-docker-hub-name/locust
    ports:
      - "8089:8089"
Then docker-compose build && docker-compose push would build and push the image. On the target host you'd need to copy this docker-compose.yml file but remove the build: line.
Glancing at the Locust documentation, this is similar to what is suggested to Use docker image as a base image. You also may find it more flexible to use environment variables to set options, rather than command-line arguments, which would let you split options between the Dockerfile and the docker-compose.yml runtime configuration.
Only docker images can be pushed.
Volumes are created when you run the image, i.e. when a container is created from it, as explained in the official documentation: https://docs.docker.com/storage/volumes/
Here is the example from the official documentation:
docker run -d \
--name=nginxtest \
-v nginx-vol:/usr/share/nginx/html \
nginx:latest

What is the difference between `docker-compose build` and `docker build`?

What is the difference between docker-compose build and docker build?
Suppose there is a docker-compose.yml file in a dockerized project path:
docker-compose build
and
docker build
docker-compose can be considered a wrapper around the docker CLI (in fact it is another implementation in Python, as said in the comments) in order to save time and avoid 500-character-long lines (and also to start multiple containers at the same time). It uses a file called docker-compose.yml in order to retrieve parameters.
You can find the reference for the docker-compose file format here.
So basically docker-compose build will read your docker-compose.yml, look for all services containing the build: statement and run a docker build for each one.
Each build: can specify a Dockerfile, a context and args to pass to docker.
To conclude with an example docker-compose.yml file:
version: '3.2'
services:
  database:
    image: mariadb
    restart: always
    volumes:
      - ./.data/sql:/var/lib/mysql
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    depends_on:
      - database
When calling docker-compose build, only the web service needs an image to be built. The equivalent docker build command would look like:
docker build -t web_myproject -f Dockerfile-alpine ./web
docker-compose build will build the services in the docker-compose.yml file.
https://docs.docker.com/compose/reference/build/
docker build will build the image defined by Dockerfile.
https://docs.docker.com/engine/reference/commandline/build/
Basically, docker-compose is a more convenient way to use docker than invoking individual docker commands.
If the idea here is that docker-compose build will produce some kind of bundle containing multiple images, which otherwise would have been built separately with the usual Dockerfiles, then that thinking is wrong.
docker-compose build builds the individual images, by going through each service entry in docker-compose.yml.
With the docker images command, you can see all the individual images that were saved as well.
The real magic is docker-compose up: it creates a network of interconnected containers that can talk to each other using the container name, similar to a hostname.
Adding to the first answer...
You can give the image name and container name under the service definition.
E.g. for the service called web in the docker-compose example below, you can give the image name and container name explicitly, so that docker does not have to use the defaults.
Otherwise the image name docker uses will be the concatenation of the folder (directory) name and the service name, e.g. myprojectdir_web.
So it is better to explicitly set the desired image name that will be generated when the docker build command is executed, e.g.:
image: mywebserviceImage
container_name: my-webServiceImage-Container
Example docker-compose.yml file:
version: '3.2'
services:
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    image: mywebserviceImage
    container_name: my-webServiceImage-Container
    depends_on:
      - database
A few additional words about the difference between docker build and docker-compose build.
Both have an option for building images using an existing image as a layer cache:
with docker build, the option is --cache-from <image>
with docker-compose, there is a cache_from key in the build section.
Unfortunately, up until now, at this level, images made by one are not usable as a layer cache by the other (the image IDs are not compatible).
However, docker-compose v1.25.0 (2019-11-18) introduced an experimental feature, COMPOSE_DOCKER_CLI_BUILD, which makes docker-compose use the native docker builder (therefore, images made by docker build can be used as a layer cache for docker-compose build).
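If that feature applies to your setup, opting in looks like this sketch (docker-compose >= 1.25.0 assumed):

```shell
# Opt in to the native docker CLI builder (and BuildKit) so that layers
# cached by plain `docker build` can be reused by `docker-compose build`.
# Requires docker-compose >= 1.25.0.
export COMPOSE_DOCKER_CLI_BUILD=1
export DOCKER_BUILDKIT=1
docker-compose build
```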

Reuse an image built by docker

I built an image using:
docker build -t my-image .
docker-compose.yml has:
django:
  build:
    context: .
    dockerfile: ./compose/django/Dockerfile-dev
  image: my-image
Then I run docker-compose build
and I see my-image being built again, even though I built it previously.
Can an image built by docker build be used by docker-compose?
What you have written
As the docs say:
If you specify image as well as build, then Compose names the built image with the webapp and optional tag specified in image:
How to avoid this
If you want to rebuild each time: build as you are, and the build artifact will be saved with the name my-image.
If you want to reuse the build: change the service to specify just the image to use.
If you only want to build when the image doesn't exist: run docker-compose up without flags; it only builds missing images. To prevent building entirely, pass --no-build.
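As a sketch, those behaviours map to these commands (service and image names taken from the question; the comments describe the usual docker-compose semantics):

```shell
# Always (re)build; the result is tagged my-image because of the image: key.
docker-compose build

# Build only when the image is missing, then start the service.
docker-compose up -d

# Never build; errors if my-image is not already available.
docker-compose up -d --no-build
```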

Docker Compose does not allow to use local images

The following command fails, trying to pull image from the Docker Hub:
$ docker-compose up -d
Pulling web-server (web-server:staging)...
ERROR: repository web-server not found: does not exist or no pull access
But I just want to use a local version of the image, which exists:
$ docker images
REPOSITORY   TAG      IMAGE ID      CREATED      SIZE
web-server   staging  b94573990687  7 hours ago  365MB
Why Docker doesn't search among locally stored images?
This is my Docker Compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: web-server:staging
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
and my .env file:
DOCKER_HOST=tcp://***.***.**.**:2376
DOCKER_TLS_VERIFY=true
DOCKER_CERT_PATH=/Users/Victor/Documents/Development/projects/.../target/docker
In general, this should work as you describe it. I tried to reproduce it, but it simply worked...
Folder structure:
.
├── docker-compose.yml
└── Dockerfile
Content of Dockerfile:
FROM alpine
CMD ["echo", "i am groot"]
Build and tag image:
docker build -t groot .
docker tag groot:latest groot:staging
with docker-compose.yml:
version: '3.1'
services:
  groot:
    image: groot:staging
and start docker-compose:
$ docker-compose up
Creating groot_groot ...
Creating groot_groot_1 ... done
Attaching to groot_groot_1
groot_1 | i am groot
groot_groot_1 exited with code 0
Version >1.23 (2019 and newer)
The easiest way is to change image: to build:, referencing the Dockerfile in the relative directory, as shown below:
version: '3.0'
services:
  custom_1:
    build:
      context: ./my_dir
      dockerfile: Dockerfile
This allows docker-compose to manage the entire build and image orchestration in a single command.
# Rebuild all images
docker-compose build
# Run system
docker-compose up
In your docker-compose.yml, you can specify build: . instead of an image: <username>/<repo> reference for local builds (rather than pulling from Docker Hub). I can't verify this yet, but I believe you may be able to use paths relative to the docker-compose file for multiple services.
services:
  app:
    build: .
Reference: https://github.com/gvilarino/docker-workshop
March-09-2020 EDIT:
(docker version 18.09.9-ce build 039a7df,
docker-compose version 1.24.0, build 0aa59064)
I found that to just create a docker container, you can run docker-compose up -d after tagging the image with a fake local-registry-server tag (localhost:5000/{image}).
$ docker tag {imagename}:{imagetag} localhost:5000/{imagename}:{imagetag}
You don't need to run the local registry server, but you do need to change the image url in the docker-compose yaml file to the fake local registry server url:
version: '3'
services:
  web-server:
    image: localhost:5000/{your-image-name}  # changed from {imagename}:{imagetag} to localhost:5000/{imagename}:{imagetag}
    ports:
      - "80:80"
and then just up -d:
$ docker-compose -f {yamlfile}.yaml up -d
This creates the container if you already have the image (localhost:5000/{imagename}) on your local machine.
Adding to #Tom Saleeba's response,
I still got errors after tagging the image with a "/"
(for example: victor-dombrovsky/docker-image:latest).
It kept looking for the image on the remote docker.io server:
registry_address/docker-image
It seems the part before the "/" is the registry address and the part after it is the image name, and without a registry prefix, docker-compose by default looks for the image on the remote docker.io registry.
I guess it's a known bug with docker-compose.
I finally got it working by running the local registry, pushing the image to the local registry with the registry tag, and pulling the image from the local registry.
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
$ docker tag your-image-name:latest localhost:5000/your-image-name
$ docker push localhost:5000/your-image-name
and then change the image url in the docker-compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: localhost:5000/{your-image-name}  # change here
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
Similarly for the chat-server image.
You might need to change your image tag to have two parts separated by a slash /. So instead of
chat-server:staging
do something like:
victor-dombrovsky/chat-server:staging
I think there's some logic behind Docker tags: "one part" tags are interpreted as official images coming from Docker Hub.
For me, putting build: . did the trick. My working docker-compose file looks like this:
version: '3.0'
services:
  terraform:
    build: .
    image: tf:staging
    env_file: .env
    working_dir: /opt
    volumes:
      - ~/.aws:/.aws
You have a DOCKER_HOST entry in your .env 👀
From the looks of your .env file you seem to have configured docker-compose to use a remote docker host:
DOCKER_HOST=tcp://***.***.**.**:2376
Moreover, this .env is only loaded by docker-compose, but not docker. So in this situation your docker images output doesn't represent what images are available when running docker-compose.
When running docker-compose you're actually running Docker on the remote host tcp://***.***.**.**:2376, yet when running docker by itself you're running Docker locally.
When you run docker images, you're indeed seeing a list of the images that are stored locally on your machine. But docker-compose up -d is going to attempt to start the containers not on your local machine, but on ***.***.**.**:2376. docker images won't show you what images are available on the remote Docker host unless you set the DOCKER_HOST environment variable, like this for example:
DOCKER_HOST=tcp://***.***.**.**:2376 docker images
Evidently the remote Docker host doesn't have the web-server:staging image stored there, nor is the image available on Docker hub. That's why Docker complains it can't find the image.
Solutions
Run the container locally
If your intention was to run the container locally, then simply remove the DOCKER_HOST=... line from your .env and try again.
Push the image to a repository.
However if you plan on running the image remotely on the given DOCKER_HOST, then you should probably push it to a repository. You can create a free repository at Docker Hub, or you can host your own repository somewhere, and use docker push to push the image there, then make sure your docker-compose.yml referenced the correct repository.
Save the image, load it remotely.
If you don't want to push the image to Docker Hub or host your own repository you can also transfer the image using a combination of docker image save and docker image load:
docker image save web-server:staging | DOCKER_HOST=tcp://***.***.**.**:2376 docker image load
Note that this can take a while for big images, especially on a slow connection.
You can use pull_policy:
image: img:tag
pull_policy: if_not_present
My issue when getting this was that I had built the image with docker without sudo, but ran docker compose with sudo. Running docker images and sudo docker images gave me two different sets of images, and sudo docker compose up had access only to the latter.
