Docker: manage image build dependencies

I work with a private Docker registry. It contains several images, each built from a Dockerfile, and the images can be arranged into a hierarchy according to their dependencies.
The Dockerfiles are stored in a GitHub repository and are built through a Jenkinsfile.
Example:
Image-1: built from node:10-alpine
Image-2: built from Image-1
Image-3: built from golang:1.11.2-alpine3.8
Image-4: built from Image-2 & Image-3
The need now is to parallelize the image builds where possible, while still building the images in hierarchy order. So:
First build: Image-1 & Image-3
Second build: Image-2
Third & last build: Image-4
The questions are now:
How can we parallelize the image builds?
How can we build the docker images following the hierarchy?
How can we know the hierarchy when we only have the Dockerfile for each image?
Thanks for reading.
Have a nice day
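One way to discover the hierarchy from the Dockerfiles alone is to parse the FROM line(s) of each Dockerfile and topologically sort the images into "waves", where every image in a wave depends only on images from earlier waves; each wave can then be built in parallel. Here is a minimal sketch in Python (function names and the lowercase image keys are illustrative, not part of the original setup):

```python
import re

def parse_base_images(dockerfile_text):
    """Return the base images referenced by FROM lines (multi-stage aware;
    ignores flags such as --platform for brevity)."""
    bases = []
    stage_names = set()
    for line in dockerfile_text.splitlines():
        m = re.match(r"(?i)^\s*FROM\s+(\S+)(?:\s+AS\s+(\S+))?", line)
        if not m:
            continue
        image, stage = m.group(1), m.group(2)
        if image.lower() not in stage_names:  # skip references to earlier stages
            bases.append(image)
        if stage:
            stage_names.add(stage.lower())
    return bases

def build_stages(dockerfiles):
    """dockerfiles: {image_name: dockerfile_text}.
    Returns waves of image names; each wave depends only on earlier waves,
    so all images inside one wave can be built in parallel."""
    deps = {name: {b for b in parse_base_images(text) if b in dockerfiles}
            for name, text in dockerfiles.items()}
    waves, done = [], set()
    while len(done) < len(deps):
        ready = sorted(n for n, d in deps.items() if n not in done and d <= done)
        if not ready:
            raise ValueError("dependency cycle between images")
        waves.append(ready)
        done.update(ready)
    return waves

dockerfiles = {
    "image-1": "FROM node:10-alpine",
    "image-2": "FROM image-1",
    "image-3": "FROM golang:1.11.2-alpine3.8",
    "image-4": "FROM image-2 AS a\nFROM image-3",
}
print(build_stages(dockerfiles))
# -> [['image-1', 'image-3'], ['image-2'], ['image-4']]
```

Each wave can then be mapped to a parallel block in the Jenkinsfile: build everything in wave 1 concurrently, wait, then move on to wave 2, and so on.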


Docker Container (Website) Content not Updating

I have built a project with a web host (httpd:2.4). The Dockerfile contains:
FROM httpd:2.4
COPY . /usr/local/apache2/htdocs
It's hosting a static website... and I'd like to be able to change that / publish future changes but that doesn't work in the way I was expecting it to...
I'm using
git clone [repository]
cd [repository]
docker-compose -f docker-compose/docker-compose.yml up -d
to run the project, which works perfectly fine
The problem is that I should be able to make changes to the website.
I assumed it would just work like this:
docker-compose -f docker-compose/docker-compose.yml down
changing the index.html (save)
docker-compose -f docker-compose/docker-compose.yml up -d
But even though (for the test) I deleted every single character in my index.html, it still shows up exactly the same as before.
What am I missing? What commands would I have to run for the changes to get applied?
If you have a Dockerfile containing the following:
FROM httpd:2.4
COPY . /usr/local/apache2/htdocs
It means you are building a custom Docker image for your needs, and the COPY instruction copies the project into that image at build time. Baking the code into the image is a good solution for distribution purposes, but it is usually not the best for development.
Changes to the project are not reflected in the custom image until the image is rebuilt. After rebuilding, the current project files are copied into the image; restarting docker-compose with the newly built image then makes the changes visible.
If you do not want to rebuild the image each time you make a change, it is better to write a docker-compose file that maps your project directory directly onto /usr/local/apache2/htdocs. That way, changes to the project are reflected instantly, without any build step.
Here is a sample docker-compose file that maps the project onto /usr/local/apache2/htdocs; it needs to be located in the directory where index.html lives.
version: '3.9'
services:
  apache:
    image: httpd:latest
    container_name: webserver
    ports:
      - '8080:80'
    volumes:
      # map your project's root directory to htdocs
      - ${PWD}:/usr/local/apache2/htdocs
This problem may arise if you have referenced a prebuilt Docker image inside your docker-compose.yml file instead of building the image there. When you reference an image, docker-compose up creates the corresponding containers from exactly that image.
You need to:
Build the image again AFTER you have made changes to your html file and BEFORE running docker-compose.
OR
Build the image inside docker-compose.yml like this
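A minimal sketch (the web service name, port, and build context are placeholders):

```yaml
version: "3.9"
services:
  web:
    build: .           # build the image from the Dockerfile in this directory
    ports:
      - "8080:80"
```

With this in place, docker-compose up -d --build rebuilds the image and recreates the container whenever the files change.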

How docker's build `--cache-from` option works when base image changes?

Situation
Let's say I want to build an image named my_user/my_image based on postgres:latest, and to speed up the build I use docker build's --cache-from=my_user/my_image:my_tag option.
Question
What happens if the underlying image is updated (as expected, postgres:latest will keep changing over time)?
If I run another build without any code change, will the build be executed again because of the base-image update, or will all steps be cached thanks to the --cache-from=my_user/my_image:my_tag option?
Example
Dockerfile:
FROM postgres:master
RUN echo 'Dummy Image'
Build & Push:
docker build -t my_user/my_image:my_tag --cache-from=my_user/my_image:my_tag .
docker push my_user/my_image:my_tag
I could not find detailed information on docker's documentation.
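As far as Docker's cache rules go, a cached step is only reused while its parent layer is identical, so a changed postgres:latest invalidates everything from the FROM line onward, --cache-from or not. For context, the typical CI pattern (a sketch reusing the tags above) pulls the previously pushed image first so its layers are locally available as a cache:

```shell
# pull the previous build to seed the layer cache
# (tolerate a missing image on the very first run)
docker pull my_user/my_image:my_tag || true
docker build -t my_user/my_image:my_tag --cache-from=my_user/my_image:my_tag .
docker push my_user/my_image:my_tag
```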

Docker: can someone see the files in the image layers if the docker image is pulled from a repo?

I have hosted a Docker image on a GitLab repo.
I have some sensitive data in one of the image layers.
Now if someone pulls the image, can he see the sensitive data on the intermediate layer?
Also, can he know the Dockerfile commands I have used for the image?
I want the end user to only have the image, without any other information about its Dockerfile.
But at least I don't want him to see the intermediate files.
If someone pulls the image, can he see the sensitive data on the intermediate layer?
Yes.
Also, can he know the Dockerfile commands I have used for the image?
Yes.
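Both answers are easy to verify with standard Docker CLI commands (my/image:latest is a placeholder):

```shell
# show every layer together with the Dockerfile instruction that created it
docker history --no-trunc my/image:latest
# export the image: the tar contains each layer's filesystem, including files
# that a later layer "deleted"
docker save my/image:latest -o image.tar
```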
You should take these into account when designing your image build system. For example, they mean you should never put any sort of credentials into your Dockerfile, because anyone who has the image can easily retrieve them. You can mitigate this somewhat with @RavindraBagale's suggestion to use a multi-stage build, but even so it's risky. Run commands like git clone that need real credentials from outside the Dockerfile.
The further corollary is that if you think your code is sensitive, Docker on its own is not going to protect it. Using a compiled language (Go, C++, Rust) means you can limit yourself to distributing the compiled binary, which is harder to reverse-engineer; with JVM languages (Java, Scala, Kotlin) you can distribute only the jar file, though IME the bytecode is relatively readable; for interpreted languages (Python, Ruby, Javascript) you must distribute human-readable code. This may influence your initial language choice.
You can use multi-stage builds to manage secrets in an intermediate build stage that is later discarded, so that no sensitive data reaches the final image.
For example:
FROM ubuntu AS intermediate
WORKDIR /app
COPY secret/key /tmp/
RUN scp -i /tmp/key build@acme:files .

FROM ubuntu
WORKDIR /app
COPY --from=intermediate /app .
Other options for managing secrets are:
docker secret: you can use Docker secrets if you are running Docker Swarm
secrets in a docker-compose file (without Swarm):
version: "3.6"
services:
  my_service:
    image: centos:7
    entrypoint: "cat /run/secrets/my_secret"
    secrets:
      - my_secret
secrets:
  my_secret:
    file: ./super_duper_secret.txt

Shared builder containers in Docker or Docker Compose

My project is structured kind of like this:
project
|- docker_compose.yml
|- svc-a
|  |- Dockerfile
|- svc-b
|  |- Dockerfile
|- common-lib
|  |- Dockerfile
Within docker_compose.yml:
version: "3.7"
services:
  common-lib:
    build:
      context: ./common-lib/
    image: common-lib:latest
  svc-a:
    depends_on:
      - common-lib
    build:
      ...
  svc-b:
    depends_on:
      - common-lib
    build:
      ...
common-lib/Dockerfile is relatively standard:
FROM someBuilderBase:latest
COPY . .
RUN build_libraries.sh
Then in svc-a/Dockerfile I import those built libraries:
FROM common-lib:latest as common-lib
FROM someBase:latest
COPY --from=common-lib ./built ./built-common-lib
COPY . .
RUN build_service_using_built_libs.sh
And the Dockerfile for svc-b is basically the same.
This works great using docker-compose build svc-a, as it first builds the common-lib image because of that depends_on, and I can reference it easily as common-lib:latest. It is also great because running docker-compose build svc-b doesn't rebuild that base common library.
My problem is that I am defining a builder container as a docker-compose service. When I run docker-compose up, it attempts to run common-lib as a running binary/service and spits out a slew of errors. In my real project I have a bunch of chains of these builder-container services, which makes docker-compose up unusable.
I am relatively new to Docker. Is there a more canonical way to do this while a) avoiding code duplication building common-lib in multiple Dockerfiles, and b) avoiding a manual re-run of docker build ./common-lib before running docker build ./svc-a (or b)?
This is not quite how you would normally do it in Docker.
You have two options to achieve what you want :
1/ Multi stage build
This is almost what you're doing with this line (in your svc-a Dockerfile):
FROM common-lib:latest as common-lib
However, instead of building your common-lib image in a separate project, just copy the Dockerfile content into your service's Dockerfile:
FROM someBuilderBase:latest as common-lib
COPY . .
RUN build_libraries.sh
FROM someBase:latest
COPY --from=common-lib ./built ./built-common-lib
COPY . .
RUN build_service_using_built_libs.sh
This way, you won't need to add a common-lib service in docker-compose.
2/ Inheritance
If a lot of images need what is inside your common-lib (and you don't want to repeat it in every Dockerfile with a multi-stage build), then you can just use inheritance.
What's inheritance in Docker?
It's a base image.
From your example, the svc-a image is based on someBase:latest; I guess it's the same for svc-b. In that case, just add the lib you need to the someBase image (with a multi-stage build, for example, or by creating a base image containing your lib).
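A sketch of that base-image approach (my-base is a hypothetical name): bake the built libraries into a shared base image once, and let both services start from it instead of repeating the multi-stage copy.

```dockerfile
# base/Dockerfile -- build once with: docker build -t my-base:latest ./base
FROM someBuilderBase:latest AS common-lib
COPY . .
RUN build_libraries.sh

FROM someBase:latest
COPY --from=common-lib ./built ./built-common-lib
```

svc-a/Dockerfile and svc-b/Dockerfile then shrink to FROM my-base:latest, COPY . ., RUN build_service_using_built_libs.sh, and no common-lib service is needed in docker-compose.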

ordered build of nested docker images with compose

I am building a LAMP stack with docker-compose.
In my docker-compose.yml I have the following:
ubuntu-base:
  build: ./ubuntu-base
webserver-base:
  build: ./webserver-base
webserver-base is derived from the ubuntu-base image. The webserver-base Dockerfile contains:
FROM docker_ubuntu-base
while ubuntu-base is built from:
FROM ubuntu:14.04
Now, when I run docker-compose, it does not build the ubuntu-base image; it tries to build the webserver-base image straight away and fails, because it cannot find the ubuntu-base image.
Output:
$ docker-compose up -d
Building webserver-base
Step 1 : FROM docker_ubuntu-base
Pulling repository docker.io/library/docker_ubuntu-base
ERROR: Service 'webserver-base' failed to build: Error: image library/docker_ubuntu-base:latest not found
It all works if I build the ubuntu-base image manually first.
Why does it not build the ubuntu-base image?
Sadly, build ordering is a missing feature in docker-compose that has been requested for many months now.
As workaround you can link the containers like this:
ubuntu-base:
  build: ./ubuntu-base
webserver-base:
  build: ./webserver-base
  links:
    - ubuntu-base
This way ubuntu-base gets built before webserver-base.
First do a
docker-compose build ubuntu-base
By itself, though, this will not create the image docker_ubuntu-base locally, because the Dockerfile has no build steps; only docker.io/ubuntu:14.04 will be downloaded.
If you add a build step such as:
FROM ubuntu:14.04
RUN date
a docker_ubuntu-base image will be created. After that, docker-compose build ubuntu-base produces the docker_ubuntu-base image, and you can then run a plain docker-compose build.
But I would advise against this nested docker-image construction. It is cumbersome because, as @kev indicated, you have no control over the order of the builds. Why don't you create two independent Dockerfiles? Let Docker derive webserver-base from ubuntu-base by keeping the Dockerfile instructions as identical as possible and reusing the layers.
