Docker conditional build image

I have to execute the same script in two Docker images.
My Dockerfiles are:
FROM centos:6
...
and
FROM centos:7
...
Is it possible to have a single file and pass a parameter, something like:
FROM centos:MYPARAMS
and during the build something like this:
docker build --no-cache MYPARAMS=6 .
Thank you

Just to put this in the right context: since May 2017 it has been possible to achieve this with pure Docker, starting with version 17.05 (https://github.com/moby/moby/pull/31352).
The Dockerfile should look like this (yes, the commands in this order):
ARG APP_VERSION
ARG GIT_VERSION
FROM app:$APP_VERSION-$GIT_VERSION
Then the build is invoked with:
docker build --build-arg APP_VERSION=1 --build-arg GIT_VERSION=c351dae2 .
Docker will try to base the build on image app:1-c351dae2
This helped me immensely in reducing the logic around building images.
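Applied to the centos example from the question, a minimal sketch using the same feature (keeping the asker's MYPARAMS name) would be:
ARG MYPARAMS
FROM centos:${MYPARAMS}
# ... rest of the shared instructions ...
built with:
docker build --no-cache --build-arg MYPARAMS=6 .
or --build-arg MYPARAMS=7 for the other image.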

To my knowledge, this is not possible with Docker alone.
The alternative is to use a Dockerfile "template" and render it with the template library of your choice (or even with a sed command).
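For example, a minimal sed-based sketch (assuming a file named Dockerfile.template whose FROM line reads FROM centos:MYPARAMS):
# render the template for the desired version, then build
sed "s/MYPARAMS/6/" Dockerfile.template > Dockerfile
docker build --no-cache .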

At https://github.com/BITPlan/docker-stackoverflowanswers/tree/master/33351864
you'll find a bash script "build" that works the way you want.
wf@mars:~/source/docker/docker-stackoverflowanswers/33351864>./build -v 6
Sending build context to Docker daemon 3.584 kB
Step 0 : FROM centos:6
6: Pulling from library/centos
fa5be2806d4c: Pull complete
ebdbe10e9b33: Downloading 4.854 MB/66.39 MB
...
wf@mars:~/source/docker/docker-stackoverflowanswers/33351864>./build -v 7
Sending build context to Docker daemon 3.584 kB
Step 0 : FROM centos:7
The essential part is the "here" document used:
#
# parameterized dockerfile
#
dockerfile() {
  local l_version="$1"
  cat << EOF > Dockerfile
FROM centos:$l_version
EOF
}
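The rest of the script then only needs to call the function and run the build; a rough sketch (the real script at the URL above parses its -v option, so the variable here is illustrative):
version=6                 # or 7, taken from the -v option
dockerfile "$version"     # writes the Dockerfile shown above
docker build --no-cache .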

Why do I get the error "docker build" requires exactly 1 argument?

I was looking at a sample.
Dockerfile
ARG some_variable_name
# or with a default:
# ARG some_variable_name=default_value
RUN echo "Oh dang look at that $some_variable_name"
# or with ${some_variable_name}
docker build
$ docker build --build-arg some_variable_name=a_value
result
Oh dang look at that a_value
But when I use the sample, I always get the error:
"docker build" requires exactly 1 argument.
See 'docker build --help'.
Usage: docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
Why? Am I missing something?
In addition to @Ridwan's answer: make sure there is no additional whitespace in between:
docker build -t mytag .
You seem to have forgotten the dot, which indicates that the Dockerfile is in the local directory.
By that I meant:
docker build -t mytag .
What you were previously doing was:
docker build -t mytag
Thus forgetting to put the dot.
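Applied to the command from the question, the fix is simply to append the build context path:
docker build --build-arg some_variable_name=a_value .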

Why is docker build not showing any output from commands?

Snippet from my Dockerfile:
FROM node:12.18.0
RUN echo "hello world"
RUN psql --version
When I run docker build . I don't see any output from these two commands even if they are not cached. The documentation says that docker build is verbose by default. Why am I not seeing the output from commands? I used to see them before.
The output while building:
=> [7/18] RUN echo "hello world" 0.9s
The output I am seeing after building finishes:
=> CACHED [6/18] RUN apt-get install postgresql -y 0.0s
=> [7/18] RUN echo "hello world" 6.4s
=> [8/18] RUN psql --version 17.1s
The Dockerfile is created from node:12.18.0 which is based on Debian 9.
Docker version 19.03.13, build 4484c46d9d.
The output you are showing is from BuildKit, which is a replacement for the classic build engine that docker ships with. You can adjust its output with the --progress option:
--progress string Set type of progress output (auto, plain, tty). Use plain to show container output
(default "auto")
Adding --progress=plain will show the output of the run commands that were not loaded from the cache. This can also be done by setting the BUILDKIT_PROGRESS variable:
export BUILDKIT_PROGRESS=plain
If you are debugging a build, and the steps have already been cached, add --no-cache to your build to rerun the steps and redisplay the output:
docker build --progress=plain --no-cache ...
If you don't want to use buildkit, you can revert to the older build engine by exporting DOCKER_BUILDKIT=0 in your shell, e.g.:
DOCKER_BUILDKIT=0 docker build ...
or
export DOCKER_BUILDKIT=0
docker build ...
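As an aside (my addition, not part of the answer above), BuildKit can also be turned off for the daemon as a whole rather than per command, via the features flag in /etc/docker/daemon.json:
{
  "features": {
    "buildkit": false
  }
}
Restart the Docker daemon afterwards for the change to take effect.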
Just add the --progress=plain flag to the build command.
For example:
docker-compose build --progress=plain <service_name>
OR
docker build --progress=plain .
If you don't want to use this flag every time, then permanently tell docker to use this flag by doing:
export BUILDKIT_PROGRESS=plain
Here is the relevant part of the official help text from docker build --help:
--progress string Set type of progress output (auto, plain, tty). Use plain to show container output (default "auto")
In Docker 20.10 I had to use the --no-cache flag too; otherwise cached output is not shown.
docker build --progress=plain --no-cache .
As an alternative to specifying the --progress=plain option, you can also permanently disable the "pretty" output by setting this env variable in your shell config:
export BUILDKIT_PROGRESS=plain
Do two things:
1. Instead of docker build ., use:
docker build . --progress=plain
2. Add random junk to your RUN command on every build (this tricks Docker into thinking it hasn't seen the command before, so it doesn't use the cached version).
Example: if your command is RUN ls, use RUN ls && echo sdfjskdflsjdf instead (change sdfjskdflsjdf to something else each time you build).
Why this works
I tried other answers and they all presented problems and imperfections. It's highly frustrating that Docker doesn't have some simple functionality like --verbose=true.
Here's what I ended up using (it's ludicrous but it works).
Suppose you want to see the output of the ls command. This won't show it with docker build .:
RUN ls
but this will print the output with docker build --progress=plain:
RUN ls
Now try again: it won't print! That's because Docker caches the unchanged layer, so the trick is to alter the command each time by appending some nonsense to it (&& echo sdfljsdfljksdfljk) and changing the nonsense on every docker build --progress=plain:
# This prints
RUN ls && echo sdfljsdfljksdfljk
# Next time you run it use a different token
RUN ls && echo sdlfkjsldfkjlskj
So each and every time, I mash the keyboard and come up with a new token. Stupefying. (Note that I tried something like && openssl rand -base64 12 to generate a random string, but Docker only compares the command text, which hasn't changed, so that doesn't work.)
This solution is highly inferior to genuine docker support for printing output to console.
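If mashing the keyboard gets old, one variation (my own sketch, not part of the answer above) is to keep the junk in a build argument and pass a new value on the command line, so the Dockerfile itself never changes:
# hypothetical cache-busting argument; a new value invalidates this layer's cache
ARG CACHEBUST=unset
RUN echo "cache bust: $CACHEBUST" && ls
and build with:
docker build --progress=plain --build-arg CACHEBUST=$(date +%s) .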
If your error looks something like this:
#7 0.584 /bin/sh: 1: /install.sh: not found
it's telling you the error is on line 1: you are running into Windows line endings.
I was using VS Code and solved it pretty easily by converting the file from CRLF to LF.
Just click the CRLF button in the bottom-right corner of the editor and save the file.
Everything should work fine when you build the image now.
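If you are not using VS Code, a command-line alternative (my addition; assuming the affected file is the install.sh from the error above) is to strip the carriage returns before building:
# convert CRLF to LF in place
sed -i 's/\r$//' install.sh
# or, if dos2unix is installed:
dos2unix install.sh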

Do I need separate Dockerfiles for py2 and py3?

Currently I have 2 Dockerfiles, Dockerfile-py2:
FROM python:2.7
# stuff
and Dockerfile-py3:
FROM python:3.4
# stuff
where both instances of # stuff are identical.
I build two docker images using an invoke task:
@task
def docker(ctx):
    """Build docker images.
    """
    tag = ctx.run('git log -1 --pretty=%h').stdout.strip()
    for pyversion in '23':
        name = 'myrepo/myimage{pyversion}'.format(pyversion=pyversion)
        image = '{name}:{tag}'.format(name=name, tag=tag)
        latest = '{name}:latest'.format(name=name)
        ctx.run('docker build -t {image} -f Dockerfile-py{pyversion} .'.format(image=image, pyversion=pyversion))
        ctx.run('docker tag {image} {latest}'.format(image=image, latest=latest))
        ctx.run('docker push {name}'.format(name=name))
Is there any way to prevent the duplication of # stuff, so I can't get into a situation where someone edits one file but not the other?
Here is one way, using a Dockerfile ARG along with docker build --build-arg:
ARG version
FROM python:${version}
RUN echo "$(python --version)"
# stuff
Now you build for python2.7 like so:
docker build -t myimg/tmp --build-arg version=2.7 .
In the output you will see:
Step 3/3 : RUN echo "$(python --version)"
---> Running in 06e28a29a3d2
Python 2.7.16
And in the same way, for python3.4:
docker build -t myimg/tmp --build-arg version=3.4 .
In the output you will see:
Step 3/3 : RUN echo "$(python --version)"
---> Running in 2283edc1b65d
Python 3.4.10
As you can imagine, you can also set a default value for ${version} in your Dockerfile:
ARG version=3.4
FROM python:${version}
RUN echo "$(python --version)"
# stuff
Now if you just do docker build -t myimg/tmp . you will build for python3.4. But you can still override with the previous two commands.
So to answer your question: no, you don't need two different Dockerfiles.
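As a sketch of how this could replace the two Dockerfiles in the question's invoke task (the image names and tagging scheme are the question's, the shell loop is mine):
tag=$(git log -1 --pretty=%h)
for version in 2.7 3.4; do
  major=${version%%.*}    # 2 or 3
  docker build -t myrepo/myimage${major}:${tag} --build-arg version=${version} .
  docker tag myrepo/myimage${major}:${tag} myrepo/myimage${major}:latest
  docker push myrepo/myimage${major}
done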

Execute ENTRYPOINT from base image in build stage

I'm using a code generator tool which is provided as a Docker image with an ENTRYPOINT. I.e. for the manual use case I execute the following command line:
$ docker run --rm -v ${PWD}:/local some/codegen-image:latest \
generate ... parameters for code generator tool ...
So far, so good.
But I want to integrate the code generator image into my own multi-stage image build. I.e. the first stage should call the ENTRYPOINT of the base image to generate the code that will be consumed by the second stage:
# stage 1
FROM some/codegen-image:latest as codegen
... build set up steps for stage 1 ...
# now run ENTRYPOINT from base image, copy & pasted from the output of
#
# docker inspect -f '{{json .Config.Entrypoint}}' some/codegen-image:latest
#
RUN ["some_command", "option1", ..., "optionN", \
"generate", \
... parameters for code generator tool ... \
]
# stage 2
FROM some/other-image as stage2
... build set up steps for stage 2 ...
# copy-in generated code from stage 1
COPY --from=codegen /tmp/build/ .
This works but it violates the DRY principle, i.e. I need to update my Dockerfile every time the upstream project makes an incompatible change to its ENTRYPOINT.
Can I avoid the copy & paste from docker inspect output? My own research has turned up nothing so far...
Multi-stage Dockerfiles were introduced to optimize the overall size of the resulting image (see the docs).
The FROM directive just brings in the content of the specified image; you have to tell each stage explicitly which command should be executed.
The feature you are expecting is not yet supported by Docker.
E.g. something like this (hypothetical syntax, not actually supported):
FROM some/codegen-image:latest as codegen
ARGS_ENTRYPOINT_OF_CODEGEN ["generate","parameters"]
.
.
.
FROM some/other-image as stage2
COPY --from=codegen /tmp/build/ .
It seems your approach is correct at the moment and is the only way around this.
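A possible workaround outside the Dockerfile itself (my own sketch, not something this answer proposes) is to look up the upstream ENTRYPOINT with docker inspect at build time and pass it in as a build argument, so the Dockerfile no longer hard-codes it:
# in stage 1, after the FROM line (shell form, so an entrypoint whose
# individual arguments contain spaces would need extra quoting)
ARG CODEGEN_ENTRYPOINT
RUN ${CODEGEN_ENTRYPOINT} generate ... parameters for code generator tool ...
and the build is driven by something like:
ENTRYPOINT_CMD=$(docker inspect -f '{{join .Config.Entrypoint " "}}' some/codegen-image:latest)
docker build --build-arg CODEGEN_ENTRYPOINT="$ENTRYPOINT_CMD" .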

How to serve a tensorflow model using docker image tensorflow/serving when there are custom ops?

I'm trying to use the tf-sentencepiece operation in my model found here https://github.com/google/sentencepiece/tree/master/tensorflow
There is no issue building the model and getting a saved_model.pb file with variables and assets. However, if I try to use the docker image for tensorflow/serving, it says
Loading servable: {name: model version: 1} failed:
Not found: Op type not registered 'SentencepieceEncodeSparse' in binary running on 0ccbcd3998d1.
Make sure the Op and Kernel are registered in the binary running in this process.
Note that if you are loading a saved graph which used ops from tf.contrib, accessing
(e.g.) `tf.contrib.resampler` should be done before importing the graph,
as contrib ops are lazily registered when the module is first accessed.
I am unfamiliar with how to build anything manually, and was hoping that I could do this without many changes.
One approach would be to:
Pull a docker development image
$ docker pull tensorflow/serving:latest-devel
In the container, make your code changes
$ docker run -it tensorflow/serving:latest-devel
Modify the code to add the op dependency.
In the container, build TensorFlow Serving
container:$ bazel build tensorflow_serving/model_servers:tensorflow_model_server && cp bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server /usr/local/bin/
Use the exit command to exit the container
Look up the container ID:
$ docker ps
Use that container ID to commit the development image:
$ docker commit <container id> $USER/tf-serving-devel-custom-op
Now build a serving container using the development container as the source
$ mkdir /tmp/tfserving
$ cd /tmp/tfserving
$ git clone https://github.com/tensorflow/serving .
$ docker build -t $USER/tensorflow-serving --build-arg TF_SERVING_BUILD_IMAGE=$USER/tf-serving-devel-custom-op -f tensorflow_serving/tools/docker/Dockerfile .
You can now use $USER/tensorflow-serving to serve your model, following the Docker instructions.
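Once that image is built, serving works the same way as with the stock image; a sketch of the usual run command from the TensorFlow Serving Docker instructions (the model name and path are placeholders):
docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/my_model,target=/models/my_model \
  -e MODEL_NAME=my_model -t $USER/tensorflow-serving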
