I am following "Creating a Docker container action" and everything works great for me, except that I would like to parametrize the FROM line in my Dockerfile (I need to run CI tests against different versions of a dependency, packaged as a Docker image).
Ideally, in my Dockerfile, I'd like to use ARG or something similar:
ARG version=latest
FROM alpine:${version}
...
... but it is unclear how to pass build args.
Is there a way to do something like this?
I have not found a good way to do this out of the box.
At the moment a Docker container action won't even let you specify the Dockerfile via arguments (through runs.image in action.yml).
The solution for me was using this action.
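Setting the linked action aside, the general workaround is to skip runs.image and build and run the image yourself in ordinary workflow steps, where --build-arg is available. A rough sketch (the step names and the version value here are made up):
- name: Build the test image with a pinned dependency version
  run: docker build --build-arg version=3.18 -t ci-test .
- name: Run the tests in it
  run: docker run --rm ci-test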
Related
I have two Dockerfiles (and may have more later) with a list of environment variables that is the same for both files. Let's say:
ENV VAR1="value1"
ENV VAR2="value2"
ENV VAR3="value3"
Can I somehow move this setup into a file that can be used in all the Dockerfiles where it's required?
I want to remove duplicates and have a common place for setting those variables.
You can split these into a custom base image. That image would look like:
# or whatever base you're using instead of ubuntu:18.04
FROM ubuntu:18.04
ENV VAR1="value1"
ENV VAR2="value2"
ENV VAR3="value3"
# and that's all
In most situations you would have to build this manually:
docker build -t my/env-base -f Dockerfile.env .
and then you can refer to it in the downstream Dockerfiles:
FROM my/env-base
# the rest of the Dockerfile commands as normal
Tooling like Docker Compose won't really be aware of this image layering. There's no good way to list a base image that needs to be built as a dependency of other things, but shouldn't run a container on its own. If you do change these values you'll have to manually rebuild the base image, then rebuild the application images.
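A build script for that might be as simple as this (my/app is a placeholder for your application image):
#!/bin/sh
# rebuild the base first, then everything that layers on top of it
docker build -t my/env-base -f Dockerfile.env .
docker build -t my/app .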
You should also consider whether you need all of these environment variables. In other SO questions I see variables used for filesystem paths (which can be fixed in an isolated Docker image), usernames (not a Docker concept really), credentials (keep far away from the image, it's really easy to get them back out), versions, and URLs. You might be able to get away with using fixed values for these (use /app rather than $INSTALL_PATH), or have a sensible default in your application code.
I have a fairly simple Dockerfile and now would like to build a docker image using rules_docker.
Trying to use container_image, it seems like I cannot use the Dockerfile as input. Is there any way to build with a Dockerfile?
Update: There is now a rule called dockerfile_image. Read here for more details: https://github.com/bazelbuild/rules_docker/blob/master/contrib/dockerfile_build.bzl#L15
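Usage looks roughly like this (a sketch only; check the rule's attribute names in the linked file before relying on it):
load("@io_bazel_rules_docker//contrib:dockerfile_build.bzl", "dockerfile_image")

# builds the image by invoking `docker build` on the given Dockerfile
dockerfile_image(
    name = "my_dockerfile_image",
    dockerfile = ":Dockerfile",
)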
I think it's disallowed by design due to the non-hermetic nature of Dockerfiles: a Dockerfile can RUN any command, including non-hermetic ones whose results can't always be reproduced.
Further discussion here:
https://github.com/bazelbuild/rules_docker/issues/173
https://blog.bazel.build/2015/07/28/docker_build.html
I am containerising a codebase that serves multiple applications. I have created three images:
app-base:
FROM ubuntu
RUN apt-get install package
COPY ./app-code /code-dir
...
app-foo:
FROM app-base:latest
RUN foo-specific-setup.sh
and app-buzz, which is very similar to app-foo.
This works currently, except I want to be able to build versions of app-foo and app-buzz for specific code branches and versions. It's easy to do that for app-base and tag appropriately, but app-foo and app-buzz can't dynamically select that tag; they are always pinned to app-base:latest.
Ultimately I want this build process automated by Jenkins. I could just dynamically re-write the Dockerfile, or not have three images and just have two nearly-but-not-quite identical Dockerfiles for each app that would need to be kept in sync manually (later increasing to 4 or 5). Each of those solutions has obvious drawbacks however.
I've seen lots of discussions in the past about things such as an INCLUDE statement, or dynamic tags. None seemed to come to anything.
Does anyone have a working, clean(ish) solution to this problem? As long as it means Dockerfile code can be shared across images, I'd be happy. If it also means that the shared layers of images don't need to be rebuilt for each app, then even better.
You could still use build args to do this.
Dockerfile:
FROM ubuntu
ARG APP_NAME
RUN echo $APP_NAME-specific-setup.sh >> /root/test
ENTRYPOINT cat /root/test
Build:
docker build --build-arg APP_NAME=foo -t foo .
Run:
$ docker run --rm foo
foo-specific-setup.sh
In your case you could run the correct script in the RUN instruction using the argument set just before it: a single Dockerfile per app-base variant, running the correct set-up based on the build argument.
FROM ubuntu
RUN apt-get install package
COPY ./app-code /code-dir
ARG APP_NAME
RUN $APP_NAME-specific-setup.sh
Any layers before setting the ARG would not need to be rebuilt when creating other versions.
You can then push the built images to separate docker repositories for each app.
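A Jenkins job could then build, tag, and push each variant from the same Dockerfile; for example (the registry name and tag are illustrative):
docker build --build-arg APP_NAME=foo -t registry.example.com/app-foo:branch-x .
docker build --build-arg APP_NAME=buzz -t registry.example.com/app-buzz:branch-x .
docker push registry.example.com/app-foo:branch-x
docker push registry.example.com/app-buzz:branch-x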
If your apps need different ENTRYPOINT instructions, you can have an APP_NAME-entrypoint.sh per app and rename it to entrypoint.sh within your APP_NAME-specific-setup.sh (or pass it through as an argument to run).
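That rename could be the last step of the set-up script, for example (a sketch; the paths follow the COPY above):
# at the end of foo-specific-setup.sh
cp /code-dir/foo-entrypoint.sh /entrypoint.sh
chmod +x /entrypoint.sh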
I used to list the tests directory in .dockerignore so that it wouldn't get included in the image, which I used to run a web service.
Now I'm trying to use Docker to run my unit tests, and in this case I want the tests directory included.
I've checked docker build -h and found no option related.
How can I do this?
Docker 19.03 shipped a solution for this.
The Docker client tries to load <dockerfile-name>.dockerignore first and then falls back to .dockerignore if it can't be found. So docker build -f Dockerfile.foo . first tries to load Dockerfile.foo.dockerignore.
Setting the DOCKER_BUILDKIT=1 environment variable is currently required to use this feature. This flag can be used with docker compose since 1.25.0-rc3 by also specifying COMPOSE_DOCKER_CLI_BUILD=1.
See also comment0, comment1, comment2
From Mugen's comment, please note:
the custom dockerignore should be in the same directory as the Dockerfile, not in the root context directory like the original .dockerignore
I.e., when calling
DOCKER_BUILDKIT=1 docker build -f /path/to/custom.Dockerfile ...
your .dockerignore file should be at
/path/to/custom.Dockerfile.dockerignore
At the moment, there is no way to do this. There is a lengthy discussion about adding an --ignore flag to Docker to provide the ignore file to use - please see here.
The options you have at the moment are mostly ugly:
Split your project into subdirectories that each have their own Dockerfile and .dockerignore, which might not work in your case.
Create a script that copies the relevant files into a temporary directory and run the Docker build there.
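A sketch of the second option (the paths and image name are illustrative):
#!/bin/sh
# copy only what the image needs into a scratch context, leaving tests/ out
BUILD_DIR=$(mktemp -d)
cp -r src Dockerfile "$BUILD_DIR"
docker build -t my-image "$BUILD_DIR"
rm -rf "$BUILD_DIR"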
Adding the removed tests back as a volume mount could be an option here: after you build the image, when running it for testing, mount the source code containing the tests on top of the cleaned-up code.
services:
  tests:
    image: my-clean-image
    volumes:
      - '../app:/opt/app' # Add removed tests
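With that in place, the suite can be run against the mounted code with something like this (the test command itself is hypothetical):
docker-compose run --rm tests pytest /opt/app/tests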
I've tried activating DOCKER_BUILDKIT as suggested by @thisismydesign, but I ran into other problems (outside the scope of this question).
As an alternative, I'm creating an intermediate tar using the -T flag, which takes a text file listing the files to be included in the tar, so it's not so different from a whitelist-style .dockerignore.
I export this tar and pipe it to the docker build command, and specify my docker file, which can live anywhere in my file hierarchy. In the end it looks like this:
tar -czh -T files-to-include.txt | docker build -f path/to/Dockerfile -
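where files-to-include.txt is just a plain list of paths, one per line, for example:
Dockerfile
src
tests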
Another option is to have a further build process that includes the tests. The way I do it is this:
If the tests are unit tests then I create a new Docker image that is derived from the main project image; I just stick a FROM at the top, and then ADD the tests, plus any required tools (in my case, mocha, chai and so on). This new 'testing' image now contains both the tests and the original source to be tested. It can then simply be run as is or it can be run in 'watch mode' with volumes mapped to your source and test directories on the host.
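For the unit-test case, the derived image can be very small. A sketch for a Node project (since mocha and chai are mentioned; the image name and paths are made up):
FROM my-project:latest
# layer the tests and the dev-only tooling on top of the original source
COPY test/ /app/test/
RUN npm install --only=dev
CMD ["npm", "test"]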
If the tests are integration tests--for example the primary image might be a GraphQL server--then the image I create is self-contained, i.e., is not derived from the primary image (it still contains the tests and tools, of course). My tests use environment variables to tell them where to find the endpoint that needs testing, and it's easy enough to get Docker Compose to bring up both a container using the primary image, and another container using the integration testing image, and set the environment variables so that the test suite knows what to test.
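The integration set-up might look like this in Compose (service names, port, and variable are made up):
services:
  api:
    image: my-graphql-server
  integration-tests:
    image: my-integration-tests
    environment:
      - API_URL=http://api:4000/graphql
    depends_on:
      - api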
Sadly it isn't currently possible to point to a specific file to use for .dockerignore, so we generate it in our build script based on the target/platform/image. As a Docker enthusiast I find it an embarrassing workaround.
From the following image: https://registry.hub.docker.com/u/cloudesire/activemq/dockerfile/
If I wanted to override the ACTIVEMQ_VERSION environment variable in my child docker file, I assumed I would be able to do something like the following:
FROM cloudesire/activemq:latest
MAINTAINER abc <abc@xyz.co.uk>
ENV ACTIVEMQ_VERSION 5.9.1
ADD ./src/main/resources/* /opt/activemq/conf/
However this does not seem to work. Admittedly I am new to Docker and have obviously misunderstood something. Please could someone explain why this does not work, and how/if I can achieve it another way?
That won't work. The ACTIVEMQ_VERSION has already been used by the cloudesire/activemq:latest image build to populate its image layers. All the ActiveMQ installation files based on version 5.11.1 are already extracted in their corresponding directories.
In your Dockerfile you can only build on top of what has already been built there and add your files. Your own Dockerfile build will not re-run the build instructions described in their Dockerfile.
If you need your own cloudesire/activemq image based on version 5.9.1, you need to clone their Dockerfile, adjust the version there, and build it locally. Then you can base your other Dockerfile on it.
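Concretely, after cloning their Dockerfile and changing the version to 5.9.1, that could look like this (the local tag name is up to you):
docker build -t local/activemq:5.9.1 path/to/cloned/activemq
and then in your own Dockerfile:
FROM local/activemq:5.9.1
ADD ./src/main/resources/* /opt/activemq/conf/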