Access OpenShift template parameter inside Dockerfile - docker

Will OpenShift build parameters (with the OpenShift Docker build strategy) be automatically exposed to Dockerfiles as Docker build arguments (ARG) or environment variables (ENV), or does this need explicit configuration in a BuildConfig e.g. in these places:
oc explain bc.spec.strategy.dockerStrategy.buildArgs
oc explain bc.spec.strategy.dockerStrategy.env
The reason I'm asking is that I have a template with several parameters, no explicit configuration yet, but an apparent situation where some parameters are accessible inside the Dockerfile (as $VAR) and others are not ($VAR is empty). I would like to understand normal behavior, before I debug my situation further.
UPDATE I've now added a buildArgs section (see oc explain bc.spec.strategy.dockerStrategy.buildArgs) for the "missing" parameter to the template, like so:
strategy:
  type: Docker
  dockerStrategy:
    buildArgs:
      - name: VAR
        value: ${VAR}
but its value is still empty inside the built container, when I would have expected it to be true (because I started the build with oc new-app ... VAR=true). So something else must be wrong too.

This turned out to be a side-effect of my perhaps particular way of employing OpenShift's Docker build strategy.
I maintain the Dockerfile in a separate file and use a script, patch.sh, to insert it into the uploaded template. This is convenient, because a Dockerfile stored under bc.spec.source.dockerfile (see oc explain bc.spec.source.dockerfile) needs certain escaping in its YAML representation, and the script takes care of this. If I must update the Dockerfile (as happens frequently during development), I just edit the file using verbatim Dockerfile syntax and re-run the script.
The script also takes care of removing certain argument definitions from the Dockerfile (e.g. ARG VAR) and replacing references to them with references to the corresponding OpenShift template parameters (e.g. $VAR becomes ${VAR}). The idea is for the script to turn a Dockerfile that would also be suitable for a standalone Docker environment into one that can serve OpenShift's Docker build strategy with template parameterization.
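For reference, the argument-handling part boiled down to something like the following (a rough sketch, not the actual patch.sh; VAR stands for whichever parameter is being mapped):
# drop the standalone ARG definition and rewrite $VAR references so that
# OpenShift's template processing substitutes the parameter value instead
sed -e '/^ARG VAR$/d' -e 's/\$VAR/${VAR}/g' Dockerfile > Dockerfile.openshift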
The actual error occurred because I had added a new template parameter but not yet adjusted the script accordingly. The situation is now back to normal.
UPDATE I've now removed the special logic for manipulating arguments from my patch script and introduced build arguments under bc.spec.strategy.dockerStrategy.buildArgs instead. Entries look like this:
buildArgs:
  - name: VAR
    value: ${VAR}
So basically, the build configuration now does the copying (instead of my patch script doing the substitution).
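Note that for a build argument to be visible inside the Dockerfile it still needs a matching ARG VAR declaration there. A quick way to check that the parameter actually reaches the build (names below are illustrative, not taken from my template) is to process the template with the parameter set and watch the build log, e.g. with a RUN echo "$VAR" step in the Dockerfile:
oc process my-template -p VAR=true | oc apply -f -
oc start-build my-buildconfig --follow   # the echo step prints the value in the build log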

Related

How to use docker images when building artefacts in Actions?

TL;DR: I would like to use specific docker images on a self-hosted Actions runner (itself a docker container on my docker engine) to build artefacts, move them between the build phases, and end up with a standalone executable (not a docker container to be deployed). I do not know how to use docker containers as "building engines" in Actions.
Details: I have a home project consisting of a backend in Go (cross-compiled to a standalone binary) and a frontend in JavaScript (actually a framework: Quasar).
I develop on my laptop in Windows and use GitHub as the SCM.
The manual steps I do are:
build a static version of the frontend which lands in a directory spa
copy that directory to the backend directory
compile the executable that embeds the spa directory
copy (scp) this executable to the final destination
For development purposes this works fine.
I now would like to use Actions to automate the whole thing. I use docker-based self-hosted runners (tcardonne/github-runner).
My problem: the containers do a great job of isolating the build environment from the server they run on. They are, however, reused across build jobs, and this may create conflicts. More importantly, the default versions of the software provided by these containers are not the right (usually the latest) ones.
The solution would be to run the build phases in disposable docker containers (which would be based on the right image, shortening the build time as a nice side benefit). Unfortunately, I do not know how to set this up.
Note: I do not ultimately want to create docker containers; I just want to use them as "building engines", extract the artefacts from them, and share those between the jobs (in my specific case, one job would build the frontend with Quasar and generate a directory, and the other would be a compilation ending up with a standalone executable copied elsewhere).
Interesting premise, you can certainly do this!
I think you may be slightly mistaken with regards to:
They are however reused across build jobs and this may create conflicts
If you run a new container from an image, then you will start with a fresh instance of that container: files, software, etc., all adhering to the original image definition. Which is good, as this certainly aids your efforts. Let me know if I have the wrong end of the stick with regard to the above, though.
Base Image
You can define your own image for building, in order to mitigate the shortfalls of public images that may not be up to date or may not suit your requirements. In fact, this is a common pattern for CI, and Google does something similar with their cloud build configuration. For either approach below, you will likely want to do something like the following to ensure you have all the build tools you may need.
As a rough example:
FROM golang:1.16.7-buster
RUN apt update && apt install -y \
        git \
        make \
        ... \
    && useradd myuser \
    && mkdir /dist
USER myuser
You could build and tag this as follows (and then push it to your registry):
docker build . -t <containerregistry>:buildr/golang
It would also be recommended to maintain a separate builder image for each other type of project, such as node, python, etc.
Approaches
Building with layers
If you're looking to leverage build caching for your applications, this will be the better option for you. Caching is only effective if nothing has changed, and since the projects will be built in isolation, it makes it relatively safe.
Building your app may look something like the following:
FROM <containerregistry>:buildr/golang as builder
COPY src/ .
RUN make dependencies
RUN make
RUN mv /path/to/compiled/app /dist
FROM scratch
COPY --from=builder /dist /dist
The gist of this is that you would start building your app within the builder image, so that it includes all the build deps you require, and then use a multi-stage file to publish a final static container that includes your compiled source code with no dependencies (using the scratch image as the smallest possible image).
Getting the final files out of your image would be a bit harder using this approach, as you would have to run an instance of the published container in order to mount the files and persist them to disk, or use docker cp to retrieve the files from a container (not an image) to your disk.
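If you do need the compiled files on the host, one way (with names borrowed from the example above) is to create a stopped container from the published image and copy the directory out of it:
docker create --name extract user/app:latest   # no need to actually run it
docker cp extract:/dist ./dist                 # copy the artefacts to the host
docker rm extract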
In GitHub Actions, this would look like running a step that builds a Docker container; the step can occur anywhere Docker is accessible.
For example:
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      ...
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: user/app:latest
Building as a process
This one cannot leverage build caching quite as well, but you may be able to do clever things like mounting a host npm cache into your container to aid in actions like npm restore.
This approach differs from the former in that the way you build your app will be defined via CI / a purposeful script, as opposed to the Dockerfile.
In this scenario, it would make more sense to define the CMD in the parent image and mount your source code in, thus not maintaining an image per project you are building.
This would shift the responsibility of building your application from the build time of the image to its runtime. Retrieving your code from the container would be doable through volume mounting, for example:
docker run -v /path/to/src:/src -v /path/to/dist:/dist <containerregistry>:buildr/golang
If the CMD was defined in the builder, that single script would execute and build the mounted-in source code, and subsequently publish it to /dist in the container, which would then be persisted to your host via that volume mapping.
Of course, this applies if you're building locally. It actually becomes a bit nicer in a GitHub Actions context if you wish to keep your build instructions there. You can choose to run steps within your builder container using something like the following:
jobs:
  ...
  container:
    runs-on: ubuntu-latest
    container: <containerregistry>:buildr/golang
    steps:
      - run: |
          echo This job does specify a container.
          echo It runs in the container instead of the VM.
        name: Run in container
Within that run: spec, you could choose to call a build script, or enter the commands that might be present in the script yourself.
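For instance, the run: block could call a small build script along these lines (a sketch only; the make targets simply mirror the earlier Dockerfile example and are assumptions):
#!/usr/bin/env bash
set -euo pipefail                # fail fast on errors and undefined variables
make dependencies                # restore build dependencies
make                             # compile the project
mv /path/to/compiled/app dist/   # expose the artefact for later steps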
What you do with the compiled source is largely up to you once acquired 👍
Chaining (Frontend / Backend)
You mentioned that you build static assets for your site and then embed them into your golang binary to be served.
Something like that introduces complications, of course, but nothing untoward. If you do not need to retrieve your web files until you build your golang container, then you may consider taking the first approach and copying the content from the published frontend image as part of a Docker COPY --from directive. This makes more sense if you have two separate projects, one for the frontend and one for the backend.
If everything is in one folder, then it sounds like you may just want to extend your build image to cover both Go and JS, and then take the latter approach and define those build instructions in a script, a makefile, or your run: config in your actions file.
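As a rough sketch of that combined approach for your particular stack (Quasar frontend plus Go backend; the exact commands, paths, and output locations are assumptions rather than something taken from your project):
(cd frontend && npx quasar build)            # emits the static site under dist/spa
cp -r frontend/dist/spa backend/spa          # hand the assets to the backend
(cd backend && go build -o ../dist/app .)    # the Go binary embeds the spa directory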
Conclusion
This is a lot of info; I hope it's digestible for you and, more importantly, that it gives you some ideas as to how you can tackle your current issue. Let me know in the comments if you would like any clarification.

Passing arguments to Docker build while deploying AppEngine flex

I'm wondering if it's possible to feed arguments, or environment variables into a Dockerfile used by AppEngine (flex environment).
I'd like to use this command:
COPY ${STAGE}/keycloak-files/realm-config/* /opt/jboss/keycloak/realm-config/
"STAGE" variable would allow to select the origin (I have a "staging" and "production" directory, containing different configurations).
I've got two different app.yml files, one for each environment, but from what I read online, environment variables are not exposed to the Dockerfile at build time.
People suggest passing build arguments to accomplish the task. But how would that be possible with App Engine, where we don't execute the docker build command directly?
As @DamPlz said, there is no straightforward way to pass env variables from the app.yaml to the Dockerfile during the deployment phase. Here are some workarounds that I could think of:
One option could be to create the variable in the Dockerfile directly and, if you want to change it for each deployment, use a placeholder value and have a script update the value of the variable before running “gcloud app deploy”.
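A sketch of that first workaround, where __STAGE__ is a made-up placeholder and the file names are illustrative:
sed -i "s/__STAGE__/${STAGE}/g" Dockerfile   # swap the placeholder for the real value
gcloud app deploy app-${STAGE}.yml           # deploy with the matching config file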
On the other hand, you could use build triggers in Cloud Build to modify the value in the Docker image using user-defined substitutions.
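For the second workaround, a user-defined substitution could be supplied when the build is submitted (or configured on the trigger) and forwarded to the Docker build as a build argument; the names below are illustrative only:
gcloud builds submit --substitutions=_STAGE=staging
# ...where the corresponding cloudbuild.yaml docker step would pass
# --build-arg STAGE=${_STAGE} and the Dockerfile would declare ARG STAGE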

How can Cloud Build take dynamic parameters to increment a registry tag?

I want my Cloud Build to push an image to a registry with an incremented tag. So, when the trigger arrives from GitHub, build the image, and if the latest tag was 1.10, tag the new one 1.11. Similarly, the 1.11 value will serve in multiple other steps in the build.
Reading the registry and incrementing the tag is easy (in a bash Cloud Build step), but Cloud Build has no built-in way to pass parameters between steps. (Substitutions come from outside the Cloud Build process, for example from the Git tags, and are not generated inside the process.)
This StackOverflow question and this article say that Cloud Build steps can communicate by writing files to the workspace directory.
That is clumsy. But worse, this requires using shell steps exclusively, not the native docker-building steps, nor the native image command.
How can I do this?
Sadly, you can't. Each Cloud Build step runs in its own sandbox, and only the /workspace directory is mounted across steps. By the way, none of the environment variables, installed binaries, and so on persist from one container to the next.
You have to use a shell script each time :( The easiest way is to have a file in your /workspace directory (for example an env.var file):
# load the environment variable
source /workspace/env.var
# Add variable
echo "NEW=Variable" >> /workspace/env.var
For this, Cloud Build is boring...

Use Bamboo variables in batch script

According to this very old question, you can use Bamboo variables in a batch script like %bamboo_buildNumber%, but it doesn't work for me; I just get an empty string. I also tried %bamboo.buildNumber%, with the same result. The script is not inline and is used by a Dockerfile. Does that have an influence on this? Or did something change since the above question was asked?
In the script I have a line
innosetup-compiler MySetup.iss "--DVERSION=%major%.%minor%" "--DPATCH=%bamboo_buildNumber%"
And in my Dockerfile I write
RUN ./MyScript.bat
Update:
So I think what's happening is that, because the batch script is run from the Dockerfile, it is also run inside a container and therefore doesn't have access to the Bamboo environment variables. I tried passing the variable in question through the Dockerfile into the script, but it hasn't worked so far.
I believe that this has changed in newer versions of Bamboo. The preferred syntax now is to use ${bamboo.buildNumber} when passing variables to a build script. I even use that approach in my old /bin/sh and cmd.exe scripts. You'll know you've got it working when you see the following in the logs: Substituting variable: ${bamboo.buildNumber} with xxxx
Once you verify that the above variable substitution is working, you can then troubleshoot how that variable is getting (or not getting) into your Docker scripts.
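One way to get the value into the Docker build (a sketch; the image name is an assumption) is to let Bamboo substitute it on the docker build command line and declare a matching ARG in the Dockerfile, so that MyScript.bat sees it as %bamboo_buildNumber%:
# run from a Bamboo script task, so ${bamboo.buildNumber} is substituted before docker runs;
# the Dockerfile then needs an ARG bamboo_buildNumber line before RUN ./MyScript.bat
docker build --build-arg bamboo_buildNumber=${bamboo.buildNumber} -t my-image .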
For more information on the major/minor build numbers, check out this page. You may need to call the variable slightly differently if it is a custom variable.
If you are using an inline script body in a Bamboo script task, then ${bamboo.buildNumber} works without any issue. But if you need to access the variable in a .bat file or a .ps1 file, you have to use the following syntax:
%bamboo_buildNumber% in a .bat file
$Env:bamboo_buildNumber in a PowerShell file

Is it possible to skip a FROM command in a multistage dockerfile?

Attempting to make a dynamic docker file, where the final image may need one of two previous images based on user input.
I don't think you can skip the FROM command. A build has to start from somewhere, even if it is scratch.
As for creating a dynamic Dockerfile, you can generate the Dockerfile using a shell script. I came across one such script, parity-deploy.sh, which dynamically creates a docker-compose.yml file on the basis of the configuration provided by the user.
Dockerfiles have been able to use ARGs to allow passing in parameters during a docker build using the CLI argument --build-arg for some time. But until recently (Docker's 17.05 release, to be precise), you weren't able to use an ARG to specify all or part of your Dockerfile's mandatory FROM command.
But since the pull request Allow ARG in FROM was merged, you can now specify an image / repository to use at runtime. This is great for flexibility, and as a concrete example, I used this feature to allow me to pull from a private Docker registry when building a Dockerfile in production, or to build from a local Docker image that was created as part of a CI/testing process inside Travis CI.
To use an ARG in your Dockerfile's FROM:
ARG MYAPP_IMAGE=myorg/myapp:latest
FROM $MYAPP_IMAGE
...
Then if you want to use a different image/tag, you can provide it at runtime:
docker build -t container_tag --build-arg MYAPP_IMAGE=localimage:latest .
If you don't specify --build-arg, then Docker will use the default value in the ARG.
Typically, it's preferred that you set the FROM value in the Dockerfile itself—but there are many situations (e.g. CI testing) where you can justify making it a runtime argument.
According to the documentation, you cannot skip it. It should be the first command in the Dockerfile as well.
As such, a valid Dockerfile must start with a FROM instruction
But notice that:
FROM can appear multiple times within a single Dockerfile to create multiple images or use one build stage as a dependency for another.
You can edit the file dynamically (e.g. with sed) to use the image or images that the user has specified.
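For example, a small wrapper script could rewrite a placeholder base image before building (the placeholder and file names are illustrative):
sed "s|%BASE_IMAGE%|${USER_CHOICE}|" Dockerfile.template > Dockerfile   # pick the user's image
docker build -t my-app .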
Looks like Docker supports this now: https://github.com/docker/cli/issues/1134
