Passing arguments to Docker build while deploying AppEngine flex - docker

I'm wondering if it's possible to feed arguments, or environment variables into a Dockerfile used by AppEngine (flex environment).
I'd like to use this command:
COPY ${STAGE}/keycloak-files/realm-config/* /opt/jboss/keycloak/realm-config/
The "STAGE" variable would allow me to select the origin (I have a "staging" and a "production" directory, each containing a different configuration).
I've got two different app.yml files, one for each environment, but from what I read online, environment variables are not exposed to the Dockerfile at build time.
People suggest to pass arguments to accomplish the task. But how would that be possible with appengine, where we don't execute the docker build command directly?

As @DamPlz said, there is no direct way to pass environment variables from app.yaml to the Dockerfile during the deployment phase. Here are some workarounds I can think of:
One option is to define the variable directly in the Dockerfile. If you need to change it for each deployment, use a placeholder value and have a script substitute the real value before running "gcloud app deploy".
Alternatively, you could set it in the Docker image using build triggers in Cloud Build with user-defined substitutions.
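As an illustrative sketch of the placeholder approach (the __STAGE__ token, the template file name, and the per-stage app YAML names are assumptions, not App Engine features):

```shell
# Sketch: substitute a placeholder in a Dockerfile template, then deploy.
# Dockerfile.tpl and the __STAGE__ token are hypothetical conventions.
STAGE=staging   # or: production
printf 'COPY __STAGE__/keycloak-files/realm-config/ /opt/jboss/keycloak/realm-config/\n' > Dockerfile.tpl
sed "s|__STAGE__|${STAGE}|g" Dockerfile.tpl > Dockerfile
cat Dockerfile
# then: gcloud app deploy app-${STAGE}.yaml
```

The substitution happens on the machine running the deploy, so the Dockerfile that App Engine receives is already fully resolved.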

Related

How can Cloud Build take dynamic parameters to increment a registry tag?

I want my Cloud Build to push an image to a registry with an incremented tag. So, when the trigger arrives from GitHub, build the image, and if the latest tag was 1.10, tag the new one 1.11. Similarly, the 1.11 value will serve in multiple other steps in the build.
Reading the registry and incrementing the tag is easy (in a bash Cloud Build step), but Cloud Build has no way to pass parameters. (Substitutions come from outside the Cloud Build process, for example from the Git tags, and are not generated inside the process.)
This StackOverflow question and this article say that Cloud Build steps can communicate by writing files to the workspace directory.
That is clumsy. But worse, this requires using shell steps exclusively, not the native docker-building steps, nor the native image command.
How can I do this?
Sadly, you can't. Each Cloud Build step runs in its own sandboxed container, and only the /workspace directory is mounted across steps. Consequently, environment variables, installed binaries, and so on don't persist from one container to the next.
You have to use a shell script each time :( The easiest way is to keep a file in your /workspace directory (for example an env.var file):
# load the environment variable
source /workspace/env.var
# Add variable
echo "NEW=Variable" >> /workspace/env.var
In this respect, Cloud Build is a pain...
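For illustration, here is a minimal sketch of that file-passing pattern (the tag format and file names are assumptions), simulated locally with an ordinary directory standing in for /workspace:

```shell
# Sketch: pass an incremented tag between "steps" via a shared file.
# In Cloud Build each snippet below would be a separate build step
# sharing the mounted /workspace volume; here we simulate with a local dir.
set -e
WS=./workspace && mkdir -p "$WS"

# Step 1: record the latest tag (assume a registry lookup produced 1.10)
echo "TAG=1.10" > "$WS/env.var"

# Step 2: load it, bump the minor part, persist the new value
source "$WS/env.var"
NEW_TAG="${TAG%%.*}.$(( ${TAG##*.} + 1 ))"
echo "TAG=$NEW_TAG" > "$WS/env.var"

# Step 3: any later step can source the same file
source "$WS/env.var"
echo "$TAG"   # 1.11
```

Each "step" only relies on the file's contents, which is exactly what survives between Cloud Build containers.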

Use Bamboo variables in batch script

According to this very old question, you can use Bamboo variables in a batch script like %bamboo_buildNumber%, but it doesn't work for me; I just get an empty string. I also tried %bamboo.buildNumber%, with the same result. The script is not inline and is invoked from a Dockerfile. Does that have an influence? Or has something changed since the question above was asked?
In the script I have a line
innosetup-compiler MySetup.iss "--DVERSION=%major%.%minor%" "--DPATCH=%bamboo_buildNumber%"
And in my Dockerfile I write
RUN ./MyScript.bat
Update:
So I think what's happening is that, because the batch script is run from the Dockerfile, it also runs inside a container and therefore doesn't have access to the Bamboo environment variables. I tried passing the variable in question through the Dockerfile into the script, but it hasn't worked yet.
I believe this has changed in newer versions of Bamboo. The preferred syntax now is ${bamboo.buildNumber} when passing variables to a build script. I even use that approach in my old /bin/sh and cmd.exe scripts. You'll know it's working when you see the following in the logs: Substituting variable: ${bamboo.buildNumber} with xxxx
Once you verify that the above variable substitution is working, you can then troubleshoot how that variable is getting (or not getting) into your Docker scripts.
For more information on the major and minor build numbers, check out this page. You may need to reference a custom variable slightly differently.
If you use an inline script body in a Bamboo Script task, ${bamboo.buildNumber} works without any issue. But to access the variable from a .bat or .ps1 file, you need the following syntax:
%bamboo_buildNumber% in a .bat file
$Env:bamboo_buildNumber in a PowerShell file
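As a small demonstration of why the batch script sees an empty string (assuming, as the update above suggests, that the Docker build container simply lacks Bamboo's environment): a clean-environment shell behaves the same way. The usual fix is to pass the value explicitly, e.g. docker build --build-arg BUILD_NUMBER=%bamboo_buildNumber% together with an ARG BUILD_NUMBER line in the Dockerfile before MyScript.bat runs.

```shell
# A process started with a clean environment (like a build container)
# cannot see the host's Bamboo variables unless they are passed explicitly.
env -i bash -c 'echo "build number: ${BUILD_NUMBER:-<empty>}"'
env -i BUILD_NUMBER=42 bash -c 'echo "build number: ${BUILD_NUMBER:-<empty>}"'
# prints:
#   build number: <empty>
#   build number: 42
```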

How to use a local file or retrieve it remotely (conditionally) in Dockerfile?

I'd like to be able to control the source of a file (Java Archive) in a Dockerfile which is either a download (with curl) or a local file on the same machine I build the Docker image.
I'm aware of ways to control RUN statements, e.g. Conditional ENV in Dockerfile, but since I need access to the filesystem outside the Docker build image, a RUN statement won't do. I'd need a conditional COPY or ADD or a workaround.
I'm interested in built-in Docker functions/features which avoid the use of more than one Dockerfile or wrapping the Dockerfile in a script using templating software (those just workarounds popping into my head).
You can use a multi-stage build, which is relatively new in Docker:
https://docs.docker.com/develop/develop-images/multistage-build/
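As a hedged sketch of how a multi-stage build can emulate a conditional COPY (the image names, stage names, and download URL below are illustrative assumptions; with BuildKit the unselected stage is not even built):

```dockerfile
# Sketch: choose the JAR's origin with a build arg (requires Docker 17.05+).
# Build with: docker build --build-arg SOURCE=local .   (or SOURCE=remote)
ARG SOURCE=remote

# Option A: download the archive (the example.com URL is a placeholder)
FROM alpine AS fetch-remote
RUN apk add --no-cache curl \
 && curl -fsSL -o /app.jar https://example.com/app.jar

# Option B: copy it from the local build context
FROM alpine AS fetch-local
COPY app.jar /app.jar

# Select one of the stages above via the build arg...
FROM fetch-${SOURCE} AS selected
# ...and copy the result into the final image
FROM eclipse-temurin:17-jre
COPY --from=selected /app.jar /app.jar
```

This works because an ARG declared before the first FROM is in scope for every FROM line, so the stage name itself can be parameterized.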

Access OpenShift template parameter inside Dockerfile

Will OpenShift build parameters (with the OpenShift Docker build strategy) be automatically exposed to Dockerfiles as Docker build arguments (ARG) or environment variables (ENV), or does this need explicit configuration in a BuildConfig e.g. in these places:
oc explain bc.spec.strategy.dockerStrategy.buildArgs
oc explain bc.spec.strategy.dockerStrategy.env
The reason I'm asking is that I have a template with several parameters, no explicit configuration yet, but an apparent situation where some parameters are accessible inside the Dockerfile (as $VAR) and others are not ($VAR is empty). I would like to understand normal behavior, before I debug my situation further.
UPDATE I've now added a oc explain bc.spec.strategy.dockerStrategy.buildArgs section for the "missing" parameter to the template like so:
strategy:
  type: Docker
  dockerStrategy:
    buildArgs:
      - name: VAR
        value: ${VAR}
but its value is still empty inside the built container, when I would have expected it to be true (because I started the build with oc new-app ... VAR=true). So something else must be wrong too.
This turned out to be a side-effect of my perhaps particular way of employing OpenShift's Docker build strategy.
I maintain the Dockerfile in a separate file and use a script patch.sh to insert it into the uploaded template. This is convenient because a Dockerfile stored under bc.spec.source.dockerfile needs certain escaping (in its YAML representation), and the script takes care of this. If I must update the Dockerfile (as happens frequently during development), I just edit the file using verbatim Dockerfile syntax and re-run the script.
The script also takes care of removing certain argument definitions from the Dockerfile (e.g. ARG VAR) and replacing references to them with references to the corresponding OpenShift template parameters (e.g. $VAR with ${VAR}). The idea is for the script to turn a Dockerfile that would also suit a stand-alone Docker environment into one that can serve OpenShift's Docker build strategy with template parameterization.
The actual error occurred because I had added a new template parameter but not yet adjusted the script accordingly. The situation is now back to normal.
UPDATE I've now removed the special logic for manipulating arguments from my patch script and introduced build arguments under bc.spec.strategy.dockerStrategy.buildArgs instead. Entries look like this:
buildArgs:
  - name: VAR
    value: ${VAR}
So basically, the build configuration now does the substitution instead of my patch script.

Is it possible to skip a FROM command in a multistage dockerfile?

Attempting to make a dynamic Dockerfile, where the final image may need one of two previous images based on user input.
I don't think you can skip the FROM command. A build has to start from somewhere, even if it is scratch.
As for creating a dynamic Dockerfile, you can generate it with a shell script. I came across one such script in parity-deploy.sh, which dynamically creates a docker-compose.yml file based on user-provided configuration.
Dockerfiles have been able to use ARGs to allow passing in parameters during a docker build using the CLI argument --build-arg for some time. But until recently (Docker's 17.05 release, to be precise), you weren't able to use an ARG to specify all or part of your Dockerfile's mandatory FROM command.
But since the pull request Allow ARG in FROM was merged, you can now specify an image / repository to use at runtime. This is great for flexibility, and as a concrete example, I used this feature to allow me to pull from a private Docker registry when building a Dockerfile in production, or to build from a local Docker image that was created as part of a CI/testing process inside Travis CI.
To use an ARG in your Dockerfile's FROM:
ARG MYAPP_IMAGE=myorg/myapp:latest
FROM $MYAPP_IMAGE
...
Then if you want to use a different image/tag, you can provide it at runtime:
docker build -t container_tag --build-arg MYAPP_IMAGE=localimage:latest .
If you don't specify --build-arg, then Docker will use the default value in the ARG.
Typically it's preferred to set the FROM value in the Dockerfile itself, but there are many situations (CI testing, for example) where making it a runtime argument is justified.
According to the documentation, you cannot skip it, and it should be the first instruction in the Dockerfile:
As such, a valid Dockerfile must start with a FROM instruction
But notice that:
FROM can appear multiple times within a single Dockerfile to create multiple images or use one build stage as a dependency for another.
You can edit the file dynamically (e.g. sed) to use the image/images that the user has specified.
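A minimal sketch of that sed approach (the replacement image name is a hypothetical user-supplied value):

```shell
# Rewrite the FROM line of a Dockerfile before building.
printf 'FROM alpine:latest\nRUN echo hello\n' > Dockerfile.tmp
sed 's|^FROM .*|FROM myorg/custom-base:1.0|' Dockerfile.tmp > Dockerfile.new
mv Dockerfile.new Dockerfile.tmp
head -n 1 Dockerfile.tmp   # FROM myorg/custom-base:1.0
```

That said, if you only need to vary the base image rather than truly skip a stage, the ARG-in-FROM approach above avoids rewriting the file at all.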
It looks like Docker supports this now: https://github.com/docker/cli/issues/1134
