Use build-arg from docker to create a JSON file

I have a docker build command which I'm running in Jenkins execute shell
docker build -f ./fastlane.dockerfile \
-t fastlane-test \
--build-arg PLAY_STORE_CREDENTIALS=$(cat PLAY_STORE_CREDENTIALS) \
.
PLAY_STORE_CREDENTIALS is a JSON file saved in Jenkins using managed files. Then, inside my Dockerfile, I have:
ARG PLAY_STORE_CREDENTIALS
ENV PLAY_STORE_CREDENTIALS=$PLAY_STORE_CREDENTIALS
WORKDIR /app/packages/web/android/fastlane/PlayStoreCredentials
RUN touch play-store-credentials.json
RUN echo $PLAY_STORE_CREDENTIALS >> ./play-store-credentials.json
RUN cat play-store-credentials.json
cat logs out an empty line, or nothing at all.
Content of PLAY_STORE_CREDENTIALS:
{
"type": "...",
"project_id": "...",
"private_key_id": "...",
"private_key": "...",
"client_email": "...",
"client_id": "...",
"auth_uri": "...",
"token_uri": "...",
"auth_provider_x509_cert_url": "...",
"client_x509_cert_url": "..."
}
Any idea where the problem is?

Is there actually a file named PLAY_STORE_CREDENTIALS? If there is, and if it's a standard JSON file, I would expect your given command line to fail: if the file contains any whitespace (which is typical for JSON files), that command should result in an error like...
"docker build" requires exactly 1 argument.
For example, with the sample content from your question in PLAY_STORE_CREDENTIALS, we see:
$ docker build -t fastlane-test --build-arg PLAY_STORE_CREDENTIALS=$(cat PLAY_STORE_CREDENTIALS) .
"docker build" requires exactly 1 argument.
See 'docker build --help'.
Usage: docker build [OPTIONS] PATH | URL | -
...because you are not properly quoting your arguments. If you adopt @β.εηοιτ.βε's suggestion and quote the cat command, it appears to build as expected:
$ docker build -t fastlane-test --build-arg PLAY_STORE_CREDENTIALS="$(cat PLAY_STORE_CREDENTIALS)" .
[...]
Step 7/7 : RUN cat play-store-credentials.json
---> Running in 29f95ee4da19
{ "type": "...", "project_id": "...", "private_key_id": "...", "private_key": "...", "client_email": "...", "client_id": "...", "auth_uri": "...", "token_uri": "...", "auth_provider_x509_cert_url": "...", "client_x509_cert_url": "..." }
Removing intermediate container 29f95ee4da19
---> b0fb95a9d894
Successfully built b0fb95a9d894
Successfully tagged fastlane-test:latest
You'll note that the resulting file does not preserve line breaks; that's because you're not quoting the variable $PLAY_STORE_CREDENTIALS in your echo statement, so the shell word-splits the value and rejoins it with single spaces. You should write that as:
RUN echo "$PLAY_STORE_CREDENTIALS" >> ./play-store-credentials.json
Lastly, it's not clear why you're transferring this data using environment variables, rather than just using the COPY command:
COPY PLAY_STORE_CREDENTIALS ./play-store-credentials.json
In the above examples, I'm testing things using the following Dockerfile:
FROM docker.io/alpine:latest
ARG PLAY_STORE_CREDENTIALS
ENV PLAY_STORE_CREDENTIALS=$PLAY_STORE_CREDENTIALS
WORKDIR /app/packages/web/android/fastlane/PlayStoreCredentials
RUN touch play-store-credentials.json
RUN echo $PLAY_STORE_CREDENTIALS >> ./play-store-credentials.json
RUN cat play-store-credentials.json
Update
Here's an example using the COPY command, where the value of the PLAY_STORE_CREDENTIALS build argument is a filename:
FROM docker.io/alpine:latest
ARG PLAY_STORE_CREDENTIALS
WORKDIR /app/packages/web/android/fastlane/PlayStoreCredentials
COPY ${PLAY_STORE_CREDENTIALS} play-store-credentials.json
RUN cat play-store-credentials.json
If I have credentials in a file named creds.json, this builds successfully like this:
docker build -t fastlane-test --build-arg PLAY_STORE_CREDENTIALS=creds.json .
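One caveat with this approach: COPY resolves its source relative to the build context, so the file named by the build argument (creds.json here) must live inside the directory passed to docker build (the trailing .).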

Related

Building devcontainer with --ssh key for GitHub repository in build process fails on VS Code for ARM Mac

We are trying to run a python application using a devcontainer.json with VS Code.
The Dockerfile installs packages from GitHub repositories with pip, which requires an ssh key. To build the images, we usually use the --ssh flag to pass the required key. We then use this key to run pip inside the Dockerfile as follows:
RUN --mount=type=ssh,id=ssh_key python3.9 -m pip install --no-cache-dir -r pip-requirements.txt
We now want to run a devcontainer.json inside VS Code and have tried many different approaches.
1. Passing the --ssh key using the build arg variable:
Since you cannot directly pass the --ssh key, we tried a workaround:
"args": {"kek":"kek --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa"}
This produces an OK-looking build command that works in a normal terminal, but inside VS Code the key is not passed and the build fails (both on Windows and Mac).
2. Putting an initial build command into the initializeCommand parameter and then a simple build command that should use the cached results:
We run a first build inside the initializeCommand parameter:
"initializeCommand": "docker buildx build --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa ."
and then we have a second build in the regular parameter:
"build": {
"dockerfile": "../Dockerfile",
"context": "..",
"args": {"kek":"kek --platform=linux/amd64"}
}
This solution is a nice workaround and works flawlessly on Windows. On the ARM Mac, however, only the initializeCommand build stage runs well; the actual build fails because it does not use the cached version of the images. The crucial step, where the --ssh key is used, fails just as described before.
We have no idea why VS Code on the Mac ignores the already created images. In a regular terminal, again, the second build command generated by VS Code works flawlessly.
The problem is reproducible on different ARM Macs, and on different repositories.
Here is the entire devcontainer:
{
  "name": "Dockername",
  "build": {
    "dockerfile": "../Dockerfile",
    "context": "..",
    "args": {"kek": "kek --platform=linux/amd64"}
  },
  "initializeCommand": "docker buildx build --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa .",
  "runArgs": ["--env-file", "configuration.env", "-t"],
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-python.python"
      ]
    }
  }
}
So, we finally found a workaround:
We add a tag to the initializeCommand:
"initializeCommand": "docker buildx build --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa -t dev-image ."
We create a new Dockerfile, Dockerfile-devcontainer, that contains only one line:
FROM --platform=linux/amd64 docker.io/library/dev-image:latest
In the build command of the devcontainer, use that Dockerfile:
{
  "name": "Docker",
  "initializeCommand": "docker buildx build --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa -t dev-image:latest .",
  "build": {
    "dockerfile": "Dockerfile-devcontainer",
    "context": "..",
    "args": {"kek": "kek --platform=linux/amd64"}
  },
  "runArgs": ["--env-file", "configuration.env"],
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-python.python"
      ]
    }
  }
}
In this way we can use the --ssh key and the Docker image created in the initializeCommand (tested on macOS and Windows).

How do I get a Command to run from a Dockerrun.aws.json on Elastic Beanstalk?

I have a Dockerfile and a Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [{
    "ContainerPort": "5000",
    "HostPort": "5000"
  }],
  "Volumes": [{
    "HostDirectory": "/tmp/download/models",
    "ContainerDirectory": "/models"
  }],
  "Logging": "/var/log/nginx",
  "Command": "mkdir -p /tmp && axel https://example.com/models.zip -o /tmp/models.zip"
}
But when I deploy, it doesn't run the Command that I specified. What am I doing wrong?
If you have an ENTRYPOINT in your Dockerfile, then the Command gets appended as its arguments:
Specify a command to execute in the container. If you specify an Entrypoint, then Command is added as an argument to Entrypoint. For more information, see CMD in the Docker documentation.
Thus your Command mkdir -p /tmp ... will be used as an argument to python3 -m flask run --host=0.0.0.0, resulting in an error. This could explain the issue you are experiencing.
I tried to recreate the issue initially using your Command structure but had some problems. What worked was using Command in the following way:
"Command": "/bin/bash -c \"mkdir -p /tmp && axel https://example.com/models.zip -o /tmp/models.zip\""
My Dockerfile did not have an Entrypoint. Wrapping the whole pipeline in /bin/bash -c is what lets shell operators like && work as a single Command. Thus, to run your python app, you could maybe do the following (assuming everything else is correct):
"Command": "/bin/bash -c \"mkdir -p /tmp && axel https://example.com/models.zip -o /tmp/models.zip && python3 -m flask run --host=0.0.0.0\""
Do you have the Dockerfile content?
Most likely your ENTRYPOINT script does not receive parameters, or it is ignoring them.
What you can do is something similar to this:
You have an entrypoint script that receives the command passed in Dockerrun.aws.json as a parameter, executes it, and then calls your real python command.
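A minimal sketch of such a wrapper (a hypothetical entrypoint.sh, copied into the image and set as ENTRYPOINT ["/entrypoint.sh"]; the flask command is taken from the question):
#!/bin/sh
# Run whatever Command Elastic Beanstalk passed in as arguments,
# joined back together and run through a shell so operators like && work.
if [ "$#" -gt 0 ]; then
    sh -c "$*"
fi
# Then start the real application.
exec python3 -m flask run --host=0.0.0.0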
Or you can replace your ENTRYPOINT with something similar to this (the -c is needed so bash runs the default command as a command string rather than trying to read the python3 binary as a script):
ENTRYPOINT ["/bin/bash", "-c"]
and your default command will be:
CMD ["python3 ..."]
This way, when running locally, you only run the python3 command.
When running in AWS, you can change your Command and append the python command to the end, as mentioned by Marcin. Both cases work.

echo multi-line json file in Dockerfile with environment variable values

I am trying to dynamically generate a JSON file during the build of my container via a Dockerfile, like this:
FROM alpine:3.9
# ... snipped
SHELL ["/bin/bash", "-c"]
RUN echo $'{\n\
"type": "some_type",\n\
"project_id": "$PROJECT_ID",\n\
"private_key_id": "$PRIVATE_KEY_ID"\n\
}' > /etc/my_creds.json
EXPOSE 80
The build works fine, but when I shell into my container and cat the /etc/my_creds.json file, it turns out the environment variables $PROJECT_ID and $PRIVATE_KEY_ID were written literally; they did not get replaced with the environment variable values that were present.
I.e. the file looks like this in the container:
{
"type": "some_type",
"project_id": "$PROJECT_ID",
"private_key_id": "$PRIVATE_KEY_ID"
}
Any ideas what I might be doing wrong here?
The $'...' ANSI-C quoting you are using interprets escapes like \n but never expands parameters, which is why the names came through literally. The variables will be expanded by the shell if you use double quotes and escaping, for example:
FROM ubuntu
ENV FOO=bar
RUN echo "{\"foo\": \"$FOO\"}" >foo.json
$ docker build -t foo .
$ docker run foo cat foo.json
{"foo": "bar"}

How does Dockerfile CMD exec form locate the binary

If I have a Dockerfile like this:
FROM ubuntu
CMD [ "ps", "-ef" ]
And if I build and run the image, I get
$ docker run -it 156a9f959f43
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 07:12 pts/0 00:00:00 ps -ef
which is consistent with the documentation.
Question: How does the binary ps get located in the first place when the container runs?
The exec syntax uses the PATH environment variable defined in the parent image (ubuntu:latest).
$ docker image inspect ubuntu:latest
[
  {
    ...
    "Config": {
      ...
      "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
      ],
      "Cmd": [
        "/bin/bash"
      ],
      ....
If you go looking at the Dockerfile for this base image... you'll actually see that the PATH variable is not defined there. We could go looking at scratch but that's a virtual image.
So, let's build an image on scratch with nothing, to see what variables are defined:
$ cat df.scratch
FROM scratch
$ docker build -t test-scratch -f df.scratch .
...
$ docker image inspect test-scratch:latest
[
  {
    ...
    "Config": {
      ...
      "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
      ],
      ...
So the PATH is getting created in the scratch image. This old issue and associated PR show that docker is including a PATH out of the box.
How can you adjust that path? You need to use an ENV line. If you set a variable in a RUN line, it will not be preserved after that RUN line completes. And if you append to the .bashrc in the container, that does not apply to non-bash shells like /bin/sh, anything using the exec syntax without a shell, and any non-interactive bash shells (since the .bashrc stops processing part way through for non-interactive shells). Here's an example of that with a different image/build:
$ cat df.path
FROM ubuntu
# before state from the base image
RUN [ "env" ]
# attempting to modify the .bashrc
RUN echo "export PATH="$PATH:/my/custom/bin/dir"" >> ~/.bashrc
RUN [ "env" ]
# modifying the image environment variable directly
ENV PATH=${PATH}:/opt/custom/bin
RUN [ "env" ]
$ docker build -t test-path -f df.path .
Sending build context to Docker daemon 31.23kB
Step 1/6 : FROM ubuntu
---> 4e5021d210f6
Step 2/6 : RUN [ "env" ]
---> Running in 5bb72abb386d
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=5bb72abb386d
HOME=/root
Removing intermediate container 5bb72abb386d
---> c438fb269c70
Step 3/6 : RUN echo "export PATH="$PATH:/my/custom/bin/dir"" >> ~/.bashrc
---> Running in 127b10aff046
Removing intermediate container 127b10aff046
---> 4af50595c271
Step 4/6 : RUN [ "env" ]
---> Running in c5ff46ba3b82
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=c5ff46ba3b82
HOME=/root
Removing intermediate container c5ff46ba3b82
---> 455325a5e484
Step 5/6 : ENV PATH=${PATH}:/opt/custom/bin
---> Running in e7960d9ce18a
Removing intermediate container e7960d9ce18a
---> ed532bff78b4
Step 6/6 : RUN [ "env" ]
---> Running in 9c1558a61ab7
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/custom/bin
HOSTNAME=9c1558a61ab7
HOME=/root
Removing intermediate container 9c1558a61ab7
---> f08993f21b97
Successfully built f08993f21b97
Successfully tagged test-path:latest
Note the original value of the path at step 2, it is unchanged at step 4, and it has the defined value at step 6.
In docker containers (similarly to most operating systems) there is a $PATH environment variable, which holds the directory paths where executables are located (separated by :).
For example, a $PATH variable might hold a value like /usr/local/bin:/usr/bin:/home/ubuntu/bin, which means that when you run a command like ps, those directories are searched for the executable.
You can learn more about $PATH variable here https://en.wikipedia.org/wiki/PATH_(variable)
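You can also check where a lookup would resolve from inside a container (a quick sketch; the exact path may vary between images):
$ docker run --rm ubuntu sh -c 'command -v ps'
/usr/bin/ps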
Note: The $PATH variable is going to differ from container to container (as they are isolated units) and will most probably hold the default value of the base distro used by the docker image.
To make changes to your $PATH variable on linux based systems, you can run export PATH="$PATH:/custom/bin/dir" and it will append the /custom/bin/dir to the variable.
To make this change permanent, you should add this command to your .bashrc, .profile, .zshrc or similar file (depending on what shell you are using)
So to update the variable in your docker containers, you should add something like this to your Dockerfile:
FROM ubuntu
RUN echo "export PATH="$PATH:/my/custom/bin/dir"" >> ~/.bashrc

Docker produces incorrect ENTRYPOINT command

I'm trying to build a docker image for my application, but I can't run a container based on this image because the ENTRYPOINT fails to execute:
User.Name#pc-name MINGW64 ~
$ docker run some-repository.com/application-name:latest
/bin/sh: line 0: [: missing `]'
Here is my Dockerfile:
FROM some-repository.com/openjdk:11.0.5-jre-slim as build
FROM some-repository.com/rhel7-atomic
COPY --from=build /usr/local/openjdk-11 jx/
LABEL Team="some-team"
LABEL AppVersion=1111
RUN mkdir -p id
COPY application-name-1.6.17-SNAPSHOT.jar id
EXPOSE 26000
ENTRYPOINT [ "sh", "-c", "exec echo hello \$JAVA_OPTS \
world"]
Here is the result of docker inspect:
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"ENTRYPOINT [\"/bin/sh\" \"-c\" \"[ \\\"sh\\\", \\\"-c\\\", \\\"exec echo hello \\\\$JAVA_OPTS world\\\"]\"]"
],
"ArgsEscaped": true,
"Entrypoint": [
"/bin/sh",
"-c",
"[ \"sh\", \"-c\", \"exec echo hello \\$JAVA_OPTS world\"]"
]
It looks like the ENTRYPOINT was interpreted incorrectly and the [ character became part of the command.
Why does this problem appear, and how can I fix it?
Remove the escape in front of the $: \$ is an invalid escape in JSON, and when the string doesn't parse as JSON, Docker falls back to treating the entire value as a shell-form command (which is exactly what the inspect output above shows).
ENTRYPOINT [ "sh", "-c", "exec echo hello $JAVA_OPTS \
world"]
If you wanted echo to print a literal $, rather than having sh expand the variable, you'd need a double escape (escaping the escape character) so that JSON passes a single \ through to the command being run:
ENTRYPOINT [ "sh", "-c", "exec echo hello \\$JAVA_OPTS \
world"]
