Run shell command while setting ENV in dockerfile [duplicate]

This question already has answers here: Dockerfile - set ENV to result of command (7 answers). Closed 2 years ago.
Inside my Dockerfile:
ENV MY_ENCODED_VALUE="bXkgbmFtZSBpcyByYWtpYgo="
ENV MY_DECODED_VALUE=$(echo $MY_ENCODED_VALUE | base64 -d)
In the second line, I want to decode the encoded value and put the decoded value into my environment variable.
But I am getting the following error:
Error response from daemon: failed to parse dockerfile: Syntax error - can't find = in "$MY_ENCODED_VALUE". Must be of the form: name=value
What does it even mean? What's supposed to be the right syntax here?

Since you've mentioned that you need the variable at build time only, this should do the job:
Dockerfile:
FROM node:alpine
ENV MY_ENCODED_VALUE "bXkgbmFtZSBpcyByYWtpYgo="
RUN echo $MY_ENCODED_VALUE | base64 -d > /root/temp
RUN MY_DECODED_VALUE=$(cat /root/temp); echo "Output: $MY_DECODED_VALUE"
Output:
$ docker build -t test .
Sending build context to Docker daemon 2.56kB
Step 1/4 : FROM node:alpine
---> bcfeabd22749
Step 2/4 : ENV MY_ENCODED_VALUE "bXkgbmFtZSBpcyByYWtpYgo="
---> Using cache
---> 81084f4be2e4
Step 3/4 : RUN echo $MY_ENCODED_VALUE | base64 -d > /root/temp
---> Using cache
---> b8ad3a100746
Step 4/4 : RUN MY_DECODED_VALUE=$(cat /root/temp); echo "Output: $MY_DECODED_VALUE"
---> Running in c9c41b92dee0
Output: my name is rakib
Removing intermediate container c9c41b92dee0
---> acfcd422a8ed
Successfully built acfcd422a8ed
Successfully tagged test:latest
Note:
1) The RUN instruction in the last line of my Dockerfile is really there for the latter part, i.e., the echo command. Assigning the variable [MY_DECODED_VALUE=$(cat /root/temp)] just before the echo ensures that the variable gets set in the same layer where you want to consume it.
2) The way I have used variable assignment, it will not behave as you would expect from an ENV instruction, i.e., it will not be available for use across layers. If you want to consume the variable in multiple layers, you will have to repeat RUN MY_DECODED_VALUE=$(cat /root/temp); <your-command-that-uses-the-variable>, wherever applicable. Not elegant, but that's how it works with this solution.
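For illustration, a minimal sketch of that repeated pattern across two layers (the echo commands are placeholders for whatever actually consumes the value):
RUN MY_DECODED_VALUE=$(cat /root/temp); echo "first consumer: $MY_DECODED_VALUE"
RUN MY_DECODED_VALUE=$(cat /root/temp); echo "second consumer: $MY_DECODED_VALUE"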

Related

Run sed and store result to new variable in dockerfile

How can I run a sed command and save the result to a new variable in Docker?
The sed command should replace the last occurrence of '.' with '_'.
Example: JOB_NAME_WITH_VERSION = test_git_0.1, and the wanted result is ZIP_FILE_NAME = test_git_0_1
Dockerfile:
RUN ZIP_FILE_NAME=$(echo ${JOB_NAME_WITH_VERSION} | sed 's/\(.*\)\./\1_/') && export ZIP_FILE_NAME
RUN echo "Zip file Name found : $ZIP_FILE_NAME"
I tried this in my Dockerfile, but the result is empty:
Zip file Name found :
The issue here is that every RUN command results in a new layer, so whatever shell variable was declared in previous layers is subsequently lost.
Compare this:
FROM ubuntu
RUN JOB="FOOBAR"
RUN echo "${JOB}"
$ docker build .
...
Step 3/3 : RUN echo "${JOB}"
---> Running in c4b7d1632c7e
...
to this:
FROM ubuntu
RUN JOB="FOOBAR" && echo "${JOB}"
$ docker build .
...
Step 2/2 : RUN JOB="FOOBAR" && echo "${JOB}"
---> Running in c11049d1687f
FOOBAR
...
So as a workaround, if using a single RUN command is not an option for whatever reason, write the variable to disk and read it back when needed, e.g.:
FROM ubuntu
RUN JOB="FOOBAR" && echo "${JOB}" > /tmp/job_var
RUN cat /tmp/job_var
$ docker build .
...
Step 3/3 : RUN cat /tmp/job_var
---> Running in a346c30c2cd5
FOOBAR
...
Each RUN statement in a Dockerfile runs in a separate shell, so once a statement is done, all of its shell variables are lost, even if they are exported.
To do what you want to do, you can combine your RUN statements like this
RUN ZIP_FILE_NAME=$(echo ${JOB_NAME_WITH_VERSION} | sed 's/\(.*\)\./\1_/') && \
    export ZIP_FILE_NAME && \
    echo "Zip file Name found : $ZIP_FILE_NAME"
As your variable is lost once the RUN statement finishes, the environment variable also won't be available in your container when it runs. To have an environment variable available there, you need to use the ENV instruction.
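A minimal sketch of the difference (the value is hard-coded here as an assumption, since ENV cannot capture command output):
ENV ZIP_FILE_NAME=test_git_0_1
RUN echo "available in every later layer and at runtime: $ZIP_FILE_NAME"
RUN TEMP_VALUE=temporary && echo "only exists inside this one RUN shell: $TEMP_VALUE"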

Docker build step name cannot start with number

I'm building a Docker image for a Sybase database. The docker build command fails because the server name, which is taken from the intermediate build container's hostname, cannot start with a number.
I have searched A LOT for a way to change the build container's hostname, and my solution so far is to retry the build until I get a hostname that starts with a letter...
Step 1/7 : FROM my_image as docker_sybase_db
---> d266899b4eef
Step 2/7 : COPY *.zip /mnt/backup/
---> Using cache
---> 9e8e405848ce
Step 3/7 : COPY entrypoint.sh ~
---> Using cache
---> 5c0c923985db
Step 4/7 : ENV HOSTNAME docker_sybase_db
---> Using cache
---> f2b39a7280a0
Step 5/7 : RUN init_db.sh
---> Running in 0ae1a95b3203
Server name '0ae1a95b3203' begins with an illegal character. The first
character of a server name must be an alphabetic ascii character.
Error running command 'srvbuild -r /tmp/my_super_build.rs':
If I can't modify this old Sybase init script, am I out of luck here?
EDIT: Here is what I am trying to do:
1) Create a database instance
2) Load a backup
3) Package that pre-loaded instance into a container.
Loading the backup takes a lot of time, and this old database system requires the server name to start with a letter, not a number.
You could try and see if LolHens's idea of changing the hostname in the container namespace (during the docker build) works for you.
docker build . | tee >((grep --line-buffered -Po '(?<=^change-hostname ).*' || true) | \
while IFS= read -r id; do \
nsenter --target "$(docker inspect -f '{{ .State.Pid }}' "$id")" \
--uts hostname 'new-hostname'; \
done)
The docker build output is parsed to:
detect a "change-hostname" directive, and
run nsenter, which executes a program (here, hostname) in the intermediate container's UTS (UNIX Time-Sharing) namespace, setting a different hostname (different from the SHA-generated random one).
That means your RUN step should be:
RUN echo "change-hostname $(hostname)"; \
sleep 1; \
printf '%s\n' "$(hostname)" > /etc/hostname; \
printf '%s\t%s\t%s\n' "$(perl -C -0pe 's/([\s\S]*)\t.*$/$1/m' /etc/hosts)" "$(hostname)" > /etc/hosts; \
init_db.sh
That way, init_db.sh should run in an intermediate container with a different hostname (one you do have control over, and which would not start with a number).

Do I need separate Dockerfiles for py2 and py3?

Currently I have 2 Dockerfiles, Dockerfile-py2:
FROM python:2.7
# stuff
and Dockerfile-py3:
FROM python:3.4
# stuff
where both instances of # stuff are identical.
I build two docker images using an invoke task:
@task
def docker(ctx):
    """Build docker images."""
    tag = ctx.run('git log -1 --pretty=%h').stdout.strip()
    for pyversion in '23':
        name = 'myrepo/myimage{pyversion}'.format(pyversion=pyversion)
        image = '{name}:{tag}'.format(name=name, tag=tag)
        latest = '{name}:latest'.format(name=name)
        ctx.run('docker build -t {image} -f Dockerfile-py{pyversion} .'.format(image=image, pyversion=pyversion))
        ctx.run('docker tag {image} {latest}'.format(image=image, latest=latest))
        ctx.run('docker push {name}'.format(name=name))
Is there any way to prevent the duplication of # stuff, so I can't get into a situation where someone edits one file but not the other?
Here is one way, using a Dockerfile ARG along with docker build --build-arg:
ARG version
FROM python:${version}
RUN echo "$(python --version)"
# stuff
Now you build for python2.7 like so:
docker build -t myimg/tmp --build-arg version=2.7 .
In the output you will see:
Step 3/3 : RUN echo "$(python --version)"
---> Running in 06e28a29a3d2
Python 2.7.16
And in the same way, for python3.4:
docker build -t myimg/tmp --build-arg version=3.4 .
In the output you will see:
Step 3/3 : RUN echo "$(python --version)"
---> Running in 2283edc1b65d
Python 3.4.10
As you can imagine, you can also set a default value for ${version} in your Dockerfile:
ARG version=3.4
FROM python:${version}
RUN echo "$(python --version)"
# stuff
Now if you just run docker build -t myimg/tmp . you will build for Python 3.4, but you can still override the version as in the previous two commands.
So to answer your question: no, you don't need two different Dockerfiles.
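Applied to the naming scheme from your invoke task, the two builds might look like this (abc1234 stands in for the git short hash, and the ARG-based file is assumed to be named Dockerfile, so no -f flag is needed):
docker build -t myrepo/myimage2:abc1234 --build-arg version=2.7 .
docker build -t myrepo/myimage3:abc1234 --build-arg version=3.4 .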

How to debug any sh file inside Dockerfile?

Echo statements are not printing to the console when I build the container.
I am only able to see the information below:
Step 1/3 : FROM jboss/wildfly:latest
b4680c565eae
Step 2/3 : ADD customization /opt/jboss/wildfly/customization/
bc78405babec
Removing intermediate container 7b22667b3310
Step 3/3 : CMD /opt/jboss/wildfly/customization/execute.sh
Running in 76f8bfe9ac95
5cb0fa9482f4
Removing intermediate container 76f8bfe9ac95
Successfully built 5cb0fa9482f4
Successfully tagged madhu/wildfly-mysql-javaee7:latest
The execute.sh file includes echo statements, but they are not written to the console.
I would be interested to know how we are supposed to debug the script.
The script specified in CMD is not executed at build time -- it's executed at runtime. You need to attempt a docker run to see its output.
If you want more (and/or more useful) output than your echos provide -- when used to show commands being executed, echo tends to throw away important details such as the difference between literal and syntactic spaces -- modify the CMD or the script to enable the -x shell option. You can do this by putting set -x in your script (just under the shebang), or by amending the CMD to something like: CMD ["/bin/bash", "-x", "/opt/jboss/wildfly/customization/execute.sh"] (using /bin/bash if the shebang is #!/bin/bash, /bin/sh if the shebang is #!/bin/sh, etc.).
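For reference, a minimal sketch of the set -x approach (the echo line is a placeholder for whatever execute.sh actually does):
#!/bin/bash
set -x                       # print each command before executing it
echo "customizing wildfly"   # placeholder for the real customization steps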

Why won't my docker-entrypoint.sh execute?

My ENTRYPOINT script doesn't execute and throws standard_init_linux.go:175: exec user process caused "no such file or directory". Why so?
Doesn't Work
$ docker build -t gilani/trollo . && docker run gilani/trollo
Sending build context to Docker daemon 126 kB
Step 1 : FROM vault:latest
---> 1f127f53f8b5
Step 2 : MAINTAINER Amin Shah Gilani <gilani@payload.tech>
---> Using cache
---> 86b885ca1c81
Step 3 : COPY vaultConfig.json /vault/config
---> Using cache
---> 1a2be2fa3acd
Step 4 : COPY ./docker-entrypoint.sh /
---> Using cache
---> 0eb7c1c992f1
Step 5 : RUN chmod +x /docker-entrypoint.sh
---> Running in 251395c4790f
---> 46aa0fbc9637
Removing intermediate container 251395c4790f
Step 6 : ENTRYPOINT /docker-entrypoint.sh
---> Running in 7434f052178f
---> eca040859bfe
Removing intermediate container 7434f052178f
Successfully built eca040859bfe
standard_init_linux.go:175: exec user process caused "no such file or directory"
Dockerfile:
FROM vault:latest
MAINTAINER Amin Shah Gilani <gilani@payload.tech>
COPY vaultConfig.json /vault/config
COPY ./docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
docker-entrypoint.sh:
#!/bin/bash
echo 'Hello World!'
Works
$ docker build -t gilani/trollo . && docker run gilani/trollo
Sending build context to Docker daemon 126 kB
Step 1 : FROM vault:latest
---> 1f127f53f8b5
Step 2 : MAINTAINER Amin Shah Gilani <gilani@payload.tech>
---> Using cache
---> 86b885ca1c81
Step 3 : COPY vaultConfig.json /vault/config
---> Using cache
---> 1a2be2fa3acd
Step 4 : ENTRYPOINT echo 'hello world'
---> Using cache
---> ef5792a1f252
Successfully built ef5792a1f252
'hello world'
Dockerfile:
FROM vault:latest
MAINTAINER Amin Shah Gilani <gilani@payload.tech>
COPY vaultConfig.json /vault/config
ENTRYPOINT ["echo", "'hello world'"]
I was tearing my hair out with an issue very similar to this. In my case /bin/bash DID exist. But actually the problem was Windows line endings.
In my case the git repository had an entry point script with Unix line endings (\n). But when the repository was checked out on a Windows machine, git decided to try and be clever and replace the line endings in the files with Windows line endings (\r\n).
This meant that the shebang didn't work because instead of looking for /bin/bash, it was looking for /bin/bash\r.
The solution for me was to disable git's automatic conversion:
git config --global core.autocrlf input
Then check out the repository again and rebuild.
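Alternatively (an addition on my part, not from the original answer), a .gitattributes entry pins LF endings for shell scripts regardless of each user's autocrlf setting:
# .gitattributes
*.sh text eol=lf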
Some more helpful info here:
How to change line-ending settings
and here
http://willi.am/blog/2016/08/11/docker-for-windows-dealing-with-windows-line-endings/
The vault:latest image does not contain /bin/bash, which you try to call with your shebang #!/bin/bash. You should either change that to #!/bin/sh or remove the shebang from your script entirely.
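A quick way to check what a base image actually ships (a sketch; --entrypoint bypasses the image's own entrypoint so sh runs directly):
docker run --rm --entrypoint sh vault:latest -c 'command -v bash || echo "no bash in this image"'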
Another possibility:
Check that the file is not saved with Windows line endings (CRLF). If it is, save it with Unix line endings (LF) and it will be found.
Without seeing your image, my initial idea is that you don't have /bin/bash in your image. Changing the first line of your docker-entrypoint.sh to:
#!/bin/sh
will likely resolve it.
I struggled for hours because I hadn't seen it explained anywhere that you need to copy the file to a location where the container can access it, preferably somewhere global, like so:
COPY docker-entrypoint.sh /usr/local/bin/
(I had thought it would be automatically accessible, since it's part of the Dockerfile build context.)
Gosh I struggled for 2–3 hours!!
Thanks to @Ryan Allen.
In my case it was a CRLF problem. I am working on Puppet manifests in Atom for a Jenkins setup.
If you are using Atom or any other IDE on Windows, make sure that when you take your file (especially a .sh file) over to Unix, you convert it to Unix format. It worked like magic once converted.
Here is what I added in my Puppet file:
exec {'dos2unix':
  path      => ['/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/opt/puppetlabs/bin'],
  command   => 'dos2unix /dockerwork/puppet/jenkins/files/*',
  subscribe => File['/dockerwork/puppet/jenkins/files/init.sh'],
}
I came here with a similar issue while troubleshooting my attempt to build a Dockerfile "entry point" bash script (entrypoint.sh, to be executed within the .NET Core SDK 2.2 image). The start of the script had the line #!/bin/bash, and during execution of docker-compose up (after successfully building with docker-compose build), the log reported web_1 | ./entrypoint.sh: line 1: #!/bin/bash: No such file or directory.
Checking the file with VS Code, I noticed it was reporting the encoding UTF-8 with BOM. Clicking on this gave me the option "Save with encoding"; I chose to save as plain UTF-8 (utf8), which resolved the issue.
NOTE: I also found this SO article What's the difference between UTF-8 and UTF-8 without BOM?
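If you prefer to fix the file from the command line, one way (an addition on my part, and GNU-sed specific) is to strip the three BOM bytes from the first line:
sed -i '1s/^\xef\xbb\xbf//' docker-entrypoint.sh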
My case was that the Alpine image I was using didn't come with bash at all...
RUN apk add --no-cache bash did the trick, obviously.
Another reason this error comes up is if your Windows user password changes.
In my case my entrypoint.sh line endings were LF, but I was still getting the error. Our admin mandates a password reset about every month or so, and whenever this happened I would run into the error. If this is your situation, you may need to reset your credentials in Docker settings under "Shared Drives".
Unselect the drive and apply, then reselect the drive and apply. It will prompt you for your password.
This problem is to do with line endings, and I solved it with the solution below:
Convert the DOS file to Unix format. This removes any weird line endings.
dos2unix is available in Alpine as well as other Linux distributions.
I used it like so: RUN apk add dos2unix && dos2unix /entry.sh
Sorry for hijacking -- this is not a response to the question, but a description of a different problem, and its solution, that has the same symptoms.
I had
ENTRYPOINT ["/usr/bin/curl", "-X", "POST", "http://myservice:8000", \
"-H", "Content-Type: application/json", \
"-d", '{"id": "test"}' \
]
I was getting the error:
/bin/bash: [/usr/bin/curl,: No such file or directory
It turns out it's the single quotes that messed it up. The Docker documentation has a note:
The exec form is parsed as a JSON array, which means that you must use double-quotes (") around words, not single-quotes (').
Solution -- use double quotes instead of single and escape nested double quotes:
ENTRYPOINT ["/usr/bin/curl", "-X", "POST", "http://myservice:8000", \
"-H", "Content-Type: application/json", \
"-d", "{\"id\": \"test\"}" \
]
None of the solutions worked for me, but I was able to solve the error by setting WORKDIR to the same directory that contained the entrypoint script. No amount of cd'ing would work, but somehow WORKDIR solved it.
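For what it's worth, a minimal sketch of that arrangement (the /app path is my assumption, not from the original answer):
COPY docker-entrypoint.sh /app/docker-entrypoint.sh
RUN chmod +x /app/docker-entrypoint.sh
WORKDIR /app
ENTRYPOINT ["./docker-entrypoint.sh"]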
