How can I run a sed command and save the result to a new variable in Docker?
The sed command should replace the last occurrence of '.' with '_'.
Example:
JOB_NAME_WITH_VERSION = test_git_0.1, and the wanted result is ZIP_FILE_NAME = test_git_0_1
--Dockerfile
RUN ZIP_FILE_NAME=$(echo ${JOB_NAME_WITH_VERSION} | sed 's/\(.*\)\./\1_/') && export ZIP_FILE_NAME
RUN echo "Zip file Name found : $ZIP_FILE_NAME"
I tried this in my Dockerfile, but the result is empty:
Zip file Name found :
The issue here is that every RUN command runs in a new layer (and a new shell), so any shell variable declared in a previous RUN is lost by the time the next one executes.
Compare this:
FROM ubuntu
RUN JOB="FOOBAR"
RUN echo "${JOB}"
$ docker build .
...
Step 3/3 : RUN echo "${JOB}"
---> Running in c4b7d1632c7e
...
to this:
FROM ubuntu
RUN JOB="FOOBAR" && echo "${JOB}"
$ docker build .
...
Step 2/2 : RUN JOB="FOOBAR" && echo "${JOB}"
---> Running in c11049d1687f
FOOBAR
...
So, as a workaround, if using a single RUN command is not an option for whatever reason, write the variable to disk and read it back when needed, e.g.:
FROM ubuntu
RUN JOB="FOOBAR" && echo "${JOB}" > /tmp/job_var
RUN cat /tmp/job_var
$ docker build .
...
Step 3/3 : RUN cat /tmp/job_var
---> Running in a346c30c2cd5
FOOBAR
...
Each RUN statement in a Dockerfile is run in a separate shell, so once a statement is done, all the shell variables it set are lost, even if they were exported.
To do what you want, you can combine your RUN statements like this:
RUN ZIP_FILE_NAME=$(echo ${JOB_NAME_WITH_VERSION} | sed 's/\(.*\)\./\1_/') && \
    export ZIP_FILE_NAME && \
    echo "Zip file Name found : $ZIP_FILE_NAME"
Since your variable is lost once the RUN statement finishes, it also won't be available in your container when it runs. To have an environment variable available there, you need to use the ENV instruction.
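For illustration, a minimal sketch of ENV (the value is hard-coded here, because ENV cannot capture the output of a command; a dynamically computed name would have to be passed in as a build argument or written to a file as shown above):
FROM ubuntu
ENV ZIP_FILE_NAME=test_git_0_1
RUN echo "Zip file Name found : $ZIP_FILE_NAME"
# the variable is still set when the container runs
CMD ["sh", "-c", "echo $ZIP_FILE_NAME"]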
Related
This is an MWE of my Dockerfile (I'm hand-composing it, not using docker-compose):
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019-amd64
SHELL ["cmd", "/S", "/C"]
RUN set zz=foo && echo %zz%
When I build the image, I expect this to print foo, but instead I get %zz%, which seems to indicate that echo doesn't see the variable zz being set.
Step 20/24 : RUN set zz=foo && echo %zz%
---> Running in b1f462e81c5c
%zz%
Removing intermediate container b1f462e81c5c
---> 9c0abf6eb928
If I run this on the command line, it works as expected:
C:\> cmd /S /C set zz=foo && echo %zz%
foo
How can I make the RUN instruction work like the command prompt?
My actual use case, in case this is an XY problem: I'm reading the contents of a user-provided file into a variable and then using that in a command which sets some global config file.
RUN `
    set /p pat=<your_pat.txt && `
    git config --global url."https://%pat%@github.com/".insteadOf "https://github.com/"
I'm building a Docker image for a Sybase database. The docker build command fails because the Sybase server name, which is taken from the intermediate build container's hostname, cannot start with a number.
I have searched A LOT for a way to change the build container's hostname, and my solution so far is to retry the build until I get a name that starts with a letter...
Step 1/7 : FROM my_image as docker_sybase_db
---> d266899b4eef
Step 2/7 : COPY *.zip /mnt/backup/
---> Using cache
---> 9e8e405848ce
Step 3/7 : COPY entrypoint.sh ~
---> Using cache
---> 5c0c923985db
Step 4/7 : ENV HOSTNAME docker_sybase_db
---> Using cache
---> f2b39a7280a0
Step 5/7 : RUN init_db.sh
---> Running in 0ae1a95b3203
Server name '0ae1a95b3203' begins with an illegal character. The first
character of a server name must be an alphabetic ascii character.
Error running command 'srvbuild -r /tmp/my_super_build.rs':
If I can't modify this old Sybase init script, am I out of luck here?
EDIT: Here is what I am trying to do:
Create a database instance
Load a backup
Package that pre-loaded instance into a container.
Loading the backup takes a lot of time and this old database system requires the server name to start with a letter, not a number.
You could try and see if LolHens's idea of changing the hostname in the container namespace (during the docker build) works for you.
docker build . | tee >((grep --line-buffered -Po '(?<=^change-hostname ).*' || true) | \
while IFS= read -r id; do \
nsenter --target "$(docker inspect -f '{{ .State.Pid }}' "$id")" \
    --uts hostname 'new-hostname'; \
done)
The docker build output is parsed to:
detect a "change-hostname" directive
run nsenter, which runs a program in the UTS (UNIX Time-Sharing) namespace with a different hostname (different from the SHA-generated random one)
That means your RUN step should be:
RUN echo "change-hostname $(hostname)"; \
sleep 1; \
printf '%s\n' "$(hostname)" > /etc/hostname; \
printf '%s\t%s\t%s\n' "$(perl -C -0pe 's/([\s\S]*)\t.*$/$1/m' /etc/hosts)" "$(hostname)" > /etc/hosts; \
init_db.sh
That way, init_db.sh should run in an intermediate container with a different hostname (one you do have control over, and which would not start with a number).
I would like to pass an argument (from the docker build command) to a shell script that is called inside the Dockerfile.
This is the docker build command line:
docker build --file=DockerfileTest --build-arg docker_image=PX-release-migration --tag=test-image:latest --rm=true .
This is the script that is called inside the Dockerfile:
#!/bin/sh -e
image_name=$1
echo "docker image is $image_name"
if [[ ($image_name == '') || ($image_name == *"-dev-"*) ]]; then
    echo "This is development"
    cp src/main/resources/application-dev.properties src/main/resources/application.properties
elif [[ $image_name == *"-preprod-"* ]]; then
    echo "This is preprod"
    cp src/main/resources/application-stg.properties src/main/resources/application.properties
elif [[ $image_name == *"-release-"* ]]; then
    echo "This is production"
    cp src/main/resources/application-prod.properties src/main/resources/application.properties
fi
This is the Dockerfile:
ARG spring_env=local
ARG docker_image=-local-
FROM maven:3.6.1-jdk-8
COPY . /apps/demo
WORKDIR /apps/demo
RUN chmod +x /apps/demo/initialize_env.sh
RUN ./initialize_env.sh $docker_image
RUN echo "spring_env is ${spring_env}"
So basically, I would like to use a different Spring application properties file during the build, depending on the docker_image name. If the docker image name contains 'release', I would like to package application-prod.properties during the build.
This is the error message that I am getting:
Step 1/8 : ARG spring_env=local
Step 2/8 : ARG docker_image=-local-
Step 3/8 : FROM maven:3.6.1-jdk-8
---> 4c81be38db66
Step 4/8 : COPY . /apps/demo
---> 41439197c465
Step 5/8 : WORKDIR /apps/demo
---> Running in 56bd408c2eb1
Removing intermediate container 56bd408c2eb1
---> 4c4025bf5f64
Step 6/8 : RUN chmod +x /apps/demo/initialize_env.sh
---> Running in 18dc3a5c1a54
Removing intermediate container 18dc3a5c1a54
---> 60d2037a0209
Step 7/8 : RUN ./initialize_env.sh $docker_image
---> Running in 2e049b2cf630
docker image is
./initialize_env.sh: 5: ./initialize_env.sh: Syntax error: word unexpected (expecting ")")
The command '/bin/sh -c ./initialize_env.sh $docker_image' returned a non-zero code: 2
When I execute the script separately, it works, but it doesn't inside the Docker container.
Tip: Use ShellCheck to check scripts for syntax errors.
#!/bin/sh -e
if [[ ($image_name == '') || ($image_name == *"-dev-"*) ]]; then
[[ is bash syntax, but your script is declared to use plain sh. It works on your machine presumably because sh is really symlinked to bash, but inside the container that's not the case: maven:3.6.1-jdk-8 is based on debian:stretch, where sh is dash, not bash.
Change the shebang line. You can also delete the parentheses; they're superfluous.
#!/bin/bash -e
if [[ $image_name == '' || $image_name == *"-dev-"* ]]; then
You could also use a case block to simplify the repetitive checks.
case "$image_name" in
    ''|*-dev-*)
        echo "This is development"
        cp src/main/resources/application-dev.properties src/main/resources/application.properties
        ;;
    *-preprod-*)
        echo "This is preprod"
        cp src/main/resources/application-stg.properties src/main/resources/application.properties
        ;;
    *-release-*)
        echo "This is production"
        cp src/main/resources/application-prod.properties src/main/resources/application.properties
        ;;
esac
Is there any way to make a build argument mandatory during docker build? The expected behaviour would be for the build to fail if the argument is missing.
For example, for the following Dockerfile:
FROM ubuntu
ARG MY_VARIABLE
ENV MY_VARIABLE $MY_VARIABLE
RUN ...
I would like the build to fail at ARG MY_VARIABLE when built with docker build -t my-tag . and pass when built with docker build -t my-tag --build-arg MY_VARIABLE=my_value ..
Is there any way to achieve that behaviour? Setting a default value doesn't really do the trick in my case.
(I'm running Docker 1.11.1 on darwin/amd64.)
EDIT:
One way of doing that I can think of is to run a command that fails when MY_VARIABLE is empty, e.g.:
FROM ubuntu
ARG MY_VARIABLE
RUN test -n "$MY_VARIABLE"
ENV MY_VARIABLE $MY_VARIABLE
RUN ...
but it doesn't seem to be a very idiomatic solution to the problem at hand.
I tested with RUN test -n <ARGvariablename> what @konradstrack mentioned in the original (edited) post... that seems to do the job of mandating that the variable be passed as a build-time argument to the docker build command:
FROM ubuntu
ARG MY_VARIABLE
RUN test -n "$MY_VARIABLE"
ENV MY_VARIABLE $MY_VARIABLE
You can also use shell parameter expansion to achieve this.
Let's say your mandatory build argument is called MANDATORY_BUILD_ARGUMENT and you want it to be set and non-empty; your Dockerfile could look like this:
FROM debian:stretch-slim
MAINTAINER Evel Knievel <evel@kniev.el>

ARG MANDATORY_BUILD_ARGUMENT

RUN \
    # Check for mandatory build arguments
    : "${MANDATORY_BUILD_ARGUMENT:?Build argument needs to be set and non-empty.}" \
    # Install libraries
    && apt-get update \
    && apt-get install -y \
        cowsay \
        fortune \
    # Cleanup
    && apt-get clean \
    && rm -rf \
        /var/lib/apt/lists/* \
        /var/tmp/* \
        /tmp/*

CMD ["/bin/bash", "-c", "/usr/games/fortune | /usr/games/cowsay"]
Of course, you would also want to actually use the build argument for something, unlike I did here, but I still recommend building this Dockerfile and taking it for a test run :)
EDIT
As mentioned in @Jeffrey Wen's answer, to make sure that this errors out on a centos:7 image (and possibly others; I admittedly haven't tested this on images other than stretch-slim):
Ensure that you're executing the RUN command with the bash shell.
RUN ["/bin/bash", "-c", ": ${MYUID:?Build argument needs to be set and not null.}"]
Another simple way:
RUN test -n "$MY_VARIABLE" || (echo "MY_VARIABLE not set" && false)
A long time ago I needed to introduce a required (mandatory) ARG and, for better UX, included the check at the beginning:
FROM ubuntu:bionic
ARG MY_ARG
RUN [ -z "$MY_ARG" ] && echo "MY_ARG is required" && exit 1 || true
...
RUN ./use-my-arg.sh
But this busts the build cache for every single layer after the initial MY_ARG, because MY_ARG=VALUE is prepended to every RUN command afterwards.
Whenever I changed MY_ARG it would end up rebuilding the whole image, instead of rerunning the last RUN command only.
To bring caching back, I changed my build to a multi-stage one:
The first stage uses MY_ARG and checks its presence.
The second stage proceeds as usual and declares ARG MY_ARG right at the end.
FROM alpine:3.11.5
ARG MY_ARG
RUN [ -z "$MY_ARG" ] && echo "MY_ARG is required" && exit 1 || true
FROM ubuntu:bionic
...
ARG MY_ARG
RUN ./use-my-arg.sh
Since ARG MY_ARG in the second stage is declared right before it's used, all the previous steps in that stage are unaffected and thus cache properly.
You could do something like this...
FROM ubuntu:14.04
ONBUILD ARG MY_VARIABLE
ONBUILD RUN if [ -z "$MY_VARIABLE" ]; then echo "NOT SET - ERROR"; exit 1; else : ; fi
Then docker build -t my_variable_base .
Then build your images based on this...
FROM my_variable_base
...
It's not super clean, but at least it abstracts the 'bleh' stuff away to the base image.
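For completeness, a sketch of how the child build would behave (the image name and the --build-arg value are just illustrative, reusing the names from the example above):
# passes the ONBUILD check inherited from my_variable_base
docker build -t my_app --build-arg MY_VARIABLE=some_value .
# fails with "NOT SET - ERROR" and a non-zero exit code
docker build -t my_app .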
I cannot comment yet because I do not have 50 reputation, but I would like to add to @Jan Nash's solution, because I had a little difficulty getting it to work with my image.
If you copy/paste @Jan Nash's solution, it will work and spit out the error message that the build argument is not specified.
What I want to add
When I tried getting it to work on a CentOS 7 image (centos:7), Docker ran the RUN command without erroring out.
Solution
Ensure that you're executing the RUN command with the bash shell.
RUN ["/bin/bash", "-c", ": ${MYUID:?Build argument needs to be set and not null.}"]
I hope that helps future readers. Otherwise, I believe @Jan Nash's solution is just brilliant.
In case anybody is looking for the same solution but with docker compose build, I used mandatory variables:
version: "3.9"
services:
  my-service:
    build:
      context: .
      args:
        - ENVVAR=${ENVVAR:?See build instructions}
After running docker compose build:
Before exporting ENVVAR: Invalid template: "required variable ENVVAR is missing a value: See build instructions"
After exporting ENVVAR: build proceeds
Support for Required Environment variables
Compose Environment Variables
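For reference, the only difference between the two outcomes listed above is whether ENVVAR is set in the shell that runs the build (the value here is just an example):
export ENVVAR=some_value
docker compose build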
None of these answers worked for me. I wanted ${MY_VARIABLE:?} but did not want to print anything, so I did it like this:
ARG MY_VARIABLE
RUN test -n ${MY_VARIABLE:?}
Nothing is printed on success. On error you see this, which is a good enough error:
ERROR RUN test -n ${MY_VARIABLE:?}
/bin/sh: MY_VARIABLE: parameter not set or null
executor failed running [/bin/sh -c test -n ${MY_VARIABLE:?}]: exit code: 2
How can I get /etc/profile to run automatically when starting an Alpine Docker container interactively? I have added some aliases to an aliases.sh file and placed it in /etc/profile.d, but when I start the container using docker run -it [my_container] sh, my aliases aren't active. I have to manually type . /etc/profile from the command line each time.
Is there some other configuration necessary to get /etc/profile to run at login? I've also had problems with using a ~/.profile file. Any insight is appreciated!
EDIT:
Based on VonC's answer, I pulled and ran his example ruby container. Here is what I got:
$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42
/ # more /etc/profile.d/rubygems.sh
export PATH=$PATH:/usr/lib/ruby/gems/2.0.0/bin
/ # env
no_proxy=*.local, 169.254/16
HOSTNAME=6c7e93ebc5a1
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
Although the /etc/profile.d/rubygems.sh file exists, it is not being run when I log in, and my PATH environment variable is not being updated. Am I using the wrong docker run command? Is something else missing? Has anyone gotten ~/.profile or /etc/profile.d/ files to work with Alpine on Docker? Thanks!
The default shell in Alpine Linux is ash.
Ash will only read the /etc/profile and ~/.profile files if it is started as a login shell (sh -l).
To force Ash to source /etc/profile, or any other script you want, when it is invoked as a non-login shell, you need to set up an environment variable called ENV before launching Ash,
e.g., in your Dockerfile:
FROM alpine:3.5
ENV ENV="/root/.ashrc"
RUN echo "echo 'Hello, world!'" > "$ENV"
When you build that you get:
deployer#ubuntu-1604-amd64:~/blah$ docker build --tag test .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine:3.5
3.5: Pulling from library/alpine
627beaf3eaaf: Pull complete
Digest: sha256:58e1a1bb75db1b5a24a462dd5e2915277ea06438c3f105138f97eb53149673c4
Status: Downloaded newer image for alpine:3.5
---> 4a415e366388
Step 2/3 : ENV ENV "/root/.ashrc"
---> Running in a9b6ff7303c2
---> 8d4af0b7839d
Removing intermediate container a9b6ff7303c2
Step 3/3 : RUN echo "echo 'Hello, world!'" > "$ENV"
---> Running in 57c2fd3353f3
---> 2cee6e034546
Removing intermediate container 57c2fd3353f3
Successfully built 2cee6e034546
Finally, when you run the newly generated container, you get:
deployer#ubuntu-1604-amd64:~/blah$ docker run -ti test /bin/sh
Hello, world!
/ # exit
Notice the Ash shell didn't run as a login shell.
So to answer your query, replace
ENV ENV="/root/.ashrc"
with:
ENV ENV="/etc/profile"
and Alpine Linux's Ash shell will automatically source the /etc/profile script each time the shell is launched.
Gotcha: /etc/profile is normally meant to be sourced only once! So I would advise that you don't source it, and instead source a /root/.somercfile.
Source: https://stackoverflow.com/a/40538356
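A minimal sketch of that suggestion, reusing the aliases.sh file from the question (the name /root/.somercfile is just the placeholder used above):
FROM alpine:3.5
# Ash sources whatever file $ENV points to when it starts, as shown earlier
ENV ENV="/root/.somercfile"
COPY aliases.sh /root/.somercfile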
You can still try adding this in your Dockerfile:
RUN echo '\
. /etc/profile ; \
' >> /root/.profile
(assuming the current user is root. If not, replace /root with the full home path)
That being said, those /etc/profile.d/*.sh scripts should run.
See codeclimate/docker-alpine-ruby as an example:
COPY files /
with files/etc including a files/etc/profile.d/rubygems.sh that runs just fine.
In the OP's project Dockerfile, there is a
COPY aliases.sh /etc/profile.d/
But the default shell is not a login shell (sh -l), which means profile files (or those in /etc/profile.d) are not sourced.
Adding sh -l would work:
docker#default:~$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42 sh -l
87a58e26b744:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/ruby/gems/2.0.0/bin
As mentioned by Jinesh before, the default shell in Alpine Linux is ash:
localhost:~$ echo $SHELL
/bin/ash
localhost:~$
Therefore the simple solution is to add your aliases in .profile. In this case, I put all my aliases in ~/.ash_aliases:
localhost:~$ cat .profile
# ~/.profile
# Alias
if [ -f ~/.ash_aliases ]; then
. ~/.ash_aliases
fi
localhost:~$
The .ash_aliases file:
localhost:~$ cat .ash_aliases
alias a=alias
alias c=clear
alias f=file
alias g=grep
alias l='ls -lh'
localhost:~$
And it works :)
I use this:
docker exec -it my_container /bin/ash '-l'
The -l flag passed to ash makes it behave as a login shell, so it reads /etc/profile and ~/.profile.
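The same idea applies to the docker run command from the original question (using the placeholder image name from there):
docker run -it [my_container] sh -l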