Passing a variable to the docker command

Running
docker run -t -i -w=[absolute_work_dir] [docker_image] [executable]
is fine. However when I set the workdir using variable in a PowerShell script (.ps1):
$WORKDIR = [absolute_work_dir]
docker run -t -i -w=$WORKDIR [docker_image] [executable]
it gave the error:
docker: Error response from daemon: the working directory '$WORKDIR' is invalid, it needs to be an absolute path.
What is possibly wrong?

You could assemble a string with your variable, then execute the string as a command.
$DOCKRUNSTR = "docker run -t -i -w=" + $WORKDIR + " [docker_image] [executable]"
& $DOCKRUNSTR
The & will tell PowerShell to run that string as a command.
Edit: Inside a .ps1 file try Invoke-Expression instead of &
Maybe a PowerShell power user has a better solution, but this seems like it could get the job done for you.
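For example, the Invoke-Expression variant mentioned in the edit might look like this sketch (the path and image name here are placeholders, not values from the question):
$WORKDIR = "C:\my\work\dir"   # must be an absolute path; paths with spaces would need extra quoting
$DOCKRUNSTR = "docker run -t -i -w=$WORKDIR my_image my_executable"
Invoke-Expression $DOCKRUNSTR
PowerShell expands $WORKDIR inside the double-quoted string, so the assembled command line contains the real path before it is executed.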

Related

Docker exec command to create a file

I am trying to run this command and getting an error:
docker exec 19eca917c3e2 cat "Hi there" > /usr/share/ngnix/html/system.txt
/usr/share/ngnix/html/system.txt: No such file or directory
A very simple command to create a file and write in it, I tried echo and that one too didn't work.
The cat command only works on files, so cat "Hi there" is incorrect.
Try echo "Hi there" to output this to standard out.
You are then redirecting the output to /usr/share/ngnix/html/system.txt. Make sure the directory /usr/share/ngnix/html/ exists; if not, create it using
mkdir -p /usr/share/ngnix/html
I presume you are trying to create the file in the container.
You have several problems going on, one of which @Yatharth Ranjan has addressed: you want echo, not cat, for that use.
The other is that your call is being parsed by the local shell, which breaks it up into docker ... "hello world" and a > ... system.txt on your host system.
To get the redirection into the file executed in the container, you need to explicitly invoke a shell in the container and then pass it the command:
docker exec 12345 /bin/sh -c "echo \"hello world\" > /usr/somefile.txt"
So, here you would call /bin/sh in the container, pass it -c to tell it a shell command follows, and then the command to parse and execute is your echo "hello world" > the_file.txt.
Of course, a far easier way to copy files into a container is to have them on your host system and then copy them in using docker cp (where 01234abc is your container name or id):
docker cp ./some-file.txt 01234abc:/path/to/file/in/container.txt
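Putting the two answers together, a single exec can create the directory and write the file in one go; a sketch using the container id from the question (the directory is assumed to be nginx's html root, which the question spells ngnix):
docker exec 19eca917c3e2 /bin/sh -c 'mkdir -p /usr/share/nginx/html && echo "Hi there" > /usr/share/nginx/html/system.txt'
# read it back from the host to confirm:
docker exec 19eca917c3e2 cat /usr/share/nginx/html/system.txt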

Docker: error redirection with and without tty option

Something weird happened with one of my containers (a simple Django app): I was unable to see the stderr stream.
After a bit of fiddling, I realised that I could make the stream appear if I added the -t option when running my container.
With that in mind, I created a simple image from a shell script:
test.sh:
#!/bin/sh
echo "all good"
echo "this is an error" >&2
Dockerfile:
FROM debian:stretch-slim
WORKDIR /usr/src/app
COPY ./test.sh .
CMD ["./test.sh"]
If I run this image without the tty option:
$ docker container run simple_error
I got this:
this is an error
all good
Now with the tty option:
$ docker container run -t simple_error
all good
this is an error
Note that the stdout/stderr printing order is not the same with and without -t.
I'm confused about why this is happening, and why, in a more elaborate case (my Django app), I'm not able to see the error stream without the -t option.
If someone could clarify that,
Thanks!
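A quick way to check that the two streams stay separate only when -t is omitted (assuming the image built above is tagged simple_error):
$ docker container run simple_error 2>/dev/null
all good
$ docker container run simple_error 1>/dev/null
this is an error
$ docker container run -t simple_error 2>/dev/null
all good
this is an error
With -t, both streams are written to the same pseudo-terminal, so they can no longer be redirected independently on the host, which also relates to why their relative ordering can differ.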

Dockerfile capture output of a command

I have the following line in my Dockerfile which is supposed to capture the display number of the host:
RUN DISPLAY_NUMBER="$(echo $DISPLAY | cut -d. -f1 | cut -d: -f2)" && echo $DISPLAY_NUMBER
When I try to build the Dockerfile, DISPLAY_NUMBER is empty. However, when I run the same command directly in the terminal, I see the result. Is there anything I'm doing wrong here?
Commands specified with RUN are executed when the image is built. There is no display during the build, hence the output is empty.
You can exchange RUN for ENTRYPOINT; then the command is executed when the container starts.
But how to forward the host's display to the container is another matter entirely.
Host environment variables cannot be passed during build, only at run time.
Only build args can be specified, by first declaring the arg:
ARG DISPLAY_NUMBER
and then running
docker build . --no-cache -t disp --build-arg DISPLAY_NUMBER=$DISPLAY_NUMBER
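Put together, the build-arg route might look like this sketch (base image and tag are only illustrative); note the value is still fixed at build time:
# Dockerfile
FROM debian:stretch-slim
ARG DISPLAY_NUMBER
# the build arg is only visible to RUN steps during the build
RUN echo "Display number at build time: $DISPLAY_NUMBER"

# build, computing the value on the host first:
docker build . --no-cache -t disp --build-arg DISPLAY_NUMBER="$(echo $DISPLAY | cut -d. -f1 | cut -d: -f2)"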
You can work around this issue using the envsubst trick
RUN echo $DISPLAY_NUMBER
And on the command line:
envsubst < Dockerfile | docker build . -f -
This will rewrite the Dockerfile in memory and pass it to docker build with the environment variable substituted.
Edit: Note that this solution is pretty useless though, because you probably want to do this at run time anyway: the value should depend not on where the image is built, but on where it is run.
I would personally move that logic into your ENTRYPOINT or CMD script.
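A minimal sketch of that run-time approach (the file name entrypoint.sh is just a placeholder): derive the display number when the container starts, from a DISPLAY value supplied with -e.
#!/bin/sh
# entrypoint.sh - expects DISPLAY to be passed at run time,
# e.g. docker run -e DISPLAY=$DISPLAY your_image
DISPLAY_NUMBER="$(echo "$DISPLAY" | cut -d. -f1 | cut -d: -f2)"
echo "Display number: $DISPLAY_NUMBER"
# hand off to whatever command was given
exec "$@"
In the Dockerfile this would be wired up with something like COPY entrypoint.sh / and ENTRYPOINT ["/entrypoint.sh"], so the value reflects the machine the container runs on rather than the one it was built on.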

Equivalent of --env-file for build-arg?

I'm building a Docker image using multiple build args, and was wondering if it was possible to pass them to docker build as a file, in the same way --env-file can be passed to docker run. The env file is parsed by docker run automatically and the variables are made available in the container.
Is it possible to specify a file of build arguments in the same way?
There's no such option, at least for now. But if you have too many build args and want to keep them in a file, you can achieve it as follows:
Save the following shell script as buildargs.sh, make it executable, and put it in your PATH:
#!/bin/bash
awk '{ sub ("\\\\$", " "); printf " --build-arg %s", $0 } END { print "" }' "$@"
Then build your image with an argfile like this:
docker build $(buildargs.sh argfile) -t your_image .
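For example, with an argfile that holds one VAR=value pair per line (the names here are made up):
# argfile
HTTP_PROXY=http://proxy.example:3128
APP_VERSION=1.2.3

# the command above then expands to roughly:
docker build --build-arg HTTP_PROXY=http://proxy.example:3128 --build-arg APP_VERSION=1.2.3 -t your_image .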
This code is safe for build args that contain spaces and special characters:
for arg in buildarg1 buildarg2 ; do opts+=(--build-arg "$arg") ; done
...
docker build ... "${opts[@]}"
Just substitute buildarg1 and so on with your build args, escaped as needed.
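If those values live in a file, one VAR=value per line, the same array technique can read them from that file; a rough sketch (empty lines are skipped, quoting of the file's contents is kept minimal):
opts=()
while IFS= read -r arg; do
  # keep spaces and special characters intact by storing each pair as one array element
  [ -n "$arg" ] && opts+=(--build-arg "$arg")
done < argfile
docker build "${opts[@]}" -t your_image .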
On Linux you can create a file (example: arg_file) with the variables declared:
ARG_VAL_1=Hello
ARG_VAL_2=World
Execute the source command on that file:
source arg_file
Then, to build a Docker image using those variables, run this command:
docker build \
--build-arg "ARG_VAL_1=$ARG_VAL_1" \
--build-arg "ARG_VAL_2=$ARG_VAL_2" .

Docker echo environment variable

I'm trying to write a little Dockerfile that sets a user and just echoes the current user, as a small example to prove to myself it is working. I've tried a number of variants and couldn't find much help in the documentation.
FROM ubuntu
USER daemon
# ENTRYPOINT ["echo", "$USER"]
# just gives "$USER"
# ENTRYPOINT ["echo", "-e", "${USER}"]
# just gives "$USER"
# ENTRYPOINT echo $USER
# gives empty string
# ENTRYPOINT ["/bin/echo", "$USER"]
# just gives "$USER"
I'm running docker build . on the Dockerfile, then running docker run <image-id> and getting the results noted in the comments.
The expected result is daemon, or, without the USER daemon line, root. Probably a really simple answer.
This is the expected behavior, as weird as it seems!
When ENTRYPOINT is a list (as in ENTRYPOINT ["echo", "$USER"]), it is used as-is, without further parsing or interpretation. So $USER remains $USER, because there is no shell involved in the process to replace it with the value of the USER environment variable.
Now, when ENTRYPOINT is a string (as in ENTRYPOINT echo $USER), what is actually executed is sh -c "echo $USER", and $USER is replaced with the value of the environment variable (as you would expect).
However, the environment variable USER is not set by default. It is set by the login process; and when you just run sh -c ... the login process is not involved.
Compare the environment when running docker run -t -i ubuntu bash and docker run -t -i ubuntu login -f root. In the former case, you will get a very basic environment; in the latter case, you will get the complete environment that you are used to (including the USER variable).
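A minimal way to see both behaviours side by side (the ENV USER=daemon line is only there to give the variable a value, since, as noted above, USER is not set by default):
FROM ubuntu
ENV USER=daemon
# exec form: no shell involved, so this would print the literal string $USER
# ENTRYPOINT ["echo", "$USER"]
# shell form: wrapped in sh -c, so this prints daemon
ENTRYPOINT echo $USER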
Couldn't you use the ENV instruction in the Dockerfile to set a default value, and then, when running a container, use the -e/--env option to override what would be interpreted by the:
ENTRYPOINT echo $SOMEENVVAR
form of ENTRYPOINT?
I think there's a series of issues here.
When I run
docker run -i -t ubuntu /bin/bash
echo $USER
set
I don't see $USER set at all; whoami does report daemon though.
Additionally, I have the suspicion (but have not looked at the code yet) that ENV vars in the Dockerfile are escaped, to avoid their use (many people assume that they can export host variables to the built container, but this is something that the Docker guys would like to avoid).
