In a Dockerfile, the RUN instruction has two forms, shell and exec:
# shell form
RUN <command>
# exec form
RUN ["executable", "param1", "param2"]
When the shell form is used, the <command> is run inside a shell: Docker prepends a shell invocation for you (i.e. it runs sh -c "<command>").
So far so good; the question is: how does the exec form work? How are commands executed without a shell? I haven't found a satisfying answer in the official docs.
The exec form of the command runs your command with the same OS syscall that Docker would use to run the shell itself. It's just doing the namespaced version of the fork/exec that Linux uses to run any process. The shell itself is a convenience that provides PATH handling, variable expansion, IO redirection, and other scripting features, but these aren't required to run processes at the OS level. This question may help you understand how Linux runs processes.
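The difference is easy to see without Docker at all. In this sketch, `sh -c` stands in for the shell form and directly invoking `/bin/echo` stands in for the exec form: only the former gets variable expansion, because only the former has a shell in the picture.

```shell
# Shell form analogue: sh expands $HOME before echo ever sees it
sh -c 'echo $HOME'

# Exec form analogue: no shell is involved, so the program receives
# the literal string '$HOME' as its argument
/bin/echo '$HOME'
```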
This looks like a Dockerfile.
With the RUN syntax, the commands run one at a time in the container environment, and the default shell for the given image (usually /bin/sh) is spawned for each command. The shell form is effectively shorthand for sh -c "<command>", so you are in effect doing the same thing.
In shell form, the command will run inside a shell with /bin/sh -c:
RUN apt-get update
The exec form allows execution of commands in images that don't have /bin/sh:
RUN ["apt-get", "update"]
The shell form is easier to write, and you can use shell expansion of variables:
• For example:
CMD sudo -u ${USER} java ...
• The exec form does not require the image to have a shell.
Related
My dockerized project uses pipenv to deal with Python dependencies, the interpreter, etc. Currently I have a Makefile command with which I can go inside my docker container:
to-container:
docker exec -ti my_container_name bash
I would want this command also automatically launch pipenv shell inside the container, like that:
to-container:
docker exec -ti my_container_name bash && pipenv shell
Is this possible, and what is the trick?
You can't do what you want like this. Forget even docker for now, much less make. What does this do when you invoke it:
bash && pipenv shell
First, it runs the bash program and waits for that program to finish. Second, assuming that the bash program exits successfully, it will run pipenv shell. It certainly will not run the pipenv shell command inside the bash program.
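You can verify this `&&` behavior with any two commands, no Docker required: the right-hand side runs only after the left-hand side exits, and only if it exited successfully.

```shell
# The right side of && runs only after the left side exits with status 0
sh -c 'exit 0' && echo "second command ran"

# Here the left side fails, so echo never runs
sh -c 'exit 1' && echo "never printed"
```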
You want this, or something like it:
bash -c 'pipenv shell'
This will start a bash program and ask it to run the pipenv shell command, then (after pipenv shell completes) exit. pipenv shell will itself start an interactive shell (as I understand it, I don't use pipenv myself). This is a little gross since you have two shells but it's not a big deal.
To translate this into docker you'd use:
docker exec -ti my_container_name bash -c 'pipenv shell'
then to put that into your makefile:
to-container:
docker exec -ti my_container_name bash -c 'pipenv shell'
I know I'm a little late to the party, but I stumbled onto this page just today, so for anyone who comes after: you might want to put pipenv shell in your ~/.bashrc file, as that file is loaded every time an interactive bash session starts.
For example, in my Dockerfile, my WORKDIR is set to /opt/app/ and my Pipfile is in the same directory. Thus, I just need to add pipenv shell at the end of my .bashrc file. And voilà! Running docker exec -ti container-name /bin/bash will go straight into the pipenv environment.
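A minimal sketch of that append step (the /opt/app path comes from the answer above; mktemp stands in for the real ~/.bashrc so the sketch is runnable anywhere):

```shell
# Append the command to (a stand-in for) ~/.bashrc; every interactive
# bash session reads that file, so pipenv shell would start automatically
rc=$(mktemp)                                # stand-in for ~/.bashrc
echo 'cd /opt/app && pipenv shell' >> "$rc"
tail -n 1 "$rc"
```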
I've created a docker image with all the modules required for our build environment. If I start a container in interactive mode, I can build fine.
docker run -v <host:container> -w my_working_dir -it my_image
$make -j16
But if I try to do this from a command line I get compile failures (well into the process)
docker run -v <host:container> -w my_working_dir my_image bash -c "make -j16"
Also if I run the container detached and use docker exec I also get compile failures (same point)
docker run -v <host:container> -t --detach --name star_trek my_image
docker exec star_trek bash -c "cd my_working_dir; make -j16"
Entering an interactive session with the detached container also seems to pass, though I think I have seen this fail as well.
docker exec -it star_trek_d bash
$make -j16
This will be part of an automated build system so I need to be able run this without user intervention.
I'm not sure why these behave differently, but I ran multiple combinations and the only way I've been able to get a successful build is through the interactive method above. Other than the interactive session having more of a logged-in user configuration, what is the difference between running interactively and passing the command on the command line?
My preferred method would be to run the container detached so I can send several sequential commands, as we have a complex build and test process, but if I have to spin the container up each time I'm OK with that at this point, because I really need to get this running like last week.
*Commands are pseudo-code and simplified to aid readability. I'm using bash -c because I need to run a script for our tests, e.g. bash -c "my_script.sh; run_test".
UPDATE - We need custom paths for our build tools, and I believe this is what's not working except in the interactive session. Our /etc/bashrc file builds the correct path and exports it. When I do a docker run I've tried running a script that does a source /etc/bashrc, among other initialization things we need, before doing the make, but this doesn't seem to work. Note I have to pipe in the password, as this needs to be run using sudo; the other commands seem to work fine.
bash -c 'echo su_password | sudo -S /tmp/startup.sh; make -j16'
I've also tried to just set it in one command, without success:
bash -c 'export <path>; make -j16'
What is the best way to set the path in the container so installed applications can be found? I don't want to hard code them in the dockerfile but will at this point if I must.
I have this working. As our path is very long, I had set it in a variable and passed it in on the command line; it turns out a missing colon was the issue:
export PATH=$PATH/...
vs
export PATH=$PATH:/...
Now I am just specifying the whole path each time and everything is working.
bash -c 'export PATH=$PATH/<dir>/<program>/bin:/<dir>/<program>/bin:...; make -j16'
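The difference between the two `export` lines above is only the `:` separator, but it's fatal: without it, the new directory gets glued onto the last existing PATH entry instead of becoming a new one. A quick sketch (the /opt/tool/bin directory is hypothetical):

```shell
# Wrong: no colon, so /opt/tool/bin is fused onto the *last* entry of PATH
bad="$PATH/opt/tool/bin"

# Right: a colon separates the new entry from the existing ones
good="$PATH:/opt/tool/bin"

# Splitting on ':' shows the last entry is now a clean /opt/tool/bin
echo "$good" | tr ':' '\n' | tail -n 1
```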
Below is the command I am trying to run:
docker exec sandbox_1 'influxd-ctl sandbox_1:8091'
I understand that apparently this means the container will execute it with a different shell that does have the necessary $PATH but I'm not sure how to deal with that.
For what it's worth, I tried influxd-ctl without the single quotes and it didn't read the rest of the command.
docker exec sandbox_1 influxd-ctl sandbox_1:8091
Thoughts?
Update: I also tried running bash -c <string> as the command I passed to exec but that didn't seem to work either.
Single quotes shouldn't be used here: docker exec takes the command and its arguments as separate arguments.
The correct command in your case should be:
docker exec <container> influxd-ctl <container>:8091
You can also test the command when having a shell inside the container like this:
docker exec -it <container> bash
You should then (provided bash is available inside the container, otherwise other shells can be used instead) get a root shell like this:
root@<container>:~#
Note: The working dir might be different based on where it was set in the Dockerfile used to build the image of the container.
In the now interactive shell talking to the container, you can try your command explicitly without the Exec command passing stuff around.
root@<container>:~# influxd-ctl <container>:8091
If you find that your command doesn't work there, then probably the influxd-ctl command expects different parameters from what you are suggesting.
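The argv splitting that bites here is easy to reproduce without Docker. `printf '%s\n'` prints each of its arguments on its own line, so it shows exactly how many arguments each style of quoting produces:

```shell
# Single quotes make one argument: docker would look for a program
# literally named "influxd-ctl sandbox_1:8091" and fail
printf '%s\n' 'influxd-ctl sandbox_1:8091'

# Unquoted, the shell splits it into two arguments: a program name
# and its parameter, which is what docker exec expects
printf '%s\n' influxd-ctl sandbox_1:8091
```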
There is my Dockerfile:
# https://hub.docker.com/_/php/
FROM php:5.5.23-fpm
USER www-data
ADD .bash_profile /var/www/.bash_profile
SHELL ["/bin/bash", "-c"]
RUN source /var/www/.bash_profile
Then, after the container was built, I ran docker exec -it CONTAINER_NAME bash, but I did not see my aliases defined in /var/www/.bash_profile. If I execute source /var/www/.bash_profile manually, everything is OK.
The same problem described here: https://github.com/docker/kitematic/issues/896 but no answer.
Had a similar issue and the easiest solution was to use the -l option to bash to make bash act as if it had been invoked as a login shell.
docker run --rm -it $IMAGE /bin/bash -l
bash will then read in ~/.bash_profile
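You can check which startup file a given invocation reads by using a throwaway HOME (a sketch; mktemp -d stands in for the real home directory):

```shell
# bash reads ~/.bash_profile only when started as a login shell (-l)
fakehome=$(mktemp -d)
echo 'echo from_profile' > "$fakehome/.bash_profile"

HOME="$fakehome" bash -lc 'true'   # login shell: prints from_profile
HOME="$fakehome" bash -c 'true'    # non-login, non-interactive: prints nothing
```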
That's because those (i.e. RUN and SHELL) are build instructions. When you execute docker run, the ENTRYPOINT and CMD are executed instead.
docker exec however just enters into an existing container's namespace and executes a command. So in your case it just runs bash. That's why you have to source your profile again.
UPDATE:
This snippet is from man bash:
When an interactive shell that is not a login shell is started, bash reads and executes commands from
/etc/bash.bashrc and ~/.bashrc, if these files exist.
So in your case, if you change the file name to ~/.bashrc, it will probably work.
I faced a similar issue.
Dockerfile relevant snippet:
RUN scripts/script1.sh
RUN scripts/script2.sh
Snippet from scripts/script1.sh
echo "TEST_ENV=Hello" >> ~/.bashrc
source ~/.bashrc
Snippet from scripts/script2.sh
echo $TEST_ENV
The echo statement in script2 printed nothing, that is script2 could not see TEST_ENV variable at all.
So, it turns out that every RUN instruction is executed in its own shell. The environment variable is therefore only visible in the shell where source ~/.bashrc is executed, which is the shell in which script1 ran. For it to be visible in script2's shell, an explicit source ~/.bashrc has to be executed from script2.
In case an environment variable is required to be present within the running container, the code run as part of Docker's ENTRYPOINT or CMD should source the .bashrc file. Logging into the running container with docker exec -it <container name> /bin/bash will demonstrate that the environment variable is indeed set.
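The underlying behavior is easy to reproduce locally: each RUN is a fresh shell process, so exports from one do not survive into the next. Here two `sh -c` calls stand in for the two RUN instructions:

```shell
# "RUN scripts/script1.sh": the export lives only in this shell process
sh -c 'export TEST_ENV=Hello; echo "inside first shell: $TEST_ENV"'

# "RUN scripts/script2.sh": a brand-new shell, TEST_ENV is gone
sh -c 'echo "inside second shell: $TEST_ENV"'
```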
Trying to concatenate a value to an existing environment variable in a docker container I'm starting.
for example - docker run -it -e PATH=$PATH:foo continuumio/anaconda
I am currently stuck at the point of trying to concatenate a value to the existing PATH environment variable that already exists in the container.
I am expecting to see the following value in the PATH environment variable of the container - PATH=/opt/conda/bin:/usr/lib/jvm/java-8-openjdk-amd64/bin:/usr/local/scala/bin:/usr/local/sbt/bin:/usr/local/spark/bin:/usr/local/spark/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Instead I get this - PATH=$PATH:foo
using the docker run command. Is there anyway to achieve what I'm aiming at?
--EDIT--
I am executing the command from a windows 10 command line window.
Try the following:
docker run -it continuumio/anaconda /bin/bash -c "PATH=$PATH:foo exec bash"
This command launches bash in the container, passes it a command (-c) that appends to the existing $PATH and then replaces itself with a new bash copy (exec bash) that inherits the new $PATH value.
If you also want to execute a command in the updated shell, you can pass another -c option to exec bash, but note that quoting can get tricky, and that a trick is needed to keep a shell open:
docker run -it continuumio/anaconda /bin/bash -c "PATH=$PATH:foo exec bash -c 'date; exec bash'"
The small caveat is that the shell that is running when the startup command has finished is not the same instance as the one that ran the command (which shouldn't be a problem, unless your startup command made modifications to the shell state (such as defining functions, aliases, ...) that must be preserved).
As for what you tried:
The only way to set an environment variable with -e is if the value is known ahead of time, outside the container; whatever you pass to -e must be a literal value - it cannot reference definitions inside the container.
As an aside: If you ran your command on a Unix platform rather than Windows, the current shell would expand $PATH, which is also not the intent.
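That host-side expansion can be seen without Docker: quoting controls whether $PATH is expanded before the command (here echo, standing in for docker run) ever sees it.

```shell
# Double quotes: the calling shell expands $PATH first, so the command
# receives the already-expanded value (what happens on a Unix host)
echo "PATH=$PATH"

# Single quotes: no expansion, the literal text is passed through
# (effectively what happens under cmd.exe, which ignores $-syntax)
echo 'PATH=$PATH'
```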