Docker Exec command does not work properly - docker

I have a script run.sh that I run as I initialize the container through the docker run command. The script runs successfully. I can also get a bash instance (through docker exec -i -t container-name bash) in the container and run the script successfully (note that by default I have su privileges when I get the bash).
However, when I run the script from the host through docker exec -i -t container-name /run.sh, the script runs but does not produce the outcome that it produces through the alternative approaches. I know it runs because it shows some of the expected behavior, but not all of it. So my main question is: what is the difference between executing a script from an interactive shell and running the same script through docker exec?
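One quick way to surface such differences is to compare the environment a bare docker exec sees with the one a login shell sees. A minimal sketch, assuming the container is named container-name and has bash (the output file names are illustrative):
docker exec container-name sh -c 'env | sort' > plain-exec-env.txt
docker exec container-name bash -lc 'env | sort' > login-shell-env.txt
diff plain-exec-env.txt login-shell-env.txt
A plain docker exec starts a non-interactive, non-login shell, so files such as /etc/profile and ~/.bashrc are never sourced; any PATH entries or variables your script expects from them will be missing.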

Related

Docker run uses host PATH when chaining commands

I have built an image that bundles utilities for running commands through several CLIs. I want to run it as an executable as follows:
docker run my_image cli command
Where cli is my custom CLI and command is a command to that CLI.
When I build my image I have the following instruction in the Dockerfile:
ENV PATH="/cli/scripts:${PATH}"
The above works if I do not chain commands to the container. If I chain commands it stops working:
docker run my_image cli command && cli anothercommand
Command 'cli' not found, but can be installed with...
Where the first command works and the other fails.
So the logical conclusion is that cli is missing from the PATH. I tried to verify that with:
docker run my_image printenv PATH
This actually outputs the container's PATH, and everything looks alright. So I tried to chain this command too:
docker run my_image printenv PATH && printenv PATH
And sure enough, this outputs first the container's PATH and then the PATH of my system.
What is the reason for this? How do I work around it?
When you type a command into your shell, your local shell processes it before any command gets run. It sees (reformatted):
docker run my_image cli command \
&& \
cli anothercommand
That is, your host's shell picks up the &&, so the host first runs docker run and then runs cli anothercommand (if the container exited successfully).
You can tell the container to run a shell, and then the container shell will handle things like command chaining, redirections, and environment variables:
docker run my_image sh -c 'cli command && cli anothercommand'
If this is more than occasional use, also consider writing it into a shell script:
#!/bin/sh
set -e
cli command
cli anothercommand
COPY the script into your Docker image, and then you can docker run my_image cli_commands.sh or some such.
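A minimal Dockerfile sketch of that approach, assuming the script above is saved as cli_commands.sh next to the Dockerfile (the file name is illustrative; /cli/scripts is the directory the image already puts on PATH):
COPY cli_commands.sh /cli/scripts/cli_commands.sh
RUN chmod +x /cli/scripts/cli_commands.sh
Because /cli/scripts is on the image's PATH, docker run my_image cli_commands.sh then works without any quoting tricks on the host.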

Running shell script from PC in running Docker

I have pulled a docker image and the container is running successfully as well. But I want to run a shell script in the running container. The shell script is located on my hard disk. I cannot figure out which command to use, or how to give the path of the shell file, so that it can be executed in the running container.
TL;DR
There are two ways that could work in your case.
You can run a one-liner script using docker exec with sh -c (or bash -c):
docker exec -i <your_container_id> sh -c 'sh-command-1 && sh-command-2 && sh-command-n'
You can copy shell script into container using docker cp and then run it in docker context:
docker cp ~/your-shell-script.sh <your_container_id>:/tmp
docker exec -i <your_container_id> /tmp/your-shell-script.sh
Precaution
Not all containers let you run shell scripts in their context. You can check by executing a simple shell command in the container:
docker exec -i <your_container_id> echo "Shell works"
For future reference, see the section Understand how CMD and ENTRYPOINT interact in the Dockerfile reference.
Docker Exec One-liner
docker exec -i <your_container_id> sh -c 'sh-command-1 && sh-command-2 && sh-command-n'
If your container has sh, bash, or a BusyBox shell wrapper (as alpine does), you can send a one-line shell script to the container's shell.
Limitations:
only short scripts;
hard to pass command-line arguments (though see the sketch after this list);
only if your container has a shell.
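Passing arguments to an inline script is possible, if clumsy: POSIX sh -c accepts positional parameters after the script, where the first extra word becomes $0. A sketch:
docker exec -i <your_container_id> sh -c 'echo "first: $1, second: $2"' sh arg1 arg2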
Docker Copy and Execute Script
docker cp ~/your-shell-script.sh <your_container_id>:/tmp
docker exec -i <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
You can copy the script from the host to the container and then execute it.
You can pass arguments to the script.
You can run the script with root privileges by adding -u root: docker exec -i -u root <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
You can run the script interactively by adding -t: docker exec -it <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
Limitations:
one more command to execute;
only if your container has a shell.
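Putting the two steps together, a minimal end-to-end sketch. Assume this illustrative script is saved on the host as ~/your-shell-script.sh:
#!/bin/sh
# Print where the script runs and which arguments it received
echo "Running inside: $(hostname)"
echo "Arguments: $*"
Then copy it in and run it:
docker cp ~/your-shell-script.sh <your_container_id>:/tmp
docker exec -i <your_container_id> sh /tmp/your-shell-script.sh -arg1 -arg2
Invoking the script through sh sidesteps the need for an execute bit on the copied file.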

Executing a script inside a docker container gives no errors but does not work either

I have a docker container with the basic ubuntu image. I use the following command to start it.
docker container run -it -d -v c:\Git\ENGINE_LIB_DIR:/ENGINE_LIB_DIR:ro --name ibuntu ubuntu
Inside the mounted volume is a Java JDK and a script that looks like this:
#!/bin/bash
echo "export JAVA_HOME=/ENGINE_LIB_DIR/jdk/" >> ~/.bashrc;
echo "export PATH=${PATH}:/ENGINE_LIB_DIR/jdk/bin/" >> ~/.bashrc;
exec bash
So it basically adds the mounted Java to the PATH to make it usable. This script works as long as I execute it from the Ubuntu bash inside the container. If I try to use
docker exec -it ibuntu sh -c "sh /ENGINE_LIB_DIR/action.sh"
from outside the container it does not give any error message, whereas
docker exec -it ibuntu sh -c "java -version"
Returns "java: not found". So I suspect the script is not executed properly. I tried absolute paths, just without "sh -c" and basically any other method I found by googeling.
My goal is to easily use a Java JDK provided inside a docker container to build a project. I am grateful for any help.
Edit:
I tried the /bin/bash -ic approach from #itachi. It still says java: not found, while the shell call gives back this error:
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
sh: 0: Can't open ./ENGINE_LIB_DIR/action.sh
Edit 2:
I managed to narrow the behaviour down to the docker exec command. I set up the container with docker container run -it -d -v c:\Git\ENGINE_LIB_DIR:/ENGINE_LIB_DIR:ro --entrypoint /ENGINE_LIB_DIR/action.sh --name ibuntu ubuntu /bin/bash. The Java PATH entry works when I am attached to the container, but when I execute docker exec ibuntu sh -c "java -version" it still says sh: 1: java: not found. I would be grateful for any idea.
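This behaviour is consistent with how docker exec works: it starts a fresh process that is not a child of the entrypoint shell, and a non-interactive sh never reads ~/.bashrc, so the exports written there are invisible to it. One workaround sketch is to pass the environment directly with -e; the PATH value below assumes Ubuntu's default and is illustrative:
docker exec -e JAVA_HOME=/ENGINE_LIB_DIR/jdk -e PATH=/ENGINE_LIB_DIR/jdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ibuntu java -version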

Execute host shell script from meteor container

I have a shell script on my host. I've started a docker container from the meteord image. It is running; however, I would like to execute this shell script inside the meteord container. Is that possible?
Yes. That is possible, but you will have to copy the script into the container as follows:
docker cp <script> <container-name/id>:<path>
docker exec <container-name/id> <path>/<script>
For example:
docker cp script.sh silly_nightingale:/root
docker exec silly_nightingale /root/script.sh
Just make sure the script has executable permissions. Also, you can copy the script into the image at build time in the Dockerfile and run it with docker exec afterwards.
Updated:
You can also bind mount the script's directory with -v as follows:
docker run -d -v /absolute/path/to/script/dir:/path/in/container <IMAGE>
Now run the script as follow:
docker exec -it <Container-name> bash /path/in/container/script.sh
Afterwards you will be able to see the generated files in /absolute/path/to/script/dir on the host. Also, make sure to use absolute paths in scripts and commands to avoid redirection issues. I hope it helps.

How to get docker exec stdout to be as verbose as running command in container?

If I run a command using docker's exec command, like so:
docker exec container gulp
It simply runs the command, but nothing is output to my terminal window.
However, if I actually go into the container and run the command manually:
docker exec -ti container bash
gulp
I see gulp's output:
[13:49:57] Using gulpfile ~/code/services/app/gulpfile.js
[13:49:57] Starting 'scripts'...
[13:49:57] Starting 'styles'...
[13:49:58] Starting 'emailStyles'...
...
How can I run my first command and still have the output sent to my terminal window?
Side note: I see the same behavior with npm installs, forever restarts, etc. So it is not just a gulp issue, but likely something with how docker maps stdout.
How can I run my first command and still have the output sent to my terminal window?
You need to make sure docker run is launched with the -t option in order to allocate a pseudo tty.
Then a docker exec without -t would still work.
I discuss docker exec -it here, which references "Fixing the Docker TERM variable issue". In the transcript below, d is an alias for docker:
docker#machine:/c/Users/vonc/prog$ d run --name test -dit busybox
2b06a0ebb573e936c9fa2be7e79f1a7729baee6bfffb4b2cbf36e818b1da7349
docker#machine:/c/Users/vonc/prog$ d exec test echo ok
ok
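To check after the fact whether a container was created with a TTY, docker inspect can read the setting back; a small sketch:
docker inspect --format '{{.Config.Tty}}' test
This prints true for the container above, since it was started with -t.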
