Running shell script from PC in running Docker - docker

I have pulled a docker image and the container is running successfully. Now I want to run a shell script inside that running container. The shell script is located on my hard disk. I cannot work out which command to use, or how to give the path of the shell script, so that it gets executed inside the running container.
Please guide me.
Regards

TL;DR
There are two ways that could work in your case.
You can run a one-liner script using docker exec with sh/bash and the -c argument:
docker exec -i <your_container_id> sh -c 'sh-command-1 && sh-command-2 && sh-command-n'
You can copy the shell script into the container using docker cp and then run it in the container's context:
docker cp ~/your-shell-script.sh <your_container_id>:/tmp
docker exec -i <your_container_id> /tmp/your-shell-script.sh
Precaution
Not all containers let you run shell scripts in their context. You can check by executing any shell command in the container:
docker exec -i <your_container_id> echo "Shell works"
For future reference, see the Dockerfile reference section "Understand how CMD and ENTRYPOINT interact".
Docker Exec One-liner
docker exec -i <your_container_id> sh -c 'sh-command-1 && sh-command-2 && sh-command-n'
If your container has sh, bash, or a BusyBox shell wrapper (as alpine does), you can send a one-line shell script to the container's shell.
Limitations:
only short scripts;
hard to pass command-line arguments (see the sketch after this list);
only if your container has a shell.
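If you do need a couple of arguments in the one-liner form, POSIX sh accepts positional parameters after the quoted script; a minimal sketch (container id and values are placeholders):
docker exec -i <your_container_id> sh -c 'echo "first: $1, second: $2"' sh value1 value2
The extra sh after the quoted script fills $0, so value1 and value2 arrive as $1 and $2.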
Docker Copy and Execute Script
docker cp ~/your-shell-script.sh <your_container_id>:/tmp
docker exec -i <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
You can copy the script from the host into the container and then execute it (a full sketch follows after the limitations below).
You can pass arguments to the script.
You can run the script as root with -u root: docker exec -i -u root <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
You can run the script interactively with -t: docker exec -it <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
Limitations:
one more command to execute;
only if your container has a shell.
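Putting the copy-and-execute steps together, a minimal sketch (file name and container id are placeholders; docker cp preserves the file mode, so the script should be executable on the host):
chmod +x ~/your-shell-script.sh
docker cp ~/your-shell-script.sh <your_container_id>:/tmp
docker exec -i <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
If you would rather not rely on the executable bit, run it through the shell instead: docker exec -i <your_container_id> sh /tmp/your-shell-script.sh -arg1 -arg2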

Related

Executing a script inside a docker container gives no errors but does not work either

I have a docker container with the basic ubuntu image. I use the following command to start it.
docker container run -it -d -v c:\Git\ENGINE_LIB_DIR:/ENGINE_LIB_DIR:ro --name ibuntu ubuntu
Inside the mounted volume is a Java JDK and a script which looks like this:
#!/bin/bash
echo "export JAVA_HOME=/ENGINE_LIB_DIR/jdk/" >> ~/.bashrc;
echo "export PATH=${PATH}:/ENGINE_LIB_DIR/jdk/bin/" >> ~/.bashrc;
exec bash
So it basically adds the mounted Java to the path to make it usable. This script works as long as I execute it from the Ubuntu bash inside the container. If I try to use
docker exec -it ibuntu sh -c "sh /ENGINE_LIB_DIR/action.sh"
from outside the container it does not give any error message, whereas
docker exec -it ibuntu sh -c "java -version"
Returns "java: not found". So I suspect the script is not executed properly. I tried absolute paths, just without "sh -c" and basically any other method I found by googeling.
My goal is to easily use a Java JDK provided inside a docker container to build a project. I am grateful for any help.
Edit:
I tried the /bin/bash -ic approach from @itachi. It still says java: not found, while the shell call gives back this error:
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
sh: 0: Can't open ./ENGINE_LIB_DIR/action.sh
Edit 2:
I managed to narrow the behaviour down to the docker exec command. I set up the container with docker container run -it -d -v c:\Git\ENGINE_LIB_DIR:/ENGINE_LIB_DIR:ro --entrypoint /ENGINE_LIB_DIR/action.sh --name ibuntu ubuntu /bin/bash. The Java path variable works while attached to the container, but when I execute docker exec ibuntu sh -c "java -version" it still says sh: 1: java: not found. I would be grateful for any ideas.
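One observation worth adding here (mine, not from the thread): docker exec starts a fresh, non-interactive shell that does not read ~/.bashrc, so the PATH exported by action.sh never reaches it. A sketch of a workaround is to set the path in the exec'd command itself:
docker exec ibuntu bash -c 'export PATH="$PATH:/ENGINE_LIB_DIR/jdk/bin"; java -version'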

Entering docker container with exec losing PATH environment variable

Here is my Dockerfile:
FROM ros:kinetic-ros-core-xenial
CMD ["bash"]
If I run docker build -t ros . && docker run -it ros, and then from within the container echo $PATH, I'll get:
/opt/ros/kinetic/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
If I exec into the container (docker exec -it festive_austin bash) and run echo $PATH, I'll get:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Why are the environment variables different? How can I get a new bash process on the container with the same initial environment?
The ENTRYPOINT command is only invoked on docker run, not on docker exec.
I assume that this /ros_entrypoint.sh script is responsible for adding stuff to PATH. If so, then you could do something like this for docker exec:
docker exec -it <CONTAINER_ID> /ros_entrypoint.sh bash
docker exec only gets the environment variables defined in the Dockerfile with the ENV instruction. With docker exec [...] bash you additionally get whatever bash sets up in its own startup files.
Add this line to your Dockerfile:
ENV PATH=/opt/ros/kinetic/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
or shorter:
ENV PATH=/opt/ros/kinetic/bin:$PATH
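For context, the Dockerfile from the question with that line added would look roughly like this (a sketch, not the answerer's exact file):
FROM ros:kinetic-ros-core-xenial
ENV PATH=/opt/ros/kinetic/bin:$PATH
CMD ["bash"]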
This is an old question, but since it's where Google directed me, I thought I'd share the solution I ended up using.
In your entrypoint script add a section similar to this:
cat >> ~/.bashrc << EOF
export PATH="$PATH"
export OTHER="$OTHER"
EOF
Once you rebuild your image you can exec into your container (notice bash is invoked in interactive mode):
docker run -d --rm --name container-name your_image
docker exec -it container-name /bin/bash -i
If you echo $PATH now, it should be the same as what you have set in .bashrc.
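For illustration, a minimal entrypoint sketch along those lines (the file name and the ROS setup path are my assumptions, not taken from the answer):
#!/bin/bash
# hypothetical my_entrypoint.sh
source /opt/ros/kinetic/setup.bash   # populates PATH and other ROS variables
cat >> ~/.bashrc << EOF
export PATH="$PATH"
EOF
exec "$@"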

How to continue running scripts when exiting docker containers

My script is as follows:
# start a ubuntu container in the background
docker run -it --name ub -d ubuntu /bin/bash
sleep 1
# run a command in the container
docker exec -it ub bash
echo 234
# exit the container
exit
sleep 1
# do something else
echo 123
But the script just stops right after exit and hangs there. Does anyone know why that is?
p.s: My Docker version is: 17.03.0-ce, build 60ccb22
You have passed -it to docker exec, which opens /bin/bash in your container and waits there. The next command won't get executed until that interactive shell exits.
It's better to create a script file and copy it into the container while building the image, and run the script when the container starts. You can specify that using CMD in the Dockerfile.
You won't need an additional exec command.
The corresponding Dockerfile would be
FROM ubuntu:latest
COPY <path-to-script> <dest>
CMD ["<dest>"]
You have to create the script file alongside the Dockerfile. Build the image using the command
docker build -t <image-name> <location of Dockerfile>
The execution command would be
docker run -d --name <name> <image-name>
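Alternatively (not part of the original answer, just a sketch), the script from the question works if the interactive flags are dropped from docker exec and the command is passed directly, so nothing blocks waiting for a terminal:
docker run -d --name ub ubuntu sleep infinity
docker exec ub bash -c 'echo 234'
echo 123
docker rm -f ub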

Running a script in docker container and not killing the script when leaving terminal

I have got some docker container, for instance my_container.
I want to run a long-living script in my container without killing it when I leave the shell.
I would like to do something like this:
docker exec -ti my_container /bin/bash
And then
screen -S myScreen
Then
Execute my script in screen and exit the terminal.
Unfortunately, I cannot execute screen in the docker terminal.
This may help you:
docker exec -i -t c2ab7ae71ab8 sh -c "exec >/dev/tty 2>/dev/tty </dev/tty && /usr/bin/screen -r nmsrv -s /bin/bash"
and this is the reference link
The only way I can think of is to run your container with your script at the start:
docker run -d --name my_container nginx /etc/init.d/myscript
If you have to run the script directly in an already running container, you can do that with exec:
docker exec my_container /path/to/some_script.sh
or if you want to run it through PHP:
docker exec my_container php /path/to/some_script.php
That said, you typically don't want to run scripts in already-running containers, but rather use the same image as the running container. You can do that with a standard docker run:
docker run -a stdout --rm some_repo/some_image:some_tag php /path/to/some_script.php
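Another option worth noting (my addition, not from the answers above) is docker exec's detach flag, which starts the command in the background inside the container and returns immediately, so closing your terminal does not kill it (it lives as long as the container does):
docker exec -d my_container /path/to/some_script.sh
To keep the output, redirect it inside the container, e.g. docker exec -d my_container sh -c '/path/to/some_script.sh > /tmp/some_script.log 2>&1'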

Docker Exec command does not work properly

I have a script run.sh that I run as I initialize the container through the docker run command. The script runs successfully. I can also get a bash instance in the container (through docker exec -i -t container-name bash) and run the script successfully (note that by default I have superuser privileges when I get the bash).
However, when I run the script from the host through docker exec -i -t container-name /run.sh, the script runs but does not produce the outcome that it produces through the alternative approaches. I know it runs, as it produces some of the expected behaviour but not all of it. So my main question is: what are the differences between executing a script from the command line inside the container and running the same script through docker exec?
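A way to see the difference in practice (a diagnostic sketch, not from the original question) is to compare the environment each approach sees; docker exec does not run the shell's login or interactive startup files, so variables set in .bashrc or .profile are missing:
docker exec container-name env | sort
docker exec -i container-name bash -ic 'env' | sort
Comparing the two listings usually shows which variables (extra PATH entries, application settings) the script relies on but does not get under plain docker exec.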
