Save current directory between docker exec commands

I have a list of commands that I need to issue one by one to a running Docker container. However, when I "cd" in the container, it does not work as expected. For example:
docker run -di --name example alpine:latest
for CMD in 'mkdir -p example && touch example/file' 'cd example' 'ls'
do
docker exec -w='/root' example sh -c "$CMD"
done
This will print out example instead of file. How should I properly execute a series of statements while preserving the working directory between them? Preferably, if possible, without concatenating all the commands.

I think you should use this format:
dingrui@gdcni:~/onie$ docker exec -w /root example sh -c 'mkdir -p example; touch example/file; cd example; ls'
file
or write the commands to a script, mount it into the container, and run it there:
dingrui@gdcni:~/onie$ docker run -itd -w /root -v $(pwd):/app --name example busybox /app/test.sh
12f7f2b55182bd18c45ce31e03390544adaedc1a2dd923d3bc4293b214301650
dingrui@gdcni:~/onie$ docker logs example
file
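For the second approach, the test.sh mounted into /app is not shown in the answer; matching the commands from the question, it could look like this (my reconstruction):

#!/bin/sh
# All commands run in one shell process, so the cd persists.
# Remember to chmod +x test.sh on the host before mounting it.
mkdir -p example
touch example/file
cd example
ls

If you would rather keep the commands separate without concatenating them, another option (my own sketch, not from the answer) is to feed them over stdin to a single long-lived shell, so state such as the working directory carries across:

printf '%s\n' 'mkdir -p example && touch example/file' 'cd example' 'ls' \
| docker exec -i -w /root example sh

Because the same sh process inside the container reads all three lines, the final ls prints file.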

Related

Trying to copy a script into a detached Docker container, and execute it with docker exec

Right now I am setting my Docker instance running with:
sudo docker run --name docker_verify --rm \
-t -d daoplays/rust_v1.63
so that it runs in detached mode in the background. I then copy a script to that instance:
sudo docker cp verify_run_script.sh docker_verify:/.
and I want to be able to execute that script with what I expected to be:
sudo docker exec -d docker_verify bash \
-c "./verify_run_script.sh"
However, this doesn't seem to do anything. If from another terminal I run
sudo docker container logs -f docker_verify
nothing is shown. If I attach myself to the Docker instance then I can run the script myself, but that rather defeats the point of running in detached mode.
I assume I am just not passing the right arguments here, but I am really not clear what I should be doing!
When you run a command in a container you also need to allocate a pseudo-TTY if you want to see the results.
Your command should be:
sudo docker exec -t docker_verify bash \
-c "./verify_run_script.sh"
(note the -t flag; also note that the -d flag is gone, since -d runs the command detached and its output never reaches your terminal)
Steps to reproduce it:
# create a dummy script
cat > script.sh <<EOF
echo This is running!
EOF
# run a container to work with
docker run --rm --name docker_verify -d alpine:latest sleep 3000
# copy the script
docker cp script.sh docker_verify:/
# run the script
docker exec -t docker_verify sh -c "chmod a+x /script.sh && /script.sh"
# clean up
docker container rm -f docker_verify
You should see This is running! in the output.

Create Makefile that Runs Docker Image and Changes Directory?

I would like to create a makefile that runs a Docker container, automatically mounts the current folder, and cds to the shared directory within the container.
I currently have the following, which runs the Docker image and mounts the directory with no issue, but I am unsure how to get it to change directory.
run:
	docker run --rm -it -v $(PWD):/projects dockerImage bash
I've seen some examples where you can append -c "cd /projects" at the end so that it is:
docker run --rm -it -v $(PWD):/projects dockerImage bash -c "cd /projects"
however, bash will exit immediately afterwards. I've also seen an example where you append && at the end, so that it is the following:
docker run --rm -it -v $(PWD):/projects dockerImage bash -c "cd /projects &&".
Unfortunately the console will just hang.
You can specify the working directory in your docker run command with the -w option. So you can do something like this:
docker run --rm -it -v $(PWD):/projects -w /projects dockerImage bash
You can find this option in the official docs here https://docs.docker.com/engine/reference/run/.
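Put into the Makefile from the question, the target would look like this (a sketch; dockerImage stands in for your real image name):

run:
	docker run --rm -it -v $(PWD):/projects -w /projects dockerImage bash

Since -w sets the working directory of the container's main process, bash starts directly in /projects and no cd is needed.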

Running a shell script from the PC in a running Docker container

I have pulled a Docker image, and the container is running successfully as well. But I want to run a shell script in the running container. The shell script is located on my hard disk. I cannot figure out which command to use, and how to give the path of the shell script, so that it can be executed in the running container.
Please guide me.
Regards
TL;DR
There are two ways that could work in your case.
You can run a one-liner script using docker exec sh/bash with the -c argument:
docker exec -i <your_container_id> sh -c 'sh-command-1 && sh-command-2 && sh-command-n'
You can copy the shell script into the container using docker cp and then run it in the container's context:
docker cp ~/your-shell-script.sh <your_container_id>:/tmp
docker exec -i <your_container_id> /tmp/your-shell-script.sh
Precaution
Not all containers let you run shell scripts in their context. You can check by executing any shell command in the container:
docker exec -i <your_container_id> echo "Shell works"
For future reference, check the section Understand how CMD and ENTRYPOINT interact in the Dockerfile reference.
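The short version of that section (my own illustration, not taken from the docs): ENTRYPOINT fixes the executable, while CMD supplies default arguments that docker run can override:

# Dockerfile: ENTRYPOINT is the fixed executable, CMD the default arguments
FROM alpine:latest
ENTRYPOINT ["/bin/echo"]
CMD ["hello from the container"]
# docker run IMAGE          -> prints: hello from the container
# docker run IMAGE goodbye  -> prints: goodbye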
Docker Exec One-liner
docker exec -i <your_container_id> sh -c 'sh-command-1 && sh-command-2 && sh-command-n'
If your container has sh, bash, or a BusyBox shell wrapper (as alpine does), you can send a one-line shell script to the container's shell.
Limitations:
only short scripts;
hard to pass command-line arguments (though see the sketch after this list);
only if your container has a shell.
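Arguments can still be smuggled in through sh -c's positional parameters (my sketch; the container id is a placeholder):

# sh -c assigns the extra words to $0, $1, ... inside the script,
# so this prints: hello world
docker exec -i <your_container_id> sh -c 'echo "hello $1"' sh world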
Docker Copy and Execute Script
docker cp ~/your-shell-script.sh <your_container_id>:/tmp
docker exec -i <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
You can copy a script from the host to the container and then execute it.
You can pass arguments to the script.
You can run script with root credentials with -u root: docker exec -i -u root <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
You can run script interactively with -t: docker exec -it <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
Limitations:
one more command to execute;
only if your container has a shell.
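One caveat for the copy-and-execute route (my addition): docker cp preserves the file mode from the host, so if the script was not executable there, either restore the bit inside the container or invoke the shell on it explicitly:

# restore the execute bit, then run the script as before
docker exec -i <your_container_id> chmod +x /tmp/your-shell-script.sh
docker exec -i <your_container_id> /tmp/your-shell-script.sh
# ...or skip chmod entirely and let sh read the file
docker exec -i <your_container_id> sh /tmp/your-shell-script.sh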

Docker make .sh executable and run it

I'm trying to give executable permission to my script inside a Docker image and run it. I don't want to set chmod +x for it in the Dockerfile.
I tried
docker run img /bin/bash -c "chmod +x ../test/test.sh; ../test/test.sh"
but I got "/bin/bash: bad interpreter: Text file busy"
And I can't just create two containers with these commands:
docker run -d img chmod +x ../test/test.sh
docker run -d img ../test/test.sh
=> starting container process caused "exec: \"../test/testing.sh\": permission denied"
I need to somehow bind these two containers together.
Text file busy means that something is already using the file.
Normally this would work:
docker run --rm -it alpine sh -c 'echo "echo it works" > test.sh && chmod +x test.sh && ./test.sh'
With the second set of commands you create two new containers that are completely separate. If you want to execute something in an already running container, you can use docker exec -it <container id or name> <command, e.g. bash>.
You don't need to set permissions if you just pass your script as a parameter, since bash then reads the file itself and the execute bit is never checked:
docker run -d IMAGE /bin/bash ../test/test.sh
(add -i and/or -t if you need them)
OK, I've figured it out.
First I made a container:
docker run --name CONTAINER -dt IMAGE
Then I executed my commands in it:
docker exec CONTAINER chmod +x ../test/test.sh
docker exec CONTAINER ../test/test.sh
Both exec commands run in the same container, so the chmod from the first is still in effect for the second.

Running a script in a Docker container by accepting an -e parameter

I have a very simple script called myscript.sh:
echo "this is test " > /tmp/myfile.txt
echo $TEST >> /tmp/myfile.txt
I have stored this script on my disk, and I plan to pass it to the container as a volume, like this:
docker run -d --name test \
-v /home/docker/test/myscript.sh:/tmp/myscript.sh \
-e TESTING=just-a-test \
test
The Dockerfile looks like this:
FROM ubuntu
CMD ["bash", "/tmp/myscript.sh"]
So the thought process is to get this script executed and get the result as a file myfile.txt that contains the value passed with -e.
Instead I am getting:
docker@boot2docker:~/test$ docker exec -it test /bin/bash
Error response from daemon: Container test is not running
which means that even this simplest of programs did not keep running as a container.
I could not figure it out.
The container ran, executed the script, then exited. A container only runs as long as its main process. When that stops, the container stops.
A simpler test would be to change your test script to:
#!/bin/bash
echo $TEST
I would change your Dockerfile to copy the file in and remove the "bash" part of the CMD instruction:
FROM ubuntu
COPY myscript.sh /myscript.sh
CMD /myscript.sh
(If the script is not executable on your host, you would also need a RUN chmod +x /myscript.sh after the COPY.)
Now rebuild and run:
$ docker build -t test .
...
$ docker run -e TEST=VAL test
...
The container should echo the value of the test variable and exit. (I haven't tested any of this, so apologies for any mistakes).
The answer to this question is to use ENTRYPOINT instead of CMD.
I did some research and came up with a solution that looks like this:
ENTRYPOINT ["bash", "<script>"]
To run the script, just use:
docker run -d --name <name> [--privileged] -p <host_port>:<container_port> \
-v /script.sh:/tmp/script.sh \
<image>
where -v mounts the script into the container. You can also use wget to fetch the script, like most people do, and execute it at runtime.
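A concrete version of that approach, wired back to the question above (my sketch; the image name test and the TEST variable are taken from the earlier answer):

# Dockerfile
FROM ubuntu
ENTRYPOINT ["bash", "/tmp/myscript.sh"]

# build once, then mount the script and pass the variable at run time
docker build -t test .
docker run -d --name test \
-v /home/docker/test/myscript.sh:/tmp/myscript.sh \
-e TEST=just-a-test \
test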
Thanks to all the people who tried to solve the query.
