Can I execute a local shell script within a docker container using docker run -it?
Here is what I can do:
$ docker run -it 5ee0b7440be5
bash-4.2# echo "Hello"
Hello
bash-4.2# exit
exit
I have a shell script on my local machine
hello.sh:
echo "Hello"
I would like to execute the local shell script within the container and read the value returned:
$ docker run -it 5e3337440be5 #Some way of passing a reference to hello.sh to the container.
Hello
A specific design goal of Docker is that you can't. A container can't access the host filesystem at all, except to the extent that an administrator explicitly mounts parts of the filesystem into the container. (See @tentative's answer for a way to do this for your use case.)
In most cases this means you need to COPY all of the scripts and support tools into your image. You can create a container running any command you want, and one typical approach is to set the image's CMD to the thing the container normally does (like running a Web server) but to allow running the container with a different command (an admin task, a background worker, ...).
# Dockerfile
FROM alpine
...
COPY hello.sh /usr/local/bin
...
EXPOSE 80
CMD httpd -f -h /var/www
docker build -t my/image .
docker run -d -p 8000:80 --name web my/image
docker run --rm --name hello my/image \
hello.sh
In normal operation you should not need docker exec, though it's really useful for debugging. If you're really stuck, need more diagnostic tools to understand how to reproduce a situation, and have no choice but to look inside the running container, you can also docker cp the script or tool into the container before you docker exec there. If you do this, remember that the image also needs to contain any dependencies for the tool (interpreters like Python or GNU Bash, C shared libraries), and that any docker cp'd files will be lost when the container is deleted.
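As a sketch of that docker cp / docker exec flow, reusing the hello.sh script and the web container from the example above (running the script through sh avoids needing the execute bit on the copied file):
docker cp hello.sh web:/tmp/hello.sh
docker exec web sh /tmp/hello.sh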
You can use a bind mount to mount a local file into the container and execute it. When you do that, however, be aware that you'll need to give the container process read and execute access to the folder or the specific script you want to run. Depending on your objective, using Docker for this purpose may not be the best idea.
See @David Maze's answer for reasons why. However, here's how you can do it:
Assuming you're on a Unix-based system and the hello.sh script is in your current directory, you can mount that single script into the container with -v $(pwd)/hello.sh:/home/hello.sh.
This command will mount the file into your container, set the working directory to the folder where you mounted it, and start a shell:
docker run -it -v $(pwd)/hello.sh:/home/hello.sh --workdir /home ubuntu:20.04 /bin/sh
root@987eb876b:/home# ./hello.sh
Hello World!
This command will run that script directly and save the output into the variable output:
output=$(docker run --rm -v $(pwd)/hello.sh:/home/hello.sh ubuntu:20.04 /home/hello.sh)
echo $output
Hello World!
References for more information:
https://docs.docker.com/storage/bind-mounts/#start-a-container-with-a-bind-mount
https://docs.docker.com/storage/bind-mounts/#use-a-read-only-bind-mount
Here's the situation:
I have a docker container (jenkins). I've mounted the sockets to my container so that I can perform docker commands inside my jenkins container.
Manually, everything works in the container. However, when Jenkins executes the job, it doesn't "wait" for the docker exec command to run to completion.
Below is an extract from the Jenkinsfile. The short-lived printenv command runs correctly and prints the environment variables. The next command (python) just gets run, and then Jenkins moves on immediately without waiting for completion. The Jenkins agent (slave) is running on an Ubuntu image. Running all these commands outside Jenkins works as expected.
echo "Running the app docker container in detached tty mode to keep it up"
docker run --detach --tty --name "${CONTAINER_NAME}" "${IMAGE_NAME}"
echo "Listing environment variables"
docker exec --interactive "${CONTAINER_NAME}" bash -c "printenv"
echo "Running test coverage"
docker exec --interactive "${CONTAINER_NAME}" bash -c "python -m coverage run --source . --branch -m pytest -vs"
It seems maybe related to this question.
Can anyone please explain how to get Jenkins to wait for the docker exec command to complete before proceeding to the next step?
I have considered alternatives, like the Docker Pipeline plugin, but I would much prefer to use something close to what I have above where possible.
OK, as another approach, I've also tried using the Docker Pipeline plugin here.
You can mount docker.sock as a volume to orchestrate containers on your host machine, like this in your docker-compose.yml:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Depending on your setup you might need to run
chmod 666 /var/run/docker.sock
to get going in the first place.
This works on macOS as well as Linux.
Ugh. This was down to the way that I'd set up docker support on the slave container.
I'd used socat to provide a TCP server proxy. Instead, I switched that out for a plain old docker.sock volume between host & container.
volumes:
- /var/run/docker.sock:/var/run/docker.sock
The very first time, I had to also sort out a permissions issue by doing (inside the container):
rm -Rf ~/.docker
chmod 666 /var/run/docker.sock
After that, everything "just worked". Very painful experience.
I'm provisioning a Docker CentOS image with Packer and using bash scripts instead of a Dockerfile to configure the image (this seems to be the Packer way). What I can't seem to figure out is how to update the PATH variable so that my custom binaries can be executed like this:
docker run -i -t <container> my_binary
I have tried putting a .sh file in the /etc/profile.d/ folder and also writing directly to /etc/environment, but none of that seems to take effect.
I suspect it has something to do with which shell Docker uses when executing commands in a disposable container. I thought it was the Bourne shell, but as mentioned earlier, neither the /etc/profile.d/ nor the /etc/environment approach worked.
UPDATE:
As I understand it now, it is not possible to change environment variables in a running container, for the reasons explained in @tgogos' answer. However, I don't believe this is an issue in my case, since after Packer is done provisioning the image it commits it and uploads it to Docker Hub. A more accurate example would be as follows:
$ docker run -itd --name test centos:6
$ docker exec -it test /bin/bash
[root@006a9c3195b6 /]# echo 'echo SUCCESS' > /root/test.sh
[root@006a9c3195b6 /]# chmod +x /root/test.sh
[root@006a9c3195b6 /]# echo 'export PATH=/root:$PATH' > /etc/profile.d/my_settings.sh
[root@006a9c3195b6 /]# echo 'PATH=/root:$PATH' > /etc/environment
[root@006a9c3195b6 /]# exit
$ docker commit test test-image:1
$ docker exec -it test test.sh
I expected to see SUCCESS printed, but instead got:
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"test.sh\": executable file not found in $PATH": unknown
UPDATE 2
I have updated PATH in ~/.bashrc, which lets me execute the following:
$ docker run -it test-image:1 /bin/bash
[root@8f821c7b9b82 /]# test.sh
SUCCESS
However, running docker run -it test-image:1 test.sh still results in:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: ...
I can confirm that my image CMD is set to "/bin/bash". So can someone explain why running docker run -it test-image:1 test.sh doesn't source ~/.bashrc?
A few good points are mentioned at:
How to set an environment variable in a running docker container (also check the link to the relevant github issue).
and Docker - Updating Environment Variables of a Container
where @BMitch mentions:
Destroy your container and start a new one up with the new environment variable using docker run -e .... It's identical to changing an environment variable on a running process, you stop it and restart with a new value passed in.
and in the comments section, he adds:
Docker doesn't provide a way to modify an environment variable in a running container because the OS doesn't provide a way to modify an environment variable in a running process. You need to destroy and recreate.
update: (see the comments section)
You can use
docker commit --change "ENV PATH=your_new_path_here" test test-image:1
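Assuming your_new_path_here includes /root (where test.sh lives in the example above), you should then be able to run the script directly, which should print SUCCESS:
docker run --rm test-image:1 test.sh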
/etc/profile is only read by bash when it is invoked as a login shell.
For more information about which files are read by bash on startup see this article.
EDIT: If you change the last line in your example to:
docker exec -it test bash -lc test.sh
it works as you expect.
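One way to see the login-shell difference yourself, using the test container from the question:
docker exec test bash -c 'echo $PATH'    # non-login shell: /etc/profile.d is not sourced
docker exec test bash -lc 'echo $PATH'   # login shell: picks up /etc/profile.d/my_settings.sh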
I have a shell script on my host. I've installed a Docker container from the meteord image and I have it running; however, I would like to execute this shell script inside the meteord Docker container. Is that possible?
Yes. That is possible, but you will have to copy the script into the container as follows:
docker cp <script> <container-name/id>:<path>
docker exec <container-name/id> <path>/<script>
For example:
docker cp script.sh silly_nightingale:/root
docker exec silly_nightingale /root/script.sh
Just make sure the script has executable permissions. Also, you can copy the script into the image at build time in the Dockerfile and run it with docker exec afterwards.
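For the build-time approach, a minimal Dockerfile sketch (the base image name below is a placeholder for whatever meteord image you already use):
# Dockerfile
FROM your-meteord-image
COPY script.sh /usr/local/bin/script.sh
RUN chmod +x /usr/local/bin/script.sh
Since /usr/local/bin is already on PATH, you can then run it with docker exec <container-name/id> script.sh.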
Updated:
You can also use a bind mount (volume) for it, as follows:
docker run -d -v /absolute/path/to/script/dir:/path/in/container <IMAGE>
Now run the script as follow:
docker exec -it <Container-name> bash /path/in/container/script.sh
Afterwards you will be able to see the generated files in /absolute/path/to/script/dir on the host. Also, make sure to use absolute paths in scripts and commands to avoid redirection issues. I hope this helps.
I would like to start a stopped Docker container with a different command, as the default command crashes - meaning I can't start the container and then use 'docker exec'.
Basically I would like to start a shell so I can inspect the contents of the container.
Luckily I created the container with the -it option!
Find your stopped container id
docker ps -a
Commit the stopped container:
This command saves modified container state into a new image named user/test_image:
docker commit $CONTAINER_ID user/test_image
Start/run with a different entry point:
docker run -ti --entrypoint=sh user/test_image
Entrypoint argument description:
https://docs.docker.com/engine/reference/run/#/entrypoint-default-command-to-execute-at-runtime
Note:
The steps above just give you a new container with the same filesystem state as the stopped one. That is great for a quick investigation; but environment variables, network configuration, attached volumes, and other settings are not inherited. You should specify all of these arguments explicitly.
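For example, a sketch of re-supplying those settings explicitly when running the committed image (the variable, mount, and network names below are placeholders, not values taken from your original container):
docker run -ti \
  --entrypoint=sh \
  -e SOME_ENV_VAR=value \
  -v /path/on/host:/path/in/container \
  --network some_network \
  user/test_image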
Steps to start a stopped container have been borrowed from here: (last comment) https://github.com/docker/docker/issues/18078
Edit this file (corresponding to your stopped container):
vi /var/lib/docker/containers/923...4f6/config.json
Change the "Path" parameter to point at your new command, e.g. /bin/bash. You may also set the "Args" parameter to pass arguments to the command.
Restart the docker service (note this will stop all running containers unless you first enable live-restore):
service docker restart
List your containers and make sure the command has changed:
docker ps -a
Start the container and attach to it, you should now be in your shell!
docker start -ai mad_brattain
Worked on Fedora 22 using Docker 1.7.1.
NOTE: If your shell is not interactive (e.g. you did not create the original container with -it option), you can instead change the command to "/bin/sleep 600" or "/bin/tail -f /dev/null" to give you enough time to do "docker exec -it CONTID /bin/bash" as another way of getting a shell.
NOTE2: Newer versions of docker have config.v2.json, where you will need to change either Entrypoint or Cmd (thanks user60561).
Add a check to the top of your Entrypoint script
Docker really needs to implement this as a new feature, but here's another workaround option for situations in which you have an Entrypoint that terminates after success or failure, which can make it difficult to debug.
If you don't already have an Entrypoint script, create one that runs whatever command(s) you need for your container. Then add these lines to the top of entrypoint.sh:
# Run once, hold otherwise
if [ -f "already_ran" ]; then
echo "Already ran the Entrypoint once. Holding indefinitely for debugging."
cat
fi
touch already_ran
# Do your main things down here
To ensure that cat holds the connection, you may need to provide a TTY. I'm running the container with my Entrypoint script like so:
docker run -t --entrypoint entrypoint.sh image_name
This will cause the script to run once, creating a file that indicates it has already run (in the container's virtual filesystem). You can then restart the container to perform debugging:
docker start container_name
When you restart the container, the already_ran file will be found, causing the Entrypoint script to stall with cat (which just waits forever for input that will never come, but keeps the container alive). You can then execute a debugging bash session:
docker exec -it container_name bash
While the container is running, you can also remove already_ran and manually execute the entrypoint.sh script to rerun it, if you need to debug that way.
I took @Dmitriusan's answer and made it into an alias:
alias docker-run-prev-container='prev_container_id="$(docker ps -aq | head -n1)" && docker commit "$prev_container_id" "prev_container/$prev_container_id" && docker run -it --entrypoint=bash "prev_container/$prev_container_id"'
Add this into your ~/.bashrc aliases file, and you'll have a nifty new docker-run-prev-container alias which'll drop you into a shell in the previous container.
Helpful for debugging failed docker builds.
This is not exactly what you're asking for, but you can use docker export on a stopped container if all you want is to inspect the files.
mkdir $TARGET_DIR
docker export $CONTAINER_ID | tar -x -C $TARGET_DIR
docker-compose run --entrypoint /bin/bash <service_name>
(for convenience, put your environment variables and volume mounts in the docker-compose.yml)
or use docker run and manually specify all the arguments
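As a sketch, a minimal docker-compose.yml so that docker-compose run picks up your environment and mounts (the service name, image, and values below are placeholders):
version: "3"
services:
  app:
    image: my/image
    environment:
      - SOME_ENV_VAR=value
    volumes:
      - ./data:/data
Then: docker-compose run --entrypoint /bin/bash app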
It seems Docker can't change the entrypoint after a container has been created. But you can set a custom entrypoint script and change its contents before the next restart.
For example you run a container like this:
docker run --name c --entrypoint /boot -v "$(pwd)/boot":/boot $image
Here is the boot entry point:
#!/bin/bash
command_a
When you need to restart c with a different command, you just change the boot script:
#!/bin/bash
command_b
And restart:
docker restart c
My Problem:
I started a container with docker run <IMAGE_NAME>
And then added some files to this container
Then I closed the container and tried to start it again with the same command as above.
But when I checked the new files, they were missing
When I ran docker ps -a, I could see two containers.
That means every time I ran the docker run <IMAGE_NAME> command, a new container was getting created.
Solution:
To work on the same container you created in the first place, follow these steps:
docker ps -a to get the ID of your container
docker container start <CONTAINER_ID> to start existing container
Then you can continue from where you left. e.g. docker exec -it <CONTAINER_ID> /bin/bash
You can then decide to create a new image out of it
I have found a simple command
docker start -a [container_name]
This will do the trick
Or
docker start [container_name]
then
docker exec -it [container_name] bash
I had a MariaDB container that was continuously crashing on startup because of corrupted InnoDB tables.
What I did to solve my problem was:
copy out the docker-entrypoint.sh from the container to the local file system (docker cp)
edit it to include the needed command line parameter (--innodb-force-recovery=1 in my case)
copy the edited file back into the docker container, overwriting the existing entrypoint script (see the sketch below).
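Here is what those three steps look like as commands (the container name mariadb and the entrypoint path are assumptions; check where your image actually keeps its entrypoint script):
docker cp mariadb:/usr/local/bin/docker-entrypoint.sh .
# edit docker-entrypoint.sh locally, adding --innodb-force-recovery=1 to the server startup line
docker cp docker-entrypoint.sh mariadb:/usr/local/bin/docker-entrypoint.sh
docker start mariadb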
To me, Docker always leaves the impression that it was created for a hobby system; it works well for that.
If something fails or doesn't work, don't expect to have a professional solution.
That said: Docker not only does NOT support such basic administrative tasks, it tries to prevent them.
Solution:
cd /var/lib/docker/overlay2/
find | grep somechangedfile
# You now can see the changed file from your container in a hexcoded folder/diff
cd hexcoded-folder/diff
Create an entrypoint.sh (make sure to back up an existing one if it's there)
cat > entrypoint.sh
#!/bin/bash
while ((1)); do sleep 1; done;
Ctrl+D (end of input)
chmod +x entrypoint.sh
docker stop <container>
docker start <container>
You now have your docker container running an endless loop instead of its original entrypoint; you can exec bash into it, or do whatever else you need.
When finished stop the container, remove/rename your custom entrypoint.
It seems like most of the time people are running into this while modifying a config file, which is what I did. I was trying to bypass CORS for a PHP/Apache server with a Vue SPA as my entry point. Anyway, if you know the file you horked, a simple solution that worked for me was:
Copy the file you horked out of the image:
docker cp bt-php:/etc/apache2/apache2.conf .
Fix it locally
Copy it back in
docker cp apache2.conf bt-php:/etc/apache2/apache2.conf
Start your container back up
*Bonus points - Since this file is being modified, add it to your Compose or Build scripts so that when you do get it right it will be baked into the image!
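As a sketch of that bonus step, assuming your image is built from a Dockerfile that sits next to the corrected apache2.conf, a single line bakes it in:
COPY apache2.conf /etc/apache2/apache2.conf
Or, during development, bind-mount it from your Compose file with - ./apache2.conf:/etc/apache2/apache2.conf under the service's volumes: key.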
There is lots of discussion surrounding this, so I thought I would add one more approach which I did not immediately see listed above:
If the full path to the entrypoint for the container is known (or discoverable via inspection), it can be copied in and out of the stopped container using docker cp. This means you can copy the original out of the container, edit a copy of it to start a bash shell (or a long sleep timer) instead of whatever it was doing, and then restart the container. The running container can now be further edited with the bash shell to correct any problems. When you have finished editing, another docker cp of the original entrypoint back into the container and a re-restart should do the trick.
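A sketch of that round trip, assuming a stopped container named broken whose entrypoint script lives at /entrypoint.sh (both names are assumptions for illustration):
docker cp broken:/entrypoint.sh ./entrypoint.sh.orig
cp entrypoint.sh.orig entrypoint.sh
# edit entrypoint.sh so it runs something like "sleep 3600" instead of the original command
docker cp entrypoint.sh broken:/entrypoint.sh
docker start broken
docker exec -it broken bash   # fix whatever is broken
docker cp entrypoint.sh.orig broken:/entrypoint.sh
docker restart broken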
I have used this once to correct a 'quick fix' that I butterfingered and was no longer able to run the container with the normal entrypoint until it was corrected.
I also agree there should be a better way to do this via docker: Maybe an option to 'docker restart' that allows an alternate entrypoint? Hey, maybe that already works with '--entrypoint'? Not sure, didn't try it, left as exercise for reader, let me know if it works. :)