I have a shell script on my host. I'm running a Docker container based on the meteord image. The container is running, but I would like to execute this shell script inside the container. Is that possible?
Yes, that is possible, but you will have to copy the script into the container first, as follows:
docker cp <script> <container-name/id>:<path>
docker exec <container-name/id> <path>/<script>
For example:
docker cp script.sh silly_nightingale:/root
docker exec silly_nightingale /root/script.sh
Just make sure the script has executable permissions. Alternatively, you can copy the script into the image at build time in the Dockerfile and run it with docker exec afterwards.
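If the executable bit is missing, docker cp will copy the file as-is, so it's easiest to set it on the host first. A minimal local sketch (using a throwaway script, not anything from your image):

```shell
# Create a trivial script, mark it executable, and verify it runs;
# docker cp preserves these permission bits when copying into a container.
printf '#!/bin/sh\necho "hello from script"\n' > script.sh
chmod +x script.sh
./script.sh
```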
Update:
You can also bind-mount the script's directory into the container, as follows:
docker run -d -v /absolute/path/to/script/dir:/path/in/container <IMAGE>
Now run the script as follows:
docker exec -it <Container-name> bash /path/in/container/script.sh
Afterwards you will be able to see the generated files in /absolute/path/to/script/dir on the host. Also, make sure to use absolute paths in scripts and commands to avoid path issues. I hope this helps.
I am trying to create a .bat file that will back up the database inside the container. After the first command (the one used to enter the container), the next one is ignored.
docker exec -it CONTAINER_NAME /bin/bash
cd /var/opt/mssql/data
Any ideas why? If I manually type cd /var/opt/mssql/data in the opened shell, it works.
When running docker exec -it CONTAINER_NAME /bin/bash, you are opening a bash-shell INSIDE the docker container.
The next command, cd /var/opt/mssql/data, is only executed after the previous command, docker exec -it CONTAINER_NAME /bin/bash, has exited, i.e. after the shell inside the container has been closed.
This means that cd /var/opt/mssql/data is then executed on the local machine, not inside the docker container.
To run a command inside the docker container, use the following command:
docker exec -it CONTAINER_NAME /bin/bash -c "<command>"
Although it may be better to create a script inside the container during the build process or mount a script inside the docker container while starting the container and then simply call this script with the above mentioned command.
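The -c form matters because everything inside the quotes is parsed by the shell inside the container, not by your host's shell. A small host-side sketch of the same quoting rule (no Docker involved):

```shell
# The quoted string is handed to the inner shell as one unit,
# so the cd affects the pwd that follows it:
/bin/bash -c "cd /tmp && pwd"
```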
Can I execute a local shell script within a docker container using docker run -it?
Here is what I can do:
$ docker run -it 5ee0b7440be5
bash-4.2# echo "Hello"
Hello
bash-4.2# exit
exit
I have a shell script on my local machine
hello.sh:
echo "Hello"
I would like to execute the local shell script within the container and read the value returned:
$ docker run -it 5e3337440be5 #Some way of passing a reference to hello.sh to the container.
Hello
A specific design goal of Docker is that you can't. A container can't access the host filesystem at all, except to the extent that an administrator explicitly mounts parts of the filesystem into the container. (See @tentative's answer for a way to do this for your use case.)
In most cases this means you need to COPY all of the scripts and support tools into your image. You can create a container running any command you want, and one typical approach is to set the image's CMD to the thing the container will normally do (like run a Web server) but to allow running the container with a different command (an admin task, a background worker, ...).
# Dockerfile
FROM alpine
...
COPY hello.sh /usr/local/bin
...
EXPOSE 80
CMD httpd -f -h /var/www
docker build -t my/image .
docker run -d -p 8000:80 --name web my/image
docker run --rm --name hello my/image \
hello.sh
In normal operation you should not need docker exec, though it's really useful for debugging. If you are in a situation where you're really stuck, you need more diagnostic tools to understand how to reproduce a situation, and you don't have a choice but to look inside the running container, you can also docker cp the script or tool into the container before you docker exec there. If you do this, remember that the image also needs to contain any dependencies for the tool (interpreters like Python or GNU Bash, C shared libraries), and that any docker cp'd files will be lost when the container exits.
You can use a bind mount to mount a local file into the container and execute it. When you do that, however, be aware that the container process will need read and execute access to the folder or the specific script you want to run. Depending on your objective, using Docker for this purpose may not be the best idea; see @David Maze's answer for reasons why. However, here's how you can do it:
Assuming you're on a Unix-based system and the hello.sh script is in your current directory, you can mount that single script into the container with -v $(pwd)/hello.sh:/home/hello.sh.
This command will mount the file into your container, set the working directory to the folder where it's mounted, and start a shell:
docker run -it -v $(pwd)/hello.sh:/home/hello.sh --workdir /home ubuntu:20.04 /bin/sh
root@987eb876b:/home# ./hello.sh
Hello World!
This command will run that script directly and save the output into the variable output:
output=$(docker run -v $(pwd)/hello.sh:/home/hello.sh ubuntu:20.04 /home/hello.sh)
echo $output
Hello World!
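The capture works because docker run writes the container's stdout to your shell's stdout. The same command-substitution pattern, sketched without Docker (one caveat: omit -t when capturing, since a TTY can append carriage returns to each line):

```shell
# Command substitution stores whatever the child process prints:
output=$(sh -c 'echo "Hello World!"')
echo "$output"
```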
References for more information:
https://docs.docker.com/storage/bind-mounts/#start-a-container-with-a-bind-mount
https://docs.docker.com/storage/bind-mounts/#use-a-read-only-bind-mount
I have written an image that bundles utils to run commands using several CLIs. I want to run this as an executable as follows:
docker run my_image cli command
where cli is my custom CLI and command is a command to that CLI.
When I build my image I have the following instruction in the Dockerfile:
ENV PATH="/cli/scripts:${PATH}"
The above works if I do not chain commands to the container. If I chain commands it stops working:
docker run my_image cli command && cli anothercommand
Command 'cli' not found, but can be installed with...
Where the first command works and the other fails.
So the logical conclusion is that cli is missing from PATH. I tried to verify that with:
docker run my_image printenv PATH
This actually outputs the container's PATH, and everything looks alright. So I tried to chain this command too:
docker run my_image printenv PATH && printenv PATH
And sure enough, this outputs first the container's PATH and then the PATH of my system.
What is the reason for this? How do I work around it?
When you type a command into your shell, your local shell processes it first before any command gets run. It sees (reformatted)
docker run my_image cli command \
&& \
cli anothercommand
That is, your host's shell picks up the &&, so the host first runs docker run and then runs cli anothercommand (if the container exited successfully).
You can tell the container to run a shell instead, and then the container's shell will handle command chaining, redirections, and environment variables:
docker run my_image sh -c 'cli command && cli anothercommand'
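The same host-vs-child split can be seen without Docker; in this sketch the variable DEMO stands in for the image's PATH, existing only inside the child shell:

```shell
# Unquoted: the outer shell runs the command after && itself,
# where DEMO was never set:
DEMO=from-child sh -c 'echo $DEMO' && echo "${DEMO:-not-set-on-host}"
# Quoted: both commands run inside the child shell and see DEMO:
DEMO=from-child sh -c 'echo $DEMO && echo $DEMO'
```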
If this is more than occasional use, also consider writing this into a shell script:
#!/bin/sh
set -e
cli command
cli anothercommand
COPY the script into your Docker image, and then you can docker run my_image cli_commands.sh or some such.
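The set -e line is what makes the wrapper safe to run unattended: if the first cli command fails, the script stops instead of blindly running the second. A local sketch of that behavior (echo and false standing in for the hypothetical cli calls):

```shell
# Only "step-one" is printed; set -e aborts at `false`,
# so the command after it never runs:
out=$(sh -c 'set -e; echo step-one; false; echo step-two') || true
echo "$out"
```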
I have created a docker image that includes some python code and a shell script that can execute it. It is going to process a bunch of images from the host system.
This command should create a new container and run it.
sudo docker run -v /host/folder:/container/folder opencv:latest bash /extract-embeddings.sh
At the end, the container exits. If I type the same command, another container is created and exits on completion. But what is the correct usage of containers? Should I use restart, start or run (and then clean up exited containers afterwards)? It just seems unnecessary to create a new container each time.
I basically just want a docker image containing some code and 3-4 different commands I can execute whenever needed.
And the docker start command doesn't seem to accept "bash /extract-embeddings.sh" as parameters; instead it thinks bash and extract-embeddings.sh are container names. So maybe I am misunderstanding the lifecycle of containers or their usage.
edit:
Got it to work with:
docker run -t -d --name opencv -v /host/folder:/container/folder opencv:latest
docker exec -it opencv bash /extract-embeddings.sh
You can write a Dockerfile to create your docker image and keep the script in it:
Dockerfile:
FROM opencv:latest
COPY ./your-script /some_folder
Create image:
docker build -t my_image .
Run your container:
docker run -d --name my_container my_image
Run the script inside the container:
docker exec -it <container_id_or_name> bash /some_folder/your-script
Build your own docker image that starts from opencv:latest, and set the command you want to run as the default command. The Dockerfile could look like:
FROM opencv:latest
CMD ["/bin/bash", "/extract-embeddings.sh"]
Use docker create to create a named container.
sudo docker create --name=processmyimage -v /host/folder:/container/folder myopencv:latest
Then use docker start each time you want to run it.
sudo docker start processmyimage
This works well if there is only one command you want to run. If there is more than one command, I would take the approach of building an image that runs an unrelated command forever (like tail -f /dev/null). Then you can use
sudo docker exec -d processmyimage /bin/bash -c "<cmd-to-run>"
for each command.
I have a script run.sh that I run when I initialize the container through the docker run command. The script runs successfully. I can also get a bash instance in the container (through docker exec -i -t container-name bash) and run the script successfully (note that by default I have root privileges in that shell).
However, when I run the script from the host through docker exec -i -t container-name /run.sh, the script runs but does not produce the outcome that it produces through the alternative approaches. I know it runs, as it produces some of the expected behavior, but not all of it. So my main question is: what are the differences between executing a script from an interactive shell and running the same script through docker exec?
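One likely difference (a guess based on common causes, not a diagnosis of your exact setup): docker exec runs /run.sh in a non-interactive, non-login shell, so startup files such as /etc/profile or ~/.bashrc are never sourced, and any environment variables or PATH entries they set are missing. The distinction can be seen locally:

```shell
# bash -c gives a non-login shell (like `docker exec container /run.sh`);
# bash -lc gives a login shell, which does source profile files:
bash -c  'shopt -q login_shell && echo login || echo non-login'
bash -lc 'shopt -q login_shell && echo login || echo non-login'
```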