Docker run uses host PATH when chaining commands

I have written an image that bundles utils to run commands using several CLIs. I want to run this as an executable as follows:
docker run my_image cli command
where cli is my custom CLI and command is a command for that CLI.
When I build my image I have the following instruction in the Dockerfile:
ENV PATH="/cli/scripts:${PATH}"
The above works if I do not chain commands to the container. If I chain commands it stops working:
docker run my_image cli command && cli anothercommand
Command 'cli' not found, but can be installed with...
Where the first command works and the other fails.
So the logical conclusion is that cli is missing from PATH. I tried to verify that with:
docker run my_image printenv PATH
This actually outputs the container's PATH, and everything looks alright. So I tried to chain this command too:
docker run my_image printenv PATH && printenv PATH
And sure enough, this outputs first the container's PATH and then the PATH of my system.
What is the reason for this? How do I work around it?

When you type a command into your shell, your local shell processes it first before any command gets run. It sees (reformatted)
docker run my_image cli command \
&& \
cli anothercommand
That is, your host's shell picks up the &&, so the host first runs docker run and then runs cli anothercommand (if the container exited successfully).
You can tell the container to run a shell, and then the container shell will handle things like command chaining, redirections, and environment variables:
docker run my_image sh -c 'cli command && cli anothercommand'
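The same quoting applies to the printenv experiment from the question; the single quotes also keep the host shell from splitting the command at the &&:
docker run my_image sh -c 'printenv PATH && printenv PATH'
This prints the container's PATH twice, confirming that both halves now run inside the container.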
If this is more than occasional use, also consider writing this into a shell script:
#!/bin/sh
set -e
cli command
cli anothercommand
COPY the script into your Docker image, and then you can docker run my_image cli_commands.sh or some such.
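A minimal sketch of the image-side change, assuming the script is saved as cli_commands.sh next to the Dockerfile (the file name and destination are illustrative):
# Dockerfile additions
COPY cli_commands.sh /cli/scripts/
RUN chmod +x /cli/scripts/cli_commands.sh
Because /cli/scripts is already on the image's PATH, docker run my_image cli_commands.sh then resolves without any path prefix.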

Related

Execute local shell script using docker run interactive

Can I execute a local shell script within a Docker container using docker run -it?
Here is what I can do:
$ docker run -it 5ee0b7440be5
bash-4.2# echo "Hello"
Hello
bash-4.2# exit
exit
I have a shell script on my local machine
hello.sh:
echo "Hello"
I would like to execute the local shell script within the container and read the value returned:
$ docker run -it 5e3337440be5 #Some way of passing a reference to hello.sh to the container.
Hello
A specific design goal of Docker is that you can't. A container can't access the host filesystem at all, except to the extent that an administrator explicitly mounts parts of the filesystem into the container. (See @tentative's answer for a way to do this for your use case.)
In most cases this means you need to COPY all of the scripts and support tools into your image. You can create a container running any command you want, and one typical approach is to set the image's CMD to the thing the container normally does (like running a web server) but to allow running the container with a different command (an admin task, a background worker, ...).
# Dockerfile
FROM alpine
...
COPY hello.sh /usr/local/bin
...
EXPOSE 80
CMD httpd -f -h /var/www
docker build -t my/image .
docker run -d -p 8000:80 --name web my/image
docker run --rm --name hello my/image \
hello.sh
In normal operation you should not need docker exec, though it's really useful for debugging. If you are really stuck, need more diagnostic tools to understand how to reproduce a situation, and have no choice but to look inside the running container, you can also docker cp the script or tool into the container before you docker exec there. If you do this, remember that the image also needs to contain any dependencies for the tool (interpreters like Python or GNU Bash, C shared libraries), and that any docker cp'd files will be lost when the container exits.
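A sketch of that debugging flow, assuming a running container named web and a local diagnose.sh (both names are hypothetical):
docker cp diagnose.sh web:/tmp/diagnose.sh
docker exec web sh /tmp/diagnose.sh
Running the script through sh sidesteps any missing execute permission on the copied file.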
You can use a bind-mount to mount a local file into the container and execute it. When you do that, however, be aware that the container process needs read and execute access to the folder or the specific script you want to run. Depending on your objective, using Docker for this purpose may not be the best idea.
See @David Maze's answer for reasons why. However, here's how you can do it:
Assuming you're on a Unix-based system and the hello.sh script is in your current directory, you can mount that single script into the container with -v "$(pwd)/hello.sh:/home/hello.sh".
This command mounts the file into the container, sets the working directory to the folder where you mounted it, and starts a shell:
docker run -it -v "$(pwd)/hello.sh:/home/hello.sh" --workdir /home ubuntu:20.04 /bin/sh
root@987eb876b:/home# ./hello.sh
Hello World!
This command will run that script directly and save the output into the variable output (the -it flags are dropped here, since a pseudo-terminal would add carriage returns to the captured text, and the mount target must match the path being executed):
output=$(docker run -v "$(pwd)/hello.sh:/home/hello.sh" ubuntu:20.04 /home/hello.sh)
echo $output
Hello World!
References for more information:
https://docs.docker.com/storage/bind-mounts/#start-a-container-with-a-bind-mount
https://docs.docker.com/storage/bind-mounts/#use-a-read-only-bind-mount

Run commands in Docker during run process

I want to be able to run a docker run ... command for my custom Ubuntu image where the container runs two commands as if they were typed as soon as the container starts. I have a local folder mounted into the container, with custom code inside the mounted folder; I want running the container to also run cd Project and ./a.out inside it, but I am not sure how to do that in one long command.
I have tried docker run --mount type=bind,source="/home/ec2-user/environment/Project",target="Project" myubuntu cd Project && ./a.out but I get an "OCI runtime create failed" error.
I have also tried docker run --mount type=bind,source="/home/ec2-user/environment/Project",target="Project" myubuntu -c 'cd Project && ./a.out' but get the same error.
Ultimately, it would be nice to have my mounted directory, cd Project, ./a.out, and an exit command in my Dockerfile, so that a simple docker run myubuntu opens the container, runs the compiled code within a.out, and then exits. But I know that baking this into the Dockerfile would require the image to be rebuilt every time that local folder changes. So that leaves me with being able to open the container, run my two commands, and exit the container with one docker run command line.
I think you want to start a shell that runs your two commands. Passing cd directly to docker run fails because cd is a shell built-in, not an executable the runtime can start:
docker run --mount ... myubuntu /bin/bash -c 'cd somewhere && do something'
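Applied to the command from the question (note that a bind-mount target must be an absolute path such as /Project):
docker run --mount type=bind,source="/home/ec2-user/environment/Project",target="/Project" myubuntu /bin/bash -c 'cd /Project && ./a.out'
The container exits as soon as a.out finishes, which also covers the run-and-exit goal without rebuilding the image.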

Error running interactive Docker on git-bash (prefixing winpty doesn't help)

I'm new to Docker.
I have a simple Dockerfile:
FROM ubuntu:12.04
CMD echo "Test"
I built the image using the docker build command (docker build -t dt_test .).
All that I want to do is run the Docker image interactively in Git Bash. The PATH in Git Bash has been set up to include Docker Toolbox.
When I run the interactive docker run command: "docker run -it dt_test"
it gives me an ERROR:
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
So I've tried prefixing the run command with winpty, and it executes the command but doesn't show me the interactive shell. When I type something, I can't see any of the characters I'm typing in the terminal, and I have to type "reset" to set the terminal back to normal. So I guess the winpty command isn't working.
Questions:
Is there something wrong with the "winpty docker run -it dt_test" command and why doesn't it work?
How can I fix this issue to make my file run interactively?
FYI: when I run the image non-interactively it seems to work fine: it shows "Test" in the terminal, as the Dockerfile specifies.
This looks to be the same issue as "the input device is not a TTY".
Try without the -t:
docker run -i dt_test
And to run it with a different entrypoint (like bash):
docker run -i --entrypoint bash dt_test
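If you do want a true interactive shell from Git Bash, the winpty prefix generally only helps when the container actually runs an interactive process; a sketch combining the two ideas (assuming winpty is on your PATH):
winpty docker run -it --entrypoint bash dt_test
With the image as built, CMD echo "Test" prints and exits immediately, so there is nothing interactive to attach to, which would explain why plain winpty docker run -it dt_test appeared to do nothing useful.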

Execute host shell script from meteor container

I have a shell script on my host. I've installed a Docker container using the meteord image and have it running; however, I would like to execute this shell script inside the meteord container. Is that possible?
Yes, that is possible, but you will have to copy the script into the container as follows:
docker cp <script> <container-name/id>:<path>
docker exec <container-name/id> <path>/<script>
For example:
docker cp script.sh silly_nightingale:/root
docker exec silly_nightingale /root/script.sh
Just make sure the script has executable permissions. Alternatively, you can copy the script into the image at build time in the Dockerfile and run it with docker exec afterwards.
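A minimal sketch of the build-time variant, reusing the names from the example above (the destination path is illustrative):
# Dockerfile additions
COPY script.sh /root/script.sh
RUN chmod +x /root/script.sh
After rebuilding, docker exec silly_nightingale /root/script.sh works the same way with no docker cp step.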
Updated:
You can also use a bind mount (-v) for it, as follows:
docker run -d -v /absolute/path/to/script/dir:/path/in/container <IMAGE>
Now run the script as follow:
docker exec -it <Container-name> bash /path/in/container/script.sh
Afterwards you will be able to see the generated files in /absolute/path/to/script/dir on the host. Also, make sure to use absolute paths in scripts and commands to avoid redirection issues. I hope it helps.

Docker Exec command does not work properly

I have a script, run.sh, that I run as I initialize the container through the docker run command. The script runs successfully. I can also get a bash instance (through docker exec -i -t container-name bash) in the container and run the script successfully (note that by default I have superuser privileges in that shell).
However, when I run the script from the host through docker exec -i -t container-name /run.sh, the script runs but does not produce the outcome it produces through the alternative approaches. I know it runs, as it produces some of the expected behavior, but not all of it. So my main question is: what are the differences between executing a script from an interactive shell inside the container and running the same script through docker exec?
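One concrete difference worth checking (a diagnostic sketch, not a definitive answer): docker exec runs the command directly, without a login shell, so startup files such as /etc/profile and ~/.bashrc are never sourced and environment variables may differ from the interactive session:
docker exec container-name env
docker exec container-name bash -lc /run.sh
The -l flag asks bash for a login shell, which often reconciles the two behaviours.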
