How to navigate to a different folder in a prebuilt Docker container? - docker

I'm using a prebuilt container from Dockerhub. When I run the container it acts like it's in a folder called workspace, since my run command sudo docker run -it shubhamgoel/birds:bigbang bash returns root@eg2e775g0a1b:/workspace#
I don't know how to navigate to the correct folder. I need to run this container in a folder /home/s/ucmr.
If I do
sudo docker run -it shubhamgoel/birds:bigbang bash -c "cd:/home/s/ucmr"
I get
bash: cd:/home/s/ucmr: No such file or directory
How do I navigate to the correct folder with this prebuilt container? Thank you.
Edit: I've tried
sudo docker run -v /kitty:/dog --name kittycat -it shubhamgoel/birds:bigbang
and when I search for 'dog' on my disk there's no such folder. Also when I type in mkdir frog and search for 'frog' on my disk there's no such folder...

docker run -it shubhamgoel/birds:bigbang bash -c "cd:/home/s/ucmr" is wrong for two reasons. The first has already been covered by the other answer (wrong syntax for the cd command). The other is that using the -it docker option with a non-interactive bash is essentially meaningless: bash's -c option means "execute whatever is between the double quotes and return to the caller", and that last part makes the interactivity vanish.
A first naive solution, but still working, could be creating another shell like this:
docker run -it shubhamgoel/birds:bigbang bash -c "cd /home/s/ucmr && bash"
However, docker is far smarter and more flexible, and lets you override some Dockerfile directives, for instance the WORKDIR:
docker run -it -w="/home/s/ucmr" shubhamgoel/birds:bigbang bash
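To confirm the override took effect, a quick sanity check (a sketch; the image tag is taken from the question) is to run pwd as the container command:
# Print the directory the container starts in; expected output: /home/s/ucmr
sudo docker run --rm -w /home/s/ucmr shubhamgoel/birds:bigbang pwd
As a side note, -w also creates the directory inside the container if it doesn't already exist.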

Related

Docker tutorial: docker run -it ubuntu ls / gives me a no file/directory error using git bash (Windows)

I'm on the part of the tutorial where it talks about data persistence.
First, I run this command to put a random number into a text file within an ubuntu image:
docker run -d ubuntu bash -c "shuf -i 1-10000 -n 1 -o /data.txt && tail -f /dev/null"
I think I understand this line pretty well.
Next, the instructions ask me to start a new container (the same image) and I will see that the file is not the same:
docker run -it ubuntu ls /
However, when I run the above command, I get the following error:
ls: cannot access 'C:/Program Files/Git/': No such file or directory
I'm running Windows 10 using Git Bash, and this is being done through VS Code.
For now, I've gotten around this issue by re-running the exact command (docker run -d ubuntu bash -c "shuf -i 1-10000 -n 1 -o /data.txt && tail -f /dev/null"), but I would like to know why the docker run -it ubuntu ls / instructions failed, and what the solution is?
I managed to solve the issue, so I am posting the solution here in case people come across the same issue in the future: Git Bash rewrites absolute paths before passing them on, so that path conversion should be disabled.
Put this into .bashrc to correct the way paths are handled:
# Workaround for Docker for Windows in Git Bash.
docker()
{
(export MSYS_NO_PATHCONV=1; "docker.exe" "$@")
}
Unfortunately, this doesn't work in scenarios where docker run is called from npm scripts, etc. Volume mapping will still break.
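For one-off commands outside of that wrapper function, the same variable can also be set inline (a minimal sketch of the same workaround):
# Disable MSYS path conversion for a single invocation
MSYS_NO_PATHCONV=1 docker run -it ubuntu ls /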
See here to continue exploring the issue and possible workarounds.

Execute local shell script using docker run interactive

Can I execute a local shell script within a docker container using docker run -it ?
Here is what I can do:
$ docker run -it 5ee0b7440be5
bash-4.2# echo "Hello"
Hello
bash-4.2# exit
exit
I have a shell script on my local machine
hello.sh:
echo "Hello"
I would like to execute the local shell script within the container and read the value returned:
$ docker run -it 5e3337440be5 #Some way of passing a reference to hello.sh to the container.
Hello
A specific design goal of Docker is that you can't. A container can't access the host filesystem at all, except to the extent that an administrator explicitly mounts parts of the filesystem into the container. (See @tentative's answer for a way to do this for your use case.)
In most cases this means you need to COPY all of the scripts and support tools into your image. You can create a container running any command you want, and one typical approach is to set the image's CMD to the thing the container will normally do (like run a Web server) but to allow running the container with a different command (an admin task, a background worker, ...).
# Dockerfile
FROM alpine
...
COPY hello.sh /usr/local/bin
...
EXPOSE 80
CMD httpd -f -h /var/www
docker build -t my/image .
docker run -d -p 8000:80 --name web my/image
docker run --rm --name hello my/image \
hello.sh
In normal operation you should not need docker exec, though it's really useful for debugging. If you are in a situation where you're really stuck, you need more diagnostic tools to understand how to reproduce a situation, and you don't have a choice but to look inside the running container, you can also docker cp the script or tool into the container before you docker exec there. If you do this, remember that the image also needs to contain any dependencies for the tool (interpreters like Python or GNU Bash, C shared libraries), and that any docker cp'd files will be lost when the container exits.
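As a sketch of that debugging pattern (reusing the web container and hello.sh script from the example above):
# Copy the script into the running container, then execute it there
docker cp hello.sh web:/tmp/hello.sh
docker exec web sh /tmp/hello.sh
# /tmp/hello.sh disappears again once the container is removed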
You can use a bind-mount to mount a local file to the container and execute it. When you do that, however, be aware that you'll need to be providing the container process with write/execute access to the folder or specific script you want to run. Depending on your objective, using Docker for this purpose may not be the best idea.
See @David Maze's answer for reasons why. However, here's how you can do it:
Assuming you're on a Unix-based system and the hello.sh script is in your current directory, you can mount that single script into the container with -v $(pwd)/hello.sh:/home/hello.sh.
This command mounts the file into your container and starts a shell in the folder where it was mounted:
docker run -it -v $(pwd)/hello.sh:/home/hello.sh --workdir /home ubuntu:20.04 /bin/sh
root@987eb876b:/home# ./hello.sh
Hello World!
This command will run that script directly and save the output into the variable output:
output=$(docker run -it -v $(pwd)/hello.sh:/home/hello.sh ubuntu:20.04 /home/hello.sh)
echo $output
Hello World!
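One caveat worth adding (not from the original answer): allocating a pseudo-TTY with -t can leave a trailing carriage return in the captured text, so when the goal is capturing output it may be cleaner to drop -it:
# Capture output without a pseudo-TTY to avoid stray \r characters
output=$(docker run --rm -v $(pwd)/hello.sh:/home/hello.sh ubuntu:20.04 /home/hello.sh)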
References for more information:
https://docs.docker.com/storage/bind-mounts/#start-a-container-with-a-bind-mount
https://docs.docker.com/storage/bind-mounts/#use-a-read-only-bind-mount

running docker container without /bin/bash command

I create a docker container with
sudo docker run -it ubuntu /bin/bash
In the book The Docker Book I read:
The container only runs for as long as the command we specified, /bin/bash, is running.
Didn't I already create a terminal with the -it option, so that /bin/bash isn't required? Will anything change if I don't pass any command to docker run?
You will get the same behavior if you run
sudo docker run -it ubuntu
because the ubuntu docker image specifies /bin/bash as the default command. You can see that in the ubuntu Dockerfile. As @tadman wrote in their answer, providing a command (like /bin/bash) overrides the default CMD.
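For reference, the relevant line at the end of that Dockerfile looks roughly like this (the exact form varies between releases):
# Final instruction in the ubuntu image's Dockerfile
CMD ["/bin/bash"]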
In addition, -it does not imply a bash terminal. -t allocates a pseudo-tty, and -i keeps STDIN open even if not attached. See the documentation for further details.
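A quick way to see the difference (a sketch): with -i alone no terminal is allocated, but stdin still works, so you can pipe commands in:
# Pipe a command into the container via stdin; -i suffices, no -t needed
echo 'echo hello from inside' | docker run -i --rm ubuntu bash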
That's an override to the default CMD specification. You can run a container with defaults, that's perfectly normal, but /bin/bash is a trick to pop open a shell so you can walk around and check out the built container to see if it's been assembled and configured correctly.

re-running a script in a docker container

I have created a docker image that includes some python code and a shell script that can execute it. It is going to process a bunch of images from the host system.
This command should create a new container and run it.
sudo docker run -v /host/folder:/container/folder opencv:latest bash /extract-embeddings.sh
At the end, the container exits. If I type the same command, then another container is created and exited at completion. But what is the correct usage of containers? Should I use restart, start or run (and then clean up exited containers after)? It just seems unnecessary to create a new container each time.
I basically just want a docker image containing some code and 3-4 different commands I can execute whenever needed.
And the docker start command doesn't seem to accept "bash /extract-embeddings.sh" as parameters; instead it thinks bash and extract-embeddings.sh are containers. So maybe I am misunderstanding the lifecycle of containers or the usage.
edit:
Got it to work with:
docker run -t -d --name opencv -v /host/folder:/container/folder opencv:latest
docker exec -it opencv bash /extract-embeddings.sh
You can write a Dockerfile to create your docker image and keep the scripts in it.
Dockerfile:
FROM opencv:latest
COPY ./your-script /some_folder
Create image:
docker build -t my_image .
Run your container:
docker run -d --name my_container my_image
Run the script inside the container:
docker exec -it <container_id_or_name> bash /some_folder/your-script
Build your own docker image that starts from opencv:latest and set the command you want to run as the default. The Dockerfile could look like:
FROM opencv:latest
CMD ["/bin/bash", "/extract-embeddings.sh"]
Use docker create to create a named container.
sudo docker create --name=processmyimage -v /host/folder:/container/folder myopencv:latest
Then use docker start each time you want to run it.
sudo docker start processmyimage
This works well if there is only one command you want to run. If there is more than one command, I would take the approach of building an image that runs an unrelated command forever (like tail -f /dev/null). Then you can use
sudo docker exec -d <container-name> /bin/bash <cmd-to-run>
for each command.
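A sketch of that keep-alive pattern, with a hypothetical container name based on the question:
# Start a long-lived container that just idles
sudo docker run -d --name opencv-tools -v /host/folder:/container/folder opencv:latest tail -f /dev/null
# Run each of the 3-4 commands on demand, as often as needed
sudo docker exec opencv-tools bash /extract-embeddings.sh
# Clean up when finished
sudo docker stop opencv-tools && sudo docker rm opencv-tools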

Start and attach a docker container with X11 forwarding

There are various articles like this, this and this and many more that explain how to use X11 forwarding to run GUI apps on Docker. I am using a Centos Docker container.
However, all of these approaches use
docker run
with all appropriate options in order to visualize the result. Any use of docker run creates a new container and performs the operation on top of that.
A way to work in the same container is to use docker start followed by docker attach, and then execute the commands at the container's prompt. Additionally, the script (let's say xyz.sh) that I intend to run in the Docker container resides inside a folder MyFiles in the root directory of the container and accepts a parameter as well.
So is there a way to run the script using docker start and/or docker attach while also X11-forwarding it?
This is what I have tried, although I would like to avoid docker run and instead use docker start and docker attach:
sudo docker run -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
centos \
cd MyFiles \
./xyz.sh param1
export containerId=$(docker ps -l -q)
This in turn throws up an error as below -
/usr/bin/cd: line 2: cd: MyFiles/: No such file or directory
How can I run the script xyz.sh under MyFiles on the Docker container using docker start and docker attach?
Also, since the location and the name of the script may vary, I would like to know whether it is mandatory to include each of these paths in the system PATH variable on the Docker container, or whether it can be done at runtime as well?
It looks to me like your problem is not with X11 forwarding but with general Docker syntax.
You could do it rather simply:
sudo docker run -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-w /MyFiles \
--rm \
centos \
bash -c "./xyz.sh param1"
I added:
--rm to avoid stacking old dead containers.
-w workdir, obvious meaning
/bin/bash -c to make sure your script is interpreted by bash.
How to do it without docker run:
run is actually like create followed by start. You can split it into two steps if you prefer.
If you want to attach to a container, it must be running first. And for it to be running, there must be a process currently running inside.
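Putting those two steps together with the X11 options from above (a sketch; the container name is illustrative):
# Create the container once, with the same options you would give docker run
sudo docker create -it --name gui-app \
    --env="DISPLAY" \
    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    -w /MyFiles centos bash
# Start it, then attach to the bash prompt and run ./xyz.sh param1 there
sudo docker start gui-app
sudo docker attach gui-app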
