Docker - Mount Volume while executing the container

I use a Docker container where I want to mount a volume dynamically, i.e. every time I invoke "exec" I want to mount a different host path. This is currently not possible.
My current method (Static):
# First Time
docker run -dit -v <from1>:/<to> --name <NAME> <IMAGE>
docker exec <NAME> bash -c "<my-bash-command>"
# Any following time:
docker stop <NAME>
docker rm <NAME>
docker run -dit -v <from2>:/<to> --name <NAME> <IMAGE>
docker exec <NAME> bash -c "<my-bash-command>"
So currently I have to stop, remove, and recreate the entire container just because I have a different "from" path.
I hope there is a way to create and start the container in the background once, and mount the volume only for the duration of a command execution.
Example (pseudo code, this won't work):
# First Time
docker run -dit --name <NAME> <IMAGE>
docker exec -v <from1>:/<to> <NAME> bash -c "<my-bash-command>"
# Any following time:
docker exec -v <from2>:/<to> <NAME> bash -c "<my-bash-command>"
docker exec -v <from3>:/<to> <NAME> bash -c "<my-bash-command>"
...
Is there a solution for this? I need to keep the same container, and I don't want to create a new container every time I run a command (as I will use persistent data inside the container, which gets tossed away if I remove the container).

The whole idea behind containers is to encapsulate small tasks that are reusable. Containers should be transient, meaning I should be able to delete a container and create a new one without losing data (all data should live outside the container).
If your containers follow this approach, you can run them in the following way:
docker run -v <from2>:/<to> <IMAGE> bash -c "<my-bash-command>"
docker run -v <from3>:/<to> <IMAGE> bash -c "<my-bash-command>"
From the nature of the question and what you are trying to do, I understand that the container has internal state on which the subsequent commands depend, and this is the root cause of the problem.
From the commands that are shared, I don't see anything that ties the containers to each other (e.g. volumes, ports, etc.), so nothing prevents you from running the containers as follows:
# First Time
docker run -dit -v <from1>:/<to> --name <NAME> <IMAGE>
docker exec <NAME> bash -c "<my-bash-command>"
# Any following time:
docker run -dit -v <from2>:/<to> --name <NAME2> <IMAGE>
docker exec <NAME2> bash -c "<my-bash-command>"
If you have dependencies, maybe those dependencies should live in another container; then both the running container and the new container can link to the dependency container and consume the information that is required. You can use the file system, network services, etc. to link the containers.
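If the state you depend on lives in a known directory inside the container, another option along these lines is to keep that state in a named volume and vary only the host bind mount per run. A minimal sketch, assuming the persistent data lives under a hypothetical /state directory inside the image:
docker volume create mystate
docker run --rm -v mystate:/state -v <from1>:/<to> <IMAGE> bash -c "<my-bash-command>"
docker run --rm -v mystate:/state -v <from2>:/<to> <IMAGE> bash -c "<my-bash-command>"
The named volume survives the --rm of each container, so each run sees the same /state but a different host path.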

Related

Any commands hang inside docker container

Any command hangs the terminal inside the docker container.
I log in to the container with docker exec -t php-zts /bin/bash
and then type any elementary command (date, ls, cd /, etc.).
The command hangs.
When I press Ctrl+C I go back to the host machine.
But if I run a command from outside the container, it works normally:
docker exec -t php-zts date
Wed Jan 26 00:04:38 UTC 2022
tty is enabled in docker-compose.yml
docker system prune and other cleanups cannot help me.
I can't identify the problem and have smashed my brain. Please help :(
The solution is to use the -i/--interactive flag (together with -t) when you attach a shell, whether with docker run or docker exec. Here is the relevant section of the documentation:
--interactive, -i    Keep STDIN open even if not attached
You can try to run your container using -i for interactive and -t for tty, which will allow you to navigate and execute commands inside the container:
docker run -it --rm alpine
On the other hand, you can run the container with docker run and then execute commands inside that container, like so:
tail -f /dev/null will keep your container running.
-d will run the command in the background.
docker run --rm -d --name container1 alpine tail -f /dev/null
or
docker run --rm -itd --name container1 alpine sh # You can use -id or -td or -itd
This will allow you to run commands from inside the container.
You can choose sh, bash, or any other shell available in the image.
docker exec -it container1 sh
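Applied to the container from the question, the fix should just be adding -i to the exec call so STDIN stays open:
docker exec -it php-zts /bin/bash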

docker run - autokill container already in use?

I was following this guide on customizing MySQL databases in Docker, and ran this command multiple times after making tweaks to the mounted sql files:
docker run -d -p 3306:3306 --name my-mysql -v /Users/pneedham/dev/docker-testing/sql-scripts:/docker-entrypoint-initdb.d/ -e MYSQL_ROOT_PASSWORD=supersecret -e MYSQL_DATABASE=company mysql
On all subsequent executions of that command, I would see an error like this:
docker: Error response from daemon: Conflict. The container name "/my-mysql" is already in use by container "9dc103de93b7ad0166bb359645c12d49e0aa4a3f2330b5980e455cec24843663". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
What I'd like to know is whether that docker run command can be modified to auto-remove the previous container (if it exists), or whether there is a different command that has the same desired result.
If I were to create a shell script to do that for me, I'd first run docker ps -aqf "name=mysql", and if there is any output, remove that container by running docker rm -f $containerID, and then run the original command.
The docker run command has a --rm argument that deletes the container after the run is completed (see the docs). So, just change your command to:
docker run --rm -d -p 3306:3306 --name my-mysql -v /Users/pneedham/dev/docker-testing/sql-scripts:/docker-entrypoint-initdb.d/ -e MYSQL_ROOT_PASSWORD=supersecret -e MYSQL_DATABASE=company mysql
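If you prefer the scripted approach described in the question (force-removing any previous container before starting a new one), a rough sketch could look like this, assuming the container name my-mysql from the command above:
# remove the previous container if it exists; ignore the error if it does not
docker rm -f my-mysql 2>/dev/null || true
docker run -d -p 3306:3306 --name my-mysql -v /Users/pneedham/dev/docker-testing/sql-scripts:/docker-entrypoint-initdb.d/ -e MYSQL_ROOT_PASSWORD=supersecret -e MYSQL_DATABASE=company mysql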

re-running a script in a docker container

I have created a docker image that includes some python code and a shell script that can execute it. It is going to process a bunch of images from the host system.
This command should create a new container and run it.
sudo docker run -v /host/folder:/container/folder opencv:latest bash /extract-embeddings.sh
At the end, the container exits. If I type the same command, another container is created and exits on completion. But what is the correct usage of containers? Should I use restart, start, or run (and then clean up exited containers afterwards)? It just seems unnecessary to create a new container each time.
I basically just want a docker image containing some code and 3-4 different commands I can execute whenever needed.
Also, the docker start command doesn't seem to accept "bash /extract-embeddings.sh" as parameters; instead it thinks bash and extract-embeddings.sh are containers. So maybe I am misunderstanding the lifecycle of containers or their usage.
edit:
Got it to work with:
docker run -t -d --name opencv -v /host/folder:/container/folder opencv:latest
docker exec -it opencv bash /extract-embeddings.sh
You can write a Dockerfile to create your docker image and keep the scripts in it:
Dockerfile:
FROM opencv:latest
COPY ./your-script /some_folder
Create image:
docker build -t my_image .
Run your container:
docker run -d --name my_container my_image
Run the script inside the container:
docker exec -it <container_id_or_name> bash /some_folder/your-script
Build your own docker image that starts from opencv:latest and sets the command you want to run as the default command. The Dockerfile could look like this:
FROM opencv:latest
CMD ["/bin/bash", "/extract-embeddings.sh"]
Use docker create to create a named container.
sudo docker create --name=processmyimage -v /host/folder:/container/folder myopencv:latest
Then use docker start each time you want to run it.
sudo docker start processmyimage
This works well if there is only one command you want to run. If there is more than one command, I would take the approach of building an image that runs an unrelated command forever (like tail -f /dev/null). Then you can use
sudo docker exec -d <container-name> bash -c "<cmd-to-run>"
for each command.
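Putting that together with the names from the question (the second script name is made up, and this assumes the opencv:latest image lets you override its command), the "one long-running container, many exec calls" pattern might look roughly like this:
docker run -d --name opencv -v /host/folder:/container/folder opencv:latest tail -f /dev/null
docker exec opencv bash /extract-embeddings.sh
docker exec opencv bash /another-script.sh
# clean up when finished
docker stop opencv && docker rm opencv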

Docker exec command without the container ID

How can I do something like:
docker exec -it 06a0076fb4c0 install-smt
But use the name of the container instead
docker exec -it container/container install-smt
I am running a build on CI server so I can not manually input the container ID.
How can I achieve this?
Yes, you can do this by naming the container with --name. Note that your command with container/container is likely referencing an image name and not the container.
➜ ~ docker run --name my_nginx -p 80:80 -d nginx
d122acc37d5bc2a5e03bdb836ca7b9c69670de79063db995bfd6f66b9addfcac
➜ ~ docker exec my_nginx hostname
d122acc37d5b
Although it won't save any typing, you can do something like this if you want to use the image name instead of giving the container a name:
docker run debian
docker exec -it `docker ps -q --filter ancestor=debian` bash
This will only work if you're only running one instance of the debian image.
It does help if you're constantly amending the image when working on a new Dockerfile, and wanting to repeatedly run the same command in each new container to check your changes worked as expected.
I was able to fix this by setting a container name in the docker-compose file and running docker exec -it with the name from the file.
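For reference, a minimal docker-compose.yml sketch (the service and image names here are made up) that pins the container name so it can be used directly with docker exec:
services:
  app:
    image: my-image            # hypothetical image
    container_name: my-app     # then: docker exec -it my-app install-smt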
Thanks to #Héctor, these steps worked for me:
This will start a container named mytapir with a shell running inside it:
docker run -d --name mytapir -it wsmoses/tapir-built:latest bash
After checking with docker ps that the container is running,
docker exec -it mytapir /bin/bash
will spawn a shell in the existing container named mytapir.
You can stop the container as usual with docker stop mytapir, and start it again via docker start mytapir if it is not running (check via docker ps -a).

How to re-mount a docker volume without overriding existing files?

When running Docker, you can mount files and directories using the --volume option. E.g.:
docker run --volume "$(pwd)/local":/remote myimage
I'm running a docker image that defines VOLUMEs in the Dockerfile. I need to access a config file that happens to be inside one of the defined volumes. I'd like to have that file "synced" on the host so that I can edit it. I know I could run docker exec ..., but I hope to circumvent that overhead for only editing one file. I found out that the volumes created by the VOLUME lines are stored in /var/lib/docker/volumes/<HASH>/_data.
Using docker inspect I was able to find the directory that is mounted:
docker inspect gitlab-runner | grep -B 1 '"Destination": "/etc/gitlab-runner"' | head -n 1 | cut -d '"' -f 4
Output:
/var/lib/docker/volumes/9c233c085c36380c6c33035222c16e5d061368c5060cc81dda2a9a713a2b2b3b/_data
So the question is:
Is there a way to re-mount volumes defined in an image? Or to somehow get the directory more easily than with my one-liner above?
EDIT: after comments by zeppelin, I've tried rebinding the volume, with no success:
$ mkdir etc
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
$ docker exec test1 ls /etc/gitlab-runner/
certs
config.toml
$ docker exec test2 ls /etc/gitlab-runner/
# empty. no files
$ ls etc
# also empty
docker inspect shows correctly that the volume is bound to ~/etc, but the files inside the container at /etc/gitlab-runner/ seem lost.
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
You've got two different volume types there. One I call an anonymous volume (a very long uuid visible when you run docker volume ls). The second is a host volume or bind mount that maps a directory on the host directly into the container. So each container you spun up is looking at a different place.
Anonymous volumes and named volumes (docker run -d -v mydata:/etc/gitlab-runner gitlab/gitlab-runner) get initialized to the contents of the image at that directory location. This initialization only happens when the volume is empty and is mounted into a new container. Host volumes, as you've seen, only get the contents of the host filesystem, even if it's empty at that location.
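To make that initialization behaviour concrete, a small experiment (the volume and container names here are made up): the named volume picks up the contents of the image, while the empty host directory from the question does not.
docker volume create runnerdata
docker run -d --name test3 -v runnerdata:/etc/gitlab-runner gitlab/gitlab-runner
docker exec test3 ls /etc/gitlab-runner/   # certs, config.toml copied from the image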
With that background, the short answer to your question is no, you cannot mount a file inside the container back out to your host. But you can copy the file out with several methods, assuming you don't overlay the source of the file with a host volume mount. With a running container, there's the docker cp command. Personally, I like:
docker run --rm -v ~/etc:/target gitlab/gitlab-runner \
cp -av /etc/gitlab-runner/. /target/.
If you have a named volume with data you want to copy in or out, you can use any image with the tools you need to do the copy:
docker run --rm -v mydata:/source -v ~/etc:/target busybox \
cp -av /source/. /target/.
Try to avoid modifying data inside a container from the host directly; it is much nicer to wrap your task into another container that you then start with the --volumes-from option, when that is possible in your case.
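For example, a throwaway helper container that reuses the volumes of test1 (from the experiment above) could read or edit the config without touching /var/lib/docker directly; a minimal sketch:
docker run --rm -it --volumes-from test1 busybox vi /etc/gitlab-runner/config.toml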
Not sure I understood your problem; anyway, as for the documentation you mention:
The VOLUME instruction creates a mount point with the specified name
and marks it as holding externally mounted volumes from native host or
other containers. [...] The docker run command initializes the newly
created volume with any data that exists at the specified location
within the base image.
So, following the example Dockerfile, after having built the image
docker build -t mytest .
and having the container running
docker run -d -ti --name mytestcontainer mytest /bin/bash
you can access it from the container itself, e.g.
docker exec -ti mytestcontainer ls -l /myvol/greeting
docker exec -ti mytestcontainer cat /myvol/greeting
Hope it helps.
