Run executable from host within docker container - docker

I have a docker container and I would like to start a process in the host OS, and then have it execute in the context of the docker container. That is, my executable is a file in the host filesystem, and I want to start a process in the host OS, but I want to contain that process to the container, so that e.g. the process can only access the container's filesystem, etc.
For various reasons I do not want to copy the executable into the container and execute it there.
I do realize that this is a somewhat strange thing to be trying to do with docker containers!

Mount the executable into the container with a volume like this:
$ docker run -v /path/to/executable:/my_exe debian /my_exe
The only caveat is that you will also need to make sure any shared libraries the executable needs are available in the container.
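If you're not sure what the binary needs, you can check its shared-library dependencies on the host first; a statically linked executable avoids the problem entirely. A quick sketch (paths are placeholders):
$ ldd /path/to/executable     # lists the shared libraries the binary loads, or reports "not a dynamic executable"
$ file /path/to/executable    # "statically linked" here means no extra libraries are needed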

Related

When mounting /var/run/docker.sock into a container, which file system is used for volume mounting?

I have a container that contains logic for coordinating the deployment of the microservices on the host - let's call this service the deployer. To achieve that, I have mounted the /var/run/docker.sock file from the host into that deployer container.
So, when performing docker run hello-world from within the deployer container, the host runs it.
This system works as expected, except for one thing I have become unsure about now, since I have seen some unexpected behaviour.
When performing docker run -v "/path/to/src:/path/to/dest" hello-world, what folder will Docker be looking at?
I'm seeing two valid reasonings:
A) It will mount /path/to/src from within the deployer to the hello-world container, since that is the shell that performs the command.
B) It will mount /path/to/src from the host to the hello-world container, since the docker.sock determines the context and the command is actually run on the host.
Which of those is correct?
Moreover, when using relative paths (e.g. in docker-compose), what will be the path that is being used?
It will always use the host filesystem. There isn’t a way to directly mount one container’s filesystem into another.
For example:
host$ sudo docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker sh
0123456789ab# docker run -v /:/host --rm -it busybox sh
13579bdf0246# cat /host/etc/shadow
The last command will print out the host’s encrypted password file, not anything in the intermediate container.
If it isn’t obvious from the example, mounting the Docker socket to programmatically run Docker commands has massive security implications, and you should carefully consider whether it’s actually a good approach for you.
I’m pretty sure relative paths in docker-compose.yml won’t actually work with this setup (because you can’t bind-mount things out of the intermediate container). You’d have to mount the same content into both containers for one to be able to send files to the other. Using named volumes can be helpful here (because the volume names aren’t actually dependent on host paths); depending on what exactly you’re doing, a roundabout path of docker create and then docker cp could work.
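As a sketch of that named-volume approach (the volume name shared-data is made up; the prompts follow the example above): the deployer and the container it launches both mount the same named volume, which the daemon resolves on the host, so content written by one is visible to the other:
host$ docker volume create shared-data
host$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v shared-data:/data docker sh
0123456789ab# echo hello > /data/file
0123456789ab# docker run --rm -v shared-data:/data busybox cat /data/file
hello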
At an implementation level there is only one Docker daemon and it runs on the host. You can publish its socket to various places, but ultimately that daemon receives requests like “create a container that mounts host directory /x/y” and the daemon interprets those requests in the context of the host. It doesn’t know that a request came from a different container (or, potentially, a different host; but see above about security concerns).
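You can see this daemon-side interpretation directly by talking to the socket yourself. For example, listing running containers through the Engine API works the same way from the host or from any container that mounts the socket:
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json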

How to mount command or busybox to docker container?

The image pulled from Docker Hub is a minimal system, without commands like vim, ping, etc., which I sometimes need in a debug environment.
For example, I need ping to test the network or vim to modify a conf file, but I don't want to install them in the container or in the Dockerfile, as they are not necessary at run time.
I have tried installing the commands in my container, which is not convenient.
So, is it possible to mount commands from the host into the container, or even "mount" a busybox into the container?
You should install these tools in your docker container, because this is how things are done. I can't find a single reason not to do so, but in case you can't do it (why??), you can put the necessary binaries into a volume and mount that volume into your container. Something like:
docker run -it -v /my/binaries/here:/binaries:ro image sh
$ ls /binaries
and execute them inside using container path /binaries.
But keep in mind that these binaries usually have dependencies under system paths like /var/lib and others, and when calling them from inside the container, you have to somehow resolve those as well.
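One way around the dependency problem is a statically linked binary; busybox builds are often static, which you can verify first. A sketch, with image and the conf path being placeholders as above:
$ file /bin/busybox    # look for "statically linked"
$ docker run --rm -it -v /bin/busybox:/busybox:ro image sh
# /busybox ping -c 1 8.8.8.8
# /busybox vi /etc/myapp.conf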
If running on Kubernetes, the kubectl command has support for running a debug container that has access to a running container. Check kubectl debug.
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container
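A sketch of what that looks like (pod and container names are placeholders):
$ kubectl debug -it mypod --image=busybox --target=mycontainer
The --target flag puts the ephemeral debug container into the target container's process namespace, so you get debugging tools without installing anything in the original image.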

Run commands on host from container command prompt

I use portainer to manage containers and it works great.
https://portainer.io/
But when I connect to the console, I get the command prompt of the container. Is there any way to run simple commands like ls /home/ that will list the files on the host?
In other words, is there any image that will mount the file system of the host server "as-is"?
Here's an example using docker command line:
$ docker run --rm -it -v ~/Desktop:/Desktop alpine:latest /bin/sh
/ # ls /Desktop/
You can extend the approach as far as you need. Experiment with it, and learn about the different mount options.
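For instance, to see the host's entire filesystem from a container (mounted read-only here, to be safe):
$ docker run --rm -it -v /:/host:ro alpine:latest /bin/sh
/ # ls /host/home/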
I know the Docker app on macOS provides a way to configure default volume mounts. Portainer also claims to provide a volume management screen; I am yet to use it.
Hope this helps.
If you're dealing with services, or an existing, running container, you can in most cases access the shell directly. Let's say you have a container called "meow". You can run:
docker exec -it meow bash
and it will drop you into the bash shell. You'll need to know whether bash is installed in the image, though; if it isn't, try calling sh instead.
The "i" option indicates it should be interactive, and the "t" option indicates it should emulate a TTY terminal. When you're done, you can hit Ctrl+D to exit out of the container.
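If bash isn't in the image, the same command with sh almost always works, since nearly every image ships some POSIX shell:
docker exec -it meow sh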
First of all: You never ever want to do so.
Volumes mounted to containers are used to persist the container's data, since containers are designed to be volatile (the container itself shouldn't persist its state, so restarting the container any number of times should result in the same container state each time it starts). Think of the volume as the database where all the data (the state of the container) should be stored.
Seeing volumes this way makes it easier to decide against sharing the host's entire file system, as the container would then have read-write permissions over the host OS files themselves, which is a huge security threat.
Sharing volumes across containers is considered bad container architecture, let alone sharing the entirety of the host file system.
I would propose simple SSH (or remote desktop) into your host if you need to run commands or tasks on it.
OR, if your container requires access to a specific folder for some reason, you should consider mounting or binding that folder to the container:
docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest
I would recommend copying the content of that folder into a Docker-managed volume (a folder under the docker/volumes tree) and binding the container to this volume instead of the original folder, to minimize the impact of your container on your host's OS.
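One way to do that copy, assuming a host folder /srv/conf and the volume myvol2 from the example above (both names are illustrative), is a throwaway container that mounts both:
docker volume create myvol2
docker run --rm -v /srv/conf:/src:ro -v myvol2:/dest alpine cp -a /src/. /dest/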

Linux+Docker - How to run host's apps from inside Docker container?

I want to know if Docker can run apps installed on the host inside the container, so that I don't need to install the app in each image, which wastes disk space.
I know Linux is different since it requires dependencies and packages locally, but I wonder if it is possible to use it like in a Windows VM.
In Windows Hyper-V, I did this by sharing a network folder containing portable apps with the container and running the apps from inside the Windows VM.
Thank you.
You can mount a directory on your host containing the executables into your container, where it will then be accessible. To do so, use a volume ("Mount a host directory as a data volume" in the Docker docs): mount a host directory (here: /tmp/foo) into your container (here: /foo) and execute a script called foo.sh at the container path /foo/foo.sh:
mkdir /tmp/foo
echo -e '#!/bin/sh\n\necho foo' > /tmp/foo/foo.sh
docker run --rm -v /tmp/foo:/foo alpine sh /foo/foo.sh
=> foo
In the same way, you can add binaries from your host to your container... But I do not think this is how containers are intended to be used: a container should work as a standalone, isolated "lightweight VM". Mounting binaries from the host adds an unnecessary dependency on the host machine, which is not an elegant solution.

Docker exec command not using the mounted directory for /

I am new to docker containers and I am trying to solve a problem I am facing right now.
This is my understanding, based on limited knowledge:
When we create a docker container, Docker creates a local mount and uses it as the root file system for the container.
Now, if I run any commands in the container from the host server using docker exec, docker is not using the mounted partition as the / file system for the container. I mean, it still picks up the binaries and env variables from the host server. Is there any option/alternate solution for making docker use the original mounted directory for docker exec too?
If I access/start the container with docker attach or docker run -i -t <image> /bin/bash, I get the mounted directory as my / file system, which gives me an entirely independent environment from my host system. But this doesn't happen with the docker exec command.
Please help!!
You are operating under a misconception. The docker image only contains what was installed in it. This is usually a very cut down version of an operating system for efficiency reasons.
The docker container is started from an image - and that's a running version, which can change and store state - but may be discarded.
docker run starts a container from an image. You can run the same image multiple times to create completely different containers (which happen to have the same starting point for their content).
docker exec attaches to one of those containers to run a command. So you will only see the things inside it that ... were inside the image, or added post start (like log files). It has no vision of the host filesystem, and may not be the same OS - the only requirement is that it shares elements of the kernel ... although it usually has a selection of the commonly used binaries.
And when you run an image to create a container, you can specify a mount. One of the options when you do this is passing through a host filesystem, with e.g. -v /path/on/host:/path_in/container. But you don't have to; you can use data containers or a docker volume mount instead. e.g. docker run -v /mount creates a mount point within the container, using the docker filesystem, which isn't part of the parent host. This can be used to make a data container with:
docker create -v /path/to/data --name data_for_acontainer some_basic_image
And then mount volumes from that data container on a new one:
docker run -d --volumes-from data_for_acontainer some_app_image
Which will attach that data container onto the /path/to/data mount. But in neither case is the 'host' filesystem touched directly - this is the whole point of dockerising things.
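To see the distinction concretely (container name and paths here are just an illustration): docker exec lands in the container's root filesystem, and only an explicit mount exposes anything from the host:
$ docker run -d --name acontainer -v /tmp/hostdir:/data nginx
$ docker exec acontainer ls /        # the image's root filesystem, not the host's
$ docker exec acontainer ls /data    # only this path comes from the host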
