I want to know if Docker can run apps installed on the host inside a container, so that I don't need to install the app in each image, which wastes disk space.
I know Linux is different, since apps require their dependencies and packages to be installed locally, but I wonder if it is possible to use it the way I did with a Windows VM.
In Windows Hyper-V, I did this by sharing a network folder containing portable apps with the container and running the apps from inside the Windows VM.
Thank you.
You can mount a directory on your host containing the executables into your container, where it will then be accessible. To do so, use volumes (see "Mount a host directory as a data volume" in the Docker docs): mount a host directory (here: /tmp/foo) into your container (here: /foo) and execute a script at /foo/foo.sh inside the container:
mkdir /tmp/foo
echo -e '#!/bin/sh\n\necho foo' > /tmp/foo/foo.sh
docker run --rm -v /tmp/foo:/foo alpine sh /foo/foo.sh
=> foo
In the same way, you can make binaries from your host available in your container. However, I do not think this is how Docker is intended to be used: a container should work as a standalone, isolated "lightweight VM". Mounting host binaries adds an unnecessary dependency on your host machine, which is not an elegant solution.
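For example, a host binary can be mounted the same way. This is a minimal sketch, assuming a Linux host whose /bin/ls is compatible with the libraries in the debian image (a dynamically linked binary only works if its shared libraries are also present in the container):
docker run --rm -v /bin/ls:/host-ls debian /host-ls /
Note that the host's ls lists the container's root here, not the host's, because only the binary itself was shared.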
I am using Docker to run four containers for a backend web application. The backend uses buildout to assemble the software.
However, the frontend, which is installed and runs on the host machine (that is, not using Docker), needs to access the buildout directory inside one of the four docker containers.
Moreover, the frontend uses an environment variable called NTI_BUILDOUT_PATH that is defined on the host machine. NTI_BUILDOUT_PATH must point to the buildout directory, which is inside the aforementioned container.
My problem is that I do not know how to define NTI_BUILDOUT_PATH so that it points to the buildout directory inside that container, which the frontend needs for SSL certificates and other purposes.
I have researched around the web and read about volumes and bind mounts but I do not think they can help me in my case.
One way you can do that is by copying your buildout folder to the host machine using docker cp:
docker cp <backend-container-id>:<path-to-buildout> <path-to-host-folder>
For example, if your backend's container ID is d1b5365c5bca and your buildout folder is at /app/buildout inside the container, you can use the following command to copy it to the host:
docker cp d1b5365c5bca:/app/buildout /home/mahmoud/app/buildout
After that, docker rm all your containers and recreate them with a bind mount to the buildout folder on the host. Following the previous example, we'd have:
docker run -v /home/mahmoud/app/buildout:/app/buildout your-backend-image
docker run -v /home/mahmoud/app/buildout:/app/buildout -e NTI_BUILDOUT_PATH=/app/buildout your-frontend-image
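Note that the question says the frontend runs directly on the host rather than in a container; in that case, instead of the second docker run, you would simply point the variable at the host-side copy:
export NTI_BUILDOUT_PATH=/home/mahmoud/app/buildout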
I use portainer to manage containers and it works great.
https://portainer.io/
But when I connect to the console, I get the command prompt of the container. Is there any way to run simple commands like ls /home/ that will list files on the host?
In other words is there any image that will mount the file system of host server "as-is"?
Here's an example using the docker command line:
$ docker run --rm -it -v ~/Desktop:/Desktop alpine:latest /bin/sh
/ # ls /Desktop/
You can extend this approach as far as you need to. Experiment with it, and learn about the different mount options.
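For instance, to answer the question directly, you can mount the host's entire filesystem into a container; mounting it read-only (:ro) is a sensible precaution if you only need to browse:
$ docker run --rm -it -v /:/host:ro alpine:latest /bin/sh
/ # ls /host/home/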
I know the Docker app on macOS provides a way to set default volume mounts. Portainer also claims to provide a volume management screen; I am yet to use it.
Hope this helps.
If you're dealing with services, or an existing, running container, you can in most cases access the shell directly. Let's say you have a container called "meow". You can run:
docker exec -it meow bash
and it will drop you into a bash shell. Note that bash needs to be installed in the container; if it isn't, try calling sh instead.
The "i" option indicates it should be interactive, and the "t" option indicates it should emulate a TTY terminal. When you're done, you can hit Ctrl+D to exit the container.
First of all: You never ever want to do so.
Volumes mounted into containers are used to persist the container's data, because containers are designed to be volatile: the container itself shouldn't persist its state, so restarting a container any number of times should result in the same container state each time it starts. Think of the volume as the database where all the data (the state of the container) is stored.
Seeing volumes this way makes it easier to decide against sharing the host's entire file system: the container would have read/write permissions on the host OS files themselves, which is a huge security threat.
Sharing volumes across containers is considered bad container architecture, let alone sharing the entire host file system.
I would propose simply using SSH (or remote desktop) into your host if you need to run commands or tasks on it.
Or, if your container requires access to a specific folder for some reason, consider mounting or binding that folder into the container:
docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest
I would recommend copying the contents of that folder into a Docker-managed volume (a folder under the docker/volumes tree) and binding the container to this volume instead of the original folder, to minimize the impact of your container on your host's OS.
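Here is a minimal sketch of that copy step, assuming the folder you need is /path/on/host (an illustrative path) and reusing the volume name myvol2 from the example above:
docker volume create myvol2
docker run --rm -v /path/on/host:/src:ro -v myvol2:/dest alpine cp -a /src/. /dest/
The throwaway alpine container copies the data into the managed volume; afterwards your real container only touches myvol2, never the original host folder.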
Is it possible to copy files to the local machine by running a command inside a Docker container? I am aware of docker cp <containerId>:container/file/path /host/file/path, but my understanding is that this has to be run from outside the container. Is there a way to do it, or something similar, from within?
For some context, I have a Python script that is run inside a Docker container with a command like docker run -ti --rm --net=host buildServer:5000/myProgram /myProgram.py -h. I would like to retrieve the files generated by this program so they can be edited. I could run the container in detached mode, docker cp the desired file, and then shut down the container. However, I would like to abstract this away from the user.
Docker containers by design don't have any access to the host filesystem unless you provide it explicitly via volume mounts. So, in your example, you could do something like:
docker run -ti -v /tmp/data:/data --rm --net=host buildServer:5000/myProgram /myProgram.py -h
And within the container, the /data directory would be mapped to /tmp/data on your host. You could then copy files into /data to get at them on your host.
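For example, inside the container your script could simply write or copy its results into /data (output.txt is a hypothetical generated file):
cp output.txt /data/
The file then appears on the host at /tmp/data/output.txt, with no docker cp needed.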
This assumes that you're running Docker on Linux. If you are using Windows or OS X there may be additional steps, since in those environments Docker is actually running on a Linux virtual machine and volume access may or may not behave as expected (I don't use those platforms so I can't comment authoritatively).
For more information:
https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume
I'm running a docker container with a volume /var/my_folder. The data there is persistent: When I close the container it is still there.
But I also want to have the data available on my host, because I want to work on the code with an IDE, which is not installed in my container.
So how can I have a folder /var/my_folder on my host machine which is also available in my container?
I'm working on Linux Mint.
I appreciate your help.
Thanks. :)
Link: Manage data in containers
The basic run command you want is ...
docker run -dt --name containerName -v /path/on/host:/path/in/container imageName
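For the folder in the question, that would look like this (imageName is a placeholder for whatever image your container uses):
docker run -dt --name my_container -v /var/my_folder:/var/my_folder imageName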
The problem is that mounting the volume will (for your purposes) overwrite the contents of that path in the container.
The best way to overcome this is to create the files you want to share (inside the container) AFTER mounting.
The ENTRYPOINT command is executed on docker run. Therefore, if your files are generated as part of your entrypoint script, and not as part of your build, they will be available from the host machine once mounted.
The solution, therefore, is to run the commands that create the files in the ENTRYPOINT script.
Failing this, copy the files to another directory during the build, and have your ENTRYPOINT script copy them back (with cp) at runtime, as sketched below.
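Here is a minimal sketch of that pattern; the /staging directory, /path/in/container, and file names are illustrative assumptions:
# Dockerfile
FROM debian
COPY myfiles/ /staging/
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
And the entrypoint script itself, which runs after the volume is mounted, so its copies land in the bind-mounted directory and become visible on the host:
#!/bin/sh
# entrypoint.sh: copy the staged files into the mounted path, then hand off
cp -a /staging/. /path/in/container/
exec "$@"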
I have a docker container and I would like to start a process in the host OS, and then have it execute in the context of the docker container. That is, my executable is a file in the host filesystem, and I want to start a process in the host OS, but I want to contain that process to the container, so that e.g. the process can only access the container's filesystem, etc.
For various reasons I do not want to copy the executable into the container and execute it there.
I do realize that this is a somewhat strange thing to be trying to do with docker containers!
Mount the executable into the container with a volume like this:
$ docker run -v /path/to/executable:/my_exe debian /my_exe
The only problem is that you will also need to make sure any required libraries are available in the container.
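To see which libraries the executable needs, and whether the container can resolve them, you can run ldd on the mounted binary (this assumes a dynamically linked executable and a glibc-based image such as debian):
$ docker run --rm -v /path/to/executable:/my_exe debian ldd /my_exe
Any library reported as "not found" is one you would have to install in the image or mount in as well.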