Docker using host libraries - docker

I have Hadoop libraries on my host and I want to use them in the container rather than having them in the container itself. Is there a way I can use my host's libraries in the Docker container?

You can mount a host directory into the container so it will be available inside. For example:
docker run -it -v /opt/myhadooplibs:/myhadooplibs busybox
Now the contents of /opt/myhadooplibs will be available at /myhadooplibs in the container. You can read more about volumes in the Docker documentation.
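If the process inside the container needs to locate those libraries, you can point it at the mount path, for example through an environment variable (HADOOP_HOME below is only illustrative; set whatever your application actually reads):
docker run -it -v /opt/myhadooplibs:/myhadooplibs -e HADOOP_HOME=/myhadooplibs busybox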
Note that by doing this you make your container non-portable, since it now depends on the host.

Related

Docker Windows new container options

When I create a new container in Docker for Windows, there are optional settings to give it a name, and under 'Volumes' there are 'Host Path' and 'Container Path'. Are these for somehow connecting the local host with the container? How does it work?
Volumes in the new Docker Desktop correspond to Docker volumes. You can also use volumes with the Docker CLI's -v flag when you run a container. This mounts a path from your host machine's filesystem (Windows) to your Docker container's filesystem.
For example, if you set Host Path: C:\Users\Username\Projects and Container Path: /home/projects, the contents of your Projects directory on Windows would be "shared" (actually bind-mounted) into your Docker container. The corresponding Docker command would be: docker run -v C:\Users\Username\Projects:/home/projects <image_name>
See the bind-mounts docs for more information.
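If you prefer the more explicit --mount syntax, the same bind mount would look roughly like this (the image name is a placeholder):
docker run --mount type=bind,source=C:\Users\Username\Projects,target=/home/projects <image_name>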

How to access files on the host from a Docker container?

I have a Docker Ubuntu Bionic container on an Ubuntu server host. From the container I can see the host drive mounted as /etc/hosts, which is not a directory. I tried unmounting and remounting it at a different location, but it throws a permission denied error, even when I try as root.
So how do you access the contents of your host system?
Firstly, /etc/hosts is a networking file present on all Linux systems; it is not related to drives or to Docker.
Secondly, if you want to access part of the host filesystem inside a Docker container you need to use volumes. Using the -v flag in a docker run command you can specify a directory on the host to mount into the container, in the format:
-v /path/on/host:/path/inside/container
for example:
docker run -v /path/on/host:/path/inside/container <image_name>
Alternatively, you can copy files between the host and a running container with docker cp. For example, with container id 32162f4ebeb0, from the host shell:
docker cp 32162f4ebeb0:/dir_inside_container/image1.jpg /dir_inside_host/image1.jpg
docker cp /dir_inside_host/image1.jpg 32162f4ebeb0:/dir_inside_container/image1.jpg
The first command copies a file from the container to the host; the second copies it from the host into the container.
Docker directly manages the /etc/hosts files in containers. You can't bind-mount a file there.
Hand-maintaining mappings of host names to IP addresses in multiple places can be tricky to keep up to date. Consider running a DNS server such as BIND or dnsmasq, or using a hosted service like Amazon's Route 53, or a service-discovery system like Consul (which incidentally provides a DNS interface).
If you really need to add entries to a container's /etc/hosts file, the docker run --add-host option or Docker Compose extra_hosts: setting will do it.
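For example (the hostname and IP address below are placeholders):
docker run --add-host my-service.internal:10.0.0.5 <image_name>
or, in docker-compose.yml:
extra_hosts:
  - "my-service.internal:10.0.0.5"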
As a general rule, a container can't access the host's filesystem, except to the extent that the docker run -v option maps specific directories into it. Also as a general rule, you can't directly change mount points in a running container; stop, delete, and recreate it with different -v options.
To link a local folder into a Docker container, run:
docker run -it -v "$(pwd)":/src centos
Here $(pwd) expands to the present working directory on the host (any directory can be used), and /src is the path inside the container where it is mounted.

Is there a way to start a sibling docker container mounting volumes from the host?

The scenario: I have a host that has a running Docker daemon and a working Docker client and socket. I have one Docker container that was started from the host and has the Docker socket mounted within it. It also has the Docker client mounted from the host, so I'm able to issue docker commands at will from within this container using the aforementioned mechanism.
The need: I want to start another Docker container from within this one; in other words, I want to start a sibling Docker container from another sibling container.
The problem: a problem arises when I want to mount files that live on the host filesystem into the sibling container that I want to spin up from the other sibling container. It is a problem because, when issuing docker run, the Docker daemon whose socket is mounted inside the container is really watching the host filesystem. So I need access to the host filesystem from within the Docker container that is trying to start the sibling.
In other words, I need something along the lines of:
# running from within another docker container:
docker run --name another_sibling \
-v {DockerGetHostPath: path_to_host_file}:path_inside_the_sibling \
bash -c 'some_exciting_command'
Is there a way to achieve that? Thanks in advance.
Paths are always resolved on the host; it doesn't matter that you are running the client remotely (or in a container).
Remember: the docker client is just a REST client, and -v always refers to the daemon's filesystem.
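So, to take the question's example, a command like the following issued from inside the sibling-launcher container will work, as long as the source path is spelled out as the host's daemon sees it (paths and image name below are placeholders):
docker run --name another_sibling \
-v /path/on/host:/path_inside_the_sibling \
<image_name> bash -c 'some_exciting_command'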
There are multiple ways to achieve this.
You can always make sure that each container mounts the correct host directory.
You can use --volumes-from, for example:
docker run -it --volumes-from=keen_sanderson --entrypoint=/bin/bash debian
--volumes-from mounts the volumes from the specified container(s).
Or you can use named Docker volumes.
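A minimal sketch of the named-volume approach (the volume and image names are placeholders): because the daemon resolves the volume, it works the same whether the commands are issued from the host or from a sibling container.
docker volume create shared-data
docker run -d --name producer -v shared-data:/data <image_name>
docker run --rm -v shared-data:/data <image_name> ls /data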

Docker: how to control the Docker service on the host from its container?

It is possible to install Docker inside a Docker container.
But how can the Docker service on the host be controlled from a container (to manage other containers)?
If I execute docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):$(which docker) -ti debian and then run docker inside it, this error appears:
docker: error while loading shared libraries: libapparmor.so.1: cannot open shared object file: No such file
The error you're seeing seems very clear: the docker binary requires a shared library that is not present inside the container.
Is your container running the same distribution and version as your host? If it is, you simply need to determine which packages provide the necessary dependencies and install them inside the container.
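For example, on a Debian-based image the missing library is usually provided by the libapparmor1 package (package name assumed; check your distribution's package search if it differs):
apt-get update && apt-get install -y libapparmor1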
If not, you will probably have better luck simply installing docker inside the container, rather than trying to bind-mount it from the host. There is probably a source of recent Docker versions available for Debian.
If your host is a Linux-based machine, you don't need to install Docker inside the container; you can just mount Docker into the container, and whatever you do with it inside the container is just like doing it on the host. I have tested this on an Ubuntu machine (image: https://github.com/mohamnag/ubuntu-git.git) by mounting /usr/bin/docker from the host into /bin/docker inside the container. Inside that container you can then do literally whatever you would have done with Docker on the host (build, stop, list, ...).
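A sketch of that setup, assuming a Linux host with the usual default paths (mount the host's Docker socket as well as the docker binary; depending on how the host's docker binary was built you may still hit missing shared libraries, as in the error above):
docker run -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
ubuntu bash
Inside that container, docker ps, docker build, and so on then talk to the host's daemon.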

Why does Docker have Docker volumes and volume containers?

Why does Docker have Docker volumes and volume containers? What is the primary difference between them? I have read through the Docker docs but couldn't really understand it well.
Docker volumes
You can use Docker volumes to create a new volume in your container and to mount it to a folder of your host. E.g. you could mount the folder /var/log of your Linux host to your container like this:
docker run -d -v /var/log:/opt/my/app/log:rw some/image
This would create a folder called /opt/my/app/log inside your container, and that folder corresponds to /var/log on your Linux host. You could use this to persist data or to share data between your containers.
Docker volume containers
Now, if you mount a host directory to your containers, you somehow break the nice isolation Docker provides. You will "pollute" your host with data from the containers. To prevent this, you could create a dedicated container to store your data. Docker calls this container a "Data Volume Container".
This container will have a volume which you want to share between containers, e.g.:
docker run -d -v /some/data/to/share --name MyDataContainer some/image
This container will run some application (e.g. a database) and has a folder called /some/data/to/share. You can share this folder with another container now:
docker run -d --volumes-from MyDataContainer some/image
This container will also see the same volume as in the previous command. You can share the volume between many containers as you could share a mounted folder of your host. But it will not pollute your host with data - everything is still encapsulated in isolated containers.
My resources
https://docs.docker.com/userguide/dockervolumes/
