Docker Windows new container options - docker

When I create a new container in Docker Desktop for Windows, there are optional settings to give it a name, and under 'Volumes' there are 'Host Path' and 'Container Path' fields. Are these for somehow connecting the local host with the container? How does it work?

The Volumes settings in the new Docker Desktop correspond to the Docker CLI's -v flag when you run a container. This mounts a path from your host machine's (Windows) filesystem into your Docker container's filesystem.
For example, if you set Host Path: C:\Users\Username\Projects and Container Path: /home/projects, the contents of your Projects directory on Windows would be "shared" (actually bind-mounted) into your Docker container. The corresponding Docker command would be: docker run -v C:\Users\Username\Projects:/home/projects <image_name>
See the bind-mounts docs for more information.
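As a quick sanity check (the paths and the alpine image here are just illustrative), you can start a throwaway container and list the mounted directory to confirm the bind mount works:
# list the Windows Projects directory from inside a temporary Linux container
docker run --rm -v C:\Users\Username\Projects:/home/projects alpine ls /home/projects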

Related

How to access files in host from a Docker Container?

I have a Docker Ubuntu Bionic container on an Ubuntu server host. From the container I can see the host drive is mounted as /etc/hosts, which is not a directory. I tried unmounting and remounting it at a different location, but it throws a permission-denied error, even when I try as root.
So how do you access the contents of your host system?
Firstly, /etc/hosts is a networking file present on all Linux systems; it is not related to drives or Docker.
Secondly, if you want to access part of the host filesystem inside a Docker container you need to use volumes. Using the -v flag in a docker run command you can specify a directory on the host to mount into the container, in the format:
-v /path/on/host:/path/inside/container
for example:
docker run -v /path/on/host:/path/inside/container <image_name>
Example, using docker cp to copy files between the host and a running container:
container id: 32162f4ebeb0
# from the host's bash shell:
# copy a file out of the container onto the host
docker cp 32162f4ebeb0:/dir_inside_container/image1.jpg /dir_inside_host/image1.jpg
# copy a file from the host into the container
docker cp /dir_inside_host/image1.jpg 32162f4ebeb0:/dir_inside_container/image1.jpg
Docker directly manages the /etc/hosts files in containers. You can't bind-mount a file there.
Hand-maintaining mappings of host names to IP addresses in multiple places can be tricky to keep up to date. Consider running a DNS server such as BIND or dnsmasq, or using a hosted service like Amazon's Route 53, or a service-discovery system like Consul (which incidentally provides a DNS interface).
If you really need to add entries to a container's /etc/hosts file, the docker run --add-host option or Docker Compose extra_hosts: setting will do it.
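For example (the hostname and IP address below are made up for illustration):
# start a container with an extra /etc/hosts entry, then print the file
docker run --rm --add-host db.internal:192.0.2.10 alpine cat /etc/hosts
In Compose, the same "db.internal:192.0.2.10" string would go in the service's extra_hosts: list.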
As a general rule, a container can't access the host's filesystem, except to the extent that the docker run -v option maps specific directories into a container. Also as a general rule you can't directly change mount points in a container; stop, delete, and recreate it with different -v options.
Run this command to link a local folder into a Docker container:
docker run -it -v "$(pwd)":/src centos
pwd: the present working directory (you can use any directory)
src: the path inside the container that pwd is linked to
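To confirm the link works (assuming the container was started as above), create a file on one side and check for it on the other:
# inside the container's shell
touch /src/hello.txt
# back on the host, the file shows up in the directory you launched from
ls "$(pwd)/hello.txt"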

How to Attach Network Directory to Docker Container - Windows 7 Host

I'm testing Docker running on my Windows 7 PC. I can mount directories under C:\Users to containers without issue with e.g.
docker run --rm -it -v //c/Users/someuser/:/data/ alpine ash
but when I try to attach a networked location like //server1/data with e.g.
docker run --rm -it -v //server1/data/:/data/ alpine ash
the /data directory in the container appears empty. How do I pass a directory not under C:\Users\ to my Docker containers?
Because my PC was running Windows 7, I'd installed Docker Toolbox, which uses VirtualBox instead of Hyper-V. My understanding is that this means Docker is running inside a VM on my system, so that VM needs to have access to any data I intend to pass to Docker.
To attach network directories (or anything local outside C:\Users) I needed to add them as shared folders in VirtualBox.
VM (default in my case) => Settings => Shared Folders => +
After navigating the file explorer and adding //server1/data to the list of folders shared with the VM 'default', I was able to pass it to the container as a volume using the second command outlined in my original question.
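The same share can also be set up from the command line with VBoxManage; the share name 'data' here is arbitrary, and since (as far as I know) only C:\Users is auto-mounted, the share may need to be mounted manually inside the boot2docker VM:
# on the Windows host: add the network directory as a shared folder on VM 'default'
# (run with the VM stopped, or add --transient for a running VM)
VBoxManage sharedfolder add default --name data --hostpath "\\server1\data"
# inside the VM (docker-machine ssh default): mount the share
sudo mkdir -p /data
sudo mount -t vboxsf data /data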

Docker using host libraries

I have Hadoop libraries on my host and I want to use them in the container rather than having them in the container itself. Is there a way I can use my host's libraries in the Docker container?
You can mount a host directory on the container so it will be available inside, for example:
docker run -it -v /opt/myhadooplibs:/myhadooplibs busybox
Now the contents of /opt/myhadooplibs will be available at /myhadooplibs in the container. You can read more about volumes here.
Note that by doing this you are making your container non-portable, as it depends on the host.
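If the container only needs to read the libraries, a small variation is to mount the directory read-only with the :ro suffix:
# same mount, but the container cannot modify the host's libraries
docker run -it -v /opt/myhadooplibs:/myhadooplibs:ro busybox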

How is docker able to mount a volume from docker client into a docker container running on docker host?

I am using docker toolbox on Mac. The setup looks like:
docker host - Boot2Docker VirtualBox VM running on Mac
docker client - Mac
I am using the following command to run a container with a volume mount: docker run -it -v $PWD/dir_on_docker_client:/dir_inside_container ubuntu:14.04 /bin/bash. I wonder: how is Docker able to mount a volume from the docker client (in this case, the Mac) into a docker container running on the docker host (in this case, the VM running on the Mac)?
The toolbox VM includes a shared directory from the client. /c/Users (C:\Users) on Windows and /Users on Mac.
Directories in these folders, on the client, can be added as volumes in a container.
Note though that if you add, for example, /tmp as a volume, it will be /tmp inside the Toolbox VM, not on your Mac.
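For example (the username and paths are illustrative):
# works: /Users is shared from the Mac into the Toolbox VM
docker run --rm -v /Users/yourname/project:/project alpine ls /project
# also runs, but lists the VM's /tmp rather than the Mac's
docker run --rm -v /tmp:/hosttmp alpine ls /hosttmp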
The main problem is that VirtualBox shares only your home folder with the docker-machine VM, so at the moment you can only share content inside that directory. It's inconvenient, but the only way I found to work around this is the bootlocal.sh file: you can write this file inside your docker-machine VM to mount a new directory after boot.
https://github.com/boot2docker/boot2docker/blob/master/doc/FAQ.md#local-customisation-with-persistent-partition
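A minimal sketch of such a bootlocal.sh, assuming you have already added a VirtualBox shared folder named 'extra' to the machine (see the FAQ above for the details):
#!/bin/sh
# /var/lib/boot2docker/bootlocal.sh -- executed at the end of the VM's boot
mkdir -p /extra
mount -t vboxsf extra /extra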
Yesterday at DockerCon they announced a public beta of "Docker for Mac". I think you can replace docker-machine with this tool; it provides the best experience with Docker on macOS, and it resolves this problem.
https://www.docker.com/products/docker

why does docker have docker volumes and volume containers

Why does Docker have docker volumes and volume containers? What is the primary difference between them? I have read through the Docker docs but couldn't really understand it well.
Docker volumes
You can use Docker volumes to create a new volume in your container and mount it to a folder of your host. E.g. you could mount the folder /var/log of your Linux host into your container like this:
docker run -d -v /var/log:/opt/my/app/log:rw some/image
This would create a folder called /opt/my/app/log inside your container. And this folder will be /var/log on your Linux host. You could use this to persist data or share data between your containers.
Docker volume containers
Now, if you mount a host directory to your containers, you somehow break the nice isolation Docker provides. You will "pollute" your host with data from the containers. To prevent this, you could create a dedicated container to store your data. Docker calls this container a "Data Volume Container".
This container will have a volume which you want to share between containers, e.g.:
docker run -d -v /some/data/to/share --name MyDataContainer some/image
This container will run some application (e.g. a database) and has a folder called /some/data/to/share. You can share this folder with another container now:
docker run -d --volumes-from MyDataContainer some/image
This container will also see the same volume as in the previous command. You can share the volume between many containers as you could share a mounted folder of your host. But it will not pollute your host with data - everything is still encapsulated in isolated containers.
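One common use of this pattern (a variant appears in the volume docs linked below) is backing up the shared data by attaching a throwaway container to the same volume:
# tar up the shared folder into the current directory on the host
docker run --rm --volumes-from MyDataContainer -v $(pwd):/backup busybox tar cvf /backup/backup.tar /some/data/to/share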
My resources
https://docs.docker.com/userguide/dockervolumes/
