When I create a new container in Docker for Windows, there are optional settings to give it a name, and under 'Volumes' there are 'Host Path' and 'Container Path' fields. Are these for somehow connecting the local host with the container? How does it work?
Yes. The 'Host Path'/'Container Path' settings in Docker Desktop correspond to what the Docker CLI's -v flag does when you run a container: they mount a path from your host machine's filesystem (Windows) into your Docker container's filesystem.
For example, if you set Host Path: C:\Users\Username\Projects and Container Path: /home/projects, the contents of your Projects directory on Windows would be "shared" (actually bind-mounted) into your Docker container. The corresponding Docker command would be: docker run -v C:\Users\Username\Projects:/home/projects <image>
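As a quick check (a sketch; the alpine image and the paths are just example values), you can list the mounted directory from inside a throwaway container:

docker run --rm -v C:\Users\Username\Projects:/home/projects alpine ls /home/projects

If the bind mount is working, this prints the contents of your Windows Projects folder.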
See the bind-mounts docs for more information.
I have Hadoop libraries on my host and I want to use them in the container rather than baking them into the container itself. Is there a way I can use my host's libraries in the Docker container?
You can mount a host directory into the container so it will be available inside, for example:
docker run -it -v /opt/myhadooplibs:/myhadooplibs busybox
Now the contents of /opt/myhadooplibs will be available at /myhadooplibs in the container. You can read more about volumes here.
Note that by doing this you are making your container non-portable, as it depends on the host.
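If the container should only read the libraries, you can make the bind mount read-only with the :ro flag (a minimal sketch reusing the paths above):

docker run -it -v /opt/myhadooplibs:/myhadooplibs:ro busybox ls /myhadooplibs

This way the container can use the host's libraries but cannot modify them.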
I am using docker toolbox on Mac. The setup looks like:
docker host - Boot2Docker VirtualBox VM running on Mac
docker client - Mac
I am using the following command to run a container with a volume mount: docker run -it -v $PWD/dir_on_docker_client:/dir_inside_container ubuntu:14.04 /bin/bash. I wonder how Docker is able to mount a volume from the docker client (in this case, the Mac) into a docker container running on the docker host (in this case, the VM running on the Mac)?
The toolbox VM includes a shared directory from the client: /c/Users (C:\Users) on Windows and /Users on Mac.
Directories in these folders, on the client, can be added as volumes in a container.
Note though that if you add, for example, /tmp as a volume, it will be the /tmp inside the toolbox VM, not the one on the Mac.
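To illustrate (the paths are hypothetical): a path under the shared /Users tree really comes from the Mac, while /tmp resolves inside the boot2docker VM:

docker run -it -v /Users/yourname/project:/data ubuntu ls /data   # contents come from the Mac
docker run -it -v /tmp:/data ubuntu ls /data                      # contents come from the VM's /tmp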
The main problem is that VirtualBox only shares your home folder with the docker-machine VM, so at the moment you can only share content inside that directory. It's inconvenient, but the only way I found to work around it is the bootlocal.sh file: you can write this file inside your docker-machine VM to mount additional directories after boot (see the sketch below the link).
https://github.com/boot2docker/boot2docker/blob/master/doc/FAQ.md#local-customisation-with-persistent-partition
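As a rough sketch (it assumes you have already attached a VirtualBox shared folder named extra_share to the VM; the share name and mount point are hypothetical):

# /var/lib/boot2docker/bootlocal.sh -- runs inside the boot2docker VM at boot
mkdir -p /mnt/extra
mount -t vboxsf -o defaults extra_share /mnt/extra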
Yesterday at DockerCon they announced a public beta of "Docker for Mac". I think you can replace docker-machine with this tool; it provides the best Docker experience on macOS, and it resolves this problem:
https://www.docker.com/products/docker
It is possible to install Docker inside a Docker container.
How can I control the Docker host's daemon from inside a container (to manage other containers)?
If I execute docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):$(which docker) -ti debian and then run docker inside the container, this error appears:
docker: error while loading shared libraries: libapparmor.so.1: cannot open shared object file: No such file
The error you're seeing seems very clear: the docker binary requires a shared library that is not present inside the container.
Is your container running the same distribution and version as your host? If it is, you simply need to determine which packages provide the necessary dependencies and install them inside the container.
If not, you will probably have better luck simply installing docker inside the container, rather than trying to bind-mount it from the host. There is probably a source of recent Docker versions available for Debian.
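For example, something like this should work (a sketch; docker.io is the Debian package name for Docker, and it assumes the /var/run/docker.sock mount from the question):

# inside the Debian container
apt-get update && apt-get install -y docker.io
docker ps   # the client reaches the host daemon through the mounted socket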
If your host is a Linux machine, you don't need to install Docker inside the container; you can just mount the docker binary into the container, and whatever you do with it inside the container works just like on the host. I have tested this on an Ubuntu machine (image: https://github.com/mohamnag/ubuntu-git.git) by mounting /usr/bin/docker from the host to /bin/docker inside the container. Inside that container you can then do (build, stop, list, ...) whatever you could do with docker on the host; see the sketch below.
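A minimal sketch of that setup (the ubuntu image is just an example; the socket mount is what lets the client reach the host daemon):

docker run -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/bin/docker:/bin/docker \
    ubuntu
# then, inside the container:
docker ps   # lists the host's containers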
I want to make it so that the Docker container I spin up use the same /etc/hosts settings as on the host machine I run from. Is there a way to do this?
I know there is an --add-host option with docker run, but that's not exactly what I want because the host machine's /etc/hosts file may be different on different machines, so it's not great for me to hardcode exact IP addresses/hosts with --add-host.
Use --network=host in the docker run command. This tells Docker to make the container use the host's network stack. You can learn more here.
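For example (a quick, hypothetical check; with host networking the container should see the same hosts entries as the host):

docker run --rm --network=host ubuntu cat /etc/hosts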
Add a standard hosts file -
docker run -it ubuntu cat /etc/hosts
Add a mapping for server 'foo' -
docker run -it --add-host foo:10.0.0.3 ubuntu cat /etc/hosts
Add mappings for multiple servers -
docker run -it --add-host foo:10.0.0.3 --add-host bar:10.7.3.21 ubuntu cat /etc/hosts
Reference - Docker Now Supports Adding Host Mappings
extra_hosts (in docker-compose.yml)
https://github.com/compose-spec/compose-spec/blob/master/spec.md#extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
extra_hosts:
  - "somehost:162.242.195.82"
  - "otherhost:50.31.209.229"
Alternatively, you can install dnsmasq on the host machine with the command:
sudo apt-get install dnsmasq
Then create the file /etc/docker/daemon.json with this content (host_ip_address is a placeholder for your host's IP address):
{
  "dns": ["host_ip_address", "8.8.8.8"]
}
After that, restart the Docker service with sudo service docker restart.
This option forces every Docker container to use the host's DNS settings.
Or you can enable it for a single container; the command-line options are explained at this link. docker-compose options are also supported (you can read about them at this link).
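For a single container, the --dns flag does the same job (a sketch; 192.0.2.10 stands in for your host's dnsmasq address):

docker run --dns 192.0.2.10 --dns 8.8.8.8 -it ubuntu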
If you are using docker-compose.yml, the corresponding property is:
services:
  xxx:
    network_mode: "host"
Source
Add this to your run command:
-v /etc/hosts:/etc/hosts
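For example (a sketch; the ubuntu image is arbitrary):

docker run -it -v /etc/hosts:/etc/hosts ubuntu cat /etc/hosts

Note that changes the container makes to the file will also affect the host.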
If trusted users start your containers, you could use a shell function to easily "copy" the /etc/hosts entries that you need:
add_host_opt() { awk "/\\<${1}\\>/ {print \"--add-host $1:\" \$1}" /etc/hosts; }
You can then do:
docker run $(add_host_opt host.name) ubuntu cat /etc/hosts
That way you do not have to hard-code the IP addresses.
The host machine's /etc/hosts file can't usefully be mounted into a container on its own: editors typically replace the file with a new inode, which leaves a single-file bind mount pointing at stale data. But you can mount a folder into the container, and you need a dnsmasq container.
Create a new folder on the host machine and hard-link /etc/hosts into it:
mkdir -p ~/new_hosts/
ln /etc/hosts ~/new_hosts/hosts
Mount ~/new_hosts/ into the container:
docker run -it -v ~/new_hosts/:/new_hosts centos /bin/bash
Configure dnsmasq to use /new_hosts/hosts for name resolution.
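A minimal dnsmasq.conf sketch for that step (no-hosts and addn-hosts are standard dnsmasq options; the path matches the mount above):

# /etc/dnsmasq.conf inside the dnsmasq container
no-hosts                      # ignore the container's own /etc/hosts
addn-hosts=/new_hosts/hosts   # resolve names from the mounted file instead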
Change your container's DNS server. Use the dnsmasq container's IP address.
If you change the /etc/hosts file on the host machine, the dnsmasq container's /new_hosts/hosts will change.
I found a problem: the file /new_hosts/hosts in the dnsmasq container does change, but newly added hosts don't resolve. dnsmasq uses inotify to listen for change events, and when you modify the file on the host machine, dnsmasq inside the container never receives the signal, so it doesn't reload its configuration. You may therefore need a small daemon process that periodically copies the /new_hosts/hosts file content to another file, and change the dnsmasq configuration to use that new file; a sketch follows.
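A crude watcher sketch (it assumes dnsmasq is instead pointed at addn-hosts=/new_hosts/hosts.copy; the interval is arbitrary):

#!/bin/sh
# Refresh the copy dnsmasq watches, since inotify events for the
# host-side hard link never reach the container.
while true; do
    cp /new_hosts/hosts /new_hosts/hosts.copy
    sleep 5
done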
I had the same problem, and found that it arguably goes against the containerization concept! However, I solved it by adding each (IP, hostname) pair from /etc/hosts to an existing, running container this way:
docker stop your-container-name
systemctl stop docker
vi /var/lib/docker/containers/*your-container-ID*/hostconfig.json
find ExtraHosts in the text and add entries, or replace null, with
"ExtraHosts": ["your.domain-name.com:it.s.ip.addr"]
systemctl start docker
docker start your-container-name
If you can stop your container and re-run it, you'd be in a better situation, so just do that. But if you don't want to destroy your containers, as in my case, this is a workable solution.
If you are running Docker containers inside a virtual machine, and there are hosts (other VMs, etc.) you want your containers to be aware of, then depending on the VM software you use, you will have to ensure that the machine hosting the VM has entries for whatever machines the containers should be able to resolve.
This is because the VM and its containers will have the IP address of the host machine (of the VMs) in their resolv.conf file.
IMO, passing the --network=host option when running Docker (as suggested by d3ming) is a better option than those suggested in the other answers:
Any change in the host's /etc/hosts file is immediately available to the container, which is probably what you want if you have such a requirement in the first place.
It's probably not a good idea to use the -v option to mount the host's /etc/hosts file, as any unintended change by the container will spoil the host's configuration.