How to keep hosts config after Docker restart

I edited /etc/hosts in a Docker container, but the change is lost when I restart my nginx container. Is it possible to keep the hosts config from being lost when the container restarts? I've read online suggestions to add a parameter to docker run, but my container already exists.

Related

Docker: Does container inherit /etc/hosts from docker host?

Say I have a machine running Docker (the Docker host) and I spin up some containers on it.
I need the containers' services to be able to talk to each other - the containers expose ports, and they also need to be resolvable by hostname (e.g. example.com).
Container A needs to talk to container B with the URL example.com:3000.
I've read this article but I'm not quite sure about "inherit" from the Docker host - will the Docker host's /etc/hosts be appended to the /etc/hosts of containers running on that host?
https://docs.docker.com/engine/reference/run/#managing-etchosts
How to achieve?
Does this "inherit" have any connection to the type of Docker container networking https://docs.docker.com/v17.09/engine/userguide/networking/ ?
It does not inherit the host's /etc/hosts file. The file inside your container is managed by Docker and is only updated through Docker itself, e.g. the --add-host parameter on docker run or extra_hosts in docker-compose (https://docs.docker.com/compose/compose-file/#extra_hosts).
Although if you're just trying to get two containers talking to each other, you can alternatively connect them to the same network. In docker-compose you can create what's called an external network and have all your docker-compose files reference it. You will then be able to connect by using the full Docker container name (e.g. http://project_app_1:3000), as in the sketch below.
See https://docs.docker.com/compose/compose-file/#external
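A minimal sketch of the external-network approach (the network name shared-net, the service name app, and the image are assumptions, not from the original answer):

# create the network once on the host
docker network create shared-net

# docker-compose.yml for one of the projects
services:
  app:
    image: myapp            # placeholder image
    ports:
      - "3000:3000"
    networks:
      - shared-net

networks:
  shared-net:
    external: true

Note that containers on the same user-defined network can also reach each other by service name via Docker's built-in DNS, which is often simpler than the full container name.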

Restart docker container from another container

I'm trying to set up Docker with two containers. One is a web app and the second is a dnsmasq DHCP server.
Docker should update the dnsmasq container's DHCP IP list from an event in the web app. The only option I have so far is to generate the DHCP hosts file and restart the dnsmasq container, but that needs to be done manually on the Docker host, outside the web app container.
Is there a way to restart the service from another container?
The only way to restart a container from another container would be to mount /var/run/docker.sock and use the API. But I wouldn't do that from a webapp for obvious security reasons.
I would share the dhcp hosts file between the containers (with the -v option) and have a script running in the dnsmasq container that checks for changes in this file and restarts the dnsmasq service in the container; there's no need to restart the container itself. You could use Supervisord to start dnsmasq and this script. I would use the --init flag to avoid zombie processes. A sketch of such a watcher follows.
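A minimal sketch of the watcher, assuming the shared file is mounted at /shared/dhcp-hosts and dnsmasq runs under Supervisord with the program name dnsmasq (the path and program name are assumptions):

#!/bin/sh
# Poll the shared hosts file and restart dnsmasq when its checksum changes.
LAST=""
while true; do
  CUR=$(md5sum /shared/dhcp-hosts | cut -d' ' -f1)
  if [ "$CUR" != "$LAST" ]; then
    LAST="$CUR"
    supervisorctl restart dnsmasq   # restarts the service, not the container
  fi
  sleep 5
done

Alternatively, dnsmasq re-reads its hosts files on SIGHUP, so kill -HUP "$(pidof dnsmasq)" avoids even a service restart.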
From your host:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock --name=xxx ubuntu bash
docker cp /usr/bin/docker xxx:/usr/bin/docker
Go inside the container and check unresolved libs:
ldd /usr/bin/docker
Manually copy the missing libs from the host into the container and set up symlinks as required. In my case I had to:
docker cp /usr/lib/x86_64-linux-gnu/libltdl.so.7.3.1 xxx:/usr/lib/x86_64-linux-gnu/libltdl.so.7.3.1
And then inside the container I had to:
ln -sf /usr/lib/x86_64-linux-gnu/libltdl.so.7.3.1 /usr/lib/x86_64-linux-gnu/libltdl.so.7
Inside the container, check again with ldd /usr/bin/docker. If all is well, you can now run docker inside the container.
Note that docker-compose worked right away when I copied it from the host to the container; only for docker did I have to copy the extra library and set up the symlinks.
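For example, with the socket mounted and the binary working, restarting the dnsmasq container from inside this container is just (the container name dnsmasq is an assumption):

docker restart dnsmasq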

How to resolve docker host names (/etc/hosts) in containers

How is it possible to resolve names defined in the Docker host's /etc/hosts from containers?
Containers running on my Docker host can resolve public names (e.g. www.ibm.com), so Docker DNS is working fine.
I would like to resolve names from the Docker host's /etc/hosts (e.g. 172.17.0.1 smtp) from containers.
My final goal is to connect to services running on the Docker host (e.g. an SMTP server) from containers. I know I can use the Docker host IP (172.17.0.1) from containers, but I thought that Docker would have used the Docker host's /etc/hosts to build the containers' resolver files as well.
I am even quite sure I have seen this working a while ago... but I could be wrong.
Any thoughts?
Giovanni
Check out the --add-host flag for the docker command: https://docs.docker.com/engine/reference/run/#managing-etchosts
$ docker run --add-host="smtp:172.17.0.1" container command
In Docker, /etc/hosts cannot be persistently modified at runtime: the daemon manages and regenerates the file, so manual edits are lost when the container restarts. You need to use Docker's API, in this case --add-host, to modify the file.
For docker-compose, use the extra_hosts option.
For the whole "connect to services running in host" problem, see the discussion in this GitHub issue: https://github.com/docker/docker/issues/1143.
The common approach for this problem is to use --add-host with Docker's gateway address for the host, e.g. --add-host="dockerhost:172.17.42.1". Check the issue above for some scripts that find the correct IP and start your containers; a one-liner for the default bridge is sketched below.
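As a sketch, on a default installation you can look up the bridge gateway with docker network inspect and pass it along (the alias dockerhost matches the answer above; the image name is a placeholder):

HOST_IP=$(docker network inspect bridge --format '{{ (index .IPAM.Config 0).Gateway }}')
docker run --add-host="dockerhost:${HOST_IP}" myimage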
You can set up a simple DNS server on the host and point the container's /etc/resolv.conf at the Docker host's DNS server.
For example, with dnsmasq you can add addn-hosts=/etc/hosts to the config file. The container, by using the Docker host's DNS server, will then be able to resolve the entries in the host's /etc/hosts.
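A minimal sketch of that setup, assuming the default bridge gateway 172.17.0.1 (check yours; the image name is a placeholder):

# /etc/dnsmasq.conf on the Docker host
listen-address=172.17.0.1    # answer queries on the docker bridge
addn-hosts=/etc/hosts        # serve the host's /etc/hosts entries

# point the container at that DNS server
docker run --dns 172.17.0.1 myimage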

Access host docker-machine from within container

I have an image that I'm using to run my CI/CD builds (using GitLab CE). I'd like to deploy my app doing something like this from within the container:
eval "$(docker-machine env manager)"
sudo docker stack deploy --compose-file docker-stack.yml web
However, I'd like the docker-machine to access machines defined on the host system since the container will be destroyed and I don't want to include access details in the image.
I've tried a few things
Accessing the Remote Host via docker-machine
Create the docker-machine on the host and mount the MACHINE_STORAGE_PATH so that it is available to the container
Connect to the remote docker-machine manually from within the container and setting the MACHINE_STORAGE_PATH equal to a mounted volume
Mounting the docker socket
In both cases, I can see the machine storage is persisted, but whenever I create a new container and run docker-machine ls none of the machines are listed.
Accessing the Remote Host via DOCKER_HOST
Forward the remote machine's docker port to a local port: docker-machine ssh manager-1 -N -L 2376:localhost:2376
export DOCKER_HOST=:2376
Tell docker to use the same certs that are used by docker-machine: export DOCKER_TLS_VERIFY=1 and export DOCKER_CERT_PATH=/Users/me/.docker/machine/machines/manager-1
Test with docker info
This gives me error during connect: Get https://localhost:2376/v1.26/info: x509: certificate signed by unknown authority
Any ideas on how I can perform a remote deployment from within a container?
Thanks
Don't use docker-machine for this.
Docker-machine stores files in $HOME/.docker/machine, so when you restart with a fresh copy of this folder, all previously defined machines will be removed. You could store this folder as a volume, but there's a much easier way for your purposes.
The solution is to mount the docker socket and run your docker ... commands as normal, either as root or as a user with the same gid as the docker socket (note that group names inside and outside the container may not match, so the gid is what matters). You can skip the docker-machine eval completely since you are running the commands against the local docker socket.
If you need to run commands remotely, I find it easier to define the DOCKER_HOST and DOCKER_TLS_VERIFY variables manually rather than using docker-machine.
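A sketch of that manual setup (the endpoint and cert path are placeholders; the certs would be mounted into the CI container):

export DOCKER_HOST=tcp://manager-1.example.com:2376   # remote daemon (placeholder)
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/certs                        # mounted client certs (placeholder)
docker info                                           # verify the connection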
In case you want to communicate from your CI container to the Docker host you can simply mount the Docker socket when starting the CI container:
docker run -v /var/run/docker.sock:/var/run/docker.sock <gitlab-image>
Now you can run docker commands on the host from within the CI container.

How can you make the Docker container use the host machine's '/etc/hosts' file?

I want to make it so that the Docker containers I spin up use the same /etc/hosts settings as the host machine I run them from. Is there a way to do this?
I know there is an --add-host option with docker run, but that's not exactly what I want, because the host machine's /etc/hosts file may differ from machine to machine, so hardcoding exact IP addresses/hosts with --add-host doesn't work for me.
Use --network=host in the docker run command. This tells Docker to make the container use the host's network stack.
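A quick way to check the effect (assuming a stock ubuntu image; with host networking the container picks up the host's /etc/hosts):

docker run --rm --network=host ubuntu cat /etc/hosts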
Add a standard hosts file -
docker run -it ubuntu cat /etc/hosts
Add a mapping for server 'foo' -
docker run -it --add-host foo:10.0.0.3 ubuntu cat /etc/hosts
Add mappings for multiple servers
docker run -it --add-host foo:10.0.0.3 --add-host bar:10.7.3.21 ubuntu cat /etc/hosts
Reference - Docker Now Supports Adding Host Mappings
extra_hosts (in docker-compose.yml)
https://github.com/compose-spec/compose-spec/blob/master/spec.md#extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
extra_hosts:
  - "somehost:162.242.195.82"
  - "otherhost:50.31.209.229"
Also you can install dnsmasq to the host machine, by the command:
sudo apt-get install dnsmasq
And then you need to add the file /etc/docker/daemon.json with content:
{
  "dns": ["host_ip_address", "8.8.8.8"]
}
After that, you need to restart the Docker service by command sudo service docker restart
This option forces every Docker container to use the host's DNS settings.
Or you can apply it to a single container with the docker run --dns option; docker-compose supports the equivalent dns option.
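A sketch of the single-container and compose variants (host_ip_address is the same placeholder as above):

docker run --dns host_ip_address --dns 8.8.8.8 myimage

# docker-compose.yml equivalent
services:
  app:
    dns:
      - host_ip_address
      - 8.8.8.8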
If you are using docker-compose.yml, the corresponding property is:
services:
  xxx:
    network_mode: "host"
Add this to your run command:
-v /etc/hosts:/etc/hosts
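For example (assuming a stock ubuntu image):

docker run --rm -v /etc/hosts:/etc/hosts ubuntu cat /etc/hosts

Mount it read-only with -v /etc/hosts:/etc/hosts:ro if the container should not be able to change the host's file.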
If trusted users start your containers, you could use a shell function to easily "copy" the /etc/hosts entries that you need:
add_host_opt() { awk "/\\<${1}\\>/ {print \"--add-host $1:\" \$1}" /etc/hosts; }
You can then do:
docker run $(add_host_opt host.name) ubuntu cat /etc/hosts
That way you do not have to hard-code the IP addresses.
The host machine's /etc/hosts file can't be mounted into a container directly, but you can mount a folder into the container. You also need a dnsmasq container.
Create a new folder on the host machine:
mkdir -p ~/new_hosts/
ln /etc/hosts ~/new_hosts/hosts
Mount ~/new_hosts/ into the container:
docker run -it -v ~/new_hosts/:/new_hosts centos /bin/bash
Configure dnsmasq to use /new_hosts/hosts to resolve names.
Change your container's DNS server. Use the dnsmasq container's IP address.
If you change the /etc/hosts file on the host machine, the dnsmasq container's /new_hosts/hosts will change.
I found a problem: the /new_hosts/hosts file inside the dnsmasq container does change, but the new hosts don't resolve. dnsmasq uses inotify to listen for change events, and when you modify the file on the host machine, the dnsmasq inside the container doesn't receive the event, so it doesn't update its configuration. You may need a daemon process that copies the content of /new_hosts/hosts to another file whenever it changes, and point the dnsmasq configuration at that file instead. A sketch of such a copier follows.
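A minimal sketch, assuming dnsmasq is configured with addn-hosts=/etc/dnsmasq-hosts (the target path is an assumption; dnsmasq re-reads its hosts files on SIGHUP):

#!/bin/sh
# Copy the mounted hosts file whenever it changes and tell dnsmasq to re-read it.
while true; do
  if ! cmp -s /new_hosts/hosts /etc/dnsmasq-hosts; then
    cp /new_hosts/hosts /etc/dnsmasq-hosts
    kill -HUP "$(pidof dnsmasq)"   # SIGHUP makes dnsmasq reload its hosts files
  fi
  sleep 5
done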
I had the same problem and found that it arguably goes against the containerization concept! However, I solved it by adding each (IP, host) pair from /etc/hosts to an existing container this way:
docker stop your-container-name
systemctl stop docker
vi /var/lib/docker/containers/*your-container-ID*/hostconfig.json
find ExtraHosts in the text and add entries to it, or replace null with:
"ExtraHosts": ["your.domain-name.com:it.s.ip.addr"]
systemctl start docker
docker start your-container-name
If you can stop your container and re-run it, you are in a better situation, so just do that. But if you don't want to destroy your containers, as in my case, this is a workable solution.
If you are running Docker containers inside a virtual machine and there are hosts (other VMs, etc.) you want your containers to be aware of, then depending on what VM software you are using, you will have to ensure that the machine hosting the VM has entries for whatever machines you want the containers to be able to resolve.
This is because the VM and its containers end up with the IP address of the VM's host machine in their resolv.conf file.
IMO, passing the --network=host option when running Docker is a better choice, as suggested by d3ming, than the options suggested in other answers:
Any change in the host's /etc/hosts file is immediately available to the container, which is probably what you want if you have such a requirement in the first place.
It's probably not a good idea to use the -v option to mount the host's /etc/hosts file, as any unintended change by the container will spoil the host's configuration.
