Restart docker container from another container - docker

I'm trying to set up Docker with two containers. One is a web app and the second is a dnsmasq DHCP server.
Docker should update the dnsmasq container and the DHCP IP list in response to an event from the web app. The only option I have so far is to generate the DHCP hosts file and restart the dnsmasq container, but that has to be done manually on the Docker host, outside the web app container.
Is there a way to restart the service from another container?

The only way to restart a container from another container would be to mount /var/run/docker.sock and use the API. But I wouldn't do that from a web app, for obvious security reasons.
I would share the DHCP hosts file between the containers (with the -v option) and have a script running in the dnsmasq container that checks for changes to this file and restarts the dnsmasq service inside the container; there's no need to restart the container itself. You could use Supervisord to start both dnsmasq and this script. I would also use the --init flag to avoid zombie processes.
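For example, a minimal watcher script could look like the sketch below. It assumes inotify-tools is installed in the dnsmasq container and that the shared file is mounted at /shared/dhcp-hosts (both are assumptions, not part of the original setup):
#!/bin/sh
# watch-dhcp-hosts.sh - reload dnsmasq whenever the shared DHCP hosts file changes
# (sketch: the path /shared/dhcp-hosts is an assumption; inotify-tools must be installed)
HOSTS_FILE=/shared/dhcp-hosts
while inotifywait -e close_write,modify "$HOSTS_FILE"; do
    # SIGHUP makes dnsmasq re-read its hosts/dhcp-hostsfile files without a full restart;
    # if dnsmasq runs under Supervisord, "supervisorctl restart dnsmasq" would work too
    kill -HUP "$(pidof dnsmasq)"
done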

From your host:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock --name=xxx ubuntu bash
docker cp /usr/bin/docker xxx:/usr/bin/docker
Go inside the container and check for unresolved libraries:
ldd /usr/bin/docker
Manually copy any missing libraries from the host into the container and set up symlinks as required. In my case I had to:
docker cp /usr/lib/x86_64-linux-gnu/libltdl.so.7.3.1 xxx:/usr/lib/x86_64-linux-gnu/libltdl.so.7.3.1
And then inside the container I had to:
ln -sf /usr/lib/x86_64-linux-gnu/libltdl.so.7.3.1 /usr/lib/x86_64-linux-gnu/libltdl.so.7
Inside the container, check again with ldd /usr/bin/docker. If all is well, you can now run docker inside the container.
Note that docker-compose ran right away when I copied it from the host into the container; only for docker did I have to copy the extra library and set up the symlinks.
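Once the binary and its libraries are in place, a quick sanity check from inside the container would be something like this (the commands reach the host daemon through the mounted socket):
# inside the container: these commands talk to the host's Docker daemon via /var/run/docker.sock
docker version
docker ps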

Related

Restart a docker container from another running container

I am using docker-compose for deployment.
I want to restart my "centos-1" container from "centos-2" container. Both containers are running on the same host.
Please suggest how I could achieve this in the simplest, most automated way.
I followed "How to run shell script on host from docker container?" and tried to run a script on the host from the "centos-2" container, but the script executes inside the container, not on the host.
Script:
#!/bin/bash
sudo docker container restart centos-1
Error:
line 2: docker: command not found
(Docker isn't installed inside the centos-2 container.)
You need to:
Install the docker CLI (command-line interface) in the second container. Don't confuse this with a full-scale installation: you don't need the Docker daemon, only the command-line tool (the docker executable).
Share your host's Docker daemon (service) to make it accessible in the second container. That is achieved by simply sharing /var/run/docker.sock when launching the second container, for example:
docker run ... -v "/var/run/docker.sock:/var/run/docker.sock" container2 ...
Now you can execute any docker command, such as docker stop, from the second container, and these commands are happily passed on to your main (and only) Docker daemon.
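Putting the two steps together, a minimal sketch (the image and container names here are placeholders) could be:
# start the second container with the host's Docker socket mounted
docker run -d --name centos-2 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  centos-2-image

# then, from inside centos-2 (with the docker CLI installed there):
docker restart centos-1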
There is an approach from the CI context, called Docker-out-of-Docker (DooD), for controlling the host system's Docker daemon from a running container:
You have to install docker inside your container.
Map your host's Docker daemon into your container using a volume:
-v /var/run/docker.sock:/var/run/docker.sock
Now each docker command inside your container is executed against the host's Docker installation. For example, if you run docker image list inside your container, you should see the same list as if you ran the command on your host.
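If your container is Debian/Ubuntu based, one way to get just the client binary is sketched below; the static-release URL pattern is real, but the exact version is an assumption, so pick a current one:
# inside the container: fetch only the docker client binary, not the daemon
apt-get update && apt-get install -y curl ca-certificates
curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz \
  | tar -xzf - -C /usr/local/bin --strip-components=1 docker/docker
# with /var/run/docker.sock mounted, this now reports the host daemon:
docker version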

How to access files on the host from a Docker container?

I have a Docker Ubuntu Bionic container on an Ubuntu server host. From the container I can see that the host drive appears to be mounted as /etc/hosts, which is not a directory. I tried unmounting and remounting it at a different location, but it throws a permission-denied error, even when I try as root.
So how do you access the contents of your host system?
Firstly, /etc/hosts is a networking file present on all Linux systems; it is not related to drives or to Docker.
Secondly, if you want to access part of the host filesystem inside a Docker container, you need to use volumes. Using the -v flag in a docker run command, you can specify a directory on the host to mount into the container, in the format:
-v /path/on/host:/path/inside/container
for example:
docker run -v /path/on/host:/path/inside/container <image_name>
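For instance (the host path here is just an illustration):
# make the host's /srv/shared visible inside the container at /data (read-only)
docker run --rm -v /srv/shared:/data:ro ubuntu ls /data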
Example.
container id: 32162f4ebeb0
#HOST BASH SHELL
docker cp 32162f4ebeb0:/dir_inside_container/image1.jpg /dir_inside_host/image1.jpg
docker cp /dir_inside_host/image1.jpg 32162f4ebeb0:/dir_inside_container/image1.jpg
Docker directly manages the /etc/hosts files in containers. You can't bind-mount a file there.
Hand-maintaining mappings of host names to IP addresses in multiple places can be tricky to keep up to date. Consider running a DNS server such as BIND or dnsmasq, or using a hosted service like Amazon's Route 53, or a service-discovery system like Consul (which incidentally provides a DNS interface).
If you really need to add entries to a container's /etc/hosts file, the docker run --add-host option or Docker Compose extra_hosts: setting will do it.
As a general rule, a container can't access the host's filesystem, except to the extent that the docker run -v option maps specific directories into a container. Also as a general rule you can't directly change mount points in a container; stop, delete, and recreate it with different -v options.
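A minimal stop-and-recreate sketch (the names and paths below are placeholders) would be:
docker stop myapp
docker rm myapp
# recreate the container with the additional bind mount
docker run -d --name myapp -v /srv/config:/etc/myapp myimage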
Run this command to link a local folder into a Docker container:
docker run -it -v "$(pwd)":/src centos
pwd: the present working directory (we can use any directory), and
src: we link pwd to /src inside the container.

How to access a path of a container from docker-machine?

How can I access a path of a container from docker-machine? I have the docker-machine IP and I want to connect remotely to a Docker image, e.g.:
When I connect with ssh docker@5.5.5.5, all the files are the docker-machine's, but I want to connect to a Docker image via ssh.
When I use the command docker exec -u 0 -it test bash, all the files from the image are there, but I want to access them with ssh using docker-machine.
How can I do it?
This is tricky, as Docker is designed to run a single process in the foreground, and a container dies when that process completes. This means Docker containers don't run anything other than what you define in the Dockerfile or docker-compose.yml.
What you can try is using a docker-compose.yml file to expose port 22 to the outside world (this can also be done on the command line or in the Dockerfile). This is NOT guaranteed to work, as it requires the image to run an SSH daemon, and in most cases an image runs only its one main process.
If you're looking to persist files used by containers, so that a re-deployed container starts where it left off, you can mount a folder from the host machine into the container as a volume.
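If you do want to try the SSH route, a rough sketch would be to run an image that actually ships an SSH daemon and publish port 22 (the image name below is a placeholder, and the user depends on how that image configures sshd):
# publish the container's sshd on port 2222 of the docker-machine VM
docker run -d --name test -p 2222:22 some-image-with-sshd
# then, from your workstation (5.5.5.5 is the docker-machine IP from the question):
ssh -p 2222 root@5.5.5.5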

Docker: how to control the Docker service on the host from its container?

There is the possibility of installing Docker in a Docker container.
How do I control the host's Docker service from its container (to manage other containers)?
If I execute docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):$(which docker) -ti debian and enter docker, this error appears:
docker: error while loading shared libraries: libapparmor.so.1: cannot open shared object file: No such file
The error you're seeing seems very clear: the docker binary requires a shared library that is not present inside the container.
Is your container running the same distribution and version as your host? If it is, you simply need to determine which packages provide the necessary dependencies and install them inside the container.
If not, you will probably have better luck simply installing docker inside the container, rather than trying to bind-mount it from the host. There is probably a source of recent Docker versions available for Debian.
If your host is a Linux-based machine, you don't need to install Docker inside the container; you can just mount the docker binary into the container, and whatever you do with it inside the container is just like doing it on the host. I have tested this on an Ubuntu machine (image: https://github.com/mohamnag/ubuntu-git.git) by mounting /usr/bin/docker from the host to /bin/docker inside the container. Then, inside that container, you can literally do (build, stop, list, ...) whatever you would have done with docker on the host.

How can you make the Docker container use the host machine's '/etc/hosts' file?

I want to make it so that the Docker container I spin up use the same /etc/hosts settings as on the host machine I run from. Is there a way to do this?
I know there is an --add-host option with docker run, but that's not exactly what I want because the host machine's /etc/hosts file may be different on different machines, so it's not great for me to hardcode exact IP addresses/hosts with --add-host.
Use --network=host in the docker run command. This tells Docker to make the container use the host's network stack.
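For example:
# the container shares the host's network stack; per this answer it also picks up
# the host's /etc/hosts entries
docker run --rm --network=host ubuntu cat /etc/hosts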
A standard hosts file -
docker run -it ubuntu cat /etc/hosts
Add a mapping for server 'foo' -
docker run -it --add-host foo:10.0.0.3 ubuntu cat /etc/hosts
Add mappings for multiple servers
docker run -it --add-host foo:10.0.0.3 --add-host bar:10.7.3.21 ubuntu cat /etc/hosts
Reference - Docker Now Supports Adding Host Mappings
extra_hosts (in docker-compose.yml)
https://github.com/compose-spec/compose-spec/blob/master/spec.md#extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
extra_hosts:
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
Also, you can install dnsmasq on the host machine with the command:
sudo apt-get install dnsmasq
And then you need to add the file /etc/docker/daemon.json with content:
{
  "dns": ["host_ip_address", "8.8.8.8"]
}
After that, restart the Docker service with sudo service docker restart.
This option forces every Docker container to use the host's DNS settings.
Or you can apply it to a single container with the --dns option of docker run; the equivalent docker-compose option (dns:) is also supported.
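For a single container that would look something like this (172.17.0.1 is the usual docker0 bridge address where the host's dnsmasq can listen; adjust it to your setup):
# point only this container at the host's dnsmasq
docker run --dns 172.17.0.1 -it ubuntu bash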
If you are using docker-compose.yml, the property corresponding to --network=host is:
services:
  xxx:
    network_mode: "host"
Add this to your run command:
-v /etc/hosts:/etc/hosts
If trusted users start your containers, you could use a shell function to easily "copy" the /etc/hosts entries that you need:
add_host_opt() { awk "/\\<${1}\\>/ {print \"--add-host $1:\" \$1}" /etc/hosts; }
You can then do:
docker run $(add_host_opt host.name) ubuntu cat /etc/hosts
That way you do not have to hard-code the IP addresses.
The host machine's /etc/hosts file can't be mounted into a container, but you can mount a folder into the container, and you need a dnsmasq container.
Create a new folder on the host machine:
mkdir -p ~/new_hosts/
ln /etc/hosts ~/new_hosts/hosts
Mount ~/new_hosts/ into the container:
docker run -it -v ~/new_hosts/:/new_hosts centos /bin/bash
Configure dnsmasq to use /new_hosts/hosts to resolve names.
Change your container's DNS server. Use the dnsmasq container's IP address.
If you change the /etc/hosts file on the host machine, the dnsmasq container's /new_hosts/hosts will change.
I found a problem:
The file /new_hosts/hosts in the dnsmasq container does change, but the new hosts can't be resolved, because dnsmasq uses inotify to listen for change events. When you modify the file on the host machine, dnsmasq doesn't receive the event, so it doesn't update its configuration. You may therefore need to write a daemon process that copies the content of /new_hosts/hosts to another file whenever it changes, and change the dnsmasq configuration to use that new file, as sketched below.
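A rough sketch of such a daemon (the copy target hosts.resolved and the addn-hosts setting are assumptions for illustration) could be:
#!/bin/sh
# copy the hard-linked hosts file to a fresh file whenever it changes, then tell
# dnsmasq (configured with addn-hosts=/new_hosts/hosts.resolved) to reload it
touch /new_hosts/hosts.resolved
while true; do
    if ! cmp -s /new_hosts/hosts /new_hosts/hosts.resolved; then
        cp /new_hosts/hosts /new_hosts/hosts.resolved
        kill -HUP "$(pidof dnsmasq)"   # SIGHUP re-reads addn-hosts files
    fi
    sleep 2
done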
I had the same problem and found that it likely goes against the containerization concept! However, I solved my problem by adding each (IP, host) pair from /etc/hosts to an existing container in this way:
docker stop your-container-name
systemctl stop docker
vi /var/lib/docker/containers/<your-container-ID>/hostconfig.json
Find ExtraHosts in the text and add to it, or replace null with:
"ExtraHosts":["your.domain-name.com:it.s.ip.addr"]
systemctl start docker
docker start your-container-name
If you can stop your container and re-run it, you'd be in a better situation, so just do that. But if you don't want to destroy your containers, as in my case, this is a workable solution.
If you are running Docker containers inside a virtual machine and there are hosts (other VMs, etc.) you want your containers to be aware of, then, depending on the VM software you are using, you will have to ensure that the machine hosting the VM has entries for whatever machines you want the containers to be able to resolve.
This is because the VM and its containers will have the IP address of the machine hosting the VMs in their resolv.conf file.
IMO, passing the --network=host option when running Docker, as suggested by d3ming, is a better option than those suggested in other answers:
Any change to the host's /etc/hosts file is immediately visible to the container, which is probably what you want if you have this requirement in the first place.
It's probably not a good idea to use the -v option to mount the host's /etc/hosts file, as any unintended change made by the container will spoil the host's configuration.
