Set 'host' as default network for Docker

Docker can build containers just fine up until I connect my Cisco VPN. After that, containers are unable to reach the outside internet. It's more than a DNS problem; they simply can't route to anything outside of Docker's own network. Now, I can get around this by running containers with --net=host, but the problem is with building images from Dockerfiles: I see no way to set the host network there. Is there somewhere else I can configure Docker to simply always use 'host' as the default network?

The docker build command also has a --network parameter that you can use to specify the network mode that should be used for intermediate containers. This flag has the same effect and possible values as the identically named parameter of the docker run command.
--network (=default) Set the networking mode for the RUN instructions during build
This should allow you to build your containers with:
docker build -t yourimagename --network=host .
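For example, a minimal Dockerfile whose RUN steps need outbound access (a sketch; the base image and package are just illustrative) would then route through the host's VPN:

FROM ubuntu:20.04
# With --network=host, each RUN step's intermediate container uses the
# host's network stack, so VPN routes and DNS are available here.
RUN apt-get update && apt-get install -y curl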

A Dockerfile defines how to build an image. It has no runtime parameters beyond setting the default command and/or entrypoint.
Networking is a runtime concern. If passing arguments to docker run doesn't suit you, perhaps you can use a docker-compose.yml and either the docker-compose tool or a swarm. In both cases, you can define network parameters for the container(s) defined in docker-compose.yml.
network_mode: "host"
See the documentation.
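For reference, a minimal docker-compose.yml using that setting might look like this (the service and image names are placeholders):

version: "3"
services:
  app:
    image: yourimagename
    network_mode: "host"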

I encountered this problem on a VM running CentOS 7. It was resolved after I upgraded some yum packages (containerd, container-selinux, docker-ce and docker-ce-cli).

Related

Does Docker persist the resolv.conf from the physical /etc/resolv.conf in the container when switching from bridge mode to host mode and then back?

I was doing a test where I installed Docker version 20.10.8, build 3967b7d to a machine that hadn't had Docker installed, and then deployed a .tar file of a Docker container to this new machine.
I then ran the Docker container, but did not include a "--network" parameter in the "docker run" command line, and when the dockerized app started, it was not able to resolve some hostnames that it uses, and I was seeing "UnknownHostException" exceptions in the dockerized app logs.
So, to get the dockerized app to run, I did the same "docker run" command, but this time, I included a "--network host" parameter. When the app started, it was then able to DNS resolve hostnames, and I didn't get the "UnknownHostException" exceptions anymore.
Subsequently, I did "docker stop", "docker rm" and then "docker run" commands multiple times for some other testing, but I wanted to investigate the "UnknownHostException" problem, so I eventually did "docker stop", "docker rm", and then "docker run" commands BUT WITH NO "--network" parameter. I was expecting to see the "UnknownHostException" again, but the exceptions NO LONGER APPEARED, even though the container was using Docker BRIDGE networking!
I'd been trying to understand why for a while, but then I checked /var/lib/docker/containers/<LONG_STRING_OF_NUMBERS-LETTERS> and noticed that there was a "resolv.conf" in that directory, and found that its contents were identical to the contents of the physical machine's /etc/resolv.conf!
So I am theorizing that:
a) When I initially did the "docker run" with no "--network" parameter, the "resolv.conf" inside the container was empty (or some default), then
b) When I did the "docker run" with "--network host", Docker COPIED the physical /etc/resolv.conf to the container's "resolv.conf", BUT then
c) When I did "docker run" WITHOUT any "--network" parameter (i.e., with bridge networking), the container's "resolv.conf" was retained and didn't get cleared out.
Can anyone confirm if the above behavior is what is happening?
Also, as I mentioned earlier, I have been doing "docker stop" and "docker rm" before I do "docker run", but is there some other command (or parameter) that I should be using to prevent the container's "resolv.conf" from persisting when switching from bridge mode to host mode and then back to bridge mode?
My apologies for the longish question, but I am trying to provide as clear an explanation of my question as I can.
Thanks in advance!
Jim
Docker fully manages /etc/resolv.conf inside your container, and most of the other details of the networking environment. If there's a resolv.conf file in the image, it's hidden and unused. If you need to override the DNS resolver, there is a docker run --dns option that can do that.
You should almost always invoke docker run with a --net matching the name of a network you've created with docker network create. Omitting the --net option uses a first-generation Docker networking setup that's very limited; --net=host disables Docker's network isolation entirely. The network itself does not have any DNS-related settings.
# does not need any options but does need to be created
docker network create a-network
docker run --net=a-network --dns=8.8.8.8 ...
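To see this management in action, you can compare the host's resolver config with the file Docker generates inside a throwaway container (a quick check, nothing image-specific):

cat /etc/resolv.conf
docker run --rm --dns=8.8.8.8 busybox cat /etc/resolv.conf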

External networking with Docker Swarm and Stacks

I've currently got two fairly vanilla CentOS 7 boxes running under a Docker Swarm, one a master and the other joined to it. In that swarm, I want to have a stack running that will essentially be my Plex / multimedia system. I've got the docker-compose.yml file for that linked. I can deploy the file to the swarm using the following command:
docker stack deploy --compose-file docker-compose.yml plexsystem
That works fine; it deploys the containers like you would expect. The issue I'm having is that the containers do not have external internet access, so if a container needs to download any files or interact with APIs, it fails. I attach to containers with docker exec -it container /bin/bash and try to ping out, and it always fails.
What do I need to add or change in my docker-compose.yml file so that networking works and I can finally get my stack running as it should? I've been banging my head against this one and cannot figure out swarm networking. Thank you very much!
Check your Docker daemon configuration and look for the iptables parameter. Most probably it is set to false, which is why the containers cannot access the internet.
--iptables=false prevents the Docker daemon from adding iptables rules. If multiple daemons manage iptables rules, they may overwrite rules set by another daemon. Be aware that disabling this option requires you to manually add iptables rules to expose container ports. If you prevent Docker from adding iptables rules, Docker will also not add IP masquerading rules, even if you set --ip-masq to true. Without IP masquerading rules, Docker containers will not be able to connect to external hosts or the internet when using a network other than the default bridge.
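If it is set to false, one way to restore the default is through the daemon configuration file (a sketch; assuming the standard location /etc/docker/daemon.json and a systemd-managed host):

{
  "iptables": true
}

followed by a daemon restart:

sudo systemctl restart docker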

Docker-Compose with Docker 1.12 "Swarm Mode"

Does anyone know how (if possible) to run docker-compose commands against a swarm using the new Docker 1.12 'swarm mode'?
I know that with the previous 'Docker Swarm' you could run docker-compose commands directly against the swarm by updating DOCKER_HOST to point to the swarm master:
export DOCKER_HOST="tcp://123.123.123.123:3375"
and then simply execute commands as if you were running them against a single instance of Docker engine.
OR is this functionality something that docker-compose bundle is replacing?
I realized my question was vaguely worded and actually has two parts to it. Eventually however, I was able to figure out solutions to both issues.
1) Can you run commands directly 'against' a swarm / swarm-mode in Docker 1.12 running on a remote machine?
While you can't really run commands 'against' a swarm, you CAN run docker service commands on the master node of a swarm in order to run services on that swarm.
You can also configure the Docker daemon (the docker daemon that is the master node of the swarm) to listen on TCP ports in order to externally expose the Docker API.
2) Can you still use docker-compose files to start services in Docker 1.12 swarm-mode?
Yes, although this is currently part of Docker's "experimental" features. This means you must download/install a version that includes the experimental features (check the GitHub repo).
You essentially follow these instructions https://github.com/docker/docker/blob/master/experimental/docker-stacks-and-bundles.md
to go from the docker-compose.yml file to a distributed application bundle and then to an application stack (this is when your services are actually run).
$ docker-compose bundle
$ docker deploy [OPTIONS] STACK
Here's what I did:
On my remote swarm manager node I started docker with the following options:
docker daemon -D -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 &
This configures the Docker daemon to listen on the standard Unix socket unix:///var/run/docker.sock AND on TCP port 2375 on all interfaces.
WARNING: I'm not enabling TLS here, just for simplicity.
On my local machine I update the docker host environment variable to point at my swarm master node.
$ export DOCKER_HOST="tcp://XX.XX.XX.XX:2375" (populate with your IP; the port matches the TCP port the daemon was started with)
Navigate to the directory of my docker-compose.yml file
Create a bundle file from my docker-compose.yml file. Make sure to include the .dab extension.
docker-compose bundle --fetch-digests -o myNewBundleFile.dab
Create an application stack from the bundle file. Do not specify the .dab extension here.
$ docker deploy myNewBundleFile
Now I'm still experiencing some networking-related issues, but I have successfully gotten my services up and running from my unmodified docker-compose.yml files. The network issues I'm experiencing are documented here: https://github.com/docker/docker/issues/23901
While the official support for swarm mode in Docker Compose is still in progress, I've created a simple script that takes a docker-compose.yml file and runs the corresponding docker service commands for you. See https://github.com/ddrozdov/docker-compose-swarm-mode for details.
It is not possible. Compose uses containers to create a client-side concept of a service. Docker 1.12 Swarm mode introduces a new server-side concept of a service.
You are correct that docker-compose bundle; docker stack deploy is the way to get a Compose file running in Swarm Mode.

How do I get the address of the host when using 'docker build'?

I need to reference the host or the host network during a docker build, in the Dockerfile. How do I do that? I want to do this to clone some git repos, or to scp some files, to set the image up.
It's easy to clone a GitHub repo, because Docker will resolve the DNS for that. However, I don't have DNS entries for my host network available to the image being built.
In fact, I don't even know what the IP address of the host is, never mind getting as far as setting up DNS.
What you are trying to do goes against the idea of a Dockerfile.
The intent of a Dockerfile is to provide a "description" of an image while guaranteeing reproducibility. This is why you don't have anything host-specific in your Dockerfile, so it can be built anywhere with the same result.
If you need closer interaction with your host, it means that your result is going to be tied to this host, and you should do it at runtime. Look at CMD or ENTRYPOINT to have the container perform those operations at startup.
As of docker api 1.25 you can now simply do:
docker build --network=host -f myDockerFile ...
And that will give you access to the host network during build.
It seems very odd that you wouldn't know the Docker host. I can't even fathom how that's possible.
Just pass the host information in yourself, for example as an environment variable at run time:
docker run -e DOCKER_HOST=1.2.3.4 busybox env
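If you need the value at build time instead, a build argument is one option (a minimal sketch; the ARG name is illustrative):

# In the Dockerfile
ARG HOST_ADDR
RUN echo "host is ${HOST_ADDR}"

# Then at build time
docker build --build-arg HOST_ADDR=1.2.3.4 .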
Alternatively, you're going to have to use some sort of discovery system such as DNS, etcd, ZooKeeper, or Consul.

Sensu-Client inside Docker container

I created a customized Docker image based on Ubuntu 14.04 with the Sensu client package inside.
Everything went fine, but now I'm wondering how I can trigger the checks to run from the host machine.
For example, I want to be able to check the processes that are running on the host machine and not only the ones running inside the container.
Thanks
It depends on what checks you want to run. A lot of system-level checks work fine if you run the Sensu container with the --net=host and --privileged flags.
--net=host not only lets you see the same hostname and IP as the host system, but also makes all the TCP connections and interface metrics match between the container and the host.
--privileged gives the container full access to system metrics like HDD, memory, and CPU.
The tricky part is checking external process metrics, as Docker isolates processes even from a privileged container, but you can share the host's root filesystem as a Docker volume (-v /:/host) and patch the check to use chroot or read /host/proc instead of /proc.
Long story short: some checks will just work, for others you need to patch or develop your own approach, but Sensu in Docker is one possible way.
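Putting those flags together, an invocation might look like this (the image name and the read-only root mount are illustrative):

docker run -d --net=host --privileged \
  -v /:/host:ro \
  your-sensu-client-image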
An unprivileged Docker container cannot check processes outside of its own container, because Docker uses kernel namespaces to isolate it from all other processes running on the host. This is by design: see the Docker security documentation.
If you would like to run a super-privileged container that has this namespace disabled, you can run:
docker run -it --rm --privileged --pid=host alpine /bin/sh
Doing so removes an important security layer that Docker provides and should be avoided if possible. Once inside the container, try running ps auxf and you will see all processes on the host.
I don't think this is possible right now.
If the processes in the host instance are running inside docker, you can mount the socket and get the status from the sensu container
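For the socket approach, a sketch (the image name is illustrative; note that exposing the socket effectively grants the container full control of the host's Docker daemon):

docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  your-sensu-client-image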
Add a sensu-client to the host machine? You might want to split it out so you have granularity between problems in the containers vs. problems with your hosts.
Otherwise, you would have to set up some way to report from the inside: either using something low-level (system calls etc.) or setting up something from outside to catch the calls and report back status.
HTHs
Most if not all Sensu plugins hardcode the path to the proc files. One option is to mount the host's proc files at a different path inside the Docker container and modify the Sensu plugins to support this alternate location.
This is my base Docker container that supports changing the proc file location the Sensu plugins use:
https://github.com/sstarcher/docker-sensu
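As a sketch, the mount part of that approach might look like this (the image name is a placeholder; the in-container path is whatever your patched plugins expect):

docker run -d \
  -v /proc:/host/proc:ro \
  your-patched-sensu-image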
