What's the host in Docker + Rancher context?

I see "host" mentioned a few times in the docs, and there's also network_mode: host you can set in the yml file.
My assumption is that the host is the machine the Docker VM runs on.
So if I set the network mode to host, the port mapping etc. will be handled on my local machine. Where in the yml I could write 3001:3000 to map host port 3001 to container port 3000, with host network mode that binding will be handled directly on my local machine.
Now, when we're hosting containers on Rancher and we set network_mode: host, what's the host in that context? Is it the VM or EC2 instance that's running Rancher, or the VM/EC2 instance that's running my host stack?
I can't grasp it from the docs.

A container runs on a single server, a.k.a. a host, running Docker.
A host can be a bare-metal server, a virtual machine running on your laptop, or an EC2 instance.
Rancher itself is a container running on a host. When you build a cluster, you can add the host that's running the Rancher container, or you can choose to keep things isolated and add entirely different hosts.
If you choose network_mode: host, the container uses the host's networking stack; if you don't, the container gets its own networking stack. When running in host networking mode, the application running inside the container binds directly to the host's network interfaces, so there is no port mapping happening.
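The two modes can be sketched side by side in a compose file (image names are just placeholders):

```yaml
services:
  web-bridge:
    image: nginx
    ports:
      - "3001:3000"    # host port 3001 is mapped to container port 3000
  web-host:
    image: nginx
    network_mode: host # container binds directly to the host's interfaces
```

Note that a ports: mapping is ignored for a service running with network_mode: host, since the application already binds the host's interfaces directly.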
If you're interested in more details, I discuss networking at length in the first half of this talk: https://www.youtube.com/watch?v=GXq3FS8M_kw. Let me know if you have more questions.

Related

How to access docker container in a custom network from another docker container running with host network

My program consists of a network of ROS1 and ROS2 nodes, which are pieces of software that communicate in a publish/subscribe fashion.
Assume there are 4 nodes (ROS1) running inside a custom network: onboard_network.
Those 4 nodes can only communicate with each other, so we have a bridge node (ROS1 & ROS2) that needs to sit on the edge of both onboard_network and the host network. The reason we need the host network is that the host is inside a VPN (ZeroTier). Inside the VPN we also have our server (ROS2).
We also need the bridge node to work with the host network because ROS2 relies on multicast, which only works in host mode.
So basically, I want a docker compose file running 4 containers inside onboard_network and one container running on the host network. The last container needs to be reachable from the containers in onboard_network and be able to reach them too. How could I do it? Is it even possible?
If you're running a container on the host network, its network setup is identical to that of a non-container process running on the host.
A container can't be set to use both host networking and Docker networking.
That means your network_mode: host container can call the other containers using localhost as the hostname and their published ports: (because its network is the host's network). Your bridge-network containers can call the host-network container using the special hostname host.docker.internal on macOS or Windows hosts; on Linux they need to find some reachable IP address (this is discussed further in "From inside of a Docker container, how do I connect to the localhost of the machine?").
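A minimal compose sketch of this layout (the ROS image names and the 11311 port are placeholders; your actual images and ports will differ):

```yaml
services:
  onboard-node:
    image: ros:noetic    # placeholder ROS1 image
    networks:
      - onboard_network
    ports:
      - "11311:11311"    # published so the host-network bridge container
                         # can reach it via localhost:11311
  bridge-node:
    image: ros:foxy      # placeholder ROS2 image
    network_mode: host   # cannot also join onboard_network
networks:
  onboard_network:
```

The bridge node reaches the onboard nodes through their published ports on localhost; the onboard nodes reach the bridge node via host.docker.internal (macOS/Windows) or a reachable host IP (Linux).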

Map a port inside a docker container

In one of my test setups, I have code which sometimes runs inside a docker container and wants to connect to a port on the host machine.
It is configured to try to connect to localhost:12345. That works when it is running outside the container, but obviously fails when it runs inside the container.
Is there a way to map a port on the local host into the container? This is the reverse of the normal port mapping, where you expose a port in the container to the host.
I can see potential security issues with this, and also that it breaks the paradigm of "the container is the world", so I would not be surprised if it is not supported.
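On recent Docker (20.10+), one way to do this is the host-gateway alias, sketched here with a placeholder image and the port 12345 from the question:

```yaml
services:
  test-client:
    image: alpine    # placeholder image
    extra_hosts:
      - "host.docker.internal:host-gateway"    # resolves to the host's gateway IP
    command: nc -z host.docker.internal 12345  # probe the host's port 12345
```

With this, the code inside the container connects to host.docker.internal:12345 instead of localhost:12345; on Docker Desktop (macOS/Windows) the host.docker.internal name already exists without the extra_hosts entry.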

Stack of VM reverse traversal: reaching host port from a Docker container within a Vagrant machine

We are implementing a CI infrastructure as Docker containers.
Development of the solution takes place on OS X machines:
The OS X physical machine (Host) has Vagrant installed on it, plus a service listening on localhost:2200.
On Host, we vagrant up a Linux machine (VM-a) on which we provision Docker.
On VM-a, we docker run a Linux container (VM-b). VM-b needs to interact with the service running on Host.
By way of well-documented port binding, we are able to reach any listening port on both VM-a and VM-b from the Host.
Yet, we cannot identify a way to have VM-b reach Host port 2200 on Host's localhost interface.
Is it possible to achieve such communication?
If so, how?
So, we found the "magic" interface on which to reach the Host from any VM, i.e. from both VM-a and VM-b (nested in VM-a).
It is 10.0.2.2.
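In compose terms this could look like the following (the service name, alias, and image are made up):

```yaml
services:
  vm-b:
    image: alpine                  # placeholder image
    extra_hosts:
      - "osx-host:10.0.2.2"        # 10.0.2.2 is VirtualBox's default NAT
                                   # gateway back to the physical Host
    command: nc -z osx-host 2200   # probe the service on Host's port 2200
```

This relies on the default Vagrant/VirtualBox NAT networking; a customized network layout may use a different gateway address.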

Google Cloud - Deploy as Container from GCR - Ports not exposed in docker container

I have created a GCP VM instance with the Deploy as Container option pointing to an image in my private GCR (customized nginx).
While creating the instance, I also allowed 'https' and 'http' traffic.
Though the application is working fine, on connecting to the instance via ssh and inspecting the docker containers (docker ps), I see the container ports are not exposed. I'm wondering how the http/https requests are handled by the container via the instance.
When you use the Deploy as Container option in GCE, it runs docker with access to the host network.
From the relevant GCP docs:
Containerized VMs launch containers with the network set to host mode. A container shares the host network stack, and all interfaces from the host are available to the container.
More detailed info on the different network modes here.
Other than what @Stefan R has said, you should also use a port number above 1024, as auto-deployed container images aren't run as root and hence can't bind privileged ports.
https://www.staldal.nu/tech/2007/10/31/why-can-only-root-listen-to-ports-below-1024/
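What GCE sets up is roughly equivalent to the following compose fragment (the GCR image name is a placeholder):

```yaml
services:
  nginx-custom:
    image: gcr.io/my-project/nginx-custom  # placeholder GCR image path
    network_mode: host   # what Deploy as Container uses
    # the container does not run as root, so configure the app to listen
    # on an unprivileged port (>= 1024), e.g. 8080 instead of 80
```

Because of host networking, no ports are "exposed" in docker ps output, yet the app is still reachable on the instance's interfaces.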

Cross container communication with Docker

An application server is running in one Docker container and a database is running in another container. The IP address of the database container is obtained as:
sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' db
Setting up JDBC resource in the application server to point to the database gives "java.net.ConnectException".
Linking containers is not an option since that only works on the same host.
How do I ensure that IP address of the database container is visible to the application server container?
If you want private networking between docker containers on remote hosts, you can use Weave to set up an overlay network between the containers. If you don't need a private network, just expose the ports using the -p switch and configure the host machine's address as the destination IP in the required docker container.
One simple way to solve this would be using Weave. It allows you to create many application-specific networks that can span multiple hosts as well as datacenters. It also has a very neat DNS-based service discovery mechanism.
I should disclose that I am on the Weave engineering team.
Linking containers is not an option since that only works on the same host.
So are you saying your application is a container running on docker server 1 and your db is a container on docker server 2? If so, treat it like ordinary remote hosts: your DB port needs to be published on docker server 2, and that IP:port needs to be configured into your application server, typically via environment variables.
The per-host docker subnetwork is a private network. It's perhaps possible to make those addresses routable, but it would be much pain, and it's further complicated because container IPs are not static.
What you need to do is publish the ports/services up to the host (via EXPOSE in the Dockerfile and -p in your docker run), then you just do host->host. You can resolve hosts by IP, environment variables, or good old DNS.
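A sketch of this publish-and-configure approach, as two separate compose files on the two hosts (the image name, IP address, and port are placeholders for illustration):

```yaml
# compose file on docker server 2 (database host)
services:
  db:
    image: mysql:8
    ports:
      - "3306:3306"          # publish the DB port on the host
---
# compose file on docker server 1 (application host)
services:
  app:
    image: my-app            # placeholder application image
    environment:
      DB_HOST: "192.0.2.10"  # example: IP address of docker server 2
      DB_PORT: "3306"
```

The application then connects to DB_HOST:DB_PORT rather than to a container IP, which also sidesteps the problem of container IPs not being static.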
A few things were missing that were preventing the cross-container communication:
WildFly was not bound to 0.0.0.0 and thus was not accepting requests on eth0. This was fixed using "-b 0.0.0.0".
The firewall was not allowing the containers to communicate. This was removed using "systemctl stop firewalld; systemctl disable firewalld".
The VirtualBox image required a host-only adapter.
After this, the containers are able to communicate. Complete details are available at:
http://blog.arungupta.me/2014/12/wildfly-javaee7-mysql-link-two-docker-container-techtip65/
