eth0 IP in the docker IPs range - docker

One of the machines where we need to deploy Docker containers has an eth0 IP set within the Docker IP range (172.17.0.0/16).
The problem is that when we try to access this server through NAT from outside (SSH etc.), everything "hangs". I guess the packets get misdirected by the Docker iptables rules.
What is the recommendation in this case if we cannot change the eth0 IP?

Docker should avoid subnet collisions if it sees all of the in-use subnets when it creates its networks. However, if you change networks (e.g. on a laptop), you want to set up address pools for Docker to use. Steps for this are in my slides here:
https://sudo-bmitch.github.io/presentations/dc2018eu/tips-and-tricks-of-the-captains.html#19
The important detail is to set up an /etc/docker/daemon.json file containing:
{
  "bip": "10.15.0.0/24",
  "default-address-pools": [
    {"base": "10.20.0.0/16", "size": 24},
    {"base": "10.40.0.0/16", "size": 24}
  ]
}
Adjust the IP ranges as needed. Stop all containers in the bad networks, delete the containers, delete any user-created networks, restart the Docker engine, and then recreate any user-created networks and containers (often the last two steps just involve removing and redeploying a Compose project or Swarm stack).
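As a rough sketch of that sequence (assuming a systemd-based host and a Compose project; the network name here is hypothetical):
docker compose down                # stop and remove the affected containers and their networks
docker network rm my_custom_net    # remove any remaining user-created networks on the bad range
sudo systemctl restart docker      # pick up the new daemon.json
docker compose up -d               # recreate networks and containers on the new address pools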
Note, it wasn't clear if you were attempting to connect to your host or container. You should not be connecting directly to a container IP externally (with very few exceptions). Instead you publish the desired ports that you need to be able to access externally, and you connect to the host IP on that published port to reach the container. E.g.
docker run -d -p 8080:80 nginx
This will start nginx with its normal port 80 inside the container, which you normally cannot reach externally. Publishing host port 8080 (it could just as easily be 80 to match the container port) maps connections on that host port to the container's port 80.
One important prerequisite: the application inside the container must listen on all interfaces, not just 127.0.0.1, to be reachable from outside of that container's network namespace.
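To illustrate with Python's built-in http.server (an arbitrary server picked for demonstration; run one at a time, since both publish host port 8080):
docker run -d -p 8080:8000 python:3 python -m http.server --bind 127.0.0.1   # loopback only: curl localhost:8080 on the host fails
docker run -d -p 8080:8000 python:3 python -m http.server --bind 0.0.0.0     # all interfaces: curl localhost:8080 on the host works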

Related

docker-compose networking and publishing ports

I'm trying to better understand docker networking, but I'm confused by the following:
I spin up 2 containers via docker-compose (client, api). When I do this, a new network is created, myapp_default, and each container joins this network. The network is a bridge network, and its gateway is at 172.18.0.1. The client is at 172.18.0.2 and the api is at 172.18.0.3.
I can now access the client at 172.18.0.2:8080 and the api at 172.18.0.3:3000 -- this makes total sense. I'm confused when I publish ports in docker-compose: 8080:8080 on the client, and 3000:3000 on the api.
Now I can access the containers from:
Client at 172.18.0.1:8080, 172.18.0.2:8080, and on the docker0 network at 172.17.0.1:8080
API at 172.18.0.1:3000, 172.18.0.3:3000, and on the docker0 network at 172.17.0.1:3000
1) Why can I access the client and api via the docker0 network when I publish ports?
2) Why can I connect to containers via 172.17.0.1 and 172.18.0.1 at all?
You can only access the container-private IP addresses because you're on the same native-Linux host as the Docker daemon. This doesn't work in any other environment (different hosts, macOS or Windows hosts, environments like Docker Toolbox where Docker runs in a VM), and even using docker inspect to find these IP addresses usually isn't a best practice.
When you publish ports they are accessible on the host at those ports. This does work in every environment (in Docker Toolbox "the host" is the VM) and is the recommended way to access your containers from outside Docker space. Unless you bind to a specific address, the containers are accessible on every host interface and every host IP address; that includes the artificial 172.17.0.1 etc. that get created with Docker bridge networks.
Publishing ports is in addition to the other networking-related setup Docker does; it doesn't prevent you from reaching the containers by other paths.
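For example, to restrict a published port to a single host address instead:
docker run -d -p 127.0.0.1:8080:80 nginx   # only reachable on the host itself via 127.0.0.1:8080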
If you haven't yet, you should also read Networking in Compose in the Docker documentation. Whether you publish ports or not, you can use the names in the docker-compose.yml file, like client and api, as host names, connecting to the (unmapped) port the actual server processes are listening on. Between this functionality and what you get from publishing ports, you don't ever actually need to know the container-private IP addresses.
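A minimal docker-compose.yml sketch of the setup described in the question (the image names are placeholders):
version: "3"
services:
  client:
    image: myorg/client   # hypothetical image
    ports:
      - "8080:8080"
  api:
    image: myorg/api      # hypothetical image
    ports:
      - "3000:3000"
Inside the client container, the api service is reachable as http://api:3000 whether or not its port is published.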

How to expose the docker container ip to the external network?

I want to expose the container IP to the external network where the host is running, so that I can directly ping the Docker container IP from an external machine.
If I ping the Docker container IP from an external machine that is on the same network as the machine hosting Docker, I need to get a response.
Pinging the container's IP (i.e. the IP it shows when you look at docker inspect [CONTAINER]) from another machine does not work. However, the container is reachable via the public IP of its host.
In addition to Borja's answer, you can publish the ports of Docker containers by adding -p [HOST_PORT]:[CONTAINER_PORT] to your docker run command.
E.g. if you want to reach a web server in a Docker container from another machine, you can start it with docker run -d -p 80:80 httpd:alpine. The container's port 80 is then reachable via the host's port 80. Other machines on the same network will then also be able to reach the webserver in this container (depending on Firewall settings etc. of course...)
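To verify from another machine on the same network (HOST_IP stands in for the Docker host's LAN address):
curl http://HOST_IP/   # answered by the httpd container through the published port 80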
Since you tagged this as kubernetes:
You cannot directly send packets to individual Docker containers. You need to send them to somewhere else that’s able to route them. In the case of plain Docker, you need to use the docker run -p option to publish a port to the host, and then containers will be reachable via the published port via the host’s IP address or DNS name. In a Kubernetes context, you need to set up a Service that’s able to route traffic to the Pod (or Pods) that are running your container, and you ultimately reach containers via that Service.
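For the Kubernetes side, a minimal Service sketch (all names here are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # must match the labels on your Pods
  ports:
    - port: 80         # port clients connect to on the Service
      targetPort: 8080 # port the container actually listens on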
The container-internal IP addresses are essentially useless in many contexts. (They cannot be reached from off-host at all; in some environments you can’t even reach them from outside of Docker on the same host.) There are other mechanisms you can use to reach containers (docker run -p from outside Docker, inter-container DNS from within Docker) and you never need to look up these IP addresses at all.
Your question places a heavy emphasis on ping(1). This is a very low-level debugging tool that uses a network protocol called ICMP. If sending packets over ICMP is actually core to your workflow, you will have difficulty running it in Docker or Kubernetes; I suspect it isn't. Don't worry so much about being able to directly ping containers; use higher-level tools like curl(1) if you need to verify that a request is reaching its container.
It's pretty easy actually, assuming you have control over the routing tables of your external devices (either directly, or via your LAN's gateway/router). Assuming your containers are using a bridge network of 172.17.0.0/16, you add a static route for the 172.17.0.0/16 network with your Docker host's physical LAN IP as the gateway. You might also need to allow this forwarding in your Docker host's OS firewall configuration.
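For example, on a Linux external machine (192.168.1.10 is an assumed LAN IP for your Docker host):
sudo ip route add 172.17.0.0/16 via 192.168.1.10   # on the external machine: route container traffic through the Docker host
sudo sysctl -w net.ipv4.ip_forward=1               # on the Docker host: ensure forwarding is enabled (Docker usually sets this already)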
After that, you should be able to connect to your docker container using its bridge address (172.17.0.2 for example). Note however that it will likely not respond to pings, due to the container's firewall.
If you're content to access your container using only the bridge IP (and never again use your Docker host IP with the mapped-port), you can remove port mapping from the container entirely.
You need to create a new bridge Docker network and attach the container to it. You should be able to connect this way.
docker network create -d bridge my-new-bridge-network
or
docker network create --driver=bridge --subnet=192.168.0.0/16 my-new-bridge-network
connect:
docker network connect my-new-bridge-network container1
or
docker network connect --ip 192.168.0.10/16 my-new-bridge-network container-name
If the problem persists, reload the Docker daemon and restart the service; this is a known issue.

Can we have two or more containers running on Docker at the same time

I have not done any practical work with Docker and containers yet, but as far as I understand:
The documents available online did not give me details about running two or more containers at the same time.
Docker allows a container to map a port of the container to a port on the host machine.
Now, the question is: can we run multiple containers at the same time on Docker? If yes, and two containers are mapped to the same port number, how is the port handled in this case?
Also, out of curiosity: can two containers on Docker communicate with each other?
Yes, you can run multiple containers on a single host; Docker is designed for exactly that.
You cannot map two containers to the same host port number; you get an error response if you try. However, if your containers run the same image (e.g. 2 instances of a webapp) you could run them as a service and have them exposed on the same port; Docker will load-balance the requests. You can read more about services here or follow the Get Started guide (Part 3, services) here.
Yes, the containers on a single host can communicate with each other, by container name. For example if you have one container running MongoDB called mongo, and another one running Node.js called webserver, the webserver container can connect to the database by using the name mongo e.g. db.Connect("mongodb://mongo:27017/testdb").
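A minimal sketch of that setup (the Node.js image name is hypothetical); note that name resolution requires a user-defined network rather than the default bridge:
docker network create app-net
docker run -d --name mongo --network app-net mongo
docker run -d --name webserver --network app-net -p 3000:3000 my-node-app   # hypothetical image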
We can run more than one container at a time on a host, but yes, we will hit the limitation of binding the same host port twice. To resolve this we need to bind a different host port for each container, as shown below. For example, if you are running mongo-db, its default port is 27017, so we can run three mongo-db containers with -p 27017:27017 for container D1, -p 27018:27017 for container D2, and -p 5000:27017 for container D3. This way you can bind different host ports that all map to mongo-db's port 27017. Now, if your question is how to manage these ports on the host, I would recommend using nginx for port management on the host machine.
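Spelled out as commands, that looks like:
docker run -d --name D1 -p 27017:27017 mongo   # host port 27017 -> container port 27017
docker run -d --name D2 -p 27018:27017 mongo   # host port 27018 -> container port 27017
docker run -d --name D3 -p 5000:27017 mongo    # host port 5000  -> container port 27017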
Coming to your next question: all containers are connected to the default docker0 bridge network, so we can connect to any of the containers on that default 'docker0' network. If I am right, they get IP addresses in a 172.x.x.x range. Get inside a container and run 'ip addr' to see the IP address assigned to it, and you can test the connection by running the ping command.
Yes, two containers can run at the same time, and they can also communicate with each other; you can define your own network and they can communicate over it. If two containers have the same private ports, those are their internal ports, so one container's port does not collide with another container's port. If you want to expose a port to the host, then you have to publish the port(s).

mapping containers to docker host's /etc/hosts automatically with the same port for each container

I have a basic docker-compose setup consisting of the following:
docker bridge subnet starting at 192.168.50.0/24
4 services: rabbit, spring-config, fares, checkin
each of these services has its hostname correctly set, and they are able to find each other from within the subnet (192.168.50.0/24). IPs are dynamically assigned in this subnet, and all services listen on port 8080 within their respective containers.
From the host, the bridge network is visible and each instance of the container is accessible using its ip.
I cannot manage to make these host entries work without mapping a different port than 8080 for each service on the Docker host.
For this entry in my host's /etc/hosts:
192.168.50.1 fares rabbit config book checkin: the services are only accessible if I explicitly bind the services' port 8080 to my host's port 8081, port 8082, port 8083... for each service in the .yml file.
Is there another way to make sure the services are discoverable by their dns name even from outside of the subnet?
You can't bind all 4 containers to the same port on the host. Only one container per port. But there are some workarounds:
Option 1: Use Different Ports for Each Container
For example, bind ports 8081, 8082, 8083, and 8084.
In /etc/hosts, map each container's IP correctly.
Specify the port in addition to the hostname when connecting, like https://fares:8081.
Your /etc/hosts might look like this:
192.168.50.1 fares
192.168.50.2 rabbit
...
Option 2: Use a Reverse Proxy
You can set up an additional Docker container as a reverse proxy in your docker-compose.yml. The reverse proxy container can bind to port 8080 and forward the request to the correct container depending on the hostname. You don't need to bind ports from the other containers on the host because your reverse proxy is forwarding the requests. There's a blog post that explains how this works in detail: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
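A rough compose sketch using the nginx-proxy image from that post (everything except jwilder/nginx-proxy is a placeholder):
version: "3"
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "8080:80"                                  # single host port for all services
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets the proxy discover containers
  fares:
    image: myorg/fares                             # hypothetical image
    environment:
      - VIRTUAL_HOST=fares                         # the proxy routes requests by Host header
      - VIRTUAL_PORT=8080                          # container port to forward to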

How to access applications running in docker containers inside docker?

I have a weird scenario in my project.
I am running a "Supervisor" application in one Docker container.
Using this supervisor I am running two "web applications" in Docker containers, and both use one microservice, which is installed in yet another Docker container.
Now, I am able to access my applications from the Supervisor's container, but obviously they are not accessible from my machine.
How can I access my applications "Web App1" or "Web App2" from my machine?
I have little knowledge of Docker networking.
Please help.
You can map the ports of Web App1 and Web App2 to the host machine, and using the host's IP address and those ports you can access the containers from your machine. A better way to do this is to add hostnames for your containers and map the ports, so you don't have to remember the IP addresses, since they are assigned randomly every time a container is recreated.
Docker manages the network traffic between the "host machine" and containers. In this case you have many containers on different layers. On each layer you have to expose the ports of the internal containers to the "Docker host" on the next layer, and so on.
This is a solution using ports:
The "Supervisor" on 172.17.42.1 must expose the ports of all the internal containers (172.17.0.2-4) as its own ports. So for the "Supervisor" you need a -p docker parameter for each port of every container inside the "Supervisor".
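A hedged sketch of what that could look like (the image name and ports are placeholders for whatever the internal apps use):
# one -p flag per internal application the Supervisor forwards to
docker run -d --name supervisor -p 8081:8081 -p 8082:8082 supervisor-image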
Expose the network:
Configure the local machine to send any network packet for 172.17.*.* to 172.17.42.1. Then configure 172.17.42.1 to send network packets for the IPs 172.17.0.* to its network adapter docker0 (the default Docker network adapter). The exact implementation depends on your distribution.
Another solution:
Skip your Supervisor container and use docker-compose to arrange and manage your internal containers.
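A minimal compose sketch of that alternative (image names are placeholders):
version: "3"
services:
  webapp1:
    image: myorg/webapp1        # hypothetical image
    ports:
      - "8081:8080"             # reachable from your machine at host port 8081
  webapp2:
    image: myorg/webapp2        # hypothetical image
    ports:
      - "8082:8080"
  microservice:
    image: myorg/microservice   # internal only; no published port needed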
