mapping containers to docker host's /etc/hosts automatically with the same port for each container - docker

I have a basic docker-compose setup consisting of the following:
docker bridge subnet starting at 192.168.50.0/24
4 services: rabbit, spring-config, fares, checkin
each of these services has its hostname correctly set and they are able to find each other from within the subnet (192.168.50.0). IPs are dynamically assigned in this subnet, and all services listen on port 8080 inside their respective containers.
From the host, the bridge network is visible and each instance of the container is accessible using its ip.
I cannot manage to resolve these host entries without mapping a different port than 8080 to the docker host.
For this entry in my host's /etc/hosts:
192.168.50.1 fares rabbit config book checkin: the services are only accessible if I explicitly bind each service's port 8080 to a different host port (8081, 8082, 8083, ...) in the .yml file.
Is there another way to make sure the services are discoverable by their dns name even from outside of the subnet?

You can't bind all 4 containers to the same port on the host. Only one container per port. But there are some workarounds:
Option 1: Use Different Ports for Each Container
For example, bind ports 8081, 8082, 8083, and 8084.
In /etc/hosts, map each container's IP correctly.
Specify the port in addition to the hostname when connecting, e.g. https://fares:8081.
Your /etc/hosts might look like this:
192.168.50.1 fares
192.168.50.2 rabbit
...
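A minimal compose sketch of that binding (service names are taken from the question; all other settings are omitted):
services:
  fares:
    ports:
      - "8081:8080"   # host port 8081 -> fares on 8080
  rabbit:
    ports:
      - "8082:8080"   # host port 8082 -> rabbit on 8080
  checkin:
    ports:
      - "8083:8080"   # host port 8083 -> checkin on 8080
Each service keeps listening on 8080 inside its container; only the host-side port differs.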
Option 2: Use a Reverse Proxy
You can set up an additional Docker container as a reverse proxy in your docker-compose.yml. The reverse proxy container can bind to port 8080 on the host and forward each request to the correct container based on the hostname. You don't need to publish ports for the other containers because the reverse proxy forwards the requests for you. There's a blog post that explains how this works in detail: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
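As a rough sketch of what that looks like in docker-compose.yml (the image name and the VIRTUAL_HOST/VIRTUAL_PORT variables come from the nginx-proxy tooling described in that post; check its docs for the details):
services:
  proxy:
    image: jwilder/nginx-proxy                       # automated reverse proxy from the linked post
    ports:
      - "8080:80"                                    # the single port published on the host
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro     # lets the proxy discover the other containers
  fares:
    environment:
      - VIRTUAL_HOST=fares                           # hostname the proxy routes on
      - VIRTUAL_PORT=8080                            # port the service listens on in its container
  # rabbit, checkin, ... get their own VIRTUAL_HOST the same way
The /etc/hosts entries then only need to point all of the hostnames at the docker host; the proxy picks the right container from the Host header.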

Related

How to connect to a docker container using a domain name

I have a Docker web application. When it starts via docker-compose, an IP address is assigned dynamically, let's say 192.168.96.3, and the webapp listens on port 6000, so to connect to it I use http://192.168.96.3:6000. Is there any way, in the docker-compose.yml, to assign the domain name foo.local so that I can connect to the webapp at foo.local:6000?
In my docker-compose.yml, can I add a domain name that my host machine can map to the dynamic ip of the container?
Note:
The container uses its own network, so attaching it to the host network will conflict with its purpose.
Forwarding container port
You can easily access the container from the host by publishing its port. Once the port is published, you should be able to reach it on the host as localhost:6000. From other machines in your network that can reach the host, use the host's IP or its DNS name.
For example in docker-compose.yml
services:
  myservice:
    image: myImage
    ports:
      - "published_port:container_port"
So if you put "6000:6000", it means that port 6000 on the host will be forwarded to the service on port 6000.
DNS
So for overall access, ensure that your company DNS resolves foo.local to your Docker host, and publish the container's port to that host as above.
If you only want to do that from a given machine (for example the host itself), you can add an entry to its /etc/hosts (assuming Linux):
127.0.0.1 localhost
127.0.0.1 foo.local
This assumes we are on the same machine; from another machine, use the right IP instead of 127.0.0.1. And if you have a different OS, check its documentation on how to do the equivalent.

Docker bind ports in network host

I would like to keep the host's IP address and hostname for all my Docker containers; however, I would like to bind different ports, as many of my containers have port 80 in use. Now, I know that port binding doesn't work in network mode host, but I am wondering if there are alternatives that can achieve the same result?
You can use NGINX as a reverse proxy to expose only port 80 and route the container requests internally. It acts as a single entry point to your containers:
https://hub.docker.com/_/nginx
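A sketch of that setup in docker-compose.yml (the application image names and the nginx.conf file are placeholders):
services:
  proxy:
    image: nginx
    ports:
      - "80:80"                                        # the only port published on the host
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro # server blocks that route by hostname
  app1:
    image: my-first-app                                # placeholder; no ports published
  app2:
    image: my-second-app                               # placeholder; no ports published
Inside nginx.conf, each server block matches a server_name and proxy_passes to http://app1:80 or http://app2:80; the service names resolve over the shared compose network, so only the proxy occupies port 80 on the host.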

docker expose wrong ports open [duplicate]

What is the difference between ports and expose options in docker-compose.yml?
According to the docker-compose reference,
Ports is defined as:
Expose ports. Either specify both ports (HOST:CONTAINER), or just the container port (a random host port will be chosen).
Ports mentioned in docker-compose.yml will be shared among the different services started by docker-compose.
Ports will be exposed to the host machine to a random port or a given port.
My docker-compose.yml looks like:
mysql:
  image: mysql:5.7
  ports:
    - "3306"
If I do docker-compose ps, it will look like:
Name      Command                       State   Ports
-------------------------------------------------------------------------------------
mysql_1   docker-entrypoint.sh mysqld   Up      0.0.0.0:32769->3306/tcp
Expose is defined as:
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
Ports are not exposed to host machines, only exposed to other services.
mysql:
  image: mysql:5.7
  expose:
    - "3306"
If I do docker-compose ps, it will look like:
Name      Command                       State   Ports
---------------------------------------------------------------
mysql_1   docker-entrypoint.sh mysqld   Up      3306/tcp
Edit
In recent versions of Dockerfile, EXPOSE doesn't have any operational impact anymore; it is just informative.
ports:
Makes the container's specified port(s) reachable from the world outside Docker (the same host machine or a different machine) AND also from inside the Docker network.
More than one port can be specified (that's why it is ports, not port).
expose:
Makes the container's port reachable only from inside the Docker network, AND not accessible from the world outside Docker.
More than one port can be specified.
Ports
This section is used to define the mapping between the host server and Docker container.
ports:
  - 10005:80
It means the application running inside the container listens on port 80. An external system cannot reach that port directly, so it needs to be mapped to a host server port.
Note: you have to open the host port 10005 and modify firewall rules to allow external entities to access the application.
They can then use something like:
http://{host IP}:10005
EXPOSE
This is used exclusively to document the port on which the application runs inside the Docker container.
You can define it in the Dockerfile as well. Generally, it is good and widely used practice to define EXPOSE inside the Dockerfile, because it is rare for anyone to run an application on a port other than its default (such as 80).
Ports
The ports section will publish ports on the host. Docker will set up a forward for a specific port from the host network into the container. By default, this is implemented with a userspace proxy process (docker-proxy) that listens on the first port and forwards into the container, which needs to listen on the second port. If the container is not listening on the destination port, you will still see something listening on the host, but you will get a connection refused if you try to connect to that host port, because the forward into your container fails.
Note, the container must be listening on all network interfaces since this proxy is not running within the container's network namespace and cannot reach 127.0.0.1 inside the container. The IPv4 method for that is to configure your application to listen on 0.0.0.0.
Also note that published ports do not work in the opposite direction. You cannot connect to a service on the host from the container by publishing a port. Instead, you'll find Docker erroring out when it tries to listen on the already-in-use host port.
Expose
Expose is documentation. It sets metadata on the image, and when running, on the container too. Typically, you configure this in the Dockerfile with the EXPOSE instruction, and it serves as documentation for the users running your image, for them to know on which ports by default your application will be listening. When configured with a compose file, this metadata is only set on the container. You can see the exposed ports when you run a docker inspect on the image or container.
There are a few tools that rely on exposed ports. In docker, the -P flag will publish all exposed ports onto ephemeral ports on the host. There are also various reverse proxies that will default to using an exposed port when sending traffic to your application if you do not explicitly set the container port.
Other than those external tools, expose has no impact at all on the networking between containers. You only need a common docker network, and connecting to the container port, to access one container from another. If that network is user created (e.g. not the default bridge network named bridge), you can use DNS to connect to the other containers.
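To illustrate that last point, a minimal compose sketch (the api image is a placeholder): neither service publishes or exposes anything, yet api can reach the database on its container port because compose puts both services on a user-defined network with DNS for the service names.
services:
  api:
    image: my-api          # placeholder; connects to db:5432 over the project network
  db:
    image: postgres:15
    # listens on 5432 inside the container; not reachable from the host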
I totally agree with the answers before.
I would just like to mention that the difference between expose and ports is part of Docker's security concept. It goes hand in hand with Docker networking.
For example:
Imagine an application with a web front-end and a database back-end.
The outside world needs access to the web front-end (perhaps on port
80), but only the back-end itself needs access to the database host
and port. Using a user-defined bridge, only the web port needs to be
opened, and the database application doesn’t need any ports open,
since the web front-end can reach it over the user-defined bridge.
This is a common use case when setting up a network architecture in docker.
So, for example, in a default bridge network no ports are accessible from the outside world.
Therefore you can open an ingress point with "ports". With "expose" you define communication within the network. If you want to expose the default ports, you don't need to define "expose" in your docker-compose file.
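A compose sketch of that front-end/back-end pattern (the web image is a placeholder): only the web service has a ports entry, while the database remains reachable from web over the project network but is never published on the host.
services:
  web:
    image: my-frontend     # placeholder
    ports:
      - "80:80"            # the only ingress point from outside Docker
  db:
    image: mysql:5.7
    expose:
      - "3306"             # informational; web reaches it at db:3306 over the shared network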

eth0 IP in the docker IPs range

One of the machines where we need to deploy Docker containers has its eth0 IP set within the Docker IP range (172.17.0.1/16).
The problem is that when we try to access this server through NAT from outside (SSH etc.), everything "hangs". I guess the packets get misdirected by the Docker iptables rules.
What is the recommendation in this case if we cannot change the eth0 IP?
Docker should avoid subnet collisions if it sees all of the in-use subnets when it creates its networks. However, if you change networks (e.g. on a laptop), then you want to set up address pools for Docker to use. Steps for this are in my slides here:
https://sudo-bmitch.github.io/presentations/dc2018eu/tips-and-tricks-of-the-captains.html#19
The important details are to set up a /etc/docker/daemon.json file containing:
{
  "bip": "10.15.0.0/24",
  "default-address-pools": [
    {"base": "10.20.0.0/16", "size": 24},
    {"base": "10.40.0.0/16", "size": 24}
  ]
}
Adjust the IP ranges as needed. Stop all containers in the bad networks, delete the containers, delete any user-created networks, restart the Docker engine, and then recreate any user-created networks and containers (often the last two steps just involve removing and redeploying a compose project or swarm stack).
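On a systemd-based host with a compose project, the sequence could look roughly like this (adjust to how your containers and networks were actually created):
docker compose down                  # stop and remove the containers and their networks
sudo vi /etc/docker/daemon.json      # add the bip / default-address-pools settings above
sudo systemctl restart docker        # restart the engine so it picks up the new pools
docker compose up -d                 # recreate the project on the new address ranges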
Note, it wasn't clear if you were attempting to connect to your host or container. You should not be connecting directly to a container IP externally (with very few exceptions). Instead you publish the desired ports that you need to be able to access externally, and you connect to the host IP on that published port to reach the container. E.g.
docker run -d -p 8080:80 nginx
Will start nginx with its normal port 80 inside the container, which you normally cannot reach externally. Publishing host port 8080 (it could just as easily be 80 to match the container port) maps connections on that host port to container port 80.
One important prerequisite is that the application inside the container must listen on all interfaces, not just 127.0.0.1, to be reachable from outside of that container's network namespace.
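To quickly check the example above from the host (the container name is a placeholder for whatever docker ps shows):
docker port <container>         # shows the mapping, e.g. 80/tcp -> 0.0.0.0:8080
curl http://localhost:8080      # reaches nginx through the published port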

Resources