How to connect to a docker container using a domain name - docker

So I have a docker web application. When it comes up with docker-compose, an IP address is assigned to the container dynamically, let's say 192.168.96.3. The webapp listens on port 6000, so to reach it I use http://192.168.96.3:6000. Is there any way, in the docker-compose.yml, to assign the domain name foo.local so that when I connect to the webapp I can type foo.local:6000?
In my docker-compose.yml, can I add a domain name that my host machine can map to the dynamic ip of the container?
Note:
The container uses its own network, so attaching it to the host network will conflict with its purpose.

Forwarding container port
You can easily access it from the host by publishing the container's port. With the port published, you can reach it from the host as localhost:6000. From other machines in your network that can reach the host, use the host's IP or its name/DNS name.
For example in docker-compose.yml
services:
  myservice:
    image: myImage
    ports:
      - "published_port:container_port"
So if you put "6000:6000", it means that port 6000 on the host will be forwarded to the service on port 6000.
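Applied to your case it would look something like this (just a sketch, the service and image names are placeholders):
services:
  webapp:
    image: my-webapp-image
    ports:
      - "6000:6000"
After that, localhost:6000 on the host reaches the container no matter which IP the container got on its own network.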
DNS
So I would say, for access from everywhere, make sure your company DNS resolves foo.local to your docker host, and publish the container's port to the docker host.
If you only want that from a given machine, e.g. the host itself, you can add an entry to /etc/hosts (assuming Linux):
127.0.0.1 localhost
127.0.0.1 foo.local
This assumes we are on the same machine; from another machine, use the docker host's real IP instead of 127.0.0.1. If you are on a different OS, check its documentation for how to edit the hosts file.
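For example, on another machine in the LAN the entry would look like this (assuming 192.168.0.10 is your docker host's address, adjust to yours):
192.168.0.10 foo.local
Then foo.local:6000 resolves to the docker host, which forwards port 6000 into the container.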

Related

docker-compose: open port in container but not bind it from host

(Note: the whole problem is because I misread the IP address of the docker network. my-network is 172.22.0.0/16, not 127.22.0.0/16. I slightly modified the OP to reflect the original problem I encountered.)
I created a service (a small web server) using docker-compose. The network part is defined as
services:
  service:
    image: ... (this image uses port 9000)
    ports:
      - 9000:9000
networks:
  default:
    name: my-network
After docker-compose up, I observe:
the host gets an IP address 172.22.0.1 and the client gets 172.22.0.2.
I can successfully ping the client from the host with ping 127.22.0.2.
From the host machine: the web server can be reached using
127.22.0.1:9000
127.22.0.2:9000
localhost:9000
192.168.0.10:9000 (This is the host's IP address in the LAN)
Now I want to restrict the access from the host using 172.22.0.2:9000 only. I feel this should be possible if I don't bind the container's 9000 port to the host's 9000 port. Then I deleted the ports: 9000:9000 part from the docker-compose.yml. Now I observe:
All the above four methods do not work now, including 127.22.0.2:9000
The client can still be pinged from the host using 127.22.0.2
I think: since the host and the container are both on the bridge network my-network and have obtained their IP addresses, the web server should still be reachable at 127.22.0.2:9000. But this is not the case.
My questions:
why does it work like this? Shouldn't the host/container in the same subnet 127.22.0.0/16 be able to talk to each other freely?
How can I achieve what I want: not forward port 9000 from the host to the container, and only allow access to the container via its subnet IP address?
Your understanding of the networking is correct. Removing the port binding from the docker-compose.yml will remove the exposed port from the host. Since the host is also part of the virtual network my-network with an IP in the same subnet as the container, your service should be reachable from the host using the container IP directly.
But I think, this is actually a simple typo and instead of
127.22.0.0/16
you actually have
172.22.0.0/16
as the subnet for my-network! This is a typical subnet used by docker in the default configuration, while 127.0.0.0/8 is always bound to the loopback device!
So connecting to 127.22.0.2 will actually connect you to localhost - which is consistent with the symptoms you encountered:
connecting to 127.22.0.2:9000 will work only if the port is exposed on the host
you can always ping 127.22.0.2 since it is a loopback address
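With the typo fixed, the setup you want looks roughly like this (a sketch; the image name and the fixed address are placeholders, and pinning the address is optional, it just makes the container IP predictable):
services:
  service:
    image: my-webserver   # listens on 9000; no ports: section, so nothing is published on the host
    networks:
      default:
        ipv4_address: 172.22.0.2
networks:
  default:
    name: my-network
    ipam:
      config:
        - subnet: 172.22.0.0/16
From the host, 172.22.0.2:9000 then reaches the container, while localhost:9000 and the LAN IP do not.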

docker host and port info from the container

I am deploying an application in a Docker container. The application sends requests to another server with a callback URL. The callback URL contains the host and port name where actually the app runs.
Configuring this callback URL in a "stable, non-dynamic" test environment is easy because we know the IP and port where the app runs. But in Docker the callback URL is the host machine's IP address plus the port that was configured in the docker-compose.yml file. Both parameters are dynamic and cannot be hardcoded in the Docker image.
I need the docker host IP and the exposed port by the container info somehow in the container.
This is how my container gets the docker host machine IP:
version: '3'
services:
  my-server:
    image: ...
    container_name: my-server
    hostname: my-server
    ports:
      - "1234:9876"
    environment:
      - DOCKER_HOST_IP=${HOST_IP}
I set the host IP when I spin up the container:
HOST_IP=$(hostname -i) docker-compose up
Maybe this is not an elegant way but this is the best that I could do so far.
But I have no idea, how to get the exposed port info inside the container.
My idea was that once I know the host IP inside the container, I can run nmap $HOST_IP to get the list of open ports and grep for the proper line somehow. But this does not work, because I run many Docker containers on this host and I am not able to select the proper line with grep.
Here is the result of the nmap:
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
443/tcp open https
5001/tcp open commplex-link
5002/tcp open rfe
7201/tcp open dlip
1234/tcp open vcom-tunnel
1235/tcp open vcom-tunnel
1236/tcp open teradataordbms
60443/tcp open unknown
So when I execute nmap from the container then I can see all of the opened ports in my host machine. But I have no idea, how to select the line which belongs to the container where I am.
Can I somehow customize the service name before Docker spins up the containers?
What is the best way to get the port number that was opened on the host machine by the container?
You should pass the complete externally-visible callback URL to the application.
ports:
  - "1234:9876"
environment:
  - CALLBACK_URL=http://physical-host.example.com:1234/path
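If the externally visible name is only known at deploy time, you can inject it the same way you already inject HOST_IP (a sketch; the URL is a placeholder):
environment:
  - CALLBACK_URL=${CALLBACK_URL}
and then start the stack with:
CALLBACK_URL=http://physical-host.example.com:1234/path docker-compose up -d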
You can imagine an interesting variety of scenarios where the host IP address isn't directly routable either. As a basic example, say you're running the container, on your laptop, at home. The laptop's IP address might be 192.168.1.2/24 but that's still a "private" address; you need your router's externally-visible IP address, and there's no easy way to discover that.
xx.xx.xx.xx  /--------\  192.168.1.1             192.168.1.2  /--------------\
-------------| Router |---------------------------------------| Laptop       |
             \--------/                                        |  Docker      |
                                                               |   172.17.1.2 |
Callback address must be                                       |   Server     |
http://xx.xx.xx.xx/path                                        \--------------/
In a cloud environment, you can imagine a similar setup using load balancers. Your container might run on some cloud-hosted instance. The container might listen on port 11111, and you remap that to port 22222 on the instance. But then in front of this you have a load balancer that listens on the ordinary HTTPS port 443, does TLS termination, and then forwards to the instance, and you have a DNS name connected to that load balancer; the callback address would be https://service.example.com/path, but without explicitly telling the container this, there's no way it can figure this out.

How can I access deployments on Rancher local cluster from vm Ip

I have Ubuntu server 20.04 running as a guest vm. On it I have installed Rancher within a docker container, and mapped port 443 to 9091 to have access to the Rancher UI at 192.168.0.50:9091. Within Rancher I have deployed a nextcloud instance on the local cluster and forwarded the nextcloud port 443 to port 9700 using HostPort. The link generated for the pod is taking me to 172.17.0.2:9700, which I am assuming is the internal Ip for the local node within the cluster.
How can I access the nextcloud container with a browser?
Currently I cannot access it if I simply navigate to <ip>:9700. Is there a way to access the node with the IP I use for my vm?
Thanks
The "publish the container port" field in the Port Mapping section is where you specify the port that the container listens on.
It relates directly to containerPort in a Kubernetes yaml file. Exposing a port in this field gives the system additional information about the network connections a container uses, but the field is primarily informational. Not specifying a port here does not prevent that port from being exposed: any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network.
I checked the nextcloud image specs and it looks like the apache image listens on port 80 and the fpm image uses 9000.
For more reading, please see the Rancher documentation on how to expose workloads.
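In the workload's container spec the mapping would look roughly like this (a sketch; the hostPort value comes from your setup, the image tag and names are assumptions):
containers:
  - name: nextcloud
    image: nextcloud:apache    # apache variant listens on 80
    ports:
      - containerPort: 80
        hostPort: 9700
With that in place the pod should be reachable at the node's (your VM's) IP on port 9700, e.g. 192.168.0.50:9700.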

How to assign the host ip to a service that is running with docker compose

I have several services specified inside a docker compose file that are communicating with each other via links. Now I want one of these services to talk to the outside world and fetch some data from another server in the host network. But the docker service uses its internally assigned IP address, which leads to the firewall of the host network blocking its requests. How can I tell this docker service to use the IP address of the host instead?
EDIT:
I got a step further: what I'm looking for is the network_mode option with the value host. But the problem is that network_mode: "host" cannot be mixed with links. So I guess I have to change the configuration of all the docker services to not use links. I will try how this works out.
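For reference, the edit above boils down to something like this (a sketch; service and image names are placeholders, and any links entries have to be removed since they cannot be combined with host networking):
services:
  my-service:
    image: my-image
    network_mode: "host"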
You should publish a port for that service, like this:
ports:
  - "8000:8000"
The 8000 on the left is the host port and the 8000 on the right is the container port.

mapping containers to docker host's /etc/hosts automatically with the same port for each container

I have a basic docker-compose setup consisting of the following:
docker bridge subnet starting at 192.168.50.0/24
4 services: rabbit, spring-config, fares, checkin
each of these services has its hostname correctly set and they are able to find each other from within the subnet (192.168.50.0). IPs are dynamically assigned in this subnet, and they all listen on port 8080 within their respective containers.
From the host, the bridge network is visible and each instance of the container is accessible using its ip.
I cannot manage to resolve these host entries without mapping a different port than 8080 to the docker host.
For this entry in my host's /etc/hosts:
192.168.50.1 fares rabbit config book checkin: the services are only accessible if I explicitly bind the services' port 8080 to my host's port 8081, port 8082, port 8083... for each service in the .yml file.
Is there another way to make sure the services are discoverable by their dns name even from outside of the subnet?
You can't bind all 4 containers to the same port on the host. Only one container per port. But there are some workarounds:
Option 1: Use Different Ports for Each Container
For example, bind ports 8081, 8082, 8083, and 8084.
In /etc/hosts, map each container's IP correctly.
Specify the port in addition to the hostname when connecting, like https://fares:8081.
Your /etc/hosts might look like this:
192.168.50.1 fares
192.168.50.2 rabbit
...
Option 2: Use a Reverse Proxy
You can set up an additional Docker container as a reverse proxy in your docker-compose.yml. The reverse proxy container can bind to port 8080 and forward the request to the correct container depending on the hostname. You don't need to bind ports from the other containers on the host because your reverse proxy is forwarding the requests. There's a blog post that explains how this works in detail: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
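A rough sketch of that setup with jwilder/nginx-proxy (the image and service names are assumptions, adjust to your compose file):
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "8080:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  fares:
    image: fares-image
    environment:
      - VIRTUAL_HOST=fares
      - VIRTUAL_PORT=8080
Each backend only declares VIRTUAL_HOST/VIRTUAL_PORT; the proxy watches the Docker socket and routes requests by Host header, so you point fares, rabbit, etc. at the docker host in /etc/hosts and only the proxy needs a published port.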
