Can we use a DNS name for a service running on a Docker Swarm on my local system? - docker

Say I have a Swarm of 3 nodes on my local system, and I create a service, say Drupal, with 3 replicas in this swarm, so each node runs one Drupal container. When I want to access it in my browser, I have to use the IP address of one of the nodes, <IP Address>:8080, to reach Drupal.
Is there a way I can set a DNS name for this service and access it by that name instead of having to use an IP address and port number?

You need to configure the DNS server used by the host making the query. So if your laptop queries public DNS, you need to create a public DNS entry that resolves from the internet (on a domain you own). That entry should resolve to the docker host IPs running the containers, or to a load balancer in front of those hosts. You then publish the port on the host for the container you want to access.
You should not be trying to talk directly to the container IP; container IPs are not routable from outside the docker host. The docker DNS used for service discovery is for container-to-container communication, which is separate from communication from outside of docker that goes through a published port.
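As a minimal sketch of that setup, assuming the Drupal image serves HTTP on port 80 inside the container, and with drupal.example.com and the node addresses as placeholders you would replace with your own:

docker service create --name drupal --replicas 3 \
  --publish published=8080,target=80 drupal
# Then add A records in a DNS zone you control, pointing at the swarm node IPs, e.g.
#   drupal.example.com.  A  192.0.2.11
#   drupal.example.com.  A  192.0.2.12
#   drupal.example.com.  A  192.0.2.13
# and browse to http://drupal.example.com:8080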

Related

Docker containerization: accessing a container from another machine using the container name instead of the host IP

My client and server are hosted on a Linux host, each running in a docker container. From another machine on the same network I am able to connect using the host IP address, but I am not able to connect using the container name or container IP address. I want to call an API of my server-side application using the container name or container IP, not the host machine IP.
You cannot.
The (Docker) container runtime(s) are entirely distinct; there's no shared state that would permit the container on one host to be able to enumerate the container names on another host.
The easiest way to connect a client container on one host with a server container on another host is to publish the server container's ports to its host, leverage the (shared) networking between the client and server hosts, and use either the server host's name or IP address to access it from the client.
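A rough sketch of that pattern; the hostname, image name, port, and URL path below are placeholders:

# On the server host (say server-host.example.com), publish the API port:
docker run -d --name api-server -p 8000:8000 my-api-image
# From the client container or its host, go through the server host, not the container:
curl http://server-host.example.com:8000/api/health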

Access Docker container via DNS name from corporate LAN

I'm looking for a way to access containers that are running on a server in our company LAN by domain names. So far I have only managed to access them by IP.
The setup: Docker (for Windows) is running on server srv1.ourdomain.com (Windows Server 2019), the container's network is configured with the l2bridge driver, and the container's DNS name, as specified in the run command, is cont1. It is accessible by DNS name on the docker host (srv1) and by IP from my machine.
What can I do to access the container by the DNS name cont1.ourdomain.com from my local machine located on the same LAN?
I tried to use a proxy (Traefik), but it can't rewrite URLs in the content, so web applications running inside the container break. Because of this I can't host multiple web applications behind that proxy.
I know that it is possible to map a container's port to a host port, which then makes it accessible from the LAN through the host name and host port, but the applications I'm running require many ports to be mapped (around 8 ports per container), and since these containers are short-lived developer environments it would be a pain to find a free port pool every time a new container is started.
So again: if I can access the container and its ports by IP, is there a way to do the same by DNS name?
UPD1. The container host is a virtual server running on VMware. I tried to follow the recommendations and configure promiscuous mode. This doesn't help with DNS though.
UPD2. I tried a transparent network as well. For some reason DHCP never assigns a proper IP, and the container ends up with an autoconfigured IP from the 168.x.x.x subnet.
You could create a transparent network and make the container discoverable on the network just like a host. However, using host ports is what's recommended.
Did you try PathStrip or PathPrefixStrip with Traefik? That should let you rewrite the URLs for the backend.
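For reference, a sketch of what that suggestion might look like with Traefik v1 docker labels; the backend port and image name are placeholders, and the container name is taken from the question:

docker run -d --name cont1 --label "traefik.enable=true" --label "traefik.frontend.rule=PathPrefixStrip:/cont1" --label "traefik.port=8080" my-dev-image
# Requests to http://srv1.ourdomain.com/cont1/... are forwarded to the container with
# the /cont1 prefix stripped from the request path.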

Docker Swarm, how to communicate to other services through their "hostname" only?

I have some experience with Docker Compose and container linking. In a non-swarm environment, you can easily connect from, e.g., the web container to the db_mysql container using its name; for example, in PHP I can configure the MySQL connection as:
$dsn = 'mysql:host=db_mysql';
I am having a hard time understanding how that works with Docker in Swarm mode, especially considering the "replicas" and "load balancing" mechanisms.
Let's say I have 5 different Docker Machines, each having a different public IP, participating in a Swarm. I also have a web service and a db service that's replicated across these 5 different machines (1 instance per each machine).
My question is: how do I make any of the 5 web containers, communicate to any of the 5 db_mysql containers without forcing these web containers to have knowledge of any Docker Machine public IPs or the fact that these containers live within a Swarm?
You use the service name. This will resolve in DNS to either a VIP or the 5 IP addresses (one for each replica) of the service. Under the covers, the VIP uses IPVS to round-robin to one of the healthy replicas without suffering from stale DNS issues. You can also get all the replica IP addresses using tasks.<service-name> even if you use the default VIP.
In Docker's DNS implementation, you can resolve the container name and any network alias. The network aliases include the service name, which uses DNSRR with docker-compose (without swarm) or resolves to a VIP in swarm mode. The hostname of the container does not resolve, likely because it can change outside of the control (and therefore knowledge) of the docker engine.
Using Docker version 19.03.5, the correct DNS name to query in order to obtain all the IP addresses of the replicas of a service is the following:
tasks.<service-name>
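A minimal sketch of the two lookups, using the service names from the question; the overlay network name, the web image, and the MySQL password are placeholders:

docker network create -d overlay appnet
docker service create --name db_mysql --network appnet --replicas 5 \
  -e MYSQL_ROOT_PASSWORD=example mysql
docker service create --name web --network appnet --replicas 5 my-web-image

# From a shell inside any web task:
getent hosts db_mysql          # resolves to the service VIP
getent hosts tasks.db_mysql    # one A record per healthy replica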

Remote Docker container by hostname

How do you access a remote Docker container by its hostname?
I need to access remote Docker containers by their hostnames (or some constant IPs) for development and testing purposes. I have tried:
looking for any DNS approach (have not found any clues),
importing /etc/hosts (probably impossible),
creating tunnels (only this works, but it is very time consuming).
It's the same as running any other process on a host, Docker or not Docker: you access it via the host name or IP address of the host and the port the service is listening on (the first port of the docker run -p argument). Docker containers don't have externally visible individual IP addresses any more than non-Docker HTTP or ssh daemons do.
If you do have DNS infrastructure available to you, you could set up CNAME records to resolve particular service names to the specific hosts that are running them.
One solution that may help you is some sort of service registry; in the past I've used Consul with some success. You can configure Consul with some health checks or other probes ("look for an HTTP service on port 12345 that answers GET / calls"), and it will provide its own DNS service ("okay, http://whatevername.service.consul:12345/ will reach your service on whichever hosts it happens to be running on").
Nothing in the Docker infrastructure specifically helps this. Using /etc/hosts is distinctly not a best practice: the name-to-IP mapping needs to be kept in sync across all machines and you'll start wishing you had a network service to publish it for you, which is exactly what DNS is for.
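A brief sketch of the host-name-plus-published-port pattern described above; the hostnames are placeholders and nginx stands in for the actual service image:

# On the Docker host, assumed to be dockerhost.example.com:
docker run -d --name web -p 8080:80 nginx
# From any machine that can resolve that host:
curl http://dockerhost.example.com:8080/
# Optionally, a CNAME in DNS you control gives the service a friendlier name:
#   myapp.example.com.  CNAME  dockerhost.example.com.
# after which http://myapp.example.com:8080/ reaches the same published port.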

static IP for container in docker swarm

I'm new to docker and I have a simple question.
I have 3 hosts running a docker swarm with the following ip's:
192.168.0.52
192.168.0.53
192.168.0.54
Also, I've created an HTTP service with a published port 8080. As expected, the service is available at all hosts' IPs (e.g. 192.168.0.52:8080).
Is it possible to assign a static IP address to the service (for example 192.168.0.254) and be able to reach it from any computer on my local network (192.168.0.0/24)?
This way I should have high availability for my service; if the host with the service goes down, it should be started on another host, but keep the same IP.
Thanks,
Alex
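For reference, the behaviour described in the question (one published port reachable on every node's IP) comes from the swarm routing mesh. A minimal sketch of such a service, with nginx standing in for the HTTP image:

docker service create --name http --replicas 1 --publish published=8080,target=80 nginx
# http://192.168.0.52:8080, http://192.168.0.53:8080 and http://192.168.0.54:8080
# all reach the service, regardless of which node the task is actually running on.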
