Proxy SMTP requests to different Docker containers on the same host

I have the following problem: I'm running multiple Docker containers on one host, all listening on SMTP port 25.
However, each container should be reachable via its own hostname (domain name), and this doesn't work with NGINX because I can't use virtual hosts for SMTP.
Does anyone have a hint? Is there a special module for NGINX or another proxy, or is it simply not possible because SMTP doesn't carry a hostname?
I already tried the "stream" module, but that only works with a single instance, not with multiple containers, and the "map" module is an HTTP module.
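One avenue worth sketching here: plain SMTP on port 25 never sends the destination server's hostname, so a proxy has nothing to route on there, but if clients connect with implicit TLS (SMTPS, typically port 465), NGINX can route on the TLS SNI name, and a map directive does exist in the stream context (ngx_stream_map_module). A rough, untested sketch, with container names and domains as placeholders:

stream {
    # Pick the upstream container from the SNI name sent by the client.
    map $ssl_preread_server_name $smtp_upstream {
        mail1.example.com  mail1:465;   # Docker DNS name of container 1
        mail2.example.com  mail2:465;   # Docker DNS name of container 2
        default            mail1:465;
    }

    server {
        listen 465;
        ssl_preread on;                 # read the SNI name without terminating TLS
        proxy_pass $smtp_upstream;
    }
}

This does not help for plain or STARTTLS SMTP on port 25; there, routing would have to happen per recipient instead, for example via NGINX's mail proxy module with an auth_http service, or by giving each container its own public IP.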

Related

How do I make host.docker.internal work with custom dns configuration enabled?

I have Docker Compose running with several containers. One of those containers is a DNS server running BIND. In my Docker daemon configuration I specify the DNS servers like this:
"dns" : [
"10.1.1.8", /* static ip address of my dockerized bind container defined in compose */
"x.x.x.x", /* my companies internal vpn dns */
"8.8.8.8" /* google dns */
]
This all works fine. My containers in the Compose file use the BIND server running on 10.1.1.8 for DNS lookups, then fall back on my company's internal DNS, and lastly on Google's DNS for external websites.
Docker provides a special DNS name, host.docker.internal, which should point at the host IP (say you want Docker containers to connect to services running locally on the host but not in Docker). I want to use this in a few containers, which should let the container reference the host IP address without hard-coding an IP that can change. In fact, Docker inserts this value into the hosts file (Windows/System32/drivers/etc/hosts) on the host operating system and updates it when the host is assigned a new IP.
The issue is that Docker uses DNS to resolve host.docker.internal. When I use my custom DNS configuration in the daemon, this breaks and I can no longer reach the service on the host OS. I spent two hours debugging this until I realized that host.docker.internal only starts working when I delete the DNS configuration from the daemon config. Is there any way to make Docker resolve this name correctly while still using the custom BIND DNS server on the same machine? Can I somehow point the daemon's DNS at some Docker DNS IP address as well?
Have you considered relying on Docker Compose and how it helps define custom DNS addressing policies?
Here is the link to the official Compose DNS configuration guide.
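As a rough sketch of what that looks like in a Compose file (the service name and image are placeholders; 10.1.1.8 is the BIND address from the question), DNS servers can be set per service instead of daemon-wide:

services:
  app:
    image: myapp:latest   # placeholder image
    dns:
      - 10.1.1.8          # dockerized BIND container
      - 8.8.8.8           # public fallback

With the daemon-level "dns" entry removed, host.docker.internal should resolve again, while these services still query the BIND container first.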

How to use Docker's embedded DNS server for builds via docker-compose?

When I run docker-compose up, I see that it creates a custom network for the containers. AFAIU, this makes the containers use Docker's embedded DNS server for resolving, which can forward DNS lookups to addresses like 127.0.0.1 on the host (e.g. when the host uses a local DNS server, such as dnsmasq).
However, docker-compose seems to use the default bridge network for builds, which makes containers fall back to Google's public DNS servers when the host is configured with 127.0.0.1, for example.
Is it possible to use Docker's embedded DNS server for builds, the way it's used for running containers?
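One approach that may help, offered as a sketch rather than a verified answer: the Compose file's build section accepts a network key, and pointing it at the host network lets RUN steps during the build resolve names through the host's own resolver (service name and build context are placeholders):

services:
  app:
    build:
      context: .
      network: host   # build-time RUN steps use the host network and its DNS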

Access Docker container via DNS name from corporate LAN

I'm looking for a way to access containers that are running on a server in our company LAN by domain name. So far I have only managed to access them by IP.
So the setup is: Docker (for Windows) is running on server srv1.ourdomain.com (Windows Server 2019), the container's network is configured with the l2bridge driver, and the container's DNS name, as specified in the run command, is cont1. It is accessible by DNS name on the Docker host (srv1) and by IP from my machine.
What can I do to access the container by the DNS name cont1.ourdomain.com from my local machine on the same LAN?
I tried to use a proxy (Traefik), but it can't rewrite URLs in the content, so web applications running inside the container break. Because of this I can't host multiple web applications behind that proxy.
I know it is possible to map a container's port to a host port so that it becomes accessible from the LAN through the host name and host port, but the applications I'm running require many ports to be mapped (like 8 ports for each container), and with these containers being short-lived developer environments it would be a nightmare to find a free port pool every time a new container is started.
So again: if I can access the container and its ports by IP, is there a way to do the same by DNS name?
UPD1. The container host is a virtual server running on VMware. I tried to follow those recommendations and configure promiscuous mode. This doesn't help with DNS though.
UPD2. I tried a transparent network as well. For some reason DHCP never assigns a proper IP and the container ends up with an autoconfigured IP from the 168.x.x.x subnet.
You could create a transparent network and make the container discoverable on the network just like a host. However, using host ports is what's recommended.
Did you try PathStrip or PathPrefixStrip with Traefik? That should let you rewrite the URLs for the backend.
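Roughly, with Traefik 1.x frontend rules, that would look like the following sketch (container name, prefix, and port are placeholders):

docker run -d --name cont1 \
  --label "traefik.frontend.rule=PathPrefixStrip:/cont1" \
  --label "traefik.port=8080" \
  my-dev-image

PathPrefixStrip matches requests under /cont1 and removes the prefix before forwarding, so the application inside the container sees paths starting at /.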

Exposing A Containerized Web App on a Public Domain

I am trying to expose my containerized web app to the Internet over a public domain, but all the articles out there seem to teach how to play around with Docker's local network, for example how to run a containerized DNS server or how to run a DNS server in Docker. Even if I set up a DNS server that resolves a domain like example.com to an IP such as 172.20.0.3, that address is obviously only local to the Docker network and not accessible from the outside.
The scenario seems easy. I have a Docker host with a public static IP, let's say 64.233.191.255, and I host multiple domains on it. Each domain is mapped to a web server and serves a (containerized) web application. Each application has its own network defined in docker-compose.yml under the networks section, on which all the other services related to the web app (e.g. mariadb, redis, etc.) communicate. Should I have a DNS server inside every container I create? How do I translate local addresses to the static public IP address so as to make the web apps available on their respective domains on port 80?
I found a service called ngrok that exposes a container over a public domain name like xxxx.ngrok.io, but that is not what I want. I would like to serve my website on my own domain.
This has proved to be anything but trivial for me. Also, there's no explicit tutorial in Docker's documentation on how to do this. I suppose this is not how it is done in the real world, as people probably do it via Kubernetes or OpenShift.
Should I have a bind9 configuration on the host or a containerized bind9 to manage DNS queries? Do I need iptables rules for this scenario?
You have to map both domains to the public IP via DNS and then use a reverse proxy to forward the requests to the correct Apache server.
So basically 3 vhosts inside the Docker host.
Vhost 1 (the reverse proxy) receives the request and maps the domain to the Vhost 2 or Vhost 3 address.
https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html
You can use a reverse proxy with Nginx for each application. For example, say you're running two apps on ports 3000 and 3001. Assign a proper DNS name to each application,
e.g. localhost:3000 maps to example1.com.
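A minimal sketch of those two vhosts in Nginx (domains and ports are placeholders matching the example above):

server {
    listen 80;
    server_name example1.com;
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name example2.com;
    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
    }
}

Both domains point at the host's public IP in DNS; Nginx then picks the backend by the Host header.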

Remote Docker container by hostname

How do you access a remote Docker container by its hostname?
I need to access remote Docker containers by their hostnames (or some constant IPs) for development and testing purposes. I have tried:
looking for any DNS approach (have not found any clues),
importing /etc/hosts (probably impossible),
creating tunnels (only this works, but it is very time-consuming).
It's the same as running any other process on a host, Docker or not Docker: you access it via the host name or IP address of the host and the port the service is listening on (the first port of the docker run -p argument). Docker containers don't have externally visible individual IP addresses any more than non-Docker HTTP or ssh daemons do.
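As a rough sketch (image, host name, and ports here are placeholders):

# on the remote Docker host
docker run -d --name web -p 8080:80 nginx

# from your machine, using the host's name rather than the container's
curl http://dockerhost.example.com:8080/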
If you do have DNS infrastructure available to you, you could set up CNAME records to resolve particular service names to the specific hosts that are running them.
One solution that may help you is some sort of service registry; in the past I've used Consul with some success. You can configure Consul with some health checks or other probes ("look for an HTTP service on port 12345 that answers GET / calls"), and it will provide its own DNS service ("okay, http://whatevername.service.consul:12345/ will reach your service on whichever hosts it happens to be running on").
Nothing in the Docker infrastructure specifically helps this. Using /etc/hosts is distinctly not a best practice: the name-to-IP mapping needs to be kept in sync across all machines and you'll start wishing you had a network service to publish it for you, which is exactly what DNS is for.
