I am trying to expose my containerized web app to the Internet over a public domain, but all the articles out there seem to teach how to play around with Docker's local network, for example how to run a containerized DNS server. Even if I set up a DNS server that resolves a domain like example.com to an IP such as 172.20.0.3, that address is obviously only local to the Docker network and not reachable from the outside.
The scenario seems easy. I have a Docker host with a public static IP, let's say 64.233.191.255, and multiple domains pointed at it. Each domain is mapped to a web server and serves a (containerized) web application. Each application has its own network, defined in docker-compose.yml under the networks section, on which all the other services related to the web app (e.g. mariadb, redis, etc.) communicate. Should I run a DNS server inside every container I create? How do I translate local addresses to the static public IP address so the web apps are available on their respective domains on port 80?
I found a service called ngrok that exposes a container over a public domain name like xxxx.ngrok.io, but that is not what I want: I would like to serve my website on my own domain.
This has proved to be anything but trivial for me, and there's no explicit tutorial in Docker's documentation on how to do it. I suppose this is not how it is done in the real world, as production setups probably use Kubernetes or OpenShift.
Should I have a bind9 configuration on the host, or a containerized bind9, to manage DNS queries? Do I need iptables rules for this scenario?
You have to map both domains to the public IP via DNS and then use a reverse proxy to forward the requests to the correct Apache server.
So basically 3 vhosts inside the Docker host.
Vhost 1 (the reverse proxy) receives the request and maps the domain to Vhost 2 or Vhost 3.
https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html
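A minimal sketch of what the reverse-proxy vhost might look like, assuming (as placeholders) two domains app1.example.com and app2.example.com and the two backend vhosts listening on local ports 8081 and 8082; it requires mod_proxy and mod_proxy_http to be enabled:

```apache
# Vhost 1: the reverse proxy, listening on the public port 80.
# Domain names and backend ports are illustrative assumptions.
<VirtualHost *:80>
    ServerName app1.example.com
    ProxyPreserveHost On
    ProxyPass        "/" "http://127.0.0.1:8081/"
    ProxyPassReverse "/" "http://127.0.0.1:8081/"
</VirtualHost>

<VirtualHost *:80>
    ServerName app2.example.com
    ProxyPreserveHost On
    ProxyPass        "/" "http://127.0.0.1:8082/"
    ProxyPassReverse "/" "http://127.0.0.1:8082/"
</VirtualHost>
```

ProxyPreserveHost keeps the original Host header so the backend vhosts can still tell the domains apart.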
You can use a reverse proxy with Nginx in front of each application. For example, say you're running two apps on ports 3000 and 3001. Assign a proper DNS name to each application,
e.g. example1.com maps to localhost:3000.
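A hedged sketch of those Nginx server blocks, keeping the example's ports; example1.com is from the example above, and example2.com for the second app is an assumption:

```nginx
# Both apps served on the public port 80, routed by Host header.
server {
    listen 80;
    server_name example1.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name example2.com;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```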
Related
I have a Google Cloud VM which runs a Docker image. The Docker image runs a specific Java app on port 1024. I have pointed my domain's DNS to the VM's public IP.
This works: I can go to mydomain.com:1024 and access my app, since Google Cloud directly exposes the Docker port as a public port. However, I want to access the app through https://mydomain.com (port 443), so basically map port 443 to port 1024 in my VM.
Note that my Docker image starts an nginx service. Previously I configured the Java app to run on port 443; the nginx service then listened on 443 and Google Cloud exposed this HTTPS port, so everything worked fine. But I cannot use port 443 for my app anymore, for specific reasons.
Any ideas? Can I configure nginx somehow to map to this port? Or do I setup a load balancer to proxy the traffic (which seems rather complex as this is all pretty new to me)?
P.S. If I am right, in Google Cloud you cannot use "docker run -p 443:1024 ...", which would basically do the same; the containerized VMs do not allow this.
Container Optimized OS maps ports one to one. Port 1000 in the container is mapped to 1000 on the public interface. I am not aware of a method to change that.
For your case, use Compute Engine with Docker or a load balancer to proxy connections.
Note: if you use a load balancer, your app does not need to manage SSL/TLS. Offload SSL/TLS to the load balancer and just publish HTTP within your application. Google can then manage your SSL certificate issuance and renewal for you. You will find that managing SSL certificates for containers is a deployment pain.
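If you do move to a plain Compute Engine VM (or anywhere the port mapping is under your control), the nginx service already inside the image could terminate TLS on 443 and proxy to the app on 1024. A minimal sketch, with placeholder certificate paths; note this does not help on Container-Optimized OS, where the port mapping is fixed:

```nginx
server {
    listen 443 ssl;
    server_name mydomain.com;

    # Placeholder certificate paths -- adjust to your setup
    ssl_certificate     /etc/ssl/certs/mydomain.crt;
    ssl_certificate_key /etc/ssl/private/mydomain.key;

    location / {
        proxy_pass http://127.0.0.1:1024;
        proxy_set_header Host $host;
    }
}
```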
I've set up an app where a Node.js backend has to communicate with a Rasa chatbot backend through a React frontend. All services run through the same docker-compose. Being a Docker beginner, there are some things I'm not sure about:
Communication between the host and a container is done using the container's IP:
the browser opens the local React server running on localhost:3000 or 172.22.0.1:3000
the browser sends a request to the Express backend on localhost:4000 or 172.22.0.2:4000
However, communication between two Docker containers is done using the container's name:
the Rasa server communicates with the Rasa custom action server through http://action_server:5055/webhooks
the Rasa custom action server communicates with the Express backend through http://backend_name:4000/users/
My problem is that when I need to contact the Rasa backend from my React frontend, I need to put in the Rasa container's IP, which (sometimes) changes when docker-compose is brought up again. To work around this I run docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app_rasa_1 to get the IP and manually change it in the React frontend.
Is there a way to avoid changing the IP altogether and use the container name (or an alias/link) instead? Or what would be a way to automate the change of the container's IP in the React frontend (are environment variables updated via a script an option)?
Completely ignore the container-private IP addresses. They're implementation details that have several practical problems, including (as you note) them changing when a container is recreated. (They're also unreachable on non-Linux hosts, or if the browser isn't on the same host as the containers.)
You show the correct patterns in your question. For calls between containers, use the container names as host names (this setup is described more in Networking in Compose). For calls from outside containers, including from browser-based applications, use the host's DNS name or IP address and the first number from the ports: you publish.
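As a sketch of that split, using illustrative service names (Compose puts all services on a shared network where each service name resolves to its container):

```yaml
# docker-compose.yml (names and ports are illustrative)
services:
  backend:
    build: ./backend
    ports:
      - "4000:4000"   # published: the browser reaches it as http://localhost:4000
  action_server:
    build: ./actions
    # no ports: needed -- only other containers call it,
    # and they do so by service name, e.g. http://action_server:5055
```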
If the browser application needs to contact a back-end server, it needs a path to do this. This could be via published ports:, or one of your other components could proxy the request to the service (maybe using the express-http-proxy middleware).
A dedicated container that only proxies to other backend services is also a useful pattern (Docker Nginx Proxy: how to route traffic to different container using path and not hostname includes some examples), particularly since this will let you use path-only URLs like /api or /rasa in your browser application. If the React application is served from http://localhost:8080/, and the main backend is http://localhost:8080/api, then you can just make HTTP requests to /api and they will be interpreted relative to the page's URL. This avoids the hostname problem completely, so long as your reverse proxy has path-based routes to every container you need to directly contact.
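A hedged sketch of such a proxy container's Nginx config, using the /api and /rasa paths mentioned above; the upstream host names are assumed Compose service names, and the Rasa port is a placeholder:

```nginx
server {
    listen 8080;

    # Serve the static React build
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # Path-based routes to the backend containers,
    # addressed by their Compose service names
    location /api/ {
        proxy_pass http://backend:4000/;
    }
    location /rasa/ {
        proxy_pass http://rasa:5005/;
    }
}
```

With this in place, the React code only ever issues requests to relative URLs like /api/users, and the proxy decides which container handles them.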
I have some web applications under the same domain and different sub-domains running on the same machine. I am using Apache Virtual Host configuration to get pretty URLs for all these applications. I am now trying to Dockerize one of these applications, so I exposed ports 80 and 443 to different ports on the host machine.
I can successfully access containerized web application using URL format http://localhost:{http exposed port} OR https://localhost:{https exposed port}.
Now, if I try using a Virtual Host configuration within the container, it does not work unless I stop the host machine's Apache server.
How do I set up pretty URLs for the containerized application using the ports exposed from within the container, while still running an Apache server on the same machine?
A reverse proxy is a good option for running multiple Docker containers: each container is exposed on a different port, but all of them are served through the same port by the reverse proxy. This link will be helpful:
https://www.digitalocean.com/community/tutorials/how-to-use-apache-as-a-reverse-proxy-with-mod_proxy-on-ubuntu-16-04
You can also try another approach: expose your application on a different IP and configure that IP in /etc/hosts. Please check it here:
http://jasani.org/posts/docker-now-supports-adding-host-mappings-2014-11-19/index.html
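The linked post is about Docker's --add-host flag, which injects an extra /etc/hosts entry into a container at run time; a minimal illustration, where the host name, IP, and image name are placeholders:

```shell
# Make "legacy-app.local" resolve to 192.168.1.50 inside the container
docker run --add-host=legacy-app.local:192.168.1.50 -d my-app-image
```

A matching entry in the host machine's own /etc/hosts then gives you the same pretty name on both sides.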
Basically, I have Docker Swarm running on a physical machine with a public IPv4 address and a registered domain, say example.com.
Now, I would like to access the running containers from the Internet using their names as subdomains.
For instance, let's say there is a MySQL container running with the name mysqldb. I would like to be able to access it from the Internet via the following DNS name:
mysqldb.example.com
Is there any way to achieve this?
I've finally found a so-called solution.
In general it's not possible :)
However, HTTP(S) traffic on well-known ports can be forwarded to a container by using a reverse proxy such as Nginx, HAProxy, etc.
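For the HTTP case, a sketch of one such reverse-proxy entry, assuming a hypothetical web container published on host port 8080; note this only works for HTTP, since routing by host name relies on the Host header, so a raw TCP service like MySQL cannot be exposed as mysqldb.example.com this way:

```nginx
server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```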
I have 2 Docker containers on one network: web and backend.
When I access "web" from the host machine (http://web:3000) it works.
"web" has a "test connection" button to the backend machine, which just tries to access a static page on the backend (http://backend:80/isAlive).
But since the call is made from the browser, and the browser is on the host machine, the "backend" hostname cannot be resolved.
I can fix this by editing my hosts file so that "backend" resolves to localhost, but is there a more intelligent way to do this?
You should strongly consider setting up a separate container acting as a reverse proxy forwarding requests to different containers using virtual hosts.
backend.foo.bar -> talks to backend container
web.foo.bar -> talks to web container
If you don't want to configure DNS, you can just map those names to localhost in your hosts file for now.
The quickest way to get this working is using jwilder/nginx-proxy. Once you have it working, you can go into the container and look at the generated Nginx config file and learn a fair bit, in case you want to set this up manually in the future.
Again: this means the jwilder/nginx-proxy container is the only one that publishes a port on the host. The other containers are proxied through it.
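A minimal Compose sketch of that setup; the domain names and build paths are assumptions, and nginx-proxy routes by the VIRTUAL_HOST environment variable, discovering containers through the mounted Docker socket:

```yaml
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"   # the only published port
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  web:
    build: ./web
    environment:
      - VIRTUAL_HOST=web.foo.bar
  backend:
    build: ./backend
    environment:
      - VIRTUAL_HOST=backend.foo.bar
```

Requests for web.foo.bar and backend.foo.bar both arrive on port 80 of the proxy container, which forwards each to the matching application container.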