I have some web applications under the same domain and different sub-domains running on the same machine. I am using Apache virtual host configuration to get pretty URLs for all of these applications. I am now trying to Dockerize one of these applications, so I exposed ports 80 and 443 to different ports on the host machine.
I can successfully access the containerized web application using the URL format http://localhost:{http exposed port} or https://localhost:{https exposed port}.
Now, if I try using a virtual host configuration within the container, it does not work unless I stop the host machine's Apache server.
How do I set up pretty URLs for the containerized application using the ports exposed from the container, while still running an Apache server on the same machine?
A reverse proxy is a good option for running multiple Docker containers: each container is exposed on a different host port, but the reverse proxy serves all of them on the same port. This link should be helpful:
https://www.digitalocean.com/community/tutorials/how-to-use-apache-as-a-reverse-proxy-with-mod_proxy-on-ubuntu-16-04
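As a minimal sketch, assuming the container's HTTP port is published on host port 8080 and you want to serve it as app.example.com (both the port and the hostname are placeholders), the host Apache would carry a vhost like:

    # Requires: a2enmod proxy proxy_http
    <VirtualHost *:80>
        ServerName app.example.com
        ProxyPreserveHost On
        # Forward everything to the container's published port
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
    </VirtualHost>

The host Apache keeps ports 80/443 for itself, so the container's vhosts no longer compete with it for the same ports.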
Alternatively, you can expose your application on a different IP and map that IP to a hostname in /etc/hosts. Please check here:
http://jasani.org/posts/docker-now-supports-adding-host-mappings-2014-11-19/index.html
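For instance (the address, image name, and hostname below are assumptions for this sketch, and it presumes nothing else is already bound to that address on port 80):

    # Bind the container's port 80 to a secondary loopback address
    docker run -d -p 127.0.0.2:80:80 my-web-app

    # /etc/hosts: point a pretty hostname at that address
    127.0.0.2    myapp.local

After that, http://myapp.local reaches the container without touching the ports the host Apache is using.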
I have a Google Cloud VM which runs a Docker image. The Docker image runs a Java app that listens on port 1024. I have pointed my domain's DNS to the VM's public IP.
This works: I can go to mydomain.com:1024 and access my app, since Google Cloud directly exposes the Docker port as a public port. However, I want to access the app through https://example.com (port 443), so basically map port 443 to port 1024 on my VM.
Note that my Docker image starts an nginx service. Previously I configured the Java app to run on port 443; the nginx service then listened on 443 and Google Cloud exposed this HTTPS port, so everything worked fine. But I cannot use port 443 for my app anymore, for specific reasons.
Any ideas? Can I configure nginx somehow to map to this port? Or do I set up a load balancer to proxy the traffic (which seems rather complex, as this is all pretty new to me)?
P.S. In Google Cloud you cannot use docker run -p 443:1024 ..., which would basically do the same thing if I am right; the containerized VMs do not allow it.
Container-Optimized OS maps ports one-to-one: port 1000 in the container is mapped to port 1000 on the public interface. I am not aware of a method to change that.
For your case, use Compute Engine with Docker or a load balancer to proxy connections.
Note: if you use a load balancer, your app does not need to manage SSL/TLS. Offload SSL/TLS to the load balancer and just publish HTTP within your application. Google can then manage your SSL certificate issuance and renewal for you. You will find that managing SSL certificates for containers is a deployment pain.
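For the Compute Engine option, the mapping you wanted becomes a plain port publish; a minimal sketch, where my-java-app is a placeholder image name:

    # On a regular Compute Engine VM with Docker installed (not
    # Container-Optimized OS), host port 443 can map straight to
    # the app's container port 1024:
    docker run -d -p 443:1024 my-java-app

Whatever listens on container port 1024 then receives the public port-443 traffic directly, so TLS ends up back inside the container; with the load-balancer option above you keep the container on plain HTTP.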
I'm studying Docker Desktop, and I have a question.
I have created two ASP.NET Core applications that are listening on the same port. I would like to have them respond by host name (e.g. http://app1.local/ and http://app2.local/). Is this possible with Docker?
Thanks
Only one process can listen on a given port, so to achieve what you want you can have a reverse proxy listening on that port and directing traffic to the applications behind it, based on the host name in the request.
Some options for reverse proxies are Nginx, Traefik, or the Ocelot library for .NET.
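As a minimal Nginx sketch, assuming the two containers publish their ports on host ports 5001 and 5002 (both ports are assumptions for this example):

    # Route by Host header to the two published container ports
    server {
        listen 80;
        server_name app1.local;
        location / {
            proxy_pass http://127.0.0.1:5001;
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name app2.local;
        location / {
            proxy_pass http://127.0.0.1:5002;
            proxy_set_header Host $host;
        }
    }

For local testing, point both names at the proxy in /etc/hosts (127.0.0.1 app1.local app2.local).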
I am trying to expose my containerized web app to the Internet over a public domain, but all the articles out there seem to teach how to play around with Docker's local network, for example how to run a containerized DNS server. Even if I set up a DNS server that resolves a domain like example.com to an IP such as 172.20.0.3, that address is only local to the Docker network and not accessible from the outside.
The scenario seems easy. I have a Docker host with a public static IP, let's say 64.233.191.255, and I host multiple domains on it. Each domain is mapped to a web server and serves a (containerized) web application. Each application has its own network, defined in docker-compose.yml under the networks section, on which all the other services related to the web app (e.g. mariadb, redis, etc.) communicate. Should I have a DNS server inside every container I create? How do I translate local addresses to the static public IP address so as to make the web apps available on their respective domains on port 80?
I found a service called ngrok that exposes a container over a public domain name like xxxx.ngrok.io, but that is not what I want; I would like to serve my website on my own domain.
This has proved to be anything but trivial for me. Also, there's no explicit tutorial in Docker's documentation on how to do this. I suppose this is not how it is done in the real world, as they probably do it via Kubernetes or OpenShift.
Should I have a bind9 configuration on the host, or a containerized bind9, to manage DNS queries? Do I need iptables rules for this scenario?
You have to map the domains to the public IP via DNS and then use a reverse proxy to forward each request to the correct Apache server.
So basically three vhosts inside the Docker host:
vhost 1 (the reverse proxy) receives the request and maps the domain to vhost 2's or vhost 3's address.
https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html
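Concretely, after pointing A records for your domains at the host's public IP (64.233.191.255 in the question), the reverse proxy's side could look like the sketch below; example1.com/example2.com are placeholder domains, and host ports 8081/8082 are assumed published container ports:

    <VirtualHost *:80>
        ServerName example1.com
        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:8081/
        ProxyPassReverse / http://127.0.0.1:8081/
    </VirtualHost>

    <VirtualHost *:80>
        ServerName example2.com
        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:8082/
        ProxyPassReverse / http://127.0.0.1:8082/
    </VirtualHost>

No DNS server inside the containers and no iptables rules are needed for this: public DNS gets the traffic to the host, and the proxy's ServerName matching does the rest.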
You can use a reverse proxy such as Nginx in front of the applications. For example, if you're running two apps on ports 3000 and 3001, assign a DNS name to each application,
e.g. example1.com maps to the app on localhost:3000.
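A single server block for that first mapping, as a sketch (the second app would get an identical block with its own domain and port 3001):

    server {
        listen 80;
        server_name example1.com;
        location / {
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
        }
    }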
I have a web application running in a Docker container on a production server. Now I need to make API requests to this application, so I have two possibilities:
1) Link a domain
2) Make requests directly by IP
I'm using a cloud server for this. In my previous experience I linked a domain to a folder, but now I don't know how to link the domain to a running container at ip_addr:port.
I found this link
https://docs.docker.com/v17.12/datacenter/ucp/2.2/guides/user/services/use-domain-names-to-access-services/
but it's for Docker Enterprise, which I can't use at the moment.
To expose a Docker application to the public without using Compose or other orchestration tools like Kubernetes, you can use the docker run -p hostPort:containerPort option to publish your container port. Make sure your application is listening on 0.0.0.0:[container port] inside the container. To access the service externally, use the host's IP and the host port that the container port has been mapped to.
See more here
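For example (the image name and ports are placeholders):

    # Publish container port 8080 on host port 80; the app must bind
    # 0.0.0.0 inside the container, not 127.0.0.1
    docker run -d -p 80:8080 my-web-app

The app is then reachable at http://<host-ip>/.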
If you want to link a domain, you can update your DNS records to point the domain to your host's IP address.
Hope this helps!
The best way is to use Kubernetes, because it eases many operations, but docker-compose can also be used.
If you simply want to deploy with Docker, it can be done by mapping a hostPort to a containerPort.
I want to run multiple services on port 443 on the same host machine, in Docker containers. Can I achieve this using multiple virtual IPs, without getting errors like "bind: address already in use"?
If you want the host to serve multiple services from the same port (443), I would suggest using a reverse proxy such as HAProxy: expose it on host port 443 and have it route to the appropriate backend.
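One way to sketch this with HAProxy is SNI-based TCP passthrough, so each container keeps terminating its own TLS; the hostnames and backend ports below are assumptions for this example:

    # haproxy.cfg: route incoming TLS on :443 by SNI
    defaults
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend https_in
        bind *:443
        mode tcp
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }
        use_backend app1 if { req_ssl_sni -i app1.example.com }
        use_backend app2 if { req_ssl_sni -i app2.example.com }

    backend app1
        mode tcp
        server app1 127.0.0.1:8443

    backend app2
        mode tcp
        server app2 127.0.0.1:9443

Only HAProxy binds host port 443; the containers publish their HTTPS ports on 8443 and 9443, so there is no "address already in use" conflict.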