Run multiple HTTPS-enabled services in Docker containers on the same host machine - docker

I want to run multiple services on port 443 on the same host machine in Docker containers. Can I achieve this using multiple virtual IPs without getting errors like "bind: address already in use"?

If you want the host to serve multiple services from the same port (443), I would suggest using a reverse proxy such as HAProxy: expose it on host port 443 and have it route to the appropriate backend.
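As a hedged sketch of that approach: HAProxy can run in TCP mode and route by the TLS SNI hostname without terminating TLS, so each container keeps its own certificate. The hostnames and backend container names below are assumptions, not from the question:

```
# haproxy.cfg sketch: SNI-based passthrough on port 443
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend https_in
    bind *:443
    # wait for the TLS ClientHello so the SNI is readable
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend app1_be if { req_ssl_sni -i app1.example.com }
    use_backend app2_be if { req_ssl_sni -i app2.example.com }

backend app1_be
    server app1 app1:443

backend app2_be
    server app2 app2:443
```

This assumes the proxy and the `app1`/`app2` containers share a Docker network so they are reachable by container name; only the proxy publishes port 443 on the host.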

Related

Docker host multiple containers with different ip address but on same port

I have three tomcat containers running on different bridge networks with different subnet and gateway
For example:
container1 172.16.0.1 bridge1
container2 192.168.0.1 bridge2
container3 192.168.10.1 bridge3
These containers are running on different ports like 8081, 8082, 8083
Is there any way to run all three containers in same 8081?
If it is possible, how can I do it in docker.
You need to set up a reverse proxy. As the name suggests, this is a proxy that works in the opposite direction from a standard proxy. While a standard (forward) proxy takes requests from the internal network and serves them from external networks (the internet), a reverse proxy takes requests from the external network and serves them by fetching information from the internal network.
There are multiple applications that can serve as a reverse proxy; the most commonly used are:
NginX
Apache
HAProxy (mainly as a load balancer)
Envoy
Traefik
The majority of reverse proxies can run as just another container in your Docker setup. Some of these tools are easy to get started with, since there is an ample amount of tutorials.
The reverse proxy does more than just expose a single port and forward traffic to back-end ports. It can manage and distribute the load (load balancing), change the URI arriving from the client to a URI the back-end understands (URL rewriting), change the response from the back-end (content rewriting), etc.
Reverse HTTP/HTTPS traffic
What you need to do to set up a reverse proxy in your example, assuming you have HTTP services, is the following:
Decide which tool to use. As a beginner, I suggest NginX
Create a configuration file for the proxy which takes the requests from port 80 and distributes them to ports 8081, 8082, 8083. Since the containers are on different networks, you need to decide whether you want to forward the traffic to their IP addresses (which I don't recommend, since an IP can change) or to publish the ports on the host and use the host IP. Another alternative is to run all of them on the same network.
Depending on the case, you need to set up the X-Forwarded-* headers and/or URL rewriting and content rewriting.
Run the proxy container and publish its port 80 as 8080 (if you publish the containers' ports on the host, 8081 will already be taken).
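A minimal sketch of the configuration in step 2, assuming the proxy container is attached to a network where the three Tomcats are reachable as container1/container2/container3 and using path-based routing (the paths are illustrative):

```nginx
# nginx.conf sketch: path-based routing to the three Tomcat containers
events {}

http {
    server {
        listen 80;

        # trailing slash on proxy_pass strips the /appN/ prefix
        location /app1/ { proxy_pass http://container1:8081/; }
        location /app2/ { proxy_pass http://container2:8082/; }
        location /app3/ { proxy_pass http://container3:8083/; }
    }
}
```

You could then run it with something like `docker run -d --network mynet -p 8080:80 -v $PWD/nginx.conf:/etc/nginx/nginx.conf:ro nginx` (the network name is an assumption).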
Reverse TCP/UDP traffic
If you have non-HTTP services (raw TCP or UDP services), then you can use HAProxy. The steps are the same apart from the configuration in step #2, which is different due to the non-HTTP nature of the traffic; you can find an example in this SO answer.
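For illustration, a raw-TCP HAProxy configuration might look like the sketch below (the port and server names are assumptions):

```
# haproxy.cfg sketch: plain TCP forwarding with load balancing
defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend tcp_in
    bind *:5000
    default_backend tcp_servers

backend tcp_servers
    balance roundrobin
    server s1 container1:5000 check
    server s2 container2:5000 check
```

Note that HAProxy proxies TCP only; for UDP services you would need a different tool, such as the NGINX stream module.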

How can I access deployments on a Rancher local cluster from the VM IP

I have Ubuntu Server 20.04 running as a guest VM. On it I have installed Rancher within a Docker container and mapped port 443 to 9091 to get access to the Rancher UI at 192.168.0.50:9091. Within Rancher I have deployed a Nextcloud instance on the local cluster and forwarded the Nextcloud port 443 to port 9700 using HostPort. The link generated for the pod takes me to 172.17.0.2:9700, which I assume is the internal IP of the local node within the cluster.
How can I access the nextcloud container with a browser?
Currently I cannot access it if I simply navigate to :9700. Is there a way to access the node with the IP I use for my VM?
Thanks
The "publish the container port" field in the Port Mapping section is the one where you specify the port that the container listens on.
It relates directly to containerPort in the Kubernetes YAML file. Exposing a port in this field gives the system additional information about the network connections a container uses, but the field is primarily informational. Not specifying a port here does not prevent that port from being exposed: any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network.
I checked the nextcloud image specs and it looks like the apache image listens on port 80 and the fpm image uses 9000.
For more reading, please visit the Rancher documentation on how to expose workloads.
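To illustrate how those Rancher fields map to the underlying pod spec (the values below mirror the question, but treat this as a sketch rather than your exact deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nextcloud
spec:
  containers:
  - name: nextcloud
    image: nextcloud       # the apache variant listens on port 80
    ports:
    - containerPort: 80    # informational: the port the app listens on
      hostPort: 9700       # actually publishes it on the node's IP at :9700
```

With a mapping like this, the app should be reachable at the node's (VM's) IP on port 9700, provided containerPort matches what the image really listens on.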

Docker bind ports in network host

I would like to keep the host's IP address and hostname for all my Docker containers; however, I would like to bind different ports, as many of my containers have port 80 in use. Now, I know that port binding doesn't work in network mode host, but I am wondering if there are alternatives that can achieve the same result?
You can use NGINX as a reverse proxy to expose only port 80 and route the container requests internally. It acts as a single entry point to your containers:
https://hub.docker.com/_/nginx
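As a sketch of that setup (the server names and upstream ports are assumptions, not from the question), an NGINX instance bound to host port 80 can route by Host header to containers published on internal ports:

```nginx
# Illustrative only: the sub-domains and ports 8081/8082 are assumed
server {
    listen 80;
    server_name app1.example.com;

    location / {
        proxy_pass http://127.0.0.1:8081;   # container 1's published port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name app2.example.com;

    location / {
        proxy_pass http://127.0.0.1:8082;   # container 2's published port
        proxy_set_header Host $host;
    }
}
```

These server blocks belong inside an http context (e.g. dropped into /etc/nginx/conf.d/), and the containers themselves no longer need host networking, only ordinary port publishing.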

Forward docker exposed port to another port on the same container without publishing it to the host

I have a container exposing a web app on port 3000 and another one which accesses it via Docker DNS.
I want to access this container on port 80 without modifying the web app and without exposing it directly to the host (i.e. --publish). Basically, internally forward port 80 to port 3000.
Is it possible to do this using Docker, without modifying the container to add socat or something similar?
No, Docker doesn’t have this capability. The only port remapping is when a port is published outside of Docker space using the docker run -p option, and this never affects inter-service communication. Your only options here are to change the server configuration to listen on port 80, or to change the client configuration to include the explicit port 3000.
(Kubernetes Services do have this capability, and I tend to remap an unprivileged port from a given Pod to the standard HTTP port in a Service, but that’s not a core Docker capability at all.)
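For reference, the Kubernetes Service remap mentioned above looks roughly like this (the names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp          # matches the pod's labels
  ports:
  - port: 80             # port clients inside the cluster connect to
    targetPort: 3000     # port the container actually listens on
```

Clients then reach the app at `webapp:80` while the container keeps listening on 3000, which is exactly the remap Docker itself does not offer for inter-container traffic.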

Containerized Apache co-existing with host Apache

I have some web applications under the same domain and different sub-domains running on the same machine. I am using an Apache virtual host configuration to get pretty URLs for all of these applications. I am now trying to Dockerize one of these applications, so I exposed ports 80 and 443 to different ports of the host machine.
I can successfully access the containerized web application using the URL format http://localhost:{http exposed port} OR https://localhost:{https exposed port}.
Now, if I try using a virtual host configuration within the container, it does not work unless I stop the host machine's Apache server.
How do I set up pretty URLs for the containerized application using the ports exposed from the container, along with running an Apache server on the same machine?
A reverse proxy will be a good option for running multiple Docker containers: each container is exposed on a different port, but all are served from the same port by the reverse proxy. This link will be helpful:
https://www.digitalocean.com/community/tutorials/how-to-use-apache-as-a-reverse-proxy-with-mod_proxy-on-ubuntu-16-04
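Following that tutorial, a host-side virtual host sketch might look like this (the sub-domain and published port 8080 are assumptions; enable the modules first with `a2enmod proxy proxy_http`):

```apache
<VirtualHost *:80>
    ServerName app.example.com

    ProxyPreserveHost On
    # forward to the container's published HTTP port on the host
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```

The host Apache keeps port 80, so it can co-exist with the container, which only publishes its own ports on 8080 (and a second virtual host on *:443 could do the same for HTTPS).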
You can also try exposing your application on a different IP and configuring that IP in /etc/hosts. Please check it here:
http://jasani.org/posts/docker-now-supports-adding-host-mappings-2014-11-19/index.html
