I have a Google Cloud VM which runs a Docker image. The image runs a Java app on port 1024. I have pointed my domain's DNS to the VM's public IP.
This works: I can go to mydomain.com:1024 and access my app, since Google Cloud directly exposes the Docker port as a public port. However, I want to access the app through https://example.com (port 443), so basically map port 443 to port 1024 on my VM.
Note that my Docker image starts an nginx service. Previously I configured the Java app to run on port 443; the nginx service listened on 443 and Google Cloud exposed this HTTPS port, so everything worked fine. But I cannot use port 443 for my app anymore, for specific reasons.
Any ideas? Can I configure nginx somehow to map to this port? Or do I set up a load balancer to proxy the traffic (which seems rather complex, as this is all pretty new to me)?
P.S. "docker run -p 443:1024 ..." would basically do the same thing if I am right, but Google Cloud's containerized VMs do not allow this.
Container-Optimized OS maps ports one-to-one: port 1000 in the container is mapped to port 1000 on the public interface. I am not aware of a method to change that.
For your case, use Compute Engine with Docker, or a load balancer to proxy connections.
Note: if you use a load balancer, your app does not need to manage SSL/TLS. Offload SSL/TLS to the load balancer and just serve plain HTTP from your application. Google can then manage SSL certificate issuance and renewal for you. You will find that managing SSL certificates for containers is a deployment pain.
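If you go the Compute Engine with Docker route, a minimal nginx sketch that terminates HTTPS on 443 and proxies to the app on 1024 could look roughly like the following; the certificate paths are placeholders you would point at your real certificate and key:

server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder paths; replace with your actual certificate and key.
    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        # Forward everything to the Java app listening on 1024.
        proxy_pass http://127.0.0.1:1024;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}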
I have three Tomcat containers running on different bridge networks with different subnets and gateways.
For example:
container1 172.16.0.1 bridge1
container2 192.168.0.1 bridge2
container3 192.168.10.1 bridge3
These containers are running on different ports: 8081, 8082, 8083.
Is there any way to expose all three containers on the same port, 8081?
If it is possible, how can I do it in Docker?
You need to set up a reverse proxy. As the name suggests, this is a proxy that works in the opposite way from a standard proxy: while a standard (forward) proxy takes requests from the internal network and serves them from external networks (the internet), a reverse proxy takes requests from the external network and serves them by fetching information from the internal network.
There are multiple applications that can serve as a reverse proxy, but the most used are:
Nginx
Apache
HAProxy (mainly as a load balancer)
Envoy
Traefik
The majority of these reverse proxies can run as another container in your Docker setup. They are easy to get started with, since there is an ample amount of tutorials.
A reverse proxy is more than just exposing a single port and forwarding traffic to back-end ports. It can manage and distribute the load (load balancing), change the URI arriving from the client into a URI that the back-end understands (URL rewriting), change the response from the back-end (content rewriting), etc.
Reverse HTTP/HTTPS traffic
What you need to do to set up a reverse proxy in your example, assuming you have HTTP services, is the following:
Decide which tool to use. For a beginner, I suggest Nginx.
Create a configuration file for the proxy which takes requests on port 80 and distributes them to ports 8081, 8082, 8083 (see the sketch after these steps). Since the containers are on different networks, you will need to decide whether you want to forward the traffic to their IP addresses (which I don't recommend, since the IPs can change), or to publish the ports on the host and use the host IP. Another alternative is to run all of them on the same network.
Depending on the case, you may need to set the X-Forwarded-* headers and/or configure URL rewriting and content rewriting.
Run the proxy container and publish its port 80 as 8080 (if you publish the back-end containers on the host, port 8081 will already be taken).
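As a minimal sketch of such an Nginx configuration (assumptions: the three Tomcat ports are published on the Docker host, 172.17.0.1 is the default Docker bridge gateway as seen from the proxy container, and the /app1, /app2, /app3 path prefixes are made up; adjust all of these to your setup):

server {
    listen 80;

    # Path prefixes are placeholders; route however suits your applications.
    location /app1/ {
        proxy_pass http://172.17.0.1:8081/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location /app2/ {
        proxy_pass http://172.17.0.1:8082/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location /app3/ {
        proxy_pass http://172.17.0.1:8083/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

You could then mount this file into the official nginx image and publish its port 80 as 8080:

docker run -d --name reverse-proxy \
    -p 8080:80 \
    -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf:ro \
    nginx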
Reverse TCP/UDP traffic
If you have non-HTTP services (raw TCP or UDP), then you can use HAProxy. The steps are the same apart from configuration step #2. The configuration is different due to the non-HTTP nature of the traffic; you can find an example in this SO post.
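As a rough sketch of the TCP case (only the relevant fragment of the HAProxy config; it assumes the three Tomcat containers are published on the host at hypothetical ports 9081-9083 and treats them as one load-balanced pool behind a single front-end port, since raw TCP gives the proxy no URL to route on):

frontend tomcat_in
    bind *:8081
    mode tcp
    default_backend tomcat_pool

backend tomcat_pool
    mode tcp
    balance roundrobin
    # Host-published back-end ports are placeholders; adjust to your setup.
    server app1 127.0.0.1:9081 check
    server app2 127.0.0.1:9082 check
    server app3 127.0.0.1:9083 check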
I'm using GKE for deployments.
Edit: I need to access a customer's API endpoint which is only accessible when using their VPN. So far I can run a container which connects to this VPN and I can cURL the endpoint successfully.
For the above, I have configured a Debian Docker image which successfully connects to a VPN (specifically, Kerio Control VPN) when deployed. Whenever I make a network request from this container, it goes through the VPN connection, as expected.
I have another image which runs a .NET Core program which makes necessary HTTP requests.
From this guide I know it is possible to route one container's traffic through another using pure Docker, specifically using the --net=container:something option (example trimmed):
docker run \
--name=jackett \
--net=container:vpncontainer \
linuxserver/jackett
However, I have to use Kubernetes for this deployment, so I think it would be good to use a two-container pod. I want to keep the VPN connection logic and the program separated.
How can I achieve this?
Containers in a pod share network resources. If you run the VPN client in one container, then all containers in the pod will have access to the network via the VPN.
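A minimal pod sketch along these lines (the image names and container layout are assumptions, and a real VPN client will usually also need access to /dev/net/tun):

apiVersion: v1
kind: Pod
metadata:
  name: vpn-and-app
spec:
  containers:
  - name: vpn
    image: registry.example.com/kerio-vpn-client:latest   # placeholder image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]   # typically required so the client can create the tunnel interface
  - name: app
    image: registry.example.com/dotnet-app:latest          # placeholder image
    # No extra networking config: both containers share the pod's network
    # namespace, so the app's outbound requests go through the VPN tunnel.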
Based on your comment, I think I can advise two methods.
Private GKE cluster with Cloud NAT
In this setup, you should use a private GKE cluster with Cloud NAT for external communication. You would need to use a manually reserved external IP.
This scenario uses a specific external IP for the VPN connection, but your customer would need to whitelist access for this IP.
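A rough sketch of that setup with gcloud (the region, network, and resource names are placeholders):

# Reserve a static external IP that the customer can whitelist.
gcloud compute addresses create nat-ip --region=us-central1

# Create a Cloud Router and a Cloud NAT that uses only that reserved IP.
gcloud compute routers create nat-router --network=default --region=us-central1
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --nat-external-ip-pool=nat-ip \
    --nat-all-subnet-ip-ranges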
Site-to-site VPN using Cloud VPN
You can configure your VPN to forward packets to your cluster. For details you should check other Stack Overflow threads:
Google Container Engine and VPN
Why can't I access my Kubernetes service via its IP?
I'm using a similar approach. I have a Django app whose static files need to be served by nginx. I want the app to be accessible through a VPN, for which I'm using OpenVPN.
Both the nginx container and the Django container are in the same pod. My limited understanding is that it would be enough to run the VPN in the background in the nginx container, and it should successfully route requests to the backend using localhost because they're in the same pod.
But this doesn't seem to be working. I get a 504 time-out in the browser, and the nginx logs confirm that the upstream timed out. Have you done anything extra to make this work in your case?
Is it possible to make a serverless Icecast server?
I'm trying to make an internet radio with Icecast on Google's serverless Cloud Run platform. I've put this Docker image in Container Registry and then created a Cloud Run service with the default Icecast port 8000. It all seems to work when visiting Cloud Run's provided URL: using it, I can get to the default Icecast and admin pages.
The problem is trying to connect to the server with a source client (I tried mixxx and butt). I think the problem is with ports, since setting the port to 8000 in mixxx gives a "Socket is busy" error while butt simply doesn't connect. Setting the port to 443 in mixxx gives "Socket error" while butt reports "connect: server answered with 411!".
I tried to do the same thing with Compute Engine, installing Icecast directly rather than using a Docker image, and everything works as intended. As I understand it, Cloud Run provides a URL for the container (https://example.app) with the port given at setup (8000 for Icecast), but the source client tries to connect to that URL with its own provided port (http://example.app:SOURCE_CLIENT_PORT). So I'm not sure if there's a problem with HTTPS or I just need to configure the ports differently.
With Cloud Run you can expose only one port externally. By default it's port 8080, but you can override this when you deploy your revision.
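For example (the service name, image path, and region are placeholders), something like:

gcloud run deploy icecast \
    --image gcr.io/YOUR_PROJECT/icecast \
    --port 8000 \
    --region us-central1 \
    --allow-unauthenticated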
This port is wrapped and sits behind a front layer of Google Cloud infrastructure, named Google Front End, and is exposed with a DNS name (*.run.app) on port 443 (HTTPS).
Thus, you can reach your service only through port 443, which wraps the exposed port. Any other port will fail.
With Compute Engine you don't have this limitation, and that's why you have no issues there. Simply open the correct port with firewall rules and enjoy.
After finding a solution for this problem, I have another question: I am running a Flask app in a Docker container (my web map), and on this map I want to show tiles served by a (Flask-based) Terracotta tile server running in another Docker container. The two containers are on the same Docker network and can talk to each other; however, only the port where my web server is running is open to the public, and I'd like to keep it that way. Is there a way I can serve my tiles somehow "from local" without opening the port of the tile server? Maybe by setting up some redirects or something?
The main reason for this is that I need someone else to open ports for me, which takes ages.
If you are running your Docker containers on a remote machine like EC2, then you need not worry about a port being open to the public, as ports are closed by default on EC2 and similar services. You just need to open the port on which you are running your app; you can use the AWS console for that.
If you are running your Docker container locally, or on some server for which you don't have console access, then you can use some kind of firewall to open or close a port. I personally prefer UFW for Ubuntu systems. You can allow a certain port using a simple command such as sudo ufw allow 9000, which allows incoming traffic on port 9000. Similarly, you can deny incoming packets to a port. You can also allow traffic from a certain IP (like your own IP) using sudo ufw allow from <ip address>.
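For example (port 9000 and the address 203.0.113.5 are just sample values):

# Allow incoming traffic on port 9000 (TCP and UDP).
sudo ufw allow 9000
# Deny incoming traffic on port 9000 instead.
sudo ufw deny 9000
# Allow all traffic from one specific address.
sudo ufw allow from 203.0.113.5
# Show the current rules.
sudo ufw status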
I have some web applications under the same domain and different sub-domains running on the same machine. I am using Apache virtual host configuration to get pretty URLs for all these applications. I am now trying to Dockerize one of these applications, so I exposed ports 80 and 443 to different ports of the host machine.
I can successfully access the containerized web application using the URL format http://localhost:{http exposed port} or https://localhost:{https exposed port}.
Now, if I try using the virtual host configuration within the container, it does not work unless I stop the host machine's Apache server.
How do I set up pretty URLs for the containerized application using the ports exposed from the container, while still running an Apache server on the same machine?
A reverse proxy will be a good option: you can run multiple Docker containers exposed on different ports but serve them all from the same port through the reverse proxy. The link just below will be helpful:
https://www.digitalocean.com/community/tutorials/how-to-use-apache-as-a-reverse-proxy-with-mod_proxy-on-ubuntu-16-04
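As a small sketch of that approach (the sub-domain and host port 8080 are placeholders; mod_proxy and mod_proxy_http must be enabled, e.g. with sudo a2enmod proxy proxy_http):

<VirtualHost *:80>
    # Placeholder sub-domain for the Dockerized application.
    ServerName app.example.com

    # Forward requests to the container's HTTP port published on the host.
    ProxyPreserveHost On
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>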
You can also try one other thing: expose your application on a different IP and configure that IP in /etc/hosts. Please check it here:
http://jasani.org/posts/docker-now-supports-adding-host-mappings-2014-11-19/index.html