My Travis container serves some HTML pages on localhost:8080. I'd like these to be accessible to the public while the container is running.
How do I enable this, and how do I find out the public IP address for each instance? Is this even possible?
Thanks.
Used ngrok.io for an easy solution.
Exposing containers to the outside world generally requires you to first establish a connection from inside the container to a gateway proxy.
There are multiple solutions available: ngrok (though it does not provide authentication), and Cloudflare's Argo Tunnel (cloudflared) and gw.run, which both provide a tunnel plus authentication/authorization.
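If you go the ngrok route, a minimal sketch looks like the following (assuming the ngrok client is already available inside the container; port 8080 is the one from the question):
# Tunnel local port 8080 to a public *.ngrok.io URL (runs in the background here).
ngrok http 8080 &
# The assigned public URL can be read from ngrok's local inspection API.
sleep 2
curl -s http://127.0.0.1:4040/api/tunnels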
I'm using GKE for deployments.
Edit: I need to access a customer's API endpoint which is only accessible when using their VPN. So far I can run a container which connects to this VPN and I can cURL the endpoint successfully.
For the above, I have configured a Debian Docker image which successfully connects to a VPN (specifically, the Kerio Control VPN) when deployed. Whenever I make a network request from this container, it goes through the VPN connection, as expected.
I have another image which runs a .NET Core program that makes the necessary HTTP requests.
From this guide I know it is possible to route one container's traffic through another using pure Docker, specifically the --net=container:something option (example trimmed):
docker run \
--name=jackett \
--net=container:vpncontainer \
linuxserver/jackett
However, I have to use Kubernetes for this deployment so I think it would be good to use a 2-container pod. I want to keep the VPN connection logic and the program separated.
How can I achieve this?
Containers in a pod share network resources. If you run the VPN client in one container, then all containers in that pod will have network access via the VPN.
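As an illustration only, here is a minimal two-container pod sketch; the image names and the NET_ADMIN capability are assumptions (VPN clients typically need NET_ADMIN to create a tun device), so substitute your own images and settings:
# Hypothetical sketch: both containers share the pod's network namespace,
# so traffic from the worker follows routes set up by the VPN container.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: vpn-demo
spec:
  containers:
  - name: vpn
    image: example/vpn-client:latest   # placeholder: your Kerio VPN client image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]             # typically required to create the tun device
  - name: worker
    image: example/api-worker:latest   # placeholder: your .NET Core program
EOF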
Based on your comment, I think I can suggest two methods.
Private GKE cluster with Cloud NAT
In this setup, you use a private GKE cluster with Cloud NAT for external communication. You would need to use a manually reserved external IP.
This scenario uses a specific external IP for the VPN connection, but your customer will need to whitelist access for this IP.
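A rough sketch of that setup with gcloud (cluster, router, and region names are placeholders, and the flags should be checked against the current gcloud docs for your setup):
# Reserve a static external IP that the customer can whitelist.
gcloud compute addresses create vpn-egress-ip --region=europe-west1
# Private cluster: nodes get no public IPs of their own.
gcloud container clusters create my-cluster \
  --enable-private-nodes --enable-ip-alias \
  --master-ipv4-cidr=172.16.0.0/28
# Route all egress through Cloud NAT using the reserved IP.
gcloud compute routers create my-router --network=default --region=europe-west1
gcloud compute routers nats create my-nat \
  --router=my-router --region=europe-west1 \
  --nat-external-ip-pool=vpn-egress-ip --nat-all-subnet-ip-ranges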
Site-to-site VPN using Cloud VPN
You can configure your VPN to forward packets to your cluster. For details, check these other Stack Overflow threads:
Google Container Engine and VPN
Why can't I access my Kubernetes service via its IP?
I'm using a similar approach. I have a Django app whose static files need to be served by nginx. I want the app to be accessible through a VPN, for which I'm using OpenVPN.
Both the nginx container and the Django container are in the same pod. My limited understanding is that it should be enough to run the VPN client in the background in the nginx container, and nginx should be able to route requests to the backend over localhost because they're in the same pod.
But this doesn't seem to be working. I get a 504 Time-Out in the browser and the nginx logs confirm that the upstream timed out. Have you done anything extra to make this work in your case?
Running curl ifconfig.me on my machine gives me my public IP, but the same does not work in my container.
The simple answer to my silly question is: we don't need to. Instead, port binding is used to bind the container port to the host.
The host's public IP, together with the bound port, can then be used to interact with the hosted application.
Silly me!
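For illustration, a minimal sketch of that idea, using the stock nginx image as a stand-in for the hosted application:
# Bind container port 80 to host port 8080; the host's public IP plus 8080 now reach the app.
docker run -d -p 8080:80 --name web nginx
# From the host (or anywhere, substituting the host's public IP), assuming the firewall allows port 8080:
curl http://$(curl -s ifconfig.me):8080/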
Basically, I have Docker Swarm running on a physical machine with a public IPv4 address and a registered domain, say example.com.
Now, I would like to access the running containers from the Internet using their names as subdomains.
For instance, let's say there is a MySQL container running with the name mysqldb. I would like to be able to access it from the Internet via the following DNS name:
mysqldb.example.com
Is there any way to achieve this?
I've finally found a so-called solution.
In general it's not possible :)
However, some well-known ports can be forwarded to a container by using an HTTP reverse proxy such as nginx or HAProxy.
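As a hedged single-host sketch of that for HTTP services (the hostname web.example.com and the image my-web-app are placeholders; this is plain Docker rather than a Swarm service definition, and it assumes a DNS record such as a wildcard *.example.com pointing at the host):
# Reverse proxy listening on host port 80, watching the Docker socket for containers.
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  nginxproxy/nginx-proxy
# Any container started with VIRTUAL_HOST gets a matching vhost,
# so requests for http://web.example.com are proxied to this container.
docker run -d -e VIRTUAL_HOST=web.example.com my-web-app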
I am learning microservices architecture by writing a small web app. The app has the following components, each of which will be hosted in a Docker container.
In my API gateway, which is written in Node.js, at some point I will call:
request('http://service_b_ip_addr:port/get_service_b', callback);
However, neither service_b_ip_addr nor port is known until Marathon has created Service B's Docker container.
With some Service Discovery mechanism, such as mesos-dns or marathon-lb, I guess that I could just change service_b_ip_addr to something like service_b.marathon.com.
But I've no idea how should I put the port in my program.
Thanks in advance for your help.
PS:
I am using bridged network mode, given that multiple instances of a service could be located on the same Mesos slave. So the port is a NATed random number.
Take a look at this answer.
If you use marathon-lb, then there is no need to pass a port, because it's a proxy and it will know where the service is just by its name.
If you use mesos-dns, you should make an SRV request to get the IP and port. In Node you can do this with dns.resolveSrv(hostname, callback), but your DNS server must be exposed on the default port (53) and must support SRV requests (mesos-dns does).
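For example, a hedged sketch of that SRV lookup with dig (the task name service-b and the <mesos-dns-ip> address are assumptions; mesos-dns publishes SRV records as _task._protocol.framework.domain):
# Query mesos-dns for Service B's SRV record; the answer carries both port and target host.
dig +short _service-b._tcp.marathon.mesos SRV @<mesos-dns-ip>
# Output format: priority weight port target, e.g. "0 1 31845 service-b-abc123.marathon.mesos."
In Node, dns.resolveSrv returns the same data as objects with name, port, priority, and weight fields.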
In DC/OS on premises, how does the outside world reach a Docker container if we are using mesos-dns for service discovery?
Let's say my Mesos domain is marathon.mesos.
I have deployed an nginx container using the Marathon framework, and mesos-dns discovers it as "nginx.marathon.mesos". Within the cluster I can access http://nginx.marathon.mesos via a web browser; that's no issue.
But outside the cluster (in the public world), that nginx container needs to be presented as "abc.xyz.com".
When someone types abc.xyz.com, traffic should route to the nginx container. If I use mesos-dns for service discovery, how can we deal with this scenario?
In order to achieve that, you would need to set up a node with a public IP and run a reverse-proxy task on it (as in the tutorials) that forwards to the internal, mesos-dns-backed service.
If you have more questions or need a follow-up on this one, please contact Mesosphere support.
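For illustration, a hedged sketch of that pattern with marathon-lb (the domain abc.xyz.com comes from the question; the label shown is marathon-lb's standard vhost label):
# Install marathon-lb; it runs HAProxy on the public agent, listening on ports 80/443.
dcos package install marathon-lb
# In the nginx app definition, add a vhost label so HAProxy routes by Host header:
#   "labels": { "HAPROXY_0_VHOST": "abc.xyz.com" }
# Point a public DNS A record for abc.xyz.com at the public agent's IP, then verify:
curl -H "Host: abc.xyz.com" http://<public-agent-ip>/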
I have resolved this problem. You can refer to this document: https://docs.mesosphere.com/tutorials/publicapp/
If your app is running on port 80, you can access it via the public slave node's domain.
If you have any questions, you can reply to me.