I need to set up subdomains for apps running in Docker containers, not on the internal Rancher network but for public use. I have a domain delegated to the Rancher server. Almost all stacks from the catalog have a host property, but it doesn't work. I guess I need to delegate the domain using some Rancher DNS or set up nginx to proxy traffic to the Rancher server, but I can't find anything on how to do that.
What you need is to add a load-balancer service, which then forwards ports 80/443 on the host to the container running your app/nginx/whatever.
So navigate to your stack and click Add Service -> Load Balancer. Then you can choose which domain to match (or catch all, which I would do for now) and which target. There you select your app container and the port on which the container runs its app / HTTP server, and that's basically it.
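If you prefer to define the same load balancer in your stack's files rather than through the UI, here is a rough sketch of what the UI generates; the service name, target port, and domain are placeholders, and the exact schema can vary between Rancher 1.x versions.

# docker-compose.yml (sketch)
version: '2'
services:
  lb:
    image: rancher/lb-service-haproxy
    ports:
      - 80:80

# rancher-compose.yml (sketch)
version: '2'
services:
  lb:
    scale: 1
    lb_config:
      port_rules:
        - source_port: 80
          target_port: 3000          # port your app listens on inside the container
          service: myapp             # the app service in this stack
          hostname: app.example.com  # omit to catch all domains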
I have a Google Cloud VM which runs a Docker image. The Docker image runs a specific Java app on port 1024. I have pointed my domain's DNS to the VM's public IP.
This works: I can go to mydomain.com:1024 and access my app, since Google Cloud directly exposes the Docker port as a public port. However, I want to access the app through https://mydomain.com (port 443), so basically map port 443 to port 1024 on my VM.
Note that my Docker image starts an nginx service. Previously I configured the Java app to run on port 443; the nginx service then listened on 443, Google Cloud exposed this HTTPS port, and everything worked fine. But I cannot use port 443 for my app anymore, for specific reasons.
Any ideas? Can I configure nginx somehow to map to this port? Or do I set up a load balancer to proxy the traffic (which seems rather complex, as this is all pretty new to me)?
P.S. On Google Cloud you cannot use "docker run -p 443:1024 ...", which would basically do the same thing if I am right; the containerized VMs do not allow this.
Container-Optimized OS maps ports one to one: port 1000 in the container is mapped to port 1000 on the public interface. I am not aware of a method to change that.
For your case, use Compute Engine with Docker or a load balancer to proxy connections.
Note: if you use a load balancer, your app does not need to manage SSL/TLS. Offload SSL/TLS to the load balancer and just publish HTTP within your application. Google can then manage your SSL certificate issuance and renewal for you. You will find that managing SSL certificates for containers is a deployment pain.
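For the first option, a minimal sketch (the image name is a placeholder): on a regular Compute Engine VM with Docker installed, rather than a Container-Optimized OS container declaration, you can publish the port mapping yourself.

# Publish host port 443 to the app's port 1024 inside the container
docker run -d --name myapp -p 443:1024 my-java-app-image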
I am trying to expose my containerized web app to the Internet over a public domain, but all the articles out there seem to teach how to play around with Docker's local network, for example how to run a containerized DNS server. Even if I set up a DNS server that resolves an IP, e.g. 172.20.0.3, to a domain like example.com, that DNS service will translate example.com to 172.20.0.3, which is obviously local to the Docker network and not accessible from the outside.
The scenario seems easy. I have a Docker host with a public static IP, let's say 64.233.191.255, and I have multiple domains on it. Each domain is mapped to a web server and will serve a (containerized) web application. Each application has its own network, defined in docker-compose.yml under the networks section, over which all the other services related to the web app (e.g. MariaDB, Redis) communicate. Should I have a DNS server inside every container I create? How do I translate local addresses to the static public IP address so as to make the web apps available on their respective domains on port 80?
I found a service called ngrok that exposes a container over a public domain name like xxxx.ngrok.io, but that is not what I want. I would like to serve my website on my own domain.
This has proved to be anything but trivial for me. Also, there's no explicit tutorial in Docker's documentation on how to do this. I suppose this is not how it is supposed to be done in the real world, as they probably do it via Kubernetes or OpenShift.
Should I have a bind9 configuration on the host or a containerized bind9 to manage DNS queries? Do I need iptables rules for this scenario?
You have to map both domains to the public IP via DNS and then use a reverse proxy to forward the requests to the correct Apache server.
So basically three vhosts inside the Docker host.
Vhost 1 (the reverse proxy) receives the request and maps the domain to the address of Vhost 2 or Vhost 3.
https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html
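A rough sketch of the reverse-proxy vhost (Vhost 1); the domains and backend ports are placeholders, and mod_proxy plus mod_proxy_http must be enabled:

<VirtualHost *:80>
    ServerName app1.example.com
    ProxyPreserveHost On
    # Forward to the Apache server (Vhost 2) serving the first app
    ProxyPass        "/" "http://127.0.0.1:8081/"
    ProxyPassReverse "/" "http://127.0.0.1:8081/"
</VirtualHost>

<VirtualHost *:80>
    ServerName app2.example.com
    ProxyPreserveHost On
    # Forward to the Apache server (Vhost 3) serving the second app
    ProxyPass        "/" "http://127.0.0.1:8082/"
    ProxyPassReverse "/" "http://127.0.0.1:8082/"
</VirtualHost>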
You can use a reverse proxy with nginx for each application. For example, say you're running two apps on ports 3000 and 3001. Assign a proper DNS record to each application,
e.g. localhost:3000 maps to example1.com and localhost:3001 maps to example2.com.
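A minimal nginx sketch for that setup; the domains and upstream ports follow the example above:

# One server block per domain, each proxying to the matching app port
server {
    listen 80;
    server_name example1.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name example2.com;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}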
I have some Docker containers talking to each other over Docker bridge networks. They cannot be accessed from outside (or so I was told), as they are launched from a script with a default command that includes neither 'expose' nor the '-p' option. I cannot change that script.
I would like to connect to one of these containers, which runs a server listening for requests on port 8080. I tried connecting that bridge to a newly created Docker bridge network, but I did not succeed.
Now I am thinking of creating a new container and letting it talk to the server container (through bridge networks). As it is a new container, I can use the 'expose' or '-p' options, so it would be able to talk to the host machine.
Is this a good idea? How can I forward every request made to that new container to the server container and then get the responses back to the host machine?
Thanks
Within the default Docker network, all ports are exposed between containers. So you only need a container that publishes a port to the host machine and is on the same network as the other containers you have already created.
This is a relatively normal pattern. You can use a reverse proxy like nginx to achieve something like this.
There are some containers that automate this process:
https://github.com/jwilder/nginx-proxy
If you have no control over the other containers though, you will need to write the proxy config by hand.
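For reference, nginx-proxy's documented usage pattern looks roughly like this; the hostname and app image are placeholders:

# Run the proxy; it watches the Docker socket and generates the nginx config itself
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

# Any container started with a VIRTUAL_HOST variable is picked up automatically
docker run -d -e VIRTUAL_HOST=app.example.com my-app-image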
If the container to which you are trying to connect is an http server, you may be able to use a ready-made container image that can work as an http forwarder (e.g., nginx - it is relatively easy to configure it as an http forwarder).
If you need plain TCP forwarding, you could run a container with 'socat' (socat can work as a TCP forwarder); see the sketch below.
NOTE: in either case, you will be exposing a listener that wasn't meant to be on a public address. Do take measures not to allow unauthorized connections.
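A sketch of the socat option; the image, network, and container names here are assumptions (alpine/socat is a commonly used image whose entrypoint is socat):

# Publish host port 8080 and forward every TCP connection to the server container
docker run -d --name forwarder \
  --network existing_bridge \
  -p 8080:8080 \
  alpine/socat \
  tcp-listen:8080,fork,reuseaddr tcp-connect:server-container:8080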
I have two Docker containers on one network: web and backend.
When I access "web" from the host machine (http://web:3000) it works.
"web" have a "test connection" button to the backend machine, which just tries to access a static page on the backend machine (http://backend:80/isAlive)
But since the call is made from the browser, and the browser is on the host machine, then the "backend" hostname can not be resolved.
I can fix this by editing my host file to so that "backend" will be resolved to localhost, but is there a more intelligent way to do this?
You should strongly consider setting up a separate container acting as a reverse proxy forwarding requests to different containers using virtual hosts.
backend.foo.bar -> talks to backend container
web.foo.bar -> talks to web container
If you don't want to configure DNS, you can just map those names to localhost in your hosts file for now.
The quickest way to get this working is using jwilder/nginx-proxy. Once you have it working, you can go into the container and look at the generated nginx config file and learn a fair bit, in case you want to set this up manually in the future.
Again: this means that the jwilder/nginx-proxy container is the only one that maps a port to localhost. The other containers are proxied through it.
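A minimal sketch of that setup; the image names are placeholders for your existing web and backend images:

# The proxy is the only container publishing a port on the host
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

# Each app container declares the virtual host it should answer for
docker run -d --name web     -e VIRTUAL_HOST=web.foo.bar     my-web-image
docker run -d --name backend -e VIRTUAL_HOST=backend.foo.bar my-backend-image

# While you have no DNS, map both names to localhost in /etc/hosts:
# 127.0.0.1  web.foo.bar backend.foo.bar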
So I've got a Plex server running on my Docker swarm! If I kill a node, Plex magically starts somewhere else. This is great! Now comes the fun part...
With old-school containers I would just forward port 32400 on my router to the server that was running Plex and it would work fine. Now that Plex can run in multiple different places, I need to figure out how to forward the port to some static resource. I could use HAProxy bound to a bridge interface, running on every node to provide failover... but I'd like to see if there's an easier way to accomplish this.
What's the best way to forward ports to services in Docker Swarm?
Port forwarding is built into the new swarm mode. There's a section on load balancing in the documentation:
The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm. The swarm manager can automatically assign the service a PublishedPort or you can configure a PublishedPort for the service in the 30000-32767 range. External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster whether or not the node is currently running the task for the service. All nodes in the swarm cluster route ingress connections to a running task instance.

Swarm mode has an internal DNS component that automatically assigns each service in the swarm a DNS entry. The swarm manager uses internal load balancing to distribute requests among services within the cluster based upon the DNS name of the service.
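In practice that means publishing the port when you create the service; a sketch for the Plex case (the image name is a placeholder for whichever Plex image you use):

# Publish 32400 through the routing mesh: the port is reachable on every node,
# and connections are forwarded to whichever node runs the task
docker service create \
  --name plex \
  --publish 32400:32400 \
  plexinc/pms-docker

# Your router can then forward port 32400 to any (or several) of the swarm nodes.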
Update
The following article discusses how to integrate a proxy load balancer with Docker's swarm mode:
https://technologyconversations.com/2016/08/01/integrating-proxy-with-docker-swarm-tour-around-docker-1-12-series/