I have a wildcard DNS record pointed at my server, e.g. *.domain.com.
I'd like to route each subdomain to its own Docker container,
so that box1.domain.com goes to the appropriate container.
This should work for any traffic, primarily HTTP and SSH.
Alternatively, the port could be part of the subdomain, e.g. 80.box1.domain.com.
I will have lots of Docker containers, so the solution should be dynamic, not hard-coded for every container.
Another solution would be to use https://github.com/jwilder/nginx-proxy.
This tool automatically forwards requests to the appropriate container, based on the subdomain set in the container's VIRTUAL_HOST environment variable.
For instance, if you want to route box1.domain.com to a container, simply set that container's VIRTUAL_HOST environment variable to "box1.domain.com".
Here is a detailed tutorial I wrote about it: http://blog.florianlopes.io/host-multiple-websites-on-single-host-docker.
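A minimal sketch of that setup (the proxy invocation follows the nginx-proxy README; my-web-image is a placeholder):

$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
$ docker run -d -e VIRTUAL_HOST=box1.domain.com my-web-image

nginx-proxy watches the Docker socket and regenerates its configuration whenever a container with a VIRTUAL_HOST variable starts or stops, so nothing has to be hard-coded per container.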
I went with Interlock to route HTTP traffic, using its nginx plugin.
I settled on using a random port for each SSH connection, as I couldn't get it to work using the subdomain alone.
The easiest solution would be to use the Apache mod_rewrite RewriteMap method. It's very performant when used against a text file, but it can call a script if desired. There is another StackOverflow answer that covers the script variant pretty well.
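As an illustration, a rough sketch of the RewriteMap approach, assuming mod_rewrite, mod_proxy and mod_proxy_http are enabled (the paths and container addresses here are made up):

$ cat > /etc/apache2/containers.map <<'EOF'
box1.domain.com 172.17.0.2:80
box2.domain.com 172.17.0.3:80
EOF
$ cat > /etc/apache2/conf-enabled/docker-proxy.conf <<'EOF'
RewriteEngine On
RewriteMap containers txt:/etc/apache2/containers.map
RewriteCond ${containers:%{HTTP_HOST}} ^(.+)$
RewriteRule ^/(.*)$ http://%1/$1 [P,L]
EOF

Since the map is a plain text file, you can regenerate it whenever a container starts or stops without touching the rest of the Apache configuration.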
If you want to avoid Apache, the good folks over at dotCloud created Hipache to do the routing for their PaaS services. They even documented the different things they tried before building their own solution. I found a reference to tsuru.io using Hipache exactly for routing to Docker containers, which definitely validates it for this purpose.
My answer may come too late, but when you use Docker you don't really need SSH to connect to your containers. With the docker exec command, you can run shell commands directly in a running container.
Here is my advice: use the nginx-proxy container listed at the beginning to handle the subdomains, and run Portainer on your host to get a visual overview of your containers, images and logs, and even execute commands in them, all through the Portainer GUI.
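For example (mycontainer is a placeholder; the Portainer command follows its quick-start documentation, so double-check the current docs):

$ docker exec -it mycontainer /bin/bash
$ docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer

The first command gives you a shell inside a running container without any SSH daemon; the second starts the Portainer GUI on port 9000.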
I used Apache's ProxyPreserveHost, inside a <VirtualHost> block for the subdomain, forwarding to the container's published port:
ProxyPreserveHost On
ProxyPass "/" "http://localhost:4533/"
ProxyPassReverse "/" "http://localhost:4533/"
The Docker EE docs state that you can use their built-in load balancer to do path-based routing:
https://docs.docker.com/ee/ucp/interlock/usage/context/
I would love to use this to give our local devs a local container cluster to develop against, since a lot of our apps use host paths to route each service.
My original solution was to add another container to the compose service that would just be an nginx proxy doing path-based routing, but then I stumbled on that Docker EE functionality.
Is there anything similar to that functionality without using Docker EE or should I stick with just using an nginx reverse proxy container?
EDIT: I should clarify: in our release environments I use an ALB on AWS. This is for local dev workstations.
The Docker EE functionality is just them wrapping automation around an Interlock container, which itself runs nginx, I think. I recommend you just use nginx locally in your compose file, or better yet, use Traefik, which is purpose-built for exactly this.
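A rough local sketch with Traefik v2, mirroring its own quick start (my-app1-image is a placeholder; the backticks inside the label are Traefik rule syntax, not shell substitution, hence the single quotes):

$ docker run -d --name traefik -p 80:80 -v /var/run/docker.sock:/var/run/docker.sock traefik:v2.11 --providers.docker
$ docker run -d --label 'traefik.http.routers.app1.rule=PathPrefix(`/app1`)' my-app1-image

Traefik picks up the routing rule from the container's labels at start time, which is about as close as you get to the Docker EE behavior without Docker EE.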
I want to run multiple services such as GitLab and Racktables on the same host, with HTTPS enabled, in different containers. How can I achieve this?
You achieve this by running a reverse proxy (nginx or Apache) that forwards traffic to the different containers using different virtual hosts:
gitlab.foo.bar -> gitlab container
racktables.foo.bar -> racktables container
etc
The reverse proxy container will map ports 80 and 443 to the host. None of the other containers need port mappings, as all traffic goes through the reverse proxy.
I think the quickest way to get this working is to use jwilder/nginx-proxy. It's extremely newbie-friendly, as it automates almost everything for you. You can also learn a lot by looking at the generated config files inside the container. Even getting TLS to work is not that complicated, and you get a setup with an A+ rating from SSL Labs by default.
I've used this for my hobby projects for almost a year and it works great (with Let's Encrypt).
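For TLS, you run the Let's Encrypt companion container next to the proxy. A sketch based on the jrcs/letsencrypt-nginx-proxy-companion README (the certs path is a placeholder; check the README for the exact volume wiring):

$ docker run -d -p 80:80 -p 443:443 --name nginx-proxy \
    -v /srv/certs:/etc/nginx/certs:ro -v /etc/nginx/vhost.d -v /usr/share/nginx/html \
    -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
$ docker run -d -v /srv/certs:/etc/nginx/certs:rw --volumes-from nginx-proxy \
    -v /var/run/docker.sock:/var/run/docker.sock:ro jrcs/letsencrypt-nginx-proxy-companion
$ docker run -d -e VIRTUAL_HOST=gitlab.foo.bar -e LETSENCRYPT_HOST=gitlab.foo.bar \
    -e LETSENCRYPT_EMAIL=admin@foo.bar gitlab/gitlab-ce

The companion watches for containers with LETSENCRYPT_HOST set and obtains certificates for them, and the proxy picks the certificates up automatically.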
You can of course also manually configure everything, but it's a lot of work with so many pitfalls.
The really bad way to do this is to run the reverse proxy on the host and map lots of ports from all the containers to the host. Please don't do that.
I am looking for a way to assign a domain name to a container when it is started. For example, I want to start a web server container and be able to access its pages via a domain name. Is there an easy way to do this?
As far as I know, Docker doesn't provide this feature out of the box, but there are several workarounds. Essentially, you need to deploy a DNS server on your host that distinguishes the containers and resolves their domain names to their dynamic IPs. You could try the following:
Deploy one of the Docker-aware DNS solutions (I suggest SkyDNS v1 / SkyDock);
Configure your host to use this DNS (by default SkyDNS makes the containers know each other by name, but the host is not aware of it);
Run your containers with an explicit --hostname (you will probably use the scheme container_name.image_name.dev.skydns.local).
You can skip step #2 and run your browser inside a container too: it will discover the web application container by hostname.
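A sketch of steps 1 and 3 on the consuming side, assuming SkyDNS is already answering on the docker0 bridge IP (often 172.17.0.1; adjust to your setup) and using nginx as the example web image:

$ docker run -d --dns 172.17.0.1 --hostname webapp.nginx.dev.skydns.local nginx
$ docker run --rm --dns 172.17.0.1 busybox wget -qO- http://webapp.nginx.dev.skydns.local/

The second command is the step-#2-skipping variant: a throwaway container that resolves the web container by hostname through SkyDNS.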
I am writing an application composed of a few Spring Boot based microservices with a Zuul-based reverse proxy in front.
It works when I start the services on my machine, but for server rollout I'd like to use Docker for the services, and this seems not to be possible right now.
Normally you would have a fixed "internal" port and randomized ports on the outside of the container, but the app in the container doesn't know the outside port (or IP).
The Netflix tools match what I want for an efficient microservice architecture, and conceptually I really like Docker.
As far as I can see it would be very troublesome to start the container, gather the outside port on the host and pass it to the app, because you can't simply change the port after the app is started.
Is there any way to use Eureka with Docker-based clients?
[Update]
I guess I did a poor job explaining the problem. So maybe this clarifies it a bit more:
The Eureka server itself can run in Docker, as I have only one and the outside port doesn't matter. I can use the link feature to access it from the clients.
The problem is the URL that the clients register themselves with.
This is, for example, https://localhost:8080/, but due to dynamic port assignment it is really only accessible via https://localhost:54321/.
So Eureka will return the wrong URL for the services.
UPDATE
I have updated my answer below, so have a look there.
I have found a solution myself; it's maybe not the best solution, but it works for me...
When you start Docker with --net=host (host networking), you use the host's network stack directly. Then I just use 0 as the port for Spring Boot, and Spring randomizes the port for me; since the container uses the host's networking stack, there is no translation to a different port (or IP).
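In docker run terms, a minimal sketch (the image name is a placeholder; SERVER_PORT=0 is Spring Boot's relaxed binding for server.port=0):

$ docker run -d --net=host -e SERVER_PORT=0 my-spring-service-image

Because the container shares the host's network stack, the random port Spring picks is exactly the port other machines can reach, so the URL the client registers with Eureka is correct.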
There are some drawbacks though:
When you use host networking you can't use the link feature for these containers, neither as a link source nor a target.
Using the host's network stack means less encapsulation of the instance, which may be a problem depending on your project.
I hope this helps.
A lot of time has passed, and I think I should elaborate on this a little further:
If you use Docker to host your Spring application, just don't use a random port! Use a fixed port, because every container gets its own IP anyway, so every service can use the same port. This makes life a lot easier.
If you have a public-facing service then you would use a fixed port anyway.
For local starts via Maven or the command line, have a dedicated profile that uses randomized ports so you don't get conflicts (but be aware that there are, or have been, a few bugs around random ports and service registration).
If for whatever reason you want or need to use host networking, you can use randomized ports of course, but most of the time you shouldn't!
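With the default bridge network, something like this is all you need (placeholder image names; both apps can keep the same fixed internal port without clashing, because each container has its own IP):

$ docker run -d --name users-service my-users-image
$ docker run -d --name orders-service my-orders-image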
You can set up a directory for each Docker instance, share it between the host and the instance, and then write the port and IP address to a file in that directory.
$ instanceName=service-$RANDOM                # any unique name will do
$ dirName=/var/lib/docker/metadata/$instanceName
$ mkdir -p $dirName
$ docker run --name $instanceName -v ${dirName}:/mnt/metadata ...
$ # host IP plus the port published for the app's internal port 8080:
$ echo "$(hostname -I | awk '{print $1}') $(docker port $instanceName 8080)" > ${dirName}/external-address
Then you just read /mnt/metadata/external-address from your application and use that information with Eureka.
I have two Docker containers (container_one and container_two); one is linked to the other: container_one >>link>> container_two.
When I run a curl command from within container_one using the address http://container_two/index.php, the curl command executes successfully, as expected.
However, I would like to introduce a wildcard subdomain so that I can attach any number of subdomains to container_two (e.g. site1.container_two, site2.container_two, *.container_two, etc.). Obviously, running curl against http://site1.container_two/index.php from container_one does not work with linking alone.
Does anyone know how this would be possible with a docker run command or perhaps some other way?
Basically, you cannot do this with --link flags alone, because --link works by adding entries to the /etc/hosts file to facilitate this inter-container communication, and /etc/hosts files do not support wildcard entries.
However, you could set up a DNS server on container_one, configure a wildcard host (or subdomain) record on it pointing to container_two, forward all other requests to your real DNS, and then pass --dns=127.0.0.1 in the docker run command for container_one. This is a bit hacky, but what happens is that container_one falls back to 127.0.0.1 (localhost) for any hostname it does not find in /etc/hosts, and the local DNS server answers the subdomain queries with container_two's address while forwarding everything else to your external DNS infrastructure.
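For the DNS server itself, dnsmasq makes the wildcard part easy, because an address=/domain/ip line matches the domain and all of its subdomains. A sketch, with a made-up IP for container_two and everything after the first command running inside container_one (my-image is a placeholder with dnsmasq installed):

$ docker run -it --dns=127.0.0.1 --name container_one --link container_two:container_two my-image /bin/bash
$ printf 'no-resolv\nserver=8.8.8.8\naddress=/container_two/172.17.0.3\n' > /etc/dnsmasq.conf
$ dnsmasq
$ curl http://site1.container_two/index.php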
You can find more information about this in the documentation. Good luck!