I want to run multiple services such as GitLab and Racktables on the same host, with HTTPS enabled, in different containers. How can I achieve this?
You achieve this by running a reverse proxy (nginx or Apache) that forwards traffic to the different containers using different virtual hosts:
gitlab.foo.bar -> gitlab container
racktables.foo.bar -> racktables container
etc
The reverse proxy container maps ports 80 and 443 to the host. None of the other containers need port mappings, since all traffic goes through the reverse proxy.
I think the quickest way to get this working is to use jwilder/nginx-proxy. It's extremely newbie-friendly, as it automates almost everything for you, and you can learn a lot by looking at the generated config files in the container. Even getting TLS to work is not that complicated, and you get an A+ rating from SSL Labs by default.
I've used this for my hobby projects for almost a year and it works great (with Let's Encrypt).
You can of course also manually configure everything, but it's a lot of work with so many pitfalls.
The really bad way to do this is to run the reverse proxy on the host and map lots of ports from all the containers to the host. Please don't do that.
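For the recommended setup, a minimal docker-compose sketch could look like this (a hedged outline; the GitLab image is the official one, while the Racktables image and the hostnames are placeholders):

version: "3"
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets the proxy discover the other containers
      - ./certs:/etc/nginx/certs:ro                # TLS certificates for the virtual hosts
  gitlab:
    image: gitlab/gitlab-ce
    environment:
      - VIRTUAL_HOST=gitlab.foo.bar                # nginx-proxy generates a virtual host from this
      - VIRTUAL_PORT=80
  racktables:
    image: example/racktables                      # placeholder; build or pick your own image
    environment:
      - VIRTUAL_HOST=racktables.foo.bar

Note that only the proxy maps ports to the host; the application containers are only reachable through their virtual hosts.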
Related
I am trying to build a single container that holds 3 applications, for example:
Grafana;
Node-RED;
NGINX.
So I will just need to expose one port, for example:
the NGINX reverse proxy on port 3001 redirects /grafana to Grafana on port 3000, and
the NGINX reverse proxy on port 3001 redirects /nodered to Node-RED on port 1880.
Does that make sense to you? Or is this architecture not feasible compared to using Docker Compose?
If I understand correctly, your concern is about opening only one port publicly.
For this, you would be better off building 3 separate containers, each with their own service, and all in the same docker network. You could plug your services like you described within the virtual network instead of within the same container.
Why? Because containers are specifically designed to hold the environment for a single application, in order to provide isolation and reduce compatibility issues, with all the network configuration done at a higher level, outside of the containers.
Having all your services inside the same container thwarts these advantages of containerized applications. It's almost like not using containers at all.
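If you go that route, a hedged docker-compose sketch could look like the one below. The nginx.conf file is an assumption: it would proxy /grafana to http://grafana:3000 and /nodered to http://nodered:1880, using the service names as hostnames on the shared network.

version: "3"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "3001:80"                # host port 3001 -> nginx; the only published port
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # assumed reverse-proxy config
  grafana:
    image: grafana/grafana       # listens on 3000 inside the compose network
  nodered:
    image: nodered/node-red      # listens on 1880 inside the compose network

All three services end up on the same default network that Compose creates, so only the nginx port is reachable from outside.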
I have Rancher running behind this reverse proxy https://github.com/jwilder/nginx-proxy and I can start any containers and workloads I want. But because it hosts the containers on a managed network, the reverse proxy can't get the IP address of the container and won't forward to the application. I'm still learning how to use Rancher, but from the docs there were a couple of labels that I thought would be useful; I never got to try them because it doesn't allow me to add labels on the workload. I'm using Rancher 2.1.1.
You can add labels and annotations on a workload through the Rancher UI under 'Show Advanced Options', but depending on what you are doing you might want to use the Ingress Controller that Rancher can deploy for you to route traffic through services. You don't want to have to route traffic to workloads directly by cluster IP.
https://rancher.com/docs/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/
https://kubernetes.io/docs/concepts/services-networking/ingress/
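For illustration, a minimal Ingress manifest of the kind those docs describe might look like this (hostname and service name are placeholders; the layout matches the networking.k8s.io/v1 API, while older clusters use extensions/v1beta1 with a slightly different backend section):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: app.example.com               # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service         # a ClusterIP service in front of the workload
                port:
                  number: 80

The Ingress Controller then routes traffic to the service, which tracks the workload's pods, so you never deal with cluster IPs directly.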
I'm trying to set up a microservice deployment (deployment file at https://github.com/mojlighetsministeriet/groups/blob/master/docker-compose.example.yml) with several services that use HTTP (hopefully HTTPS later on) to communicate internally without being exposed outside the network. Later on I will add a proxy service that exposes specific features. I want to do this specifically with Docker swarm mode, and I like being able to define the deployment in a docker-compose.yml so I can initiate it with:
$ docker stack deploy my-platform -c docker-compose.example.yml
I want the internal API URLs to look like GET http://identity-provider/public-key and GET http://groups/b0c44674-58e0-4a8a-87e0-e1de35088964. I have done this with Kubernetes setups before and it works great, but now I want to get this working with Docker swarm mode.
The DNS part works without any problems, but Docker swarm mode won't let me have each service listening on port 80 (later 443). It keeps complaining about port conflicts even though each service has its own unique domain name, like identity-provider or groups and so on.
Should I use a specific network driver to get this working? I currently use overlay.
Using domain names without random ports would make calls between the services much simpler to remember than e.g. http://identity-provider:1234 and http://groups:1235; the ports only add complexity to the setup.
I'm fine with using any super cutting edge version of docker-ce if that helps somehow.
This should be possible right?
Docker Swarm routes incoming requests based on the published port; you can't publish the same port number for two applications in a single swarm.
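That conflict only applies to published ports, though: services that just talk to each other over a shared overlay network don't need a ports: section at all and can each listen on port 80 internally, reachable by service name. A hedged stack-file sketch (image names are placeholders):

version: "3.7"
services:
  identity-provider:
    image: example/identity-provider    # placeholder image
    networks: [internal]                # no ports: section, so nothing is published
  groups:
    image: example/groups               # placeholder image
    networks: [internal]
  proxy:
    image: nginx:alpine
    ports:
      - "443:443"                       # only the proxy publishes a port on the routing mesh
    networks: [internal]
networks:
  internal:
    driver: overlay

Inside that network, http://identity-provider/public-key and http://groups/... resolve via Swarm's built-in DNS without any published ports.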
What do you think is best practice for running multiple websites on the same machine? Would you have a separate container set for each website/domain, or have all the sites in one container set?
Website 1, Website 2, Website 3:
nginx
phpfpm
mysql
or
Website 1:
nginx_1
phpfpm_1
mysql_1
Website 2:
nginx_2
phpfpm_2
mysql_2
Website 3:
nginx_3
phpfpm_3
mysql_3
I prefer to use separate containers for the individual websites but a single web server as a proxy. This lets you access all websites of the different domains on the same host ports (80/443). Additionally, you don't necessarily need to run multiple nginx containers.
Structure:
Proxy
nginx (listens to port 80/443)
Website 1
phpfpm_1
mysql_1
Website 2
phpfpm_2
mysql_2
...
You can use automated config generation for the proxy service, such as jwilder/nginx-proxy, which also opens the way to convenient SSL certificate handling with e.g. jrcs/letsencrypt-nginx-proxy-companion.
The proxy service can then look like this:
Proxy
nginx (listens to port 80/443)
docker-gen (creates an nginx config based on the running containers and services)
letsencrypt (creates SSL certificates if needed)
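As a hedged compose sketch of that structure (the domains, e-mail address and website image are placeholders; here docker-gen is bundled inside the jwilder/nginx-proxy image rather than run as a separate container):

version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy            # nginx + docker-gen in one image
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy # tells the companion which container holds nginx
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
  website1:
    image: example/website1               # placeholder web application for site 1
    environment:
      - VIRTUAL_HOST=site1.example.com
      - LETSENCRYPT_HOST=site1.example.com
      - LETSENCRYPT_EMAIL=admin@example.com
  mysql_1:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=changeme      # placeholder
volumes:
  certs:
  vhost:
  html:

Websites 2 and 3 get their own application and database services with their own VIRTUAL_HOST/LETSENCRYPT_HOST values; none of them publish ports themselves.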
Well, if they are not related, I would definitely not put them in the same container...
Even if you have one website with nginx and mysql I would pull these two apart. It simply gives you more flexibility, and that's what Docker is all about.
Good luck and have fun with Docker!
I would definitely isolate each of the applications.
Why?
If they don't rely on each other at all, making changes in one application can affect the other simply because they're running in the same environment, and you wouldn't want all of them to run into problems all at once.
I am writing an application that is composed of a few Spring Boot based microservices with a Zuul based reverse proxy in front.
It works when I start the services on my machine, but for the server rollout I'd like to use Docker for the services, and that doesn't seem to be possible right now.
Normally you would have a fixed "internal" port and randomized ports on the outside of the container. But the app inside the container doesn't know the outside port (and IP).
The Netflix tools match what I would want to write an efficient microservice architecture and conceptually I really like docker.
As far as I can see it would be very troublesome to start the container, gather the outside port on the host and pass it to the app, because you can't simply change the port after the app is started.
Is there any way to use eureka with docker based clients?
[Update]
I guess I did a poor job explaining the problem. So maybe this clarifies it a bit more:
The eureka server itself can run in docker, as I have only one and the outside port doesn't matter. I can use the link feature to access it from the clients.
The problem is the URL that the clients register themselves with.
This is for example https://localhost:8080/ but due to dynamic port assignment it is really only accessible via https://localhost:54321/
So eureka will return the wrong URL for the services.
UPDATE
I have updated my answer below, so have a look there.
I have found a solution myself, which is maybe not the best solution, but it fits for me...
When you start Docker with "--net=host" (host networking), the container uses the host's network stack directly. Then I just use port 0 for Spring Boot, and Spring randomizes the port for me; as it's using the host's networking stack, there is no translation to a different port (and IP).
There are some drawbacks though:
When you use host networking you can't use the link-feature for these containers as link source or target.
Using the host's network stack leads to less encapsulation of the instance, which may be a problem depending on your project.
I hope it helps
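For reference, a hedged sketch of that workaround (the image name is a placeholder): SERVER_PORT=0 maps to Spring Boot's server.port, so the app picks a free port, which is directly reachable on the host because of host networking.

$ docker run --net=host -e SERVER_PORT=0 example/my-spring-service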
A lot of time has passed and I think I should elaborate this a little bit further:
If you use Docker to host your Spring application, just don't use a random port! Use a fixed port, because every container gets its own IP anyway, so every service can use the same port. This makes life a lot easier.
If you have a public facing service then you would use a fixed port anyway.
For local starts via Maven or, for example, the command line, have a dedicated profile that uses randomized ports so you don't have conflicts (but be aware that there are, or have been, a few bugs surrounding random ports and service registration).
If for whatever reason you want to or need to use host networking, you can of course use randomized ports, but most of the time you shouldn't!
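A hedged sketch of the fixed-port setup (port and Eureka URL are placeholders). A common companion setting, not mentioned above but often used for this, is eureka.instance.prefer-ip-address, which makes the client register its container IP instead of a hostname:

# application.yml inside the service image
server:
  port: 8080                            # fixed port; fine, since every container has its own IP
eureka:
  instance:
    prefer-ip-address: true             # register the container IP rather than the hostname
  client:
    serviceUrl:
      defaultZone: http://eureka:8761/eureka/   # placeholder; point at your Eureka server

In docker-compose you then simply don't publish the port at all for internal service-to-service calls.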
You can set up a directory for each docker instance and share it between the host and the instance and then write the port and IP address to a file in that directory.
$ instanceName=$(generate random instance name)
$ dirName=/var/lib/docker/metadata/$instanceName
$ mkdir -p $dirName
$ docker run --name $instanceName -v ${dirName}:/mnt/metadata ...
$ echo $(get port number and host IP) > ${dirName}/external-address
Then you just read /mnt/metadata/external-address from your application and use that information with Eureka.