How to route requests between docker containers

I have a cloud server where I host my web services. Currently there is only one Docker container, with JS + PHP + MySQL, running on the server; it serves the web service mysite.co. There are going to be more web services, and I want to host them on the same machine but in separate Docker containers. I want to refactor and create a bunch of services and containers:
docker1 with MySQL --> DB for all services
docker2 with PHP + JS --> platform.mysite.co
docker3 with PHP + JS --> for mysite.co
docker4 with Python --> client.mysite.co. These are REST endpoints for clients (ideally accessible only via VPN)
With which tool can I route web-requests between containers?

Not sure what your exact problem is.
If it is basic routing between three containers, you need a basic reverse proxy server (nginx, Apache).
If you want to perform load balancing as well as routing between nodes in a Swarm or pods in Kubernetes, you may choose something more Docker-suited, such as Traefik.
It sounds like you see containers as some sort of impenetrable bastion... while they actually act exactly like your non-containerized web servers.
So the routing problems you have have the same solutions here... maybe a few more, because Docker adds a few dedicated solutions of its own.
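For illustration, here is a minimal nginx sketch of that kind of routing: one proxy container publishes ports 80/443 and forwards by Host header to the other containers over a shared Docker network. The upstream service names (site, platform, client-api) and their ports are assumptions, not anything from your setup; they have to match whatever names you give your containers.

# nginx.conf sketch for a hypothetical proxy container (assumed names/ports)
server {
    listen 80;
    server_name mysite.co;
    location / {
        proxy_pass http://site:80;          # docker3: PHP + JS for mysite.co
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name platform.mysite.co;
    location / {
        proxy_pass http://platform:80;      # docker2: PHP + JS for platform.mysite.co
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name client.mysite.co;
    location / {
        proxy_pass http://client-api:8000;  # docker4: Python REST endpoints
        proxy_set_header Host $host;
    }
}

Only the proxy container needs published ports; the application containers stay reachable solely on the internal network, and the MySQL container needs no HTTP routing at all.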

Related

Docker Container Containing Multiple Services

I am trying to build a container containing 3 applications, for example:
Grafana;
Node-RED;
NGINX.
So I will just need to expose one port, for example:
NGINX reverse proxy on port 3001/grafana redirects to Grafana on port 3000, and
NGINX reverse proxy on port 3001/nodered redirects to Node-RED on port 1880.
Does this make sense to you, or is this architecture not feasible compared to using Docker Compose?
If I understand correctly, your concern is about opening only one port publicly.
For this, you would be better off building 3 separate containers, each with its own service, all in the same Docker network. You could wire your services together as you described within that virtual network instead of within the same container.
Why? Because containers are specifically designed to hold the environment for a single application, in order to provide isolation and reduce compatibility issues, with all the network configuration done at a higher level, outside of the containers.
Having all your services inside the same container thwarts these advantages of containerized applications. It's almost like you're not even using containers.
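As a rough sketch of that layout (image tags and the mounted nginx config are assumptions for illustration), a docker-compose.yml could look like:

version: "3.8"
services:
  grafana:
    image: grafana/grafana          # listens on 3000 inside the network
  nodered:
    image: nodered/node-red         # listens on 1880 inside the network
  nginx:
    image: nginx:alpine
    ports:
      - "3001:80"                   # the only port published to the host
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro

Compose puts all three services on one default network, so the nginx config can keep the path rules you described, proxying /grafana to http://grafana:3000 and /nodered to http://nodered:1880 by service name.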

How to deal with changing IPs of docker-compose containers?

I've set up an app where a Node.js backend has to communicate with a Rasa chatbot backend through a React frontend. All services are running through the same docker-compose file. Being a Docker beginner, there are some things I'm not sure about:
communication between the host and a container is done using the container's IP:
browser opening the local React dev server running on localhost:3000 or 172.22.0.1:3000
browser sending a request to the express backend on localhost:4000 or 172.22.0.2:4000
however, communication between two Docker containers is done using the container's name:
rasa server communicating with the rasa custom action server through http://action_server:5055/webhooks
rasa custom action server communicating with the express backend through http://backend_name:4000/users/
My problem is that when I need to contact the rasa backend from my React frontend, I need to put in the rasa Docker container's IP, which (sometimes) changes when docker-compose is brought up again. To work around this I run docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app_rasa_1 to get the IP and manually change it in the React frontend.
Is there a way to avoid changing the IP altogether and use the container name (or an alias/link) instead? Or what would be a way to automate updating the container's IP in the React frontend (are environment variables updated via a script an option)?
Completely ignore the container-private IP addresses. They're implementation details that have several practical problems, including (as you note) them changing when a container is recreated. (They're also unreachable on non-Linux hosts, or if the browser isn't on the same host as the containers.)
You show the correct patterns in your question. For calls between containers, use the container names as host names (this setup is described in more detail in Networking in Compose). For calls from outside containers, including from browser-based applications, use the host's DNS name or IP address and the first number from the ports: you publish.
If the browser application needs to contact a back-end server, it needs a path to do this. This could be via published ports:, or one of your other components could proxy the request to the service (maybe using the express-http-proxy middleware).
A dedicated container that only proxies to other backend services is also a useful pattern (Docker Nginx Proxy: how to route traffic to different container using path and not hostname includes some examples), particularly since this will let you use path-only URLs like /api or /rasa in your browser application. If the React application is served from http://localhost:8080/, and the main backend is http://localhost:8080/api, then you can just make HTTP requests to /api and they will be interpreted relative to the page's URL. This avoids the hostname problem completely, so long as your reverse proxy has path-based routes to every container you need to directly contact.
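To make the proxy pattern concrete, here is a sketch of the nginx config such a dedicated proxy container could use; the service names (frontend, backend, rasa) and ports are assumptions based on the setup described above and must match your docker-compose service names:

# default.conf sketch for a proxy container published as e.g. 8080:80
server {
    listen 80;
    location /api/ {
        proxy_pass http://backend:4000/;    # express backend, reached by service name
    }
    location /rasa/ {
        proxy_pass http://rasa:5005/;       # rasa server, reached by service name
    }
    location / {
        proxy_pass http://frontend:3000/;   # react app
    }
}

The browser then only ever talks to http://localhost:8080, and requests to /api/... or /rasa/... are routed internally by container name, so no container IP appears anywhere in the frontend code.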

Layer 7 path based routing to Docker containers without Docker Enterprise

The Docker EE docs state you can use their built-in load balancer to do path-based routing:
https://docs.docker.com/ee/ucp/interlock/usage/context/
I would love to use this for our local devs to have a local container cluster to develop against, since a lot of our apps rely on path-based routing on the host to reach each service.
My original solution was to add another container to the compose file that would just be an nginx proxy doing path-based routing, but then I stumbled on that Docker EE functionality.
Is there anything similar to that functionality without using Docker EE, or should I stick with just using an nginx reverse proxy container?
EDIT: I should clarify, in our release environments, I use an ALB with AWS. This is for local dev workstations.
The Docker EE functionality is just them wrapping automation around an Interlock container, which I think itself runs nginx. I recommend you just use nginx locally in your compose file, or better yet, use Traefik, which is purpose-built for exactly this kind of routing.
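For local dev, a Traefik setup can be as small as the following sketch; the router names, paths and app images are placeholders, and the options shown are just the usual Docker-provider wiring:

version: "3.8"
services:
  traefik:
    image: traefik:v2.10
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  app1:
    image: your-app1-image          # placeholder
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app1.rule=PathPrefix(`/app1`)"
      - "traefik.http.routers.app1.entrypoints=web"
  app2:
    image: your-app2-image          # placeholder
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app2.rule=PathPrefix(`/app2`)"
      - "traefik.http.routers.app2.entrypoints=web"

Traefik watches the Docker socket and picks up routes from container labels, so adding another app is just another service with its own PathPrefix label, with no central proxy config to edit.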

Docker best practices for managing websites

What do you think is best practice for running multiple websites on the same machine? Would you have a separate container set for each website/domain, or have all the sites in the one container set?
Website 1, Website 2, Website 3:
nginx
phpfpm
mysql
or
Website 1:
nginx_1
phpfpm_1
mysql_1
Website 2:
nginx_2
phpfpm_2
mysql_2
Website 3:
nginx_3
phpfpm_3
mysql_3
I prefer to use separate containers for the individual websites but a single web server as a proxy. This lets you serve all the websites for the different domains on the same host ports (80/443). Additionally, you don't necessarily need to run multiple nginx containers.
Structure:
Proxy
nginx (listens to port 80/443)
Website 1
phpfpm_1
mysql_1
Website 2
phpfpm_2
mysql_2
...
You can use automated config generation for the proxy service, such as jwilder/nginx-proxy, which also opens the way to convenient SSL certificate handling with e.g. jrcs/letsencrypt-nginx-proxy-companion.
The proxy service can then look like this:
Proxy
nginx (listens to port 80/443)
docker-gen (creates an nginx config based on the running containers and services)
letsencrypt (creates SSL certificates if needed)
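A rough compose sketch of that proxy stack (the website image and hostnames are placeholders; the proxy and companion images are the projects mentioned above):

version: "3.8"
services:
  proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets docker-gen watch containers
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
  website1:
    image: your-site1-image          # placeholder app image (e.g. nginx + phpfpm)
    environment:
      - VIRTUAL_HOST=site1.example.com
      - LETSENCRYPT_HOST=site1.example.com
      - LETSENCRYPT_EMAIL=admin@example.com
volumes:
  certs:
  vhost:
  html:

Each website container only declares its VIRTUAL_HOST (and LETSENCRYPT_* variables for certificates); docker-gen regenerates the nginx config whenever a matching container starts or stops.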
Well, if they are not related, I would definitely not put them in the same container set...
Even if you have one website with nginx and MySQL, I would pull these two apart into separate containers. It simply gives you more flexibility, and that's what Docker is all about.
Good luck and have fun with Docker!
I would definitely isolate each of the applications.
Why?
If they don't rely on each other at all, making changes in one application can affect the other simply because they're running in the same environment, and you wouldn't want all of them to run into problems all at once.

CoreOS Fleet, link redundant Docker container

I have a small service that is split into 3 Docker containers: one backend, one frontend, and a small logging part. I now want to start them using CoreOS and fleet.
I want to try to start 3 redundant backend containers, so the frontend can switch between them if one of them fails.
How do I link them? If I only use one, it's easy: I just give it a name, e.g. 'back', and link it like this:
docker run --name front --link back:back --link graphite:graphite -p 8080:8080 blurio/hystrixfront
Is it possible to link multiple ones?
The method you use will be somewhat dependent on the type of backend service you are running. If the backend service is HTTP, then there are a few good proxies / load balancers to choose from:
nginx
haproxy
The general idea behind these is that your frontend service need only be introduced to a single entry point which nginx or haproxy presents. The tricky part with this, or any cloud service, is that you need to be able to introduce backend services, or remove them, and have them available to the proxy service. There are some good writeups for nginx and haproxy to do this. Here is one:
haproxy tutorial
The real problem here is that it isn't automatic. There may be some techniques that automatically introduce/remove backends for these proxy servers.
Kubernetes (which can be run on top of CoreOS) has a concept called 'Services'. Using this deployment method you can create a 'service' and another thing called a 'replication controller', which provides the 'backend' Docker processes for the service you describe. The replication controller can then be instructed to increase or decrease the number of backend processes. Your frontend accesses the 'service'. I have been using this recently and it works quite well.
I realize this isn't really a cut and paste answer. I think the question you ask is really the heart of cloud deployment.
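For reference, a minimal sketch of what those two Kubernetes objects look like as manifests; the name 'back', the label, the image and the port are placeholders matching the example above:

apiVersion: v1
kind: ReplicationController
metadata:
  name: back
spec:
  replicas: 3                      # three redundant backend pods
  selector:
    app: back
  template:
    metadata:
      labels:
        app: back
    spec:
      containers:
      - name: back
        image: your-backend-image  # placeholder
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: back
spec:
  selector:
    app: back
  ports:
  - port: 8080
    targetPort: 8080

The frontend then talks to http://back:8080 via the Service, which load-balances across whatever pods the replication controller currently keeps alive.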
As Michael stated, you can get this done automatically by adding a discovery service and binding it to the backend container. The discovery service adds the IP address (usually you'll want this to be the IP address of your private network, to avoid unnecessary bandwidth usage) and port to the etcd key-value store, and the load balancer container can read from there to automatically add the available nodes.
There is a good tutorial by Digital Ocean on this:
https://www.digitalocean.com/community/tutorials/how-to-use-confd-and-etcd-to-dynamically-reconfigure-services-in-coreos
