The Docker EE docs state you can use their built-in load balancer to do path-based routing:
https://docs.docker.com/ee/ucp/interlock/usage/context/
I would love to use this so our local devs have a local container cluster to develop against, since a lot of our apps use paths on the host to route to each service.
My original solution was to add another container to the compose setup that would just be an nginx proxy doing path-based routing, but then I stumbled on that Docker EE functionality.
Is there anything similar to that functionality without using Docker EE, or should I stick with an nginx reverse proxy container?
EDIT: I should clarify, in our release environments, I use an ALB with AWS. This is for local dev workstations.
The Docker EE functionality is just them wrapping automation around an Interlock container, which itself runs nginx, I think. I recommend you just use nginx locally in your compose file, or better yet use Traefik, which is purpose-built for exactly this.
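If it helps, here's a minimal sketch of what that could look like with Traefik in a local compose file; the service names, ports, and path prefixes below are made up for illustration. Traefik watches the Docker socket and builds its routes from container labels, which is roughly what Interlock automates in Docker EE:

version: "3.7"
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  users-api:      # hypothetical app service
    image: users-api:dev
    labels:
      - traefik.enable=true
      - traefik.http.routers.users.rule=PathPrefix(`/users`)
      - traefik.http.services.users.loadbalancer.server.port=8080

  orders-api:     # hypothetical app service
    image: orders-api:dev
    labels:
      - traefik.enable=true
      - traefik.http.routers.orders.rule=PathPrefix(`/orders`)
      - traefik.http.services.orders.loadbalancer.server.port=8080

With that running, http://localhost/users and http://localhost/orders land on the right containers, and devs only need the one compose file.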
I need to set up a server that does multiple things:
Hosts a Grafana instance on docker (3000 is the default port)
Hosts a flask service for printing Grafana reports (the default Grafana printing sucks, so I built a Selenium robot to grab the objects on the screen, create a PDF and download the results)
Hosts a Docker app built with Wappler (a PHP-based app builder)
I'd like to use free certs (Let's Encrypt).
I'm new to docker and new to linux server administration. What's the best resource for learning how to set this up?
It's super easy to set up reverse proxies using the LinuxServer LetsEncrypt container (it's an nginx container that auto-manages free certs). The initial setup might seem a little intimidating if you're completely new to Docker, but it's easier than it looks, and once you get the hang of it, it's cake.
Other than that, you just need to make sure all three are on the same Docker network so they can talk to each other, and (if you want) also expose their ports to the host in your docker run commands or docker compose file.
e.g. (pseudo-code):
docker run -d --name grafana -p 3000:3000 grafana/grafana
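If you go the compose route instead, a rough sketch might look like this; the flask and Wappler image names are placeholders I made up, and the point is just that all three services share one network and can reach each other by service name (e.g. http://grafana:3000):

version: "3.7"
services:
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"            # optional: expose to the host
    networks: [web]

  report-printer:              # placeholder name for the flask/Selenium service
    image: my-flask-reporter   # hypothetical image
    networks: [web]

  wappler-app:                 # placeholder name for the Wappler app
    image: my-wappler-app      # hypothetical image
    networks: [web]

networks:
  web: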
For anyone who ends up finding this via a search, I ended up using Traefik for routing/SSL setup. The best article I found on how to set this up is here.
(Note: many articles reference Traefik 1.7; however, a lot changed between 1.7 and version 2. The article above uses Traefik 2.0.)
Basically, Traefik watches the other Docker containers on the same network; if a container has specific labels set in its Docker configuration, Traefik will automatically generate Let's Encrypt SSL certs (see the docs) and will route traffic to that container.
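As a hedged sketch of that label-driven pattern (the domain, email, and image below are placeholders, not taken from the article):

services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge=true
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
    ports: ["80:80", "443:443"]
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  grafana:
    image: grafana/grafana
    labels:
      - traefik.http.routers.grafana.rule=Host(`grafana.example.com`)
      - traefik.http.routers.grafana.entrypoints=websecure
      - traefik.http.routers.grafana.tls.certresolver=le

Traefik sees the labels on the grafana container, requests a cert for grafana.example.com from Let's Encrypt, and proxies HTTPS traffic to it.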
I am new to containers and hope this forum is the right place to ask this question.
I am learning Docker and containers, and I now have some skills using the docker commands and dealing with containers. I understand that Docker has two main parts: the docker client (docker.exe) and the docker server (dockerd.exe). In development, both are installed on my local machine (I installed them manually on Windows Server 2016, following Nigel Poulton's tutorial here: https://app.pluralsight.com/course-player?clipId=f1f27565-e2bf-4e58-96f3-bc2c3b160ec9).

Now, when it comes to real production use, how would I configure my docker client to communicate with a remote docker server? I tried to do some research on the internet but honestly could not find a simple answer to this question. I installed Docker Desktop on my Windows 10 machine and noticed that it created a Hyper-V machine, which appears to be a Linux VM; my understanding is that this machine runs the docker server my docker client interacts with, but I do not understand how this interaction gets done.
I would appreciate some guidance or a clear answer to my questions.
In production environments you never have a remote Docker daemon. Generally you interact with Docker either through a dedicated orchestrator (Kubernetes, Docker Swarm, Nomad, AWS ECS), through a general-purpose system automation tool (Chef, Ansible, SaltStack), or, if you must, by SSHing directly to the system and running docker commands there.
Remote access to the Docker daemon is something of a security disaster. If you can access the Docker daemon at all, you can edit any file on the host system as root, and pretty trivially take over the whole thing. (Google "Docker cryptojacking" for some real-world examples.) In principle you can secure it with mutual TLS, but this is a tricky setup.
The other important best practice is that Docker images should be self-contained. Don't try to deploy a Docker image to production and also separately copy your application code. The same Ansible setup that can deploy a Docker container can also install Node directly on the target system, avoiding a layer; and it's tricky to copy application code into a Kubernetes volume, especially since Kubernetes pods can restart outside your direct control. Deploy (and test!) your images with all of the code COPYed in via the Dockerfile, minimizing the use of bind mounts.
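As a loose illustration of that last point, a production-style compose entry points at an image that already has the code COPYed in, rather than bind-mounting the source over it (the image name is made up):

services:
  app:
    # The image was built from a Dockerfile that COPYs the application code in,
    # so the container runs exactly the bits that were tested.
    image: registry.example.com/myapp:1.2.3
    # Avoid overriding the baked-in code with a bind mount in production:
    # volumes:
    #   - ./src:/app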
I have a cloud server where I host my web services. Currently there is only one Docker container, with JS + PHP + MySQL, running on the server; it serves the web service mysite.co. There are going to be more web services, and I want to host them on the same machine but in separate Docker containers. I want to refactor and create a bunch of services and containers:
docker1 with MySQL --> DB for all services
docker2 with PHP + JS --> platform.mysite.co
docker3 with PHP + JS --> for mysite.co
docker4 with Python --> client.mysite.co. These are REST endpoints for clients (ideally accessible only via VPN)
With which tool can I route web-requests between containers?
Not sure what your exact problem is.
If it is basic routing between three containers, you just need a basic web server (nginx, Apache).
If you want to perform load balancing as well as routing between nodes in a swarm or pods in Kubernetes, you may choose something more Docker-suited, such as Traefik.
It sounds like you see containers as some sort of impenetrable bastion... while a container actually behaves exactly like your non-containerized web servers.
So the routing problems you have have the same solutions here... maybe a few more, because Docker adds a few dedicated ones.
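For example, here is a rough sketch of the Traefik option against the containers from the question, routing by hostname (the image names are placeholders):

services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports: ["80:80"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  platform:                     # docker2: PHP + JS
    image: my-platform-image
    labels:
      - traefik.enable=true
      - traefik.http.routers.platform.rule=Host(`platform.mysite.co`)

  site:                         # docker3: PHP + JS
    image: my-site-image
    labels:
      - traefik.enable=true
      - traefik.http.routers.site.rule=Host(`mysite.co`)

  client-api:                   # docker4: Python REST endpoints
    image: my-client-api-image
    labels:
      - traefik.enable=true
      - traefik.http.routers.client.rule=Host(`client.mysite.co`)

  db:                           # docker1: MySQL; not routed by the proxy, the other
    image: mysql:8              # containers reach it directly over the Docker network
    environment:
      MYSQL_ROOT_PASSWORD: example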
I'm using one host to run several services, all on Docker. Each service has a domain like service.domain.com.
I'm using haproxy as a router. The problem is that there is a period of downtime whenever I change the configuration (haproxy.cfg). How can I solve this? Is there another solution besides haproxy, or...?
PS: I'm using Windows Server 2016 and Docker for Windows.
You probably need an orchestration tool like Kubernetes, and to run HAProxy inside Kubernetes. To make changes, build new Docker images and deploy them using the RollingUpdate strategy; that way you will not lose connections.
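A hedged sketch of what that could look like as a Kubernetes Deployment (replica count and image tag are just illustrative; you'd bake the new haproxy.cfg into each image you roll out):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: haproxy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a serving pod away before its replacement is ready
      maxSurge: 1         # start the new pod first, then retire an old one
  template:
    metadata:
      labels:
        app: haproxy
    spec:
      containers:
        - name: haproxy
          image: haproxy:2.8   # replace with your own image containing haproxy.cfg
          ports:
            - containerPort: 80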
I'm trying to set up nginx in a container as a reverse proxy for my containers (Docker Swarm) and for static sites hosted on Google Cloud Platform & Netlify.
I'm actually able to run nginx in containers, but I'm really worried about managing the configuration.
How will I push site configuration updates in nginx out to all the containers (adding/removing a location)?
Is attaching a disk the best option for storing logs?
Is there any fault in my architecture?
If the image isn't working, please use this link - https://s1.postimg.org/1tv4hka3zz/profitto-architecture_1.png
Hey Sanjay.
Have a look at:
https://github.com/jwilder/nginx-proxy
https://traefik.io/
The first one is a modified nginx reverse proxy by J. Wilder.
The second one is a newer, cloud-native reverse proxy created specifically for such use cases.
Both are able to listen to the Docker socket and dynamically add new containers to the reverse-proxy backend.
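For instance, a minimal compose sketch of the first option (the app image and hostname are made-up placeholders); nginx-proxy watches the Docker socket and generates a vhost for any container that carries a VIRTUAL_HOST environment variable:

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports: ["80:80"]
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # how it discovers containers

  webapp:
    image: my-webapp                   # hypothetical app image
    environment:
      - VIRTUAL_HOST=app.example.com   # nginx-proxy adds a vhost for this automatically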
Regarding your architecture:
Why not run the reverse-proxy containers inside the Swarm cluster?
As for logging, have a look at the Docker log drivers.
You can collect the logs of all containers with e.g. fluentd or Splunk.
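For instance, here is a minimal sketch of pointing one service's logs at a fluentd collector via the compose logging options (the collector address is an assumption):

services:
  webapp:
    image: my-webapp            # hypothetical app image
    logging:
      driver: fluentd
      options:
        fluentd-address: "localhost:24224"   # wherever your fluentd collector listens
        tag: "webapp"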