Setup: [Varnish] <-> [Nginx] <-> [PHP FPM] <-> [PostgreSQL]
Varnish, Nginx and PHP FPM run as Docker containers.
When we push new code, we need to update at least the PHP container with the new build. Our CI/CD pipeline also triggers an Nginx reload (docker exec -i $CONTAINER_ID_API_NGINX nginx -s reload) so that Nginx picks up the (internal) IP of the newly created PHP container.
However, sometimes the Nginx container has to be restarted as well, for example when there are new static assets. In that case we also need to restart Varnish, and this is where our troubles lie and where we get some downtime on our API.
Things we've tried:
1. Starting second Nginx
2. Starting second Varnish
3. Killing first Nginx
4. Killing first Varnish
Problem: The second Varnish container complains that there are 2 IPs for the backend and goes into a restart loop until the first Nginx container is shut down.
1. Starting second Nginx
2. Killing first Nginx
3. Starting second Varnish
4. Killing first Varnish
Problem: Before the second Varnish is accepting connections, API requests still go to the first Varnish instance, which tries to forward them to the first (now killed) Nginx instance.
Question: Is there a way to get Varnish to switch its backend host to the new Nginx container, with or without a restart?
Our VCL file currently contains
backend default {
.host = "api-api";
.port = "80";
}
In an ideal world, we would start a second Nginx instance while Varnish still has the first instance's DNS cached, kill the first Nginx instance, and then trigger a DNS refresh on the Varnish instance without needing to kill the Varnish container itself.
varnishreload
Reloading the VCL can be done using the varnishreload command.
This command will load a new VCL, compile it, deactivate the previous VCL and activate the newly added one.
Any backend DNS resolution will happen while the new VCL file is parsed and compiled.
The varnishreload command runs multiple varnishadm commands. See https://github.com/varnishcache/pkg-varnish-cache/blob/master/systemd/varnishreload for more information on the code.
You can run varnishreload inside your container to automate the VCL reload, which also re-resolves the DNS of your backend endpoint.
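For example, a CI/CD pipeline could trigger the reload from outside the container, the same way the Nginx reload above is triggered. This is a sketch; the $CONTAINER_ID_API_VARNISH variable name is an assumption mirroring the question's naming:

# Reload the active VCL in the running Varnish container;
# the backend host "api-api" is re-resolved while the VCL is recompiled.
docker exec -i $CONTAINER_ID_API_VARNISH varnishreload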
varnishadm remote calls
You can also call varnishadm remotely, but that requires some extra measures.
See http://varnish-cache.org/docs/6.0/reference/varnishadm.html for varnishadm docs.
See http://varnish-cache.org/docs/6.0/reference/varnish-cli.html for more information about the Varnish CLI which varnishadm wraps around.
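As an illustration of what varnishreload does under the hood, the sequence is roughly the following. The -T address, secret file path, and VCL label are assumptions that depend on how varnishd was started:

# Load and compile the VCL file under a fresh label, then activate it
varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.load reload_42 /etc/varnish/default.vcl
varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.use reload_42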
Related
I have two nginx instances that I will call application and main. I have many reasons to do this.
The main instance running in front is reachable. If I replace its proxy_pass directive with return 200;, I get a 200 response. However, whether I return 200; in application, or I try to make it do what it's actually supposed to do (proxy again to a third app), it never receives the request.
The request eventually fails with 504 Gateway Time-out.
There's nothing out of the ordinary in any logs. The docker logs show the exact same output as when it was working just a few days ago, except there is no output for when I made the request, because the request is never received/registered. It basically stops at "startup complete; waiting for requests". So the app never receives the request in the first place when going through the reverse proxy. There's nothing at all in the nginx logs in either of the nginx containers.
Here's a minimal, reproducible example: https://github.com/Audiopolis/MultiServerRepro
I am not able to get the above example working. It briefly worked on Windows 10 (but not on Ubuntu 20.04), but it's not working any more. Can you see anything wrong with the configuration in the example?
The end goal is to easily add several applications to the same server using non-standard ports being proxied to by one "main" instance of nginx, which selects the appropriate app based on the host name. Note that each application must be capable of running on its own with its own nginx instance as well.
Thanks.
Your application and main nginx instances are running in different docker networks. main maps ports 80:80 and application maps ports 10000:80. Those mappings are to the host machine, so main can't proxy_pass to application.
There are a lot of ways to solve it, but I will suggest two:
Run main with host networking (network_mode: host) and remove the port mapping from main (since it's not needed anymore). NOTE: host networking only works on Linux machines (not Windows).
Run both nginx servers on the same docker network; then main can proxy_pass to application over that network, where each container's name is a resolvable hostname (e.g. proxy_pass http://application_container_name:80;).
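A minimal compose sketch of the second option; the service and network names here are placeholders:

services:
  main:
    image: nginx
    ports:
      - "80:80"
    networks:
      - proxy-net
  application:
    image: nginx
    networks:
      - proxy-net
networks:
  proxy-net:

With both services attached to proxy-net, Docker's embedded DNS resolves the service name, so main's config can simply use proxy_pass http://application:80;.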
I am looking for a way to get one of my docker containers to restart with a delay (after a localhost restart).
What I currently have is:
Computer which runs docker desktop.
Docker has two containers: Webserver and MySQL (which serves data to the Webserver).
Both of the containers have the --restart=always option, which allows them to restart if I restart the computer.
My issue: after a computer restart, Webserver does not seem to work properly, unless I specifically manually restart it.
My guess is that I need to give MySQL some time to boot up before I start the Webserver.
I was thinking of maybe setting up a bash script or looking into Compose (https://docs.docker.com/compose/startup-order/), but since I am quite new to this I wanted to double-check whether I've missed something and whether there is a more elegant way to approach this.
You should use compose and specify that your webserver depends_on MySQL so that your webserver container starts after the DB is up.
You should ideally make your webserver resilient to unavailability of its dependencies.
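A minimal compose sketch, assuming a MySQL healthcheck so that depends_on waits until the database actually answers rather than merely starts. Images and credentials are placeholders, and condition: service_healthy requires the Compose v2.x file format or the current Compose specification:

services:
  mysql:
    image: mysql:8
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1"]
      interval: 5s
      timeout: 3s
      retries: 10
  webserver:
    image: my-webserver:latest   # placeholder image
    restart: always
    depends_on:
      mysql:
        condition: service_healthy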
I am running 2 services in AWS ECS Fargate. One is nginx containers running behind an application load balancer. The other is a node.js application. The node application runs with service discovery, and the nginx containers proxy to the "service discovery endpoint" of the node application containers.
My issue is:
After scaling up the node application containers from 1 to 2, nginx is unable to send requests to the newly spawned container. It only sends requests to the old container. After a restart/redeploy of the nginx containers, it is able to send requests to the new containers.
I tried a DNS TTL of "0" for the service discovery endpoint, but I'm facing the same issue.
Nginx does not re-resolve DNS at runtime if your server is specified as part of an upstream group, or in certain other situations; see this SF post for more details. This means that Nginx never becomes aware of new containers being registered for service discovery.
You haven't posted your Nginx config so it's hard to say what you can do there. For proxy_pass directives, some people suggest using variables to force runtime resolution.
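A sketch of that variable trick, with a placeholder service discovery name and port. 169.254.169.253 is the AWS-provided DNS endpoint; your VPC resolver address may differ:

server {
    listen 80;

    # Re-resolve the service discovery name instead of caching it forever
    resolver 169.254.169.253 valid=10s;

    location / {
        # Assigning the upstream to a variable forces per-request resolution
        set $backend "http://nodeapp.local:3000";
        proxy_pass $backend;
    }
}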
Another idea might be to expose an HTTP endpoint from the Nginx container that listens for connections and reloads the Nginx config. This endpoint can then be triggered by a Lambda when new containers are registered (the Lambda is in turn triggered by CloudWatch events). Disclaimer: I haven't tried this in practice, but it might work.
I have a docker-compose setup, where an nginx container is being used as a reverse-proxy and load balancer for the rest of the containers that make up my application.
I can spin up the application using docker-compose up -d and everything works great. Then, I can scale up one of my services using docker-compose up -d --scale auth=3, and everything continues to work fine.
The only issue is that nginx is not yet aware of the two new instances, so I need to manually reload the nginx process inside the running container using docker exec revproxy nginx -s reload, "revproxy" being the name of the nginx container.
That's fine and dandy, I don't mind running an extra command when I decide to scale out one of my services. The real issue though is when there is a container failure somewhere... nginx needs to know as soon as this happens to stop sending traffic to the failed instance until the Docker engine is able to replace it with a healthy one.
With all that said, essentially I would like to accomplish what they are doing in the Traefik quickstart tutorial, except I would like to stick with nginx as my reverse-proxy.
While I personally think Traefik would be a real time saver in your case, there is another project which does what you want with nginx: jwilder/nginx-proxy.
It works by listening to docker engine events and when containers are added or removed, it updates a nginx config using a template.
You could either use the jwilder/nginx-proxy docker image as it is, or make your own flavor using the jwilder/docker-gen project, which is the part that produces a file from a template and docker engine events.
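A minimal sketch of running jwilder/nginx-proxy from compose; the backend image and hostname are placeholders. Backend containers opt in by setting the VIRTUAL_HOST environment variable:

services:
  revproxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  auth:
    image: my-auth-service:latest   # placeholder image
    environment:
      - VIRTUAL_HOST=auth.example.local

With this in place, docker-compose up -d --scale auth=3 is picked up automatically: the proxy regenerates its config and reloads itself on container start/stop events.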
But again, I would recommend Traefik, for the time and trouble saved and for all the features that come with it (different load balancing strategies, healthchecks, circuit breakers, automatic SSL certificate setup with ACME/Let's Encrypt, ...).
You just need to write a service discovery script that checks for an updated list of containers every X interval and updates the nginx config accordingly.
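A rough sketch of such a script, reusing the container names from the question (revproxy, auth); the upstream file path and poll interval are arbitrary choices for illustration:

#!/bin/sh
# Poll Docker, regenerate the upstream list for the "auth" service,
# and reload nginx only when the list actually changed.
UPSTREAM_FILE=./auth-upstream.conf   # assumed to be mounted into revproxy
while true; do
  IPS=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' $(docker ps -q -f name=auth))
  NEW="upstream auth { $(for ip in $IPS; do printf 'server %s:80; ' "$ip"; done) }"
  if [ "$NEW" != "$(cat "$UPSTREAM_FILE" 2>/dev/null)" ]; then
    echo "$NEW" > "$UPSTREAM_FILE"
    docker exec revproxy nginx -s reload
  fi
  sleep 10
done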
Scenario:
There is a container running image version 1.0, with container port 8080 published on localhost port 80. A new version of the image is available, and those versions need to be switched. No orchestration tool is running (Kubernetes, OpenShift etc...).
Is it possible to start a container with version 1.1 and make it run without a problem?
Please keep in mind that I want to keep it simple: no replication, etc.
Just a docker container with its port bound to localhost.
Questions:
1. Is it possible to switch the exposed port between containers without downtime?
2. If not, is there any mechanism implemented in docker (free edition) to do such a switch?
Without downtime, you'd need a second replica of the service up and running, and a proxy in front of that service that listens for user requests and routes from one to the other. Both Swarm Mode and Kubernetes provide this capability with similar tools; the exposed port is indirectly connected to the app via either an application reverse proxy, or some iptables rules and ipvs entries in the kernel.
Out of the box, recent versions of docker include support for Swarm Mode with nothing additional to install. You can run a simple docker swarm init to start a single node swarm cluster in less than a second. And then instead of docker-compose up you switch to docker stack deploy -c docker-compose.yml $stack_name to manage your projects with almost the same compose file. For swarm mode, you'll want to be on version 3 of the compose file syntax.
For a v3-syntax compose file in swarm mode that has no outage on an update, you'll want healthchecks defined in your image to monitor the application and report back when it's ready to receive requests. Then you'll want a deploy section of the compose file to either have multiple replicas for HA, or at least configure a single replica with a "start-first" policy to ensure the new service is up before stopping the old one. See the compose docs for settings to adjust: https://docs.docker.com/compose/compose-file/#update_config
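A sketch of those settings in a v3 compose file; the image, port, and health endpoint are placeholders:

version: "3.8"
services:
  web:
    image: myapp:1.1   # placeholder image
    ports:
      - "80:8080"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 5s
      timeout: 3s
      retries: 3
    deploy:
      replicas: 1
      update_config:
        order: start-first
        failure_action: rollback

With order: start-first, docker stack deploy starts the new task, waits for its healthcheck to pass, and only then stops the old one.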
For an application-based reverse proxy in docker, I really do like traefik, though more because it allows me to run multiple http-based container services with a single open port. It lets me map requests to the right container based on the hostname/path/http headers, while at the same time offering features to migrate between versions by weighting which backend to use, so you can do more than simple round-robin load balancing during an upgrade.
There is no mechanism native to Docker that would allow you to replace one container with another with no interruption. On the other hand, the duration of the interruption can probably be measured in milliseconds; whether or not this is really an issue for you depends entirely on your application.
You can get the behavior you want by introducing a dynamic reverse proxy such as Traefik into your configuration. The proxy binds to host ports and handles requests from remote systems, then distributes those requests to one or more backend containers.
You can create and remove backend containers as you please, and as long as at least one is running your application will be available. For your specific use case, this means that you can start the new version of your application first, then retire the old one, all without any interruption in service.
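As a sketch, a Traefik v2 setup along those lines could look like this; the host rule and application port are placeholders:

services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  app:
    image: myapp:1.1   # placeholder image
    labels:
      - traefik.http.routers.app.rule=Host(`app.localhost`)
      - traefik.http.services.app.loadbalancer.server.port=8080

To roll over, start the new version's container with the same labels and then stop the old one; Traefik notices containers coming and going and reroutes traffic automatically.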