Building a dockerized reverse proxy for an environment I'm working on - docker

I currently have a small cluster of WordPress services implemented with Docker, which are accessible through an nginx server using vhosts; the services are also reachable over the internet via duckdns.org. The nginx server is not in Docker but is installed directly on the machine, and I would like to know two things.
Is it advisable to move the server from the host into Docker and keep the whole architecture "dockerized"?
How can I implement this with the nginx server in Docker and get the same result?
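As a sketch of what the fully dockerized version could look like, assuming the WordPress containers and nginx share one compose network and the existing vhost files are mounted into the nginx container (the service names and paths here are placeholders, not from the original setup):

```yaml
# docker-compose.yml (hypothetical names)
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # existing vhost files; change proxy_pass targets to service names
      - ./vhosts:/etc/nginx/conf.d:ro
    depends_on:
      - wordpress1
  wordpress1:
    image: wordpress:latest
    # no published ports needed; nginx reaches it over the compose network
```

Inside a user-defined network, Docker's embedded DNS resolves service names, so a vhost that previously used `proxy_pass http://127.0.0.1:8081;` would instead use `proxy_pass http://wordpress1:80;`.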

Related

Setting up nginx and ssl in docker (Asp .Net API in VPS)

I want to deploy an API service (ASP.NET) to a VPS.
What I have at the moment:
A VPS running Ubuntu 22.10.
An API service container with an open HTTP port.
A MongoDB container.
A bridge network for communication between these containers.
A volume for storing the MongoDB collections.
A DNS subdomain configured to resolve to the VPS IP.
What I want:
To add nginx.
To add SSL (Let's Encrypt with certbot).
I don't want to use Docker Compose because I want to understand how things work.
I'm not strong on terminology, but perhaps what I want to set up is called an nginx reverse proxy.
Please tell me if I understand correctly what I need to do.
Nginx:
To run a separate nginx container.
To add the nginx configuration to a Docker volume.
To attach nginx to the bridge network (close the published ports on the API container, publish ports on the nginx container).
To set up nginx location blocks that proxy internally over the bridge network.
SSL:
To install and run certbot on the VPS machine (not in a Docker container).
To enable automatic certificate renewal.
I'm not sure where I need to run certbot: on the VPS machine or in the nginx Docker container.
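One common arrangement (a sketch, not the only option) is to run certbot on the VPS host in webroot mode and mount the resulting certificates into the nginx container read-only. The domain and webroot path below are placeholders:

```shell
# On the VPS host (assumes certbot is already installed, e.g. via snap or apt).
# Obtain a certificate using a webroot that nginx serves for ACME challenges:
sudo certbot certonly --webroot \
  -w /var/www/certbot \
  -d api.example.com   # placeholder domain

# certbot's package typically installs a systemd timer or cron job for
# renewal; verify it works without issuing a real certificate:
sudo certbot renew --dry-run
```

The nginx container then mounts `/etc/letsencrypt` (read-only) so its `ssl_certificate` directives can point at the issued files, and gets reloaded after each renewal.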
I also don't know how to configure nginx to work through the bridge.
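For the bridge part: containers on the same user-defined bridge network can reach each other by container name via Docker's embedded DNS, so the nginx server block only needs to proxy to the API container's name. A minimal sketch, assuming the API container is named `api` and listens on port 5000 (both placeholders):

```nginx
server {
    listen 80;
    server_name api.example.com;  # placeholder subdomain

    # ACME challenge location for certbot's webroot mode
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        # "api" resolves via Docker's embedded DNS on the bridge network
        proxy_pass http://api:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The nginx container would then be started with something like `docker run --network <your-bridge-name> -p 80:80 -p 443:443 ...`, with no ports published on the API container at all.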

NGINX Reverse Proxy with Docker Host Mode for Local Development

Most of what I'm finding online is about using docker-compose to create a reverse proxy for local development of dockerized applications. This question is about my local development environment.
I have a need to create an nginx reverse proxy that can route requests to applications on my local computer that are not running in docker containers (non-dockerized).
Example:
I start up a web app A (not in docker) running on http://localhost:8822
I start up another web app B (not in docker) running on https://localhost:44320
I have an already running publicly available api on https://public-url-for-api-app-a.net
I also have a public A record set up in my DNS for *.mydomain.local.com -> 127.0.0.1
I am trying to figure out how to use an nginx:mainline-alpine container in host mode to allow me to do the following:
I type http://web-app-a.mydomain.local.com -> reverse proxy to http://localhost:8822
I type http://web-app-b.mydomain.local.com -> reverse proxy to https://localhost:44320
I type http://api-app-a.mydomain.local.com -> reverse proxy to https://public-url-for-api-app-a.net
Ideally, this "solution" would run on both Windows and Mac but I am currently falling short in my attempts at this on my Windows machine.
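The vhost mapping above could be sketched roughly like this, assuming the container runs with published ports rather than host networking, in which case `host.docker.internal` stands in for the host's localhost on Docker Desktop (all directives here are a sketch, not a tested setup):

```nginx
# web-app-a: plain HTTP upstream on the host
server {
    listen 80;
    server_name web-app-a.mydomain.local.com;
    location / {
        proxy_pass http://host.docker.internal:8822;
        proxy_set_header Host $host;
    }
}

# web-app-b: HTTPS upstream with a local dev certificate
server {
    listen 80;
    server_name web-app-b.mydomain.local.com;
    location / {
        proxy_pass https://host.docker.internal:44320;
        proxy_ssl_verify off;  # local dev certs are usually self-signed
        proxy_set_header Host $host;
    }
}

# api-app-a: public HTTPS upstream
server {
    listen 80;
    server_name api-app-a.mydomain.local.com;
    location / {
        proxy_pass https://public-url-for-api-app-a.net;
        proxy_set_header Host public-url-for-api-app-a.net;
        proxy_ssl_server_name on;  # send SNI for the public host
    }
}
```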
Some stuff I've tried:
Following this tutorial, I start up my nginx docker container in "host" mode via:
docker run --rm -d --network host --name my_nginx nginx:mainline-alpine
I'm unable to get it to load on http://localhost:80. I'm wondering if I'm hitting some limitation of Docker on Windows? I receive a "This site can't be reached" error here.
Custom building my own docker image with nginx configs and exposed ports (before trying host network mode)
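One likely explanation for the "site can't be reached" result: host networking has historically been a Linux-only feature, and on Docker Desktop for Windows and Mac (at least in versions around the 4.4 release mentioned below) `--network host` does not expose container ports to the host. A sketch of the same container using published ports instead (the config path is a placeholder):

```shell
# Publish port 80 instead of relying on host networking,
# and mount the vhost configs (path is a placeholder)
docker run --rm -d \
  -p 80:80 \
  -v "$(pwd)/nginx/conf.d:/etc/nginx/conf.d:ro" \
  --name my_nginx \
  nginx:mainline-alpine
```

With published ports, the container cannot reach the host's `localhost` directly, which is why the configs would proxy to `host.docker.internal` instead.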
Other relevant information:
Docker-Desktop on Windows version: 4.4.4 (73704)
Nginx Container via nginx:mainline-alpine tag.
Web App A = Front End Vue App
Web App B = Front End .NET Framework App
Web App C = Backend .NET Framework App
At this point I've read so many posts that my brain is mush, so it could well be something obvious I'm missing. I'm beginning to think it may be better to simply run nginx.exe locally, but that's not ideal because I don't want to have to check binaries into source control for this setup to work.

Remote HTTP Endpoint to Docker Application

I have a demo application running perfectly in my local environment. However, I would like to run the same application remotely by giving it an HTTP endpoint. My goal is to test the performance of the application.
How do I give an HTTP endpoint to a multi-container docker application?
The following is the Github repository link for the demo application
https://github.com/LonareAman/BankCQRS.git
Use docker-compose and manage the containers based on what you need.
One of your containers should be a web server like nginx. Then bind a machine port to nginx, e.g. 80:80.
Then reference your other containers in the nginx config and proxy to them.
You can find a sample at https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/
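A minimal sketch of that layout (service names are placeholders; the `app` service stands in for whatever the linked repository builds):

```yaml
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"              # the public HTTP endpoint
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
  app:
    build: .                 # the demo application
    # no published ports; nginx proxies to http://app:<port> internally
```

The mounted `nginx.conf` would contain a single server block with `proxy_pass http://app:<port>;`, where `<port>` is whatever the application listens on inside its container.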

Layer 7 path based routing to Docker containers without Docker Enterprise

The Docker EE docs state that you can use their built-in load balancer to do path-based routing:
https://docs.docker.com/ee/ucp/interlock/usage/context/
I would love to use this for our local devs to have a local container cluster to develop against since a lot of our apps are using host paths to route each service.
My original solution was to add another container to the compose service that would just be an nginx proxy doing path based routing, but then I stumbled on that Docker EE functionality.
Is there anything similar to that functionality without using Docker EE or should I stick with just using an nginx reverse proxy container?
EDIT: I should clarify, in our release environments, I use an ALB with AWS. This is for local dev workstations.
The Docker EE functionality is just them wrapping automation around an Interlock container, which itself runs nginx, I think. I recommend you just use nginx locally in your compose file, or better yet, use Traefik, which is purpose-built for exactly this.
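A sketch of path-based routing with Traefik v2 labels in a compose file (the service name, image, and path are placeholders):

```yaml
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  service-a:
    image: my-service-a:latest   # placeholder image
    labels:
      - traefik.enable=true
      - traefik.http.routers.service-a.rule=PathPrefix(`/service-a`)
      - traefik.http.routers.service-a.entrypoints=web
```

Each additional service just declares its own `PathPrefix` label; Traefik watches the Docker socket and picks up routes without a config reload.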

Running nginx in a container as reverse proxy with dynamic configuration

I'm trying to set up nginx in a container as a reverse proxy for my containers (Docker Swarm) and for static sites hosted on Google Cloud Platform & Netlify.
I'm actually able to run nginx in containers, but I'm really worried about the configuration.
How will I propagate site configuration updates in nginx to all containers (adding/removing locations)?
Is attaching a disk the best option for storing logs?
Is there any fault in my architecture?
If the image isn't working, please use this link - https://s1.postimg.org/1tv4hka3zz/profitto-architecture_1.png
Hi Sanjay.
Have a look at:
https://github.com/jwilder/nginx-proxy
https://traefik.io/
The first is a modified nginx reverse proxy by J. Wilder.
The second is a newer reverse proxy created specifically for such use cases.
Both are able to listen to the Docker socket and dynamically add new containers to the reverse-proxy backend.
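For example, with jwilder's nginx-proxy the backend containers only need a VIRTUAL_HOST environment variable; the proxy watches the Docker socket and regenerates its config as containers come and go (the host name and image below are placeholders):

```shell
# Run the proxy, mounting the Docker socket read-only
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

# Any container started with VIRTUAL_HOST is picked up automatically
docker run -d -e VIRTUAL_HOST=site1.example.com my-site-image  # placeholder image
```

This sidesteps the "how do I push config updates" problem: there is no per-site config to distribute, because the proxy derives it from running containers.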
Regarding your architecture:
Why not run the reverse-proxy container inside the Swarm cluster?
For logging, have a look at Docker log drivers.
You can collect the logs of all containers with e.g. fluentd or Splunk.
