My Goal
I want to use a single NGINX docker container as a proxy.
I want it to respond to traffic on my domain, "sub.domain.com", and listen on ports 80 and 443.
When traffic comes in on /admin I want it to direct all traffic to one docker container (say... admin_container:6000).
When traffic comes in on /api I want it to direct all traffic to another docker container (say... api_container:5500).
When traffic comes in on any other path (/anything_else) I want it to direct all traffic to another docker container (say... website_container:5000).
Some Helpful Context
Real quick, let me provide some context in case it's helpful. I have a NodeJS website running in a docker container. I'd also like to have an Admin section, created with ASP.NET Core, that runs in a second docker container. I'd like both of these websites to share and make use of a single ASP.NET Core Web API project, running in a third docker container. So, one NodeJS project and two ASP.NET Core projects, which all live on a single subdomain:
sub.domain.com/
Serves the main website
sub.domain.com/admin
Serves the Admin website
sub.domain.com/api
Serves API endpoints and handles Database connectivity
What I Have So Far
So far I have the NGINX reverse proxy set up and a single docker container for the NodeJS application. Currently all traffic on :80 is redirected to :443. All traffic on :443 is directed to the NodeJS docker container, which runs privately on :5000. I'll admit I'm not great with NGINX and don't fully understand how this works.
The nginx.conf file
worker_processes 2;

events { worker_connections 1024; }

http {
    sendfile on;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    upstream docker-nodejs {
        server nodejs_prod:5000;
    }

    server {
        listen 80;
        server_name sub.domain.com;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name sub.domain.com;

        ssl_certificate /etc/nginx/ssl/combined.crt;
        ssl_certificate_key /etc/nginx/ssl/mysecretkeyfile.key;

        location / {
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Forwarded-Ssl on;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_pass http://docker-nodejs;
        }
    }
}
Quick Note: In the above code, "nodejs_prod:5000" refers to the container whose name is "nodejs_prod" and which listens on port 5000. I don't fully understand how that works, but it is working: somehow docker creates DNS entries in the private network, one for each container name. I'm actually using docker-compose.
My Actual Question: How will this nginx.conf file look when I have two more websites (each in its own docker container)? It's important that /admin and /api are sent to the correct docker containers, and not handled by the "catch-all" location. I'm imagining that I'll have a "catch-all" location which captures all traffic that DOESN'T START WITH /admin OR /api.
Thank you!
In your nginx config you want to add location rules, such as:

location /admin {
    proxy_pass http://admin_container:6000/;
}

location /api {
    proxy_pass http://api_container:5500/;
}
This proxies /admin to admin_container on port 6000, and /api to api_container on port 5500.
docker-compose creates a network between all its containers, which is why http://api_container:5500/ points to the container named api_container on port 5500. This can be used for communication between containers. You can read more about it here: https://docs.docker.com/compose/networking/
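Putting it together, the HTTPS server block from the question could end up looking roughly like the sketch below. The container names and ports (admin_container:6000, api_container:5500, website_container:5000) are the ones proposed in the question, and the proxy headers are simply reused from the existing location block, so treat this as a starting point rather than a definitive config.

server {
    listen 443 ssl;
    server_name sub.domain.com;

    ssl_certificate /etc/nginx/ssl/combined.crt;
    ssl_certificate_key /etc/nginx/ssl/mysecretkeyfile.key;

    # Longest-prefix match wins, so /admin and /api are handled here
    # and never reach the catch-all location below.
    location /admin {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://admin_container:6000;
    }

    location /api {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://api_container:5500;
    }

    # Catch-all: everything that doesn't start with /admin or /api.
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://website_container:5000;
    }
}

One detail to decide: proxy_pass http://admin_container:6000; (no URI part) forwards the request path unchanged, so the admin app sees /admin/..., while proxy_pass http://admin_container:6000/; (with a trailing slash, as in the answer above) strips the matched /admin prefix before forwarding. Pick whichever form your upstream apps expect.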
Related
I deployed an nginx:1.22.1 instance alongside a static react app server on a worker node in a docker swarm. This is docker swarm mode, not classic swarm.
The advertise address that I listed when joining the swarm is internal to the data center, I do not know if that matters because I can still access these services with the public addresses.
Both containers are pinned to the same worker node and communicate over a user-created overlay network.
I can retrieve the full bundle directly from the react app server over the public network.
I cannot retrieve the full bundle through the nginx reverse-proxy server over the public network.
When I attempt to fetch the bundle using chrome browser as the user-agent I get 2 errors:
net::ERR_INCOMPLETE_CHUNKED_ENCODING 200 (OK).
The app bundle is cut off mid JS function, as if a chunk of data was not transmitted.
Rarely, the upstream server will send html and not a js bundle. But I receive that whole response body and it is not truncated like the js bundle.
I have played with all kinds of configuration and cannot get it to work.
This is my configuration under /etc/nginx/conf.d/default.conf (most relevant part):
resolver 127.0.0.11 valid=10s;

error_log /dev/stdout info;

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;

    ssl_certificate /etc/nginx/certs/nginx.pem;
    ssl_certificate_key /etc/nginx/certs/key.pem;

    client_max_body_size 100M;
    proxy_buffers 8 1024k;
    proxy_buffer_size 1024k;
    proxy_max_temp_file_size 1024m;

    location / {
        set $reactapp reacthost;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://$reactapp:3000/;
        proxy_redirect off;
    }

    root /usr/share/nginx/html;
    error_page 500 502 503 504 /50x.html;
}
I use the variable $reactapp for service discovery after the nginx server starts. See the NGINX blog here.
Note that the nginx:1.22.1 instance runs as the nginx user after it is deployed to the stack. I only see the message below when I deploy via docker stack; if I start the container directly using the docker engine, I do not see it.
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
However, I can exec into the container as the nginx user, access /var/cache/nginx/, and create a directory.
I do not know if:
my server / location configuration is plain bad;
the NGINX server cannot write to a part of the container it needs to when the service is deployed in stack mode; or
I cannot access the server properly over the public network via the overlay network.
Prior to using docker stack I was able to use this reverse proxy.
The two containers were on the same host without swarm mode running.
The containers communicated over a bridge network.
The reverse proxy server port was published on the public interface of the server it was deployed on.
The NGINX server started after the upstream server.
Because the depends_on key is not honored in stack mode, I have to rely on DNS service discovery after the NGINX server starts up. Placing the services on an overlay network gives me more flexibility in how I do my deployments, but this has become a bit muddled. There are enough differences between the two environments that it has become difficult to get the stack to behave as I expect.
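For reference, the kind of stack file described above would look roughly like the sketch below. The service, image and network names (proxy, reacthost, my-react-app, appnet) and the node hostname are assumptions for illustration, not taken from the actual deployment.

version: "3.8"

services:
  proxy:
    image: nginx:1.22.1
    ports:
      - "80:80"
      - "443:443"
    networks:
      - appnet
    deploy:
      placement:
        constraints:
          # pin to the same worker node as the app (hypothetical hostname)
          - node.hostname == worker-1

  reacthost:
    image: my-react-app:latest   # hypothetical image name
    networks:
      - appnet
    deploy:
      placement:
        constraints:
          - node.hostname == worker-1

networks:
  appnet:
    driver: overlay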
I am working with Magento 2.4.2 (Adobe Commerce Enterprise Edition) and have a local site set up using the Magento Cloud Docker setup. I would like to change the nginx timeout setting to be long enough to let a page I'm testing run for as long as it needs to but still render the page on the browser in the frontend.
Is there a specific environment variable that I can set in my docker-compose.yml file to accomplish this? I'm not seeing anything that would make this update in the docker-environment or Dockerfile files. Do I just have to add my own custom lines to either of these files to update the timeout setting?
If you use Magento Cloud Docker for development, then no, you can't do this without overriding the docker image.
If you want to set the nginx timeout, you need to override the nginx docker image and include it in docker-compose.override.yml. Here are the steps:
Copy vendor/magento/magento-cloud-docker/images/nginx to .docker/images/nginx, i.e. like this
Edit .docker/images/nginx/1.19/etc/nginx.conf and .docker/images/nginx/1.19/etc/vhost.conf
Create docker-compose.override.yml, like this (a rough sketch is shown after these steps)
Run docker-compose up --build --force-recreate --no-deps --remove-orphans -d
Check this link for the full example.
Note: the .docker/config.env file will be overwritten when you run ./vendor/bin/ece-docker 'build:compose'
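For illustration, a docker-compose.override.yml along these lines might look like the sketch below. The service name (web) and the build context path are assumptions based on the copied image location; check the generated docker-compose.yml for the actual name of the nginx service in your setup.

version: '2.1'   # match the version used in the generated docker-compose.yml
services:
  web:
    # Build the locally copied (and edited) nginx image instead of
    # pulling the stock magento-cloud-docker nginx image.
    build:
      context: ./.docker/images/nginx/1.19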
According to Adobe Commerce support, this isn't possible on their Cloud platform, which is very unfortunate.
For a local testing environment, a quicker and hackier method than the one presented by Deki above is:
ssh into your tls docker container
edit the /etc/nginx/conf.d/default.conf file as per below:
server {
    listen 80;
    listen 443 ssl;
    server_name _;

    ssl_certificate /etc/nginx/ssl/magento.crt;
    ssl_certificate_key /etc/nginx/ssl/magento.key;

    # Add the 3 lines below
    proxy_read_timeout NEW_TIMEOUT_VALUE;
    proxy_connect_timeout NEW_TIMEOUT_VALUE;
    keepalive_timeout NEW_TIMEOUT_VALUE;

    location / {
        proxy_pass http://varnish:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}
One nginx server is exposed to the internet and redirects the traffic.
A second nginx server is only available internally and listens on port 6880 (BookStack, a wiki, is hosted via docker).
In the local network everything is unencrypted, while to the outside only HTTPS via port 443 is available.
The application (BookStack) works fine in the local network (HTTP).
When accessing the application from the outside via HTTPS, the page is displayed, but all links are http instead of https. (For example, http://.../logo.png is in the login page's source code, but https://.../logo.png would be correct.)
Where and how do I switch to https?
First server sites-enabled/bookstack (already contains the redirect to https):
server {
    listen 80;
    server_name bookstack.example.org;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name bookstack.example.org;

    ssl_certificate /etc/letsencrypt/live/<...>/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/<...>/privkey.pem;

    location / {
        try_files index index.html $uri $uri/index.html $uri.html @bookstack;
    }

    location @bookstack {
        proxy_redirect off;
        proxy_set_header X-FORWARDED_PROTO https;
        proxy_set_header Host bookstack.example.org;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://<internal ip>:6880;
    }
}
Second server sites-enabled/bookstack:
server {
listen 80;
listen [::]:80;
server_name bookstack.example.org;
index index.html index.htm index.nginx-debian.html index.php;
location / {
proxy_pass http://<local docker container ip>:6880;
proxy_redirect off;
}
}
The docker container also deploys its own nginx config, but I didn't touch that one yet.
I solved the problem. The nginx settings above should work. The issue was that the docker container was still using the wrong APP_URL. Running the artisan update script (bookstackapp.com/docs/admin/commands) did not suffice, but reinstalling the docker container did.
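For anyone hitting the same symptom: the commonly used BookStack docker images read the external URL from an APP_URL environment variable, so the relevant compose snippet might look roughly like this (the service name, image and port mapping are assumptions, not taken from the original setup):

services:
  bookstack:
    image: lscr.io/linuxserver/bookstack:latest   # assumed image; use whichever BookStack image you actually run
    environment:
      # Must be the public HTTPS URL, otherwise generated links stay on http://
      - APP_URL=https://bookstack.example.org
    ports:
      - "6880:80"   # host:container; the container-side port depends on the image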
I have different versions of my web application running in Docker containers. And nginx is running on my host machine.
Is it possible to access the desired deployed version of my web application via sub-domains such as v1.myapp.io and v2.myapp.io without reconfiguring and restarting nginx?
I also want to access future versions in the same way.
Could anyone tell me if there is any way to achieve it?
Please consider me a newbie to Docker/nginx world.
Thanks in Advance.
Yes, it can be done, but it is difficult to achieve with docker alone; kubernetes makes this much easier, since things like DNS and service mapping are provided out of the box. I will include both the docker and kubernetes approaches:
Docker approach:
A first draft will look like this: use a regex in the nginx server_name and name the docker containers with a matching pattern. Create an /etc/hosts entry for each container, like:
172.16.0.1 v1.docker.container
172.16.0.2 v2.docker.container
And the nginx server conf looks like:

server {
    listen 80;
    server_name "~^(?<ns>[a-z]+.+)\.myapp\.io";

    resolver 127.0.0.1:53 valid=30s;

    # make sure $ns.docker.container resolves to the container IP
    set $proxyserver "$ns.docker.container";

    location / {
        try_files $uri @clusterproxy;
    }

    location @clusterproxy {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto http;
        proxy_pass http://$proxyserver:80;
    }
}
Kubernetes approach:
Create a different Service and Deployment for each version in a namespace. Let's say the namespace is 'app-namespace'. The service names are self-explanatory:
APP version v1: v1-app-service
APP version v2: v2-app-service
To make nginx more flexible, you can include the namespace in the service name used for $proxyserver; a rough sketch of one such Service and Deployment is shown below.
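For illustration only, a minimal Service and Deployment for one version might look like this (the labels, image tag and container port are assumptions, not taken from the original answer):

apiVersion: v1
kind: Service
metadata:
  name: v1-app-service
  namespace: app-namespace
spec:
  selector:
    app: myapp
    version: v1
  ports:
    - port: 80          # the port nginx proxies to
      targetPort: 8080  # hypothetical container port
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: v1-app
  namespace: app-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
        - name: myapp
          image: myapp.io:v1   # hypothetical image tag
          ports:
            - containerPort: 8080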
Nginx rule:
server {
    listen 80;
    server_name "~^(?<version>[a-z]+.+)\.myapp\.io";

    # you can replace this with the kubernetes DNS server IP
    resolver 127.0.0.1:53 valid=30s;

    # make sure $proxyserver resolves to the service IP
    set $proxyserver "$version.app-namespace.svc.kubernetes";

    location / {
        try_files $uri @clusterproxy;
    }

    location @clusterproxy {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto http;
        proxy_pass http://$proxyserver:80;
    }
}
I found another solution to this problem after digging a lot. It can easily be done with Automated Nginx Reverse Proxy for Docker (the nginx-proxy image).
Once the nginx-proxy docker container was up and running on my system, I spun up two docker containers of my web app (different versions) with the following commands:
docker run -e VIRTUAL_HOST=v1.myapp.io --name versionOne -d myapp.io:v1
docker run -e VIRTUAL_HOST=v2.myapp.io --name versionTwo -d myapp.io:v2
and it worked out for me.
Additional notes:
1. I am using dnsmasq for handling all dns queries
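For context, the nginx-proxy container that reads those VIRTUAL_HOST variables is typically started along the lines of the standard command from the project's README (shown here as a sketch; adjust ports and image tag as needed):

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro nginxproxy/nginx-proxy

The proxy watches the docker socket for containers that define VIRTUAL_HOST and regenerates the matching nginx config automatically, which is why no manual nginx reconfiguration or restart is needed for future versions.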
I set up the Let's Encrypt certificate directly on an AWS EC2 Ubuntu instance running Nginx, plus a docker server using port 9998. The domain is set up on Route 53. HTTP is redirected to HTTPS.
So https://example.com is working fine, but https://example.com:9998 gets ERR_SSL_PROTOCOL_ERROR. If I use the IP address, like http://10.10.10.10:9997, it works, and I have checked that the server using port 9998 is okay.
The snapshot of the server on docker is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
999111000 img-server "/bin/sh -c 'java -j…" 21 hours ago Up 21 hours 0.0.0.0:9998->9998/tcp hellowworld
It seems something is missing between Nginx and the server using port 9998. How can I fix it?
Where have you configured the SSL certificate? Only in Nginx?
The reason you cannot visit https://example.com:9998 using the SSL protocol is that that port provides an HTTP service rather than HTTPS.
I suggest not publishing port 9998 of hellowworld and instead proxying all the traffic with nginx (if Nginx is also started with docker and in the same network).
Configure HTTPS in Nginx and let the origin server provide plain HTTP.
This is a sample configuration https://github.com/newnius/scripts/blob/master/nginx/config/conf.d/https.conf
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443;
    server_name example.com;

    access_log logs/example.com/access.log main;
    error_log /var/log/nginx/debug.log debug;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://apache:80;
        proxy_set_header Host $host;
        proxy_set_header CLIENT-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~ /.well-known {
        allow all;
        proxy_pass http://apache:80;
    }

    # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
    location ~ /\. {
        deny all;
    }
}
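To tie this back to the question's setup, a minimal docker-compose sketch of that idea, publishing only Nginx and leaving the app reachable on the internal network only, might look like this (the service names, volumes and shared network are assumptions based on the question, and nginx would then proxy to http://hellowworld:9998 instead of http://apache:80):

services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./conf.d:/etc/nginx/conf.d:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
    networks:
      - webnet

  hellowworld:
    image: img-server
    # No "ports:" section: port 9998 stays internal to the docker network,
    # so it is only reachable through the nginx proxy.
    networks:
      - webnet

networks:
  webnet: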