I am using Nginx as a reverse proxy, together with Docker. I have two Docker containers:
319f103c82e5 web_client_web_client "nginx -g 'daemon of…" 6 minutes ago Up 5 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp web_client
7636ddaeae99 admin_web_admin "nginx -g 'daemon of…" 2 hours ago Up 2 hours 0.0.0.0:6500->80/tcp, 0.0.0.0:7000->443/tcp web_admin
These are my two containers. When I browse to http://website.com, the request goes to the web_client_web_client container. When I browse to http://website.com:6500, it goes to the admin_web_admin container. That is the current flow.
What I want is for my admin users not to have to type http://website.com:6500 to reach the admin page; I would prefer they type http://website.com/admin. So I decided to use proxy_pass: when http://website.com/admin is accessed, it should proxy_pass to https://website.com:7000.
So I am posting the Nginx config for web_client_web_client, since it is the one that handles requests on ports 80 and 443.
Here it is:
server {
    listen 80 default_server;
    server_name website.com;

    location / {
        rewrite ^ https://$host$request_uri? permanent;
    }

    location /admin {
        proxy_pass https://website.com:7000/;
    }

    # I also tried
    #location /admin/ {
    #    proxy_pass https://website.com:7000/;
    #}

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}
server {
    listen 443 ssl;
    server_name website.com;

    gzip on;
    gzip_min_length 1000;
    gzip_types text/plain text/xml application/javascript text/css;

    ssl_certificate /etc/letsencrypt/live/website.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/website.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    root /usr/share/nginx/html;

    location / {
        add_header Cache-Control "no-store";
        try_files $uri $uri/index.html /index.html;
    }

    location ~ \.(?!html) {
        add_header Cache-Control "public, max-age=2678400";
        try_files $uri =404;
    }
}
Now what happens is that the static files (the CSS and JS files) are not loaded. Inspecting in Chrome, the request is made to https://website.com/static/css/app.597efb9d44f82f809fff1f026af2b728.css instead of https://website.com:7000/static/css/app.597efb9d44f82f809fff1f026af2b728.css, so it returns 404 Not Found. I am not sure why I cannot get such a simple thing working.
Your main problem is not really with nginx but with how the two applications are set up. I don't have your code, but this is what I can infer from your post:
In your pages you load the static content using absolute paths: /static/css/...
So even when you call your pages with /admin in front, they will still try to load the static content from /static/.
One solution is to use relative paths for your static content. Depending on how complex your application is, this might require some work: you need to change the paths to static files to something like "./static/css/..." and make sure your pages still work. Then your nginx setup will work, because the admin pages will try to load '/admin/static/...'.
Another solution is to rename the 'static' folder in the admin app to something else and then proxy_pass that new path as well in your nginx config.
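For illustration, a sketch of that second approach in the web_client config. The /admin-static/ name is hypothetical; it assumes you rename the admin app's static folder to match:

```nginx
# Sketch only: assumes the admin app's assets were renamed from
# /static/ to /admin-static/ inside the admin container.
location /admin/ {
    proxy_pass https://website.com:7000/;
}

location /admin-static/ {
    # Forward the admin app's asset requests to the admin container.
    proxy_pass https://website.com:7000/admin-static/;
}
```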
One last thing: your post mentions two ports, 6500 and 7000. I am assuming that is a mistake; can you correct it? Or did I misunderstand?
Related
I have a docker image that serves an Angular app via an nginx webserver. The image is to be deployed into a kubernetes cluster, exposed behind an nginx-ingress resource, so all my DNS config is handled by ingress.
A requirement appeared where I'm supposed to keep some legacy configuration, mainly a URI of the form /some/path/to/stuff/{a large code}/something/{another code}. I have the regex settled and everything works on that part.
It was requested to perform a redirect from www.example.com/some/path/to/stuff/{a large code}/something/{another code} to www.example.com/path/{a large code}/something/{another code}.
The constraint being is that the only modification I can do is modifying the nginx.conf file that's injected in the Dockerfile.
I tried location regex with proxy_pass, rewrite and return 301.
From my tests it seems impossible to do while being DNS-agnostic.
TL;DR: can redirects be performed on webservers while being DNS-agnostic?
EDIT: This is the Dockerfile:
FROM node:12.16.1-alpine AS builder
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build-prod
FROM nginx:1.15.8-alpine
RUN rm -rf /usr/share/nginx/html/*
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY --from=builder /usr/src/app/dist/dish-ui/ /usr/share/nginx/html
This is the base nginx.conf file:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;
        index index.html index.htm;
        include /etc/nginx/mime.types;

        gzip on;
        gzip_min_length 1000;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
What I tried:
added rewrite rules in location blocks:
location ~* ".*([a-z0-9]{64})/something/([a-z0-9]{64})$" {
    rewrite ".*([a-z0-9]{64})/something/([a-z0-9]{64})$" /new/$1/$2/path;
}
The above configuration caused a 404 error because the new path couldn't be found in the root folder.
added rewrite rule inside server block:
rewrite ".*([a-z0-9]{64})/something/([a-z0-9]{64})$" /new/$1/$2/path;
This would only match once; subsequent requests wouldn't get rewritten. It also didn't redirect properly.
added proxy pass to localhost:
location ~* ".*([a-z0-9]{64})/something/([a-z0-9]{64})$" {
    proxy_pass http://127.0.0.1/new/$1/$2/path;
}
This didn't work either. My index.html would always be loaded; it seemed that the rule was ignored completely. I also placed it BEFORE location /.
I settled on hardcoding the correct URL and using a 301 return.
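That said, a host-agnostic redirect should be possible: a return 301 whose target begins with / puts only the path into the Location header, and the browser resolves it against whichever domain served the page. A sketch, assuming the path shapes described in the question:

```nginx
# Sketch: the Location header carries no scheme or host, so this works
# under any DNS name the ingress routes to this pod.
location ~ "([a-z0-9]{64})/something/([a-z0-9]{64})$" {
    return 301 /path/$1/something/$2;
}
```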
I run nginx in a Docker container on port 4000. It serves a static website with the following structure:
/usr/share/nginx/html/
    index.html
    app.html
    app/
        some-other.html
The port was mapped to port 3000 on localhost. Now I am trying to use clean URLs, for example http://localhost:3000/app. However, when I try to access this URL it redirects me to http://localhost:4000/app/ (note the trailing /) with status code 301 Moved Permanently.
I've tried using try_files in different locations and orders (commented out below). Nothing worked. I've also tried a rewrite, without success, always resulting in the same redirect.
site.conf
server {
    listen 4000;
    server_name localhost;
    root /usr/share/nginx/html;

    # location / {
    #     index index.html;
    #     try_files $uri $uri.html $uri/ =404;
    # }

    location / {
        index index.html;
        try_files $uri @htmlext $uri/;
    }

    location ~ \.html$ {
        try_files $uri =404;
    }

    location @htmlext {
        rewrite ^(.*)$ $1.html last;
    }

    error_page 404 /404;
}
I'd expect nginx to return the content of app.html without touching the URL in any form. Can't I do this without adding a location /app too?
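One reading of the symptom (an assumption, not stated in the question): $uri/ in try_files matches the app/ directory and triggers nginx's internal trailing-slash redirect, and port_in_redirect (on by default) stamps the container's listen port into the Location header. A sketch of a config that avoids both:

```nginx
server {
    listen 4000;
    server_name localhost;
    root /usr/share/nginx/html;

    # Emit path-only redirects, so the internal port 4000 can never leak
    # into Location headers seen through the 3000->4000 port mapping.
    absolute_redirect off;

    location / {
        index index.html;
        # Check the .html variant before the directory, so /app serves
        # app.html instead of redirecting to /app/.
        try_files $uri $uri.html $uri/ =404;
    }
}
```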
I have an online shop that takes credit card payments with 3D Secure.
When 3D Secure navigates back to my site using the URL example.com/confirmPage/token, I get a 405 Not Allowed from Nginx.
If I visit the page directly from my browser there is no problem, and when I refresh the exact same page that showed the 405 error, it loads perfectly fine.
It seems to be related to the programmatic redirection back to my site from 3D Secure.
Details:
Site is hosted in an AWS ECS cluster, which redirects to https so Nginx doesn't have to.
Site runs in a Docker container with Nginx
My Nginx config for the site looks like this:
events {
}

http {
    include /etc/nginx/mime.types;

    server {
        listen 443 ssl default_server;
        listen [::]:443 ssl default_server;
        server_name example.com *.example.com;

        access_log /var/log/example/access/example.access.log;
        error_log /var/log/example/error/example.error.log;

        ssl_certificate /etc/ssl/certs/example.com.crt;
        ssl_certificate_key /etc/ssl/private/example.com.pem;

        root /usr/share/nginx/html;
        index index.html;

        location / {
            try_files $uri $uri/ /index.html?/$request_uri;
        }
    }
}
This is copied over using a Dockerfile.
Any help would be greatly appreciated.
My teammates managed to find the fix for this. nginx returns 405 when a non-GET request (here, the POST coming back from 3D Secure) hits a static file; mapping the 405 back onto a 200 lets the page be served anyway:
location / {
    error_page 405 = 200 $uri;
    try_files $uri $uri/ /index.html?/$request_uri;
}
405 (Not Allowed) on POST request - Esa Jokinen
I have an nginx pod deployed in my Kubernetes cluster to serve static files. In order to set a specific header in different environments, I have followed the instructions in the official nginx Docker image docs, which use envsubst to generate the config file from a template before running nginx.
This is my nginx template (nginx.conf.template):
events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;

    server {
        listen 443 ssl;
        ssl_certificate /etc/nginx/ssl/nginx.crt;
        ssl_certificate_key /etc/nginx/ssl/nginx.key;
        root /usr/share/nginx/html;

        #charset koi8-r;
        #access_log /var/log/nginx/log/host.access.log main;

        location ~ \.css {
            add_header Content-Type text/css;
        }

        location ~ \.js {
            add_header Content-Type application/x-javascript;
        }

        location / {
            add_header x-myapp-env $MYAPP_ENV;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }

    server {
        listen 80;
        server_name _;
        return 301 https://$host$request_uri;
    }
}
I use the default command override feature of Kubernetes to initially generate the nginx conf file before starting nginx. This is the relevant part of the config:
command: ["/bin/sh"]
args: ["-c", "envsubst < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && nginx -g 'daemon off;'" ]
Kubernetes successfully deploys the pod; however, when I make a request I get an ERR_TOO_MANY_REDIRECTS error in my browser.
Strangely, when I deploy the container without running the command override using an nginx.conf almost identical to the above (but without the add_header directive) it works fine.
(All SSL certs and files to be served are happily copied onto the container at build time so there should be no issue there)
Any help appreciated.
I am pretty sure envsubst is biting you by turning try_files $uri $uri/ /index.html; into try_files / /index.html; and return 301 https://$host$request_uri; into return 301 https://;. This results in a redirect loop.
I suggest you run envsubst '$MYAPP_ENV' <template >nginx.conf instead. That will replace only that single variable and not the unintended ones. (Note the quoting around the variable in the sample command!) If you later need more variables, you can specify them all like envsubst '$VAR1$VAR2$VAR3'.
If you want to replace all environment variables you can use this snippet:
envsubst `declare -x | sed 's/^declare -x \([^=]*\)=.*/$\1/' | tr -d '\n'` <template >nginx.conf
Also, while it's not asked in the question you can save yourself some trouble by using ... && exec nginx -g 'daemon off;'. The exec will replace the running shell (pid 1) with the nginx process instead of forking it. This also means that signals will be received by nginx, etc.
I'm trying to add a WordPress blog to a site that was built in Ruby on Rails. I just need it to live in a subdirectory. I made a folder in the public directory and put the WordPress files in there, and now I'm getting a routing error. I'm really not that familiar with Rails. Can someone help me figure out a way to do this?
You can get PHP and Rails working in the same project if you have access to the server configuration. I was able to get things working on a test VPS in just a few minutes. I didn't test with WordPress, just a simple phpinfo() call, but I don't see any reason why it would fail.
My install uses NGINX for the web server, Unicorn for Rails, and spawn-fcgi and php-cgi for the PHP processing.
I already had a rails app working so I just added PHP to that. The rails app uses NGINX to proxy requests to Unicorn, so it was already serving the public directory as static. I will post my virtual host file below so you can see how it was done.
This was all done on an ArchLinux VPS, but other distros should be similar.
My virtual host file:
upstream unicorn {
    server unix:/tmp/unicorn.jrosw.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    server_name example.com www.example.com;
    root /home/example/app/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    try_files $uri/index.html $uri @unicorn;

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unicorn;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;

    location ~ \.php$ {
        try_files $uri =404;
        include /etc/nginx/conf/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /home/example/app/current/public$fastcgi_script_name;
    }
}
And then a small script to bring up php-cgi:
#!/bin/sh
# You may want to just set this to run as your app user
# if you upload files to the php app, just to avoid
# permissions problems
if [ `grep -c "nginx" /etc/passwd` = "1" ]; then
    FASTCGI_USER=nginx
elif [ `grep -c "www-data" /etc/passwd` = "1" ]; then
    FASTCGI_USER=www-data
elif [ `grep -c "http" /etc/passwd` = "1" ]; then
    FASTCGI_USER=http
else
    # Set the FASTCGI_USER variable below to the user that
    # you want to run the php-fastcgi processes as
    FASTCGI_USER=
fi

# Change 3 to the number of cgi instances you want.
/usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -C 3 -u $FASTCGI_USER -f /usr/bin/php-cgi
The only problem I had was getting the fastcgi_index option to work, so you'd probably need to look into nginx's URL rewriting capabilities to get WordPress's permalink functionality working.
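For reference, the usual nginx recipe for WordPress permalinks is to fall back to WordPress's front controller instead of returning 404. A sketch, assuming the blog lives under public/blog/ and PHP requests are handled by a fastcgi block like the one above:

```nginx
# Sketch: route pretty permalinks under /blog/ to WordPress's index.php,
# letting real files and directories win first.
location /blog/ {
    try_files $uri $uri/ /blog/index.php?$args;
}
```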
I know this method isn't ideal, but hopefully it gets you on the right track.