Nginx merge_slashes redirect - url

I am using nginx with my Java application, and my problem is that nginx merges the slashes in the URL, so I am unable to redirect my website to the correct version.
For instance:
http://goout.cz/cs/koncerty///praha/
is merged to
http://goout.cz/cs/koncerty/praha/
and then I am unable to recognize the malformed URL and perform the redirection.
I tried to set
merge_slashes off;
and then:
rewrite (.*)//(.*) $1/$2 permanent;
But this has no effect and the // stays in the URL.
How can I achieve this?

Try this (untested):
merge_slashes off;
rewrite (.*)//+(.*) $1/$2 permanent;
It might cause multiple redirects if there are multiple groups of slashes though.
Like this:
http://goout.cz/////cs/koncerty///praha/
Might go to:
http://goout.cz/cs/koncerty///praha/
Then finally:
http://goout.cz/cs/koncerty/praha/

This works well, but for my setup adding port_in_redirect off; was also necessary.
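For reference, here is a minimal sketch of where the directives from this thread could live; the server name is taken from the question and the whole block is untested, like the answer above:
server {
    listen 80;
    server_name goout.cz;

    # keep duplicate slashes in the URI so the rewrite below can see them
    merge_slashes off;

    # collapse runs of slashes with a 301 (may redirect more than once
    # when several separate groups of slashes are present)
    rewrite (.*)//+(.*) $1/$2 permanent;

    # omit the port in redirects issued by nginx, as noted in the comment above
    port_in_redirect off;
}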

We encountered the same problem: by mistake we added two slashes to the URL, and nginx returned a 301 for the URL with the double slash.
The solution for me was:
Add merge_slashes off; to the nginx.conf file, and in the location block add rewrite (.*)//+(.*) $1/$2 break;
My location block looks like this:
location / {
    rewrite (.*)//+(.*) $1/$2 break;
    proxy_pass http://http_urltest;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffers 4096 32k;
    proxy_buffer_size 32k;
    proxy_busy_buffers_size 32k;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
After adding these two lines, when I access my URL with two slashes, it returns the result with a single slash.
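For completeness, merge_slashes itself is only valid at the http or server level, so in nginx.conf the two pieces fit together roughly like this (a sketch only; the upstream name is the one from the config above):
http {
    # keep duplicate slashes in $uri so the rewrite in the location block can match them
    merge_slashes off;

    server {
        ........
        location / {
            rewrite (.*)//+(.*) $1/$2 break;
            proxy_pass http://http_urltest;
        }
    }
}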

Try this (it works with both a plain nginx and an nginx-with-OpenResty configuration); you can improve the site's SEO by doing these 301 redirects.
Please keep this code inside the server block of your nginx site conf file:
server {
    ........
    ........
    set $test_uri $scheme://$host$request_uri;
    if ($test_uri != $scheme://$host$uri$is_args$args) {
        rewrite ^ $scheme://$host$uri$is_args$args? permanent;
    }
    location / {
        ................
        ................
    }
}
It is working well for me and I am using this code now.
Example:
request URL: http://www.test.com//test///category/item//value/
result URL: http://www.test.com/test/category/item/value/
It is a 301 redirect, so the SEO of the site will not go down.
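If it helps to see why the comparison catches duplicate slashes: $request_uri is the raw request line, while $uri is the normalized URI (with merge_slashes on, the default, the slashes are already collapsed there), so the two sides differ exactly when the client sent a malformed path. Roughly, for the example above:
# request: GET //test///category/item//value/ HTTP/1.1   (Host: www.test.com)
# $test_uri                        = http://www.test.com//test///category/item//value/
# $scheme://$host$uri$is_args$args = http://www.test.com/test/category/item/value/
# the values differ, so the rewrite issues a 301 to the clean URL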

Related

.Net core POST API in NGINX reverse proxy throws error 404

This is my first time posting a question. Kindly let me know if I am missing something that needs to be shared.
I am trying to POST some data into a database through my C# app and API (built separately), but it throws a 404 error only for the POST API. All other pages work fine, and so does the GET request. The app and API have been deployed on a Linux machine behind an NGINX reverse proxy server. Both of them work over HTTP. The feature works on localhost, but not for the IP-based URL.
Here is the content of the nginx config file for the app; I do not know what is missing in it. Please check the "/" as well, wherever it is needed. While investigating, I found that the POST request gets redirected by NGINX and turned into a GET; I don't know if this is helpful, but I felt it was worth sharing.
server {
    listen myIP:6002;
    server_name attendancepp;
    root /home/user/net-core/Publish/AttendanceModule/AttendanceApp;

    location /AttendanceApp/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://myIP:6002/;
        proxy_set_header Accept-Encoding "";
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
The app works on the URL http://myIP:6002/attendance/allPages. All the pages are accessible without any issue. Just the POST part is not working.
Thank you in advance.
Working fine after commenting out
try_files $uri $uri/ =404
in the default site config:
cd /etc/nginx/sites-available/
sudo nano default
Edit the file, then Ctrl+X and 'y' for yes to save.
sudo systemctl restart nginx
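So the relevant location in the default site ends up looking roughly like this (a sketch only; the proxy_pass shown is assumed to be whatever the site already forwards to, e.g. the app from the question):
# /etc/nginx/sites-available/default
location / {
    # try_files $uri $uri/ =404;   # commented out so nginx stops answering 404
    #                              # before the request ever reaches the app
    proxy_pass http://myIP:6002/;  # existing proxy/app directives stay as they are
}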

Nginx config changes for sticky session in round robin

A Rails application (4.2) is hosted behind nginx and served at localhost:5478. The ip_hash in the code snippet below keeps each client's requests going to the same server and works as expected.
upstream rails {
ip_hash;
To share the load, ip_hash was commented out. Now user login starts failing, since the session cookie needs to keep reaching the same server, although the same setup works for the Rails 3 application. This is related to sticky sessions, but I am unable to work out the exact way of handling it.
nginx.conf
upstream mongrel {
    server 127.0.0.1:5469;
}
upstream rails {
    #ip_hash;
    server 127.0.0.1:5479;
    server 127.0.0.1:5480;
    server 127.0.0.1:5481;
    server 127.0.0.1:5482;
}
location / {
    # Setup redirection headers
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    # Pass the request thru
    proxy_pass http://mongrel;
}
}
server {
    listen 5478 default;
    server_name _;
    root "../games/public";
    location ~ ^/assets/ {
        root "../d2/public";
        expires 1y;
        add_header Cache-Control public;
        add_header ETag "";
        break;
    }
I tried using consistent_hash $scheme $request_uri;
as suggested, but consistent_hash is not recognized as a directive and fails. Let me know if any config change is required for nginx. I also found that the same nginx config with ip_hash commented out works for a Rails 3 application; I am not sure if this is related.
There are two ways to do this, either:
you share your sessions between your backends, or
you pass a cookie to allow nginx to stick a client to an upstream server.
Let me know if any config change is required for nginx.
If you cannot modify the application, e.g. to let multiple application instances use a common session store, you can try the sticky directive of nginx (>= 1.5.7).
Using your example, it should be something like
http {
    ...
    upstream rails {
        server 127.0.0.1:5479;
        server 127.0.0.1:5480;
        server 127.0.0.1:5481;
        server 127.0.0.1:5482;
        sticky cookie rails_sticky expires=1d domain=.rails.local path=/ httponly secure;
    }
    ...
    server {
        listen 5478;
        server_name rails.local;
        root "../games/public";
        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_pass http://rails;
        }
        ...
    }
    ...
}
You may need to adjust your configuration according to your environment.
Using sticky, nginx will check whether a client is already bound and, if not, bind it using a sticky cookie called rails_sticky. Binding a client still uses whatever balancing method you set in the upstream block, weighted round-robin by default.
Once a client has been bound to a server, any subsequent requests will be forwarded to the designated upstream server. In the event that the designated upstream server cannot be used, nginx will re-bind the client to another server.
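If the sticky directive is not available in your build (it ships with the commercial nginx subscription), a rough alternative on stock nginx 1.7.2+ is the upstream hash directive keyed on something stable per client, for example a session cookie. The cookie name below is only an assumption; use whatever your Rails app actually sets:
upstream rails {
    # send each client to the same backend based on its session cookie;
    # clients that do not have the cookie yet will all hash to the same server
    hash $cookie_rails_session consistent;
    server 127.0.0.1:5479;
    server 127.0.0.1:5480;
    server 127.0.0.1:5481;
    server 127.0.0.1:5482;
}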

How to point many paths to proxy server in nginx

I'm trying to set up an nginx location that will handle various paths and proxy them to my webapp.
Here is my conf:
server {
    listen 80;
    server_name www.example.org;

    # this works fine
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8081/myApp/;
    }

    # not working
    location ~ ^/(.+)$ {
        proxy_pass http://localhost:8081/myApp/$1;
    }
}
I would like to access myApp with various paths like /myApp/ABC, /myApp/DEF, /myApp/GEH or /myApp/ZZZ.
Of course these paths do not exist in myApp. I want them to point to the root of myApp while keeping the URL.
Is that possible to achieve with nginx?
Nginx locations match in order of definition. location / is basically a wildcard location, so it will match everything, and nothing will reach the second location. Reverse the order of the two definitions, and it should work. But actually, now that I look at it more closely, I think both locations are essentially doing the same thing:
/whatever/path/ ->>proxies-to->> http://localhost:8081/myApp/whatever/path/
A very late reply; this might help someone.
Try proxy_pass /myApp/ /location1 /location2;
with each location separated by a space.
You will probably have to do a rewrite followed by a proxy_pass; I had the same issue. Check here: How to make a conditional proxy_pass within NGINX
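A rough sketch of the rewrite-then-proxy approach for this case (untested; it assumes every incoming path should be served by the root of myApp while the browser keeps the original URL):
location / {
    # internally rewrite /myApp/ABC, /myApp/DEF, ... (and any other path) to /myApp/
    # without issuing a redirect, so the address bar stays unchanged
    rewrite ^ /myApp/ break;
    proxy_pass http://localhost:8081;
}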

nginx does not forward to my rails app

I am rather new to using nginx. I want to use it to serve static content in order to reduce the load on the rails server. It seems to be a rather simple task but I just can't find a solution that works for me.
I want nginx to serve static files which exist in the public directory inside my rails application directory. To be more precise: I have an index.html inside that directory which I want served when entering http://[domainname]. Instead I just get the default nginx index.html. I have already checked that thin is running, and when I query 127.0.0.1:3000 I get the page I want.
So here's the file called [domainname] in the sites-available directory.
upstream rails {
    server 127.0.0.1:3000; # This is where thin is waiting for connections
}

# HTTP server
server {
    listen 80;
    server_name [domainname];

    set $app /home/projektor/website/app.[domainname];
    root $app/public;

    # Set a limit to POST data
    client_max_body_size 8M;

    # Errors generated by Rails
    error_page 400 /400.html;
    error_page 422 /422.html;
    error_page 500 504 /500.html;

    # Errors generated *outside* Rails
    error_page 502 @502;
    error_page 503 @503;

    # If the public/system/maintenance.html file exists,
    # return a 503 error, that ...
    if (-f $document_root/system/maintenance.html) {
        return 503;
    }

    # ... will serve the very same file. This construct
    # is needed in order to stop the request before
    # handing it to Rails.
    location @503 {
        rewrite ^ /system/maintenance.html break;
    }

    # When a 502 error occurs - that is, Rails is not
    # running - serve the 502.html file
    location @502 {
        rewrite ^ /502.html break;
    }

    # Add client-side caching headers to static files
    location ~ ^/(stylesheets|javascripts|images|system/avatars) {
        expires 720h;
    }

    # Hand over the request to Rails, setting headers
    # that will be interpreted by request.remote_ip,
    # request.ssl? and request.host
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        # If the file exists as a static file serve it directly
        if (-f $request_filename) {
            break;
        }

        # Oh yeah, hand over the request to Rails! Yay! :-D
        proxy_pass http://rails;
    }
}
The file is based on this one.
Thanks for your help.
Edit:
I have already exchanged 127.0.0.1:3000 for 0.0.0.0:3000 in the upstream part. I also checked the ownership of the files in sites-available and sites-enabled, and they should both be OK.
I hardcoded return 503; into the location block, and it seems that it never matches. It seems that requests always match the pre-created default configuration.
Take a look at the Mongrel example from the try_files documentation. Something like this should work:
location / {
    try_files /system/maintenance.html $uri $uri/index.html $uri.html @thin;
}

location @thin {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto http;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    # Oh yeah, hand over the request to Rails! Yay! :-D
    proxy_pass http://rails;
}
When I set up a recent version of NGINX, the solution to getting it to point properly to my rails app had everything to do with fully and properly configuring it. There were a few extra steps I didn't find in the documentation. It was acting just as yours is. I don't have a clean install to go off of, but bear with me here.
I set up directories called sites-enabled and sites-available in the default install location. This is a Debian install from the apt repository, so the install locations are /etc/nginx and /var/nginx.
mkdir /etc/nginx/sites-enabled /etc/nginx/sites-available
1. Place the conf file for your site in sites-available/.
2. Add this line to the bottom of /etc/nginx/nginx.conf:
include /etc/nginx/sites-enabled/*;
3. Look for and remove any reference that includes the DEFAULT configuration, which tells nginx to actually load that file and is what gives you the nginx default index.html (/etc/nginx/conf.d/default.conf):
grep -r "default.conf" /etc/nginx
4. Symlink (man ln) your file in sites-available to sites-enabled:
ln -s /etc/nginx/sites-available/myfilename /etc/nginx/sites-enabled/myfilename
5. Test your configuration:
/etc/init.d/nginx configtest
6. Once your configuration is set up properly, restart nginx:
/etc/init.d/nginx restart
I can't remember if removing that default reference was needed or whether including the line in step 2 was enough. If you update or comment on this answer with your results, I can try to dig up the other steps I took.
I don't think the problem here is your actual configuration.

Can one application with one server serve websockets and http traffic?

Is this somehow possible? Is it possible to do something like this in Ruby on top of Rack? I've seen there's websockets-rack, but as far as I understand, that is a Rack module that serves ONLY websocket traffic, not HTTP as well.
So basically, as the question states, is it possible to serve both protocols with just one server on the same port, instead of firing up something like Faye, websockets-rack or em-websockets?
Websockets are just an in-protocol upgrade of HTTP(S), so they are not normal TCP sockets but reuse the existing HTTP(S) connection (and thus use the same port). So, in theory it should work, and from what I know it works with the Perl Mojolicious framework. But I don't know if it works with ruby/rack.
The short answer is - (AFAIK) no.
Currently, a ruby HTTP server (like rails or sinatra) and a websocket server are mutually exclusive.
Having said that, you could use a third party to emulate it, specifically Nginx. With Nginx you can listen on a single port but, according to the path, decide whether you want to dispatch the request to the HTTP server or the Websocket server.
For example, you can run the HTTP server on port 3000, and the Websocket server on port 3020, and then configure the nginx.conf like this:
upstream http_app {
    server 127.0.0.1:3000;
}
upstream websocket_app {
    server 127.0.0.1:3020;
}

server {
    listen 80;
    server_name .example.com;

    access_log /var/www/myapp.example.com/log/access.log;
    error_log /var/www/myapp.example.com/log/error.log;
    root /var/www/myapp.example.com;
    index index.html;

    location /web {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://http_app;
    }

    location /socket {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://websocket_app;
    }
}
Now any request to http://www.example.com/web/... will reach the HTTP server, and any request to http://www.example.com/socket will reach the Websocket server.
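One caveat worth adding: for the /socket location to actually carry WebSocket traffic, nginx 1.3.13 or newer is required and the Upgrade handshake headers have to be passed through, roughly like this:
location /socket {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;     # forward the WebSocket upgrade request
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_pass http://websocket_app;
}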
