I'm setting up a web app with a React frontend that I want to expose and some local backend modules, all deployed with docker-compose. Everything works fine on localhost.
Now I need nginx to proxy requests for my purchased domain, which goes through Cloudflare, and later serve it over HTTPS. In Cloudflare, the SSL/TLS encryption mode is Off for now, so I can test over plain HTTP, and everything on the Cloudflare side has been set up according to the docs I read. Unfortunately, my nginx configuration only works on localhost, which is blocking the HTTPS setup (I get a 522 error when loading the page through my domain).
This is my nginx config file:
server {
    listen 80;
    server_name mydomain;

    root /usr/share/nginx/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location /statsmodule {
        proxy_pass http://statsmodule:3020;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
    }

    location /auth {
        proxy_pass http://auth:3003;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
    }

    (...)
    # the other location blocks for the other services are similar to this
}
What am I missing?
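From what I understand, the 522 means Cloudflare cannot even open a TCP connection to my origin, so nginx has to be reachable from the internet on port 80 while the encryption mode is Off, not only on localhost. This is roughly the listener I expect to need (the domain names below are placeholders):

server {
    # port 80 must be published by the nginx container (e.g. "80:80") and open
    # in the server's firewall, otherwise Cloudflare times out with a 522
    listen 80;
    listen [::]:80;
    # should match the hostname that Cloudflare proxies to this origin
    server_name mydomain.com www.mydomain.com;
    ...
}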
Related
I want to configure nginx as a reverse proxy on my host Ubuntu VM to point to the JupyterHub running inside Docker on port 8888. I am using subpaths for this, not subdomains, and my corporate firewall only gives me access to ports 80 and 443 (all other ports are blocked), which is why I can't use a rewrite. I came up with the following nginx configuration, which works, but it does not display the assets from JupyterHub (CSS files, images and so on).
The path myservername.com/jphub displays the page, but the assets are loaded from myservername.com (without the subpath /jphub).
E.g. the logo is loaded from myservername.com/hub/logo instead of myservername.com/jphub/hub/logo.
Does anyone know if I am doing this the right way? What should I change inside the config?
upstream jupyter {
    server localhost:8888;
    keepalive 32;
}

server {
    listen 80;
    server_name myservername.com;

    ssl_certificate /etc/ssl/cert-request/cert.pem;
    ssl_certificate_key /etc/ssl/private/cert.key;
    ssl_prefer_server_ciphers on;

    location /jphub/ {
        proxy_pass http://jupyter/;
        proxy_http_version 1.1;
        proxy_redirect default;
        proxy_redirect / /jphub/;
        proxy_redirect http://jupyter/ https://$host/jphub/;
        proxy_pass_header Set-Cookie;
        proxy_pass_header Cookie;
        proxy_pass_header X-Forwarded-For;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_set_header X-Nginx-Proxy true;
        add_header X-Upstream $upstream_addr;
        proxy_read_timeout 86400;
    }
}
When the location path ends in / and the proxy_pass URL also ends in /, Nginx replaces the matched prefix before forwarding the request, so the /jphub part never reaches the upstream.
To have it forward the full path, remove the trailing /, so you have
location /jphub {
    ...
}
in your Nginx configuration.
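In the same spirit, the URI part of proxy_pass should go too: it is the trailing / after http://jupyter that tells Nginx to swap out the matched prefix. A minimal sketch of the combination that forwards the full path (the upstream name is taken from the question; the JupyterHub note is an assumption about how the hub is configured):

location /jphub {
    # no URI after the upstream name, so the original /jphub/... path is passed through
    proxy_pass http://jupyter;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
}

JupyterHub itself then still has to know it is served under the subpath (e.g. its base_url set to /jphub/), otherwise it keeps generating asset links relative to /.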
I'm deploying some services using Docker, with docker-compose and an nginx container to proxy to my domain.
I have already deployed the frontend app, and it is accessible from the web. In theory, I only need to expose the frontend/nginx port to the web, without exposing the rest of the services, but I haven't been able to make that work so far.
For example: Client -> Login request -> frontend <-> (local) backend GET request.
Right now I'm getting connection refused, and the GET request points to http://localhost instead of the service name defined in docker-compose (all containers are deployed on the same network, one that I created).
What do I need to do to configure this?
Here is my nginx config so far:
server {
    listen 80 default_server;
    listen 443 ssl;
    server_name mydomain;

    ssl_certificate /etc/letsencrypt/live/mydomain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain/privkey.pem;

    location / {
        proxy_pass http://frontend:3000/;
    }

    location /auth {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://auth:3003;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }

    # Same for the other services
    (...)
EDIT:
Should I create a location for every GET and POST endpoint that I have in my services?
Like this:
location getUser/ {
    proxy_pass http://auth:3003;
}
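Or is one location per backend service prefix enough? Something along these lines is what I have in mind (only the auth service name and port are taken from my config above, the rest is just a sketch):

# one prefix per backend service instead of one location per endpoint
location /auth/ {
    proxy_pass http://auth:3003;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}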
What config.action_cable.url should be configured for websockets / Rails / Kubernetes / Minikube with Nginx?
When running docker-compose locally, with an Nginx process in front of a Rails process (not API-only, but with SSR) and a standalone Cable process (cf. the guides), the websockets work fine by setting the following server-side (in, say, /config/application.rb, with action_cable_meta_tag set in the layouts):
config.action_cable.url = 'ws://localhost:28080'
I am targeting Kubernetes with Minikube locally: I deployed Nginx in front of a Rails deployment (RAILS_ENV=production) along with a Cable deployment, but I can't make it work. The Cable service is internal, of type ClusterIP, with port and targetPort. I tried several variations.
Any advice?
Note that I use Nginx -> Rails + Cable on Minikube, and the entry point is the Nginx service, external, of kind LoadBalancer, where I used:
upstream rails {
    server rails-svc:3000;
}

server {
    listen 9000 default_server;

    root /usr/share/nginx/html;
    try_files $uri @rails;

    add_header Cache-Control public;
    add_header Last-Modified "";
    add_header Etag "";

    location @rails {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header Host $http_host;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_pass http://rails;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
To allow any origin, I also set:
config.action_cable.disable_request_forgery_protection = true
I have an answer: in a Minikube cluster, don't set anything and disable forgery protection; Rails will default to the correct value. When Nginx is in front of a Rails pod and a standalone ActionCable/websocket pod (the Rails image is launched with bundle exec puma -p 28080 cable/config.ru), if I name "cable-svc" the service that exposes the ActionCable container and "rails-svc" the one for the Rails container, you need to:
In K8s, don't set the config for CABLE_URI.
In the Rails backend, since you don't know the URL (an unknown 127.0.0.1:some_port), do:
# config.action_cable.url <-- comment this
config.action_cable.disable_request_forgery_protection = true
In the Nginx config, add a specific location for the /cable path:
upstream rails {
    server rails-svc:3000;
}

server {
    [... root, location @rails { ... } ...]

    location /cable {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass "http://cable-svc:28080";
    }
}
Check the logs: no more WebSocket requests hitting Rails, and Cable responds.
I have set up two nginx instances as follows:
nginx (https) → docker: [nginx (http) → uwsgi]
The front-facing nginx process exposes the https service and passes all requests down via proxy_pass to the docker nginx process. It also redirects all http requests to https.
The problem is that the docker nginx process has the following in a location block in its default server instance:
server {
    ...
    location = / {
        return 301 $scheme://$http_host${request_uri}login/;
    }
}
The intention is to redirect / to the login page. This works fine, except that the redirection always points to an http://... URL. E.g. a request to http://myserver.com/ gets redirected to https://myserver.com/, which is then passed down to the docker nginx, which returns a 301 to the following URL: http://myserver.com/login/. I want it to be https://myserver.com/login/, or whatever scheme the front-facing server offers.
This is how I set up the front-facing nginx process:
server {
    listen 443 ssl http2 default_server;
    ...

    location / {
        proxy_pass http://localhost:81;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_cache_bypass $http_upgrade;
        proxy_redirect https:// http://;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Scheme $scheme;
    }
}

server {
    listen 80 default_server;
    ...

    location = / {
        return 301 https://$http_host${request_uri};
    }
}
Is this kind of redirection even possible?
Also, in case you are wondering, I have already tried all possible combinations of X-Forwarded-Proto, X-Scheme and proxy_redirect that other answers suggest, namely:
Nginx does redirect, not proxy
how to handle nginx reverse proxy https to http scheme redirect
One trick that I've found is that you can disable absolute redirects (e.g. instead of redirecting to http://localhost/your_url_with_trailing_slash_now it will redirect to /your_url_with_trailing_slash_now).
Add the following at any point within the server block (of the nginx instance that issues the 301 redirect, in your case the docker nginx instance):
server {
    absolute_redirect off;
    ...
}
More info can be found here: https://serverfault.com/questions/227742/prevent-port-change-on-redirect-in-nginx
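With absolute redirects disabled, the Location header the docker instance sends back is relative, so the browser resolves it against whatever scheme and host it used for the original request. A minimal sketch of the backend server block with this applied (everything except the directives from the question is illustrative):

server {
    ...
    absolute_redirect off;   # emit "Location: /login/" instead of a full URL

    location = / {
        # with a relative target, the client keeps the scheme it used at the
        # front-facing proxy (https), instead of the backend's own $scheme
        return 301 /login/;
    }
}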
I'm trying to set up websockets on my Rails application. My application works with an iOS client that uses the SocketRocket library.
As the websockets backend I use the faye-rails gem.
It is integrated into the Rails app as Rack middleware:
config.middleware.delete Rack::Lock
config.middleware.use FayeRails::Middleware, mount: '/ws', server: 'passenger', engine: {type: Faye::Redis, uri: redis_uri}, :timeout => 25 do
  map default: :block
end
It works perfectly until I upload it to the production server with Nginx. I have tried a lot of solutions for passing the websocket requests to the backend, but with no luck. The main thing is that they assume two servers running, whereas I have just one. My idea was that I only need to proxy requests from the /faye endpoint to /ws (to update the headers). What are the correct proxy_pass parameters in my case?
location /faye {
    proxy_pass http://$server_name/ws;
    proxy_buffering off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
I had a similar problem and, after struggling for a while, I finally got it to work.
I'm using nginx 1.8 with a thin server, the faye-rails gem, and my mount point is /faye.
My nginx config looked like this:
upstream thin_server {
    server 127.0.0.1:3000;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    ...
    proxy_redirect off;
    proxy_cache off;

    location = /faye {
        proxy_pass http://thin_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        chunked_transfer_encoding off;
        proxy_buffering off;
        proxy_cache off;
    }

    location / {
        try_files $uri/index.html $uri.html $uri @proxy;
    }

    location @proxy {
        proxy_pass http://thin_server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
    }
    ...
}
The turning point for me was setting "location = /faye". Before that I tried "location /faye" and "location ~ /faye", and both failed.
It looks like the equals sign "=" prevents nginx from mixing this location with the other location settings.
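For reference, this is my rough understanding of nginx's location matching order, which would explain the difference (the blocks below are only illustrative and not meant to be combined in one config):

location = /faye  { ... }   # 1. exact match, wins immediately for /faye itself
location ^~ /faye { ... }   # 2. prefix match that also skips the regex check
location ~ /faye  { ... }   # 3. regex, checked after prefixes and can override them
location /faye    { ... }   # 4. plain prefix, used only if no regex matches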