I have a simple NGINX proxy configured with some basic caching, and its performance behaves oddly in OpenResty compared to vanilla NGINX.
Under load testing (300 rpm), vanilla NGINX works just fine; however, the moment I switch from NGINX over to OpenResty, a portion of requests suddenly hang, unresponsive, taking 20+ seconds to return.
My nginx.conf looks as follows:
events {
    worker_connections 1024;
}

http {
    proxy_cache_path /var/cache keys_zone=pagecache:10m;

    server {
        listen 80;

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;

        ssl_certificate /etc/ssl/mycert.pem;
        ssl_certificate_key /etc/ssl/mycert.key;

        location / {
            proxy_cache pagecache;
            proxy_cache_key $host$request_uri;
            proxy_cache_lock on;
            proxy_pass http://ssl-proxy-test.s3-website-eu-west-1.amazonaws.com/;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}
My Dockerfile for NGINX looks like this:
FROM nginx
COPY certificates /etc/ssl
COPY nginx.conf /etc/nginx/nginx.conf
And for OpenResty looks like this:
FROM openresty/openresty:buster
COPY certificates /etc/ssl
COPY nginx.conf /usr/local/openresty/nginx/conf/nginx.conf
I've tried this on several OpenResty builds (buster, bionic, xenial), and get the same results on each.
The slow requests do, however, return 304 with an X-Cache-Status: HIT header and don't appear to make it through to the upstream server, which makes me think the bottleneck must be in reading the cached data from memory/disk rather than in the upstream itself.
I'm new to OpenResty, so I'm not entirely sure how much its cache behaviour differs from vanilla NGINX's.
Any advice on where to start debugging this? Or what might be the cause?
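(For reference, a single response's X-Cache-Status header and timing can be spot-checked with something like the following; the URL is a placeholder for the proxied site.)

# Dump response headers, discard the body, and print the total request time
curl -sk -D - -o /dev/null -w 'total_time: %{time_total}s\n' https://localhost/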
After running my load tests on some different infrastructure, I found that this problem only seemed to occur on AWS Elastic Container Service.
Switching to a Docker image based on CentOS/Amazon Linux got things working much more consistently.
I'm still a little unsure of the real cause, but at least I have something working.
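For reference, the swap was roughly along these lines (the centos tag is illustrative; check the available openresty/openresty tags, and the conf path can differ between image variants):

# Same layout as the earlier OpenResty Dockerfile, just on a CentOS-based image
FROM openresty/openresty:centos
COPY certificates /etc/ssl
COPY nginx.conf /usr/local/openresty/nginx/conf/nginx.conf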
Related
I have a React app served with serve in Docker.
I use Nginx to proxy requests to my app with the following config:
http {
    upstream myapp_servers {
        server 1.2.3.4:8000;
    }
}

server {
    listen 80;
    server_name myapp.com;

    location / {
        proxy_pass http://myapp_servers;
    }
}
When I access my app at myapp.com it works fine, but if I access a subroute like myapp.com/route, nginx returns a 404 error.
try_files $uri index.html only works if nginx serves the static files itself.
How can I solve this for a dockerized React app?
OK, so I figured it out; the issue was not with Nginx.
I was using vercel/serve as my deployment server and had to run it in single-page mode with -s. I also had to remove homepage: "."
from my package.json, and now the application works properly in production.
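For reference, the serve invocation now boils down to something like this (the build directory and port are placeholders for whatever your Dockerfile actually uses):

# -s serves the app in single-page mode so unknown routes fall back to index.html
npx serve -s build -l 8000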
I have a Rails API and a web app (using Express), completely separate and independent from each other. What I want to know is: do I have to deploy them separately? If I do, how can I make it so that my API is at mysite.com/api and the web app at mysite.com/?
I've seen many projects that do it that way, some even keeping the API and the app in separate repos.
Usually you don't expose such web applications directly to clients. Instead you use a proxy server that forwards all incoming requests to the Node or Rails server.
nginx is a popular choice for that. The beginner's guide even contains an example very similar to what you're trying to do.
You could achieve what you want with a config similar to this:
server {
    location /api/ {
        proxy_pass http://localhost:8000;
    }

    location / {
        proxy_pass http://localhost:3000;
    }
}
This assumes your API runs locally on port 8000 and your Express app on port 3000. Also, this is not a full configuration file; it needs to be loaded into or added to the http block. Start with the default config of your distro.
When there are multiple location entries, nginx chooses the most specific one. You could even add further entries, e.g. to serve static content.
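For instance, a further entry for static assets might look like this (the /static/ path and root directory are placeholders):

location /static/ {
    root /var/www/myapp;
    expires 30d;
}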
While Sven's answer is completely correct for the question given, I'd prefer doing it at the DNS level so that I can move a service to a new server in case my API or web app experiences heavy load. This lets the API run without affecting the web app, and vice versa.
DNS Structure
api.mysite.com => 9.9.9.9 // public IP address of my server
www.mysite.com => 9.9.9.9 // public IP address of my server
Since you now want both your web app and API to run on the same server, you can use nginx to route requests appropriately.
server {
    listen 80;
    server_name api.mysite.com;
    # ..
    # Removed for simplicity
    # ..

    location / {
        proxy_pass http://localhost:3000;
    }
}

server {
    listen 80;
    server_name www.mysite.com;
    # ..
    # Removed for simplicity
    # ..

    location / {
        proxy_pass http://localhost:8000;
    }
}
Any time in the future, if you are experiencing overwhelming traffic, you can simply point the DNS at a new server and you'd be good.
I have both a working Rails 4 application (http://localhost:3000) and an Nginx server (http://localhost:80), each accessible through the browser.
Nginx has been configured as a reverse proxy for my Rails 4 app, so that http://localhost actually reaches my Rails application at http://localhost:3000. This is working, but the web pages render extremely slowly whenever I access the application through Nginx. I have configured Tomcat with Apache Web Server in the past and never had slowness problems, and Nginx is generally said to be much lighter and faster than Apache Web Server.
This makes me wonder whether I have configured my Rails app with Nginx correctly.
Modified nginx.conf
server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    location / {
        proxy_pass http://localhost:3000;
    }

    ...
    ...
}
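For comparison, a fuller proxy location for a Rails app typically also forwards the original host and client details; the extra proxy_set_header directives below are my assumption of a common setup rather than part of my current config:

location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    # Forward the original host and client information to the Rails app
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}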
I want to create a simple chat.
I am not a guru of server administration, so I have a question about nginx and Faye.
I use nginx + Passenger for my production server. I have a droplet on DigitalOcean and want to deploy my application there.
For deployment I followed the official Passenger tutorial: https://www.phusionpassenger.com/library/install/nginx/install/oss/trusty/
For model callbacks I use the faye-rails gem. As the faye-rails docs say, if I use Passenger, I need to use this configuration:
config.middleware.use FayeRails::Middleware, mount: '/faye', :timeout => 25, server: 'passenger', engine: {type: Faye::Redis, host: 'localhost'} do
  map '/announce/**' => SomeController
end
In development at localhost:3000 the chat works perfectly fast, but when I deploy it, it works very slowly (responses arrive anywhere from 5 to 60 seconds later). I don't know how to fix it.
In my /etc/nginx/sites-enabled/myapp.conf I use this config:
server {
    listen 80;
    server_name server_ip;

    # Tell Nginx and Passenger where your app's 'public' directory is
    root /project_path_to_public;

    # Turn on Passenger
    passenger_enabled on;
    passenger_ruby /ruby_wrapper_path;
}
Do I need to change my /etc/nginx/sites-enabled/myapp.conf, and if so, how? Or what else do I need to do?
I'm currently using Faye and Redis on an application I'm developing. This is not a direct solution to the question's current setup, but an alternative method that I have implemented. Below is my nginx configuration; I then have Faye running via rackup in a screen session on the server.
/etc/nginx/sites-enabled/application.conf:
server {
    listen 80;
    listen [::]:80;
    server_name beta.application.org;

    # Tell Nginx and Passenger where your app's 'public' directory is
    root /var/www/application/current/public;

    # Turn on Passenger
    passenger_enabled on;
    passenger_ruby /usr/local/rvm/gems/ruby-2.2.1/wrappers/ruby;
    rails_env production;

    location ~* ^/assets/ {
        # Per RFC2616 - 1 year maximum expiry
        # http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
        expires 1y;
        add_header Cache-Control public;
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }
}
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream websocket {
    server 127.0.0.1:9292;
}

server {
    listen 8020;

    location / {
        proxy_pass http://127.0.0.1:9292/push;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
This link should provide a little insight into how it works:
https://chrislea.com/2013/02/23/proxying-websockets-with-nginx/
You can also check the Faye GitHub repo for some guidance on setting it up with Passenger.
Also, if you followed the DigitalOcean tutorials for initial server setup and ended up enabling your firewall, please ensure you allow the ports you have Faye/websockets running on (see the "Configure a Basic Firewall" section of Additional Recommended Steps for New Ubuntu 14.04 Servers).
My alternative method involves running Faye in a separate screen on the server. A few commands you will need to manage screens on an Ubuntu server are:
screen -S <pick screen name> (new screen)
screen -ls (lists screens)
screen -r <screen number> (attach screen)
To quit from a screen: Ctrl+a, then d (detach screen)
Once you have a new screen running, run the Faye server in that screen using rackup: rackup faye.ru -s thin -E production
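A minimal faye.ru for this kind of setup could look like the following (a sketch rather than my exact file; the mount path matches the /push location proxied in the nginx config above):

# faye.ru - minimal rackup file for the standalone Faye server
require 'faye'

# Mount the Faye endpoint at /push to match the proxy_pass target above
bayeux = Faye::RackAdapter.new(mount: '/push', timeout: 25)
run bayeux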
As a note, with this option, every time you restart your DigitalOcean server (e.g. if you create a snapshot as a backup), you will need to create a new screen and run the Faye server again; using something like daemon_controller would be a better implementation to circumvent this (I merely haven't implemented it yet...). Head over to GitHub and look for FooBarWidget/daemon_controller.
Let me know if you have any other questions and I'll try to help out!
Well, this is embarrassing. If for some reason my developer pushes a bad build of our Rails app to the production server, Passenger may not be able to load it. When that happens, web requests to Passenger dump an error page with all of the variables in .env. Since he prefers to put all of his secrets, like API keys to remote services, in .env, this is potentially a big security hole.
Is there any way to turn this behaviour off? We're using nginx. We're adding a staging server to the workflow to avoid pushing bad releases, but still, this seems like it shouldn't be happening.
Thanks. Here's the relevant portion of the nginx.conf file:
http {
    passenger_root /home/X/.rvm/gems/ruby-2.1.1/gems/passenger-4.0.40;
    passenger_ruby /home/X/.rvm/gems/ruby-2.1.1#XXX/wrappers/ruby;

    server {
        listen 443;
        server_name www.X.com;
        root /home/X/current/public;
        passenger_enabled on;
        ..
Turn passenger_friendly_error_pages off. Since Passenger 4.0.42, it's off by default in production.
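For example, alongside the existing Passenger directives in the config above (the directive can be set in the http or server block):

# Don't show Passenger's detailed error page (which exposes environment variables)
passenger_friendly_error_pages off;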