AWS Elastic Beanstalk Rails modify passenger config gzip - ruby-on-rails

Hi, I have an Elastic Beanstalk app and am trying to enable gzip on it. However, when I test for gzip compression, I see that the home page itself is gzip-enabled, but the API calls are not.
For example:
masterpiecesart.com - gzip on!
http://masterpiecesart.com/api/v2/countries.json - gzip off!
Do I have to add some additional config files in the .ebextensions folder to enable gzip on all calls?
If so, where should this config be added? In one suggested answer (Increasing client_max_body_size in Nginx conf on AWS Elastic Beanstalk), I see that the /etc/nginx/... folder below is where the conf should be added:
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000777"
    owner: ec2-user
    group: ec2-user
    content: |
      client_max_body_size 20M;
      gzip on;
However, when I SSH into my Elastic Beanstalk EC2 instance, I see no nginx folder inside the /etc/ directory. The nginx server is running at /var/lib/passenger-standalone/3.0.17-x86_64-ruby1.9.3-linux-gcc4.6.2-1002/nginx-1.2.3/sbin/nginx and the config seems to come from the /tmp/passenger-standalone.1661/config file.
In the config file I should add to .ebextensions, where should I put the proxy.conf file, if not at "/etc/nginx/conf.d/proxy.conf" as specified in the answer above?
Any help would be great, thanks in advance!
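As a side note, if modifying the Passenger-generated nginx config proves awkward, one workaround is to compress responses at the Rack level instead of in nginx. This is a hedged sketch, not specific to Elastic Beanstalk: Rack::Deflater ships with the rack gem and gzips any response whose client sends Accept-Encoding: gzip, so API JSON and HTML are treated alike.

```ruby
# config/application.rb (sketch; "MyApp" is a placeholder module name)
module MyApp
  class Application < Rails::Application
    # Gzip all responses (HTML and JSON API calls alike) at the Rack layer,
    # so compression no longer depends on the nginx config Passenger generates.
    config.middleware.use Rack::Deflater
  end
end
```

Note that compressing in Ruby costs CPU in the app process rather than in nginx, so it trades convenience for a little throughput.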

Related

Multi Tenant Rails app: Certbot SSL, Puma and nginx

I'm trying to set up a multi-tenant Rails app: Nginx and Puma with Certbot. However, Certbot rewrites the nginx conf to point only at the last domain in the server_name list.
The other sites (example1.com, example2.com, etc.) then do not have certs served. Certs are generated for them, but they are never wired into the conf file.
I've tried separating each domain into its own conf file in the sites-enabled directory, but then I get an error about multiple upstreams.
So, what is the solution?
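One common cause of the "multiple upstreams" error is that each per-domain conf file defines an upstream block with the same name. A hedged sketch of the usual fix, assuming Puma listens on a Unix socket (the names and paths here are illustrative, not from the question):

```nginx
# conf.d/upstream.conf — define the shared Puma upstream exactly once
upstream puma_app {
    server unix:///var/run/puma.sock;  # assumed socket path
}

# sites-enabled/example1.com.conf — one server block per domain, each with its own cert
server {
    listen 443 ssl;
    server_name example1.com;
    ssl_certificate     /etc/letsencrypt/live/example1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example1.com/privkey.pem;
    location / {
        proxy_pass http://puma_app;
    }
}
# Repeat the server block for example2.com and so on, changing only
# server_name and the two ssl_certificate* paths.
```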

What is the role of nginx when dockerizing an app?

I am new to docker and I've been working on dockerizing and deploying my app to an internal server at work.
The structure is that I have one Dockerfile for my React + nginx frontend and another for the Flask backend.
Then I use docker-compose to bring the resulting containers up together.
I've been following the format that other people at my work have written previously, so I am not fully grasping all aspects.
I am especially confused about the role of nginx.
The Dockerfile that contains both react and nginx looks like this:
FROM node:latest as building
RUN npm config set proxy <proxy for my company>
RUN npm config set https-proxy <proxy for my company>
WORKDIR /app
ENV PATH /node_modules/.bin:$PATH
COPY package.json /app/
COPY ./ /app/
RUN npm install
RUN npm install react-scripts@3.0.1 -g
RUN npm run build
FROM nginx
RUN rm -rf /etc/nginx/conf.d
COPY deployment/nginx.conf /etc/nginx/nginx.conf
COPY --from=building /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
and my customized nginx.conf looks like
user root;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        server_name <internal_server_box>;
        listen [::]:80;
        listen 80;
        root /usr/share/nginx/html;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html =404;
        }
        location /v1 {
            proxy_pass <backend_container>:5000;
        }
    }
    client_max_body_size 2M;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/conf.d/*.conf;
}
I am not sure what nginx does here, because I can still make this app accessible from the outside just by running the React app without nginx. I read somewhere that it could function as some kind of gateway, but that wasn't clear to me.
It would be great if anyone could explain why we need nginx to serve the app, when we can make it accessible outside the internal server box without it.
In the general case, there is no need or requirement to install nginx if you want to dockerize something.
If what you are dockerizing is a web app of some sort (in the broadest sense, i.e. something which people will use their browsers or an HTTP API to communicate with) and it can only handle a single client connection at a time, the benefit of a web server in between is to provide support for multiple concurrent clients.
Many web frameworks allow you to serve a single user or a small number of users without a web server, but this does not scale to production use with as many concurrent clients as the hardware can handle. When you deploy, you add a web server in between to take care of spawning as many instances of your server-side client handling code as necessary to keep up, as well as handle normal web server tasks like resource limits, access permissions, redirection, logging, SSL negotiation for HTTPS, etc.
Nginx has two important roles here. (Neither is specific to Docker.) As you say, it's not strictly required, but this seems like a sound setup to me.
The standard Javascript build tooling (Webpack, for instance) ultimately compiles to a set of static files that get sent across to the browser. If you have a "frontend" container, it never actually runs your React code, it just serves it up. While most frameworks have a built-in development server, they also tend to come with a big "not for production use" disclaimer. You can see this in your Dockerfile: the first-stage build compiles the application, and in the second stage, it just copies in the built artifacts.
There are some practical issues that are solved if the browser can see the Javascript code and the underlying API on the same service. (The browser code can just include links to /v1/... without needing to know a hostname; you don't have to do tricks to work around CORS restrictions.) That's what the proxy_pass line in the nginx configuration does.
I consider the overall pattern of this Dockerfile to be a very standard Docker setup. First it COPYs some code in; it compiles or packages it; and then it sets up a minimal runtime package that contains only what's needed to run or serve the application. Your build tools and local HTTP proxy settings don't appear in the final image. You can run the resulting image without any sort of attached volumes. This matches the sort of Docker setups I've built for other languages.
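To make the wiring concrete, here is a minimal docker-compose.yml that would match this kind of setup. The service and image layout are assumptions, not taken from the question; the key point is that the backend's service name is what the proxy_pass <backend_container>:5000 line resolves to on the Docker network:

```yaml
version: "3.8"
services:
  frontend:              # the React + nginx image built from the Dockerfile above
    build: .
    ports:
      - "80:80"          # only nginx is published to the outside world
    depends_on:
      - backend
  backend:               # the Flask API; reachable from nginx as http://backend:5000
    build: ./backend     # assumed path to the Flask Dockerfile
    expose:
      - "5000"           # visible on the Docker network, not published to the host
```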

Nginx error: client intended to send too large body

Periodically I get an error:
This site can't be reached.
The webpage at https://example.com/document might be temporarily down, or it may have moved permanently to a new web address.
My site is stored on AWS.
I use rails + nginx + passenger.
Nginx error log:
client intended to send too large body: 3729822 bytes,
client: 172.42.35.54, server: example.com,
request: "POST /document HTTP/1.1", host: "test.example.com",
referrer: "https://test.example.com/document/new"
app log:
ActionController::RoutingError (No route matches [GET] "/document")
After a while, the error disappears. I suspect it may be related to deployment, but I'm not sure. Could you please tell me what this might be related to and how to fix it?
For me, the path of nginx.conf was /etc/nginx/nginx.conf.
In my case, I just added client_max_body_size in the http block and it worked for me:
http {
...
client_max_body_size 20M;
}
Make sure to restart nginx after changing this config.
The default Nginx config limits the client request body to 1 MB.
You have to increase client_max_body_size to allow users to post large documents.
Don't mix up the context (http, server, location) of this directive, and don't forget to reload the configuration or restart Nginx afterwards.
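For example, the directive can be scoped so that only the upload endpoint accepts large bodies. A sketch (the location path is illustrative):

```nginx
server {
    client_max_body_size 1m;       # keep the default limit everywhere else

    location /document {
        client_max_body_size 20m;  # allow large bodies only where uploads happen
    }
}
```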
I have updated /etc/nginx/nginx.conf.
In my case, I added client_max_body_size in the http block after sendfile on;, as below:
http {
...
sendfile on;
client_max_body_size 20M;
}
I put client_max_body_size right after sendfile on;, though nginx does not actually care about the directive's position within the http block.
Don't forget to restart nginx after updating nginx.conf, as below.
For ubuntu
sudo service nginx restart
For Centos
sudo systemctl restart nginx

NGINX Setup (Rails App in a subdirectory)

I'm using NGINX with Passenger to run a rails application on an Ubuntu server.
However, I'd like to have the Rails app served from www.mydomain.com/store, and a WordPress install served from www.mydomain.com.
How would one go about setting up nginx.conf?
From the official manual:
To do this, make a symlink from your Ruby on Rails application’s public folder to a directory in the document root. For example:
ln -s /webapps/mycook/public /websites/phusion/rails
Next, set passenger_enabled on and add a passenger_base_uri option to the server block:
server {
    listen 80;
    server_name www.phusion.nl;
    root /websites/phusion;
    passenger_enabled on;       # <--- These lines have
    passenger_base_uri /rails;  # <--- been added.
}
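Putting the question's two pieces together, a hedged sketch of a combined config might look like this, assuming WordPress lives in /websites/phusion and is served through PHP-FPM (the PHP-FPM socket path is an assumption):

```nginx
server {
    listen 80;
    server_name www.mydomain.com;
    root /websites/phusion;       # WordPress document root

    passenger_enabled on;
    passenger_base_uri /store;    # Rails public/ symlinked to /websites/phusion/store

    # Hand PHP requests (WordPress) to PHP-FPM
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;  # assumed socket path
    }
}
```

The symlink from the manual's example would then point the Rails app's public folder at /websites/phusion/store rather than /websites/phusion/rails.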

Nginx and Passenger deploy issue

Currently I can only get the default nginx page to come up on my domain name. I am pretty sure the error is either in the /etc/hosts file or the nginx.conf file.
my /etc/hosts file is
127.0.0.1 localhost.localdomain localhost
myip server.mydomain.com server
and nginx.conf is:
server {
    listen 80;
    server_name server.mydomain.com;
    root /whatever/pulic;
    passenger_enabled on;
    rails_env production;
}
I don't get any errors in the log. Incidentally, I can run mongrel and see the application at mydomain:3000.
In your /etc/hosts, you have
123.45.67.78 server.domain.com servername
So in your nginx.conf you should have the line
server_name servername;
as defined in the third column of your /etc/hosts. Make sure you don't still have the default nginx server block in your nginx.conf file, too; otherwise it might be taking priority (based on its relative position).
We just had this issue. The problem turned out to be that Nginx was using a different config file than we thought it was (possibly an issue with how it was compiled on the server?).
We discovered this by running nginx -t, which prints the config file nginx is reading and tests its syntax. The file it said it was testing was not the one we expected.
