Active Storage problems with upload and download - ruby-on-rails

I got Active Storage to work with my Rails 5.2 app on localhost. I'm using direct upload and storing to local storage.
class Course < ApplicationRecord
  has_many_attached :files
end
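A minimal sketch of the matching upload form, assuming a standard form_with with direct upload enabled (the helper call is not from the original post):
<%= form_with model: @course do |form| %>
  <%# direct_upload: true makes the browser PUT the file straight
      to the storage endpoint before the form is submitted %>
  <%= form.file_field :files, multiple: true, direct_upload: true %>
  <%= form.submit %>
<% end %>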
But when I deploy to the staging server I hit two problems:
Upload is broken
Of the three HTTP requests needed to upload a file:
POST /rails/active_storage/direct_uploads HTTP/1.1
PUT /rails/active_storage/disk/eyJfcmFpbHMi...3aff HTTP/1.1
POST /course/1908/file_update HTTP/1.1
the second request never gets a response. The response should be a simple 204 No Content, but instead it runs into a timeout.
The server setup is:
an nginx reverse proxy on one machine calls
Apache on another machine, which runs
Passenger
The logfile that Rails writes shows that the response for the second request is actually produced quite fast:
Started PUT "/rails/active_storage/disk/eyJfcmFpbHMi...
...
Disk Storage (0.8ms) Uploaded file to key: 82L8qxveeux.. (checksum: ..J1BK==)
Completed 204 No Content in 2ms (ActiveRecord: 0.0ms)
Download is broken
When I attach a file to a course via the Rails console and then
try to download the file, I receive an empty file with the correct filename.
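For reference, attaching via the console looks roughly like this (the path is a placeholder, not from the original post):
course = Course.find(1908)
course.files.attach(io: File.open("/tmp/example.pdf"), filename: "example.pdf")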
Again, the Rails logfile looks fine:
Started GET "/rails/active_storage/disk/eyJfcmFpbHMiOn....
...
Completed 200 OK in 1ms (ActiveRecord: 0.0ms)
but somewhere between nginx, Apache, and Passenger the body of the response is lost.
Any ideas what could be at fault here?

Upload is broken
I found no explanation, but a solution, at Nginx reverse proxy causing 504 Gateway Timeout:
replace
proxy_set_header Connection "upgrade";
with
proxy_set_header Connection "";
and the timeout goes away; uploading files works.
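For context, a minimal sketch of where this line lives in the nginx reverse-proxy config (the upstream name is a placeholder, not from the original setup):
location / {
    proxy_pass http://apache_backend;
    proxy_http_version 1.1;
    # an empty Connection header lets nginx reuse the upstream
    # connection instead of requesting a protocol upgrade
    proxy_set_header Connection "";
    proxy_set_header Host $host;
}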
Download is broken
I found an unexpected header in the HTTP response, containing the path of the uploaded file in local storage:
X-Sendfile: /var/www/.../storage/Rj/9o/Rj9oZL9W1jsHrnS7YJyw5
Googling X-Sendfile I found an Apache module
https://tn123.org/mod_xsendfile/
that takes this response header and then sends the file as the response body.
You can install it on Ubuntu / Debian with
apt install libapache2-mod-xsendfile
and configure it in Apache:
XSendFile On
XSendFilePath /var/www/virthosts....
It seems Active Storage uses this by default, but it's
not mentioned anywhere in the documentation.
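For reference, the header Rails emits is controlled by a setting in config/environments/production.rb; in the stock generated file it looks like this (uncomment the line matching your web server):
# config/environments/production.rb
config.action_dispatch.x_sendfile_header = 'X-Sendfile' # for Apache
# config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' # for NGINX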

Related

unicorn production configuration

I see my unicorn error file growing, and I can't find out why.
It logs some paths, but when I try them in my browser, I don't see any issue or redirection.
I'm using Rails, and this is a snippet of my unicorn production conf file:
stderr_path "log/unicorn.error.log"
stdout_path "log/unicorn.log"
One line from the error log file:
xxx.xxx.xxx.xxx, yyy.yyy.yyy - - [16/Mar/2019:20:13:54 +0100] "GET /fr/yyyyyyy HTTP/1.0" 200 - 0.0926
I thought 200 meant it was OK, but why do I see HTTP/1.0 when my site is HTTPS-only?
Moreover, why do I get all those log entries when the reported URLs work correctly for me?
Is there a way to format the log so that I can get more info on errors?
HTTP/1.0 is the protocol version; it does not imply that your site is not using HTTPS.
By default, the Unicorn logger writes to stderr. So these are not errors, just logs of incoming requests.
Refer to this for Unicorn configuration options and their meanings: https://github.com/phusion/unicorn/blob/master/examples/unicorn.conf.rb
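A minimal sketch of separating Unicorn's own logging from stderr (the logger line is an assumption, not from the original config):
# config/unicorn.rb
stderr_path "log/unicorn.error.log"  # raw stderr, including request lines
stdout_path "log/unicorn.log"
# give Unicorn a dedicated logger instead of the stderr default
require "logger"
logger Logger.new("log/unicorn.app.log")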

Nginx not rendering .json URLs - Rails

I have a Rails 4 application with nginx 1.4.4 and I'm having issues trying to access JSON routes.
I'm trying to access the following route:
http://www.example.com/products/106.json
This produces a 404 / Not Found on my nginx server. However, on my development server (Thin) it works correctly and displays the product as JSON.
I have already added the application/json json; line to the mime.types config file.
What else should I look at in order to fix this? I can provide more information if needed. Thanks.
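For reference, the mime.types entry mentioned above sits in a types block like this (a minimal sketch; the real file lists many more types):
types {
    text/html         html htm shtml;
    application/json  json;
}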

Passenger + Apache "TraceEnable Off"

We're using Passenger 4.0.59 behind Apache 2.2 (CentOS 6.latest) with Rails 3.2.
In /etc/httpd/conf/httpd.conf we have:
TraceEnable off
We have one virtual host configured in httpd.conf and a second virtual host configured in /etc/httpd/conf.d/ssl.conf that's configured with Passenger.
I'm using commands of this form to test:
curl -I -X {method} https://{host}/{resource}
...and seeing the following behavior:
When I TRACE a static image over http, i.e. http://host.domain.com/images/foo.png, I get a 405 response (as expected).
When I TRACE the same static image over https, meaning it goes through the virtual host configured with Passenger, I get 405 (as expected).
However, when I TRACE a Rails service in our app, e.g. https://host.domain.com/status.json, I get a 200 response w/ valid data.
I would expect Apache to shut down the request and return a 405 response before it even gets to Passenger/Rails, but that isn't happening.
What am I missing / misunderstanding?
TraceEnable off is the correct directive to use, but you may have another TraceEnable directive elsewhere in your configs.
You should check all of your Apache config files to be sure there are no other TraceEnable directives.
Since the TraceEnable directive can be used in either the server config or a virtual host config, you may just want to add it to both.
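A sketch of setting it in both scopes (the host name is taken from the question; the rest of the vhost config is omitted):
# /etc/httpd/conf/httpd.conf -- server-wide
TraceEnable off

# /etc/httpd/conf.d/ssl.conf -- inside the Passenger virtual host
<VirtualHost *:443>
    ServerName host.domain.com
    TraceEnable off
</VirtualHost>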

Rails + thin: Not possible to download large files

I have a Rails app where users can manage large files (currently up to 15 GB). They can also download the stored files.
Everything works perfectly for files < 510 MB. But for files > 510 MB, the download stops after 522,256 KB (510 MB).
I think thin causes this issue. When I start my dev server using thin, I cannot download the complete file. When I start the dev server using webrick, everything works.
I used top to compare the RAM/CPU behavior, but both servers, thin and webrick, behave the same way. In development, both servers read the complete file into RAM and then send it to the user/client.
I tried changing some send_file options such as stream and buffer_size, and I also set length manually. But again, I was not able to download the complete file using thin.
I can reproduce this behavior using Firefox, Chrome, and curl.
The problem is that my production Rails app uses 4 thin servers behind an nginx proxy. Currently, I cannot switch to unicorn or passenger.
In development, I use thin 1.6.3, Rails 4.1.8, and Ruby 2.1.2.
def download
  file_path = '/tmp/big_file.tar.gz' # 5 GB
  send_file(file_path, buffer_size: 4096, stream: true)
end
If you are using send_file, it is ideal to let a front-end proxy take over the responsibility of serving the file. You said you are using nginx in production, so:
In your production.rb file, uncomment config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'.
You will also have to change your nginx configuration to accommodate Rack::Sendfile; see the Rack::Sendfile documentation. The changes amount to adding:
proxy_set_header X-Sendfile-Type X-Accel-Redirect;
proxy_set_header X-Accel-Mapping /=/files/; # or something similar that doesn't interfere with your routes
to your existing location block and adding an additional location block that handles the X-Accel-Mapping that you added. That new location block might look like:
location ~ /files(.*) {
    internal;
    alias $1;
}
You will know it is working correctly when you ssh to your production server, curl -I the thin server (not nginx), and see an X-Accel-Redirect header. A curl (without -I) directly to the thin server should not send the file contents.
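A sketch of that check (the port and route are placeholders, not from the original answer):
# on the production host, against thin directly
curl -I http://127.0.0.1:3000/download
# expect a header like
#   X-Accel-Redirect: /files/tmp/big_file.tar.gz
# and no file body in the response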
You can see my recent struggle with nginx and send_file here.

How to upload a large file in Ruby on Rails?

I am using Rails 3.0.9 and Ruby 1.9.3p0.
I am trying to upload a zip/tar file to the server.
When I upload a 2.5 MB file, everything works fine. But when I upload a 350 MB zip file, it produces an error (network error / connection timeout, or "aborted" in the Firebug response).
In the Apache config, I had increased the timeout from 300 to 300000, but nothing happened.
Is this possibly an Apache setting? Or could it be something on their end? Any suggestions on where I should look would be greatly appreciated.
I have used both Apache and nginx. For such big file uploads I prefer nginx. If you use nginx, you can set client_max_body_size to something like 500 MB inside your nginx.conf. Another question recommends the same approach.
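A minimal sketch of where that goes (500m matches the size suggested above):
# nginx.conf
http {
    client_max_body_size 500m;
    # can also be set per server or per location block
}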
