send_file Rails 2 Issue - ruby-on-rails

In my Rails (2.3.10) application I've found a weird issue. The application needs to serve an XML file for download when a user hits the download URL.
For example:
http://www.example.com/test/all.xml
The problem is that if I hit the URL through the alias name (http://www.example.com/test/all.xml), the XML does not get downloaded.
If I hit the production server directly, e.g. http://xx.xx.xx.xx:3000/test/all.xml, the XML downloads without any problem.
Can anyone help with this?
Here is my code in the Test controller:
def index
  file_path = "/tmp/all.xml"
  send_file file_path, :type => 'text/xml; charset=utf-8'
end
I have placed the all.xml file on my production server at /tmp/all.xml.
I am using JRuby (1.6.5), and the app is deployed to Tomcat as a WAR file.

So production is running on port 80, but the application server is running on port 3000. What server is doing the redirect? I'd look at the configuration of that server.


Serve files from public folder in ruby on rails app

I have been handed a Ruby project that creates a document and serves it to the user. When I try to access the file in a local environment, it is delivered correctly (this is the code that does so):
filepath = Rails.root.join("public", @records.document.url)
send_file(filepath)
So I know the file is constructed correctly and sending it to the user using send_file works at least in a local environment.
But when it's deployed on the production server (running on Amazon EC2 with Ubuntu, deployed with Dokku), I get a 500 Internal Server Error:
ActionController::MissingFile (Cannot read file *path of the file*)
A few things I'm noticing: running find / -iname "*filename*" tells me the file is stored at /var/lib/docker/overlay2/*container_name*/merged/app/public/filename and /var/lib/docker/overlay2/*container_name*/diff/app/public/filename, but the result of joining Rails.root with the filename is app/public/filename. Do I need to pass send_file the whole filepath?
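For what it's worth, Pathname#join returns an absolute path whenever the root itself is absolute, so the join is unlikely to be the culprit. A minimal sketch (the /app root and file name here are hypothetical stand-ins):

```ruby
require "pathname"

# Stand-in for Rails.root as seen inside the container (hypothetical value).
root = Pathname.new("/app")

# Joining a relative document URL onto an absolute root yields an
# absolute path, which is what send_file ultimately needs.
filepath = root.join("public", "uploads/report.pdf")
puts filepath  # /app/public/uploads/report.pdf
```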
I googled for a couple of hours, and it seems nginx has no access to the public folder because it's running on the host machine while the app is inside a container? How would I know if that is the case, and if so, how should I serve the file?
The person who originally wrote the code told me to use OpenURI.open_uri(), but googling it doesn't turn up anything applicable to this situation.
Nothing you're doing here actually makes sense - it sounds like you're just following a bunch of misinformation down a bunch of rabbit holes.
The way this is supposed to work is that the files in /public - not /app/public - are served directly by the HTTP server (nginx or Apache) in production, and by your Rails application in development (so you don't have to configure a local HTTP server). The /app directory is for your application code and uncompiled assets. Do not serve files from there - ever.
The /public directory is used for your compiled assets and stuff like robots.txt, the default error pages and various icons. Serving the files directly by your HTTP server is far more efficient than serving them through your Rails application. You can do a litmus test to see whether serving static assets works by sending curl -v YOUR_URL/robots.txt.
If this isn't working in production, you need to check your nginx configuration. There is no shortage of guides on how to serve static files with nginx and Docker.
Serving files with a Rails controller and send_data / send_file should only be done when it's actually needed:
The file is not a static file or something that can be compiled at deploy time.
You need to provide access control to the files with your application.
You're proxying files from another source.
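For the access-control case in particular, here is a rough sketch of the kind of guard a controller might run before calling send_file; the base directory and helper name are invented for illustration:

```ruby
# Hypothetical root directory for protected downloads.
BASE_DIR = "/srv/app/private_files"

# Resolve a user-supplied file name inside BASE_DIR, rejecting path
# traversal. A controller would authenticate the user, call this, and
# then pass the returned path to send_file (or render a 404 on nil).
def safe_file_path(requested)
  candidate = File.expand_path(requested, BASE_DIR)
  candidate.start_with?(BASE_DIR + File::SEPARATOR) ? candidate : nil
end

p safe_file_path("report.pdf")        # "/srv/app/private_files/report.pdf"
p safe_file_path("../../etc/passwd")  # nil (traversal rejected)
```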

"Errno::ENOENT no file exists" but file DOES exist. Works locally, not in AWS

MY CODE BASE SETUP:
Ruby on Rails Deployed locally with Postgres
Production with Heroku (staging and production servers).
MY PROBLEM: Things are "seemingly" working on my local machine (no hard errors) but NOT working in production. I believe my issue has to do with passing the proper information to Ruby's File.open command.
I'm receiving the following error on my production(staging) server:
Errno::ENOENT in Projects#show
Showing /app/app/views/projects/show.html.erb where line #276 raised:
No such file or directory # rb_sysopen - https://[bucketname].s3.amazonaws.com/path/to/instance/of/file/file_name.stl
But if I take the resulting hyperlink and copy/paste it into my web browser, a file open/download prompt appears. What am I doing wrong here?
BACKGROUND / HISTORY:
I haven't been able to resolve this, and I've scoured many Stack Overflow questions looking for answers:
ENOENT, no such file or directory but file exists ()
File Exists But Receiving ENOENT Error (I'm not using Node, and my server restarts every time I deploy)
Rails 3: Errno::ENOENT exception (if CentOS is like Heroku, then we have a similar issue which no one seems to be able to answer; not sure about my tmp file situation, but this question sounds similar to my issue)
Carrierwave, Rails 4; Errno::ENOENT (No such file or directory - identify) (might be relevant, but I'm not pulling an ImageMagick file; I'm trying to parse a non-image file)
Remove "www", "http://" from string (I tried stripping various portions of my path to feed File.open the right format, to no avail)
File.open, open and IO.foreach in Ruby, what is the difference? (this helped me understand the difference between the commands, which might be my issue)
Open an IO stream from a local file or url (sounds like a reliable fix, but I don't know how to utilize this open command, as the gem uses File.open rather than these open() commands)
I defined the following method in my project model (project.rb), which takes the project file from a particular instance; from the project_file input, the gem parses an STL file for my web app.
This particular gem - "STL_parser" - calls File.open(filepath, 'rb'), which is where I believe my app runs into issues. Here's the method in my model that calls the STL_parser gem:
def analyze_STL(project_file)
  if Rails.env.development? || Rails.env.test?
    full_path = project_file.current_path
  else
    full_path = project_file.url.to_s
  end
  parser = STLParser.new
  parser.process(full_path)
end
The RESULTS from each case and environment are as follows:
In Development:
project_file.current_path
...for one particular instance yields on my mac:
/Users/myname/full/path/to/file/on/laptop/unique_instance_file.stl
RESULT: the app works on my local machine, AND if I copy/paste the above path into a web browser like Chrome, it prompts a file open/download menu.
In Production Environment:
project_file.current_path
RESULT:
Errno::ENOENT error - No such file or directory # rb_sysopen
-uploads/project/project_file/instance/instance_file.stl
and clearly lacks a host, so in production I modified the path to call the file URL from AWS, as seen above:
project_file.url.to_s
https://[bucketname].s3.amazonaws.com/path/to/instance/of/file/file_name.stl
RESULT:
Errno::ENOENT error - No such file or directory # rb_sysopen
-https://[bucketname].s3.amazonaws.com/path/to/instance/of/file/file_name.stl
...but if I copy / paste that path into my web browser, the file will prompt an open / download menu.
Thus the path seems to be pointing to the correct file, but File.open doesn't like that path format. Thoughts? I've been banging my head for some time now!
Possibilities:
Is it a permissions thing with AWS? If so, why do other image files produced by other Ruby gems open fine in the same AWS bucket?
As a hack, I applied the full path in my development environment's URL format by simply creating a variable: hacked_path = "localhost:3000" + file_path
...this generates the same Errno::ENOENT No such file or directory # rb_sysopen error, but I can copy/paste that same path into my web browser and it opens a download prompt. So it seems my browser pieces together some information about how to open the file, but the gem I'm calling in my web app isn't that flexible. A friend of mine said something about my web app not being able to read the "binary" while my local machine can?
Is this an issue of File.open(filename, 'rb') vs. open(filename, 'rb')? I wouldn't even know how to modify the gem, but I could ask its developers for help.
Is this because my app doesn't have enough time to download the file from AWS before it tries to process it?
CORS permissions in AWS?
Something simpler I haven't thought of?
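On the File.open vs. open question above: Kernel#open (and URI.open from the open-uri standard library) understands http(s) URLs, while File.open only understands filesystem paths, which matches the ENOENT errors shown. A sketch of one possible workaround; the helper name is invented and this is not the gem's own API:

```ruby
require "open-uri"
require "tempfile"

# File.open treats a URL string as a (nonexistent) local path:
begin
  File.open("https://example.com/file.stl", "rb")
rescue Errno::ENOENT
  puts "File.open cannot read URLs"
end

# One workaround: download the remote file to a local tempfile first,
# then hand the local path to the path-based parser.
def fetch_to_local_path(url)
  tmp = Tempfile.new(["download", File.extname(url)])
  tmp.binmode
  URI.open(url) { |remote| tmp.write(remote.read) }  # open-uri handles http(s)
  tmp.flush
  tmp.path
end
```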
Thanks in advance for your help and feedback! Yes, I know I need more programming fundamentals courses. Any online ones you'd recommend? I already took the one from onemonth.

Rails application issue after deployment - 404 error

I'm new to RoR.
I was able to install Rails and host a sample app (with a "Welcome" controller) on WEBrick on my Windows machine.
Now I have a Unix WebLogic server along with a dedicated domain.
After exporting the .war file using Warbler, I accessed the Oracle admin console, from where I deployed the .war file to the dedicated domain. I did all this for the sample app with only the Welcome controller in it.
But even after deploying the WAR file, on accessing the domain along with the port number (:9002), I ended up with a 404 file not found error. Looking at the server logs, there weren't any records relating to an error, so the application must have been deployed properly. I assume I must have missed some basic configuration in routes.rb or similar files before deploying. Can anyone guess what the possibilities are, and if possible point me to any tutorials that cover the configuration steps to be carried out before deployment? Do I need to install both JRuby and Rails on the server before deployment?
I can't really guess from a 404 error alone.
You can try mapping your Rails app's Rack config to a different base URI.
All you need to do is wrap the existing 'run' command in a map block.
Try this in your Rails 'config.ru' file:
map '/mydepartment' do
  run Myapp::Application
end
Now when you run 'rails server', the app should be at localhost:3000/mydepartment.
Not sure if this will give you the desired outcome, but it's worth a try.
One more thing: also add this to your config/environments/production.rb and config/environments/development.rb (depending on which mode you are running in):
config.action_controller.asset_path = proc { |path| "/abc#{path}" }
Otherwise, when you call helpers such as stylesheet_link_tag in your views, they will generate links without the "/abc" prefix.
Also, here are some guides you may refer to for good support:
JRubyOnRailsOnBEAWeblogic.
Use JRuby with JMX for Oracle WebLogic Server 11g
Let me know if it is not resolved.

rails app - sudden 403 after pull - how do I start to debug?

I've been working on a Rails 3.1 app with one other dev.
I've just pulled some of his recent changes using git, and am now getting a 403 on any page I try to visit:
You don't have permission to access / on this server.
I'm running the site locally through Passenger.
Oddly, when I start the app using Rails' internal server, I can visit the site at http://0.0.0.0:3000.
Looking at the changes in this recent pull, the only files that have changed are some JavaScript files, some HTML, application.rb, routes.rb and a rake file.
How do I debug this? I'm a bit lost on where to start.
EDIT:
If I roll back to an earlier version, the site works through Passenger, which leads me to believe the problem is within the Rails app rather than an Apache error. Or it could be a permissions thing - can git change file permissions in this way?
IMHO this is a configuration error in Apache or a wrong directory layout. Make sure that the passenger_base_uri still points to the public folder inside your Rails project and that there are no hidden .htaccess files blocking access. Also verify that your symlinks are correct (if there are any), and check your Apache error log. As for git: the only permission it tracks is the file executable bit.
Start by launching your console to see whether Rails and your app can be loaded.
In your application root directory, type:
rails console

Rails sends 0 byte files using send_file

I can't get send_file(Model.attachment.path) to work.
It doesn't fail; instead, it sends a 0-byte file to the client. The file names are correct, though.
This problem started happening after I did a big migration from Rails 2.3.8 to 3.
A lot of other things changed in this migration, and I will try my best to detail all of them:
Distribution/server change: Rackspace RHEL5 to Linode Ubuntu 10.04 LTS
Ruby version change: 1.8.6 -> 1.9.2
Rails version change: 2.3.8 -> 3.0.0
HTTP server change: Apache2 -> nginx (however, I tried on Apache2 as well and it did not work).
I moved the attachments via FTP, as they were not part of my git repositories and so were not published via cap deploy; instead it was a manual FTP from remote (RHEL5) to local (Win7), then local (Win7) to remote (Ubuntu 10).
I do know that FTPing does not retain file permissions through the transfers, so I have mimicked the chmods seen on my previous server; they are almost identical (users/groups are different: set to root:root instead of olduser:olduser).
Here is a snippet from my production log of a request to download an attachment:
Started GET "/attachments/replies/1410?1277105698" for 218.102.140.205 at 2010-09-16 09:44:31 +0000
Processing by AttachmentsController#replies as HTML
Parameters: {"1277105698"=>nil, "id"=>"1410"}
Sent file /srv/app/releases/20100916094249/attachments/replies/UE0003-Requisition_For_Compensation_Leave.doc (0.2ms)
Completed 200 OK in 78ms
Everything looks okay. Let me also rule out local issues: I've tried downloading via Chrome on both Win7 and Ubuntu (in VirtualBox).
Let me also assure you that the path is indeed correct:
root#li162-41:/srv/app/current# tail /srv/app/releases/20100916094249/attachments/replies/UE0003-Requisition_For_Compensation_Leave.doc
#
#
%17nw
HQ��+1ae����
%33333333333(��QR���HX�"%%��#9
��#�p4��#P#��Unknown������������G��z �Times New Roman5��Symbol3&�
�z �Arial5&�
So, to sum up the question: how do I get send_file to actually send files instead of fake 0-byte junk?
send_file has an :x_sendfile param which defaults to true in Rails 3.
This feature offloads the streaming download to the front-end server - Apache (with mod_xsendfile) or lighttpd - by returning an empty response with an X-Sendfile header containing the path.
nginx uses the X-Accel-Redirect header for the same functionality, but you have to
configure Rails accordingly in the proper environment file:
config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'
Rails 3 update: this line already exists in production.rb; just uncomment it.
Add sendfile on; to your nginx config to make use of the header sent by Rails.
Remember that the absolute path must be used and that nginx must have read access to the file.
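Put together, the nginx side described above might look like this; the location name and paths are examples only, not taken from the question:

```nginx
# Serve whatever file Rails names in its X-Accel-Redirect response header.
location /private_files/ {
    internal;                            # reachable only via X-Accel-Redirect
    alias /srv/app/shared/attachments/;  # nginx needs read access here
    sendfile on;                         # stream the file efficiently
}
```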
Another way, for aliased files:
For better security I use aliases in nginx instead of absolute paths;
however, the send_file method checks for the existence of the file, which fails with an alias.
Thus I changed my action to:
head(
  'X-Accel-Redirect'    => file_item.location,
  'Content-Type'        => file_item.content_type,
  'Content-Disposition' => "attachment; filename=\"#{file_item.name}\""
)
render :nothing => true
In Rails 3, just uncomment the line config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' in production.rb inside environments folder.
Yes, I had the same problem, with X-Sendfile being enabled by default in Rails 3, too.
If you have a large volume of send_file calls,
you can just comment out the following line in config/environments/production.rb:
#config.action_dispatch.x_sendfile_header = "X-Sendfile"
Then the send_file method starts working perfectly.
Because I can't install the X-Sendfile extension for Apache, I just searched a little and found this.
I hope it helps.
I've had similar issues with send_file() in the past; using send_data() instead saved me back then (e.g. send_data File.read(filename), :disposition => 'inline', :type => "some/mimetype").
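As a sanity check along the same lines: if File.binread (what a send_data call would effectively stream) returns the expected number of bytes, the file itself is fine and a 0-byte symptom points at the X-Sendfile delivery path instead. A self-contained sketch using a throwaway tempfile:

```ruby
require "tempfile"

# Write a known payload to a temporary file, then read it back the way
# send_data would: the whole file into memory, in binary mode.
tmp = Tempfile.new("attachment")
tmp.binmode
tmp.write("hello attachment")
tmp.flush

data = File.binread(tmp.path)
puts data.bytesize  # 16, so the bytes are readable by the Ruby process
# In a controller this would become something like:
#   send_data data, :filename => "report.doc", :type => "application/msword"
```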
On Rails 4, I realized my problem was that I had deleted the temporary file which I'd generated to send to the user.
If I don't delete the file, send_file works. I've not tested on Thin, but it works great on Passenger 5 as a standalone server.
