I have a Rails app that saves files in MongoDB. This works great and I have it set up to serve those files, but for some use cases I need to get the file out and write it to disk (merging PDF files).
In IRB or from a simple Ruby file I can run the following code and get the file almost instantly, but when the same code is called from within Rails it times out.
require 'open-uri'
open('id1_front.pdf', 'wb') do |file|
file << open('http://127.0.0.1:3000/files/uploads/id1_front.pdf').read
p file
end
Here's the error I get from within Rails:
Timeout::Error (Timeout::Error):
app/controllers/design_controller.rb:38:in `block in save'
app/controllers/design_controller.rb:37:in `save'
Anyone know why it would be timing out in Rails? Any alternate solutions to get a file out of mongo and write it to disk?
thanks!
When you're running your development server, you have only one thread on which to respond to requests. This thread will be blocked when a request is being served: so, you request design_controller#save, which then tries to make another request to the web server for an uploaded file. This request will never successfully complete, because the webserver is still trying to complete the previous one.
You might be able to get around this problem by using thin as your Rails server instead of webrick. Add gem 'thin' to your Gemfile and start your server with rails s thin. I'm not sure whether this will allow more than one request to be serviced simultaneously, but it's at least worth a shot.
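For reference, the change is a one-line Gemfile addition (a minimal sketch, assuming Bundler):
# Gemfile
group :development do
  gem 'thin'   # then start the app with: rails s thin
end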
--EDIT--
After some testing I determined that thin is, unfortunately, also single-threaded, so it will have exactly the same problem.
After a bit of Googling, I did discover shotgun. It hasn't been active for a while, but it looks like it might fix your problem, since it spawns a new application instance per request in development. Give it a shot.
My development Rails 5 server with Puma keeps freezing and hanging when sending multiple requests at one time from my separate frontend app to the Rails API. There is no error, it just hangs on the POST requests. When I try to kill the server with CTRL + C, nothing happens. I have to manually kill the port.
I've tried setting config.eager_load = true in development.rb. I've tried adding config.allow_concurrency in application.rb. I've Googled relentlessly to no avail. I am sending around 5 requests concurrently from the frontend, so I believe this volume of requests is causing it, but I don't know for sure.
Has anyone else experienced this or have an idea of what needs to be done here? I can usually get all the requests coming back to the frontend successfully around 3-4 times, then the server just freezes.
It especially occurs after I change any one line of code in any file in the project while the server is running.
It's been nearly 2 years but I finally happened to stumble upon what had been causing my issue.
Basically it boiled down to a method in my code not being thread-safe. Since my current_user variable was only accessible from my controllers, I had a before_action on my base controller that assigned the current user to User.current, so that I could access the current user globally via User.current, not just in my controllers.
So PLEASE make sure you're not mutating class-level state like this from your controllers. It is not thread-safe. I ended up following this thread-safe solution instead for my particular case: https://stackoverflow.com/a/2513456/7629239
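To make the pitfall concrete, here is a minimal sketch of the two approaches, roughly along the lines of the linked answer; the class and callback names are only illustrative:
# NOT thread-safe: a single value shared by every server thread, so concurrent
# requests can overwrite (and see) each other's user.
class User < ActiveRecord::Base
  cattr_accessor :current
end

# Thread-safe sketch: store the value per thread and overwrite it on every request.
class User < ActiveRecord::Base
  def self.current
    Thread.current[:current_user]
  end

  def self.current=(user)
    Thread.current[:current_user] = user
  end
end

class ApplicationController < ActionController::Base
  before_action :set_current_user

  private

  def set_current_user
    User.current = current_user   # hypothetical; current_user comes from your auth setup
  end
end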
What is your Puma configuration? How many threads and workers (Puma workers, not Rails workers)?
Ensure that Puma has enough threads, and that your db connection pool is at least as large as the thread count. Changing a line of code should not exhaust your server's resources. Are you using a watcher like watchman?
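For reference, a typical shape for those settings (illustrative numbers only; tune them for your app and machine):
# config/puma.rb
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))            # separate OS processes
threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))  # threads per worker
threads threads_count, threads_count
preload_app!

# config/database.yml -- give the pool at least as many connections as threads:
# pool: <%= ENV.fetch("RAILS_MAX_THREADS", 5) %>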
Sorry, I know this is a generic question; I'll try to provide as much detail as possible.
I am running Bitnami RubyStack (3.2.7) on an Amazon EC2 Medium instance, and some aspects of Rails are extremely slow. Here are some of them:
When logging in (I am using the Devise gem), providing an invalid password takes a long time to come back and tell you that the password is invalid.
The sign-up process takes extremely long, responding after about 2 minutes (when all it has to do is run a couple of queries against the db?).
File uploads (with CarrierWave) are so slow they are practically not working (files go to S3 via Fog on CarrierWave).
The code in the above cases is pretty straightforward and I don't see anything obviously wrong. In fact, most of the work is performed by the gems (e.g. Devise handles registrations and logins). Any help would be greatly appreciated.
Try using an analytics/profiling tool, like New Relic.
It will help you locate the slowest code and/or the slowest db queries.
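Getting it into the app is a one-line Gemfile change (a sketch; you also need a newrelic.yml with your licence key, which New Relic generates for you):
# Gemfile
gem 'newrelic_rpm'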
EDIT
In the comments below you mentioned that you are using Devise 0.5.8; that is very outdated, considering that Devise is at version 2.1.2 today.
Please update your devise and keep me posted.
NEW EDIT
Since the Devise version is not the problem, you could look into the views.
In the views, check for HTTP requests that could be combined into a single request or made asynchronous.
For instance, Google Analytics.
If loading their JavaScript files is blocking your view from rendering, moving them into the head or making them async could help (see the sketch below).
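As a rough sketch of the async idea in a Rails layout (the file name is made up; extra options on the helper are passed through as HTML attributes):
<%= javascript_include_tag "analytics", async: true %>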
I would suggest compiling Ruby 1.9.3-p194 with the falcon patch; it increases Ruby and Rails speed dramatically.
falcon patch in rvm
You can download the Ruby source and apply the patch yourself if you do not want to use RVM.
It might also be a DNS issue if some parameter for reverse DNS lookups is enabled in the Apache configuration.
Just put the rails tweak gem in your Gemfile and after that run
bundle install
I think it will solve your problem.
Thanks
I have a Rails app hosted on Heroku and a mobile app made with Rhodes.
I'd like to send images from the mobile app to my Rails app using an HTTP POST request. Since Heroku doesn't allow you to store files, I'm using Amazon S3.
I can't send the file from Heroku to S3 because it takes more than 30 seconds and causes a timeout. I've seen plenty of examples of uploading a file directly to S3 when the user has a form, but that obviously won't work in this case.
I tried using the suggestion here:
rails 3, heroku, aws-s3, simply trying to upload a file to S3 that is POSTed (http/multipart) to our app
but I still get a 503 request timeout.
I don't want to put my amazon s3 keys on the app.
Right now, I feel like my only option is to host my app on EC2 which I would rather not do as I like the simplicity of Heroku.
Also, it seems strange that these uploads would take so long regardless. I'm only posting images from a mobile phone camera, so they're not huge files.
I was getting the same error on a project at my job. Some people say that the only way to solve this is by uploading files directly to the S3 bucket. That is difficult in our case, because we are using the Paperclip gem for Rails with several resized versions of each image.
Other people say that "The Heroku timeout is a set in stone thing that you need to work around. Direct upload to S3 is the only option, with some sort of post-upload processing required", so I recommend the following:
Maybe this is not a complete solution, but it could be very useful; it was for me in a Rails app:
Worker Dynos, Background Jobs and Queueing
Perhaps you should move this heavy lifting into a background job which can run asynchronously from your web request.
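As a rough sketch of that idea (not the exact setup; the model, worker and env var names are made up, and Sidekiq is just one of several queue libraries you could use): stash the POSTed bytes somewhere the worker dyno can reach (Heroku web and worker dynos don't share a filesystem, so a database or Redis record works as the hand-off), return immediately, and let the worker push to S3 outside the 30-second window.
class UploadsController < ApplicationController
  def create
    # Persist the raw upload quickly so the web request can return at once.
    upload = PendingUpload.create!(
      filename: params[:file].original_filename,
      data:     params[:file].read            # binary column
    )
    S3PushWorker.perform_async(upload.id)
    head :accepted
  end
end

class S3PushWorker
  include Sidekiq::Worker

  def perform(upload_id)
    upload  = PendingUpload.find(upload_id)
    storage = Fog::Storage.new(
      provider:              'AWS',
      aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
      aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
    )
    # Push the bytes to S3 from the worker dyno, then clean up.
    storage.directories.get(ENV['S3_BUCKET']).files.create(
      key:  upload.filename,
      body: upload.data
    )
    upload.destroy
  end
end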
Regards!
So I finally figured out how to do this.
After lots of back and forth with AWS reps and Cloudfiles reps, and pulling my hair out, I realized it would be a lot less work to just get another Rails server that could write to the filesystem.
So I started another Rails app on OpenShift. It's just as easy as Heroku to get started with (in fact, I might consider moving my Rails app there, but it's too new for my taste right now and doesn't have the community around it that Heroku does).
Then, I just had to have communications between my two rails apps.
I know it's not the best/scalable/elegant fix, but it got the job done, and that's what matters in the end!
I've got the following Rails code:
send_file '/test.pdf'
The file downloads with 0 bytes. Has anyone got any ideas on how to fix this?
thanks
I believe that send_file depends on support from your web server to work. Are you running your app using the Rails built-in server? If so, I think you'll see the behaviour you've got here.
Basically, the idea behind send_file is that it sets an HTTP header 'X-Sendfile' pointing to the file you want to send. The web server sees this header and rather than returning the body of your response, sends the specified file instead.
The benefit of this approach is that your average web server is highly optimised for sending the content of static files very quickly, and will typically get the job done many times more quickly than rails itself can.
So, the solutions to your problem are to either:
* Use a web server that supports X-Sendfile (e.g. Apache), or
* As rubyprince commented, use send_data instead, which makes Ruby do the heavy lifting (see the sketch below).
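A minimal sketch of the second option (the action name is made up; the path and filename are the ones from the question):
def download
  # Read the whole file into memory and let Rails stream it -- no web server support needed.
  send_data File.binread('/test.pdf'),
            filename:    'test.pdf',
            type:        'application/pdf',
            disposition: 'attachment'
end
# For the first option, tell Rails which header your front-end server understands
# (in config/environments/production.rb):
#   config.action_dispatch.x_sendfile_header = 'X-Sendfile'        # Apache mod_xsendfile
#   config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'  # nginx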
Just as an aside, you should be able to confirm that this is what's happening by looking at the response headers using either Firebug or Safari/Chrome's developer tools.
I have an edge case, although a very customer visible one, where Tomcat begins processing requests before all dependencies are properly loaded for a Ruby on Rails stack running underneath JRuby.
Once Tomcat is restarted, something similar to the following happens:
undefined method `utc_offset' for nil:NilClass
[RAILS_ROOT]/gems/gems/activesupport-2.3.8/lib/active_support/values/time_zone.rb:206:in `<=>'
This happens when the following code is invoked on one of my services:
#timezones = ActiveSupport::TimeZone.all
If you wait a few more seconds and refresh the requesting page, it'll load no problem.
Is there a way to ensure that Tomcat does not start processing these requests until the entire stack (ActiveSupport, ActiveRecord, etc.) is loaded? Has anyone experienced similar symptoms?
This sounds like a possible bug in JRuby-Rack, assuming that's what you're using to run your Rails app in Tomcat. JRuby-Rack is supposed to load the entirety of config/environment.rb before it will process requests, so I'm not sure how this would happen to you, but perhaps I've overlooked something. Could you share some more data (or maybe code or an app that reproduces the issue) about how you induced the error at http://kenai.com/jira/browse/JRUBY_RACK or http://bugs.jruby.org?
I'm not sure if there is something like that in Tomcat directly, but you can write a javax.servlet.Filter that intercepts all requests and denies them until your application is loaded. When the application is fully loaded, you tell the filter to stop denying requests. (This isn't a pure Ruby solution, though.)