When implementing a controller action such as the following:
def create_file
  File.open('public/test.txt', "w+") do |f|
    f.write('test')
  end
  sleep(60)
  head :no_content
end
The file domain/test.txt will be accessible after the action completes; however, any attempts to access this URL before the action returns (such as during the sleep() call) seem to hang until it is finished.
I have use cases where I'd like to create publicly accessible files based on user input, call a third-party API that requires passing a URL to such a file (there is no option to send the data itself in this case), and then remove the file before the action is done. Unfortunately, this seems to be impossible because the file is not actually accessible until the action has finished.
Is there some way around this in Rails, some kind of flush call, route refresh, or handle close? I'm not sure why it's even hanging in this case. Or am I going to have to use separate actions to create and process the file (assuming I don't want to store it on a static, non-Rails site on the same server)?
It seems the problem was, as the comments indicate, caused by only having one thread available to serve requests in development mode. The solution was to add config.threadsafe! to config/environments/development.rb and to launch Thin with the --threaded option.
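For reference, a minimal sketch of that change (Rails 3-era syntax; config.threadsafe! was removed in later Rails versions, and the application module name below is a placeholder):

# config/environments/development.rb
MyApp::Application.configure do
  # Don't limit the app to serving one request at a time.
  config.threadsafe!
end

# Launch Thin with threading enabled:
#   thin start --threaded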
Let's say that I have a POST endpoint in my Rails app which receives a param called state, an integer of either 200 or 503.
How can I make the robots.txt file respond with the given state from that POST endpoint? In other words, I need a way to control the response code of that one file (robots.txt) depending on that POST endpoint.
BTW, the question is not about how to store that state or anything like that; it's only about how to change the response code of a public file.
Is that possible?
What I have in mind, and am trying now, is to have a controller action matching the robots.txt route, but I feel this is a silly thing to do.
Yes, if you want Rails to be involved in deciding the response for a given URL, then you're going to want to define a controller action to handle those requests.
You can use send_file to actually do the file-sending part.
Depending on your web server's configuration, it's likely you'll need the actual robots.txt file to be stored somewhere other than public/ -- otherwise it might get served without Rails even having a chance to get involved.
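A rough sketch of how that could look (the route, controller, storage location, and cache key below are illustrative assumptions, not details from the question):

# config/routes.rb
get "/robots.txt", to: "robots#show"

# app/controllers/robots_controller.rb
class RobotsController < ApplicationController
  def show
    # Assume the POST endpoint stored the desired status (200 or 503) somewhere readable.
    status = Rails.cache.read("robots_status") || 200
    send_file Rails.root.join("config", "robots.txt"),
              type: "text/plain",
              disposition: "inline",
              status: status
  end
end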
You could instead arrange to rewrite your nginx (say) configuration file at runtime, based on what response code you want... but I think that would be silly to do.
A more practical middle-ground would be to have Rails create or delete a marker file, and then use a conditional in the nginx configuration based on whether that file exists. That would be an nginx question though... and would get complicated if you have more than one server.
Hi, I am processing some background jobs and I need to redirect to a URL from a module or directly from a worker. As far as I know there is only one method for this, i.e. redirect_to, but it's not available in a module or worker as per the Rails MVC architecture, and yet I need to do this.
Please see my code below:
@oauth = Koala::Facebook::OAuth.new(Figaro.env.fb_app_id, Figaro.env.fb_secret_token, Figaro.env.fb_callback_url)
oauth_code_url = @oauth.url_for_oauth_code
redirect_to oauth_code_url
I have also included ActionController::UrlFor to get the redirect_to method in the module and worker, but it still throws an error, and I am not able to call controller methods from a module or worker. Could anyone please suggest the best approach to do this?
Redirects only make sense inside the request/response cycle; workers usually run in the background and asynchronously, so the user who might have initiated the job isn't waiting for the worker's response.
If you do want to wait (i.e. run the worker synchronously), it's still not up to the worker to redirect; the worker should simply "signal" the controller to perform the redirect (this keeps a separation of concerns).
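A sketch of what that separation could look like (the worker, model, and controller names below are hypothetical; it assumes the result can be persisted somewhere the web layer can read it):

# app/workers/oauth_url_worker.rb (Sidekiq-style worker)
class OauthUrlWorker
  include Sidekiq::Worker

  def perform(task_id)
    oauth = Koala::Facebook::OAuth.new(
      Figaro.env.fb_app_id,
      Figaro.env.fb_secret_token,
      Figaro.env.fb_callback_url
    )
    # Don't redirect here; just record the URL for the web layer to use.
    OauthTask.find(task_id).update(redirect_url: oauth.url_for_oauth_code)
  end
end

# app/controllers/oauth_tasks_controller.rb
class OauthTasksController < ApplicationController
  def show
    task = OauthTask.find(params[:id])
    if task.redirect_url.present?
      redirect_to task.redirect_url
    else
      head :accepted # the client can poll again until the worker has finished
    end
  end
end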
So I have the following scenario (it's a Grails 2.1 app):
I have a Controller that can be accessed via //localhost:8080/myController
This controller in turn executes a call to another URL opening a connection using new URL("https://my.other.url").openConnection()
I want to capture the request so I can log the information
I have a Filter present in my web.xml already which does the job well for controllers mapped in my app. But as soon as a request is fired to an external URL, I don't get anything.
I understand that my filter will only be invoked for URLs inside my app, and that depends on my filter mapping, which is fine.
I'm struggling to see how a solution inside the app is actually viable. I'm thinking of using a mixed approach with the DevOps team to capture such outgoing calls from the container and then log them into a separate file.
I guess my questions are:
Is there a way to do it inside the app itself?
Is the approach I'm planning a sensible one?
Cheers!
Any reason why you don't want to use http-builder? There's a Grails plugin for it, and it makes remote XML calls much easier than handling the plumbing yourself. At the bottom of the linked page they describe how you can enable request logging via log4j configuration.
Does the send_file method in Rails give a return value? I'd like to do one thing if the sending is successful, and another thing if the sending is not successful. I looked at the documentation at this link, but did not find anything relevant.
No, there is no way to confirm download completion or success with send_file. From this question: Can we find out when a Paperclip download is complete? :
send_file doesn't actually send the file at all; it sets a special header telling the web server what to send, returns immediately, and moves on to serve another request. To be able to track whether the download completes, you'd have to occupy your Rails application process sending the file and block until the user has downloaded it, instead of leaving that to the web server (which is what it's designed to do). That would be extremely inefficient.
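As an illustration of that hand-off (a sketch, not code from the linked answer; the file path is made up), the header-based delegation only happens once Rails is told which header your web server understands:

# config/environments/production.rb
# Tell Rails which header to set so the web server streams the file itself.
config.action_dispatch.x_sendfile_header = "X-Sendfile"          # Apache mod_xsendfile
# config.action_dispatch.x_sendfile_header = "X-Accel-Redirect"  # nginx

# In a controller action (path is illustrative):
send_file Rails.root.join("private", "report.pdf"), type: "application/pdf"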
You may be able to do something using cookies and JavaScript on the client.
See this question: Rails File Download And View Update - Howto?
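If you try the cookie approach, the Rails side could look roughly like this (a sketch; the cookie name and file path are made up, and the cookie only tells the client the response has started, not that the full file arrived):

# app/controllers/downloads_controller.rb (illustrative)
def download
  # Client-side JavaScript can poll document.cookie for this marker
  # and update the UI once the browser starts receiving the response.
  cookies["file_download_started"] = { value: "1", expires: 1.minute.from_now }
  send_file Rails.root.join("private", "export.csv"), type: "text/csv"
end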
If there's an exception in a Rails application, one gets an error page with the call stack, request parameters and a code excerpt.
Is it possible to create a different output if the request is an XHR request?
Is it possible to re-define the exception output in general? (I assume that would automatically answer the first question)
You could try overriding rescue_action in your action controller.
def rescue_action(exception)
  if request.xhr?
    custom_xhr_error_handling_for(exception)
  else
    super
  end
end
The more traditional way is to use rescue_from Exception, :with => :custom_xhr_error_handling_for, but that removes your ability to let the default code do the dirty work if it later turns out it was not an XHR request.
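For comparison, the rescue_from registration would look roughly like this (the handler body is an illustrative assumption, not the answer's code):

class ApplicationController < ActionController::Base
  # Every exception now goes through this handler, XHR or not,
  # which is the trade-off described above.
  rescue_from Exception, :with => :custom_xhr_error_handling_for

  private

  def custom_xhr_error_handling_for(exception)
    render :json => { :error => exception.class.name, :message => exception.message },
           :status => :internal_server_error
  end
end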
You only see the page with the traceback in development mode; in production mode you see a standard error page (located in public/500.html) which just says an error occurred.
This is done for security reasons, and it's not, of course, limited to Rails: all web application frameworks do the same, since the backtrace can disclose sensitive information (it sometimes happens that an error message on a web app displays the DB connection string, a password, or the like; you don't want that to happen).
In development mode, on XHR calls, you still receive the backtrace (I use Firebug to debug my apps, so I just copy it and paste it somewhere).
In production mode you can handle XHR errors from within the Ajax call, by explicitly setting a function to be executed on error via the :failure param of helpers like remote_function.
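For example, with the old Prototype-era helpers the answer refers to (the URL and alert body are made up):

<%= button_to_function "Reload data",
      remote_function(:url     => { :action => "reload" },
                      :failure => "alert('Request failed: ' + request.status)") %>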