To fix issues with Safari frequently hanging when uploading files, I need to make a request to my Rails server and have it return an empty body with a "Connection: close" header. More details about this fix can be found here.
So far, I have tried:
def close
  return head :ok, {'Connection' => 'close'}
end

def close
  response.headers['Connection'] = 'close'
  render :nothing => true
end

def close
  response.headers['Connection'] = 'close'
  return head :ok
end
None of these approaches works. Inspecting the request in Firebug and in Safari's developer console shows that the Connection response header is always set to "keep-alive".
I'm running Rails 2.3.5 with Mongrel and Nginx. Setting a header such as Content-Type does work, by the way.
Any ideas on how to fix this?
I never figured out how to do this in Rails, but I did find out that nginx 0.7.66 and later disables keepalive connections for Safari; see the nginx changelog.
So I upgraded my nginx and all is well with Safari now.
I have an external program running a local API that is set up to play wav files when an endpoint is hit.
In my development environment it works fine, but when I push to the live environment it no longer works.
Am I missing something?
Thanks!
JqueryController
def playfile
  require 'json'
  HTTParty.get("http://192.168.1.161:5000/play/98")
  response = HTTParty.get("http://192.168.1.161:5000/play/#{params[:id]}")
  parsed = JSON.parse(response)
  @message = params[:message]
  Hint.first.update_attributes(message: @message)
  if parsed['success'] == true
    @success = "Success"
  else
    @success = "Failed"
  end
end
The View
= form_tag("/jquery/playfile", method: 'post', remote: true) do
  = label_tag 'hints on the tv!'
  = hidden_field_tag :id, 39
  = render 'layouts/play'
When I hit the endpoint, Google Chrome's inspector shows a 'pending' request, which eventually dies and returns an application error response (from Heroku).
I'm guessing it has something to do with not being allowed to hit a localhost address from a live site. Is there a way to get around this?
You cannot access your LAN from the internet (Heroku).
If you make a GET request to http://192.168.1.161:5000/play/98, Heroku will search for that address in its own network.
Solution 1:
Make a link to http://192.168.1.161:5000/play/98 (the local API).
If you then hit that link from inside your LAN it would work. (You can use redirect_back if you want to redirect back to the production site.)
Solution 2:
Make a tunnel to the device that serves http://192.168.1.161:5000/play/98.
You can do this with ngrok, for example.
Then you would be able to make the GET request from Heroku with HTTParty:
HTTParty.get("http://<ngrok-hash>.ngrok.com/play/98")
NOTE: you need to have a server running at http://192.168.1.161:5000/play/98 for both solutions.
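A rough sketch of wiring that up, keeping the tunnel host in an environment variable so the controller doesn't hard-code a LAN address (PLAY_API_URL is a made-up variable name, not something from the question):

require 'httparty'
require 'json'

class JqueryController < ApplicationController
  def playfile
    # e.g. heroku config:set PLAY_API_URL=http://<ngrok-hash>.ngrok.com
    base = ENV.fetch('PLAY_API_URL')
    response = HTTParty.get("#{base}/play/#{params[:id]}")
    parsed = JSON.parse(response.body)
    @success = parsed['success'] ? "Success" : "Failed"
  end
end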
I'm testing these two valid URLs:
http://www.businessinsider.com.au/smartphone-impact-brain-body-sleep-2015-2
http://www.businessinsider.com.au/smartphone-impact-brain-body-sleep-2015-2#ooid=BvMjVqcjoHdZBG6tTpXy8UkhB5_46U_c
Running the code below for both, the first one returns 200 OK, but the second one returns 404, and only on Heroku. It happens even when escaping the URL with URI.escape(url):
request = Typhoeus::Request.new(url, followlocation: true)
request.on_headers do |response|
  puts response.code
end
request.run
I can't explain this behavior. Maybe it's some escaping problem with the #?
If I simply replace the # with a ?, it works:
http://www.businessinsider.com.au/smartphone-impact-brain-body-sleep-2015-2?ooid=BvMjVqcjoHdZBG6tTpXy8UkhB5_46U_c
Thanks
The solution was updating libcurl on Heroku by upgrading to the cedar-14 stack, as pointed out in https://github.com/typhoeus/ethon/issues/93.
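For reference, since the part after the # is never sent to the server anyway, another workaround is to strip the fragment before handing the URL to Typhoeus. A small sketch:

require 'uri'
require 'typhoeus'

url = "http://www.businessinsider.com.au/smartphone-impact-brain-body-sleep-2015-2#ooid=BvMjVqcjoHdZBG6tTpXy8UkhB5_46U_c"

uri = URI.parse(url)
uri.fragment = nil   # drop everything after the '#'

request = Typhoeus::Request.new(uri.to_s, followlocation: true)
request.on_headers do |response|
  puts response.code
end
request.run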
I am getting a LoadError - "Too many open files" when using Feedzirra. I am running it on my development server using the default WEBrick server.
I am parsing only 2 feeds. What is the problem?
I had the same issue with Feedzirra. You can notice that it leaves TCP connections in CLOSE_WAIT state forever, hence causing the problem.
It appears to be specific to the curb gem, which is used to fetch the feeds. Another project depending on libcurl had the same issue; they fixed it by setting the CURLOPT_FORBID_REUSE option.
I've tried to do the same for Feedzirra but didn't succeed. Even with this option I had a growing number of CLOSE_WAIT sessions and Too many open files error eventually.
So I did the most straightforward thing: I download the feeds myself using Net::HTTP:
require 'net/http'
require 'uri'

def get_contents(furl)
  url = URI.parse(furl)
  req = Net::HTTP::Get.new(url.to_s)
  res = Net::HTTP.start(url.host, url.port) { |http|
    http.request(req)
  }
  unless res.kind_of? Net::HTTPSuccess
    puts "can't get feed #{url.to_s}: #{res.code}"
    return nil
  end
  res.body
end
Then I parse the XML with Feedzirra:
xml = get_contents(furl)
feedin = Feedzirra::Feed.parse xml
No more stuck connections and no more errors. You may also want to add better error handling to this sample code.
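For the error handling, a minimal wrapper that rescues the usual network failures and returns nil instead of raising (the rescue list is just a starting point):

def fetch_and_parse(furl)
  xml = get_contents(furl)
  return nil if xml.nil?
  Feedzirra::Feed.parse(xml)
rescue Timeout::Error, SocketError, Errno::ECONNREFUSED, Errno::ETIMEDOUT => e
  puts "error fetching #{furl}: #{e.class}: #{e.message}"
  nil
end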
I'm trying to get my pages to stream but I'm getting the following error:
Error 321 (net::ERR_INVALID_CHUNKED_ENCODING): Unknown error
I've tried Chrome, IE and Firefox.
def login
  redirect_to_stored and return if session[:user]
  if request.post?
    if session[:user] = User.authenticate(params[:login], params[:password])
      redirect_to_stored(notice: 'Login successful') and return
    else
      flash_now :alert => "Login unsuccessful"
    end
  end
  on_document_load "$('#login').focus();"
  render :layout => 'login', :stream => true
end
I get the following response headers:
Cache-Control:must-revalidate, private, max-age=0
Connection:close
Content-Type:text/html; charset=utf-8
Server:thin 1.5.0 codename Knife
Set-Cookie:_rails3.website_session=biglongstringhere; path=/; expires=Sat, 02-Mar-2013 01:44:38 GMT; HttpOnly
Set-Cookie:__profilin=p%3Dt; path=/
Set-Cookie:__profilin=p%3Dt; path=/
Transfer-Encoding:chunked
X-MiniProfiler-Ids:[a bunch of stuff in here (this isn't the actual text)]
X-Request-Id:695da38a40064d87cbd463c83fef0a88
X-Runtime:0.034002
X-UA-Compatible:IE=Edge
Looking in Chrome's inspector, the file 'login' shows the Status as 'failed', but when clicking it, the Status Code is 200 OK. Another thing I've noticed is that if I put the following before the original render call, the response displays fine, but Chrome's inspector still reports the request as failed (like the first time). Is the connection being closed before the rest of the body has a chance to be delivered? I don't understand what's wrong here.
render(:text => 'test', :stream => true) and return
As an addendum, the rails log does not show any errors. It renders everything just fine.
I have two servers, one at work and one at home. The one at home doesn't use nginx or any downstream application to communicate with Thin. Thin on its own produces the error, but with nginx in front the pages are served correctly. Strangely though, even when I don't have streaming explicitly enabled in my Rails app, the responses are still served with Transfer-Encoding: chunked.
Heck, even my public website is being served chunked, and I never enabled streaming in Rails. So the final response must be chunked even though the actual rendering of the HTML isn't streamed. How can I actually test this?
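One way I can think of to check what's actually on the wire is to request the page and look at the headers myself, e.g. with Net::HTTP (localhost:3000 is just an assumption; point it at Thin directly, then at nginx):

require 'net/http'
require 'uri'

uri = URI.parse('http://localhost:3000/login')
Net::HTTP.start(uri.host, uri.port) do |http|
  res = http.request(Net::HTTP::Get.new(uri.request_uri))
  puts "Transfer-Encoding: #{res['Transfer-Encoding'].inspect}"
  puts "Content-Length:    #{res['Content-Length'].inspect}"
end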
I'm currently hosting both my rails app and a faye-server app on Heroku. The faye server has been cloned from here (https://github.com/ntenisOT/Faye-Heroku-Cedar) and seems to be running correctly. I have disabled websockets, as they are not supported on Heroku. Despite the claim on Faye's site that:
"Faye clients and servers transparently support cross-domain communication, so your client can connect to a server on any domain you like without further configuration."
I am still running into this error when I try to post to a faye channel:
XMLHttpRequest cannot load http://MYFAYESERVER.herokuapp.com. Origin http://MYAPPURL.herokuapp.com is not allowed by Access-Control-Allow-Origin.
I have read about CORS and tried implementing some solutions outlined here: http://www.tsheffler.com/blog/?p=428 but have so far had no luck. I'd love to hear from someone who:
1) Has a rails app hosted on Heroku
2) Has a faye server hosted on Heroku
3) Has the two of them successfully communicating with each other!
Thanks so much.
I just got my faye and rails apps hosted on heroku communicating within the past hour or so... here are my observations:
Make sure your FAYE_TOKEN is set on all of your servers if you're using an env variable.
Disable websockets, which you've already done... client.disable(...) didn't work for me, I used Faye.Transport.WebSocket.isUsable = function(_,c) { c(false) } instead.
This may or may not apply to you, but was the hardest thing to track down for me... in my dev environment, the port my application is running on will be tacked onto the end of the specified hostname for my faye server... but this appeared to cause a failure to communicate in production. I worked around that by creating a broadcast_server_uri method in application_controller.rb that handles inclusion of a port when necessary, and then use that anywhere I spin up a new channel.
....
class ApplicationController < ActionController::Base
  def broadcast_server
    if request.port.to_i != 80
      "http://my-faye-server.herokuapp.com:80/faye"
    else
      "http://my-faye-server.herokuapp.com/faye"
    end
  end
  helper_method :broadcast_server

  def broadcast_message(channel, data)
    message = { :ext => { :auth_token => FAYE_TOKEN }, :channel => channel, :data => data }
    uri = URI.parse(broadcast_server)
    Net::HTTP.post_form(uri, :message => message.to_json)
  end
end
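With that in place, publishing from anywhere in the Rails app is just a call to the helper, for example (the channel name here is made up):

# e.g. from some controller action
broadcast_message('/messages/new', :text => 'hello from the server')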
And in my app's JavaScript, I include:
<script>
  var broadcast_server = "<%= broadcast_server %>"
  var faye;
  $(function() {
    faye = new Faye.Client(broadcast_server);
    faye.setHeader('Access-Control-Allow-Origin', '*');
    faye.connect();
    Faye.Transport.WebSocket.isUsable = function(_,c) { c(false) }
    // spin off your subscriptions here
  });
</script>
FWIW, I wouldn't stress about setting Access-Control-Allow-Origin, as it doesn't seem to make a difference either way - I see "XMLHttpRequest cannot load http://..." regardless, but this should still work well enough to get you unblocked. (Although I'd love to learn of a cleaner solution...)
Can't say I have used Rails/Faye on Heroku but have you tried setting the Access-Control-Allow-Origin header to something like Access-Control-Allow-Origin: your-domain.com?
For testing you could also do Access-Control-Allow-Origin: * to see if that helps
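If you want to experiment with that on the Faye side, one option is a tiny Rack middleware wrapped around the Faye adapter in the Faye app's config.ru. This is only a sketch (the middleware name and the '*' origin are illustrative):

# config.ru of the Faye app (sketch)
require 'faye'

class AllowOrigin
  def initialize(app, origin = '*')
    @app, @origin = app, origin
  end

  def call(env)
    status, headers, body = @app.call(env)
    headers['Access-Control-Allow-Origin'] = @origin
    [status, headers, body]
  end
end

use AllowOrigin
run Faye::RackAdapter.new(:mount => '/faye', :timeout => 25)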
Custom headers
Some services require the use of additional HTTP headers to connect to
their Bayeux server. You can add these headers using the setHeader()
method, and they will be sent if the underlying transport supports
user-defined headers (currently long-polling only).
client.setHeader('Authorization', 'OAuth abcd-1234');
Source: http://faye.jcoglan.com/browser.html
So try client.setHeader('Access-Control-Allow-Origin', '*');