So I'm using the Pusher Heroku add-on for my application. The application has live notifications, so when a user receives a message they see a pop-up notification saying "new message". However, in production I am getting the error below:
Firefox can't establish a connection to the server at ws://ws.pusherapp.com/app/b1cc5d4f400faddcb40b?protocol=7&client=js&version=2.1.6&flash=false.
Reload the page to get source for: http://js.pusher.com/2.1/pusher.min.js
And here's the Pusher controller:
class PusherController < ApplicationController
  protect_from_forgery :except => :auth # stop rails CSRF protection for this action

  def auth
    Pusher.app_id = ENV['PUSHER_APP_ID']
    Pusher.key    = ENV['PUSHER_KEY']
    Pusher.secret = ENV['PUSHER_SECRET']

    if current_user && params[:channel_name] == "private-user-#{current_user.id}"
      response = Pusher[params[:channel_name]].authenticate(params[:socket_id])
      render :json => response
    else
      render :text => "Not authorized", :status => '403'
    end
  end
end
And I'm using the Figaro gem to push the keys to Heroku.
What am I doing wrong?
Kind regards
JS
That looks like a problem with JavaScript rather than Rails.
We've got Pusher working well in one of our production apps. It works by first having the pusher gem installed, which allows you to call the Pusher JS files from your layout:
#app/views/layouts/application.html.erb
<%= javascript_include_tag "http://js.pusher.com/2.1/pusher.min.js" %>
Rails
You may also wish to put the Pusher initialization code into an initializer:
#config/initializers/pusher.rb
Pusher.url = ENV["PUSHER_URL"]
Pusher.app_id = ENV["PUSHER_APP_ID"]
Pusher.key = ENV["PUSHER_KEY"]
Pusher.secret = ENV["PUSHER_SECRET"]
This ensures app-wide connectivity rather than controller-specific configuration, allowing for greater flexibility.
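With the credentials handled by the initializer, the auth action from the question no longer needs to assign them on every request. A trimmed-down sketch (it keeps the same private-user-<id> channel check as the original):

# app/controllers/pusher_controller.rb -- a sketch assuming the initializer above
class PusherController < ApplicationController
  protect_from_forgery :except => :auth

  def auth
    if current_user && params[:channel_name] == "private-user-#{current_user.id}"
      render :json => Pusher[params[:channel_name]].authenticate(params[:socket_id])
    else
      render :text => "Not authorized", :status => 403
    end
  end
end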
Firefox can't establish a connection to the server at ws://ws.pusherapp.com/app/b1cc5d4f400faddcb40b?protocol=7&client=js&version=2.1.6&flash=false.
Reload the page to get source for: http://js.pusher.com/2.1/pusher.min.js
This doesn't necessarily mean anything is wrong. It just means that an unsecured WebSocket connection couldn't be established. Pusher's fallback strategy should still result in a successful connection, either via HTTP-based fallback (HTTP or HTTPS) or via WSS (a secure WebSocket connection).
Failed connection attempts are logged as console errors. There's nothing that can be done about that.
To test this you can bind to connection events and ensure that you are indeed connecting. The pusher-js JavaScript logging will also help determine what's happening.
You can also try http://test.pusher.com/
Related
I installed the websocket-rails gem and, after doing the default configuration, I created a JS dispatcher. I get a 404 error in the Chrome console.
This is my JS:
var dispatcher = new WebSocketRails('localhost:3000/websocket');
This is the message I get:
WebSocket connection to 'ws://localhost:3000/websocket' failed: Error during WebSocket handshake: Unexpected response code: 404
Everything else is as suggested by the first-steps guide.
events.rb
subscribe :test, :to => ChatServerController, :with_method => :test
controller/chat_server_controller.rb
class ChatServerController < WebsocketRails::BaseController
  def initialize_session
    # perform application setup here
    controller_store[:message_count] = 0
  end

  def test
    puts 'Hello'
  end
end
There's one potential solution involving a gem dependency posted on GitHub. But if you look at the repo (151 open issues, 27 pull requests), it doesn't look like this gem is being actively maintained; the issues closed in 2016 are being closed by the same people who opened them.
You can probably make your application work by forcing the client to fall back to HTTP instead of WebSockets, by passing a second parameter set to false.
var Dispatcher = new WebSocketRails('localhost:3000/websocket', false);
I have concerns about how well HTTP polling will scale and about the future of the websocket-rails gem. For me, the best way forward seems to be upgrading to Rails 5 and using Action Cable.
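For comparison, the subscribe-and-respond wiring above maps onto an Action Cable channel class in Rails 5. This is only a sketch; ChatChannel and the "chat" stream name are illustrative, not part of the original setup:

# app/channels/chat_channel.rb -- minimal Action Cable sketch
class ChatChannel < ApplicationCable::Channel
  def subscribed
    # browsers subscribed to this channel receive "chat" broadcasts
    stream_from "chat"
  end

  # invoked from the client with subscription.perform("test")
  def test(_data = {})
    ActionCable.server.broadcast("chat", :message => "Hello")
  end
end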
I have an external program running a local API that is set up to play WAV files when an endpoint is hit.
In my development environment it works fine, but when I push to the live environment it no longer works.
Am I missing something?
Thanks!
JqueryController
def playfile
  require 'json'
  response = HTTParty.get("http://192.168.1.161:5000/play/#{params[:id]}")
  parsed = JSON.parse(response.body)
  @message = params[:message]
  Hint.first.update_attributes(message: @message)
  if parsed['success'] == true
    @success = "Success"
  else
    @success = "Failed"
  end
end
The View
= form_tag("/jquery/playfile", method: 'post', remote: true) do
  = label_tag 'hints on the tv!'
  = hidden_field_tag :id, 39
  = render 'layouts/play'
When I hit the endpoint, Chrome's inspector shows a 'pending' request, which eventually dies and returns an application error response (from Heroku).
I'm guessing it has something to do with not being allowed to hit a localhost address from a live site. Is there a way to get around this?
You cannot access your LAN from the internet (Heroku).
If you make a GET request to http://192.168.1.161:5000/play/98, Heroku will look for that address on its own network.
Solution 1:
Make a link that the user's browser follows directly to the device's URL.
If you then hit that link from a machine on your LAN, this would work (you can use redirect_back if you want to redirect back to the production site).
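For example, in the same style as the view above (the URL is the device address from the question):

= link_to 'Play hint on the TV', "http://192.168.1.161:5000/play/98"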
Solution 2:
Make a tunnel to the device serving http://192.168.1.161:5000.
You can do this with ngrok, for example.
Then you would be able to make a GET request with HTTParty from Heroku.
HTTParty.get("http://<ngrok-hash>.ngrok.com/play/98")
NOTE: you need to have a server running at http://192.168.1.161:5000 for both solutions.
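If you go the ngrok route, one way to avoid hard-coding either address is to read the base URL from configuration. This is only a sketch; PLAY_SERVER_URL is an assumed variable name, not part of the original code:

# Development points PLAY_SERVER_URL at the LAN device,
# Heroku points it at the ngrok tunnel.
def playfile
  base_url = ENV.fetch('PLAY_SERVER_URL', 'http://192.168.1.161:5000')
  response = HTTParty.get("#{base_url}/play/#{params[:id]}")
  parsed   = JSON.parse(response.body)
  @success = parsed['success'] ? "Success" : "Failed"
end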
For reasons similar to the ones in this discussion, I'm experimenting with messaging in lieu of REST for a synchronous RPC call from one Rails 3 application to another. Both apps are running on thin.
The "server" application has a config/initializers/amqp.rb file based on the Request / Reply pattern in the rubyamqp.info documentation:
require "amqp"
EventMachine.next_tick do
connection = AMQP.connect ENV['CLOUDAMQP_URL'] || 'amqp://guest:guest#localhost'
channel = AMQP::Channel.new(connection)
requests_queue = channel.queue("amqpgem.examples.services.time", :exclusive => true, :auto_delete => true)
requests_queue.subscribe(:ack => true) do |metadata, payload|
puts "[requests] Got a request #{metadata.message_id}. Sending a reply..."
channel.default_exchange.publish(Time.now.to_s,
:routing_key => metadata.reply_to,
:correlation_id => metadata.message_id,
:mandatory => true)
metadata.ack
end
Signal.trap("INT") { connection.close { EventMachine.stop } }
end
In the 'client' application, I'd like to render the results of a synchronous call to the 'server' in a view. I realize this is a bit outside the comfort zone of an inherently asynchronous library like the amqp gem, but I'm wondering if there's a way to make it work. Here is my client config/initializers/amqp.rb:
require 'amqp'

EventMachine.next_tick do
  AMQP.connection = AMQP.connect('amqp://guest:guest@localhost')
  Signal.trap("INT") { AMQP.connection.close { EventMachine.stop } }
end
Here is the controller:
require "amqp"
class WelcomeController < ApplicationController
def index
puts "[request] Sending a request..."
WelcomeController.channel.default_exchange.publish("get.time",
:routing_key => "amqpgem.examples.services.time",
:message_id => Kernel.rand(10101010).to_s,
:reply_to => WelcomeController.replies_queue.name)
WelcomeController.replies_queue.subscribe do |metadata, payload|
puts "[response] Response for #{metadata.correlation_id}: #{payload.inspect}"
#message = payload.inspect
end
end
def self.channel
#channel ||= AMQP::Channel.new(AMQP.connection)
end
def self.replies_queue
#replies_queue ||= channel.queue("reply", :exclusive => true, :auto_delete => true)
end
end
When I start both applications on different ports and visit the welcome#index view, @message is nil in the view, since the result has not yet arrived. The result shows up on the console a few milliseconds after the view is rendered:
$ thin start
>> Using rack adapter
>> Thin web server (v1.5.0 codename Knife)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:3000, CTRL+C to stop
[request] Sending a request...
[response] Response for 3877031: "2012-11-27 22:04:28 -0600"
No surprise here: subscribe is clearly not meant for synchronous calls. What is surprising is that I can't find a synchronous alternative in the AMQP gem source code or in any documentation online. Is there an alternative to subscribe that will give me the RPC behavior I want? Given that there are other parts of the system in which I'd want to use legitimately asynchronous calls, the bunny gem didn't seem like the right tool for the job. Should I give it another look?
edit in response to Sam Stokes
Thanks to Sam for the pointer to throw :async / async.callback. I hadn't seen this technique before and this is exactly the kind of thing I was trying to learn with this experiment in the first place. send_response.finish is gone in Rails 3, but I was able to get his example to work for at least one request with a minor change:
render :text => @message
rendered_response = response.prepare!
Subsequent requests fail with "!! Unexpected error while processing request: deadlock; recursive locking". This may have been what Sam was getting at with the comment about getting ActionController to allow concurrent requests, but the cited gist only works for Rails 2. Adding config.allow_concurrency = true in development.rb gets rid of this error in Rails 3, but leads to "This queue already has default consumer." from AMQP.
I think this yak is sufficiently shaven. ;-)
While interesting, this is clearly overkill for simple RPC. Something like this Sinatra streaming example seems a more appropriate use case for client interaction with replies. Tenderlove also has a blog post about an upcoming way to stream events in Rails 4 that could work with AMQP.
As Sam points out in his discussion of the HTTP alternative, REST / HTTP makes perfect sense for the RPC portion of my system that involves two Rails apps. There are other parts of the system involving more classic asynchronous event publishing to Clojure apps. For these, the Rails app need only publish events in fire-and-forget fashion, so AMQP will work fine there using my original code without the reply queue.
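For those fire-and-forget paths, the publishing side needs nothing beyond the existing connection. A sketch (the routing key and payload are illustrative, not from the real system):

# Fire-and-forget publishing: no reply queue, no subscribe block.
def publish_event(event_payload)
  channel = AMQP::Channel.new(AMQP.connection)
  channel.default_exchange.publish(event_payload.to_json,
                                   :routing_key => "events.clojure_app")
end

# e.g. publish_event(:event => "user.signed_up", :user_id => 42)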
You can get the behaviour you want - have the client make a simple HTTP request, to which your web app responds asynchronously - but you need more tricks. You need to use Thin's support for asynchronous responses:
require "amqp"
class WelcomeController < ApplicationController
def index
puts "[request] Sending a request..."
WelcomeController.channel.default_exchange.publish("get.time",
:routing_key => "amqpgem.examples.services.time",
:message_id => Kernel.rand(10101010).to_s,
:reply_to => WelcomeController.replies_queue.name)
WelcomeController.replies_queue.subscribe do |metadata, payload|
puts "[response] Response for #{metadata.correlation_id}: #{payload.inspect}"
#message = payload.inspect
# Trigger Rails response rendering now we have the message.
# Tested in Rails 2.3; may or may not work in Rails 3.x.
rendered_response = send_response.finish
# Pass the response to Thin and make it complete the request.
# env['async.callback'] expects a Rack-style response triple:
# [status, headers, body]
request.env['async.callback'].call(rendered_response)
end
# This unwinds the call stack, skipping the normal Rails response
# rendering, all the way back up to Thin, which catches it and
# interprets as "I'll give you the response later by calling
# env['async.callback']".
throw :async
end
def self.channel
#channel ||= AMQP::Channel.new(AMQP.connection)
end
def self.replies_queue
#replies_queue ||= channel.queue("reply", :exclusive => true, :auto_delete => true)
end
end
As far as the client is concerned, the result is indistinguishable from your web app blocking on a synchronous call before returning the response; but now your web app can process many such requests concurrently.
CAUTION!
Async Rails is an advanced technique; you need to know what you're doing. Some parts of Rails do not take kindly to having their call stack abruptly dismantled. The throw will bypass any Rack middlewares that don't know to catch and rethrow it (here is a rather old partial solution). ActiveSupport's development-mode class reloading will reload your app's classes after the throw, without waiting for the response, which can cause very confusing breakage if your callback refers to a class that has since been reloaded. You'll also need to ask ActionController nicely to allow concurrent requests.
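As the question's edit notes, on Rails 3 that last part boils down to a single config line (with the caveat that it can expose the duplicate-consumer error mentioned above):

# config/environments/development.rb (Rails 3)
config.allow_concurrency = true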
Request/response
You're also going to need to match up requests and responses. As it stands, if Request 1 arrives, and then Request 2 arrives before Request 1 gets a response, then it's undefined which request would receive Response 1 (messages on a queue are distributed round-robin between the consumers subscribed to the queue).
You could do this by inspecting the correlation_id (which you'll have to set explicitly, by the way - RabbitMQ won't do it for you!) and re-enqueuing the message if it's not the response you were waiting for. My approach was to create a persistent Publisher object which would keep track of open requests, listen for all responses, and look up the appropriate callback to invoke based on the correlation_id.
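A minimal sketch of that bookkeeping (RpcClient is an illustrative name, not the Publisher class described above; it reuses the message_id/correlation_id convention from the code earlier in the thread):

# Remember each request's correlation id and only invoke the callback
# for the reply that carries it back.
class RpcClient
  def initialize(channel, replies_queue)
    @channel = channel
    @replies = replies_queue
    @pending = {} # correlation_id => callback

    @replies.subscribe do |metadata, payload|
      callback = @pending.delete(metadata.correlation_id)
      callback.call(payload) if callback # ignore replies we aren't waiting for
    end
  end

  def call(payload, routing_key, &callback)
    correlation_id = Kernel.rand(10101010).to_s
    @pending[correlation_id] = callback
    @channel.default_exchange.publish(payload,
                                      :routing_key => routing_key,
                                      :message_id  => correlation_id,
                                      :reply_to    => @replies.name)
  end
end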
Alternative: just use HTTP
You're really solving two different (and tricky!) problems here: persuading Rails/thin to process requests asynchronously, and implementing request-response semantics on top of AMQP's publish-subscribe model. Given you said this is for calling between two Rails apps, why not just use HTTP, which already has the request-response semantics you need? That way you only have to solve the first problem. You can still get concurrent request processing if you use a non-blocking HTTP client library, such as em-http-request.
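A sketch of what that looks like with em-http-request (the URL and port are illustrative); the callback fires on the EventMachine reactor that Thin runs, so nothing blocks:

require 'em-http-request'

EventMachine.next_tick do
  http = EventMachine::HttpRequest.new('http://localhost:3001/time').get
  http.callback { puts "[response] #{http.response}" }
  http.errback  { puts "[error] request to the other app failed" }
end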
I'm currently hosting both my Rails app and a Faye server app on Heroku. The Faye server has been cloned from here (https://github.com/ntenisOT/Faye-Heroku-Cedar) and seems to be running correctly. I have disabled WebSockets, as they are not supported on Heroku. Despite the claim on Faye's site that:
"Faye clients and servers transparently support cross-domain communication, so your client can connect to a server on any domain you like without further configuration."
I am still running into this error when I try to post to a Faye channel:
XMLHttpRequest cannot load http://MYFAYESERVER.herokuapp.com. Origin http://MYAPPURL.herokuapp.com is not allowed by Access-Control-Allow-Origin.
I have read about CORS and tried implementing some solutions outlined here: http://www.tsheffler.com/blog/?p=428 but have so far had no luck. I'd love to hear from someone who:
1) Has a Rails app hosted on Heroku
2) Has a Faye server hosted on Heroku
3) Has the two of them successfully communicating with each other!
Thanks so much.
I just got my Faye and Rails apps hosted on Heroku communicating within the past hour or so... here are my observations:
Make sure your FAYE_TOKEN is set on all of your servers if you're using an env variable.
Disable websockets, which you've already done... client.disable(...) didn't work for me, I used Faye.Transport.WebSocket.isUsable = function(_,c) { c(false) } instead.
This may or may not apply to you, but it was the hardest thing to track down for me. In my dev environment, the port my application is running on gets tacked onto the end of the hostname specified for my Faye server, but this appeared to cause a failure to communicate in production. I worked around it by creating a broadcast_server method in application_controller.rb that handles inclusion of a port when necessary, and then using that anywhere I spin up a new channel.
....
class ApplicationController < ActionController::Base
  def broadcast_server
    if request.port.to_i != 80
      "http://my-faye-server.herokuapp.com:80/faye"
    else
      "http://my-faye-server.herokuapp.com/faye"
    end
  end
  helper_method :broadcast_server

  def broadcast_message(channel, data)
    message = { :ext => { :auth_token => FAYE_TOKEN }, :channel => channel, :data => data }
    uri = URI.parse(broadcast_server)
    Net::HTTP.post_form(uri, :message => message.to_json)
  end
end
And in my app's JavaScript I include:
<script>
  var broadcast_server = "<%= broadcast_server %>";
  var faye;
  $(function() {
    faye = new Faye.Client(broadcast_server);
    faye.setHeader('Access-Control-Allow-Origin', '*');
    faye.connect();
    Faye.Transport.WebSocket.isUsable = function(_, c) { c(false); };
    // spin off your subscriptions here
  });
</script>
FWIW, I wouldn't stress about setting Access-Control-Allow-Origin, as it doesn't seem to make a difference either way - I see XMLHttpRequest cannot load http://... regardless, but this still works well enough to get you unblocked. (Although I'd love to learn of a cleaner solution...)
Can't say I have used Rails/Faye on Heroku but have you tried setting the Access-Control-Allow-Origin header to something like Access-Control-Allow-Origin: your-domain.com?
For testing you could also do Access-Control-Allow-Origin: * to see if that helps.
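If you want to experiment with that on the Faye server itself, one option is a small Rack middleware in its config.ru. This is only a sketch: AllowOrigin is a made-up name, and it assumes the rackup file from the Faye-Heroku-Cedar repo:

# config.ru on the Faye server (sketch)
class AllowOrigin
  def initialize(app, origin = '*')
    @app    = app
    @origin = origin
  end

  def call(env)
    status, headers, body = @app.call(env)
    [status, headers.merge('Access-Control-Allow-Origin' => @origin), body]
  end
end

use AllowOrigin, '*'
# ...followed by the existing `run` line for the Faye Rack adapter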
Custom headers
Some services require the use of additional HTTP headers to connect to their Bayeux server. You can add these headers using the setHeader() method, and they will be sent if the underlying transport supports user-defined headers (currently long-polling only).
client.setHeader('Authorization', 'OAuth abcd-1234');
Source: http://faye.jcoglan.com/browser.html
So try client.setHeader('Access-Control-Allow-Origin', '*');
I'm attempting to add Facebook Connect to our web app, and I'm running into a problem. Everything works fine locally (I can authenticate through Facebook), but when I push the code to our dev server (which lives in the wild), every time I try to authenticate it returns the following error:
OAuth2::HTTPError: Received HTTP 400 during request
That's really the only explanation I'm getting. Again, this works on my local machine, and the gems and such match between boxes, so I'm a bit confused. Here's the code I'm executing.
def facebook_connect
  # Set the scope we want to pull from Facebook, along with the callback URL
  options = {
    :redirect_uri => facebook_callback_url,
    :scope        => "email,publish_stream"
  }

  # Go out and fetch the url
  client = OAuth2::Client.new(FACEBOOK_API_KEY, FACEBOOK_SECRET,
                              { :site => FACEBOOK_API_URL, :access_token_method => :post })

  # Redirect to the callback for processing
  redirect_to client.web_server.authorize_url(options)
end

def facebook_callback
  # Client URL
  client = OAuth2::Client.new(FACEBOOK_API_KEY, FACEBOOK_SECRET,
                              { :site => FACEBOOK_API_URL, :access_token_method => :post })

  # Parse out the access token
  access_token = client.web_server.get_access_token(params[:code], :redirect_uri => facebook_callback_url)

  # Get the user
  fb_user = JSON.parse(access_token.get('/me'))

  # Do some authentication database stuff
end

def facebook_callback_url
  uri = URI.parse(request.url)
  uri.path = '/users/facebook_callback'
  uri.query = nil
  uri.to_s
end
I searched Google, but the solutions that show up aren't working. Also, if anyone knows how to parse and display OAuth2 errors, I would appreciate that, as well. Thanks
Assuming that Facebook OAuth knows of your server's IP address (they are very strict about it), I would recommend using rescue to catch that exception, get the backtrace, find where it is being raised, and place some debug statements to check the state of both the request and the response, as well as the access tokens.
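For example, wrapping the callback in a rescue makes the failure visible in the logs. This is only a sketch around the question's own code; it relies only on the standard exception interface, so it should work regardless of the oauth2 gem version:

def facebook_callback
  client = OAuth2::Client.new(FACEBOOK_API_KEY, FACEBOOK_SECRET,
                              { :site => FACEBOOK_API_URL, :access_token_method => :post })
  access_token = client.web_server.get_access_token(params[:code], :redirect_uri => facebook_callback_url)
  fb_user = JSON.parse(access_token.get('/me'))
  # ... authentication database stuff ...
rescue OAuth2::HTTPError => e
  # Log everything available on the exception and re-raise.
  Rails.logger.error "Facebook OAuth failed: #{e.message}"
  Rails.logger.error e.backtrace.join("\n")
  raise
end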
Or you can configure remote debugging with RubyMine or NetBeans, which is not an easy task :)
The issue actually ended up being a problem with the Faraday gem. Our dev server wasn't set up to handle SSL, which caused the request to fail with an error. We patched it using the following answer:
OmniAuth & Facebook: certificate verify failed