Receive TCP/IP data in a Rails application - ruby-on-rails

I have a custom device with a TCP/IP stack implemented that sends a byte every 5 seconds to a remote IP.
On that remote IP, I'm building a site with Rails 3.1.3 that will have to receive, store, and display the data sent by the custom device.
I was thinking of having a TCP socket running in the background, something like this, but I don't have a clue how to integrate it with a Rails site: where to place it, how to start it, and how to propagate the data to the views.
Does anybody have a clue how I should proceed?

To solve this I created a rake task that starts a TCP server that will handle messages.
Note: this code is more than a year old, so I'm not 100% sure how it behaves, but I think the core part is this:
@server = TCPServer.new(@host, port)
loop do
  Thread.start(@server.accept) do |tcp_socket|
    port, ip = Socket.unpack_sockaddr_in(tcp_socket.getpeername)
    begin
      loop do
        line = tcp_socket.recv(100).strip # Read lines from the socket
        handle_message line               # Method to handle messages from the socket
      end
    rescue SystemCallError
      # close-the-socket logic goes here
    end
  end
end
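For context, the rake task wrapping a server like this might be defined as follows. This is a minimal sketch, not my original task: the task name, port, and the Reading model are assumptions, and in my real code handle_message did the persistence.

# lib/tasks/tcp_server.rake (hypothetical file and task name)
namespace :server do
  desc "Run the TCP listener that stores incoming device data"
  task listen: :environment do # :environment loads the Rails app, so models are available
    require 'socket'

    server = TCPServer.new('0.0.0.0', 2000) # port 2000 is an assumption
    loop do
      Thread.start(server.accept) do |socket|
        begin
          while (data = socket.recv(100)) && !data.empty?
            # Persisting through a model is one way to make the data
            # available to the Rails views; Reading is hypothetical.
            Reading.create!(payload: data.strip)
          end
        rescue SystemCallError
          socket.close
        end
      end
    end
  end
end

Started with bundle exec rake server:listen, the views can then read the stored records like any other model.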


Rails 4: how to make request from outside of Rails app during integration test?

I have an application that needs 2-way communication with an external daemon (bitcoind). bitcoind has functionality that allows it to call my application whenever a new block or transaction of interest occurs ('--walletnotify' and '--blocknotify'). For that I'm using curl to request "http://myapp/walletnotify" and so on:
walletnotify = /usr/bin/curl --max-time 60 http://myapp/walletnotify/%s
I'm trying to create integration tests for this callback behavior. Unfortunately, when running integration tests I'm receiving errors on the daemon side, as it is not able to perform requests to "http://myapp/walletnotify" - obviously the Rails server cannot be reached (or the connection is interrupted?). Of course the tests fail, as the appropriate actions are never called.
My question is: how do I properly test such a scenario? Is there any way to allow direct external requests to the application during integration tests? Is there a way to make sure the Rails server is running during integration tests? Or maybe I should listen for such requests inside the integration test and then proxy them to the application?
Update 2018-06-03: I'm using minitest. The test that I'm trying to run is here:
https://github.com/cryptogopher/token_voting/blob/master/test/integration/token_votes_notify_test.rb
After calling
@rpc.generate(101)
the bitcoind daemon in regtest mode should generate 101 blocks and fire the 'blocknotify' callbacks. The problem is that it cannot send HTTP requests to the application during the test.
Ok, I resolved this one.
It looks like minitest does not start an application server, no matter which driver you choose. Still, you can start your own HTTP server to listen for notifications from external sources and forward them to the tested application.
For this to work you need:
require 'webrick'
Set up the HTTP server (logging disabled to avoid clutter):
server = WEBrick::HTTPServer.new(
  Port: 3000,
  Logger: WEBrick::Log.new("/dev/null"),
  AccessLog: []
)
Specify how to handle incoming HTTP requests. In my case there will only be GET requests, but it is important to forward them with their original headers:
server.mount_proc '/' do |req, resp|
  headers = {}
  req.header.each { |k, v| v.each { |a| headers[k] = a } }
  # 'get' here is the Minitest integration-test helper, so the request is
  # forwarded straight into the application under test.
  resp = get req.path, {}, headers
end
Start the HTTP server in a separate thread (server.start blocks the thread it runs in):
@t = Thread.new {
  server.start
}
Make sure the server is shut down when the tests finish:
Minitest.after_run do
  @t.kill
  @t.join
end
Finally, wait until the server is actually up before letting the tests run:
Timeout.timeout(5) do
  sleep 0.1 until server.status == :Running
end
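With the forwarder listening on port 3000, the daemon's callbacks can be pointed straight at it, following the same curl pattern as in the question (the exact path here is an assumption):

blocknotify=/usr/bin/curl --max-time 60 http://127.0.0.1:3000/blocknotify/%s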

Server client to Browser client communication

This is more of a conceptual understanding gap than a technical one. I am new to WebSocket/messaging APIs.
I ran a chat application using the Faye Ruby server and everything works fine between two browsers. I want to send a message from a standalone Ruby client to a browser client that is talking to the same server. Is it possible to send a message from a client like the one below to a browser whose script is also given below?
This is not related to the application I created, but I was trying to understand the use of the WS client API. Or, specifically put, can I send a message from a server-side client to a browser client? I guess I am lacking an understanding of the word 'client' here.
I see the messages on the server console, but the browser doesn't get the message sent by the standalone client.
Also, I see this when I run the client:
Started GET "/faye/test123" for 127.0.0.1 at 2015-04-09 07:17:46 -0400
require 'faye'
require 'eventmachine'

EM.run {
  ws = Faye::WebSocket::Client.new('ws://localhost:9292/faye/test123')

  ws.onopen = lambda do |event|
    p [:open, ws.headers]
    ws.send('987654321')
  end

  ws.on :open do |event|
    p [:open]
    ws.send('123 123 123 123')
    p [:sent]
  end
}
Browser script :
window.client = new Faye.Client('http://localhost:9292/faye');
client.subscribe('/test123', function(payload) {
  if (payload.message) {
    console.log('I am in here 77777.......' + payload.message);
    return $("#incomingText").append(payload.message);
  }
});
Looking at your code I think it might be useful to highlight the difference between websockets and faye.
Faye is a framework that supports a number of transports, websockets being just one of them. It can also do long polling for example. One of the benefits of Faye is that it can select the right transport that both the client and server understand. It also implements a simple pub/sub protocol on top of that transport, giving you a nice API to build off of.
Doing a pure websocket implementation is totally doable, but if you're going to go with Faye it's probably a good idea to use Faye's publish/subscribe API and not muck with Faye's websockets directly.
To answer your specific question:
Is it possible to send a message from a client like the one below to a browser whose script is also given below ?
Yes, absolutely, but I would suggest doing it with Faye::Client. Here's what your server-side code might look like:
client = Faye::Client.new('http://localhost:9292/faye')
client.publish('/test123', 'message' => 'Hello world')
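Note that Faye's Ruby client runs on EventMachine, so in a standalone script the publish would likely need to be wrapped in a reactor block, something like this sketch:

require 'eventmachine'
require 'faye'

EM.run {
  client = Faye::Client.new('http://localhost:9292/faye')

  # Publish to the channel the browser subscribed to, then stop the
  # reactor once the publish has succeeded (or failed).
  publication = client.publish('/test123', 'message' => 'Hello world')
  publication.callback { EM.stop }
  publication.errback { |error| p error; EM.stop }
}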
With much more info here:
http://faye.jcoglan.com/ruby/clients.html

How to use $remote_addr with rails and nginx secure_link

I have a Rails application that makes calls to another server via Net::HTTP to retrieve documents.
I have set up nginx with secure_link.
The nginx config has:
secure_link $arg_md5,$arg_expires;
secure_link_md5 "$secure_link_expires$uri$remote_addr mySecretCode";
On the client side (which is in fact my Rails server) I have to create the secure URL, something like:
time = (Time.now + 5.minute).to_i
hmac = Digest::MD5.base64digest("#{time}/#{file_path}#{IP_ADDRESS} mySecretCode").tr("+/","-_").gsub("==",'')
return "#{DOCUMENT_BASE_URL}/#{file_path}?md5=#{hmac}&expires=#{time}"
What I want to know is the best way to get the value of IP_ADDRESS above.
There are multiple answers on SO about how to get the IP address, but a lot of them do not seem as reliable as actually making a request to a web service that returns the IP address of the request, since that is what the nginx secure link will see (we don't want some sort of localhost address).
I put the following method on my staging server:
def get_client_ip
  data = Hash.new
  begin
    data[:ip_address] = request.ip
    data[:error] = nil
  rescue Exception => ex
    data[:error] = ex.message
  end
  render :json => data
end
I then called the method from the requesting server:
response = Net::HTTP.get_response(URI("http://myserver.com/web_service/get_client_ip"))
if response.is_a?(Net::HTTPOK)
  response_hash = JSON.parse(response.body)
  ip = response_hash["ip_address"] unless response_hash["error"] # JSON.parse returns string keys
else
  # deal with error
end
After getting the ip address successfully I just cached it and did not keep on calling the web service method.
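For illustration, the caching could be as simple as memoizing the lookup. This is a sketch; the URL is the one from my setup above and the helper name is an assumption:

require 'net/http'
require 'json'

def client_ip
  # Look up the externally visible IP once and reuse it for every signed URL.
  @client_ip ||= begin
    response = Net::HTTP.get_response(URI("http://myserver.com/web_service/get_client_ip"))
    JSON.parse(response.body)["ip_address"]
  end
end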

Redis + ActionController::Live threads not dying

Background: We've built a chat feature in to one of our existing Rails applications. We're using the new ActionController::Live module and running Puma (with Nginx in production), and subscribing to messages through Redis. We're using EventSource client side to establish the connection asynchronously.
Problem Summary: Threads are never dying when the connection is terminated.
For example, should the user navigate away, close the browser, or even go to a different page within the application, a new thread is spawned (as expected), but the old one continues to live.
The problem as I presently see it is that when any of these situations occur, the server has no way of knowing whether the connection on the browser's end is terminated, until something attempts to write to this broken stream, which would never happen once the browser has moved away from the original page.
This problem seems to be documented on github, and similar questions are asked on StackOverflow here (pretty much the exact same question) and here (regarding getting the number of active threads).
The only solution I've been able to come up with, based on these posts, is to implement a type of thread / connection poker. Attempting to write to a broken connection generates an IOError which I can catch and properly close the connection, allowing the thread to die. This is the controller code for that solution:
def events
  response.headers["Content-Type"] = "text/event-stream"
  stream_error = false # used by the flusher thread to determine when to stop
  redis = Redis.new

  # Subscribe to our events
  redis.subscribe("message.create", "message.user_list_update") do |on|
    on.message do |event, data| # when a message is received, write it to the stream
      response.stream.write("messageType: '#{event}', data: #{data}\n\n")
    end

    # This is the monitor / connection-poker thread.
    # Periodically poke the connection by attempting to write to the stream.
    flusher_thread = Thread.new do
      until stream_error
        $redis.publish "message.create", "flusher_test"
        sleep 2.seconds
      end
    end
  end
rescue IOError
  logger.info "Stream closed"
  stream_error = true
ensure
  logger.info "Events action is quitting redis and closing stream!"
  redis.quit
  response.stream.close
end
(Note: the events method seems to get blocked on the subscribe method invocation. Everything else (the streaming) works properly so I assume this is normal.)
(Other note: the flusher thread concept makes more sense as a single long-running background process, a bit like a garbage thread collector. The problem with my implementation above is that a new thread is spawned for each connection, which is pointless. Anyone attempting to implement this concept should do it more like a single process, not so much as I've outlined. I'll update this post when I successfully re-implement this as a single background process.)
The downside of this solution is that we've only delayed or lessened the problem, not completely solved it. We still have two threads per user, in addition to other requests such as ajax, which seems terrible from a scaling perspective; it seems completely unsustainable and impractical for a larger system with many concurrent connections.
I feel like I am missing something vital; I find it somewhat difficult to believe that Rails has a feature that is so obviously broken without implementing a custom connection-checker like I have done.
Question: How do we allow the connections / threads to die without implementing something corny such as a 'connection poker', or garbage thread collector?
As always let me know if I've left anything out.
Update
Just to add a bit of extra info: Huetsch over at github posted this comment pointing out that SSE is based on TCP, which normally sends a FIN packet when the connection is closed, letting the other end (the server in this case) know that it's safe to close the connection. Huetsch points out that either the browser is not sending that packet (perhaps a bug in the EventSource library?), or Rails is not catching it or doing anything with it (definitely a bug in Rails, if that's the case). The search continues...
Another Update
Using Wireshark, I can indeed see FIN packets being sent. Admittedly, I am not very knowledgeable or experienced with protocol-level stuff; however, from what I can tell, I definitely detect a FIN packet sent from the browser when I establish the SSE connection using EventSource from the browser, and NO packet sent if I remove that connection (meaning no SSE). Though I'm not terribly up on my TCP knowledge, this seems to indicate that the connection is indeed being properly terminated by the client; perhaps it indicates a bug in Puma or Rails.
Yet another update
@JamesBoutcher / boutcheratwest (github) pointed me to a discussion on the redis website regarding this issue, specifically the fact that the .(p)subscribe method never shuts down. The poster there pointed out the same thing we've discovered here: the Rails environment is never notified when the client-side connection is closed, and therefore is unable to execute the .(p)unsubscribe method. He inquires about a timeout for the .(p)subscribe method, which I think would work as well, though I'm not sure which method (the connection poker I've described above, or his timeout suggestion) would be the better solution. Ideally, for the connection-poker solution, I'd like to find a way to determine whether the connection is closed on the other end without writing to the stream. As it is right now, as you can see, I have to implement client-side code to handle my "poking" message separately, which I believe is obtrusive and goofy as heck.
A solution I just did (borrowing a lot from @teeg) which seems to work okay (haven't failure-tested it, though):
config/initializers/redis.rb
$redis = Redis.new(:host => "xxxx.com", :port => 6379)

heartbeat_thread = Thread.new do
  while true
    $redis.publish("heartbeat", "thump")
    sleep 30.seconds
  end
end

at_exit do
  # not sure this is needed, but just in case
  heartbeat_thread.kill
  $redis.quit
end
And then in my controller:
def events
  response.headers["Content-Type"] = "text/event-stream"
  redis = Redis.new(:host => "xxxxxxx.com", :port => 6379)
  logger.info "New stream starting, connecting to redis"

  redis.subscribe(['parse.new', 'heartbeat']) do |on|
    on.message do |event, data|
      if event == 'parse.new'
        response.stream.write("event: parse\ndata: #{data}\n\n")
      elsif event == 'heartbeat'
        response.stream.write("event: heartbeat\ndata: heartbeat\n\n")
      end
    end
  end
rescue IOError
  logger.info "Stream closed"
ensure
  logger.info "Stopping stream thread"
  redis.quit
  response.stream.close
end
I'm currently making an app that revolves around ActionController::Live, EventSource and Puma, and for those encountering problems closing streams and such: instead of rescuing an IOError, in Rails 4.2 you need to rescue ClientDisconnected. Example:
def stream
  # begin is not required
  twitter_client = Twitter::Streaming::Client.new(config_params) do |obj|
    # Do something
  end
rescue ClientDisconnected
  # Do something when disconnected
ensure
  # Do something else to ensure the stream is closed
end
I found this handy tip from this forum post (all the way at the bottom): http://railscasts.com/episodes/401-actioncontroller-live?view=comments
Here's a potentially simpler solution which does not use a heartbeat. After much research and experimentation, here's the code I'm using with Sinatra + the sinatra-sse gem (which should be easily adapted to Rails 4):
class EventServer < Sinatra::Base
  include Sinatra::SSE
  set :connections, []
  .
  .
  .
  get '/channel/:channel' do
    .
    .
    .
    sse_stream do |out|
      settings.connections << out
      out.callback {
        puts 'Client disconnected from sse'
        settings.connections.delete(out)
      }
      redis.subscribe(channel) do |on|
        on.subscribe do |channel, subscriptions|
          puts "Subscribed to redis ##{channel}\n"
        end
        on.message do |channel, message|
          puts "Message from redis ##{channel}: #{message}\n"
          message = JSON.parse(message)
          .
          .
          .
          if settings.connections.include?(out)
            out.push(message)
          else
            puts 'closing orphaned redis connection'
            redis.unsubscribe
          end
        end
      end
    end
  end
end
The redis connection blocks at on.message and only accepts (p)subscribe/(p)unsubscribe commands. Once you unsubscribe, the redis connection is no longer blocked and can be released by the web-server object which was instantiated by the initial SSE request. It is automatically cleaned up when a message arrives on redis and the SSE connection to the browser no longer exists in the connections array.
Building on @James Boutcher, I used the following in clustered Puma with 2 workers, so that only 1 thread is created for the heartbeat in config/initializers/redis.rb:
config/puma.rb
on_worker_boot do |index|
  puts "worker nb #{index.to_s} booting"
  create_heartbeat if index.to_i == 0
end

def create_heartbeat
  puts "creating heartbeat"
  $redis ||= Redis.new
  heartbeat = Thread.new do
    ActiveRecord::Base.connection_pool.release_connection
    begin
      while true
        hash = { event: "heartbeat", data: "heartbeat" }
        $redis.publish("heartbeat", hash.to_json)
        sleep 20.seconds
      end
    ensure
      # no db connection anyway
    end
  end
end
Here is a solution with a timeout that will exit the blocking Redis.(p)subscribe call and kill the unused connection thread.
class Stream::FixedController < StreamController
  def events
    # Rails reserves a db connection from the connection pool for
    # each request; let's put it back into the connection pool.
    ActiveRecord::Base.clear_active_connections!

    # Time of last (non-heartbeat) activity on the stream, i.e. the last
    # time any message was sent from server to client, or the time the
    # connection was established.
    @last_active = Time.zone.now

    # Redis (p)subscribe is a blocking call, so we need a trick
    # to prevent it freezing the request forever.
    redis.psubscribe("messages:*", 'heartbeat') do |on|
      on.pmessage do |pattern, event, data|
        # capture the heartbeat from Redis pub/sub
        if event == 'heartbeat'
          # calculate the idle time (in seconds) for this stream connection
          idle_time = (Time.zone.now - @last_active).to_i
          # Release the blocking Redis.(p)subscribe channel so that any
          # exception (like connection closed) can get through.
          if idle_time > 4.minutes
            # unsubscribe from Redis because the idle time was too long -
            # that's all, the fix in (almost) one line :)
            redis.punsubscribe
          end
        else
          # save the time of this (last) activity
          @last_active = Time.zone.now
        end
        # Write to the stream - even the heartbeat - it's sometimes a chance
        # to catch a disconnection error before idle_time runs out.
        response.stream.write("event: #{event}\ndata: #{data}\n\n")
      end
    end
    # blocking ends here (no way to get past this line without unsubscribing)
  rescue IOError
    Logs::Stream.info "Stream closed"
  rescue ClientDisconnected
    Logs::Stream.info "ClientDisconnected"
  rescue ActionController::Live::ClientDisconnected
    Logs::Stream.info "Live::ClientDisconnected"
  ensure
    Logs::Stream.info "Stream ensure close"
    redis.quit
    response.stream.close
  end
end
You have to use redis.(p)unsubscribe to end this blocking call; no exception can break it.
My simple app with information about this fix: https://github.com/piotr-kedziak/redis-subscribe-stream-puma-fix
Instead of sending a heartbeat to all the clients, it might be easier to just set a watchdog for each connection. [Thanks to @NeilJewers]
class Stream::FixedController < StreamController
  def events
    # Rails reserves a db connection from the connection pool for
    # each request; let's put it back into the connection pool.
    ActiveRecord::Base.clear_active_connections!
    redis = Redis.new
    watchdog = Doberman::WatchDog.new(:timeout => 20.seconds)
    watchdog.start

    # Redis (p)subscribe is a blocking call, so we need a trick
    # to prevent it freezing the request forever.
    redis.psubscribe("messages:*") do |on|
      on.pmessage do |pattern, event, data|
        begin
          # Write to the stream; the write itself is sometimes a chance
          # to catch a disconnection error.
          response.stream.write("event: #{event}\ndata: #{data}\n\n")
          watchdog.ping
        rescue Doberman::WatchDog::Timeout => e
          raise ClientDisconnected if response.stream.closed?
          watchdog.ping
        end
      end
    end
  rescue IOError
  rescue ClientDisconnected
  ensure
    response.stream.close
    redis.quit
    watchdog.stop
  end
end
If you can tolerate a small chance of missing a message you can use subscribe_with_timeout:
sse = SSE.new(response.stream)
sse.write("hi", event: "hello")
redis = Redis.new(reconnect_attempts: 0)

loop do
  begin
    redis.subscribe_with_timeout(5 * 60, 'mycoolchannel') do |on|
      on.message do |channel, message|
        sse.write(message, event: 'message_posted')
      end
    end
  rescue Redis::TimeoutError
    sse.write("ping", event: "ping")
  end
end
This code subscribes to a Redis channel, waits up to 5 minutes for messages, then closes the connection to Redis and subscribes again.

How to check server connection

I want to check my connection to a server, to know whether it's available or not so I can inform the user.
How do I send a packet or message to the server? (It's not a SQL server; it's a server that hosts some services.)
Thanks in advance.
With all the possibilities for firewalls blocking ICMP packets or specific ports, the only way to guarantee that a service is running is to do something that uses that service.
For instance, if it were a JDBC server, you could execute a non-destructive SQL query, such as select * from sysibm.sysdummy1 for DB2. If it's an HTTP server, you could issue a GET request for index.htm.
If you actually have control over the service, it's a simple matter to create a special sub-service to handle these requests (such as you send through a CHECK packet and get back an OKAY response).
That way, you avoid all the possible firewall issues and the test is a true end-to-end one. PINGs and traceroutes will be able to tell if you can get to the machine (firewalls permitting) but they won't tell you if your service is functioning.
Take this from someone who's had to battle the network gods in a corporate environment where machines are locked up as tight as the proverbial fishes ...
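As a concrete illustration (in Ruby, since that's what the rest of this page uses), a service-level check for an HTTP server might look like this sketch; the host, port, and path are assumptions:

require 'net/http'

# True only if the service answered a real request - a stronger guarantee
# than a ping, which merely proves the machine is reachable.
def service_up?(host, port, path = '/')
  response = Net::HTTP.start(host, port, open_timeout: 5, read_timeout: 5) do |http|
    http.get(path)
  end
  response.is_a?(Net::HTTPSuccess)
rescue SystemCallError, Net::OpenTimeout, Net::ReadTimeout, IOError
  false
end

puts service_up?('example.com', 80) ? 'service is up' : 'service is down'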
If you can open a port but don't want to use ping (I don't know why, but hey), you could use something like this:
import socket

host = ''      # listen on all interfaces
port = 55555

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host, port))
s.listen(1)

while 1:
    try:
        clientsock, clientaddr = s.accept()
        clientsock.sendall('alive')  # on Python 3 this needs a bytes literal: b'alive'
        clientsock.close()
    except:
        pass
which is nothing more than a simple Python socket server listening on port 55555 and returning 'alive'.
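The checking side then just has to connect and read; since TCP is language-agnostic, a Ruby client works fine against this Python listener. A sketch, with the host an assumption:

require 'socket'

# Treat a successful read of 'alive' as proof that the host is up
# and the check service is running.
def server_alive?(host, port = 55555)
  Socket.tcp(host, port, connect_timeout: 5) do |sock|
    sock.read(5) == 'alive'
  end
rescue SystemCallError, IOError
  false
end

puts server_alive?('example.com') ? 'alive' : 'unreachable'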
