Store pointer to object in database - ruby-on-rails

I have a Rails app where every user can connect his Facebook account and give permission to send messages from the app he is using. So every logged-in user with a connected Facebook account must have one Jabber client authorized with his Facebook id, token, etc. I'm doing this with the xmpp4r gem.
The connected Facebook account, with its token and other Facebook data, is stored in the database as a Mailman object. The Mailman class also has methods to control the Jabber client, like run_client, connect_client, authorize_client, stop_client, get_client, etc. The most important methods for me are connect_client and get_client.
class Mailman < ActiveRecord::Base
  @@clients = {} unless defined? @@clients

  def connect_client
    # some code
    @@clients[self.id] = Jabber::Client.new Jabber::JID.new(facebook_chat_id)
    # some code
  end

  def get_client
    @@clients[self.id]
  end

  # other stuff
end
As you can see in the code, every Mailman object has a get_client method which should return a Jabber::Client object, and it does work, but only within the scope of the running application, because the @@clients variable exists only in that specific running app.
This is a problem for me because I would like to use a cron task to close idle clients, and the cron task uses a different initialization of the app, so Mailman.find(x).get_client will always return nil there, even though it returns a Jabber::Client object in the production app.
How are you dealing with such issues? For example, is it possible to get a pointer to the memory for the Jabber::Client object and save it to the database, so any other initialization of the app could use it? I have no idea how to achieve that. Thank you for any advice!

Even if you manage to store a "pointer to memory" in your database, it will be of no use to a cron job. The cron job is started as a new process, and the OS ensures that it won't have access to the memory space of any other process.
The best way is to create a controller to manage your running XMPP clients. This provides a RESTful API to your cron job, allowing it to terminate idle clients with plain HTTP requests.
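A rough sketch of what that could look like (not working code: how an idle client is detected is up to you, and idle? is a hypothetical method; stop_client comes from the question's Mailman class):

# config/routes.rb
post "/mailmen/stop_idle" => "mailmen#stop_idle"

# app/controllers/mailmen_controller.rb
class MailmenController < ApplicationController
  def stop_idle
    Mailman.all.each do |mailman|
      mailman.stop_client if mailman.idle? # idle? is an assumption
    end
    head :ok
  end
end

The cron job then boils down to a single curl call against /mailmen/stop_idle, and only the long-running app process ever touches the in-memory @@clients hash.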

Related

Request Store variable cannot be accessed in delayed job

We are using the request_store gem in our app for storing global data. The problem is that if I try to access a RequestStore variable in a delayed job, it is not accessible. Is there anything extra that needs to be done for the request store data to be available in a delayed job?
Delayed Job Code
class CustomersCreateJob < Struct.new()
  def perform
    puts "Request Data =====> #{RequestStore.store[:current_user]}"
  end
end
In general, current_user is by default only available in controllers, for a reason.
You did not mention your method of running jobs, but in any case, by the time the job starts, even if it happens to run in the same process and thread, the request has already finished and there is no current_user. So pass the user's id to the job explicitly (how exactly depends on how you run them).
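A minimal sketch of that, assuming classic delayed_job with Struct-based jobs (the :user_id member is added here for illustration):

class CustomersCreateJob < Struct.new(:user_id)
  def perform
    user = User.find(user_id) # look the user up instead of reading RequestStore
    puts "Request Data =====> #{user.inspect}"
  end
end

# in the controller, while current_user still exists:
Delayed::Job.enqueue CustomersCreateJob.new(current_user.id)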
delayed_job workers won't get the RequestStore data normally, because they run outside of the request/response cycle.
However this frequently isn't the desired behaviour, given the typical uses of request_store.
You can always extend ApplicationJob yourself with such functionality (e.g. around_enqueue and around_perform); I do recall having to do something similar at a previous role.
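A hedged sketch of that idea, assuming ActiveJob and that all you really need from RequestStore is the current user: ActiveJob's serialize/deserialize hooks can carry an id across the process boundary, and around_perform rebuilds the store before the job runs (the key names here are assumptions):

class ApplicationJob < ActiveJob::Base
  attr_reader :request_user_id

  # add the current user's id to the serialized job payload
  def serialize
    user = RequestStore.store[:current_user]
    super.merge("request_user_id" => user && user.id)
  end

  def deserialize(job_data)
    super
    @request_user_id = job_data["request_user_id"]
  end

  around_perform do |job, block|
    begin
      if job.request_user_id
        RequestStore.store[:current_user] = User.find_by(id: job.request_user_id)
      end
      block.call
    ensure
      RequestStore.clear! # don't leak state into the next job in this worker
    end
  end
end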

Why is Rails sharing code between user sessions?

When a user tries to sign into our Rails app, I contact a 3rd-party ICAM server that returns some information about the user if he exists on the ICAM server. I get a hash back with the user name, email, etc. (Our environment is configured in a way that the ICAM server can detect the identity of the person who is attempting to sign in based on their workstation credentials.)
We do all of this work in a custom gem. During the login process, I try to cache the info the ICAM server returns so I don't have to talk to the ICAM server again. Naively, I had some code that basically did:
module Foo
  def self.store_icam_data(data)
    @icam_data = data
  end

  def self.icam_data
    @icam_data || {}
  end
end
I just discovered a problem when two users log into the system. When User A logs in, @icam_data is set with his info. When User B logs in, @icam_data is set with his info. The next time User A makes a request, @icam_data has User B's info in it instead of User A's!
I wasn't expecting the variable inside this module to be shared between threads/sessions like it is. It effectively makes all current users of the system become the last user who signs in... a pretty gnarly bug.
Can someone explain why this @icam_data variable is getting shared across sessions? I was expecting the data/code to be more isolated than it apparently is.
There are only two ways you can share data between requests: your database (RDBMS, Redis, etc.) and the session object (inside controllers). Any other data that changes and survives the end of a request is a side effect that should be avoided.
Your class variables are saved in a memory (RAM) region that belongs to a particular app server process (e.g. a Unicorn worker process). A single process naturally serves many requests, because it would be inefficient to kill and restart Rails on each request.
So it's not "Rails sharing code"; it's the web application server sharing its memory region amongst all the requests it serves.
If you want to bind a small amount of data to the current user, use the session:
# save
session[:icam_data] = MyICAMModule.get_icam_data

# retrieve
MyICAMModule.set_icam_data(session[:icam_data])
More info on sessions is available in the Action Controller Overview guide.
If you have a large amount of data, use the database.

Refresh data with API every X minutes

Ruby on Rails 4.1.4
I made an interface to a Twitch gem, to fetch information about the current stream: mainly whether it is online or not, but also things like the current title and the game being played.
Since the website has a lot of traffic, I can't make a request every time a user walks in, so instead I need to cache this information.
Cached information is stored as a class variable @@stream_data inside the class Twitcher.
I've made a rake task to update this using cron jobs, calling Twitcher.refresh_stream, but naturally that does not run within my active process (the one every visitor connects to) but in a separate process, so the @@stream_data in the actual app is always empty.
Is there a way to run code within my currently running Rails app every X minutes? Or is there a better approach altogether?
Thank you for your time!
This sounds like a good use case for caching:
Rails.cache.fetch("stream_data", expires_in: 5.minutes) do
  fetch_new_data
end
If the data is in the cache and is not stale, it will be returned without executing the block; if not, the block is used to populate the cache.
The default cache store just keeps things in memory, so it doesn't fix your problem: you'll need to pick a cache store that is shared across your processes. Both Redis and memcached (via the dalli gem) are popular choices.
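For example, with dalli in your Gemfile, pointing Rails at memcached is a one-line change (the host and port here are placeholders):

# config/environments/production.rb
config.cache_store = :mem_cache_store, "localhost:11211"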
Check out Whenever (basically a Ruby interface to cron) to invoke something on a regular schedule.
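A sketch of a Whenever schedule, using the gem's runner helper to execute a snippet inside the Rails environment (the interval is arbitrary):

# config/schedule.rb
every 10.minutes do
  runner "Twitcher.refresh_stream"
end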
I actually had a similar problem using Google Analytics. Google Analytics requires an API key for each request, but the API key expires every hour, and requesting a new one for every Google Analytics call would make each request very slow.
So what I did was make another class variable @@expires_at. In every method that made a request to Google Analytics, I would check @@expires_at.past?. If it was true, I would refresh the API key and set @@expires_at = 45.minutes.from_now.
You can do something like this.
def method_that_needs_stream_data
  renew_data if @@expires_at.past?
  # use @@stream_data
end

def renew_data
  # renew @@stream_data here
  @@expires_at = 5.minutes.from_now
end
Tell me how it goes.

Sending data from an analytics engine to a Rails server

I have an analytics engine which periodically packages a bunch of stats in JSON format. I want to send these packages to a Rails server. Upon a package arriving, the Rails server should examine it, generate a model instance out of it (for historical purposes), and then display the contents to the user. I've thought of two approaches.
1) Have a little app residing on the same host as the Rails server listening for these packages (using ZeroMQ). Upon receiving a package, the app would invoke a Rails action through curl, passing on the package as a parameter. My concern with this approach is that my Rails server checks that only signed-in users can access actions which affect models. By creating an action accessible to this listening app (and therefore to other entities), am I exposing myself to a major security flaw?
2) The second approach is to simply have the listening app dump the package into a special database table. The Rails server will then periodically check this table for new packages. Upon detecting one or more, it will process them and remove them from the table.
This is the first time I'm doing something like this, so if you have techniques or experiences you can share for better solutions, I'd love to learn.
Thank you.
You can restrict access to a certain call by constraining the IP address allowed for the request in routes.rb:
post "/analytics" => "analytics#create", :constraints => {:ip => /127.0.0.1/}
If you want users to see updates, you can use polling to refresh the page every minute or so.
1) Yes, you are exposing a major security breach unless:
your ZeroMQ app provides the data needed to do authentication and authorization on the Rails side,
your Rails app is configured to listen only on the 127.0.0.1 interface and is thus not accessible from the outside, or
like Benjamin suggests, you restrict specific routes to certain IPs.
2) This approach looks a lot like what delayed_job does. You might want to take a look: https://github.com/collectiveidea/delayed_job, and use a rake task to add a new job.
In short, your listening app calls a rake task that adds a custom delayed_job when a packet is received, and you let delayed_job handle the load. You benefit from delayed_job's goodness (different queues, scaling, ...). The hard part is getting the result back.
One idea would be to associate a unique ID with each job and have the delayed_job task write its result to a data store which associates the job ID with the result (sketched below). This data store can be a simple relational table
+----+--------+
| ID | Result |
+----+--------+
or a memcached/Redis/whatever instance. You just need to poll that data store looking for the result associated with the job ID, and delete everything once you are done displaying it to the user.
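A sketch of the relational variant; AnalyticsJob, JobResult, and the rake task name are all made up for illustration:

# lib/tasks/analytics.rake -- the listening app shells out to this task
namespace :analytics do
  task :enqueue, [:job_uid, :payload] => :environment do |_, args|
    Delayed::Job.enqueue AnalyticsJob.new(args[:job_uid], args[:payload])
  end
end

# the job writes its result keyed by the unique ID
class AnalyticsJob < Struct.new(:job_uid, :payload)
  def perform
    stats = JSON.parse(payload) # plus whatever processing you need
    JobResult.create!(job_uid: job_uid, result: stats.to_json)
  end
end

# elsewhere: poll until the row shows up, then consume and delete it
if (row = JobResult.find_by(job_uid: some_uid))
  render_stats(row.result) # render_stats is hypothetical
  row.destroy
end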
3) Why don't you directly POST the data to the Rails server?
Following Benjamin's lead, I implemented a filter for this particular action.
def verify_ip
  @ips = ['127.0.0.1']
  unless @ips.include?(request.remote_ip)
    redirect_to root_url
  end
end
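Wired up in the controller with the Rails 4-era filter macro (restricting it to the create action is an assumption):

before_filter :verify_ip, only: :create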
The listening app on localhost now invokes the action, passing the JSON package received from the analytics engine as a param. Thank you.

Should tweets be done in the background?

On a high-traffic Twitter app site, where the app sends tweets via the users' OAuth credentials: should the tweets be sent in the background via a background worker (Resque, Delayed Job, etc.), or should the web process handle it?
It really depends on your use case. Twitter itself, I think, sends an AJAX request to the API. You could do the same if it makes sense in your interface, but it does mean that you're using a web process to do this. One of the benefits is that you can verify that the request was successful before returning a response to the user. This is much easier than a scenario where you queue something in the background, it fails, and you want to alert the user (e.g. through a "real-time" AJAX/socket-based message system or a flash notice on another request).
If you aren't worried about showing the Tweets (e.g. your application is sending them as part of a larger action), then doing it in the background is definitely the way to go.
Resque is great and jobs are really lightweight, so you could put together a quick integration to process these in the background:
# app/jobs/send_tweet.rb
class SendTweet
  @queue = :tweets

  def self.perform(user_id, content)
    user = User.find(user_id)
    # send Tweet
  end
end

# app/controllers/tweet_controller.rb
def create
  # assuming some things here, like validation and a `current_user` method
  Resque.enqueue(SendTweet, current_user.id, params[:tweet][:message])
  redirect_to :index
end
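You would then start a worker to drain the queue, e.g. with Resque's standard rake task: QUEUE=tweets rake resque:work.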
