We are using the request_store gem in our app. It is used for storing global data. The problem is that if I try to access a RequestStore variable in a delayed job, it is not accessible. Is there anything extra that needs to be done in order for the RequestStore data to be available in a delayed job?
Delayed Job Code
class CustomersCreateJob < Struct.new()
  def perform
    puts "Request Data =====> #{RequestStore.store[:current_user]}"
  end
end
In general, current_user is by default only available in controllers, and for a reason.
You did not mention your method of running jobs, but in any case, by the time the job starts (even if it happens to be in the same process and thread) the request is already finished and there is no current_user. So pass the user's id to the job explicitly (how exactly depends on how you run your jobs).
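For example, with a Struct-based Delayed Job class like the one above, that could look roughly like this (a sketch only; the :current_user_id member and the enqueue call are assumptions about how the job is wired up):

class CustomersCreateJob < Struct.new(:current_user_id)
  def perform
    # Look the user up inside the worker instead of relying on RequestStore
    current_user = User.find(current_user_id)
    puts "Request Data =====> #{current_user.inspect}"
  end
end

# Enqueued from the controller, while the request (and RequestStore) is still alive:
Delayed::Job.enqueue CustomersCreateJob.new(RequestStore.store[:current_user].id)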
delayed_job workers won't get the request_store normally, because they are outside of the request/response cycle.
However, this frequently isn't the desired behaviour, given the typical uses of request_store.
You can always extend ApplicationJob yourself with such functionality (e.g. around_enqueue and around_perform); I recall having to do something similar at a previous role.
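A rough sketch of that idea, assuming ActiveJob on top of the request_store gem; the payload key and the fact that only a user id (rather than the whole user) is carried across are assumptions:

class ApplicationJob < ActiveJob::Base
  attr_reader :request_store_snapshot

  # Capture a snapshot of RequestStore when the job is serialized for
  # enqueueing; this runs in the web process, inside the request.
  def serialize
    super.merge("request_store" => {
      "current_user_id" => RequestStore.store[:current_user].try(:id)
    })
  end

  # Restore the snapshot when the worker deserializes the job.
  def deserialize(job_data)
    super
    @request_store_snapshot = job_data["request_store"] || {}
  end

  # Populate RequestStore for the duration of perform, then clear it so
  # nothing leaks between jobs running in the same worker process.
  around_perform do |job, block|
    begin
      snapshot = job.request_store_snapshot || {}
      RequestStore.store[:current_user_id] = snapshot["current_user_id"]
      block.call
    ensure
      RequestStore.clear!
    end
  end
end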
Related
I have a user model, and I am setting a value in a thread:
Thread.current[:partner_domain] = "example.com"
I am able to access this in the model, but not in the delayed job worker, as it runs in a separate thread. I can't save this domain in my database due to a business requirement.
To be clearer, I am using Thread.current[:partner_domain] in a dynamically created method that is invoked by the delayed job worker.
Please help me with this.
Multithreading has nothing to do with this. The DelayedJob worker runs in a separate process and, as such, doesn't share anything with your Rails server process. Not threads, not memory, nothing.
The right thing to do would be to bundle all the data the job needs into its arguments. Something like this:
MyClass.delay.do_action(primary_data, options)
Where options contains your partner domain name and all the other info. The job then just accesses the info from its arguments.
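A minimal sketch of that, assuming delayed_job's .delay API; the class, method, and option names here are only illustrative:

class PartnerNotifier
  # Read the partner domain from the job's arguments rather than Thread.current
  def self.notify(user_id, options = {})
    partner_domain = options[:partner_domain]
    # ... do the actual work with partner_domain here ...
  end
end

# In the web request, where Thread.current[:partner_domain] is still set:
PartnerNotifier.delay.notify(user.id, partner_domain: Thread.current[:partner_domain])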
If the delayed job worker needs this value for processing jobs, I think you could pass the value as a job's argument.
Ruby on Rails 4.1.4
I made an interface to a Twitch gem to fetch information about the current stream, mainly whether it is online or not, but also things like the current title and the game being played.
Since the website has a lot of traffic, I can't make a request every time a user walks in, so instead I need to cache this information.
Cached information is stored in a class variable @@stream_data inside the Twitcher class.
I've made a rake task to update this using cron jobs, calling Twitcher.refresh_stream, but naturally that does not run within my active process (the one every visitor connects to) but in a separate process instead. So the @@stream_data in the actual app is always empty.
Is there a way to run code within my currently running Rails app every X minutes? Or is there a better approach, for that matter?
Thank you for your time!
This sounds like a good case for caching:
Rails.cache.fetch("stream_data", expires_in: 5.minutes) do
  fetch_new_data
end
If the data is in the cache and is not stale, it is returned without executing the block; otherwise the block is used to populate the cache.
The default cache store just keeps things in memory, so it doesn't fix your problem: you'll need to pick a cache store that is shared across your processes. Both redis and memcached (via the dalli gem) are popular choices.
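For example, with memcached via dalli, the production config might look roughly like this (a sketch; the server address is an assumption about your setup):

# config/environments/production.rb
Rails.application.configure do
  # :dalli_store comes from the dalli gem and is shared across processes,
  # unlike the default in-memory store.
  config.cache_store = :dalli_store, "localhost:11211"
end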
Check out Whenever (basically a ruby interface to cron) to invoke something on a regular schedule.
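A possible Whenever schedule for the refresh, assuming your rake task is named twitch:refresh_stream (adjust to the real name):

# config/schedule.rb
every 5.minutes do
  rake "twitch:refresh_stream"
end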
I actually had a similar problem using Google Analytics. Google Analytics requires an API key for each request, but the API key expires every hour, and requesting a new API key for every Google Analytics request would make each request very slow.
So what I did was make another class variable, @@expires_at. In every method that made a request to Google Analytics, I would check @@expires_at.past?. If it was true, I would refresh the API key and set @@expires_at = 45.minutes.from_now.
You can do something like this.
def method_that_needs_stream_data
  renew_data if @@expires_at.past?
  # use @@stream_data
end

def renew_data
  # renew @@stream_data here
  @@expires_at = 5.minutes.from_now
end
Tell me how it goes.
I have a Rails app where every user can connect his Facebook account and give permission to send messages from the app he is using. So every logged-in user with a connected Facebook account must have one Jabber client authorized with his Facebook id, token, etc. I'm doing this with the xmpp4r gem.
The connected Facebook account, with its token and Facebook data, is stored in the database as a Mailman object. The Mailman class also has methods to control the Jabber client, like run_client, connect_client, authorize_client, stop_client, get_client, etc. The most important methods for me are connect_client and get_client.
class Mailman < ActiveRecord::Base
  @@clients = {} unless defined? @@clients

  def connect_client
    # some code
    @@clients[self.id] = Jabber::Client.new Jabber::JID.new(facebook_chat_id)
    # some code
  end

  def get_client
    @@clients[self.id]
  end

  # other stuff
end
As you can see in the code, every Mailman object has a get_client method which should return a Jabber::Client object, and it does work, but only within the scope of the running application, because the @@clients variable exists only in that specific running process.
This is a problem for me because I would like to use a cron task to close idle clients, and the cron task uses a different initialization of the app, so Mailman.find(x).get_client will always return nil, even though it returns a Jabber::Client object in the production app.
How are you dealing with such issues? For example, is it possible to get a pointer to the memory of a Jabber::Client object and save it to the database, so any other initialization of the app could use it? I have no idea how to achieve that. Thank you for any advice!
Even if you manage to store a "pointer to memory" in your database, it will be of no use to a cron job. The cron job is started as a new process, and the OS ensures that it won't have access to the memory space of any other process.
The best way is to create a controller to manage your running XMPP clients. This will provide a RESTful API to your cron job, allowing you to terminate idle clients with HTTP requests.
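A sketch of what that could look like, assuming the Mailman API described in the question; the route, controller name, and idle check are all illustrative, and such an endpoint should of course be protected from outside access:

# config/routes.rb
#   post "xmpp_clients/close_idle", to: "xmpp_clients#close_idle"

class XmppClientsController < ApplicationController
  # The cron job calls this endpoint over HTTP, so the code runs inside the
  # same process that holds the @@clients hash.
  def close_idle
    Mailman.find_each do |mailman|
      client = mailman.get_client
      mailman.stop_client if client # plus whatever "idle" check you need
    end
    head :ok
  end
end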
I am building an application which will send status requests to users (via email & sms) on a regular basis. I want to execute the service each hour which will:
Query the database for all requests that need to be sent (based on some logic)
Send the requests through Amazon's Simple Email Service (this is already working)
Write a record of the status request notification back to the data store
I am considering wrapping this series of operations up in a single controller with an endpoint that can be called remotely to kick off the process within the Rails app.
Longer term, I will break this process out into an app that can be run independently of my rails app, but for now I'm just trying to keep it simple.
My first inclination is to build the following:
Controller with the following elements:
A method which will orchestrate the steps outlined above (and can be called externally)
A call to the status_request model which will bring back a collection of requests needing to be sent
A loop to iterate through the pending requests, which will:
Make a call to my AWS Simple Email Service module to actually send the email, and
Make a call to the status_request model to log the request back to the database
Model:
A method on my status_request model which will bring back a collection of requests that need to be sent
A method in my status_request model which will log that a notification was sent
Since this will behave as a service that gets called periodically from an outside scheduler I don't think I'll need a view for this operation. (Will, of course, need views to show users and admins what requests have been sent, but that's later...).
As someone new to Rails, I'm asking for review of this approach and any suggestions you may have.
Thanks!
Instead of a controller, which (as Jeff pointed out) exposes a security risk, you may just want to expose a rake task and use cron to invoke it on an hourly basis.
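A rough sketch of that approach; the task, model, and method names below are assumptions about your app:

# lib/tasks/status_requests.rake
namespace :status_requests do
  desc "Send all pending status request notifications"
  task send_pending: :environment do
    StatusRequest.pending_delivery.find_each do |status_request|
      # send via Amazon SES here, then log the notification back to the database
      status_request.log_notification_sent
    end
  end
end

# crontab entry, run hourly:
# 0 * * * * cd /path/to/app && RAILS_ENV=production bundle exec rake status_requests:send_pending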
If you are still interested in building a controller, look at the devise gem and its single access token (token_authenticatable) for securing the methods you are exposing.
You may also want to look at delayed_job or resque to offload the status_request call and the Amazon SES loop to a background worker process.
You may want a separate controller and view for the log file so you can review progress on demand.
And if you want to get real fancy use Amazon SNS to send you alerts when the service reaches some unacceptable level of failures, backlog, etc.
Since you are trying to invoke this from an outside process, your approach should work. You could also have a worker process that processes tasks as they arrive.
You will need routes to expose your service, and you may also want to make some security decisions: how will the service that invokes your application authenticate, so that nobody else can hit the endpoint at will?
Another consideration is how many emails you are sending. If there are enough, keep in mind that this sort of loop is going to be very top-heavy and may affect users of the current system if it's a web application.
In the end, there are many ways to do this. I would focus on the performance/usage you expect, as well as security. There's never one perfect way to solve a problem like this; just be aware of the constraints you'll be operating within.
Resque and Redis might be helpful to you in scheduling and performing these operations. They are simple and superfast; [here](http://railscasts.com/episodes/271-resque) is a simple tutorial on them.
In my Rails app, I'm using the SendGrid Parse API, which posts incoming mail to my server. Every now and then SendGrid's Parse API submits the same email twice.
When I get a posted mail, I place it in the IncomingMail model. So, in order to prevent this double-submission issue, I look at each IncomingMail when processing to see if there is a duplicate in the table within the last minute. That tested great in development; it caught all the double submissions.
Now I've pushed that live to Heroku, where I have 2+ dynos, and it didn't work. My guess is that it has something to do with replication. That being the case, how do scalable sites with multiple servers deal with something like this?
Thanks
You should look at using a background job queue. Heroku has "Workers" (which used to be Delayed Job). Rather than sending the email immediately, you push it onto the queue. Then one or more Heroku workers need to be added to your account, and each one pulls jobs in sequence. This means there can be a short delay (depending on load) before the email is sent, but this delay is not visible to the user, and should there be a lot of email to send, you just add more workers.
Waiting on an external service like an email provider during each user action is dangerous, because any network problem can take down your site as several users have to wait for their HTTP requests to be responded to while Heroku is blocked on these third-party calls.
With workers, in that situation each job would fail but would be retried and eventually succeed.
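A minimal sketch of that flow, assuming delayed_job on a Heroku worker dyno; the controller, class, and method names are only illustrative:

# The controller that receives the SendGrid Parse POST just records the mail
# and enqueues the work, so the request returns quickly.
class IncomingMailsController < ApplicationController
  def create
    incoming_mail = IncomingMail.create!(raw: request.raw_post)
    MailProcessor.delay.process(incoming_mail.id) # picked up later by a worker dyno
    head :ok
  end
end

class MailProcessor
  def self.process(incoming_mail_id)
    mail = IncomingMail.find(incoming_mail_id)
    # de-duplicate and send the outgoing email here; a failure is retried by
    # delayed_job rather than breaking the original request
  end
end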
This sounds like it could be a transaction issue. If you have multiple workers running simultaneously, their operations may be interleaved. For instance, this sequence of events would result in two mails being sent:
Worker A : Checks for an existing record and doesn't find one
Worker B : Checks for an existing record and doesn't find one
Worker A : Post to Sendgrid
Worker B : Post to Sendgrid
You could wrap everything in a transaction to keep this from happening. Something like this should do it.
class IncomingMail < ActiveRecord::Base
  def check_and_send(email_address)
    transaction do
      # your existing code for preventing duplicates and sending
    end
  end
end