Support for IMAP IDLE in Ruby

OK, I have been stuck on this for hours. I thought net/imap.rb in Ruby 1.9 supported the IDLE command, but it doesn't yet.
Can anyone help me implement it? From here, I thought this would work:
class Net::IMAP
  # Send the IDLE command; the server will push untagged responses
  # until we send DONE. Returns immediately after writing the command.
  def idle
    cmd = "IDLE"
    synchronize do
      tag = generate_tag
      put_string(tag + " " + cmd)
      put_string(CRLF)
    end
  end

  # End the IDLE session. Per the IDLE extension, DONE is sent
  # without a tag.
  def done
    cmd = "DONE"
    synchronize do
      put_string(cmd)
      put_string(CRLF)
    end
  end
end
But imap.idle with that just returns nil.

I came across this old question and wanted to solve it myself. The original asker has disappeared - oh well.
Here's how you get IMAP IDLE working in Ruby (this is super cool). This uses the code block quoted in the original question, plus the documentation here.
imap = Net::IMAP.new SERVER, :ssl => true
imap.login USERNAME, PW
imap.select 'INBOX'

imap.add_response_handler do |resp|
  # Modify this to do something more interesting.
  # Called every time a response arrives from the server.
  if resp.kind_of?(Net::IMAP::UntaggedResponse) and resp.name == "EXISTS"
    puts "Mailbox now has #{resp.data} messages"
  end
end

imap.idle # necessary to tell the server to start forwarding requests.
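One caveat: servers may drop an IDLE session that sits idle too long (RFC 2177 advises re-issuing IDLE at least every 29 minutes), so a long-running listener usually alternates idle and done in a loop. A rough sketch, using the idle/done patch from the question above:

# Keep the IDLE session alive indefinitely (sketch).
loop do
  imap.idle        # enter IDLE; the response handler above fires on updates
  sleep 25 * 60    # stay under the 29-minute limit from RFC 2177
  imap.done        # leave IDLE so the connection stays usable
end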

Are you sure it isn't working? Have you looked at the strings it has sent over the socket?
After doing some digging, it looks like put_string returns nil unless you have debug enabled, which is why imap.idle returns nil.
So your idle method might very well be working since it isn't throwing errors.
Does that help explain the behavior?
If you want debug output, set Net::IMAP.debug = true
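A quick way to confirm what the patched methods actually send: net/imap echoes the raw protocol exchange to stderr when debug is on. A sketch, reusing the patch and constants from the question:

Net::IMAP.debug = true             # dump the raw protocol exchange to stderr
imap = Net::IMAP.new(SERVER, :ssl => true)
imap.login(USERNAME, PW)
imap.select('INBOX')
imap.idle                          # debug output should show a tagged "... IDLE" line
sleep 5
imap.done                          # followed by a bare "DONE"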

@Peter
I've done some research on how to scale an IMAP IDLE solution, and I'm now essentially weighing two options.
Option 1: Run a daemon that checks the mail for all accounts on a continuous loop.
Option 2: Open an IDLE connection for every account and receive updates.
Since my app deals with many accounts (perhaps thousands, or hundreds of thousands), option 2 seems like an impossibility. I think my best bet is to go with option 1, then split the work across multiple workers after hitting some maximum.
The basic code/idea is outlined here http://railspikes.com/2007/6/1/rails-email-processing
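For the record, the skeleton of option 1 is little more than an endless sweep over accounts. A minimal sketch, assuming a hypothetical Account model holding IMAP credentials and a hypothetical check_mail helper:

# Hypothetical polling daemon: sweep every account, forever.
loop do
  Account.find_each do |account|
    begin
      check_mail(account)   # connect, fetch unseen messages, process (not shown)
    rescue => e
      Rails.logger.error("mail check failed for account #{account.id}: #{e.message}")
    end
  end
  sleep 60   # pause between sweeps
end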

With Ruby 2.x:
the solution is described by mzolin's code chunk here:
https://stackoverflow.com/a/21345164/1786393
I also wrote a complete (but still draft) script to fetch unseen mails here:
https://gist.github.com/solyaris/b993283667f15effa579
BTW, comments welcome.

Related

Instance variables in a multithreaded Sneakers worker act like global mutable data

I have a Sneakers worker (given below) as the backend of a chatbot.
class RabbitMQWorker
  include Sneakers::Worker
  from_queue "message"

  def work(options)
    parsed_options = JSON.parse(options)
    # Initializing some object variables
    # (note: JSON.parse returns string keys)
    @question_id = parsed_options["question_id"]
    @answer      = parsed_options["answer"]
    @session_id  = parsed_options["session_id"]

    ActiveRecord::Base.connection_pool.with_connection do
      # send next question to the session_id based on answer
    end
    ack!
  end
end
What's happening
The problem I am facing is that when I run Sneakers with more than one thread and multiple users are chatting at the same time, an AMQP event that arrives slightly later overwrites @session_id. As a result, the second user gets two questions and the first one gets none: by the time the first event is being processed, the second event has arrived and overwritten @session_id, so when it is time to send the next question to the first user via @session_id, the question gets sent to the second user.
My Questions
Do the work method and any instance variables I create in it act like global mutable data across Sneakers' threads?
If yes, then I am guessing I need to make them thread-local. If I do that, do I need to make these changes deep down in my Rails logic as well, since this worker works with Rails?
Curiosity question
How does Puma manage these things? It is a multi-threaded app server, we use instance variables in controllers, and it still manages to serve multiple requests simultaneously. Does that mean Puma handles this multi-context situation implicitly while Sneakers doesn't?
What I have done till now
I read the Sneakers documentation and couldn't find anything about this.
I performed a load test to verify the problem, and it is exactly the problem stated above.
I tried to get clear on how multi-threading actually works, but everywhere there is only general material. The curiosity question above would help a lot in clearing up the concepts; I have been searching for an explanation for days and couldn't find one.
After 2 days of searching for an issue where messages seemed to get mixed up, I was finally able to solve it by removing all instance variables from my workers.
This thread gave me the clue to do so: https://github.com/jondot/sneakers/issues/244
maybe we should simply disallow instance variables in workers since
changing the behavior to instantiate multiple worker instances might
break existing code somehow
and:
I think that an instance per thread is the way to go.
So when you remove your instance variables you should be fine!
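Applied to the worker above, the fix is to keep everything local to work: locals live on each invocation's stack, so concurrent messages cannot clobber each other. A minimal sketch (note that JSON.parse returns string keys):

require 'sneakers'
require 'json'

class RabbitMQWorker
  include Sneakers::Worker
  from_queue "message"

  def work(options)
    parsed = JSON.parse(options)
    # Locals are per-invocation, hence safe under multiple threads.
    question_id = parsed["question_id"]
    answer      = parsed["answer"]
    session_id  = parsed["session_id"]

    ActiveRecord::Base.connection_pool.with_connection do
      # send the next question to session_id based on answer
    end
    ack!
  end
end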

devise-two-factor: how to get a code valid for a custom time to send over SMS/email, etc.

I am trying to implement 2FA (two-factor authentication) in my existing Rails 4.2.10 application, and I have configured many of the bits.
The issue I am facing is retrieving a code that is valid for 5 minutes and sending that code to the user's phone number or email.
I did try ROTP::TOTP.new(user.otp_secret).at(Time.now), guessing from the gem's source code, which seems to work fine and gives a valid otp_code in the console; but in sessions_controller, as weird as it sounds, user.otp_secret is nil, always...
I have posted an issue on the gem.
I don't think this can be a bug; rather, this is functionality I want to build.
My stack:
Ruby: 2.4.2
Rails: 4.2.10
Devise: 4
attr_encrypted: 1.4 (if it matters)
Additionally, I want to extend the drift period (code acceptance time) to 5 minutes. I think that will be easy, but doing it for a single code, rather than universally for all codes, has had me thinking for a while now.
My main issue is the first one, getting the code to send through SMS; the drift question is a subproblem, which I think is doable, but if anyone has experience with this and can help, that would be great.
UPDATE: I updated attr_encrypted and restarted the system, and it started working. I also realized there is a current_otp method that devise_two_factor adds to the user model, so I started using that. BUT after a few minutes it threw the same issue of user.otp_secret being nil. It's getting weird...
UPDATE 2/Hacky solution: Weirdly enough, I had to add these 3 methods to the user model and everything started working:
def encrypted_otp_secret
  self[:encrypted_otp_secret]
end

def encrypted_otp_secret_iv
  self[:encrypted_otp_secret_iv]
end

def encrypted_otp_secret_salt
  self[:encrypted_otp_secret_salt]
end
As you can suspect, I got here by noticing that user.encrypted_otp_secret was giving me nil while the stored value was not nil, even after reloading the user model, whereas user[:encrypted_otp_secret] was giving me the actual value.
It seems like a bug in attr_encrypted. I am not sure yet.
For anyone else who runs into this issue, I have found an extra step needed to get the current_otp method to work. Before the current_otp call, set up the user like this:

> u = User.find_by(email: 'test@example.com')
> u.otp_required_for_login = true
> u.otp_secret = User.generate_otp_secret
> u.save!
and then you can call u.current_otp...
https://blog.tommyku.com/blog/integrating-two-step-two-factor-authentication-into-rails-4-project-with-devise/
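As for the original 5-minute requirement: one option (a sketch, separate from devise-two-factor's built-in checks) is to build a second TOTP on the same secret with a 300-second interval. ROTP accepts a custom interval; just remember a code generated this way must also be verified through this same object, not through devise-two-factor's validation:

require 'rotp'

totp = ROTP::TOTP.new(user.otp_secret, interval: 300)
code = totp.now      # deliver this via SMS or email
totp.verify(code)    # truthy while the 300-second window lasts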

Catching errors with Ruby Twitter gem, caching methods using delayed_job: What am I doing wrong?

What I'm doing
I'm using the twitter gem (a Ruby wrapper for the Twitter API) in my app, which is run on Heroku. I use Heroku's Scheduler to periodically run caching tasks that use the twitter gem to, for example, update the list of retweets for a particular user. I'm also using delayed_job so scheduler calls a rake task, which calls a method that is 'delayed' (see scheduler.rake below). The method loops through "authentications" (for users who have authenticated twitter through my app) to update each authorized user's retweet cache in the app.
My question
What am I doing wrong? For example, since I'm using Heroku's Scheduler, is delayed_job redundant? Also, you can see I'm not catching (rescuing) any errors. So, if Twitter is unreachable, or if a user's auth token has expired, everything chokes: a single error aborts the whole run and leaves a failed delayed_job, which causes ripple effects in my app. I can see this is bad, but I'm not sure what the best solution is. How and where should I be catching errors?
I'll put all my code (from the scheduler down to the method being called) for one of my cache methods. I'm really just hoping for a bulleted list (and maybe some code or pseudo-code) berating me for poor coding practice and telling me where I can improve things.
I have seen this SO question, which helps me a little with the begin/rescue block, but I could use more guidance on catching errors, and on the higher-level "is this a good way to do this?" plane.
Code
Heroku Scheduler job:
rake update_retweet_cache
scheduler.rake (in my app)
task :update_retweet_cache => :environment do
  Tweet.delay.cache_retweets_for_all_auths
end
Tweet.rb, cache_retweets_for_all_auths method:
def self.cache_retweets_for_all_auths
  @authentications = Authentication.find_all_by_provider("twitter")
  @authentications.each do |authentication|
    authentication.user.twitter.retweeted_to_me(include_entities: true, count: 200).each do |tweet|
      # Actually build the cache - this is good - removing to keep this short
    end
  end
end
User.rb, twitter method:
def twitter
  authentication = Authentication.find_by_user_id_and_provider(self.id, "twitter")
  if authentication
    @twitter ||= Twitter::Client.new(:oauth_token => authentication.oauth_token, :oauth_token_secret => authentication.oauth_secret)
  end
end
Note: As I was posting this, I noticed that the cache_retweets_for_all_auths method finds all "twitter" authentications and then calls the User#twitter method, which looks up the "twitter" authentication all over again. This is obviously redundant, and I'll fix it.
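One way to remove that redundancy, as a sketch: build the client straight from the Authentication row already in hand and skip the second lookup inside User#twitter:

def self.cache_retweets_for_all_auths
  Authentication.find_all_by_provider("twitter").each do |authentication|
    client = Twitter::Client.new(
      :oauth_token        => authentication.oauth_token,
      :oauth_token_secret => authentication.oauth_secret
    )
    client.retweeted_to_me(include_entities: true, count: 200).each do |tweet|
      # build the cache as before
    end
  end
end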
First, what is the exact error you are getting, and what do you want to happen when there is an error?
Edit:
If you just want to catch the errors and log them then the following should work.
def self.cache_retweets_for_all_auths
  @authentications = Authentication.find_all_by_provider("twitter")
  @authentications.each do |authentication|
    begin
      authentication.user.twitter.retweeted_to_me(include_entities: true, count: 200).each do |tweet|
        # Actually build the cache - this is good - removing to keep this short
      end
    rescue => e
      # Either log the error somewhere (e.g. a table) or output it to whatever log you wish.
    end
  end
end
This way, when one user fails, the loop still moves on to the next user while making a note of the error. Most of the time with Twitter it's better to do something like this than to try to handle each error on its own. I have seen so many weird things and random errors come out of the Twitter API that tracking down every error almost always turns into a wild goose chase, though it is still good to keep a record just in case.
Next for when you should use what.
You should use a scheduler when something needs to happen based on time only, and delayed jobs when it's based on a user action but the 'action' you are going to delay would take too long for a normal response. Sometimes you can also just put the logic plainly in the controller.
So in other words
The scheduler will be fine as long as the time between updates, X, is greater than the time it takes for an update to happen, Y.
If X < Y then you might want to look at calling the logic from the controller when each individual entry is accessed, instead of trying to do them all at once. The idea is that you only update after a certain amount of time has passed. You could store the last update time either on the model itself, in a field like twitter_update_time, or in a Redis or memcache instance under a unique key for the user/auth.
But if the individual update itself still takes too long, then do the above, but instead of performing the actual update, call a delayed job.
You could even set it up so that it only updates or calls the delayed job after a certain number of views, to limit things further.
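A sketch of that per-entry staleness check, assuming a twitter_update_time column on the model and a hypothetical per-user variant of the cache job:

# In the controller action that displays a user's retweets (sketch).
def show
  @user = User.find(params[:id])
  if @user.twitter_update_time.nil? || @user.twitter_update_time < 1.hour.ago
    Tweet.delay.cache_retweets_for(@user)          # hypothetical per-user job
    @user.update_attribute(:twitter_update_time, Time.now)
  end
end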
Possible Fancy Pants
Or if you want to get really fancy, you could still do it as a cron job but have a point system based on views that weights which entries should be updated. Certain actions add points to certain users, and once a user's points pass a threshold you update them and reset their points. That way you can target the entries you think are most important, have the most traffic, show up in the most search results, and so on.
Next, a nitpicky thing.
http://api.rubyonrails.org/classes/ActiveRecord/Batches.html
You should be using
@authentications.find_each do |authentication|
instead of
@authentications.each do |authentication|
find_each pulls in only 1000 records at a time, so if you end up with a lot of Authentications you don't pull a crazy number of rows into memory.

Why am I losing session when working with a dashed domain name (example-dashed.com)?

I have a website, www.abrisud.com. It has 7 domain names (one per language): abrisud.com, abrisud.it, abrisud.de, etc., plus abrisud-enclosure.co.uk.
The problem is with the last one: I lose my session on every single request. Each time I load a page I get a different session ID. On the other domains everything works just fine.
The website runs Ruby 1.8.7 and Rails 3.0.0.
I am fairly convinced that the problem comes from the "-" in the domain name, but I just can't find anything (or almost anything) on the subject on the web.
Hopefully I am being clear enough; if not, just tell me.
Here is the answer:
From module ActionDispatch::Http::URL (Rails 3.0.x); be sure to read the comments ;-)
# Returns the \domain part of a \host, such as "rubyonrails.org" in "www.rubyonrails.org".
# You can specify a different <tt>tld_length</tt>, such as 2 to catch rubyonrails.co.uk in "www.rubyonrails.co.uk".
def domain(tld_length = 1)
  return nil unless named_host?(host)
  host.split('.').last(1 + tld_length).join('.')
end
Well, calling the domain method with the appropriate tld_length argument did not do the trick: request.domain came out right (abrisud-enclosure.co.uk), but the session domain was still co.uk.
So I had to add the following as a before_filter in my application_controller:
def set_session_domain
  request.session_options[:domain] = request.domain
end
If you have a better solution I am open to it, as I think this is a really dirty fix.
Thanks
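For what it's worth, a less request-coupled variant would set the cookie domain once in the session store config instead of per request; a sketch, assuming a Rails version whose cookie store supports domain: :all with tld_length (3.1+), and a hypothetical app name:

# config/initializers/session_store.rb
MyApp::Application.config.session_store :cookie_store,
  :key        => '_myapp_session',
  :domain     => :all,    # derive the cookie domain from the request host
  :tld_length => 2        # so abrisud-enclosure.co.uk is not cut down to co.uk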
I have taken a peek at your site; the cookie is set with: domain=co.uk;path=/
So the problem is within your Rails stack and not the browser(s): time to do some debugging :-)

Why is a class variable in a Rails controller re-initialized on every request?

I have a controller called McController which extends ApplicationController, and I set a class variable in McController called @@scheduler_map, like below:
class McController < ApplicationController
  @@scheduler_map = {}

  def action
    ...
  end

  private

  def get_scheduler(host, port)
    scheduler = @@scheduler_map[host + "_" + port]
    unless scheduler
      scheduler = Scheduler.create(host, port)
      @@scheduler_map[host + "_" + port] = scheduler
    end
    scheduler
  end
end
But I found that from the second request onward @@scheduler_map is always an empty hash. I am running in the development environment; does anyone know the reason? Is it related to the environment?
Thank you in advance.
You answered your own question :-)
Yes, this is caused by the development environment (I tested it), and more precisely by the config option config.cache_classes = false in config/environments/development.rb.
This flag causes all classes to be reloaded on every request.
This is done so that you don't have to restart the whole server whenever you make a small change to your controllers.
You might want to take into consideration that what you are trying to do can cause HUGE memory leaks when later run in production with a lot of visits.
Every time a user visits your site it can create a new entry in that hash, and the hash never gets cleaned.
Imagine what happens after 10,000 users have visited your site, or 1,000,000.
All this data is kept in the process's memory, so it can take up a lot of space the longer the server is online.
Also, I'm not really sure this solution will work on a production server.
The server will create multiple threads (or processes) to handle many visitors at the same time.
I think (though I'm not sure) that every thread or process gets its own instances of the classes.
This means the scheduler map for ip xx might exist in thread 1 but not in thread 2.
If you give me some more information about what this scheduler is, I might be able to suggest a different solution.
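If the map really must survive requests, one pattern is a process-wide registry guarded by a mutex so threads do not race on creation. A sketch, reusing Scheduler.create from the question; note it only persists across requests when config.cache_classes is true, i.e. in production:

require 'thread'

class SchedulerRegistry
  MUTEX = Mutex.new
  MAP   = {}

  # Return the scheduler for host/port, creating it at most once per process.
  def self.fetch(host, port)
    key = "#{host}_#{port}"
    MUTEX.synchronize do
      MAP[key] ||= Scheduler.create(host, port)
    end
  end
end

get_scheduler(host, port) in the controller then becomes a one-liner delegating to SchedulerRegistry.fetch(host, port).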
