Recommendations on proper refactoring in a when statement? - ruby-on-rails

I'm trying to run two lengthy commands from one when branch, but for some reason both of those commands are performed twice when it is called:
@email = Email.find(params[:id])
delivery = case @email.mail_type
# when "magic_email", these two delayed jobs perform 2x instead of 1x. Why is that?
when "magic_email" then Delayed::Job.enqueue MagicEmail.new(@email.subject, @email.body)
  Delayed::Job.enqueue ReferredEmail.new(@email.subject, @email.body)
when "org_magic_email" then Delayed::Job.enqueue OrgMagicEmail.new(@email.subject, @email.body)
when "all_orgs" then Delayed::Job.enqueue OrgBlast.new(@email.subject, @email.body)
when "all_card_holders" then Delayed::Job.enqueue MassEmail.new(@email.subject, @email.body)
end
return delivery
How can I make it so that when "magic_email" matches, both of those delayed jobs are enqueued only once?

I tried this with the following example:
q = []
a = case 1
when 1 then q.push 'ashish'
  q.push 'kumar'
when 2 then q.push 'test'
when 4 then q.push 'another test'
end
puts a.inspect # => ["ashish", "kumar"]
This works fine, which means your case/when syntax is OK. You might have some other problem.

You are calling return delivery, and the delivery variable may hold a value that causes the delayed job to be enqueued again. The value depends on what the last expression in each when branch returns, so try not to return anything if possible. I believe you want the method to enqueue the delayed jobs as a side effect, not to return anything. Perhaps you should just use the case statement on its own and not store the result in any variable; the delivery variable serves no purpose here.
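For illustration, here is a minimal sketch of that refactor (the deliver method name is hypothetical; the job classes come from the question): run the case purely for its side effects and return nothing.
def deliver(email)
  case email.mail_type
  when "magic_email"
    Delayed::Job.enqueue MagicEmail.new(email.subject, email.body)
    Delayed::Job.enqueue ReferredEmail.new(email.subject, email.body)
  when "org_magic_email"
    Delayed::Job.enqueue OrgMagicEmail.new(email.subject, email.body)
  when "all_orgs"
    Delayed::Job.enqueue OrgBlast.new(email.subject, email.body)
  when "all_card_holders"
    Delayed::Job.enqueue MassEmail.new(email.subject, email.body)
  end
  nil # nothing useful to return; callers should not act on the result
end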

Related

How to wait for all Concurrent::Promise in an array to finish/resolve

@some_instance_var = Concurrent::Hash.new
(0...some.length).each do |idx|
  fetch_requests[idx] = Concurrent::Promise.execute do
    response = HTTP.get(EXTDATA_URL)
    if response.status.success?
      ... # update @some_instance_var
    end
    # We're going to disregard GET failures here.
    puts "I'm here"
  end
end
Concurrent::Promise.all?(fetch_requests).execute.wait # let threads finish gathering all of the unique posts first
puts "how am i out already"
When I run this, the bottom line prints first, so it's not waiting for all the threads in the array to finish their work first; hence I keep getting an empty @some_instance_var to work with below this code. What am I writing wrong?
Never mind, I fixed this. That setup is correct; I just had to use the splat operator * on my fetch_requests array inside all?().
Concurrent::Promise.all?(*fetch_requests).execute.wait
I guess it wanted multiple args instead of one array.
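For anyone hitting the same thing, here is a minimal self-contained sketch of the splat fix (the sleep is a stand-in for the real HTTP call; the rest is hypothetical scaffolding):
require 'concurrent'

fetch_requests = 3.times.map do |idx|
  Concurrent::Promise.execute do
    sleep(rand / 10) # stand-in for HTTP.get(EXTDATA_URL)
    "response #{idx}"
  end
end

# all? expects the promises as separate arguments, hence the splat
Concurrent::Promise.all?(*fetch_requests).execute.wait
puts "all promises finished" # prints only after every promise resolves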

Can Sidekiq run a loop with wait and see a change to the db?

I have a Sidekiq worker that waits for a change a remote client makes to a record. Something like the following:
# my worker: async process to wait for client to confirm status
def perform(myRecordID)
  sendClient(myRecordID)
  didClientAcknowledge = false
  while !didClientAcknowledge
    didClientAcknowledge = myRecords.find(myRecordID).status == :ACK_OK
    if didClientAcknowledge
      break
    end
    # wait for client to perform an update on the record to confirm status
    sleep 5.seconds
  end
  Rails.logger.info("client got the message")
end
My problem is that although I can see the client has in fact performed the acknowledgement and updated the record with the correct status (ACK_OK), my Sidekiq thread continues to see the old status for myRecord.
I'm sure my logic is flawed here, but it seems like the Sidekiq process does not "see" changes to the DB... yet from my Rails console I can see that the client has in fact updated the DB as expected...
Thanks!
Edit 1
OK, so here's a thought: instead of the loop, I'll schedule another call to the worker in 5 seconds. Here's the updated code:
def perform(myRecordID, retry_count)
  retry_count -= 1
  if retry_count < 1
    return
  end
  sendClient(myRecordID)
  didClientAcknowledge = myRecords.find(myRecordID).status == :ACK_OK
  if didClientAcknowledge
    Rails.logger.info("client got the message")
    return
  end
  # wait for client to perform an update on the record to confirm status,
  # passing the arguments along so the next run can keep counting down
  MyWorker.perform_in(5.seconds, myRecordID, retry_count)
end
This seems to work, but I will test a bit more. One challenge is having a retry count, which means I need to maintain some sort of variable between calls to the worker...
Edit 2: possibly this can be done by passing the time into the first call and then checking whether a timeout has been exceeded before invoking the next instance (see the sketch below)... assuming time does not stand still inside the async call...
Edit 3: adding the retry_count argument allows us to control how many times this worker will be spawned...
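Here is a sketch of the timeout idea from edit 2 (MyWorker and MyRecord are hypothetical stand-ins; the deadline is passed as an epoch integer because Sidekiq arguments must be JSON-serializable):
class MyWorker
  include Sidekiq::Worker

  def perform(my_record_id, deadline = nil)
    deadline ||= 2.minutes.from_now.to_i # first call sets the overall timeout
    return if Time.now.to_i > deadline   # deadline passed: stop rescheduling

    if MyRecord.find(my_record_id).status == :ACK_OK
      Rails.logger.info("client got the message")
    else
      MyWorker.perform_in(5.seconds, my_record_id, deadline)
    end
  end
end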

How to defer execution of expensive create_by option

The following question is almost exactly what I need: https://stackoverflow.com/a/2394783/456188
I'd like to run the following:
find_or_create_by_person_id(:person_id => 10, :some_other => expensive_query)
But more importantly, I'd like to defer the execution of expensive_query unless I actually have to create the object.
Is that possible with find_or_create_by?
Turns out find_or_create_by* accepts a block that only gets run in the create case.
find_or_create_by_person_id(10) do |item|
  item.some_other = expensive_query
end
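As a side note, the dynamic find_or_create_by_* finders were deprecated in Rails 4 and later removed; the hash form behaves the same way, with the block still only running when the record has to be created:
find_or_create_by(person_id: 10) do |item|
  item.some_other = expensive_query # only evaluated on create
end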

Delayed Job not saving new records

I am trying to save a new record with Delayed Job. The code in question is below:
# method being called:
ibo.add_to_database(params[:url])

# method definition
def add_to_database(url)
  feed = Feeds.new do |f|
    f.url = url
    f.title = self.feed_title if self.feed_title
    f.link = self.site_link if self.site_link
    f.image = self.feed_image if self.feed_image
  end
  feed.save!
end
handle_asynchronously :add_to_database
I get absolutely no errors, and the job is removed from the database as it should be, yet there is no change to the Feeds model. Does anyone have any idea what gives?
delayed_job runs as a daemon process, so the first thing to do is check whether it is running:
ps ax | grep delayed_job
Next, check the actual delayed_job log; it probably contains your error description:
less log/delayed_job.log
Other than that, your code snippet looks fine.
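If the daemon is running and the log shows nothing, the delayed_jobs table itself may also be worth a look: with the ActiveRecord backend, a job that raised an error keeps its row and stores the exception in the last_error column. A quick console check (assuming the default schema):
Delayed::Job.where("last_error IS NOT NULL").each do |job|
  puts job.last_error
end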

find_or_create and race-condition in rails, theory and production

Hi, I have this piece of code:
class Place < ActiveRecord::Base
  def self.find_or_create_by_latlon(lat, lon)
    place_id = call_external_webapi
    result = Place.where(:place_id => place_id).limit(1)
    result = Place.create(:place_id => place_id, ... ) if result.empty? #!
    result
  end
end
Then, in another model or controller, I'd like to do:
p = Post.new
p.place = Place.find_or_create_by_latlon(XXXXX, YYYYY) # race-condition
p.save
But Place.find_or_create_by_latlon takes too much time to get the data when the executed action is create, and sometimes in production p.place is nil.
How can I force it to wait for the response before executing p.save?
Thanks for your advice.
You're right that this is a race condition, and it can often be triggered by people who double-click submit buttons on forms. What you might do is loop back if you encounter an error:
result = Place.find_by_place_id(...) ||
         Place.create(...) ||
         Place.find_by_place_id(...)
There are more elegant ways of doing this, but this is the basic method.
I had to deal with a similar problem. In our backend, a user is created from a token if the user doesn't exist. AFTER a user record is already created, a slow API call gets sent to update the user's information.
def self.find_or_create_by_facebook_id(facebook_id)
  User.find_by_facebook_id(facebook_id) || User.create(facebook_id: facebook_id)
rescue ActiveRecord::RecordNotUnique
  User.find_by_facebook_id(facebook_id)
end

def self.find_by_token(token)
  facebook_id = get_facebook_id_from_token(token)
  user = User.find_or_create_by_facebook_id(facebook_id)
  if user.unregistered?
    user.update_profile_from_facebook
    user.mark_as_registered
    user.save
  end
  user
end
The first step of the strategy is to remove the slow API call (in my case update_profile_from_facebook) from the create path. Because the operation takes so long, including it in the call to create significantly increases the window for duplicate insert operations.
The second step is to add a unique constraint to your database column to ensure duplicates aren't created.
The final step is to create a function that will catch the RecordNotUnique exception in the rare case where duplicate insert operations are sent to the database.
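For reference, the unique constraint from the second step might look like this as a migration (the users table and facebook_id column are taken from the example above):
class AddUniqueIndexToUsersFacebookId < ActiveRecord::Migration
  def change
    add_index :users, :facebook_id, unique: true
  end
end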
This may not be the most elegant solution but it worked for us.
I hit this inside a Sidekiq job that retries, gets the error repeatedly, and eventually clears itself. The best explanation I've found is in a blog post. The gist is that Postgres keeps an internally stored sequence value for incrementing the primary key, and that value can get out of sync. This rings true for me because I'm setting the primary key myself rather than using the incremented value, which is likely how this cropped up. The solution from the comments on that post is to call ActiveRecord::Base.connection.reset_pk_sequence!(table_name). This cleared up the issue for me.
begin
  result = Place.where(:place_id => place_id).limit(1)
  result = Place.create(:place_id => place_id, ... ) if result.empty? #!
rescue ActiveRecord::StatementInvalid => error
  @save_retry_count = (@save_retry_count || 1)
  ActiveRecord::Base.connection.reset_pk_sequence!(:place)
  retry if (@save_retry_count -= 1) >= 0
  raise error
end
