To test that my mails are being sent, I'm running heroku run rails c -a my_app. Then I enqueue the job, and it is enqueued fine. However, when I go to Redis and look at the queued jobs, the job is not there. Instead, it is in the "retry" set.
This is what I see:
{"retry":true,"queue":"default","class":"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper","args":[{"job_class":"SendMailJob","job_id":"4b4ba46f-94d7-45cd-b923-ec1678c73076","queue_name":"default","arguments":["any_help",{"_aj_globalid":"gid://gemfeedapi/User/546641393834330002000000"}]}],"jid":"f89235d7ab19f605ed0461a1","enqueued_at":1424175756.9351726,"error_message":"Error while trying to deserialize arguments: \nProblem:\n Document(s) not found for class User with id(s) 546641393834330002000000.\nSummary:\n When calling User.find with an id or array of ids, each parameter must match a document in the database or this error will be raised. The search was for the id(s): 546641393834330002000000 ... (1 total) and the following ids were not found: 546641393834330002000000.\nResolution:\n Search for an id that is in the database or set the Mongoid.raise_not_found_error configuration option to false, which will cause a nil to be returned instead of raising this error when searching for a single id, or only the matched documents when searching for multiples.","error_class":"ActiveJob::DeserializationError","failed_at":1424175773.317896,"retry_count":0}
However, the object is in the database.
I've tried adding an after_create callback (Mongoid), but it doesn't make any difference.
Any idea on what is happening?
Thanks.
Sidekiq is so fast that it executes your job before the database has committed the transaction. Use after_commit to create the job.
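For illustration, a minimal sketch of that advice, assuming an ActiveRecord-backed model (Mongoid's callback support differs by version) and reusing the SendMailJob and arguments from the payload above:

class User < ApplicationRecord
  # Enqueue only after the INSERT has been committed, so the Sidekiq
  # process is guaranteed to find the record when it deserializes
  # the GlobalID argument.
  after_commit :enqueue_mail, on: :create

  private

  def enqueue_mail
    SendMailJob.perform_later('any_help', self)
  end
end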
Ok, my fault. You need to start a new Heroku Worker Dyno in order to make Sidekiq work (it doesn't do it automatically).
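For reference, on Heroku that usually means declaring a worker process type in your Procfile, for example (the invocation below is the common default; adjust to your setup):

worker: bundle exec sidekiq

and then scaling it with heroku ps:scale worker=1 so the Sidekiq process actually runs.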
Related
Probably the title is not self-explanatory; the situation is this:
# user.points: 0
user.update!(points: 1000)
UserMailer.notify(user).deliver_later # user.points = 0 => Error !!!!
The user instance is updated, and after that the mailer is called with the user as a parameter, but in the email the changes are nowhere to be seen: user.points is 0 instead of 1000.
But with a sleep 1 just after the update, the email is sent with the changes applied, so it seems that the email job runs faster than the data is written to the database.
# user.points: 0
user.update!(points: 1000)
sleep 1
UserMailer.notify(user).deliver_later # user.points = 1000 => OK
What's the best approach to solve this, avoiding these two possible solutions?
One solution could be calling UserMailer.notify not with the user instance but with the user's values.
Another solution could be sending the mail in an after_commit callback on the user.
So, is there another way to solve this, keeping the user instance as the parameter and avoiding the after_commit callback?
Thanks
Remember, Sidekiq runs a copy of your Rails app in a separate process, using Redis as the medium. When you call deliver_later, it does not actually 'pass' user to the mailer job. It spawns a thread that enqueues the job in Redis, passing a serialized job payload that references user only by its GlobalID (essentially the class name plus ID).
When the mailer job runs in the Sidekiq process, it loads a fresh copy of user from the database. If the transaction containing your update! in the main Rails app has not yet finished committing, Sidekiq gets the old record from the database. So, it's a race condition.
(update! already wraps an implicit transaction around itself if there isn't one, so wrapping it in your own transaction is redundant, and doesn't help the race condition since nested ActiveRecord transactions commit only when the outermost transaction commits.)
In a pinch, you could delay enqueuing the job with something hacky like .deliver_later(wait_until: 10.seconds.from_now), but your best bet is to put the mailer notification in an after_commit callback on your model.
class User < ApplicationRecord
  after_commit :send_points_mailer

  def send_points_mailer
    return unless previous_changes.key?('points')
    UserMailer.notify(self).deliver_later
  end
end
A model's after_commit callbacks are guaranteed to run after the final transaction is committed, so, like nuking from orbit, it's the only way to be sure.
You didn't mention it, but I'm assuming you are using ActiveRecord? If so, you likely need to ensure the database transaction has been committed before your Sidekiq job is scheduled.
https://api.rubyonrails.org/v6.1.4/classes/ActiveRecord/Transactions/ClassMethods.html
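A minimal sketch of that idea, assuming the update happens inside an explicit transaction: enqueue after the block, so the commit necessarily happens before the job can run.

ActiveRecord::Base.transaction do
  user.update!(points: 1000)
end

# By the time this line runs, the transaction above has committed,
# so the Sidekiq process will see the new value.
UserMailer.notify(user).deliver_later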
I am very confused. I have code which stores data in the database successfully; then I start a Sidekiq job whose first line selects that record, but it fails with the error RecordNotFound (Couldn't find Message with 'id'=5789035).
Here is the code:
message = Message.new(
  user: user,
  from: from,
  content: content,
  kind: kind
)
message.save!
Up to this point everything is absolutely OK: save! returns the object with an ID, but then I kick off the job.
SendMessage.perform_later(message_id: message.id) if message
The code of SendMessage fails on its first line, which is message = Message.find(args.first[:message_id]).
It fails with the error RecordNotFound (Couldn't find Message with 'id'=5789035), and there is no record in the database with this id.
I don't understand why save! didn't fail, or why the record is missing. It happens only sometimes, and I cannot pin down the case. But why does save! behave like this?
I am logging the message object after calling save!, and the data, including the ID, is there.
When I repeat that case in the console, it succeeds.
These failures are around 10 to 50 per month, against around 2,000 successful saves.
The database is configured correctly.
Do you have any suggestions?
It could be caused by a cached DB query, which is fairly common in ActiveJob.
try:
Message.uncached do
  message = Message.find(args.first[:message_id])
  # rest of the block
end
Alternatively, ActiveJob now uses the GlobalID library behind the scenes to serialize/deserialize ActiveRecord instances, so you can pass the ActiveRecord object itself:
SendMessage.perform_later(message: message) if message
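A sketch of what the receiving job could then look like (the job body is illustrative): the message argument arrives already loaded, because GlobalID stores only the class and ID, and ActiveJob re-fetches the record when the job runs.

class SendMessage < ActiveJob::Base
  queue_as :default

  def perform(message:)
    # message is a Message instance here; no manual Message.find needed
    # ... send it
  end
end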
I have code that runs a delayed job which generates a report and then sends it in an email:
InstancesExportJob.perform_later(
  instances: instances,
  custom_field_key: options[:custom_field_key],
  tag_columns: options[:tags],
  user: User.current,
  report_url: report_url
)
where instances is an ActiveRecord query generated by another class, and the first operation done on instances is a map call. This code fails and gives the following error: ActiveJob::SerializationError: Unsupported argument type: ActiveRecord::Relation.
But changing the code to this makes it work fine:
InstancesExportJob.perform_later(
  instances: instances.to_a,
  custom_field_key: options[:custom_field_key],
  tag_columns: options[:tags],
  user: User.current,
  report_url: report_url
)
I am confused, as running the code without delayed jobs works fine.
I am using Rails 4.2 and Sidekiq.
I think the problem was with Sidekiq/ActiveJob: only a limited set of argument types can be serialized, and passing instances as an ActiveRecord::Relation made the enqueue raise that error. Calling .to_a materializes the relation into an array of records, which ActiveJob can serialize element by element (each record via GlobalID).
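An alternative sketch that avoids serializing records altogether: pass plain IDs and rebuild the query inside the job (the Instance model name and the trimmed argument list are illustrative).

# Enqueue with primitive arguments only.
InstancesExportJob.perform_later(instance_ids: instances.pluck(:id))

class InstancesExportJob < ActiveJob::Base
  queue_as :default

  def perform(instance_ids:)
    # Rebuild the relation in the worker process.
    instances = Instance.where(id: instance_ids)
    # ... generate the report and email it
  end
end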
I have around 10 workers that perform a job that includes the following:
user = User.find_or_initialize_by(email: 'some-email#address.com')

if user.new_record?
  # ... some code here that does something taking around 5 seconds or so
elsif user.persisted?
  # ... some code here that does something taking around 5 seconds or so
end

user.save
The problem is that at certain times two or more workers run this code at exactly the same time, and I later found out that two or more Users had the same email, when I should always end up with unique emails.
It is not possible in my situation to create a DB unique index on email, as unique emails are conditional: some Users should have a unique email, some need not.
It is worth mentioning that my User model has uniqueness validations, but that still doesn't help, because between .find_or_initialize_by and .save there is code whose behavior depends on whether the user object has already been created or not.
I tried pessimistic and optimistic locking, but they didn't help me, or maybe I just didn't implement them properly... I'd welcome any suggestions on this.
The only solution I can think of is to lock the other threads (Sidekiq jobs) whenever these lines of code are executed, but I am not sure how to implement this, nor do I know if it is even an advisable approach.
I would appreciate any help.
EDIT
In my specific case, it is going to be hard to put the email parameter in the job, as this job is a little more complex than what was described above. The job is actually an export script of which the code above is one section. I also don't think it's possible to separate the functionality above into a separate worker, as the whole job flow should be serial and no parts should be processed in parallel or asynchronously. This job is just one of the jobs managed by another job, which is ultimately managed by the master job.
Pessimistic locking is what you want but only works on a record that exists - you can't use it with new_record? because there's nothing to lock in the DB yet.
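For the record, a sketch of what pessimistic locking looks like once the row exists, using ActiveRecord's with_lock:

user = User.find_by(email: 'some-email#address.com')

if user
  # SELECT ... FOR UPDATE: other workers block on this row until
  # the surrounding transaction commits.
  user.with_lock do
    # ... the ~5 seconds of work for an existing user
    user.save!
  end
end

Holding a row lock for several seconds is heavy-handed, though, and it still doesn't cover the creation race.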
I managed to solve my problem with the following:
I found out that I can add a where clause to a Rails DB unique index (a partial index), so I can now set up conditional uniqueness for different types of Users at the database level; other concurrent jobs will then raise ActiveRecord::RecordNotUnique if the record has already been created.
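A sketch of such a migration, assuming PostgreSQL and a hypothetical requires_unique_email flag as the condition (the migration version annotation is also illustrative):

class AddPartialUniqueIndexOnUsersEmail < ActiveRecord::Migration[5.2]
  def change
    # Uniqueness is enforced only for rows matching the where clause,
    # so users allowed to share an email are unaffected.
    add_index :users, :email,
              unique: true,
              where: 'requires_unique_email = true',
              name: 'index_users_on_email_when_unique_required'
  end
end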
The only remaining problem was the code between .find_or_initialize_by and .save, since that code depends on whether the User object already exists: among concurrent jobs, only one should ever see .new_record? == true, and the others should see .persisted? == true, because one job is always first to create the record. That doesn't work out of the box, though, because the database uniqueness check only fires at the .save line. So I moved .save ahead of those conditions (capturing new_record?/persisted? just before it, since they flip after a successful save), and wrapped it in a rescue block that re-enqueues the job itself when ActiveRecord::RecordNotUnique is raised, so concurrent jobs don't conflict. The code now looks like this:
user = User.find_or_initialize_by(email: 'some-email#address.com')

# Capture these before saving: after a successful save,
# new_record? is always false and persisted? always true.
is_new_record = user.new_record?
is_persisted = user.persisted?

begin
  user.save
rescue ActiveRecord::RecordNotUnique => exception
  # Another concurrent job created this user first; re-enqueue
  # and let the retry take the persisted branch.
  MyJob.perform_later(params_hash)
  return
end

if is_new_record
  # do something if not yet created
elsif is_persisted
  # do something if already created
end
I would suggest a different architecture to bypass the problem.
How about a producer-worker model, where one master Sidekiq process gets the list of email addresses and then spawns a worker job for each email? Sidekiq makes this easy, with a dedicated queue for the master and workers to communicate.
Doing so, the email address becomes an input parameter of the workers, so we know by construction that workers will not stomp on each other's data.
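A rough sketch of that split, assuming Sidekiq workers and an illustrative fetch_emails_to_process helper:

class MasterExportWorker
  include Sidekiq::Worker

  def perform
    # Producer: fan out one job per email address.
    fetch_emails_to_process.each do |email|
      EmailWorker.perform_async(email)
    end
  end
end

class EmailWorker
  include Sidekiq::Worker

  def perform(email)
    user = User.find_or_initialize_by(email: email)
    # ... each email is handled by exactly one job, so no two
    # workers race on the same user
  end
end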
Here is my scenario: I'm using Resque to queue a job in Redis, the usual way it's done in Rails. The format of my key looks something like this (as per my namespace convention):
"resque:lock:Jobs::XYZ::SomeCreator-{:my_ids=>[101]}"
The job runs successfully to completion, but the key still exists in Redis. For a certain flow, I need to queue and execute the job again with the same parameters (the key will essentially be the same), but it seems the job does not get queued.
My guess is that since the key already exists in Redis, the job does not get queued again.
Questions:
Is this behavior of Resque normal (not removing the key after successful completion)?
If yes, how should I tackle this scenario (as per best practices)?
If no, can you help me understand what is going wrong?
After a couple of hours of debugging, this is the observed behavior:
I was creating the job and passing the options (parameters) with symbolized keys, which created the Redis key for the job with the symbolized params embedded in it.
Example:
Jobs::Abc::SomeJobCreator.create({:some_ids => [101]}) would create the Redis key "resque:lock:Jobs::Abc::SomeJobCreator-{:some_ids=>[101]}" (notice the symbol keys inside the key).
Now when the after_perform hook executes, it tries to remove the Redis key, but it looks it up with stringified params: "resque:lock:Jobs::Abc::SomeJobCreator-{\"some_ids\"=>[101]}". That key obviously won't be found, as the key in Redis has symbolized params.
To fix this issue I had to change the job-creation calls in the code to use stringified params, like this: Jobs::Abc::SomeJobCreator.create({'some_ids' => [101]}). This works fine.
Not sure if this has anything to do with the version of Resque; since it's an old codebase, I haven't updated the version yet. It's currently at Resque v1.25.2.