Recently, on my current project, I noticed that many assert_enqueued_jobs 1 assertions in our integration tests are commented out. When I asked another developer about it, he told me that we've changed many ActiveJob perform_later calls to perform_now for speed reasons, and that assert_enqueued_jobs can't catch jobs that are performed immediately.
I've also tried using assert_performed_jobs, but that didn't work out.
Can anyone give me some insight or a suggestion on how to test this? Or is it simply not testable?
Untestable? Never!
assert_enqueued_jobs is not actually testing your code; it checks that something was enqueued to happen later. If something happens right away, why test that it was enqueued?
I would try to keep the queue and force jobs to be performed/cleared using the other ActiveJob::TestHelper methods. But that's just me; it makes little difference.
https://apidock.com/rails/v4.2.1/ActiveJob/TestHelper/assert_enqueued_jobs
Say your job sends some email: just call the job's perform_now and check ActionMailer::Base.deliveries.count. At this point the actual test will be very tailored to the job.
It could be creating a Notification, in which case you might want to assert that Notification.count has changed, and so on.
The main thing is that instead of checking that the job was enqueued and end of story, we're looking for the desired outcome from that job running.
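For example, here's a minimal sketch in Minitest style, assuming a hypothetical WelcomeEmailJob that sends a mail and a users fixture; the assertion targets the job's outcome rather than the queue:
# Assert the side effect of the job, not whether it was enqueued.
test "welcome email goes out" do
  assert_difference -> { ActionMailer::Base.deliveries.count }, 1 do
    WelcomeEmailJob.perform_now(users(:new_user))
  end
end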
Related
I am using Sidekiq and Rails (6.0.3.7). I have a worker which executes an async task that creates a lot of data in my database, sequentially. So basically what happens is, for example:
First, it creates a User, then it creates a PostCategory, then it creates a Post, and then it creates 10 comments.
Sometimes this process fails midway, maybe when creating the PostCategory, or when creating the Post.
What I want is that if the task fails at any given point, all the data that has already been created in that task is discarded. Another approach could be that the data is only created once I am sure the process has not failed. So basically it would have to "check create" everything before actually writing to the database.
An example of this would be that the User has been created, but for some reason it failed to create a PostCategory, and the async task failed. What I want is for the created User to be automatically deleted, or never created in the first place, because the task failed.
Is there any approach or technique I could use to do this in my current worker without messing around too much with the actual code? Some "double check" method already implemented in Sidekiq? What do you recommend I look into?
Thanks in advance for any help you can give me with this issue.
First of all, it's great that you design your tasks to be all-or-nothing. The main and preferable approach is to use database transactions, since that is exactly what they were designed for. Open the transaction before starting to create the entities, and commit once all the checks are done.
Account.transaction do
balance.save!
account.save!
end
Note the bang methods (those with a trailing !): they raise exceptions on failure. An exception will automatically roll back the transaction, which is exactly what you need.
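Applied to the worker from your question, a rough sketch might look like this (the class name, parameters, and associations are assumptions, not your actual code):
class CreatePostsWorker
  include Sidekiq::Worker

  def perform(user_params)
    ActiveRecord::Base.transaction do
      user     = User.create!(user_params)
      category = PostCategory.create!(name: "General")
      post     = Post.create!(user: user, post_category: category, title: "Hello")
      10.times { |i| post.comments.create!(body: "Comment #{i}") }
    end
    # Any create! that raises rolls back everything above, so nothing is persisted.
  end
end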
N.B. Try to make your task idempotent, which means it returns the same result regardless of how many times it is called with the same input. This could save you lots of time in the future.
We have a worker which had a bug that caused erroneous responses to a method being called. The issue has since been fixed; however, when we restart the background workers we still seem to experience the issue.
We know the issue is resolved because, in the meantime, we have moved the logic to a rake task and it is now working fine. We suspect the issue relates to failed or unperformed jobs in the Sidekiq queue.
We tried to overcome this by clearing the Redis DB with the approach below:
Sidekiq.redis { |r| puts r.flushall }
Has anyone experienced a similar issue when using Sidekiq/Redis, and how did you overcome it?
I think flushall might just immediately retry all of the failed, but still queued, jobs.
In most cases, if you fix a bug in the worker without changing its signature, you may be able to just let the jobs retry (assuming the job itself is idempotent, which is recommended for this reason among others).
If, however, the bug was in what you were passing into the async job call, then you'll have to remove all those entries, because they will continue to fail every time they are retried, which by default can go on for weeks.
I think what you want to do is clear them all... you can do it for that particular queue. You may have to be careful not to remove newly queued jobs, if that's a problem, perhaps by examining the job entries. If you just want to nuke them all:
queue = Sidekiq::Queue.new('your_queue')
queue.clear
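If the failing jobs have already moved to Sidekiq's retry set (or dead set) rather than sitting in the queue itself, those can be cleared through the same public API; a quick sketch using the classes from sidekiq/api:
require "sidekiq/api"

Sidekiq::RetrySet.new.clear  # drop all jobs waiting to be retried
Sidekiq::DeadSet.new.clear   # drop jobs that have exhausted their retries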
What's Happening
In our RSpec + Capybara + Selenium (Firefox) test suite we're getting a lot of inconsistent "Capybara::ElementNotFound" errors.
The problem is they only happen sometimes. Usually they won't happen locally; they'll happen on CircleCI, where I expect the machines are much beefier (and so faster).
Also, the same errors usually won't happen when the spec is run in isolation, for example by running rspec with a particular line number (e.g. :42).
Bear in mind, however, that there is no consistency: the spec won't consistently fail.
Our current workaround - sleep
Currently the only thing we can do is litter the specs with sleeps. We add them whenever we get an error like this, and it fixes it. Sometimes we have to increase the sleep times, which is making our tests very slow, as you can imagine.
What about capybara's default wait time?
It doesn't seem to be kicking in, I imagine because the test usually fails in less than the allocated wait time (currently 5 seconds).
Some examples of failure.
Here's a common failure:
visit "/#/things/#{#thing.id}"
find(".expand-thing").click
This will frequently result in:
Unable to find css ".expand-thing"
Now, putting a sleep between those two lines fixes it. But a sleep is too brute-force: I might put in a second when the code might only need half a second.
Ideally I'd like Capybara's wait time to kick in because then it only waits as long as it needs to, and no longer.
Final Note
I know that Capybara can only do its waiting when the selector doesn't exist on the page yet. But in the example above you'll notice I'm visiting the page and then selecting, so the element is not on the page yet, and Capybara should wait.
What's going on?
Figured this out. So, when looking for elements on a page you have a few methods available to you:
first('.some-selector')
all('.some-selector') #returns an array of course
find('.some-selector')
.first and .all are super useful as they let you pick from non-unique elements.
HOWEVER, .first and .all don't seem to auto-wait for the element to appear on the page.
The Fix
The fix, then, is to always use .find(). .find WILL honour the Capybara wait time. Using .find has almost completely fixed my tests (with a few unrelated exceptions).
The gotcha of course is that you have to use more unique selectors as .find MUST only return a single element, otherwise you'll get the infamous Capybara::Ambiguous exception.
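Applied to the failing snippet above, a rough sketch looks like this (the .thing-row selector and count are made up for illustration):
visit "/#/things/#{@thing.id}"
find(".expand-thing").click  # find retries until Capybara's wait time elapses

# If you genuinely need several matching elements, wait for them first, then use all:
expect(page).to have_css(".thing-row", count: 10)  # have_css waits
rows = all(".thing-row")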
Ember works asynchronously, which is why Ember generally recommends using QUnit. They've tied in code that allows the tests to pause/resume while waiting for the asynchronous functions to return. Your best bet would be either to duplicate the pause/resume logic that's been built up for QUnit, or to switch to QUnit.
There is a global promise used during testing that you could hook into: Ember.Test.lastPromise
Ember.Test.lastPromise.then(function(){
//continue
});
Additionally, visit/click return promises, so you'll need some manner of telling Capybara to pause testing before the call, then resume once the promise resolves.
visit('foo').then(function(){
click('.expand-thing').then(function(){
assert('foobar');
})
})
Now that I've finished ranting, I realize you're not technically running these tests from inside the browser; you're running them through Selenium, which means it's not technically in the browser (unless Selenium has changed since I last used it, which is possible). Either way, you'll need to watch that last promise and wait on it before you can continue testing after an asynchronous action.
When certain conditions are met, I'd like to schedule a worker to run a particular job in 5 minutes. The thing is, if the same conditions are met again, I want to check whether something is already scheduled to run. If such a worker is already scheduled, I don't want to enqueue it again; if there isn't one, it should be queued. I hope you understand what I'm trying to do. Can it be achieved? If yes, how?
Sounds like you want to use or implement a simple persisted lock. The code that enqueues the job can first check for the availability of the lock, acquire it and enqueue if available, and skip if not. The enqueued job can be responsible for releasing the lock. You'll want to account for failure, for example by adding a lock timeout. The redis-mutex gem may be a useful implementation of this idea.
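Here's a minimal sketch of that idea using a raw Redis lock (SET with NX/EX) and assuming Sidekiq, since the question doesn't say which queue you use; the key name, TTL, and worker are made up, and the redis-mutex gem packages the same pattern more robustly:
LOCK_KEY = "my_worker:scheduled"  # hypothetical lock key
LOCK_TTL = 10 * 60                # seconds; safety net in case the job never runs

def schedule_worker_once
  # SET ... NX succeeds only if the key doesn't exist yet, so only the first caller enqueues.
  if Sidekiq.redis { |r| r.set(LOCK_KEY, Time.now.to_i, nx: true, ex: LOCK_TTL) }
    MyWorker.perform_in(5 * 60)
  end
end

class MyWorker
  include Sidekiq::Worker

  def perform
    # ... the actual work ...
  ensure
    # Release the lock so the next occurrence of the conditions can schedule again.
    Sidekiq.redis { |r| r.del(LOCK_KEY) }
  end
end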
Best practices promote jobs that are idempotent. This means you should write them in such a way that it is safe to run them more than once: any subsequent call doesn't change the result of the first call. You achieve this by writing logic that does the proper checks and acts accordingly. Since you don't provide a description of what your worker does, I can't be more specific.
As an example, here is a link to Sidekiq's FAQ: Make your workers idempotent and transactional
The benefit of this approach is that you're playing along with the convenient abstraction of scheduled workers, instead of fighting against it.
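For instance, a hedged sketch of an idempotent worker (the model, column, and mailer names are invented); the guard check makes it harmless to run or enqueue the job more than once:
class ReminderWorker
  include Sidekiq::Worker

  def perform(user_id)
    user = User.find(user_id)
    # Skip if the work has already been done; re-running the job is then a no-op.
    return if user.reminder_sent_at.present?

    ReminderMailer.reminder(user).deliver_now
    user.update!(reminder_sent_at: Time.current)
  end
end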
I'm using an Observer on my classes. When one of the records is created/updated I need to notify another service (via a URL call). What is the best way to do this without slowing down my class? Would using a gem like delayed_job be overkill?
In my Observer's after_update() / after_create() I just want to launch a thread that calls the URL...
If your notification doesn't need to block, you could simply spawn a thread and perform the notification there. That way, your program will continue to run while that thread waits on the response.
Of course, you'll want some way to handle failure. You could have it try three times, and if it still fails, write a notification to the log or something.
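A rough sketch of that approach under assumed names (ThingObserver, the notification URL, and the retry count are all made up):
require "net/http"

class ThingObserver < ActiveRecord::Observer
  MAX_ATTEMPTS = 3

  def after_create(record)
    notify_async(record)
  end

  def after_update(record)
    notify_async(record)
  end

  private

  # Spawn a thread so the HTTP call doesn't block the model callback.
  def notify_async(record)
    Thread.new do
      attempts = 0
      begin
        attempts += 1
        Net::HTTP.post_form(URI("https://example.com/notify"), "id" => record.id)
      rescue StandardError => e
        retry if attempts < MAX_ATTEMPTS
        Rails.logger.error("Notification failed for record #{record.id}: #{e.message}")
      end
    end
  end
end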
The best (most reliable) solution would be to use a job queue. That way, if a job fails outright, you can inspect and resubmit the job again.
Definitely use a community supported/accepted gem like delayed_job or resque. It's really not as hard as you think and your app will scale better down the line.
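With delayed_job, for instance, the same notification can be pushed onto a queue straight from the observer; NotificationService#ping is a hypothetical method, and .delay comes from delayed_job:
class ThingObserver < ActiveRecord::Observer
  def after_update(record)
    # .delay serializes the call and runs it in a background worker instead of inline.
    NotificationService.new.delay.ping(record.id)
  end
end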