I use delayed_job to send emails. The problem is, I think, that if it fails to send to even a single email address, it treats the job as failed and reruns the entire batch again.
How do I make it skip an email address if it's not correct?
If an exception occurs, delayed_job will treat the job as failed and keep rerunning it.
You should capture exceptions to make sure that, at the end, the job is always considered successful.
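As a minimal sketch (BatchMailJob, the Notifier mailer, and its welcome method are placeholder names, not your code), you could rescue per address inside perform so one bad address cannot fail the whole batch:

class BatchMailJob < Struct.new(:addresses)
  def perform
    addresses.each do |address|
      begin
        Notifier.welcome(address).deliver
      rescue StandardError => e
        # Log and move on; since nothing is re-raised, delayed_job
        # marks the job as successful and will not re-run the batch.
        Rails.logger.error("Failed to send to #{address}: #{e.message}")
      end
    end
  end
end

# Enqueue it with something like:
# Delayed::Job.enqueue(BatchMailJob.new(list_of_addresses))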
Because we need the ability to schedule email delivery, we have migrated to Sidekiq and now send emails with the deliver_later method. Under the old regime that used deliver_now, our tests could use
ActionMailer::Base.deliveries[index]
to inspect the recipient, subject, body, attachments, etc...
For testing purposes, is there an equivalent mechanism to inspect the contents of queued email when using Sidekiq and deliver_later?
To test the contents of the email, the email template has to be rendered, which means the job has to be executed. I suggest separating your unit tests:
Job spec to check if email is getting enqueued with correct parameters.
Email spec to check the email contents.
If you want to go end-to-end, use Sidekiq::Testing.inline! to execute the job immediately and then use ActionMailer::Base.deliveries[index] as before.
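A rough sketch of the inline approach, assuming the ActiveJob adapter is Sidekiq and delivery_method is :test in the test environment (UserMailer, its welcome method, and the spec setup are placeholders):

require "sidekiq/testing"

RSpec.describe UserMailer do
  let(:user) { User.create!(email: "test@example.com") }

  it "delivers the welcome email" do
    # inline! with a block runs the enqueued job immediately,
    # so the mail lands in the in-memory deliveries list.
    Sidekiq::Testing.inline! do
      UserMailer.welcome(user).deliver_later
    end

    mail = ActionMailer::Base.deliveries.last
    expect(mail.to).to eq([user.email])
  end
end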
The solution turned out to be to execute perform_enqueued_jobs after all emails were queued. After this, all of the existing testing mechanisms worked.
See https://api.rubyonrails.org/v7.0.4/classes/ActiveJob/TestHelper.html for additional information.
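For example, something along these lines (the mailer name and expectations are placeholders), assuming ActiveJob's test helpers are available in the test case:

class UserMailerTest < ActionMailer::TestCase
  include ActiveJob::TestHelper

  test "queued mail is inspectable after performing jobs" do
    UserMailer.welcome("drew@example.com").deliver_later
    # Drain everything queued so far; deliveries then fills in
    # this process and the usual assertions work again.
    perform_enqueued_jobs

    mail = ActionMailer::Base.deliveries.last
    assert_equal ["drew@example.com"], mail.to
  end
end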
We have a delayed_job class that gets unprocessable responses for certain API requests. The responses aren't going to change, so there is no point in re-queuing the job when it fails. However, we still want to re-queue other failures, just not this one.
I can avoid the re-queue by capturing the exception in perform, but then the failure is not even logged. I would like it logged as an error, not as a success.
It seems to me there should be a way to mark certain exception classes as exempt from re-queuing - is there?
I thought of using the error hook to simply delete the job, but my guess is that something would explode if the job disappeared out from under DJ while it was processing it.
Any ideas?
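For reference, a minimal sketch of the capture-and-log workaround described above (UnprocessableResponse, ApiJob, and make_api_request are placeholder names): rescue only the permanent failure and log it at error level, so the job completes without re-queuing while any other exception still propagates and is retried as usual:

class ApiJob < Struct.new(:request_params)
  def perform
    make_api_request(request_params)
  rescue UnprocessableResponse => e
    # Swallow only this exception class: delayed_job sees a normal
    # completion and will not re-queue, but the failure is still
    # recorded as an error rather than passing silently.
    Rails.logger.error("Permanent API failure, not re-queuing: #{e.message}")
  end
end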
In my Rails app, I'm using the SendGrid Parse API, which posts mail to my server. Every now and then SendGrid's Parse API submits the same email twice.
When I get a posted mail, I place it in the IncomingMail model. So, to prevent this double-submitting issue, I check each IncomingMail when processing it to see if there is a duplicate in the table from the last minute. That tested great in development; it caught all the double submissions.
Now I've pushed that live to Heroku, where I have 2+ dynos, and it didn't work. My guess is that it has something to do with replication. That being the case, how can scalable sites with multiple servers deal with something like this?
Thanks
You should look at using a background job queue. Heroku has "Workers" (which was Delayed Job). Rather than sending the email immediately, you push it onto the queue. Then one or more Heroku 'workers' need to be added to your account, and each one will pull jobs in sequence. This means there can be a short delay (depending on load) before the email is sent, but this delay is not presented to the user, and should there be a lot of email to send you just add more workers.
Waiting on an external service like an email provider during each user action is dangerous, because any network problem can take down your site: several users end up waiting for their HTTP requests to be answered while Heroku is blocked on these third-party calls.
In this situation, with workers, each job would fail but would be retried and eventually succeed.
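A small sketch of what that looks like with delayed_job (Notifier and its receipt method are placeholder names):

# In the controller, instead of sending inline:
Notifier.delay.receipt(incoming_mail.id)

# Or declare it once so the method always runs in the background:
class IncomingMail < ActiveRecord::Base
  def send_receipt
    Notifier.receipt(id).deliver
  end
  handle_asynchronously :send_receipt
end

With the delay variant, delayed_job knows about mailers and delivers the resulting message itself, so no explicit deliver call is needed at the call site.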
This sounds like it could be a transaction issue. If you have multiple workers running simultaneously, their operations may be interleaved. For instance, this sequence of events would result in two mails being sent:
Worker A : Checks for an existing record and doesn't find one
Worker B : Checks for an existing record and doesn't find one
Worker A : Post to Sendgrid
Worker B : Post to Sendgrid
You could wrap everything in a transaction to keep this from happening; note that a transaction alone does not stop two workers from both passing the duplicate check, so you also want a lock or a unique database index on a message identifier inside it. Something like this should do it:
class IncomingMail < ActiveRecord::Base
  def check_and_send(email_address)
    transaction do
      # your existing code for preventing duplicates and sending
    end
  end
end
I have config.action_mailer.delivery_method = :test and use delayed_job. I run some code that places an email to be sent in a queue, then run rake jobs:work, but nowhere do I see the email that is sent out, and ActionMailer::Base.deliveries is nil. I just want to debug and view the content of these emails; how can I do so?
When config.action_mailer.delivery_method is set to :test, emails are not actually sent but instead merely added to a list of "sent" messages. That list exists only in memory. That means only the process that "sent" the email can see the list and verify that it was actually "sent".
Since the code that actually sends your mail is being executed in an external process (through a system() or backtick call), your calling script won't be able to see the in-memory queue of that external process and thus won't be able to verify that the emails were actually "sent".
This shouldn't really be a big deal unless something has gone wrong. By default outgoing emails will be written to the log file, so you can verify that they're actually sending by checking there. If you want to view/manipulate the queue in-memory, you'll have to add code to your job to do so, as that is the only code that will have access to it.
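One minimal way to poke at it (a sketch, not the only option): run the worker inside the same process, for example from a rails console instead of a separate rake jobs:work process, so the in-memory list is visible afterwards:

# In a rails console (SomeMailer/some_email are placeholders):
SomeMailer.delay.some_email
Delayed::Worker.new.work_off        # run the queued jobs right here
ActionMailer::Base.deliveries.last  # now inspectable in this process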
I'm using rails 2.2.2 on a linux box and am using a rake task to send some emails. It's a very simple script without error handling:
1.) Get emails to send
2.) Send email
3.) Mark emails as sent
Right now, the port on the SMTP server has been blocked temporarily, so steps 1 and 3 are completing but not step 2. My question is: when I unblock the port, are the messages that previously failed queued somewhere (and going to be sent), or did they time out and get /dev/nulled? If they are queued up somewhere, can I get the file location? I just want to understand the behavior so I can build out appropriate error handling. Thanks,
Drew