Symfony mailer: Swift_TransportException between message sending - symfony1

On a project I'm currently working on, I have a symfony task that performs some mass data insertion into the database and runs for at least half an hour.
When the task starts, a mail notification is sent correctly; the problem is that at the end of the task execution we can't send another mail to report that processing has finished.
The mailer factory is configured with the spool delivery strategy but, in this specific situation, we want to fire a notification immediately, using the sendNextImmediately() method.
I'm getting the exception:
[Swift_TransportException]
Expected response code 250 but got code "451", with message "451 4.4.2 Timeout - closing connection. 74sm1186065wem.17
"
and the following error in the PHP log file:
Warning: fwrite(): SSL: Broken pipe in /var/www/project/lib/vendor/symfony/lib/vendor/swiftmailer/classes/Swift/Transport/StreamBuffer.php on line 209
Can anyone give some help?
Is there any way I can refresh the symfony mailer so that it establishes a new connection?

While working on a Symfony2 project, I ran across this failure too. We were using a permanently running PHP script, which produced the error.
We figured out that the following code does the job:
private function sendEmailMessage($renderedTemplate, $subject, $toEmail)
{
    /** @var $mailer \Swift_Mailer */
    $mailer = $this->getContainer()->get('mailer');

    // Restart the transport if the long-running script let the connection time out.
    if (!$mailer->getTransport()->isStarted()) {
        $mailer->getTransport()->start();
    }

    /** @var $message \Swift_Message */
    $message = \Swift_Message::newInstance()
        ->setSubject($subject)
        ->setFrom($this->getContainer()->getParameter('email_from'))
        ->setTo($toEmail)
        ->setBody($renderedTemplate);

    $mailer->send($message);

    // Stop the transport so the next call starts from a fresh connection.
    $mailer->getTransport()->stop();
}

For Symfony1 Users
My guess was that the connection was being held open for too long (with no activity at all), causing an SSL connection timeout.
For now, the problem can be solved by stopping the Swift_Transport instance and starting it again explicitly, just before sending the second message.
Here is the code:
$this->getMailer()->getRealtimeTransport()->stop();
$this->getMailer()->getRealtimeTransport()->start();
$this->getMailer()->sendNextImmediately()->send($message);

I had exactly the same problem and the solutions above were very helpful, but there is one thing I had to do differently: the order.
$this->getMailer()->sendNextImmediately()->send($message);
$this->getMailer()->getRealtimeTransport()->stop();
It didn't work for me when I tried to stop the transport before sending the message (the timed-out connection was already hanging). Also, you don't need to call getRealtimeTransport()->start() - it will be started automatically.

Related

Send an Email after an Artisan command execution

I need to send an email right after an artisan command executes, to confirm whether it ran correctly or not. What I'm thinking right now is to send it inside the handle function in the command class, like so:
public function handle()
{
    // command logic

    // send an email
    Mail::send('....
}
This feels clumsy, though, because there are a lot of commands registered in the app.
My question: is there any global place where I can handle this case, for both success and failure (due to an exception)?
P.S. I saw that there is a method called reportException in the ConsoleKernel class, so I think I can override that function in my Kernel class to send the mail when there is a failure; correct me if I'm wrong.
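A minimal sketch of the override described in the P.S., assuming the console Kernel exposes the protected reportException hook mentioned above; the mail body and the admin address are illustrative, not part of any real API:

<?php
// app/Console/Kernel.php

namespace App\Console;

use Exception;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;
use Illuminate\Support\Facades\Mail;

class Kernel extends ConsoleKernel
{
    // ... existing $commands and schedule() omitted ...

    protected function reportException(Exception $e)
    {
        // Keep the default behaviour (reporting via the exception handler) ...
        parent::reportException($e);

        // ... then notify by mail that a command failed.
        Mail::raw('Artisan command failed: '.$e->getMessage(), function ($message) {
            $message->to('admin@example.com') // hypothetical address
                    ->subject('Artisan command failure');
        });
    }
}

This only covers the failure side; reporting success would still have to happen in (or around) each command's handle method.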

Net-ssh timeout for execution?

In my application I want to terminate the exec! command of my SSH connection after a specified amount of time.
I found the :timeout option for the Net::SSH.start command, but according to the documentation it only applies to the initial connection. Is there something equivalent for the exec command?
My first guess would be not to use exec!, as it blocks until the command is finished, but to use exec and surround the call with a loop that checks the execution status on every iteration and fails after the given amount of time.
Something like this, if I understood the documentation correctly:
server = Net::SSH.start(...)
server.exec("some command")
start_time = Time.now
terminate_calculation = false
trap("TIME") { terminate_calculation = ((Time.now - start_time) > 60) }
server.loop(0.1) { not terminate_calculation }
However, this seems dirty to me. I expect something like server.exec("some command", { :timeout => 60 }). Maybe there is some built-in function for achieving this?
I am not sure if this would actually work in an SSH context, but Ruby itself has a timeout method:
require 'timeout'

server = Net::SSH.start ...
Timeout.timeout(60) do
  server.exec! "some command"
end
This would raise Timeout::Error after 60 seconds. Check out the docs.
I don't think there's a native way to do it in net/ssh. See the code, there's no additional parameter for that option.
One way would be to handle timeouts in the command you call - see this answer on Unix & Linux SE.
I think your way is better, as you don't introduce external dependencies in the systems you connect to.
Another solution is to set ConnectTimeout option in OpenSSH configuration files (~/.ssh/config, /etc/ssh_config, ...)
You can find more details in:
https://github.com/net-ssh/net-ssh/blob/master/lib/net/ssh/config.rb
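For instance, a minimal ~/.ssh/config entry; per the answer above, net-ssh reads OpenSSH config files (see the config.rb link), and the host pattern and value here are placeholders:

# ~/.ssh/config
Host *
    ConnectTimeout 10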
What I did is have a thread doing the event handling. Then I loop for a defined number of seconds until the channel is closed. If, after those seconds have passed, the channel is still open, I close it and continue execution.
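A rough sketch of that approach, assuming net-ssh's channel API (open_channel, process, active?, close) is used directly; the host, command, and 60-second budget are placeholders:

require 'net/ssh'

Net::SSH.start('host.example.com', 'user') do |ssh|
  channel = ssh.open_channel do |ch|
    ch.exec('some command') do |c, success|
      raise 'command could not be started' unless success
      c.on_data { |_, data| print data }
    end
  end

  deadline = Time.now + 60
  # Pump the event loop in small slices until the command finishes
  # or the time budget runs out.
  ssh.process(0.1) while channel.active? && Time.now < deadline

  if channel.active?
    # Deadline passed: close the channel and carry on.
    channel.close
    ssh.loop { channel.active? }
  end
end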

How to verify AWS server termination with Fog?

I have 3 lines of code in a Rails app, inside a 'begin' block, that are meant to terminate an AWS compute instance using Fog and set a string value upon success:
@server = @connection.servers.get(params[:id])
@server.destroy
@server_deletion_result = "success"
This code works, but it simply sends AWS the command to terminate the instance. Using Fog, how can I verify that the instance has finished terminating?
I tried this, to no avail:
while @server.state != "terminated" do
  sleep 3
end
@server_deletion_result = "success"
It just appears to hang, even well after the instance shows "terminated" in the AWS console.
So, thoughts?
A friend of mine helped me answer this question via Twitter. The answer was to call the reload() function on the server object and then check it: Fog caches the server object's attributes, and they must be refreshed before checking the state.
Here was my final solution:
@server.reload
while @server.state != "terminated" do
  sleep 3
  @server.reload
end
EDIT: Thanks to Frederick Cheung, who has a better answer in the comments:
@server.wait_for { state == 'terminated' }

how can we tell delayed_job when a delayed task fails so it will auto-retry?

Our app is hosted on Heroku, and we use delayed_job when sending info to a remote system (via a GET to a URL with some URL params).
The remote system usually returns a success code, but if it's really busy it returns a try-again code.
Suppose our method is:
def send_info
  the_url = "http://mydomain.com/dosomething?arg=#{self.someval}"
  the_result = open(the_url).read
  successflag = get_success_flag_from(the_result)
end
and so somewhere in our code we do
@widget.delay.send_info
and that all works fine.
Except it does not automatically handle the case where the remote said to try back later.
Is there any way for the send_info method (which is what delayed_job will execute) to "tell" delayed_job "retry me again"? Do we need to throw some custom exception or something?
Raising any kind of exception ought to cause delayed_job to requeue the job (subject to its max-attempts limit); if you don't especially need a custom exception, you can just raise a RuntimeError.
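For instance, a sketch built on the question's own send_info (get_success_flag_from and the try-again handling are the question's placeholders, not a real API):

require 'open-uri'

def send_info
  the_url = "http://mydomain.com/dosomething?arg=#{self.someval}"
  the_result = open(the_url).read
  successflag = get_success_flag_from(the_result)

  # Any exception that escapes the job makes delayed_job reschedule it,
  # up to its configured max_attempts.
  raise "remote system asked us to try again" unless successflag
end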

ActiveResource timeout not functioning [duplicate]

This question already has an answer here: Overriding/Modifying Rails Class (ActiveResource) (1 answer). Closed 3 years ago.
I'm trying to contact a REST API using ActiveResource on Rails 2.3.2.
I'm attempting to use the timeout functionality so that if the resource I'm contacting is down I can fail quickly - I'm doing this with the following:
class WorkspaceResource < ActiveResource::Base
  self.timeout = 5
  self.site = "http://mysite.com/restAPI"
end
However, when I try to contact the service when I know it isn't available, the class only times out after the default 60 seconds. I can see from the error stack that the timeout error does indeed come from an ActiveResource class in my gem folder that has the proper functions to allow timeout settings, but my set timeout never seems to work.
Any thoughts?
So apparently the issue is not that timeout is not functioning. I can run a server locally, make it not return a response within the timeout limit, and see that timeout works.
The issue is in fact that if the server does not accept the connection, timeout does not function as I expected it to - it doesn't function at all. It appears as though timeout only works when the server accepts the connection but takes too long to respond.
To me, this seems like an issue - shouldn't timeout also work when the server I'm contacting is down? If not, there should be another mechanism to stop a bunch of requests from hanging...anyone know of a quick way to do this?
The problem
If you're running on Ruby 1.8.x, the problem is its lack of real system threads.
As you can read first here and then here, there are systemic problems with timeouts in Ruby. It's an interesting discussion, but for you in particular some comments suggest that the timeout is effectively ignored and defaults to 60 seconds - exactly what you are seeing.
Solutions ...
I have a similar issue with our own product when trying to send emails - if the email server is down, the thread blocks. For me the solution was to spin the request off on a separate thread, so that my main request-processing thread doesn't block.
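A sketch of that workaround, where Notifier.deliver_alert stands in for whatever slow call is being made (the name is hypothetical):

Thread.new do
  begin
    Notifier.deliver_alert(message)
  rescue StandardError
    # Swallow (or log) the error: the main request-processing thread
    # must not be affected by a failure on this background thread.
  end
end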
There are non-blocking libraries out there for Ruby, but perhaps you could first take a look at the SystemTimer gem.
An option open to anyone using Rails behind a proxy like nginx would be to set the upstream timeout to a lower number - that way you'll get notified if the server is taking too long. I'd only do this if I were really stuck for a solution.
Last but not least, it's possible that running Rails 2.3.2 on top of Ruby 1.9.1 will fix the issue.
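As a rough sketch of the fail-fast idea (using Ruby's plain Timeout module; under Ruby 1.8's green threads the SystemTimer gem mentioned above is the more reliable choice):

require 'timeout'

begin
  # Give up after 5 seconds even if the TCP connection itself hangs,
  # the case ActiveResource's timeout= setting doesn't seem to cover here.
  workspace = Timeout.timeout(5) do
    WorkspaceResource.find(:first)
  end
rescue Timeout::Error
  workspace = nil # fail fast instead of blocking for 60 seconds
end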
Alternatively, you could try to catch these connection errors and retry once (after a certain period of time) just to make sure the connection is really down.
retried = false
begin
  @businesses = Business.find(:all, :params => { :shop_domain => @shop.domain })
  retried = false
rescue ActiveResource::TimeoutError => ex
  # raise ex
rescue ActiveResource::ConnectionError, ActiveResource::ServerError, ActiveResource::ClientError => ex
  unless retried
    sleep(((ex.respond_to?(:response) && ex.response['Retry-After']) || 5).to_i)
    retried = true
    retry
  else
    # raise ex
  end
end
Inspired by this solution from Shopify for paginating a large number of records. https://ecommerce.shopify.com/c/shopify-apis-and-technology/t/paginate-api-results-113066
