I'm using the excellent Viewpoint ruby gem to create an event in my EWS calendar and send invites to attendees, on a Heroku app.
This works very well in development, and in the Heroku production console.
When I do it from a Heroku web dyno, the event gets created with the correct attendees, but the invites are not sent. Instead, I receive an email from the Exchange server saying my outbound message can't be delivered due to spam detection:
Remote Server returned '550 5.1.8 Access denied, spam abuse detected'
Original message headers:
Authentication-Results: gmail.com; dkim=none (message not signed)
header.d=none;
Received: from AMSPR01MB103.eurprd01.prod.exchangelabs.com (10.242.91.146) by
AMSPR01MB101.eurprd01.prod.exchangelabs.com (10.242.91.140) with Microsoft
SMTP Server (TLS) id 15.1.154.19; Mon, 4 May 2015 09:18:41 +0000
Received: from AMSPR01MB103.eurprd01.prod.exchangelabs.com ([169.254.4.13]) by
AMSPR01MB103.eurprd01.prod.exchangelabs.com ([169.254.4.13]) with mapi id
15.01.0154.018; Mon, 4 May 2015 09:18:40 +0000
Content-Type: application/ms-tnef; name="winmail.dat"
Content-Transfer-Encoding: binary
From: Nicolas Marlier <nicolas#marlier.onmicrosoft.com>
To: "nmarlier#gmail.com" <nmarlier#gmail.com>
Subject: =?utf-8?B?Tm91dmVsIMOpdsOobmVtZW50?=
Thread-Topic: =?utf-8?B?Tm91dmVsIMOpdsOobmVtZW50?=
Thread-Index: AdCGSDTEah5PWrxeqEy/iSmi4R7oNg==
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-Exchange-Calendar-Originator-Id: 52a7b57c-9423-417c-8560-37fa3ae0e74a;/o=ExchangeLabs/ou=Exchange
Administrative Group
(FYDIBOHF23SPDLT)/cn=Recipients/cn=096fdd7902e244399c6fbd8bf2401533-nicolas
Date: Mon, 4 May 2015 09:18:40 +0000
Message-ID: <AMSPR01MB10304E80D885534965D74C3F7D20#AMSPR01MB103.eurprd01.prod.exchangelabs.com>
Accept-Language: fr-FR, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator: <AMSPR01MB10304E80D885534965D74C3F7D20#AMSPR01MB103.eurprd01.prod.exchangelabs.com>
MIME-Version: 1.0
X-Originating-IP: [54.163.190.73]
Return-Path: nicolas#marlier.onmicrosoft.com
X-Microsoft-Antispam: UriScan:;BCL:0;PCL:0;RULEID:;SRVR:AMSPR01MB101;
X-Microsoft-Antispam-PRVS:
<AMSPR01MB10156182A79CE60E68A4106F7D20#AMSPR01MB101.eurprd01.prod.exchangelabs.com>
X-Exchange-Antispam-Report-Test: UriScan:;
X-Exchange-Antispam-Report-CFA-Test:
BCL:0;PCL:0;RULEID:(601004)(5005006)(3002001);SRVR:AMSPR01MB101;
X-Forefront-PRVS: 05669A7924
I'm thinking this may be caused by the Heroku instance not being whitelisted, and I'm considering trying something like the Proximo add-on to fix that. Any thoughts on another way to make sure the invite is sent?
So I have an application that previously did not need ActiveRecord - thus we removed ActiveRecord from the application and formatted our logging as follows:
In application.rb:
class DreamLogFormatter < Logger::Formatter
  SEVERITY_TO_COLOR_MAP = { 'DEBUG' => '32', 'INFO' => '0;37', 'WARN' => '35', 'ERROR' => '31', 'FATAL' => '31', 'UNKNOWN' => '37' }

  def call(severity, time, progname, msg)
    color = SEVERITY_TO_COLOR_MAP[severity]
    "\033[0;37m[%s] \033[#{color}m%5s - %s\033[0m\n" % [time.to_s(:short), severity, msg2str(msg)]
  end
end
class ActiveSupport::BufferedLogger
  def formatter=(formatter)
    @log.formatter = formatter
  end
end
In development.rb:
config.logger = Logger.new("log/#{Rails.env}.log")
config.logger.formatter = DreamLogFormatter.new
ActiveResource::Base.logger = Logger.new("log/#{Rails.env}.log")
ActiveResource::Base.logger.formatter = DreamLogFormatter.new
Note: The ActiveResource logger configuration was added because we want the URL output of our ActiveResource calls like so:
GET http://localhost:2524/users/
--> 200 OK 239 (13.0ms)
Logging with this configuration gave us a nice output combination of ActiveResource calls and our own logging using Rails.logger.
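For what it's worth, the same formatter pattern can be exercised outside Rails with the stdlib Logger. Here is a minimal sketch (ColorFormatter is just a stand-in name, and plain strftime replaces ActiveSupport's time.to_s(:short), which isn't available without Rails):

```ruby
require 'logger'
require 'stringio'

# Stand-in for the DreamLogFormatter above, runnable with the plain stdlib.
class ColorFormatter < Logger::Formatter
  SEVERITY_TO_COLOR_MAP = {
    'DEBUG' => '32', 'INFO' => '0;37', 'WARN' => '35',
    'ERROR' => '31', 'FATAL' => '31', 'UNKNOWN' => '37'
  }.freeze

  def call(severity, time, _progname, msg)
    color = SEVERITY_TO_COLOR_MAP[severity]
    # Grey timestamp, severity-colored level and message, then reset.
    format("\033[0;37m[%s] \033[#{color}m%5s - %s\033[0m\n",
           time.strftime('%d %b %H:%M'), severity, msg2str(msg))
  end
end

buffer = StringIO.new          # capture output instead of writing to a file
logger = Logger.new(buffer)
logger.formatter = ColorFormatter.new
logger.info('booting')
print buffer.string
```

The same ColorFormatter instance can be assigned to any Logger, which is why the original setup could share one formatter between config.logger and ActiveResource::Base.logger.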
However, we have needed to add ActiveRecord back into our application, because we needed to change our session storage from cookie store to ActiveRecord store. Since adding ActiveRecord back in, the logging no longer works nicely.
Previous Log Output:
Started GET "/audit/?key1=value1&key2=value2" for 0:0:0:0:0:0:0:1%0 at 2012-08-15 15:39:58 -0400
[15 Aug 15:39] INFO - Processing by AuditController#index as HTML
[15 Aug 15:39] INFO - Parameters: {"utf8"=>"✓", "key1"=>"value1", "key2"=>"value2"}
[15 Aug 15:39] INFO - GET http://localhost:2524/api/users/jeff/prefs/?label=language
[15 Aug 15:39] INFO - --> 200 OK 151 (55.0ms)
[15 Aug 15:39] WARN - There is no user currently logged in - retrieving default theme.
[15 Aug 15:39] INFO - GET http://localhost:2524/api/users/jeff/prefs/?label=theme
[15 Aug 15:39] INFO - --> 200 OK 151 (35.0ms)
Note: What I really enjoy about this format is that each request is logged beginning with a Started GET <URL> (or POST, PUT, etc.), followed by which controller and action are doing the processing and what parameters were sent. This is extremely helpful for debugging. This format also allowed us to print our own logging information via Rails.logger.
Current Log Output (With ActiveRecord):
[20 Aug 11:40] INFO - GET http://localhost:2524/api/users/jeff
[20 Aug 11:40] INFO - --> 200 OK 199 (144.0ms)
[20 Aug 11:40] INFO - GET http://localhost:2524/api/users/jeff/prefs/?label=language
[20 Aug 11:40] INFO - --> 200 OK 148 (12.0ms)
[20 Aug 11:40] INFO - GET http://localhost:2524/api/users/jeff/prefs/?label=theme
[20 Aug 11:40] INFO - --> 200 OK 155 (15.0ms)
Essentially all we get now is a constant stream of URL calls - it doesn't log anything coming from Rails.logger, and there is no separation between different requests - it is literally just one continuous stream of URLs and responses.
I have tried setting ActiveResource::Base.logger = ActiveRecord::Base.logger as a lot of blogs recommend, but that just made things worse - it logged a couple of URLs and then stopped logging completely unless the message was at ERROR level (and nowhere do I set the logging level, so it should still be at the default).
Any help or suggestions would be greatly appreciated!! Thanks
Maybe the implementation of lograge will help:
https://github.com/roidrage/lograge/blob/master/lib/lograge.rb#L57
I'm working on a small single-page application using HTML5. One feature is to show PDF documents embedded in the page; the documents can be selected from a list.
Now I'm trying to make Chrome (at first, and then all the other modern browsers) use the local client cache to fulfill simple GET requests for PDF documents without going through the server (other than the first time, of course). I cause the PDF file to be requested by setting the "data" property on an <object> element in HTML.
I have found a working example for XMLHttpRequest (not <object>). If you use Chrome's developer tools (Network tab) you can see that the first request goes to the server, and results in a response with these headers:
Cache-Control:public,Public
Content-Encoding:gzip
Content-Length:130
Content-Type:text/plain; charset=utf-8
Date:Tue, 03 Jul 2012 20:34:15 GMT
Expires:Tue, 03 Jul 2012 20:35:15 GMT
Last-Modified:Tue, 03 Jul 2012 20:34:15 GMT
Server:Microsoft-IIS/7.5
Vary:Accept-Encoding
The second request is served from the local cache without any server roundtrip, which is what I want.
Back in my own application, I then used ASP.NET MVC 4 and set
[OutputCache(Duration=60)]
on my controller. The first request to this controller - with URL http://localhost:63035/?doi=10.1155/2007/98732 results in the following headers:
Cache-Control:public, max-age=60, s-maxage=0
Content-Length:238727
Content-Type:application/pdf
Date:Tue, 03 Jul 2012 20:45:08 GMT
Expires:Tue, 03 Jul 2012 20:46:06 GMT
Last-Modified:Tue, 03 Jul 2012 20:45:06 GMT
Server:Microsoft-IIS/8.0
Vary:*
The second request results in another roundtrip to the server, with a much quicker response (suggesting server-side caching?) but returns 200 OK and these headers:
Cache-Control:public, max-age=53, s-maxage=0
Content-Length:238727
Content-Type:application/pdf
Date:Tue, 03 Jul 2012 20:45:13 GMT
Expires:Tue, 03 Jul 2012 20:46:06 GMT
Last-Modified:Tue, 03 Jul 2012 20:45:06 GMT
Server:Microsoft-IIS/8.0
Vary:*
The third request for the same URL results in yet another roundtrip and a 304 response with these headers:
Cache-Control:public, max-age=33, s-maxage=0
Date:Tue, 03 Jul 2012 20:45:33 GMT
Expires:Tue, 03 Jul 2012 20:46:06 GMT
Last-Modified:Tue, 03 Jul 2012 20:45:06 GMT
Server:Microsoft-IIS/8.0
Vary:*
My question is: how should I set the OutputCache attribute in order to get the desired behaviour (i.e. PDF requests fulfilled from the client cache within X seconds of the initial request)?
Or, am I not doing things right when I cause the PDF to display by setting the "data" property on an <object> element?
Clients are never obligated to cache. Each browser is free to use its own heuristic to decide whether it is worth caching an object. After all, any use of cache "competes" with other uses of the cache.
Caching is not designed to guarantee a quick response; it is designed to increase, on average, the likelihood that frequently used resources that are not changing will already be there. What you are trying to do is not what caches are designed to help with.
Based on the results you report, the version of Chrome you were using in 2012 decided it was pointless to cache an object that would expire in 60 seconds and had only been asked for once, so it threw away the first copy after using it. When you asked a second time, it gave this URL a bit more priority -- it must have remembered recent URLs and observed that this was a second request -- so it kept the copy in cache. When the third request came, it asked the server to verify that the copy was still valid (presumably because the expiration time was only a few seconds away). The server said "304 -- not modified -- use the copy you cached" and did NOT send the PDF again.
IMHO, that was reasonable cache behavior, for an object that will expire soon.
If you want to increase the chance that the PDF will stick around longer, give it a later expiration time, but say that the client must check with the server to see if it is still valid.
If using the HTTP Cache-Control header, this might be: private, max-age=3600, must-revalidate. With this, you should see a round-trip to the server, which will give a 304 response as long as the page is valid. This should be a quick response, since no data is sent back -- the browser's cached version is used.
private is optional and not related to this caching behavior -- I'm assuming that whatever this volatile PDF is, it only makes sense for the given user and/or shouldn't be hanging around for a long time in some shared location.
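As a toy illustration of that revalidation round-trip (the respond helper is made up here, not real server code), the server compares the client's If-Modified-Since header against the resource's Last-Modified time and answers 304 with no body when the cached copy is still good:

```ruby
require 'time'

# Toy conditional-GET check: 304 when the client's cached copy is still
# current, full 200 response otherwise. Not a real HTTP server.
def respond(if_modified_since, last_modified)
  if if_modified_since && Time.httpdate(if_modified_since) >= last_modified
    { status: 304, body: nil }               # no body: use the cached copy
  else
    { status: 200, body: '%PDF bytes...' }   # full payload
  end
end

LAST_MODIFIED = Time.httpdate('Tue, 03 Jul 2012 20:45:06 GMT')

fresh = respond('Tue, 03 Jul 2012 20:45:06 GMT', LAST_MODIFIED)  # revalidation
stale = respond('Mon, 02 Jul 2012 00:00:00 GMT', LAST_MODIFIED)  # outdated copy
first = respond(nil, LAST_MODIFIED)                              # first request
```

The 304 path is why must-revalidate is still fast: the round-trip happens, but the PDF itself is never re-sent.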
If you really need the performance of not talking to the server at all, then consider writing JavaScript to hide/show the DOM element holding that PDF, rather than dropping it and needing to ask for it again.
Your JavaScript code for the page is the only place that "understands" that you really want that PDF to stick around, even if you aren't currently showing it to the user.
Have you tried setting the Location property of the OutputCache attribute to "Client"?
[OutputCache(Duration=60, Location = OutputCacheLocation.Client)]
By default the Location property is set to "Any", which means the response may be cached on the client, on a proxy, or at the server.
More at MSDN: OutputCacheLocation
In my Rails app, whenever the MySQL service is stopped, I see a
/!\ FAILSAFE /!\ Tue Dec 28 14:37:59 +0530 2010 Status: 500 Internal Server Error Mysql::Error
I could not catch this exception in my rescue_action_in_public exception handler to give a custom error message. Is there any way to catch this exception?
This link appears to explain what you're asking. I can't speak to its applicability in your particular situation, however.
http://www.simonecarletti.com/blog/2009/11/re-raise-a-ruby-exception-in-a-rails-rescue_from-statement/
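To illustrate the mechanism that article builds on, here is a toy, non-Rails sketch of a rescue_from-style registry (MiniDispatcher is invented for illustration; in real Rails the FAILSAFE response is produced by the dispatcher when no registered handler catches the exception):

```ruby
# Toy version of rescue_from: register a handler per exception class and
# fall back to a generic 500 when nothing matches (the FAILSAFE case).
class MiniDispatcher
  def initialize
    @handlers = {}
  end

  def rescue_from(klass, &block)
    @handlers[klass] = block
  end

  def dispatch
    yield
  rescue StandardError => e
    pair = @handlers.find { |klass, _| e.is_a?(klass) }
    pair ? pair.last.call(e) : '500 Internal Server Error'
  end
end

d = MiniDispatcher.new
d.rescue_from(IOError) { |e| "Custom page: #{e.message}" }
d.dispatch { raise IOError, 'db down' }   # matched handler runs
d.dispatch { raise 'boom' }               # no handler: generic failsafe
```

The catch in the original question is that a Mysql::Error raised during session loading happens before the controller's rescue handlers are in play, which is why the linked re-raise technique is needed.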
In my Rails log file, I see lots of
Started GET "/" for 63.148.78.244 at Fri Sep 24 19:03:39 +0000 2010
Processing by ProductsController#index as HTML
I understand this means Rails is serving up an HTML page. However, what does this mean?
Started GET "/" for 63.148.78.244 at Fri Sep 24 18:05:51 +0000 2010
Processing by ProductsController#index as */*
Completed in 0ms
Why the */*?
It depends on the HTTP_ACCEPT header sent by the browser. The common scenario is that the browser sends a list of all MIME types it can process, and the server returns the result in one of them - typically HTML.
But in some cases it doesn't work this way - for example, if you use wget without any other parameters.
Try
wget http://yourserver
and you will see */* in your log file, which means the "browser" accepts anything you send back (it's quite obvious that wget can accept anything, as it is just storing the response in a file).
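As a toy sketch of that negotiation (negotiate and the SUPPORTED list are made up for illustration, not Rails internals): the server picks a format it supports from the Accept header, and */* matches anything, so the server's default wins.

```ruby
# MIME types this hypothetical server can render, in preference order.
SUPPORTED = ['text/html', 'application/json'].freeze

# Pick a response format from an Accept header. Quality params (";q=...")
# are stripped for simplicity; "*/*" means "send me anything".
def negotiate(accept_header)
  accepted = accept_header.split(',').map { |t| t.split(';').first.strip }
  return SUPPORTED.first if accepted.include?('*/*')
  SUPPORTED.find { |t| accepted.include?(t) }
end

negotiate('text/html,application/xhtml+xml;q=0.9')  # a typical browser
negotiate('*/*')                                    # wget's default
```

A browser's Accept list resolves to text/html, while wget's */* falls through to the server's default format, which is what shows up as */* in the Rails log.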