I've been checking my production.log today and there are a number of requests hitting my site that appear to be malicious, but I'm confused as to how they're even getting to us.
For example:
Processing PublicController#unknown_request (for 217.23.4.13 at 2009-11-09 09:15:52) [GET]
Parameters: {"anything"=>["results.aspx"], "action"=>"unknown_request", "first"=>"200", "controller"=>"public", "q"=>"\"bbs/cbbs.cgi?\" intitle:\"Book\" intext:\"2008\" site:.uz ", "count"=>"200", "FORM"=>"PERE"}
Completed in 16ms (View: 12, DB: 0) | 200 OK [http://search.live.com/results.aspx?q=%22bbs/cbbs.cgi%3F%22%20intitle%3A%22Book%22%20intext%3A%222008%22%20site%3A.uz%20&count=200&first=200&FORM=PERE]
These are happening every 30 seconds or so. Obviously, PublicController#unknown_request is the controller/action I use for 404 errors.
The access log shows these requests as:
217.23.4.13 - - [09/Nov/2009:09:57:25 +1000] "GET http://search.live.com/results.aspx?q=%22en-gb.html%22%20intitle%3A%22Home%22%20intext%3A%222006%22%20site%3A.mn%20&count=200&first=400&FORM=PERE HTTP/1.1" 200 3626 "-" "Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.1$
How are these requests even hitting my site? Does anyone have any ideas?
I think this might be the same problem you're having: http://penguinpetes.com/b2evo/index.php?p=567&more=1&c=1&tb=1&pb=1
Basically, Live/Bing is doing some sort of testing that involves hitting your site as if someone had searched for something completely irrelevant to the content you have.
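For context, a catch-all route like the one implied by the log above would look roughly like this in a Rails 2 routes file. Only the controller/action names and the "anything" glob come from the log; the rest is assumed. It shows why every URL that matches no other route, including these search.live.com requests, ends up in PublicController#unknown_request:

# config/routes.rb -- sketch of the implied catch-all route (Rails 2 syntax)
ActionController::Routing::Routes.draw do |map|
  # the application's real routes would be declared above this line
  map.connect '*anything', :controller => 'public', :action => 'unknown_request'
end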
We have a Rails app (v5.2.6) running inside Unicorn.
We are bombarded with log messages like the following:
24.33.33.243, 12.0.52.41 - - [26/Jun/2022:13:43:39 +0000] "GET /our_path HTTP/1.1" 200 - 0.0078
I have no idea where this log line is coming from; it is flooding our logs and I need a way to filter it out.
We have a custom Rails.logger in which we already filter messages matching certain regexes, but this message reaches the log from somewhere else.
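That line has exactly the shape of Rack::CommonLogger output (client IPs taken from X-Forwarded-For, "-" for the user, the request line, status, length and duration). Rack::CommonLogger writes straight to its own destination (a logger passed to it, or rack.errors by default) rather than through Rails.logger, which would explain why the custom Rails.logger filter never sees it. If that is the source, one option is to wrap whatever it writes to; a minimal sketch, assuming the middleware is mounted in config.ru and with an illustrative SKIP pattern:

# config.ru -- sketch: filter Rack::CommonLogger output before it reaches the log
class FilteredLogIO
  SKIP = %r{"GET /our_path }   # access-log lines we want to drop

  def initialize(io)
    @io = io
  end

  def write(msg)
    @io.write(msg) unless msg =~ SKIP
  end
  alias_method :<<, :write     # some Rack versions call << instead of write
end

use Rack::CommonLogger, FilteredLogIO.new($stdout)
run Rails.application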
I have an issue similar to Heroku & Rails - Varnish HTTP Cache Not Working, but the solution (wait for a while, then everything works) doesn't seem to apply - I've had the setup below for several days.
This thread on the Heroku Google group has some users with the same problem. They mention that it takes a while for everything to be cached, but my understanding is that after a while, everything should get cached, no? Or does that only apply if there is a lot of traffic?
I need some advice on where I should be looking/what I can try changing in order to get caching working properly.
My setup:
I have http://www.swingoutlondon.co.uk running on Heroku (Rails 3.0.3, Ruby 1.9.2, bamboo-mri-1.9.2) and the main index page performs a lot of database queries to return what is essentially a static page - usually taking about 2-3 seconds (yes, that's something I really do need to address, but I figure varnish caching is a quick win).
I've set the Cache-Control response header as described here, and indeed that does seem to have been set on the page:
>> curl -I http://swingoutlondon.co.uk
HTTP/1.1 200 OK
Server: nginx
Date: Sun, 13 May 2012 00:01:05 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Cache-Control: public, max-age=300
Etag: "2565201f3ae39c6a9a1f6b1fb8bbbe0a"
X-Ua-Compatible: IE=Edge,chrome=1
X-Runtime: 1.699667
Content-Length: 44224
Accept-Ranges: bytes
X-Varnish: 681634826
Age: 0
Via: 1.1 varnish
Note: Cache-Control: public, max-age=300
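For reference, the usual way to produce that header from a Rails 3 controller is expires_in; a minimal sketch, with the controller and action names assumed rather than taken from the app:

# app/controllers/home_controller.rb -- sketch
class HomeController < ApplicationController
  def index
    # produces the header shown above: Cache-Control: public, max-age=300
    expires_in 5.minutes, :public => true
  end
end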
I assume that Age: 0 indicates that it hasn't retrieved a cached copy, and indeed the command returns in the normal slow 2-3 seconds.
If I keep repeatedly trying that curl, I can occasionally get a cached copy (the page loads in under half a second and Age is greater than 0).
I must confess to not fully understanding HTTP headers, but one clue might be: when Age is greater than 0, I get two lots of digits in X-Varnish (in all other cases I only get one set):
X-Varnish: 848670407 848650521
Here's what I've checked:
the source of the page is identical each time.
I have one before_filter on that page, which sets the time the page was last updated as an instance variable.
there are a number of cookies - as far as I can see they are all set by either Google Analytics or the Twitter or Facebook buttons.
For good measure, here are my Request headers:
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-GB,en-US;q=0.8,en;q=0.6
Cache-Control:max-age=0
Connection:keep-alive
Cookie:__utma=264326157.189257391.1336869624.1336869624.1336869624.1; __utmb=264326157.2.10.1336869624; __utmc=264326157; __utmz=264326157.1336869624.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)
Host:www.swingoutlondon.co.uk
If-None-Match:"2565201f3ae39c6a9a1f6b1fb8bbbe0a"
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.168 Safari/535.19
Ah well - turns out that because Heroku uses multiple independent Varnish servers, and because traffic to Swing Out London is relatively low, I shouldn't expect to have many pages served by the caches if my max-age is only 5 minutes. Setting it to 20 or 30 minutes results in much more caching.
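In controller terms that is just a longer TTL in the expires_in call sketched above (again, placement assumed):

expires_in 30.minutes, :public => true   # Cache-Control: public, max-age=1800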
I've written a detailed blog post collecting my learnings. Thanks to Garry Shulter for helping me out with this one.
I'm running a Rails 3.0.7 app with nginx and Passenger. I have a custom 500 page that is properly displayed when the app encounters an 500 internal error, however the actual '500' status is not being output to the logs.
I'd like to be able to periodically grep the logs to find 500 errors, but I can't seem to figure out why the actual status is not being rendered. I've even looked through the Rails code, and everything looks fine. All other status codes are successfully logged.
Here is an error-free 200 response:
Completed 200 OK in 1265ms (Views: 1262.4ms | ActiveRecord: 69.6ms | Sphinx: 0.0ms)
Here is a 500 response:
Completed in 500ms
It appears that something is supposed to be there, but is not, so spaces are output instead.
Looks like this has been resolved in Rails master, but is not in the gem for 3.0.7 yet.
https://github.com/rails/rails/commit/7927fc2ff77543a0ab151ac1cb3d60318e2dfa68
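Until that fix is in a released gem, one workaround is to log the status yourself from a small piece of Rack middleware; a sketch (the class name and placement are mine, not taken from the linked commit):

# lib/status_logger.rb -- workaround sketch: record the final status so 500s can be grepped
class StatusLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    if status.to_i >= 500
      Rails.logger.error "Responded #{status} for #{env['REQUEST_METHOD']} #{env['PATH_INFO']}"
    end
    [status, headers, body]
  end
end

# config/application.rb -- sit above ShowExceptions so we see the rescued 500,
# not the raw exception
require File.expand_path('../../lib/status_logger', __FILE__)
config.middleware.insert_before ActionDispatch::ShowExceptions, StatusLogger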
I'm using Authlogic 2.1.6 for Authorization in Rails 3.0.5 and I have a session cookie problem with AJAX requests.
After a POST or PUT AJAX call I get a 401 response and a new session key. After that, every call returns a 401 response. Before the POST or PUT call, every GET call succeeds.
This doesn't happen in test mode, only in development and production mode.
Does anybody know how to avoid that?
EDIT: I think there is a problem with the forgery-protection authenticity token, because after disabling forgery protection everything worked fine.
This is a request header which will produce a 401:
Accept:*/*
Cache-Control:max-age=0
Content-Type:application/json; charset=UTF-8
Origin:http://localhost:3000
Referer:http://localhost:3000/
User-Agent:Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; de-de) AppleWebKit/533.19.4 (KHTML, like Gecko) Version/5.0.3 Safari/533.19.4
X-Requested-With:XMLHttpRequest
Rails Log entry is the following:
Started POST "/users.json" for 127.0.0.1 at Tue Apr 12 10:47:33 +0200 2011
Processing by UsersController#create as JSON
Parameters: {"user"=>{"password_confirmation"=>"[FILTERED]", "group_id"=>2, "lastname"=>"test1", "prename"=>"test1", "password"=>"[FILTERED]", "login"=>"test1"}}
Rendered text template (0.0ms)
Completed 401 Unauthorized in 19ms (Views: 0.9ms | ActiveRecord: 3.0ms)
EDIT2:
Next weird thing: if I send a Basic Auth header instead of a cookie with the session ID, I'm not getting a 401. Very weird.
A common mistake with forgery protection is forgetting to add the csrf_meta_tag helper.
Add it inside the <head> tag of your layout; without it, no Ajax request is sent with the correct token.
If the request isn't made with Ajax, use the form helpers (for example simple_form) to generate the token in your form.
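In a Rails 3.0 layout that looks roughly like this (a sketch):

<%# app/views/layouts/application.html.erb -- sketch %>
<!DOCTYPE html>
<html>
  <head>
    <title>My App</title>
    <%= csrf_meta_tag %>
    <%= javascript_include_tag :defaults %>
  </head>
  <body>
    <%= yield %>
  </body>
</html>

For a hand-rolled JSON POST like the one in the log above, the token from those meta tags still has to be sent explicitly, either as the authenticity_token parameter or (in Rails 3.0.4 and later) in the X-CSRF-Token request header, since a raw XMLHttpRequest will not pick the token up on its own.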
I have a rails app that is working fine except for one thing.
When I request something that doesn't exist (e.g. /not_a_controller_or_file.txt) and Rails throws a "No route matches..." exception, the response is this (blank line intentional):
HTTP/1.1 200 OK
Date: Thu, 02 Oct 2008 10:28:02 GMT
Content-Type: text/html
Content-Length: 122
Vary: Accept-Encoding
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive

Status: 500 Internal Server Error
Content-Type: text/html
<html><body><h1>500 Internal Server Error</h1></body></html>
I have the ExceptionLogger plugin in /vendor, though that doesn't seem to be the problem. I haven't added any error handling beyond the custom 500.html in public (though the response doesn't contain that HTML) and I have no idea where this bit of html is coming from.
So something, somewhere is adding that HTTP/1.1 200 status code too early, or the Status: 500 too late. I suspect it's Apache, because I get the appropriate HTTP/1.1 500 status line (at the top) when I use WEBrick.
My production stack is as follows:
Apache 2
Mongrel (5 instances)
RubyOnRails 2.1.1 (happens in both 1.2 and 2.1.1)
I forgot to mention, the error is caused by a "no route matches..." exception
This is a fairly old thread, but for what it's worth I found a great resource that includes a detailed description of the problem and the solution. Apparently this bug affects Rails < 2.3 when used with Mongrel.
The article that helped me understand the problem & write my own patch.
An official Rails bug ticket that includes a patch for Rails 2.2.2.
This html file is coming from Rails. It is encountering some sort of error (probably an exception of some kind, or some other unrecoverable error).
If the extra blank line between the Status: header and the actual headers is there, and not just a typo, then this would go a long way to explaining why Apache is reporting a 200 OK message.
The Status header is how Rails, PHP, or whatever tells Apache "There was an error, please return this code instead of 200 OK." The fact there is a blank line means something extra is going on and Ruby is outputting a blank line before the error output for whatever reason. Maybe it's previous output from your script. The long and short of it is though, the extra blank line means that Apache thinks "Oh, blank line, no extra headers, this is all content now.", which would be consistent with the Content-Length header you provided.
My guess for why there's a blank line would be previous script output, perhaps a stray line ending at the end of a script or template file. As to why the 500 error is happening, there isn't nearly enough info here to tell you that. Maybe a file I/O error.
Edit: Given the extra information provided by Dave about the internals, I'd say this is actually an issue with the proxying that goes on behind the scenes... I couldn't tell you exactly what though, beyond what's already been said.
This is coming from Rails itself.
http://github.com/rails/rails/tree/master/actionpack/lib/action_controller/dispatcher.rb#L60
The dispatcher is returning an error page with a status code of 200 (Success).
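A simplified illustration of that failure mode (this is not the actual Rails dispatcher code, just a sketch of the shape of the bug): the real status line has already gone out as 200 OK, so the CGI-style Status: header written afterwards lands inside the body instead of changing the response code.

# Illustration only -- reproduces the shape of the response shown in the question
def broken_error_response(socket)
  socket.write "HTTP/1.1 200 OK\r\n"            # status line already committed as 200
  socket.write "Content-Type: text/html\r\n"
  socket.write "\r\n"                            # headers end here; everything below is body
  socket.write "Status: 500 Internal Server Error\r\n"   # too late: this is body text now
  socket.write "Content-Type: text/html\r\n\r\n"
  socket.write "<html><body><h1>500 Internal Server Error</h1></body></html>"
end

broken_error_response($stdout)   # prints the same shape of response as above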