How do I block my Rails app from being hit by bots?

I'm not even sure I'm using the right terminology, whether this is actually bots or not. I didn't want to use the word 'spam' because it's not like I have comments or posts that are being created/spammed. It looks more like something is making the same repeated request to my domain, which is what made me think it was some kind of bot.
I've opened my first Rails app to the 'public', which is really a small group of users, fewer than 50 currently. That was last Friday. I started having performance issues today, so I looked at the log and I see tons of these RoutingErrors:
ActionController::RoutingError (No route matches "/portalApp/APF/pages/business/util/whichServer.jsp" with {:method=>:get}):
They are filling up the log, and I'm assuming this is causing the slowdown. Note the .jsp on the end; this is a Rails app, so I've got no URLs remotely like this. I don't even have a /portalApp path, so I don't know where this is coming from.
This is hosted at Dreamhost. I chatted with one of their support people, who suggested a couple of sites that explain how to block things using .htaccess. But it looks like you need to know the IP or domain the requests are coming from, which I don't.
How can I block this? How can I find the IP or domain from the request? Any other suggestions?
Follow-up info:
After looking at the access logs, it looks like it's not a bot. Maybe I'm not reading the logs right, but there are valid URL requests (generated from within my Flex app) coming from the same IP, so now I'm wondering if it's some kind of plugin generating the requests, but I really don't know. I'm also wondering if it's possible to block a certain URL request based on a pattern, but I suppose that's a separate question.

Old question, but for people who are still looking for alternatives, I suggest checking out Kickstarter's rack-attack gem. It allows not only blacklisting and whitelisting, but also throttling.
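For context, here is a minimal sketch of what a rack-attack setup might look like in config/initializers/rack_attack.rb. The matcher names and thresholds are made up, and depending on the gem version you may need to add the middleware yourself with config.middleware.use Rack::Attack (newer versions call the methods blocklist/throttle, older ones blacklist); check the gem's README for the exact setup.

# config/initializers/rack_attack.rb
# Illustrative only; names and limits are placeholders.

# Drop the stray .jsp requests from the question outright.
Rack::Attack.blocklist("bogus jsp paths") do |req|
  req.path.end_with?(".jsp")
end

# Throttle any single IP to 60 requests per minute across the whole app.
Rack::Attack.throttle("requests by ip", limit: 60, period: 60) do |req|
  req.ip
end

Requests matched this way are rejected at the middleware level, before they ever reach the Rails router, so they stop filling the log with RoutingErrors.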

This page seems to offer some good advice:
Here
The section on blocking by user agent may be something you could look at implementing. Is there any way you can get the user agent of the bot from your logs? If so, look for the unique part of the user agent that identifies the bot and add the following to .htaccess, replacing the relevant bits:
BrowserMatchNoCase SpammerRobot bad_bot
Order Deny,Allow
Deny from env=bad_bot
It's explained in more detail at that link, and of course, if you can't get the user agent from your logs then this will be of no use to you!
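If you can't (or would rather not) manage this in .htaccess, a similar check can be done inside the Rails stack. Below is a hedged sketch of a tiny Rack middleware that rejects requests by user agent; the BlockBadAgents class name, the SpammerRobot pattern, and the file locations are all placeholders, not anything your app already has.

# lib/block_bad_agents.rb
class BlockBadAgents
  BAD_AGENTS = /SpammerRobot/i  # replace with whatever you find in your logs

  def initialize(app)
    @app = app
  end

  def call(env)
    if env["HTTP_USER_AGENT"].to_s =~ BAD_AGENTS
      [403, { "Content-Type" => "text/plain" }, ["Forbidden"]]
    else
      @app.call(env)
    end
  end
end

You would then register it near the top of the middleware stack, e.g. config.middleware.insert_before(0, BlockBadAgents) in config/application.rb, and require the file, since lib/ may not be autoloaded by default.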

You can also update your public/robots.txt file to allow/disallow robots.
http://www.robotstxt.org/wc/robots.html

Related

How to get subpages of a URL without knowing them?

I'd like to know the subpages of a certain URL. E.g. I have the URL example.com. There might be subpages like example.com/home, example.com/help, and so on. Is it possible to get all such subpages without knowing their exact names?
I thought I could handle this problem with a web crawler, but it only crawls pages that are linked from the page itself.
I hope you understand my problem and can help me with it.
Thank you!
To answer your question: yes. Scrapy 'crawl' spiders work by setting rules, and those rules can be configured to do exactly what you're trying to do. When in doubt, always go to the docs!
A couple of things to note:
You can create a crawl spider the same way you create a generic spider:
scrapy genspider -t crawl nameOfSpider website.com
With a crawl spider, you then have to set rules to tell Scrapy where it can and can't go; how's your regex?
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']  # PART 1: Domain Restriction
    start_urls = ['http://www.example.com']
    rules = (
        Rule(LinkExtractor(allow=(r'.*',)), callback='parse_item'),  # PART 2: Call Back
    )

    def parse_item(self, response):
        # Minimal callback so the rule has something to call; do your real parsing here.
        self.logger.info('Crawled %s', response.url)
Now, I copied and pasted this from the official docs and changed it up to what it should look like for you, but I haven't run the code, so yeah... the logic is there though.
This works by getting ALL the links that it can see depending on the rules you set, and doing something with each link.
You want to restrict all other domains except the one you're scraping.
In the example I set the wildcard to literally accept every and any page in the domain... once you figure out the structure of a website, you can use logic to build out what you need.
You should take a look at the docs more often though. I have been using Scrapy for about 6-7 years and I still find myself going back to the man pages!
No, you can’t.
The way you describe the situation, the website intends those desired URLs to be secret.
Any way to find such URLs would be a security exploit that should be reported to the website owners right away so they can fix it.

Is there a way to check open/click rates and bounces of emails via Rails?

I'm currently trying to send emails from a Rails application and would like to check the open/click rates of these emails (without using any web service). Is there a gem or plugin that I can use to help me find out? Or is it even possible to do this?
Take a look here:
http://www.codingforums.com/archive/index.php/t-122920.html
I think the first method mentioned, detecting how many times an image has been viewed, would be the easiest. Then again, these are not exact solutions, but I think an exact solution would be sort of a security hole (i.e. sending an HTTP request to a foreign server as soon as you open an email).
Varatis is correct: using image tracking is the most common way this is done, and it is the way most web services provide you with analytics on the e-mails they send on your behalf. Here is another Stack Overflow question that includes an example of how you might do this in Rails.
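For illustration, here is a rough sketch of the tracking-pixel approach in a Rails app. The EmailOpensController, the EmailOpen model, the token parameter, and the route are all made-up names for this example, and open rates measured this way are only approximate because many mail clients block remote images.

# config/routes.rb (assumed): get "/email_opens/:token", to: "email_opens#show"

class EmailOpensController < ApplicationController
  # 1x1 transparent GIF, inlined so no asset lookup is needed.
  PIXEL = Base64.decode64("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

  def show
    # Record the open against whatever token you embedded in the e-mail.
    EmailOpen.create(token: params[:token], opened_at: Time.current)
    send_data PIXEL, type: "image/gif", disposition: "inline"
  end
end

The mailer view then embeds the pixel, e.g. <img src="<%= email_opens_url(token: @token) %>" width="1" height="1" alt="">. Click rates work the same way, except each link points at a redirecting controller action that logs the click before forwarding to the real URL.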

Rails request statistics gem?

I'm looking for a Rails plugin to show request statistics (number of SQL queries, time, etc.) on each request while in development mode. Something like http://getglimpse.com/ would be great. I've seen one or two of these before, but for the life of me I can't find them. Any help?
Ideally, it would show in the header or the footer of every page.
I found a few, including the one I was thinking of, and some others.
This one is amazing so far:
https://github.com/dsboulder/query_reviewer
This is the one I was thinking of:
https://github.com/josevalim/rails-footnotes
This seems to be similar to, but better than, rails-footnotes:
https://github.com/brynary/rack-bug
It may be best to mix and match these; try rack-bug and query_reviewer.
A few others are linked from query_reviewer.
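If it helps, a hedged sketch of how these might be pulled into a project's Gemfile (gem names as on the GitHub pages above; keep them in the development group so they never run in production, and check each README for any extra initializer or generator step your version needs):

# Gemfile
group :development do
  gem "rails-footnotes"   # footer on every page with queries, params, logs, etc.
  gem "query_reviewer"    # inline review of the SQL each request runs
  # gem "rack-bug"        # toolbar-style request profiling
end

Then bundle install and restart the development server.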

Blocking a site from being indexed

I am wondering: is there any (programming) way to block any search engine from indexing the content of a website?
You can specify it in robots.txt
User-agent: *
Disallow: /
As the other answers already say, robots.txt is the standard that every proper search engine adheres to. This should be enough in most cases.
If you really want to try to programmatically block malicious bots that do not listen to robots.txt, check out this question I asked a few months ago on how to tell bots apart from human visitors. You may find some good starting points there.
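One further programmatic option, not mentioned in the answers here, so treat it as an aside that assumes the site is a Rails app: well-behaved crawlers also honor a noindex directive sent by the application itself as an X-Robots-Tag response header. A minimal sketch (the before_action syntax assumes a reasonably recent Rails; older versions use before_filter):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :block_indexing

  private

  # Polite crawlers treat this header the same way they treat robots.txt or a robots meta tag.
  def block_indexing
    response.headers["X-Robots-Tag"] = "noindex, nofollow"
  end
end

Like robots.txt, this only works against crawlers that choose to respect it; it does nothing against malicious bots.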
Create a robots.txt file for your site. For more info - see this link.
Most search engine bots identify themselves using a unique user agent.
You can block specific user agents using robots.txt
Here is a list of some user agents.
Since you did not mention a programming language, I'll give my input from a PHP perspective. There is a WordPress plugin called Bad Behavior which does exactly what you are looking for: it is configured via a code script listing an array of user-agent strings, and based on what the agent is crawling on your site, the plugin checks the user-agent string and ID, or IP address, against that array and either rejects or accepts the agent.
It might be worth your while to have a peek at the code to see how it is done from a programmer's perspective.
If you're using a language other than PHP and this doesn't satisfy what you are looking for, then I apologize for posting this answer.
Hope this helps,
Best regards,
Tom.

Can the Google Search Appliance generate a report showing broken links on your site?

I know the Google Search Appliance has access to this information (as this factors into the PageRank Algorithm), but is there a way to export this information from the crawler appliance?
External tools won't work because a significant portion of the content is for a corporate intranet.
There might be something available from Google, but I have never checked. I usually use the link checker provided by the W3C. It can also detect redirects, which is useful if your server handles 404s by redirecting instead of returning a 404 status code.
You can use Google Webmaster Tools to view, among other things, broken links on your site.
This won't show you broken links to external sites though.
It seems that this is not possible. Under Status and Reports > Crawl Diagnostics there are two styles of report available: the directory drill-down 'Tree View' and the 100-URLs-at-a-time 'List View'. Some people have tried creating programs to page through the List View, but this seems to fail after a few thousand URLs.
My advice is to use your server logs instead. Make sure that 404 and referrer URL logging are enabled on your web server, since you will probably want to correct the page containing the broken link. You could then use a log file analyser to generate a broken link report.
To create an effective, long-term way of monitoring your broken links, you may want to set up a cron job to do the following (a rough script equivalent is sketched after these steps):
Use grep to extract lines containing 404 entries from the server log file.
Use sed to remove everything except requested URLs and referrer URLs from every line.
Use sort and uniq commands to remove duplicates from the list.
Output the result to a new file each time so that you can monitor changes over time.
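Here is a rough Ruby equivalent of that grep/sed/sort/uniq pipeline, offered as a sketch rather than a drop-in script. It assumes a combined-format access log (request line, status code, then the referrer in quotes); the log path, output file name, and regex are placeholders you would adjust to your server's actual format.

#!/usr/bin/env ruby
# broken_links.rb -- extract requested URL + referrer for every 404 in the log.

LOG_PATH = "/var/log/apache2/access.log"   # placeholder path
OUT_PATH = "broken_links_#{Time.now.strftime('%Y%m%d')}.txt"

# Matches lines like: "GET /missing/page HTTP/1.1" 404 1234 "http://example.com/referrer"
LINE_404 = /"(?:GET|POST|HEAD) (?<url>\S+) [^"]*" 404 \S+ "(?<referrer>[^"]*)"/

pairs = []
File.foreach(LOG_PATH) do |line|
  m = LINE_404.match(line)
  pairs << "#{m[:url]}\t#{m[:referrer]}" if m
end

# De-duplicate and write a dated file so you can watch changes over time.
File.write(OUT_PATH, pairs.uniq.sort.join("\n") + "\n")
puts "#{pairs.uniq.size} distinct broken links written to #{OUT_PATH}"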
A free tool called Xenu turned out to be the weapon of choice for this task. http://home.snafu.de/tilman/xenulink.html#Download
Why not just analyze your webserver logs and look for all the 404 pages? That makes far more sense and is much more reliable.
I know this is an old question, but you can use the Export URLs feature in the GSA admin console and then look for URLs with a state of not_found. This will show you all the URLs that the GSA has discovered but that returned a 404 when it attempted to crawl them.
