Heroku Block Requests to URL - ruby-on-rails

For some reason, a script of some sort makes a repeated request to my site, every second or faster, to a URL that doesn't exist. It's painful because it clogs up the logs (and is a small, though unnecessary, consumption of resources). Just wondering if there's a good way to deal with this for a site hosted on Heroku. The requests come from a different IP address each time.
Edit: As a note, the requests are always to the exact same URL.
Here's an example, which repeats every second or so, except from a different IP:
Feb 22 08:37:28 myApp app/web.1: ActionController::RoutingError (Not Found):
Feb 22 08:37:28 myApp app/web.1: app/controllers/application_controller.rb:31:in `not_found'
Feb 22 08:37:28 myApp app/web.1: app/controllers/my_controller.rb:141:in `my_method'
Feb 22 08:37:28 myApp app/web.1: [Exceptiontrap] Raised Exceptiontrap::Rack::Exception
Feb 22 08:37:28 myApp app/web.1: [Exceptiontrap] Catched Exception: ActionController::RoutingError
Feb 22 08:37:28 myApp app/web.1: Started GET "/aSpecificURL" for 109.242.56.44 at 2014-02-22 13:37:28 +0000
Feb 22 08:37:28 myApp heroku/router: at=info method=GET path=/aSpecificURL host=www.myApp.com request_id=9caeabcf-adcc-417f-940d-0458a81d9c32 fwd="109.242.56.44" dyno=web.1 connect=2ms service=24ms status=404 bytes=1632

You can't block specific requests through Heroku itself. It sounds to me like someone is scanning your app for security vulnerabilities.
You could set up Cloudflare to help block some of the requests, but overall this is pretty common and not something to worry about.
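If the log noise itself is what bothers you, another option is to reject the request at the Rack layer before it reaches the Rails router. A minimal sketch using the rack-attack gem (the path below is a placeholder for your actual URL):

# Gemfile
gem "rack-attack"

# config/initializers/rack_attack.rb
Rack::Attack.blocklist("repeated scanner URL") do |req|
  # a truthy return value blocks the request with a 403 before it hits Rails
  req.path == "/aSpecificURL"
end

# older rack-attack versions are not inserted automatically; you may also need
# config.middleware.use Rack::Attack in config/application.rb

This still costs a request on the dyno (the Heroku router has already forwarded it), but it keeps the RoutingError stack traces out of the logs; a service like Cloudflare is still the only way to stop the traffic before it reaches Heroku at all.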

Related

Someone trying to get into my server?

I hosted my Rails application last week. Today I was going through our log file and noticed lots of requests like this.
I, [2016-03-14T00:42:18.501703 #21223] INFO -- : Started GET "/testproxy.php" for 185.49.14.190 at 2016-03-14 00:42:18 -0400
F, [2016-03-14T00:42:18.510616 #21223] FATAL -- :
ActionController::RoutingError (No route matches [GET] "/testproxy.php"):
Someone is trying to reach testproxy.php from different IP addresses. Some IPs are from Poland and others from Hong Kong. Am I being attacked by someone? What are my options to protect myself?
Here are other outputs from log file:
I, [2016-03-14T03:09:24.945467 #15399] INFO -- : Started GET "/clientaccesspolicy.xml" for 107.22.223.242 at 2016-03-14 03:09:24 -0400
F, [2016-03-14T03:09:24.949328 #15399] FATAL -- :
ActionController::RoutingError (No route matches [GET] "/clientaccesspolicy.xml"):
A different IP address:
I, [2016-03-14T16:03:47.793731 #15399] INFO -- : Started GET "/testproxy.php" for 178.216.200.48 at 2016-03-14 16:03:47 -0400
F, [2016-03-14T16:03:47.818519 #15399] FATAL -- :
ActionController::RoutingError (No route matches [GET] "/testproxy.php"):
search.php
I, [2016-03-14T19:41:14.261843 #15399] INFO -- : Started GET "/forum/search.php" for 164.132.161.67 at 2016-03-14 19:41:14 -0400
F, [2016-03-14T19:41:14.266563 #15399] FATAL -- :
ActionController::RoutingError (No route matches [GET] "/forum/search.php"):
forum/index.php
I, [2016-03-15T10:54:55.254785 #26469] INFO -- : Started GET "/forum/index.php" for 164.132.161.56 at 2016-03-15 10:54:55 -0400
F, [2016-03-15T10:54:55.266456 #26469] FATAL -- :
ActionController::RoutingError (No route matches [GET] "/forum/index.php"):
phpMyAdmin/scripts/setup.php
I, [2016-03-15T13:21:36.862918 #26469] INFO -- : Started GET "/phpMyAdmin/scripts/setup.php" for 103.25.73.234 at 2016-03-15 13:21:36 -0400
F, [2016-03-15T13:21:36.867050 #26469] FATAL -- :
ActionController::RoutingError (No route matches [GET] "/phpMyAdmin/scripts/setup.php"):
another setup.php
I, [2016-03-15T13:21:37.452097 #26469] INFO -- : Started GET "/pma/scripts/setup.php" for 103.25.73.234 at 2016-03-15 13:21:37 -0400
F, [2016-03-15T13:21:37.453647 #26469] FATAL -- :
ActionController::RoutingError (No route matches [GET] "/pma/scripts/setup.php"):
myadmin/scripts/setup.php
I, [2016-03-15T13:21:38.034283 #26469] INFO -- : Started GET "/myadmin/scripts/setup.php" for 103.25.73.234 at 2016-03-15 13:21:38 -0400
F, [2016-03-15T13:21:38.041563 #26469] FATAL -- :
ActionController::RoutingError (No route matches [GET] "/myadmin/scripts/setup.php"):
And lots of other stuff. Please tell me how I can protect myself from these attacks.
This is commonplace when you are running a public server. Here is an excerpt of my home server's auth.log:
Mar 14 19:22:36 hotdog sshd[65937]: Received disconnect from 181.214.92.11: 11: Bye Bye [preauth]
Mar 14 19:22:37 hotdog sshd[65939]: Invalid user ubnt from 181.214.92.11
Mar 14 19:22:37 hotdog sshd[65939]: input_userauth_request: invalid user ubnt [preauth]
Mar 14 19:22:37 hotdog sshd[65939]: Received disconnect from 181.214.92.11: 11: Bye Bye [preauth]
Mar 14 19:22:38 hotdog sshd[65941]: Invalid user support from 181.214.92.11
Mar 14 19:22:38 hotdog sshd[65941]: input_userauth_request: invalid user support [preauth]
Mar 14 19:22:38 hotdog sshd[65941]: Received disconnect from 181.214.92.11: 11: Bye Bye [preauth]
Mar 14 19:22:39 hotdog sshd[65943]: Invalid user oracle from 181.214.92.11
Mar 14 19:22:39 hotdog sshd[65943]: input_userauth_request: invalid user oracle [preauth]
Mar 14 19:22:39 hotdog sshd[65943]: Received disconnect from 181.214.92.11: 11: Bye Bye [preauth]
Mar 14 19:22:40 hotdog sshd[65945]: Received disconnect from 181.214.92.11: 11: Bye Bye [preauth]
Mar 14 19:24:04 hotdog sshd[65947]: fatal: Read from socket failed: Operation timed out [preauth]
Mar 14 20:01:19 hotdog sshd[66032]: Received disconnect from 183.3.202.102: 11: [preauth]
Mar 14 20:40:17 hotdog sshd[66092]: Invalid user cacti from 199.217.117.71
Mar 14 20:40:17 hotdog sshd[66092]: input_userauth_request: invalid user cacti [preauth]
Mar 14 20:40:17 hotdog sshd[66092]: Connection closed by 199.217.117.71 [preauth]
Mar 14 21:32:09 hotdog sshd[66188]: Received disconnect from 183.3.202.102: 11: [preauth]
Mar 14 22:01:59 hotdog sshd[66256]: Invalid user user1 from 199.217.117.71
Mar 14 22:01:59 hotdog sshd[66256]: input_userauth_request: invalid user user1 [preauth]
Mar 14 22:02:00 hotdog sshd[66256]: Connection closed by 199.217.117.71 [preauth]
Mar 14 22:17:57 hotdog sshd[66280]: Did not receive identification string from 14.182.117.161
As you can see, people are constantly trying to break into my server by guessing usernames. Since the server only accepts public-key login, not passwords, I believe I am fairly secure against these particular attacks.
The same applies to your PHP requests: they are trying to find a PHP endpoint they can run some canned exploit against. You can use tools like fail2ban, which help with rate-limiting, but these attacks will always be present on a public server. The only real defence is to ensure your software can resist them.
Some general common-sense tips:
Don't run more services than you need, as any one service could open your server to attack. Check which ports you have open with nmap.
Check that your apache/nginx config doesn't allow execution of more (PHP) files than necessary.
Update your software continuously. Most of these attacks are automated and thus rely on published exploits in common packages.
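If the probes are merely cluttering a Rails log, one way to quiet them is a low-priority catch-all route that answers 404 without raising ActionController::RoutingError. This is only a sketch (ErrorsController is a hypothetical name, and the route must stay at the very bottom of config/routes.rb so real routes still win):

# config/routes.rb -- keep this as the very last route
match "*unmatched", to: "errors#not_found", via: :all

# app/controllers/errors_controller.rb
class ErrorsController < ActionController::Base
  def not_found
    head :not_found  # plain 404, no exception raised, no stack trace in the log
  end
end

Note that this also swallows the stack traces for genuinely broken links in your own app, so it trades debugging detail for quieter logs.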
The IP address 183.3.202.102, and some others from the same subnet, quite frequently appear in the log of one of my honeypots.
It suddenly stopped though. I guess someone finally submitted an abuse report and had them banned.

Invalid authenticity token (Rails, Heroku, Puma)

I was playing around with my Rails app (Puma server, deployed on Heroku) in production and got the authenticity error when submitting a long form (when I render the form, the auth token seems to be okay; the value is visible).
Usually the form works well; this is only the second time it has happened. In both cases a long time (~1 hour) passed between rendering and submitting the form.
I couldn't find any answer to this problem. Is there a time limit for submitting the form based on when it was rendered? Or is this a browser issue? It's important that this form can stay open for a while, since it's pretty long and users may want to think before they fill it out and submit it.
Mar 01 07:36:32 appfaskyn app/web.1: Started POST "/products" for 141.101.96.195 at 2016-03-01 15:36:31 +0000
Mar 01 07:36:32 appfaskyn app/web.1: Processing by ProductsController#create as HTML
Mar 01 07:36:32 appfaskyn app/web.1: Can't verify CSRF token authenticity
Mar 01 07:36:32 appfaskyn app/web.1: ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken):
Mar 01 07:36:32 appfaskyn app/web.1: vendor/bundle/ruby/2.2.0/gems/actionpack-4.2.4/lib/action_controller/metal/request_forgery_protection.rb:181:in `handle_unverified_request'
Mar 01 07:36:32 appfaskyn app/web.1: vendor/bundle/ruby/2.2.0/gems/actionpack-4.2.4/lib/action_controller/metal/request_forgery_protection.rb:209:in `handle_unverified_request'
Mar 01 07:36:32 appfaskyn app/web.1: vendor/bundle/ruby/2.2.0/gems/devise-3.5.6/lib/devise/controllers/helpers.rb:257:in `handle_unverified_request'
Mar 01 07:36:32 appfaskyn app/web.1: vendor/bundle/ruby/2.2.0/gems/actionpack-4.2.4/lib/action_controller/metal/request_forgery_protection.rb:204:in `verify_authenticity_token'
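The authenticity token rendered into the form is verified against the session, so if the session cookie expires or is rotated (for example by a Devise timeout) during the hour between rendering and submitting, verification fails. Here is a sketch of one way to fail more gracefully; it doesn't extend the session, it only avoids the raw error page (new_product_path is a hypothetical helper for re-rendering the form):

# app/controllers/application_controller.rb
rescue_from ActionController::InvalidAuthenticityToken do
  # send the user back to the form instead of showing the error page
  redirect_to new_product_path, alert: "Your session expired, please submit the form again."
end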

Unicorn on Heroku throwing weird errors which cause H13 server errors. What's killing my unicorns?

I have a Rails 4 app running on Heroku that has suddenly started to throw a lot of H13 errors. I've traced it through the logs and see a lot of errors from Unicorn complaining that a "memoized object is not a string". The worker processes are then reaped after a SIGIOT. Here's the exact error:
unicorn_http.rl:295: write_value: Assertion `rb_type((VALUE)(f)) == RUBY_T_STRING && "memoized object is not a string"' failed.
The errors can also look like this:
unicorn_http.rl:296: write_value: Assertion `(!!((!(((VALUE)(f) & RUBY_IMMEDIATE_MASK) || !!(((VALUE)(f) & ~((VALUE)RUBY_Qnil)) == 0)) && (int)(((struct RBasic*)(f))->flags & RUBY_T_MASK) != RUBY_T_NODE)?(((struct RBasic*)(f))->flags&((((VALUE)1)<<11))):((((int)(long)(f))&RUBY_FIXNUM_FLAG)||((((int)(long)(f))&RUBY_FLONUM_MASK) == RUBY_FLONUM_FLAG)))) && "unfrozen object"' failed.
It happens on many different routes and I can't seem to figure out what is going on. I run a very stock Unicorn config:
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 2)
timeout 29
preload_app true

before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end
And here is snippet of Heroku logs with the error:
Sep 15 15:38:42 #{app} heroku/router: at=error code=H13 desc="Connection closed without response" method=GET path="#{path}" host=host.org request_id=6af16b9e-02ff-494e-8c8c-11f060f3b8fe fwd="109.63.24.000" dyno=web.1 connect=1ms service=15ms status=503 bytes=1187
Sep 15 15:38:42 #{app} app/web.1: unicorn worker[1] -p 4635 -c ./config/unicorn.rb: unicorn_http.rl:295: write_value: Assertion `rb_type((VALUE)(f)) == RUBY_T_STRING && "memoized object is not a string"' failed.
Sep 15 15:38:42 #{app} app/web.1: I, [2014-09-15T22:38:41.878914 #675] INFO -- : worker=1 ready
Sep 15 15:38:42 #{app} app/web.1: E, [2014-09-15T22:38:41.835934 #2] ERROR -- : reaped #<Process::Status: pid 672 SIGIOT (signal 6)> worker=1
...
Sep 15 15:26:52 #{app} app/web.1: unicorn worker[0] -p 4635 -c ./config/unicorn.rb: unicorn_http.rl:295: write_value: Assertion `rb_type((VALUE)(f)) == RUBY_T_STRING && "memoized object is not a string"' failed.
Sep 15 15:26:52 #{app} app/web.1: E, [2014-09-15T22:26:52.483577 #2] ERROR -- : reaped #<Process::Status: pid 487 SIGIOT (signal 6)> worker=0
Sep 15 15:26:52 #{app} heroku/router: at=error code=H13 desc="Connection closed without response" method=GET path="#{path}" host=host.org request_id=92fbf67f-56b1-4144-917a-c3d29171f631 fwd="76.234.45.000" dyno=web.1 connect=4ms service=18ms status=503 bytes=1208
Request queueing is normally under 10 ms on this app, and this seems to have started only recently, but I haven't changed the Unicorn setup in ages. The app is a Heroku app running Rails 4.1.1 with Ruby 2.0.0 on Unicorn 4.8.2. It uses memcached with Dalli to do some minimal caching. The requests fail with the H13 error immediately, so I'm not hitting the 29-second timeout set in the unicorn.rb config.

I'm having a huge Rails log, is it normal?

I have a huge Rails application log. Is it normal?
768 MB for the production log!
root#demo3:/home/canvas/canvas/log# ls -lh
total 960M
-rw-r--r-- 1 canvas canvas 192M Sep 28 12:37 delayed_job.log
-rw-rw-r-- 1 canvas canvas 265 Sep 22 08:57 development.log
-rw-r--r-- 1 canvas canvas 910K Sep 28 12:36 newrelic_agent.log
-rw-r--r-- 1 canvas canvas 768M Sep 28 12:37 production.log
-rw-r--r-- 1 root root 26K Sep 28 11:00 super_delayed_job_err.log
-rw-r--r-- 1 root root 113K Sep 22 14:07 super_delayed_job.log
Snippet from the log file:
[- 1e1f92f0-293e-0132-2906-00163c067c2e] Cache hit: _account_lookup2/1 ({})
[- 1e1f92f0-293e-0132-2906-00163c067c2e] Cache hit: settings_for_plugin2/sessions ({})
[- 208bd370-293e-0132-2906-00163c067c2e]
Processing UsersController#user_dashboard (for 54.248.250.232 at 2014-09-28 14:06:04) [GET]
[- 208bd370-293e-0132-2906-00163c067c2e] Parameters: {"controller"=>"users", "action"=>"user_dashboard"}
[- 208bd370-293e-0132-2906-00163c067c2e] Redirected to http://subdomain.example.com/login
[- 208bd370-293e-0132-2906-00163c067c2e] Filter chain halted as [:require_user] rendered_or_redirected.
[- 208bd370-293e-0132-2906-00163c067c2e] Completed in 3ms (DB: 0) | 302 Found [http://demo3.iqraalms.com/]
[- 208bd370-293e-0132-2906-00163c067c2e] [STAT] 903612 903612 0 903612 0.010000000000000231 0
Any idea how to optimise it?
You could raise the log level (warn, error or fatal) in your config file to get less data, as described in the Rails guide on debugging. Or, as sjaime pointed out in his comment, logrotate is a utility that will solve this problem for you (compress your log every day/week/month or when it reaches a certain size; delete/email/keep old archives; ...).
One thing that will blow up your log tremendously is asset errors (missing fonts are a classic there). Make sure you have none of these. Other than that, with log level info and a couple of users on your site, your log will grow quickly.
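Raising the level is a one-line change in the environment config; a minimal sketch, with :warn as just one reasonable choice:

# config/environments/production.rb
config.log_level = :warn  # only warn, error and fatal messages are written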

Hide rendering of partials from rails logs

I believe the default behavior of Rails logging in production is to not log the rendering of every partial. Partials should be logged in development but not in production.
However, I'm seeing this in production and I'm not sure how to remove it; my logs are too noisy. My production environment is Heroku running Unicorn, and I use Papertrail to view my logs. I know Unicorn does some wonky stuff with logs, and to get them working properly in the first place I had to add this to my production.rb:
config.logger = Logger.new(STDOUT)
config.logger.level = Logger.const_get('INFO')
(Explained here: http://help.papertrailapp.com/kb/configuration/unicorn)
But even with log_level INFO I'm seeing huge blocks of these in all my logs:
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_caption.html.erb (0.7ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_rights.html.erb (2.1ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_category.html.erb (4.8ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_caption.html.erb (0.3ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_rights.html.erb (0.4ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_category.html.erb (4.4ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_caption.html.erb (0.3ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_rights.html.erb (0.3ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_category.html.erb (1.8ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_caption.html.erb (0.4ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_rights.html.erb (4.6ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_category.html.erb (2.1ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_caption.html.erb (0.3ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_rights.html.erb (0.4ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_category.html.erb (4.1ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_caption.html.erb (0.2ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_rights.html.erb (1.8ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_category.html.erb (6.0ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_caption.html.erb (0.5ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_rights.html.erb (0.8ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_category.html.erb (1.9ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_caption.html.erb (0.3ms)
Jun 25 22:15:15 tacktile app/web.1: Rendered photos/pieces/_rights.html.erb (0.7ms)
For Rails 4 (at least):
Try this in your config/environments/development.rb
config.action_view.logger = nil
Use Lograge; it removes rendering times for partials.
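A minimal setup sketch: Lograge collapses each request into a single log line, which also drops the per-partial "Rendered ..." entries.

# Gemfile
gem "lograge"

# config/environments/production.rb
config.lograge.enabled = true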
I got the following answer from Papertrail:
I think the quickest way to deal with this is to use our log filtering
functionality. That'll let you drop anything that matches a regex and
will save you from having to make any app configuration changes.
In the longer term, you'll probably want to silence these messages at
source. Lograge is probably your best bet for that. You may find it
also removes a few other bits but give it a go and let me know what
you think.
I know this is probably irrelevant to you at the moment, but for
future use you might also find some other useful tips here. It covers
lograge, removing static asset requests and unnecessary actions.
Let me know if you need help with anything mentioned above.
Instead of completely disabling Action View logging (as described in another answer), I opted to change the logging level for rendering to DEBUG. This way, it can easily be omitted from production logs by setting the log level to INFO or higher.
Note: this is for Rails 5.2. I'm unsure whether it would work on other versions.
module ViewLoggingOverride
  def info(progname = nil, &block)
    logger.debug(progname, &block) if logger
  end
end

ActionView::LogSubscriber.include(ViewLoggingOverride)
Relevant Rails code:
https://github.com/rails/rails/blob/5-2-stable/actionview/lib/action_view/log_subscriber.rb
https://github.com/rails/rails/blob/5-2-stable/activesupport/lib/active_support/log_subscriber.rb#L93-L99
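With that override in place, the per-partial lines land at DEBUG, so the usual production log level hides them; a sketch:

# config/environments/production.rb
config.log_level = :info  # the now-DEBUG "Rendered ..." lines are filtered out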
This is so common that the Papertrail emails notifying you of a logging spike include a link to this exact example.
I tweaked the regex a bit:
/\A\s{3}Rendered \w+\/_.+\.erb \(\d+\.\d+ms\)\z/
PS: I always found it odd that these are printed at info level in the first place.
