Stop logging Delayed Job ActiveRecord entries in Rails

My project uses Rails 4.2.7 with delayed_job (4.0.6) and delayed_job_active_record (4.0.3). The problem we're facing is that Delayed Job logs its SQL queries every minute or so, which makes the main log files harder to read and eats up our Logmatic subscription limits. Delayed Job normally writes to its own log, but its ActiveRecord queries end up in the main log. Most of the entries look like this:
(0.8ms) SELECT COUNT(*) FROM `delayed_jobs` WHERE (failed_at is not NULL)
(0.5ms) SELECT COUNT(*) FROM `delayed_jobs` WHERE (locked_by is not NULL)
(0.7ms) SELECT COUNT(*) AS count_all, `delayed_jobs`.`queue` AS delayed_jobs_queue FROM `delayed_jobs` WHERE (run_at <= '2017-02-12 23:18:18.094189' and failed_at is NULL) GROUP BY `delayed_jobs`.`queue`
(0.8ms) SELECT COUNT(*) AS count_all, `delayed_jobs`.`priority` AS delayed_jobs_priority FROM `delayed_jobs` WHERE (run_at <= '2017-02-12 23:18:18.095817' and failed_at is NULL) GROUP BY `delayed_jobs`.`priority`
The previous developer introduced the delayed_job silencer described here:
How to ignore Delayed Job query logging in development on Rails
but it doesn't seem to work, since the queries are still logged in staging logs and in Logmatic.
Is there a way to stop logging these (or perhaps redirect them to another file if they can't be silenced completely)? Note that we still want to log queries like these when they are not delayed_job-related.

You can change the logfile that the DJ worker logs to in an initializer, like this:
Delayed::Worker.logger = Logger.new(File.join(Rails.root, 'log', 'delayed_job.log'))
This should clean up your main log.
Edit: I've just seen that this will not affect the ActiveRecord logs.
Have you tried setting the DJ log level to info instead of debug?
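For example, to drop the debug-level SQL noise and also route the worker's ActiveRecord queries into the DJ log, something like this might work (a sketch, not a tested fix; the defined?(Delayed::Command) guard is an assumption that workers are started via bin/delayed_job, which is what loads that constant):
# config/initializers/delayed_job.rb
Delayed::Worker.logger = Logger.new(File.join(Rails.root, 'log', 'delayed_job.log'))
Delayed::Worker.logger.level = Logger::INFO  # hide debug-level SQL lines

# Only swap the ActiveRecord logger inside worker processes, not web processes.
if defined?(Delayed::Command)
  ActiveRecord::Base.logger = Delayed::Worker.logger
end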

Related

ActiveRecord simple query crashing server

I have a simple query that for some reason is hanging, causing Heroku to hit max memory and crash the server. I have never seen this behavior before, so I am looking for suggestions on what might be causing it:
@city = params[:city] ? City.find(params[:city]) : City.first
SELECT "cities".* FROM "cities" WHERE "cities"."id" = ? LIMIT 1 [["id", "1"]]
Simple, but for some reason it causes that weird behavior in all environments (dev, staging, and prod).
I'm connecting to a SQLite db in dev and to Amazon RDS MySQL in staging and prod. (sqlite3 gem, mysql2 gem, Ruby 2.0.0, Rails 4.0.0)

Why can't I delete and rebuild entire search database in Heroku?

On localhost and my Heroku staging app, pg_search is working just fine.
On my Heroku production app, pg_search is still showing search results for my Footnote model (which I no longer want to include) in addition to the Reference model (which I still want to include).
I have...
# /app/models/reference.rb
class Reference < ActiveRecord::Base
  has_paper_trail
  include PgSearch
  multisearchable :against => [:source_text, :citation]
end
I have removed include PgSearch from my /app/models/footnote.rb.
My process starts with manually allocating some workers to the app (which I normally do automatically using workless).
Then I do:
$ heroku run rails console --app testivate
Running `rails console` attached to terminal... up, run.2840
Connecting to database specified by DATABASE_URL
Loading production environment (Rails 3.2.11)
irb(main):001:0> PgSearch::Footnote.delete_all
NameError: uninitialized constant PgSearch::Footnote
irb(main):002:0> PgSearch::Reference.delete_all
irb(main):003:0> PgSearch::Multisearch.rebuild(Reference)
(1.3ms) BEGIN
SQL (56.2ms) DELETE FROM "pg_search_documents" WHERE "pg_search_documents"."searchable_type" = 'Reference'
(1279.8ms) INSERT INTO "pg_search_documents" (searchable_type, searchable_id, content, created_at, updated_at)
SELECT 'Reference' AS searchable_type,
"references".id AS searchable_id,
(
coalesce("references".source_text::text, '') || ' ' || coalesce("references".citation::text, '')
) AS content,
'2013-04-02 01:05:09.330250' AS created_at,
'2013-04-02 01:05:09.330250' AS updated_at
FROM "references"
(33.8ms) COMMIT
=> #<PG::Result:0x00000006c849b0>
irb(main):004:0>
The last step returns its result, #<PG::Result:0x00000006c849b0>, instantly, despite the fact that I've just asked pg_search to index 53 documents of between 3000 and 15,000 words each.
Can I assume the rebuild process is underway? Is there some way of confirming when it's complete so I can scale back the worker processes? (And do I need to allocate worker processes to this anyway?)
BTW, the following approach also takes just seconds, which doesn't seem right:
$ heroku run rake pg_search:multisearch:rebuild[Reference] --app testivate
Running `rake pg_search:multisearch:rebuild[Reference]` attached to terminal... up, run.3303
Connecting to database specified by DATABASE_URL
(1.5ms) BEGIN
SQL (95.4ms) DELETE FROM "pg_search_documents" WHERE "pg_search_documents"."searchable_type" = 'Reference'
(1070.3ms) INSERT INTO "pg_search_documents" (searchable_type, searchable_id, content, created_at, updated_at)
SELECT 'Reference' AS searchable_type,
"references".id AS searchable_id,
(
coalesce("references".source_text::text, '') || ' ' || coalesce("references".citation::text, '')
) AS content,
'2013-04-02 01:10:07.355707' AS created_at,
'2013-04-02 01:10:07.355707' AS updated_at
FROM "references"
(103.3ms) COMMIT
After all these attempts, I still get instances of the Footnote model in my search results. How can I get rid of them?
It's a bit harsh, but heroku run rake db:migrate:redo did the trick.
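A gentler alternative is to delete only the stale Footnote documents and then sanity-check the Reference rebuild. This is just a sketch: PgSearch::Document is the model pg_search uses for the pg_search_documents table, but verify against your pg_search version. Note that the instant return in the transcript above is expected: the rebuild is a single INSERT ... SELECT statement, and the COMMIT shows it completed before the console returned.
# Remove documents left behind by the old Footnote indexing.
PgSearch::Document.where(:searchable_type => 'Footnote').delete_all

# Sanity check: this should match the number of Reference rows.
PgSearch::Document.where(:searchable_type => 'Reference').count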

Smart way to disable/enable logs on demand in Rails

I'm running a Rails app (v 3.1.10) on the Heroku Cedar stack, and the Papertrail add-on is going crazy because of the size of the logs.
My app is really verbose and the logs are getting huge (really huge):
Sometimes because I serialize a lot of data in one field, which makes a huge SQL request. In my model I have many:
serialize :a_game_data, Hash
serialize :another_game_data, Hash
serialize :a_big_set_of_game_data, Hash
[...]
Thanks to my AS3 Flash app working with big sets of JSON...
Sometimes because there's a lots of partials to render:
Rendered shared/_flash_message.html.erb (0.1ms)
Rendered shared/_header_cart_info.html.erb (2.7ms)
Rendered layouts/_header.html.erb (19.4ms)
[...]
It's not the big issue here, but I've added this case too because Jamiew handles it; see below...
Sometimes because there's lots of sql queries on the same page:
User Load (2.2ms) SELECT "users".* FROM "users" WHERE "users"."id" = 1 LIMIT 1
Course Load (5.3ms) SELECT "courses".* FROM "courses" WHERE (id = '1' OR pass_token = NULL)
Session Load (1.3ms) SELECT "sessions".* FROM "sessions" WHERE "sessions"."id" = 1 LIMIT 1
Training Load (1.3ms) SELECT "trainings".* FROM "trainings" WHERE "trainings"."id" = 1 LIMIT 1
[...]
It's a big (too complex) app we've got here... yeah...
Sometimes because there's a lots of params:
Parameters: {"_myapp_session"=>"BkkiJTBhYWI1MUVlaVdtbE9Eb1Y2I5BjsAVEkiEF9jc3JmX3Rva2VlYVWZyM2I0dEZaR1YwNXFjZhZTQ1uBjsARkkiUkiD3Nlc3Npb25faWQGOgZFRhcmRlbi51c2yN1poVm8vdWo3YTlrdUZzVTA9BjsARkkiH3dAh7CMTQ0Yzc4ZDJmYzg5ZjZjOGQ5NVyLmFkbWluX3VzZXIua2V5BjsAVFsISSIOQWRtaW5Vc2VyBjsARlsGaQZJIiIkMmEkMTAkcmgvQ2Rwc0lrYzFEbGJFRG9jMnZvdQY7AFRJIhl3YXJkZW4udXNlci51c2VyLmtleQY7AFRbCEkiCVVzZXIGOwBGWwZpBkkiIiQyYSQxMCRBUFBST2w0aWYxQmhHUVd0b0V5TjFPBjsAVA==--e4b53a73f6b622cfe7550b2ee12678712e2973c7", "authenticity_token"=>"EeiWmlODoYXUfr3b4tFZGV05qr7ZhVo/uj7a9kuFsU0=", "utf8"=>"✓", "locale"=>"fr", "id"=>"1", "a"=>1, "a"=>1, "a"=>1, "a"=>1, "a"=>1, "a"=>1, [...] Hey! You've reached the end of the line but it's not the end of the parameters...}
The AS3 Flash app sends big JSON payloads to the controller...
I didn't mention the (in)famous "Assets pipeline logging problem" because now I'm using the quiet_assets gem to handle this:
https://github.com/evrone/quiet_assets
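(For reference, wiring it up is just a Gemfile entry; a sketch, assuming you only want it in development:)
# Gemfile
group :development do
  gem 'quiet_assets'
end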
So... what did I try?
1: Dennis Reimann's middleware solution:
http://dennisreimann.de/blog/silencing-the-rails-log-on-a-per-action-basis/
2: Spagalocco's gem (inspired by solution #1):
https://github.com/spagalloco/silencer
3: jamiew's monkeypatches (inspired by solution #1 + a bonus):
https://gist.github.com/1558325
Nothing is really working as expected but it's getting close.
I would rather use a method in my ApplicationController like this:
def custom_logging(opts={}, show_logs=true)
  disable_logging unless show_logs
  remove_sql_requests_from_logs if opts[:remove_sql_requests]
  remove_rendered_from_logs if opts[:remove_rendered]
  remove_params_from_logs if opts[:remove_params]
  [...]
end
...and call it in any controller method: custom_logging({:remove_sql_requests=>1, :remove_rendered=>1})
You got the idea.
So, is there any good resource online to handle this?
Many thanks for your advice...
I'm the author of the silencer gem mentioned above. Are you looking to filter logging in general, or for a particular action? The silencer gem handles the latter problem. While you can certainly use it in different ways, it's mostly intended for particular actions.
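Basic usage looks roughly like this (a sketch from the gem's README conventions; check the README for your version, and note the silenced path here is a made-up example):
# config/application.rb (inside the Application class)
require 'silencer/logger'

# Swap Rails' request logger for one that skips the listed paths.
config.middleware.swap Rails::Rack::Logger, Silencer::Logger, :silence => ["/noisy/action"]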
It sounds like what you are looking for is less verbose logging. I would recommend you take a look at lograge. I use it in production in most of my Rails apps and have found it to be quite useful.
If you need something more specialized, you may want to look at implementing your own LogSubscriber which is essentially the lograge solution.
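Enabling lograge is a one-liner once the gem is installed (a sketch; MyApp is a placeholder for your application's module name):
# config/environments/production.rb
MyApp::Application.configure do
  # Collapse each request's many log lines into a single summary line.
  config.lograge.enabled = true
end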
Set your log level in the Heroku environment.
View your current log level:
heroku config
You most likely have "Info", which is just a lot of noise
Change it to warn or error
heroku config:add LOG_LEVEL=WARN
Also, when viewing the logs, specify only the "app" source:
heroku logs --source app
I personally append --tail to see the logs live:
heroku logs --source app --tail

Heroku Delayed Job error

Google wasn't very helpful on this one, so I was hoping someone here might have an idea.
My app works fine on my server (and was recently working fine on Heroku), but an hour or so ago, when I went to open a certain page (one that displays information affected by a delayed_job I have running), I got a Heroku error, and the logs say (among many other things):
dyno=web.1 queue=0 wait=5ms service=119ms status=500 bytes=643
2012-11-07T23:17:44+00:00 app[worker.1]: (1.3ms) SELECT COUNT(*) AS count_all, priority AS priority FROM "delayed_jobs" WHERE (run_at < '2012-11-07 23:17:44.830238' and failed_at is NULL) GROUP BY priority
2012-11-07T23:17:44+00:00 app[worker.1]: (2.5ms) SELECT COUNT(*) FROM "delayed_jobs" WHERE (failed_at is not NULL)
2012-11-07T23:17:44+00:00 app[worker.1]: (1.2ms) SELECT COUNT(*) FROM "delayed_jobs" WHERE (locked_by is not NULL)
Obviously a problem with the delayed_job, but I'm not sure where to start looking, particularly considering that it was working before and still works on my server.
Any ideas what the problem is or how to start debugging?
It's likely in the job that's being processed by DJ, which may not inject an error message into your log. You can search your log for specific worker messages and try to see the error. The problem with even doing this is that you need to know when the job will be running, or else you might be looking in the wrong part of your logs.
heroku logs | grep worker
Secondly, you need to figure out why your view rendered an error. Since your view is rendered by your app, not your worker, something is out of sync in your app. Figure out exactly what is wrong, and that may point to what your worker did or did not do.
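To make failures inside jobs show up in the log at all, Delayed Job's error lifecycle hook can help (a sketch; MyJob is a hypothetical custom job class):
class MyJob
  def perform
    # ... the actual work ...
  end

  # Called by Delayed Job when perform raises; log the failure explicitly.
  def error(job, exception)
    Rails.logger.error("[MyJob] #{exception.class}: #{exception.message}")
  end
end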

Cucumber, Capybara & Rails 2.3.2 - FactoryGirl not committing records to database, but works from console?

I have a strange problem: when I run my (only/first) Cucumber test, part of which creates a new entry in my Countries table using:
Factory.create(:country)
the models don't get committed to my database (MySQL 5) and my test fails as the view tries to load this data. Here is a snippet from my test.log:
SQL (0.1ms) SAVEPOINT active_record_1
Country Create (0.1ms) INSERT INTO `countries` (`name`, `country_code`, `currency_code`) VALUES('Ireland', 'IE', 'EUR')
SQL (0.1ms) RELEASE SAVEPOINT active_record_1
SQL (0.1ms) SAVEPOINT active_record_1
However, when I load up the Rails console and run exactly the same command, i.e. Factory.create(:country), the records do get committed to the database. Here is the output from test.log:
SQL (3.5ms) BEGIN
Country Create (0.2ms) INSERT INTO `countries` (`name`, `country_code`, `currency_code`) VALUES('Ireland', 'IE', 'EUR')
SQL (1.1ms) COMMIT
From env.rb
Cucumber::Rails::World.use_transactional_fixtures = false
Any advice is very much appreciated; I've spent the last two days trying to figure this out with no success.
The second line of what you posted from your test.log file shows the record being inserted. I see that you've turned off transactional fixtures, which is fine.
Are you using the database_cleaner gem by chance? If so, it's going to essentially roll your database back to its original state after the tests are done. This means that unless you pause your tests while they're running (after the data is inserted), you'll never see the data in the DB, because it gets removed after the test suite runs.
I don't know that this is what's causing the issue at its root, but it would explain why you see the record in the DB when you run just that one command from the console, but not after you run your tests. Test data is supposed to be removed after the suite executes so that your tests have a fresh, consistent starting point for each run. It's an important part of making sure that your tests run in the same environment every time, and can therefore be counted on to give reliable results.
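If database_cleaner is involved, its classic Cucumber wiring in features/support/env.rb looks something like this (a sketch; the :truncation strategy shown is an assumption, and if yours is set to :transaction, that would also explain the SAVEPOINT/rollback pattern in your first log snippet):
# features/support/env.rb
begin
  require 'database_cleaner'
  require 'database_cleaner/cucumber'
  # :truncation commits real rows and wipes tables between scenarios;
  # :transaction wraps each scenario in a rollback, so nothing is ever committed.
  DatabaseCleaner.strategy = :truncation
rescue LoadError
  raise "Add database_cleaner to your Gemfile (:test group) to use it."
end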
