Google wasn't very helpful on this one, so I was hoping someone here might have an idea.
My app works fine on my server (and was recently working fine on Heroku), but an hour or so ago, when I went to open a certain page (one that displays information affected by a delayed_job I have running), I got a Heroku error, and the logs say (among many other things):
dyno=web.1 queue=0 wait=5ms service=119ms status=500 bytes=643
2012-11-07T23:17:44+00:00 app[worker.1]: (1.3ms) SELECT COUNT(*) AS count_all, priority AS priority FROM "delayed_jobs" WHERE (run_at < '2012-11-07 23:17:44.830238' and failed_at is NULL) GROUP BY priority
2012-11-07T23:17:44+00:00 app[worker.1]: (2.5ms) SELECT COUNT(*) FROM "delayed_jobs" WHERE (failed_at is not NULL)
2012-11-07T23:17:44+00:00 app[worker.1]: (1.2ms) SELECT COUNT(*) FROM "delayed_jobs" WHERE (locked_by is not NULL)
Obviously a problem with the delayed_job, but I'm not sure where to start looking, particularly since it was working before and still functions on my server.
Any ideas what the problem is or how to start debugging?
The problem is likely in the job being processed by DJ, which may not inject an error message into your log. You can search your log for specific worker messages and try to spot the error. The catch even with this is that you need to know when the job runs, or you might be looking in the wrong part of your logs:
heroku logs | grep worker
Secondly, you need to figure out why your view rendered an error. Since your view is rendered by your app, and not your worker, something is out of sync in your app. Figure out exactly what is wrong, and that may point to what your worker did/did not do.
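If the job itself swallows the exception, one way to surface it is Delayed Job's error hook on the job object. A minimal sketch, assuming a custom job class (MyJob and the message format are illustrative, not from the question's app):

class MyJob
  def perform
    # ... the work that is currently failing silently ...
  end

  # Delayed Job calls this hook when perform raises, so the exception
  # ends up in your log where grep can find it.
  def error(job, exception)
    Rails.logger.error "[worker] job #{job.id} failed: #{exception.class}: #{exception.message}"
  end
end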
I am developing a website for a journal in Rails, and one of my pages lists every issue that has been published, in descending order. I also have a select box for users to filter the issues by year; issues don't have names, so hopefully the filter will help users find what they are looking for more quickly when the articles within an issue aren't uploaded to the site separately.
To build the options for the filter box, I wrote the following to return a list of all the unique years across the issues (an Issue has a date field holding its publish date, in case old issues that precede the website need to be uploaded).
Issue.select("date").order('date desc').map{ |i| i.date.year }.uniq
This works fine on my own machine, but when I deploy it on Heroku (a free account), I see the following error in the logs.
2017-08-15T15:19:42.521061+00:00 app[web.1]: Started GET "/issues" for 83.136.45.169 at 2017-08-15 15:19:42 +0000
2017-08-15T15:19:42.522804+00:00 app[web.1]: Processing by IssuesController#index as HTML
2017-08-15T15:19:42.524822+00:00 app[web.1]: Issue Load (0.9ms) SELECT "issues"."date" FROM "issues" ORDER BY date desc
2017-08-15T15:19:42.525378+00:00 app[web.1]: Completed 500 Internal Server Error in 2ms (ActiveRecord: 0.9ms)
2017-08-15T15:19:42.525925+00:00 app[web.1]:
2017-08-15T15:19:42.525926+00:00 app[web.1]: NoMethodError (undefined method `year' for nil:NilClass):
2017-08-15T15:19:42.525927+00:00 app[web.1]: app/controllers/issues_controller.rb:12:in `block in index'
2017-08-15T15:19:42.525927+00:00 app[web.1]: app/controllers/issues_controller.rb:12:in `index'
I have made no changes to the database since my last push. I'm not sure how to further debug this situation.
The error is not caused by Heroku, but by the data you have in the database on Heroku. You seem to have records in the Issue table that were created without a date.
To avoid this, use this query:
Issue.where.not(date: nil).select("date").order('date desc').map{ |i| i.date.year }.uniq
Note that where.not requires Rails 4 or later. If you use an earlier version (and are on Ruby 2.3+, which introduced the safe navigation operator), you can filter out the nils in Ruby instead:
Issue.select("date").order('date desc').map{ |i| i.date&.year }.uniq.compact
Notice the i.date&.year and the compact. The &. operator skips the call to year and returns nil when date is nil. That means the map will leave nil entries in your array, resulting in something like this:
[year1, year2, nil, year3]
compact will then remove the nil entries, giving:
[year1, year2, year3]
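If you'd rather not depend on where.not or &. at all, here is a version-agnostic variant (a sketch; it filters the nils in SQL, equivalent to the queries above):

Issue.where("date IS NOT NULL").select("date").order("date desc").map { |i| i.date.year }.uniq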
More information:
http://mitrev.net/ruby/2015/11/13/the-operator-in-ruby/
My project uses Rails 4.2.7 with delayed_job (4.0.6) and delayed_job_active_record (4.0.3). The problem we're facing is that Delayed Job logs its SQL queries every minute or so, which makes the main log files harder to read and eats up our Logmatic subscription limits. Delayed Job normally logs to its own file, but its ActiveRecord queries end up in the main log. Most of the entries look like this:
(0.8ms) SELECT COUNT(*) FROM `delayed_jobs` WHERE (failed_at is not NULL)
(0.5ms) SELECT COUNT(*) FROM `delayed_jobs` WHERE (locked_by is not NULL)
(0.7ms) SELECT COUNT(*) AS count_all, `delayed_jobs`.`queue` AS delayed_jobs_queue FROM `delayed_jobs` WHERE (run_at <= '2017-02-12 23:18:18.094189' and failed_at is NULL) GROUP BY `delayed_jobs`.`queue`
(0.8ms) SELECT COUNT(*) AS count_all, `delayed_jobs`.`priority` AS delayed_jobs_priority FROM `delayed_jobs` WHERE (run_at <= '2017-02-12 23:18:18.095817' and failed_at is NULL) GROUP BY `delayed_jobs`.`priority`
The previous developer introduced the delayed_job silencer described here:
How to ignore Delayed Job query logging in development on Rails
but it doesn't seem to work, since the queries are still logged in staging logs and in Logmatic.
Is there a way to stop logging these (perhaps redirect them to another file if they can't be silenced completely)? Assume we still want to log such entries that are not delayed_job-related.
You can change the logfile the DJ worker writes to in an initializer, like this:
Delayed::Worker.logger = Logger.new(File.join(Rails.root, 'log', 'delayed_job.log'))
This should clean your main log.
Edit: I've just seen that this will not affect the AR logs.
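One way to also move those AR entries out of the main log (a sketch, not a verified delayed_job feature: it points ActiveRecord's logger at the DJ logfile, and the $0 check is only a heuristic for detecting the delayed_job daemon process):

# config/initializers/delayed_job.rb
Delayed::Worker.logger = Logger.new(File.join(Rails.root, 'log', 'delayed_job.log'))
# Redirect ActiveRecord's query logging too, but only inside worker
# processes, so web requests keep logging SQL to the main log.
if $0.include?('delayed_job')
  ActiveRecord::Base.logger = Delayed::Worker.logger
end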
Have you tried setting the DJ log level to info instead of debug?
I'm running a Rails app (v3.1.10) on the Heroku Cedar stack, and the Papertrail add-on is going crazy because of the size of the logs.
My app is really verbose and the logs are getting huge (really huge):
Sometimes because I serialize a lot of data in one field, which makes for a huge SQL request. In my model I have many of these:
serialize :a_game_data, Hash
serialize :another_game_data, Hash
serialize :a_big_set_of_game_data, Hash
[...]
Thanks to my AS3 Flash app working with big sets of JSON...
Sometimes because there are a lot of partials to render:
Rendered shared/_flash_message.html.erb (0.1ms)
Rendered shared/_header_cart_info.html.erb (2.7ms)
Rendered layouts/_header.html.erb (19.4ms)
[...]
It's not the big issue here, but I've added this case too because jamiew handles it; see below...
Sometimes because there are lots of SQL queries on the same page:
User Load (2.2ms) SELECT "users".* FROM "users" WHERE "users"."id" = 1 LIMIT 1
Course Load (5.3ms) SELECT "courses".* FROM "courses" WHERE (id = '1' OR pass_token = NULL)
Session Load (1.3ms) SELECT "sessions".* FROM "sessions" WHERE "sessions"."id" = 1 LIMIT 1
Training Load (1.3ms) SELECT "trainings".* FROM "trainings" WHERE "trainings"."id" = 1 LIMIT 1
[...]
It's a big, (too) complex app we've got here... yeah...
Sometimes because there are a lot of params:
Parameters: {"_myapp_session"=>"BkkiJTBhYWI1MUVlaVdtbE9Eb1Y2I5BjsAVEkiEF9jc3JmX3Rva2VlYVWZyM2I0dEZaR1YwNXFjZhZTQ1uBjsARkkiUkiD3Nlc3Npb25faWQGOgZFRhcmRlbi51c2yN1poVm8vdWo3YTlrdUZzVTA9BjsARkkiH3dAh7CMTQ0Yzc4ZDJmYzg5ZjZjOGQ5NVyLmFkbWluX3VzZXIua2V5BjsAVFsISSIOQWRtaW5Vc2VyBjsARlsGaQZJIiIkMmEkMTAkcmgvQ2Rwc0lrYzFEbGJFRG9jMnZvdQY7AFRJIhl3YXJkZW4udXNlci51c2VyLmtleQY7AFRbCEkiCVVzZXIGOwBGWwZpBkkiIiQyYSQxMCRBUFBST2w0aWYxQmhHUVd0b0V5TjFPBjsAVA==--e4b53a73f6b622cfe7550b2ee12678712e2973c7", "authenticity_token"=>"EeiWmlODoYXUfr3b4tFZGV05qr7ZhVo/uj7a9kuFsU0=", "utf8"=>"✓", "locale"=>"fr", "id"=>"1", "a"=>1, "a"=>1, "a"=>1, "a"=>1, "a"=>1, "a"=>1, [...] Hey! You've reach the end of the line but it's not the end of the parameters...}
The AS3 Flash app sends big JSON data to the controller...
I didn't mention the (in)famous asset pipeline logging problem, because I'm now using the quiet_assets gem to handle it:
https://github.com/evrone/quiet_assets
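For reference, wiring that gem in is just a Gemfile entry (a sketch; it only needs to be active in development):

gem 'quiet_assets', :group => :development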
So... what did I try?
1: Dennis Reimann's middleware solution:
http://dennisreimann.de/blog/silencing-the-rails-log-on-a-per-action-basis/
2: Spagalocco's gem (inspired by solution #1):
https://github.com/spagalloco/silencer
3: jamiew's monkeypatches (inspired by solution #1 + a bonus):
https://gist.github.com/1558325
Nothing is really working as expected but it's getting close.
I would rather use a method in my ApplicationController like this:
def custom_logging(opts={}, show_logs=true)
disable_logging unless show_logs
remove_sql_requests_from_logs if opts[:remove_sql_requests]
remove_rendered_from_logs if opts[:remove_rendered]
remove_params_from_logs if opts[:remove_params]
[...]
end
...and call it in any controller method: custom_logging({:remove_sql_requests=>1, :remove_rendered=>1})
You got the idea.
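Something close to that per-action switch can be sketched with an around_filter plus logger silencing (a sketch only: quiet_actions is a made-up helper name, and Logger#silence availability varies across Rails versions):

class ApplicationController < ActionController::Base
  # Silence everything below ERROR while the listed actions run.
  def self.quiet_actions(*actions)
    around_filter :only => actions do |_controller, action|
      Rails.logger.silence(Logger::ERROR) { action.call }
    end
  end
end

A controller would then declare quiet_actions :show, :update near the top.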
So, is there any good resource online to handle this?
Many thanks for your advice...
I"m the author of the silencer gem mentioned above. Are you looking to filter logging in general or for a particular action? The silencer gem handles the latter problem. While you can certainly use it in different ways, it's mostly intended for particular actions.
It sounds like what you are looking for is less verbose logging. I would recommend you take a look at lograge. I use it in production in most of my Rails apps and have found it to be quite useful.
If you need something more specialized, you may want to look at implementing your own LogSubscriber which is essentially the lograge solution.
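If you do try lograge, enabling it in that era is a one-line config switch (a sketch; MyApp stands in for your application class):

# config/environments/production.rb
MyApp::Application.configure do
  # Collapse each request's multi-line logging into a single summary line.
  config.lograge.enabled = true
end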
Set your log level in the Heroku environment.
View your current log level:
heroku config
You most likely have "info", which is just a lot of noise.
Change it to warn or error
heroku config:add LOG_LEVEL=WARN
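Note that LOG_LEVEL only takes effect if something actually reads it; plain Rails of this era doesn't, so unless a gem (e.g. rails_12factor-style stdout logging) does it for you, wire it up yourself (a sketch for config/environments/production.rb; the 'info' fallback is an assumption):

config.log_level = (ENV['LOG_LEVEL'] || 'info').downcase.to_sym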
Also, when viewing the logs, only specify the "app" server
heroku logs --source app
Personally, I append --tail to see the logs live:
heroku logs --source app --tail
I have a Ruby on Rails app that works fine locally with a sqlite3 database and can save and retrieve records without issue.
When deployed to Heroku at http://moviedata.herokuapp.com/ using a PostgreSQL database, records are not saving, even though the logs look like they are. Records read from the db fine, and data is displayed as expected.
The tailed logs for adding a record are:
2012-08-21T19:51:31+00:00 app[web.1]:
2012-08-21T19:51:31+00:00 app[web.1]:
2012-08-21T19:51:31+00:00 app[web.1]: Started POST "/" for 50.53.6.156 at 2012-08-21 19:51:31 +0000
2012-08-21T19:51:31+00:00 app[web.1]: Parameters: {"utf8"=>"✓", "authenticity_token"=>"+BYQLzhrfDkUVW8UaHikHpmtGHxpeQ/yF4VByHh9m1I=", "movie"=>{"title"=>"The Running Man", "description"=>"A documentary about a public execution game show.", "year"=>"1987", "genre"=>"science fiction"}, "commit"=>"Create Movie"}
2012-08-21T19:51:31+00:00 app[web.1]: Processing by MoviesController#index as HTML
2012-08-21T19:51:31+00:00 app[web.1]: Rendered movies/index.html.erb within layouts/application (5.1ms)
2012-08-21T19:51:31+00:00 app[web.1]: Completed 200 OK in 9ms (Views: 6.7ms | ActiveRecord: 0.9ms)
2012-08-21T19:51:31+00:00 heroku[router]: POST moviedata.herokuapp.com/ dyno=web.1 queue=0 wait=0ms service=17ms status=200 bytes=3479
The 'heroku pg' command shows the same number of rows (11) in the Postgres database after a record is added.
This is a simple app I built to learn Rails and the Heroku platform. To reproduce this, just visit http://moviedata.herokuapp.com/ and click "New Movie", enter some junk data in the form, and hit "create movie". The record should be saved and show up in the list on the front page, but it doesn't.
Is there perhaps something I have to turn on, configure, or activate in order to be able to write to the postgres database? Seems very strange to me that it could be read from but not written to. Any better way to troubleshoot than the logs?
Locally I'm using Ruby 1.9.3, Rails 3.2.8, PostgreSQL 9.1.5, SQLite 3.7.9, and Heroku Toolbelt 2.30.3.
Edit/Update: I switched the local version to use PostgreSQL, and it experiences the same problem: records are not saved. With the user set to log_statement='all', the log at /var/log/postgresql/postgresql-9.1-main.log shows lots of SELECTs, but when the record add is attempted, the log shows the database never being hit.
Foreman shows the data being posted, like so:
22:38:03 web.1 | Started POST "/" for 127.0.0.1 at 2012-08-21 22:38:02 -0700
22:38:03 web.1 | Processing by MoviesController#index as HTML
22:38:03 web.1 | Parameters: {"utf8"=>"✓", "authenticity_token"=>"0AyxRbwl/Kgi05uI1KX8uxVUJjx9ylAA1ltdWgmunm4=", "movie"=>{"title"=>"Army of Darkness", "description"=>"A man fights the living dead using a boomstick.", "year"=>"1997", "genre"=>"horror"}, "commit"=>"Create Movie"}
22:38:03 web.1 | Movie Load (0.8ms) SELECT "movies".* FROM "movies" ORDER BY title
22:38:03 web.1 | Rendered movies/index.html.erb within layouts/application (14.9ms)
A failed commit does sound like a great explanation. I'm not yet sure how to check whether the driver is set to commit or to see how/when a commit might have failed.
This is a very simple application with no load balancing or complex configuration, and most of the code was generated by the 'generate scaffold' command, but it's entirely possible that there's some constraint being violated somewhere before the db is ever hit. Perhaps there's a way to crank the Foreman (or Rails) log level up to 11? I also tried using thin instead, and scoured the log files in the log/ folder, but didn't find anything other than what's logged above.
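One cheap way to turn SQL visibility up to maximum (a sketch; run it in a Rails console session or drop it temporarily into an initializer):

# Echo every ActiveRecord statement to stdout at the most verbose level.
ActiveRecord::Base.logger = Logger.new(STDOUT)
ActiveRecord::Base.logger.level = Logger::DEBUG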
This sounds a lot like a transaction issue, where you aren't COMMITting your transactions after you do work, so the changes are lost. If your SQLite driver defaults to COMMITting transactions that're closed without an explicit COMMIT or ROLLBACK, and your Pg driver defaults to ROLLBACK, you'd get the behaviour described. The same will happen if SQLite defaults to autocommitting each statement, and the Pg driver defaults to opening a transaction.
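One quick way to take driver defaults out of the picture is to make the transaction and the failure mode explicit in the create action (a sketch in Rails 3 params style; the question doesn't show the controller code, so this is illustrative):

# In MoviesController#create
ActiveRecord::Base.transaction do
  # create! raises on failure instead of silently returning false, and
  # the block COMMITs explicitly when it finishes without raising.
  Movie.create!(params[:movie])
end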
This is one of the many good reasons to use the same local database for testing as you're going to deploy to when you want to go live.
If you were on a normal Pg instance I'd tell you to enable log_statement = 'all' in postgresql.conf, reload Pg, and watch the logs. You can't do that on Heroku, but you do have access to the Pg logs with heroku logs --ps postgres. Try running ALTER USER my_heroku_user SET log_statement = 'all';, re-testing, and examining the logs.
Alternately, install Pg locally.
Other less likely possibilities that come to mind:
You're using long-running SERIALIZABLE transactions for reads, so their snapshot never gets updated. Pretty unlikely.
Permissions on database objects are causing INSERTs, UPDATEs, etc to fail, and your app is ignoring the resulting errors. Again, unlikely.
You have DO INSTEAD rules that don't do what you expect, or BEFORE triggers that return NULL, thus silently turning operations into no-ops. Seems unlikely if you're testing with SQLite.
You're writing to a different DB than you're reading from. Not impossible in setups that're attempting to read from a cluster of hot standbys, etc.
I have a strange problem: when I run my (only/first) Cucumber test, part of which creates a new entry in my Countries table using:
Factory.create(:country)
the records don't get committed to my database (MySQL 5) and my test fails when the view tries to load this data. Here is a snippet from my test.log:
SQL (0.1ms)  SAVEPOINT active_record_1
Country Create (0.1ms)  INSERT INTO `countries` (`name`, `country_code`, `currency_code`) VALUES('Ireland', 'IE', 'EUR')
SQL (0.1ms)  RELEASE SAVEPOINT active_record_1
SQL (0.1ms)  SAVEPOINT active_record_1
However, when I load up the Rails console and run exactly the same command, i.e. Factory.create(:country), the records get committed to the database. Here is the output from test.log:
SQL (3.5ms)  BEGIN
Country Create (0.2ms)  INSERT INTO `countries` (`name`, `country_code`, `currency_code`) VALUES('Ireland', 'IE', 'EUR')
SQL (1.1ms)  COMMIT
From env.rb
Cucumber::Rails::World.use_transactional_fixtures = false
Any advice is very much appreciated; I've spent the last two days trying to figure this out with no success.
The second line of what you posted from your test.log file shows the record being inserted. I see that you've turned off transactional fixtures, which is fine.
Are you using the database_cleaner gem, by chance? If so, it essentially rolls your database back to its original state after the tests are done. This means that unless you pause your tests while they're running (after the data is inserted), you'll never see it in the DB, because it gets removed once the test suite has run.
I don't know that this is the root cause, but it would explain why you see the record in the DB when you run that one command from the console, yet not after you run your tests. The test data is supposed to be removed after the test suite executes, so your tests have a fresh, consistent starting point for each run. It's an important part of making sure your tests run in the same environment every time, and can therefore be counted on to give reliable results.
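For reference, database_cleaner's typical Cucumber wiring looks like this (a sketch; truncation is one common strategy choice when transactional fixtures are off, as in your env.rb):

# features/support/env.rb
require 'database_cleaner'

DatabaseCleaner.strategy = :truncation

Before do
  DatabaseCleaner.start
end

After do
  # Removes the data each scenario created, which is why records
  # inserted during a test run don't stick around in the database.
  DatabaseCleaner.clean
end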