I have a Ruby on Rails app that works fine locally with a sqlite3 database: it can save and retrieve records without issue.
When deployed to Heroku at http://moviedata.herokuapp.com/ with a PostgreSQL database, records are not saved, even though the logs appear to say they are. Records are read from the db fine and data is displayed as expected.
The tailed logs for adding a record are:
2012-08-21T19:51:31+00:00 app[web.1]:
2012-08-21T19:51:31+00:00 app[web.1]:
2012-08-21T19:51:31+00:00 app[web.1]: Started POST "/" for 50.53.6.156 at 2012-08-21 19:51:31 +0000
2012-08-21T19:51:31+00:00 app[web.1]: Parameters: {"utf8"=>"✓", "authenticity_token"=>"+BYQLzhrfDkUVW8UaHikHpmtGHxpeQ/yF4VByHh9m1I=", "movie"=>{"title"=>"The Running Man", "description"=>"A documentary about a public execution game show.", "year"=>"1987", "genre"=>"science fiction"}, "commit"=>"Create Movie"}
2012-08-21T19:51:31+00:00 app[web.1]: Processing by MoviesController#index as HTML
2012-08-21T19:51:31+00:00 app[web.1]: Rendered movies/index.html.erb within layouts/application (5.1ms)
2012-08-21T19:51:31+00:00 app[web.1]: Completed 200 OK in 9ms (Views: 6.7ms | ActiveRecord: 0.9ms)
2012-08-21T19:51:31+00:00 heroku[router]: POST moviedata.herokuapp.com/ dyno=web.1 queue=0 wait=0ms service=17ms status=200 bytes=3479
The 'heroku pg' command shows the same number of rows (11) on the postgres database after a record is added.
This is a simple app I built to learn Rails and the Heroku platform. To reproduce this, just visit http://moviedata.herokuapp.com/ and click "New Movie", enter some junk data in the form, and hit "create movie". The record should be saved and show up in the list on the front page, but it doesn't.
Is there perhaps something I have to turn on, configure, or activate in order to be able to write to the postgres database? Seems very strange to me that it could be read from but not written to. Any better way to troubleshoot than the logs?
Locally I'm using Ruby 1.9.3, Rails 3.2.8, PostgreSQL 9.1.5, SQLite 3.7.9, and Heroku Toolbelt 2.30.3.
Edit/Update: I switched the local version to use PostgreSQL. It exhibits the same problem: records are not saved. With the user set to log_statement='all', the log at /var/log/postgresql/postgresql-9.1-main.log shows lots of SELECTs, but when the record add is attempted, the log shows the database never being hit.
Foreman shows the data being posted, like so:
22:38:03 web.1 | Started POST "/" for 127.0.0.1 at 2012-08-21 22:38:02 -0700
22:38:03 web.1 | Processing by MoviesController#index as HTML
22:38:03 web.1 | Parameters: {"utf8"=>"✓", "authenticity_token"=>"0AyxRbwl/Kgi05uI1KX8uxVUJjx9ylAA1ltdWgmunm4=", "movie"=>{"title"=>"Army of Darkness", "description"=>"A man fights the living dead using a boomstick.", "year"=>"1997", "genre"=>"horror"}, "commit"=>"Create Movie"}
22:38:03 web.1 | Movie Load (0.8ms) SELECT "movies".* FROM "movies" ORDER BY title
22:38:03 web.1 | Rendered movies/index.html.erb within layouts/application (14.9ms)
A failed commit does sound like a great explanation. I'm not yet sure how to check whether the driver is set to commit or to see how/when a commit might have failed.
This is a very simple application with no load balancing or complex configuration, and most of the code was generated by the 'generate scaffold' command, but it's entirely possible that some constraint is being violated somewhere before the db is ever hit. Perhaps there's a way to crank the Foreman (or Rails) log level up to 11? I also tried using Thin instead and scoured the log files in the log/ folder, and didn't find anything other than what's logged above.
This sounds a lot like a transaction issue, where you aren't COMMITting your transactions after you do work, so the changes are lost. If your SQLite driver defaults to COMMITting transactions that are closed without an explicit COMMIT or ROLLBACK, while your Pg driver defaults to ROLLBACK, you'd get the behaviour described. The same happens if SQLite autocommits each statement by default while the Pg driver defaults to opening a transaction.
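One quick way to test this theory (a sketch, assuming the scaffolded Movie model) is to save from 'heroku run rails console' using the bang methods, which raise on validation or constraint failures instead of silently returning false:
# In 'heroku run rails console' on the deployed app:
movie = Movie.create!(
  :title => "Test", :description => "x",
  :year => "2000", :genre => "test"
)
# From a second console, confirm the row actually committed:
Movie.exists?(movie.id)   # => true only if the INSERT was committed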
This is one of the many good reasons to develop and test against the same database you're going to deploy to when you go live.
If you were on a normal Pg instance I'd tell you to enable log_statement = 'all' in postgresql.conf, reload Pg, and watch the logs. You can't do that on Heroku, but you do have access to the Pg logs with heroku logs --ps postgres. Try running ALTER USER my_heroku_user SET log_statement = 'all';, re-testing, and examining the logs.
Alternatively, install Pg locally.
Other less likely possibilities that come to mind:
You're using long-running SERIALIZABLE transactions for reads, so their snapshot never gets updated. Pretty unlikely.
Permissions on database objects are causing INSERTs, UPDATEs, etc to fail, and your app is ignoring the resulting errors. Again, unlikely.
You have DO INSTEAD rules that don't do what you expect, or BEFORE triggers that return NULL, thus silently turning operations into no-ops. Seems unlikely if you're testing with SQLite.
You're writing to a different DB than you're reading from. Not impossible in setups that are attempting to read from a cluster of hot standbys, etc.; see the sketch below.
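For the last one, a quick sanity check (a sketch; connection_config is available from Rails 3.1 on) is to print the connection each process actually uses:
# In a console for each app/process:
ActiveRecord::Base.connection_config
# => {:adapter=>"postgresql", :database=>"...", :host=>"...", ...}
# Reads and writes should report the same adapter, database, and host.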
Related
I am developing a website for a journal in Rails. One of its pages lists every issue that has been published, in descending order. Since issues don't have names, I also have a select box that lets users filter the issues by year; hopefully that helps readers find what they are looking for more quickly when the articles within an issue aren't uploaded to the site separately.
To create the options for the filter box, I wrote the following to return a list of all the unique years of the issues (an Issue has a date field holding the issue's publish date, in case old issues that precede the website need to be uploaded):
Issue.select("date").order('date desc').map{ |i| i.date.year }.uniq
This works excellently on my own machine, but when I deploy it to Heroku (on a free account), the logs show the following error:
2017-08-15T15:19:42.521061+00:00 app[web.1]: Started GET "/issues" for 83.136.45.169 at 2017-08-15 15:19:42 +0000
2017-08-15T15:19:42.522804+00:00 app[web.1]: Processing by IssuesController#index as HTML
2017-08-15T15:19:42.524822+00:00 app[web.1]: Issue Load (0.9ms) SELECT "issues"."date" FROM "issues" ORDER BY date desc
2017-08-15T15:19:42.525378+00:00 app[web.1]: Completed 500 Internal Server Error in 2ms (ActiveRecord: 0.9ms)
2017-08-15T15:19:42.525925+00:00 app[web.1]:
2017-08-15T15:19:42.525926+00:00 app[web.1]: NoMethodError (undefined method `year' for nil:NilClass):
2017-08-15T15:19:42.525927+00:00 app[web.1]: app/controllers/issues_controller.rb:12:in `block in index'
2017-08-15T15:19:42.525927+00:00 app[web.1]: app/controllers/issues_controller.rb:12:in `index'
I have made no changes to the database since my last push. I'm not sure how to further debug this situation.
The error is not caused by Heroku but by the data in your database on Heroku: you seem to have records in the issues table that were created without a date.
To avoid this, use this query:
Issue.where.not(date: nil).select("date").order('date desc').map{ |i| i.date.year }.uniq
Note that where.not requires Rails 4 or later.
If you use an earlier version, you can do this instead (the &. safe-navigation operator requires Ruby 2.3+):
Issue.select("date").order('date desc').map{ |i| i.date&.year }.uniq.compact
Notice the i.date&.year and the compact. The &. operator skips the call to year when date is nil.
However, the map will still put nil entries in your array, resulting in something like this:
[year1, year2, nil, year3]
compact removes the nil entries, leaving:
[year1, year2, year3]
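On Rails 4+, a leaner variant of the same query (a sketch) uses pluck, which skips model instantiation entirely:
# Returns the distinct years without loading Issue objects:
Issue.where.not(date: nil).order(date: :desc).pluck(:date).map(&:year).uniq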
More information:
http://mitrev.net/ruby/2015/11/13/the-operator-in-ruby/
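Longer term, you may want to prevent date-less issues at both the model and the database level (a sketch; it assumes date really is mandatory and that the existing nil rows have been fixed or removed first):
# app/models/issue.rb
validates :date, presence: true

# in a new migration, after fixing the existing nil rows:
change_column_null :issues, :date, false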
I'm a Rails newbie and I have a problem with my Rails project. I use Log4r as my logger, configured as described in How to configure Log4r with Rails 3.0.x?. But when I send several requests at the same time, the Log4r output comes out wrong: it is interleaved and unordered. :(
An example of the garbled output in the log file:
Started GET "/task_results.json" for 172.29.73.182 at 2013-06-17 17:36:38 +0700
Processing by TaskResultsController#index as JSON
Started GET "/favicon.ico" for 172.29.73.182 at 2013-06-17 17:36:38 +0700
Processing by ApplicationController#missing_page as
Parameters: {"path"=>"favicon"}
Completed 404 Not Found in 1ms (Views: 0.2ms | ActiveRecord: 0.0ms)
(994.0ms)  SELECT task_result_id,task_id,worker_id,product_id,passed,date_time,details FROM task_results ORDER BY task_result_id DESC;
Completed 200 OK in 8656ms (Views: 0.2ms | ActiveRecord: 994.0ms)
How can I configure Log4r to synchronize its output, or otherwise fix this problem?
Ruby (MRI) is effectively single-threaded, so I assume you are running your Rails app on a multi-process server like Unicorn?
The only solution I can think of is timestamps (with microseconds) and not using the multi-line Rails log output, which is not suitable for anything but single requests in dev environments. But even this does not guarantee "ordered" logs, because lines can be written when the request is served or at some point afterwards when the buffer is flushed.
This is an in-depth tutorial on using Log4r that also shows how to get custom timestamps into your logs and keep all the important information on a single line. That prevents log lines from other requests from appearing inside your main request's output, and with microsecond resolution you can re-order the lines manually afterwards. However, the order should not matter if you are running concurrent app servers, because it is not guaranteed there anyway.
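A minimal sketch of such a setup with plain Log4r: one line per message, with a microsecond-resolution timestamp so interleaved output can be re-sorted afterwards (the logger name 'app' is arbitrary):
require 'log4r'

log = Log4r::Logger.new('app')
out = Log4r::Outputter.stdout
# Single-line pattern with a microsecond timestamp:
out.formatter = Log4r::PatternFormatter.new(
  :pattern      => "%d [%l] %m",
  :date_pattern => "%Y-%m-%d %H:%M:%S.%6N"
)
log.outputters = [out]
log.info "one line per message, sortable by timestamp"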
I want to know what queries are run while a user interacts with different pages of the website, and how long each query takes. How and where can I see that?
There's a log folder in every Rails application, with a .log file for each environment you've run your application in. You'll see this kind of output in it:
Started GET "/assets/jquery.js?body=1" for 127.0.0.1 at 2012-11-29 11:09:47 -0400
Served asset /jquery.js - 200 OK (3ms)
Started GET "/users/auth/google/callback?_method=post&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fud&openid.response_nonce=2012-10-26T19%3A29%3A52ZkvQlFeFr5Rk78g&openid.return_to=http%3A%2F%2Flocalhost%3A3000%2Fusers%2Fauth%2Fgoogle%2Fcallback%3F_method%3Dpost&openid.assoc_handle=AMlYA9VjNZ-QIIMe5bhvtPLBsAdm5xMltOa7MEwUoW4Opx9tXd_khhcS&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ext1%2Cext1.mode%2Cext1.type.ext2%2Cext1.value.ext2%2Cext1.type.ext0%2Cext1.value.ext0%2Cext1.type.ext3%2Cext1.value.ext3&openid.sig=ehFAJ1m8nPces8%2Bj6Ud%2FicpuohY%3D&openid.identity=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItOawljL9RKBE7iUQHk94UhJQ-4sDOTawUfpNc&openid.claimed_id=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItOawljL9RKBE7iUQHk94UhJQ-4sDOTawUfpNc&openid.ns.ext1=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ext1.mode=fetch_response&openid.ext1.type.ext2=http%3A%2F%2Faxschema.org%2FnamePerson%2Ffirst&openid.ext1.value.ext2=Fernando&openid.ext1.type.ext0=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ext1.value.ext0=fernando%40findhorsesforsale.net&openid.ext1.type.ext3=http%3A%2F%2Faxschema.org%2FnamePerson%2Flast&openid.ext1.value.ext3=Mendez" for 127.0.0.1 at 2012-10-26 15:29:52 -0400
Started GET "/run_dates/current_pdf_generation_status.js?pdf_generation_id=e3ef90b05844012f4fc2723c91dfe57c&_=1332637701414" for 127.0.0.1 at 2012-03-24 21:08:29 -0400
Processing by RunDatesController#current_pdf_generation_status as JS
Parameters: {"pdf_generation_id"=>"e3ef90b05844012f4fc2723c91dfe57c", "_"=>"1332637701414"}
User Load (0.2ms) SELECT "users".* FROM "users" WHERE "users"."id" = 1 LIMIT 1
RunDate Load (0.1ms) SELECT "run_dates".* FROM "run_dates" WHERE (pdf_generation_id = 'e3ef90b05844012f4fc2723c91dfe57c') LIMIT 1
Rendered run_dates/current_pdf_generation_status.js.erb (0.1ms)
Completed 200 OK in 4ms (Views: 2.0ms | ActiveRecord: 0.3ms)
.....
Have a look at development.log if you are in development mode; it shows every query along with the time it took to run. For production, see production.log, and for staging, staging.log.
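You can also stream the queries live instead of tailing the file (a sketch; run it in a Rails console, and any model will do in place of User):
# Send ActiveRecord's log output straight to the terminal:
ActiveRecord::Base.logger = Logger.new(STDOUT)
User.where(:id => 1).first
#   User Load (0.2ms)  SELECT "users".* FROM "users" WHERE "users"."id" = 1 LIMIT 1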
You can delete the file and Rails will recreate it, but rather than deleting it, I would suggest logging a marker like '-----------------' just before you hit the page, so you can tell that page's queries apart from the rest.
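For example (a sketch, from a console attached to the same environment):
Rails.logger.info "----------------- page under test -----------------"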
If you are the user, you can use a browser plugin (e.g. Firebug).
If you are not the user, you can check the log to see what queries were run.
Google wasn't very helpful on this one, so I was hoping someone here might have an idea.
My app works fine on my server (and was recently working fine on Heroku), but an hour or so ago, when I went to open a certain page (one that displays information affected by a delayed_job I have running), I got a Heroku error, and the logs say (among many other things):
dyno=web.1 queue=0 wait=5ms service=119ms status=500 bytes=643
2012-11-07T23:17:44+00:00 app[worker.1]: (1.3ms) SELECT COUNT(*) AS count_all, priority AS priority FROM "delayed_jobs" WHERE (run_at < '2012-11-07 23:17:44.830238' and failed_at is NULL) GROUP BY priority
2012-11-07T23:17:44+00:00 app[worker.1]: (2.5ms) SELECT COUNT(*) FROM "delayed_jobs" WHERE (failed_at is not NULL)
2012-11-07T23:17:44+00:00 app[worker.1]: (1.2ms) SELECT COUNT(*) FROM "delayed_jobs" WHERE (locked_by is not NULL)
Obviously a problem with delayed_job, but I'm not sure where to start looking, particularly since it was working before and still works on my server.
Any ideas what the problem is or how to start debugging?
The problem is likely in the job being processed by DJ, which may not write an error message to your log. You can search your log for specific worker messages and try to spot the error. The difficulty with even doing this is that you need to know when the job runs, or else you might be looking in the wrong part of your logs:
heroku logs | grep worker
Secondly, you need to figure out why your view rendered an error. Since the view is rendered by your app, not your worker, something is out of sync in the app. Figure out exactly what is wrong, and that may point to what your worker did or did not do.
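Since delayed_job records failures in its own table, a console check is often quicker than grepping the logs (a sketch, assuming the standard delayed_jobs schema with its last_error column):
# In 'heroku run rails console', inspect failed jobs and their stored errors:
Delayed::Job.where("failed_at IS NOT NULL").each do |job|
  puts job.last_error
end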
I have two Rails applications (both now on Rails 3.1.1), and they work nicely. However, there is a dependency between the two: application A uses data from application B by linking to it. These links are created automatically, but they have to be computed by looking up data in application B. I'm working on Windows 7 with Ruby 1.9.2 and Thin as the web server, and this will not be changed :-(
I have tried the following:
Used a plain RESTful resource: defined a controller, called its action (get_xml_obj with some params), and read the needed values from the XML. This worked, but needs around 0.5s to 1s per call.
Replaced it with ActiveResource#find, which also worked, but with the same performance as the previous solution.
Installed nginx and configured it to keep connections alive, so connection handling should be much faster, but I noticed no difference at all when calling B from A.
When I compare the time spent, these are typical examples (here with 4 references in one web page):
Application A:
Started GET "/tasks/search_task/1803" for 127.0.0.1 at 2011-11-02 14:11:04 +0100
Processing by TasksController#search_task as HTML
Parameters: {"id"=>"1803"}
Rendered tasks/_tooltip.html.haml (4529.5ms)
Completed 200 OK in 4532ms (Views: 4527.5ms | ActiveRecord: 2.0ms)
cache: [GET /tasks/search_task/1865] miss
Application B:
cache: [GET /service/get_xml_obj?key=notice&value=rails] miss
Started GET "/service/get_xml_obj?key=notice&value=rails" for 127.0.0.1 at 2011-11-02 14:11:05 +0100
Processing by ServiceController#get_xml_obj as */*
Parameters: {"key"=>"notice", "value"=>"rails"}
Completed 200 OK in 6ms (Views: 3.0ms | ActiveRecord: 1.0ms)
and 3 other calls with a similar length (< 10ms).
So is there anything I can do to speed up the retrieval (without accessing the database directly)? Do you know of any good documentation on measuring and tuning the web server and middleware? These are only personal applications, so there is no way of deploying them on a decent server. I use a cache for the retrieved information, so it gets better over time, but 1 second is too long to wait, and there may be more than 1 or 2 links on a page I want to render.
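One way to find out where the time goes (a sketch; the URL comes from the logs above, and B's local port 3001 is an assumption) is to time the raw HTTP round-trip from A and compare it with what the Rails stack reports:
require 'benchmark'
require 'net/http'

# Time the bare HTTP call to B, bypassing ActiveResource entirely:
uri = URI('http://localhost:3001/service/get_xml_obj?key=notice&value=rails')
seconds = Benchmark.realtime { Net::HTTP.get_response(uri) }
puts "raw call: #{(seconds * 1000).round}ms"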
Ok, I finally gave up and implemented the following:
Added a file b.rb to the models directory of application A.
Included there all the raw models; the base models (which use STI) are defined like this:
class Notice < ActiveRecord::Base
  self.establish_connection(
    :adapter  => "sqlite3",
    :database => "../b/db/dev.db"
  )
end
...
I am now able to write Notice.where(:key => 'rails'), which returns real Rails model objects.
The whole thing took around 20 minutes to implement, and now there is no measurable difference between rendering a page with no links from application A to B and one with 5 links.
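If you'd rather not hardcode the adapter and path, the usual pattern (a sketch; the b_development entry in config/database.yml is an assumption) is an abstract base class that all of B's models inherit from:
class BDatabase < ActiveRecord::Base
  self.abstract_class = true
  # Looks up the named entry in config/database.yml:
  establish_connection :b_development
end

class Notice < BDatabase
end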
At some point I would still like to know which part of using RESTful resources here is so slow ...