I'm a Rails newbie and I have a problem with my Rails project. I use Log4r as my logger, configured as described in "How to configure Log4r with Rails 3.0.x?". But when I send several requests at the same time, the Log4r output is interleaved and out of order. :(
Example of the garbled output in the log file:
Started GET "/task_results.json" for 172.29.73.182 at 2013-06-17 17:36:38 +0700
Processing by TaskResultsController#index as JSON
Started GET "/favicon.ico" for 172.29.73.182 at 2013-06-17 17:36:38 +0700
Processing by ApplicationController#missing_page as
Parameters: {"path"=>"favicon"}
Completed 404 Not Found in 1ms (Views: 0.2ms | ActiveRecord: 0.0ms)
(994.0ms) SELECT task_result_id,task_id,worker_id,product_id,passed,date_time,details FROM task_results ORDER BY task_result_id DESC;
Completed 200 OK in 8656ms (Views: 0.2ms | ActiveRecord: 994.0ms)
How can I configure Log4r to synchronize its output, or otherwise fix this problem?
Ruby is single-threaded, so I assume you are running your Rails app on a server like Unicorn?
The only solution I can think of is timestamps (with microseconds) combined with not using the multi-line Rails log output, which is only suitable for single requests in development environments. Even this does not guarantee "ordered" logs, because a line can be written when the request is served or at some later point when the buffer is flushed again.
This is an in-depth tutorial on Log4r that also shows how to add custom timestamps to your logs and keep all the important information on a single line. That prevents log lines from other requests appearing inside your main request's output, and with microsecond timestamps you can reorder the lines manually afterwards. However, the order should not matter much if you are running concurrent app servers, because ordering is not guaranteed there anyway.
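For reference, here is a minimal sketch of a Log4r outputter with microsecond timestamps and a single-line pattern, in the spirit of that tutorial; the logger name and log file path are placeholders, not taken from the question:

# config/initializers/log4r.rb -- a minimal sketch, not the exact setup from the question
require 'log4r'

log = Log4r::Logger.new('rails')
formatter = Log4r::PatternFormatter.new(
  :pattern      => '%d [%l] %m',              # timestamp, level, message on one line
  :date_pattern => '%Y-%m-%d %H:%M:%S.%6N'    # strftime format; %6N gives microseconds
)
log.outputters << Log4r::FileOutputter.new(
  'file', :filename => 'log/production.log', :formatter => formatter
)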
I have never written a complex regular expression before, and what I need seems to be (at least) a bit complicated.
I need a Regex to find matches for the following:
The relevant log lines are shown below. I need a regexp for these; please help. Thanks in advance.
Started GET \"/\" for 1x2.x6.1xx.2x at 2016-10-20 11:04:00 +0200
Processing by WelcomeController#index as HTML
Current user: anonymous
Redirected to http://example.pro.local/login?back_url=http%xx%xx%2Fexample.pro.local%2F
Filter chain halted as :check_if_login_required rendered or redirected
Completed 302 Found in 3.4ms (ActiveRecord: 1.9ms)
Extracting information from unstructured logs with regex is tedious and brittle.
Instead, it is preferable to make the application output logs in a structured format (as suggested by @ndn).
Consider using lograge and/or logstasher in your Rails application to output structured logs.
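For example, a minimal lograge setup might look like this (the initializer path and the JSON formatter choice are assumptions; depending on your Rails version this can also live in config/environments/production.rb):

# config/initializers/lograge.rb -- a minimal sketch
Rails.application.configure do
  config.lograge.enabled   = true
  config.lograge.formatter = Lograge::Formatters::Json.new   # one structured line per request
  config.lograge.custom_options = lambda do |event|
    { params: event.payload[:params].except('controller', 'action') }
  end
end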
When the Rails log says something like
Completed 200 OK in 454.8ms (Views: 117.9ms | ActiveRecord: 199.7ms | Solr: 0.0ms)
What kind of time is being displayed? CPU time, wall time, or something else?
http://guides.rubyonrails.org/v3.2/performance_testing.html#request-logging mentions that time is measured, but not what kind of time. I haven't found any other documentation in the Rails Guides about logging, apart from how to generate messages in the Rails logger.
Wall-time.
Check the implementation of the Notification Instrumentation:
https://github.com/rails/rails/blob/2746a227fbb7e56bd51ab47fa97919f206972ab2/activesupport/lib/active_support/notifications/instrumenter.rb
and the implementation of the LogSubscriber:
https://github.com/rails/rails/blob/b5eb2423b6e431ba53e3836d58449e7e810096b4/actionpack/lib/action_controller/log_subscriber.rb
and this:
https://github.com/rails/rails/blob/7f18ea14c893cb5c9f04d4fda9661126758332b5/activesupport/lib/active_support/subscriber.rb
It uses Time.now, which is wall time.
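You can observe the same mechanism from the outside by subscribing to the instrumentation events yourself. A rough sketch (the subscriber below is illustrative, not part of Rails); in the Rails versions linked above, the start/finish arguments are plain Time.now values:

# e.g. in an initializer -- logs the wall-clock duration of each controller action
ActiveSupport::Notifications.subscribe('process_action.action_controller') do |name, start, finish, id, payload|
  elapsed_ms = (finish - start) * 1000.0   # start/finish are wall-clock timestamps
  Rails.logger.info "#{payload[:controller]}##{payload[:action]} took #{elapsed_ms.round(1)}ms (wall time)"
end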
I want to know what queries are being run when a user interacts with different pages on the website. I'd also like to know how long each query took. How and where can I see that?
There's a log folder in every Rails application, with a .log file for each of the environments you've run your application in. You'll see these kinds of entries in it:
Started GET "/assets/jquery.js?body=1" for 127.0.0.1 at 2012-11-29 11:09:47 -0400
Served asset /jquery.js - 200 OK (3ms)
Started GET "/users/auth/google/callback?_method=post&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fud&openid.response_nonce=2012-10-26T19%3A29%3A52ZkvQlFeFr5Rk78g&openid.return_to=http%3A%2F%2Flocalhost%3A3000%2Fusers%2Fauth%2Fgoogle%2Fcallback%3F_method%3Dpost&openid.assoc_handle=AMlYA9VjNZ-QIIMe5bhvtPLBsAdm5xMltOa7MEwUoW4Opx9tXd_khhcS&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ext1%2Cext1.mode%2Cext1.type.ext2%2Cext1.value.ext2%2Cext1.type.ext0%2Cext1.value.ext0%2Cext1.type.ext3%2Cext1.value.ext3&openid.sig=ehFAJ1m8nPces8%2Bj6Ud%2FicpuohY%3D&openid.identity=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItOawljL9RKBE7iUQHk94UhJQ-4sDOTawUfpNc&openid.claimed_id=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItOawljL9RKBE7iUQHk94UhJQ-4sDOTawUfpNc&openid.ns.ext1=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ext1.mode=fetch_response&openid.ext1.type.ext2=http%3A%2F%2Faxschema.org%2FnamePerson%2Ffirst&openid.ext1.value.ext2=Fernando&openid.ext1.type.ext0=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ext1.value.ext0=fernando%40findhorsesforsale.net&openid.ext1.type.ext3=http%3A%2F%2Faxschema.org%2FnamePerson%2Flast&openid.ext1.value.ext3=Mendez" for 127.0.0.1 at 2012-10-26 15:29:52 -0400
Started GET "/run_dates/current_pdf_generation_status.js?pdf_generation_id=e3ef90b05844012f4fc2723c91dfe57c&_=1332637701414" for 127.0.0.1 at 2012-03-24 21:08:29 -0400
Processing by RunDatesController#current_pdf_generation_status as JS
Parameters: {"pdf_generation_id"=>"e3ef90b05844012f4fc2723c91dfe57c", "_"=>"1332637701414"}
User Load (0.2ms) SELECT "users".* FROM "users" WHERE "users"."id" = 1 LIMIT 1
RunDate Load (0.1ms) SELECT "run_dates".* FROM "run_dates" WHERE (pdf_generation_id = 'e3ef90b05844012f4fc2723c91dfe57c') LIMIT 1
Rendered run_dates/current_pdf_generation_status.js.erb (0.1ms)
Completed 200 OK in 4ms (Views: 2.0ms | ActiveRecord: 0.3ms)
.....
Have a look at development.log if you are in development mode; it will show all the queries along with the time taken to run each one. For production, see production.log, and for staging, look at staging.log.
If you want to look at the queries for a specific request, you can delete that file and Rails will recreate it. But instead of deleting it, I would suggest printing a marker like '-----------------' just before hitting the page, to separate the queries of the page you care about from everything else (see the sketch below).
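For instance, from a rails console session just before you hit the page (a trivial sketch):

# Writes a visible marker into the same log file the app is using
Rails.logger.info '-----------------'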
If you are the user, you can use a browser plugin (e.g. Firebug).
If you are not the user, you can check the log to see which queries were run.
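If you want the queries and their timings programmatically rather than by reading the log file, one option (a sketch, not the only way) is to subscribe to ActiveRecord's SQL notifications:

# e.g. config/initializers/query_log.rb -- logs every query with its duration
ActiveSupport::Notifications.subscribe('sql.active_record') do |name, start, finish, id, payload|
  duration_ms = ((finish - start) * 1000.0).round(1)
  Rails.logger.info "SQL (#{duration_ms}ms) #{payload[:sql]}"
end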
I have a Ruby on Rails app that works fine locally with a sqlite3 database and can save and retrieve records without issue.
When deployed to Heroku at http://moviedata.herokuapp.com/ using a postgresql database, records are not saving even though it looks like the logs say they are. Records read from the db fine and data is displayed as expected.
The tailed logs for adding a record are:
2012-08-21T19:51:31+00:00 app[web.1]:
2012-08-21T19:51:31+00:00 app[web.1]:
2012-08-21T19:51:31+00:00 app[web.1]: Started POST "/" for 50.53.6.156 at 2012-08-21 19:51:31 +0000
2012-08-21T19:51:31+00:00 app[web.1]: Parameters: {"utf8"=>"✓", "authenticity_token"=>"+BYQLzhrfDkUVW8UaHikHpmtGHxpeQ/yF4VByHh9m1I=", "movie"=>{"title"=>"The Running Man", "description"=>"A documentary about a public execution game show.", "year"=>"1987", "genre"=>"science fiction"}, "commit"=>"Create Movie"}
2012-08-21T19:51:31+00:00 app[web.1]: Processing by MoviesController#index as HTML
2012-08-21T19:51:31+00:00 app[web.1]: Rendered movies/index.html.erb within layouts/application (5.1ms)
2012-08-21T19:51:31+00:00 app[web.1]: Completed 200 OK in 9ms (Views: 6.7ms | ActiveRecord: 0.9ms)
2012-08-21T19:51:31+00:00 heroku[router]: POST moviedata.herokuapp.com/ dyno=web.1 queue=0 wait=0ms service=17ms status=200 bytes=3479
The 'heroku pg' command shows the same number of rows (11) on the postgres database after a record is added.
This is a simple app I built to learn Rails and the Heroku platform. To reproduce this, just visit http://moviedata.herokuapp.com/ and click "New Movie", enter some junk data in the form, and hit "create movie". The record should be saved and show up in the list on the front page, but it doesn't.
Is there perhaps something I have to turn on, configure, or activate in order to be able to write to the postgres database? Seems very strange to me that it could be read from but not written to. Any better way to troubleshoot than the logs?
Locally I'm using Ruby 1.9.3, Rails 3.2.8, PostgreSQL 9.1.5, SQLite 3.7.9, and Heroku Toolbelt 2.30.3.
Edit/Update: I switched the local version to use PostgreSQL, and it has the same problem: records are not saved. With the user set to log_statement='all', the log at /var/log/postgresql/postgresql-9.1-main.log shows lots of SELECTs, but when adding a record is attempted, the log shows the database is never hit.
Foreman shows the data being posted, like so:
22:38:03 web.1 | Started POST "/" for 127.0.0.1 at 2012-08-21 22:38:02 -0700
22:38:03 web.1 | Processing by MoviesController#index as HTML
22:38:03 web.1 | Parameters: {"utf8"=>"✓", "authenticity_token"=>"0AyxRbwl/Kgi05uI1KX8uxVUJjx9ylAA1ltdWgmunm4=", "movie"=>{"title"=>"Army of Darkness", "description"=>"A man fights the living dead using a boomstick.", "year"=>"1997", "genre"=>"horror"}, "commit"=>"Create Movie"}
22:38:03 web.1 | Movie Load (0.8ms) SELECT "movies".* FROM "movies" ORDER BY title
22:38:03 web.1 | Rendered movies/index.html.erb within layouts/application (14.9ms)
A failed commit does sound like a great explanation. I'm not yet sure how to check whether the driver is set to commit or to see how/when a commit might have failed.
This is a very simple application with no load balancing or complex configuration, and most of the code was generated by the 'generate scaffold' command, but it's entirely possible that some constraint is being violated before the db is ever hit. Perhaps there's a way to crank the Foreman (or Rails) log level up to 11? I also tried using Thin instead and scoured the log files in the log/ folder, but didn't find anything other than what's logged above.
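(For what it's worth, the Rails side of "turning it up to 11" is just a one-line setting; a sketch, assuming Rails 3.2 defaults, where :debug is already the default in development so this mainly rules out a quieter level:)

# config/environments/development.rb
config.log_level = :debug               # most verbose level Rails offers
config.logger    = Logger.new(STDOUT)   # so Foreman captures everything on the console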
This sounds a lot like a transaction issue, where you aren't COMMITting your transactions after you do work, so the changes are lost. If your SQLite driver defaults to COMMITting transactions that are closed without an explicit COMMIT or ROLLBACK, and your Pg driver defaults to ROLLBACK, you'd get the behaviour described. The same will happen if SQLite autocommits each statement by default and the Pg driver defaults to opening a transaction.
This is one of the many good reasons to use the same local database for testing as you're going to deploy to when you want to go live.
If you were on a normal Pg instance I'd tell you to enable log_statement = 'all' in postgresql.conf, reload Pg, and watch the logs. You can't do that on Heroku, but you do have access to the Pg logs with heroku logs --ps postgres. Try running ALTER USER my_heroku_user SET log_statement = 'all';, re-testing, and examining the logs.
Alternately, install Pg locally.
Other less likely possibilities that come to mind:
You're using long-running SERIALIZABLE transactions for reads, so their snapshot never gets updated. Pretty unlikely.
Permissions on database objects are causing INSERTs, UPDATEs, etc to fail, and your app is ignoring the resulting errors. Again, unlikely.
You have DO INSTEAD rules that don't do what you expect, or BEFORE triggers that return NULL, thus silently turning operations into no-ops. Seems unlikely if you're testing with SQLite.
You're writing to a different DB than you're reading from. Not impossible in setups that're attempting to read from a cluster of hot standbys, etc.
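One quick way to rule out the "app is ignoring the errors" case is to make the write raise instead of failing silently. A hedged sketch, using the model name from the question but an assumed create action (Rails 3.2 style, no strong parameters):

class MoviesController < ApplicationController
  def create
    @movie = Movie.new(params[:movie])
    @movie.save!   # raises ActiveRecord::RecordInvalid instead of silently returning false
    redirect_to movies_path
  end
end

If that raises, the failing validation or constraint will show up in the Heroku logs.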
I am trying to do some intense optimization. I know about Delayed Job, but I do not want to spend even the time needed to create the Delayed Job before I send the response. I want to respond to the user first and only then worry about creating the Delayed Job.
Is there some gem that does something like that?
If it is lightweight, you can create a thread:
def show
  @record = Record.find(params[:id])
  # Kick off the slow work in a separate thread; the action returns immediately
  Thread.new do
    Rails.logger.info "#{Time.now}"
    sleep 10
    Rails.logger.info "#{Time.now}"
  end
end
My log output with the above (the response renders instantly):
Started GET "/" for 127.0.0.1 at 2012-04-17 14:15:29 -0500
Processing by HomeController#public as HTML
2012-04-17 14:15:29 -0500
Rendered home/public.html.haml within layouts/application (0.4ms)
Rendered layouts/_navbar.html.haml (8.9ms)
Rendered layouts/_flash.html.haml (0.1ms)
Rendered layouts/_footer.html.haml (0.1ms)
Completed 200 OK in 210ms (Views: 202.3ms | ActiveRecord: 0.0ms)
2012-04-17 14:15:39 -0500
Delayed Job. At some point you have to store some information about the work you want to do after the response has been sent, and that is exactly what Delayed Job is for: storing information about a process to run in the background.
Any other gem would do basically the same thing: store a bit of information about what to do later. What's the problem with Delayed Job? If you have a specific issue with it, you're better off solving that than switching to some other unproven approach.
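For comparison, the Delayed Job version of the earlier thread example is also tiny; a sketch, where generate_report is a made-up method standing in for whatever slow work you have:

def show
  @record = Record.find(params[:id])
  @record.delay.generate_report   # Delayed Job stores this call; a worker runs it later
  # the response renders immediately; the work happens in the background worker process
end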