I have a weird issue on a test server...
Basically, my app is running fine, yet if I check production.log it is for some reason stuck at yesterday, when I had an error in the app. I have since fixed the error and deployed again, but the log still isn't being updated. It's been like that since yesterday night.
So if I try
tail -f log/production.log
the last entry I see is from yesterday. What's going on? This is so weird O___o
Here's what's in my log:
Started GET "/one" for xx.xx.xx.xx at 2012-06-04 09:14:30 -0400
Processing by ParagraphsController#one as HTML
(0.5ms) SELECT id FROM paragraphs WHERE length = 1
Paragraph Load (0.3ms) SELECT "paragraphs".* FROM "paragraphs" WHERE "paragraphs"."id" = $1 LIMIT 1 [["id", 1]]
Rendered paragraphs/one.html.erb within layouts/application (0.1ms)
Completed 200 OK in 3ms (Views: 1.0ms | ActiveRecord: 0.8ms)
tail: cannot open `1' for reading: No such file or directory
Any help is greatly appreciated!
Perhaps your log file has been rotated? tail -f follows the file it originally opened, not the filename. If you want to follow the filename, use tail -F instead (note the capital F). That way, when the file is rotated, you'll get the new log file rather than stare at the old one.
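For example, instead of the command from the question, you would run:
tail -F log/production.log
With GNU tail, -F is shorthand for --follow=name --retry, which re-opens the path whenever the file behind it is replaced.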
In Ruby on Rails, when I run the Rails server, the very first request seems to be extremely slow; the logs show that the slowness comes from the view rendering:
2017-08-14 10:24:12.707 [ 22139] [INFO ] Completed 200 OK in 18547ms (Views: 18501.6ms | ActiveRecord: 3.7ms)
I assume it's because it needs to connect to the database. The next request is, of course, fast(er):
2017-08-14 11:01:54.937 [ 25662] [INFO ] Completed 200 OK in 765ms (Views: 714.0ms | ActiveRecord: 8.3ms)
I assume this has something to do with caching, and that it already has a database connection. I've tried restarting my server, restarting the database, clearing my browser cache, and running rake db:sessions:clear, but I am unable to get the first request to go slow again in development.
Here's where things get interesting. Every single time I run the cucumber tests, the very first request is always incredibly slow:
2017-08-14 11:19:52.879 [ 27729] [INFO ] Completed 200 OK in 38326ms (Views: 38306.8ms | ActiveRecord: 6.1ms)
It's even slower than in development, for unknown reasons.
What is different between restarting the Rails server and re-running a test that makes the first request of the tests so slow? What steps can I take to troubleshoot such an issue?
(It's no fun waiting 30 seconds every time we want to run one of our cucumber tests.)
Unfortunately, the answer was extremely specific to our code, but I wanted to share it in case anyone else ever runs into this situation.
I noticed that if I ran rake tmp:cache:clear the first request in the browser would be really slow again. I investigated that command and saw it cleared out the #{Rails.root}/tmp directory.
I then found this line in the Cucumber env.rb:
Dir.foreach("#{Rails.root}/tmp") { |f|
  # removes everything under tmp/ (including warm template/asset caches) on every run
  FileUtils.rm_rf("#{Rails.root}/tmp/#{f}")
}
That appeared to have been the culprit the entire time. I don't know why it was added (3 years ago...).
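In our case the fix was simply to delete that block from env.rb. If some cleanup between runs is genuinely needed, restricting it to the entries you care about keeps the caches in tmp/cache warm; here is a minimal sketch, assuming (hypothetically) that stale pidfiles were the only thing worth clearing:
require 'fileutils'
# Hypothetical narrower cleanup: remove only stale pidfiles, leaving
# tmp/cache (compiled templates, sass cache, etc.) intact so the first
# request after boot stays fast.
FileUtils.rm_f(Dir.glob("#{Rails.root}/tmp/pids/*.pid"))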
How can I remove this active-model-serializers message from my logs?
[active_model_serializers] Rendered ActiveModel::Serializer::CollectionSerializer with ActiveModelSerializers::Adapter::JsonApi
In your config/initializers/active_model_serializer.rb:
require 'active_model_serializers'
ActiveSupport::Notifications.unsubscribe(ActiveModelSerializers::Logging::RENDER_EVENT)
This properly unsubscribes you from the rendering event, as opposed to just disabling all logging, etc.
from: https://github.com/rails-api/active_model_serializers/blob/ab98c4a664f26077e5b3c90ea6bcbe129ec2d0b9/docs/general/logging.md
I haven't found anything in the AMS configuration to disable logging; however, there are several other ways of achieving this by redefining ActiveModelSerializers.logger (source).
In your config/initializers/active_model_serializer.rb:
1) Increase the log level so that nothing will get logged:
ActiveModelSerializers.logger.level = Logger::Severity::UNKNOWN
or
2) Write the AMS log to /dev/null:
ActiveModelSerializers.logger = ActiveSupport::TaggedLogging.new(ActiveSupport::Logger.new('/dev/null'))
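As a side note (my addition, not from the AMS docs): Ruby's IO::NULL constant resolves to the platform's null device ('/dev/null' on Unix, 'NUL' on Windows), so the same idea can be written without hard-coding the path:
ActiveModelSerializers.logger = ActiveSupport::TaggedLogging.new(ActiveSupport::Logger.new(IO::NULL))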
Using Rails 3.1.1 on Heroku, Dalli & Memcachier.
production.rb
config.cache_store = :dalli_store
config.action_controller.perform_caching = true
Gemfile
gem 'memcachier'
gem 'dalli'
Controller#show
unless fragment_exist?("gift-show--" + @gift.slug)
  # perform a complicated database query that takes 4-5 seconds
end
View
<% cache('gift-show--' + @gift.slug, :expires_in => 3456000) do # 40 days %>
  # ... create the HTML ...
<% end %>
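(A side note on the code itself, not on the disappearing keys: the controller guard and the view block must build the same key, so a small helper keeps them from drifting apart. The name gift_cache_key below is made up for this sketch.)
# app/helpers/gifts_helper.rb -- hypothetical helper so the controller and
# the view derive the fragment key from one place
module GiftsHelper
  def gift_cache_key(gift)
    "gift-show--#{gift.slug}"
  end
end
The controller would include GiftsHelper (helpers aren't automatically available there) and call fragment_exist?(gift_cache_key(@gift)), while the view calls cache(gift_cache_key(@gift), :expires_in => 3456000).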
Log output when a cached page is loaded
2012-10-17T03:15:43+00:00 app[web.2]: Started GET "/present/baka-kaka-set" for 23.20.90.66 at 2012-10-17 03:15:43 +0000
2012-10-17T03:15:43+00:00 app[web.2]: Could not find fragment for gift-show--baka-kaka-set # my log comment
2012-10-17T03:15:44+00:00 heroku[router]: GET www.mydomain.com/present/baka-kaka-set dyno=web.2 queue=0 wait=0ms service=195ms status=200 bytes=17167
2012-10-17T03:15:43+00:00 app[web.2]: cache: [GET /present/baka-kaka-set] miss
2012-10-17T03:15:43+00:00 app[web.2]: Processing by GiftsController#show as */*
2012-10-17T03:15:43+00:00 app[web.2]: Parameters: {"id"=>"baka-kaka-set"}
2012-10-17T03:15:43+00:00 app[web.2]: Exist fragment? views/gift-show--baka-kaka-set (1.5ms)
2012-10-17T03:15:43+00:00 app[web.2]: Read fragment views/gift-show--baka-kaka-set (1.5ms)
2012-10-17T03:15:43+00:00 app[web.2]: Write fragment views/gift-show--baka-kaka-set (4.0ms)
Each page is pretty much static once it has been created, thus the long expiry time (40 days).
If I load a page like this, it seems to be written to the cache; I can verify that by reloading the page and seeing that it bypasses the expensive controller query (as I expect) and delivers the page quite fast.
My problem
The problem is that if I return to the page a few minutes later, it has already been deleted from the cache! fragment_exist?('gift-show--baka-kaka-set') returns false (and so does Rails.cache.exist?('views/gift-show--baka-kaka-set')).
I can see in the Memcachier analytics that the number of keys is decreasing. Even when I run a script that loads each page (to create the fragment caches), the number of keys in Memcachier does not increase at the same rate.
I am at about 34% of memory usage in Memcachier, so I am not close to the limit.
My questions
Am I doing something completely wrong? What should I do instead?
Could it be that I am writing to two different caches or something?
The last line in the log confuses me a bit. It seems that even after a fragment is read, a new one is still written. Isn't that odd?
UPDATE 1: I realized I had previously commented out this line:
# Set expire header of 30 days for static files
config.static_cache_control = "public, max-age=2592000"
Could that have caused the problem? Are cached actions considered 'static assets'?
This turned out to be a bug in Memcachier. Memcachier support came to that conclusion, and everything works fine with plain Memcache. Memcachier will try to fix the bug.
I'm not too sure how to debug this; any tips would be greatly appreciated.
Basically, I just made a large commit, and now my server can't boot because of a sunspot-solr issue.
I notice it when I try to reindex manually.
This is the output:
Processing MainController#index (for 69.114.195.64 at 2011-08-02 06:47:21) [GET]
Parameters: {"action"=>"index", "controller"=>"main"}
HomepageBackground Load (0.2ms) SELECT * FROM `homepage_backgrounds`
HomepageBackground Columns (23.4ms) SHOW FIELDS FROM `homepage_backgrounds`
HomepageBackground Load (0.8ms) SELECT * FROM `homepage_backgrounds` ORDER BY RAND() LIMIT 1
SQL (30.2ms) SHOW TABLES
Organization Columns (1.8ms) SHOW FIELDS FROM `organizations`
Solr Select (Error) {:q=>"*:*", :start=>0, :fq=>["type:Organization", "published_b:true", "updated_at_d:[2010\\-08\\-02T13\\:47\\:21Z TO *]"], :rows=>1000000}
Timeout::Error (execution expired):
/usr/lib/ruby/1.8/timeout.rb:64:in `rbuf_fill'
vendor/gems/right_http_connection-1.2.4/lib/net_fix.rb:51:in `rbuf_fill'
/usr/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
UPDATE
OK, so I reverted and rebased to the last working commit, and I still got the same error. So then I ran ps aux | grep solr and found five instances of Solr running. Strange, I thought, and killed every single one of them. Blam: the server was back up and running strong. So now I'm trying my new commits again, but with an eye on these feral Sunspot instances.
This problem was caused by feral sunspot-solr instances running amok. Nothing kill -9 couldn't handle. Problem solved.
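For anyone hitting the same thing, the cleanup amounted to (the PIDs below are illustrative):
ps aux | grep solr    # list any stray sunspot-solr processes
kill -9 12345 12346   # kill each leftover PID from that list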
I have a Rails app that is generating duplicate requests for every request in development. The app is running Rails 2.3.5, with my primary development machine running Ubuntu 10.04. However, the same code runs fine without duplicate requests on my OS X 10.6 box. It also runs in production mode on either machine without problems.
Processing DashboardController#index (for 127.0.0.1 at 2010-07-16 10:23:08) [GET]
Parameters: {"action"=>"index", "controller"=>"dashboard"}
Rendering template within layouts/application
Rendering dashboard/index
Term Load (1.9ms) SELECT * FROM "date_ranges" WHERE ('2010-07-16' BETWEEN begin_date and end_date ) AND ( ("date_ranges"."type" = 'Term' ) )
StaticData Load (1.1ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
Rendered dashboard/_news (0.1ms)
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
StaticData Load (0.9ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'TAG_LINE') LIMIT 1
Completed in 67ms (View: 58, DB: 5) | 200 OK [http://localhost/dashboard]
SQL (0.4ms) SET client_min_messages TO 'panic'
SQL (0.4ms) SET client_min_messages TO 'notice'
Processing DashboardController#index (for 127.0.0.1 at 2010-07-16 10:23:08) [GET]
Parameters: {"action"=>"index", "controller"=>"dashboard"}
Rendering template within layouts/application
Rendering dashboard/index
Term Load (1.9ms) SELECT * FROM "date_ranges" WHERE ('2010-07-16' BETWEEN begin_date and end_date ) AND ( ("date_ranges"."type" = 'Term' ) )
StaticData Load (1.1ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
Rendered dashboard/_news (0.1ms)
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
StaticData Load (0.9ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'TAG_LINE') LIMIT 1
Completed in 67ms (View: 58, DB: 5) | 200 OK [http://localhost/dashboard]
SQL (0.4ms) SET client_min_messages TO 'panic'
SQL (0.4ms) SET client_min_messages TO 'notice'
Notice that the requests are exactly the same, even down to the timestamps.
I have tried using Ruby 1.8.7 and 1.9.1 as well as swapping between Mongrel and WEBrick, and it always processes each request twice (or at least it generates two log entries). I tried removing most of the routes to see if I had something weird going on, but the problem persists. I tried different browsers (Chrome, Safari, eLinks) from different machines, but the problem persists. I removed all of my gems and reinstalled only the necessary ones, but to no avail.
Does anyone have any idea why Rails would produce duplicate requests like this? I am about at my wits' end and grasping at straws. The only bright spark is that this behavior does not happen in the production environment, only in development.
When people come to this question from Google, it's important to disambiguate the problem: duplicate logs that look like this:
A
A
B
B
C
C
versus duplicate logs that look like this:
A
B
C
A
B
C
The former is likely duplicate LOGGING; the latter is likely duplicate REQUESTS. If yours is the latter, as it was for the question asker (OP), you should strongly consider @www's answer of hunting down an <img src="#"> or a similar self-referential URL tag. I spent hours trying to figure out why my application appeared to make two duplicate requests, and after reading @www's answer (or @aelor's on Double console output?), I found
%link{href: "", rel: "shortcut icon"}/
in my code! It was causing every page of my production app to render twice, which was terrible for performance and very annoying!
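The fix was simply giving the tag a real href so it no longer points back at the current page (the path below is illustrative):
%link{href: "/favicon.ico", rel: "shortcut icon"}/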
Check your code to see if there is something like this inside it. I ran into the same problem just now because of the tag
<img src="#">
This will cause Rails to process duplicate requests: the browser resolves the "#" src against the current page's URL and issues a second GET for the very same page.
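A minimal before/after, assuming the tag was meant as a placeholder:
<img src="#">                       <!-- before: "#" resolves to the current page -->
<img src="/assets/placeholder.png"> <!-- after: point at a real asset (name illustrative) -->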
This was happening to me in Rails 4.2.3 after installing Heroku's rails_12factor gem, which depends on rails_stdout_logging.
The "answer" to the problem was to move to a new directory and fetch the original code from Github. After getting everything configured and setup in the new directory the application works as it should with no duplicate requests. I still don't know why the code in the original directory borked out; I even diff'ed the directories and the only outliers were the log files.
I'm answering my own question here for the sanity of others who might experience the same problem.
I resolved this problem by commenting out the following line in config/environments/development.rb:
config.middleware.use Rails::Rack::LogTailer
I don't remember exactly why that setting was there in the first place.
I solved this same problem by cleaning all precompiled assets with:
rake assets:clean
I had tried deleting the app folder and checking it out again from GitHub, but that didn't work.
Hope this helps. Thanks.
This started happening to me in development after playing around with some custom middleware I wrote.
Running rake assets:clean:all solved it.
This small workaround solved my issue. Follow these steps:
Under your Rails external libraries, search for the railties module.
Go to this path within it: /lib/commands/server.rb
In this file, comment out the line
Rails.logger.extend(ActiveSupport::Logger.broadcast(console))
This switches off broadcasting of the log to the console. Just restart your Rails server and you will not see any repeated logs anymore. Happy coding.