Why is the first Rails request extremely slow in testing? - ruby-on-rails

In Ruby on Rails, when I run the rails server, the very first request seems to be extremely slow, the logs show that the slowness comes from the view rendering:
2017-08-14 10:24:12.707 [ 22139] [INFO ] Completed 200 OK in 18547ms (Views: 18501.6ms | ActiveRecord: 3.7ms)
I assume it's because it needs to connect to the database. The next request is, of course, faster:
2017-08-14 11:01:54.937 [ 25662] [INFO ] Completed 200 OK in 765ms (Views: 714.0ms | ActiveRecord: 8.3ms)
I assume this has something to do with caching, and that it already has a database connection. I've tried restarting my server, restarting the database, clearing my browser cache, and running rake db:sessions:clear, but I am unable to get the first request to be slow again in development.
Here's where things get interesting. Every single time I run the cucumber tests, the very first request is always incredibly slow:
2017-08-14 11:19:52.879 [ 27729] [INFO ] Completed 200 OK in 38326ms (Views: 38306.8ms | ActiveRecord: 6.1ms)
It's even longer than it is in development for unknown reasons.
What is different between restarting the Rails server and re-running a test that makes the first request of the tests so slow? What steps can I take to troubleshoot such an issue?
(It's no fun waiting 30 seconds every time we want to run one of our cucumber tests)

Unfortunately the answer was extremely isolated to our code, but I wanted to share it in case anyone else runs into this situation.
I noticed that if I ran rake tmp:cache:clear, the first request in the browser would be really slow again. I investigated that command and saw that it cleared out the #{Rails.root}/tmp directory.
I then found this line in the Cucumber env.rb:
Dir.foreach("#{Rails.root}/tmp") { |f|
  FileUtils.rm_rf("#{Rails.root}/tmp/#{f}")
}
That appeared to be the culprit the entire time. I don't know why that was added (3 years ago...)
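If something under tmp/ genuinely needs clearing between test runs, a narrower version of that hook would spare the cache that makes the first request fast. Here is a sketch, assuming the conventional Rails tmp/ layout; clear_tmp_except_cache is a name invented for illustration:

```ruby
require "fileutils"

# Sketch: clear everything under an app's tmp/ EXCEPT tmp/cache, so the
# compiled template/asset cache survives between test runs while transient
# files (pids, sockets, restart.txt, ...) are still removed.
def clear_tmp_except_cache(root)
  tmp = File.join(root, "tmp")
  Dir.children(tmp).each do |entry|
    next if entry == "cache"   # keep the cache that warms the first request
    FileUtils.rm_rf(File.join(tmp, entry))
  end
end
```

Dropping the hook entirely is simpler still if nothing in tmp/ actually needs cleaning between runs.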

Related

Slow cache read on first cache fetch in Rails

I am seeing some very slow cache reads in my rails app. Both redis (redis-rails) and memcached (dalli) produced the same results.
It looks like it is only the first call to Rails.cache that causes the slowness (averaging 500ms).
I am using Skylight to instrument my app and can see the slow call in its trace graph.
I have a Rails.cache.fetch call in this code, but when I benchmark it I see it average around 8ms, which matches what memcache-top shows for my average call time.
I thought this might be Dalli connections opening slowly, but benchmarking that didn't show anything slow either. I'm at a loss for what else to check.
Does anyone have any good techniques for tracking this sort of thing down in a rails app?
Edit #1
The memcached servers are stored in ENV['MEMCACHE_SERVERS']; all of them are in the us-east-1 datacenter.
Cache config looks like:
config.cache_store = :dalli_store, nil, { expires_in: 1.day, compress: true }
I ran something like:
100000.times { Rails.cache.fetch('something') }
and calculated the average timings and got something on the order of 8ms when running on one of my webservers.
To test my theory that the first request is slow, I opened a console on my web server and ran the following as the first command:
irb(main):002:0> Benchmark.ms { Rails.cache.fetch('someth') { 1 } }
Dalli::Server#connect my-cache.begfpc.0001.use1.cache.amazonaws.com:11211
=> 12.043342
Edit #2
Ok, I split out the fetch into a read and write, and tracked them independently with statsd. It looks like the averages sit around what I would expect, but the max times on the read are very spiky and get up into the 500ms range.
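The avg-vs-max distinction in that edit is the key instrumentation idea: an 8ms mean can hide rare 500ms reads. A minimal pure-Ruby sketch of the same tracking (statsd does this for you; TimingStats is a name invented for illustration):

```ruby
# Record each cache-read duration and report both the mean and the worst
# case, since a low average can mask occasional large spikes.
class TimingStats
  def initialize
    @samples = []
  end

  def record(ms)
    @samples << ms
  end

  def avg
    @samples.sum / @samples.size.to_f
  end

  def max
    @samples.max
  end
end
```

In an app you would feed it with something like `stats.record(Benchmark.ms { Rails.cache.read(key) })` and flush the numbers to your metrics backend.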
Screenshot: http://s16.postimg.org/5xlmihs79/Screen_Shot_2014_12_19_at_6_51_16_PM.png

Rails ActiveRecord transaction blocks don't seem to roll back

I have a Rails app that I'm crashing on purpose: it's local, and I'm hitting Ctrl+C to kill it midway through processing records.
To my mind, the records in the block shouldn't have been committed. Is this a Postgres "error", a Rails "error", or a dave ERROR?
ActiveRecord::Base.transaction do
  UploadStage.where("id in (#{ids.join(',')})").update_all(:status => 2)
  records.each do |record|
    record.success = process_line(record.id, klas, record.hash_value).to_s[0..250]
    record.status = 1000
    record.save
  end
end
I generate my ids by reading out all the records where the status is 1.
Nothing but this function sets the status to 1000..
If the action crashes for whatever reason, I'd expect there to be no records in the database with status = 2.
This is not what I'm seeing, though. Half the records have status 1000, the other half have status 2.
Am I missing something?
How can I make sure there are no 2's if the app crashes?
EDIT:
I found this link http://coderrr.wordpress.com/2011/05/03/beware-of-threadkill-or-your-activerecord-transactions-are-in-danger-of-being-partially-committed/
As I suspected and as confirmed by dave's update, it looks like ActiveRecord will commit a half-finished transaction under some circumstances when you kill a thread. Woo, safe! See dave's link for detailed explanation and mitigation options.
If you're simulating a hard crash (host OS crash or plug-pull), Ctrl+C is absolutely not the right approach. Use Ctrl+\ to send SIGQUIT, which is generally not handled, or use kill -KILL to hard-kill the process with no opportunity for cleanup. Ctrl+C sends SIGINT, a gentle signal that's usually attached to a clean-shutdown handler.
In general, if you're debugging issues like this, you should enable detailed query logging and see what Rails is doing. Use log_statement = 'all' in postgresql.conf then examine the PostgreSQL logs.
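The failure mode in the linked post can be sketched without ActiveRecord: a transaction helper rolls back only when the block raises, and Thread#kill can unwind a block without raising anything a rescue clause sees, so the commit path can still run. A toy model of the exception path (FakeTransaction is invented for illustration; real ActiveRecord issues SQL BEGIN/COMMIT/ROLLBACK):

```ruby
# Toy transaction: commit on normal exit, roll back when the block raises.
# A thread killed mid-block may skip the rescue clause entirely, which is
# exactly how a "rolled back" transaction can end up partially committed.
class FakeTransaction
  attr_reader :state

  def initialize
    @state = :open
  end

  def transaction
    yield
    @state = :committed
  rescue => e
    @state = :rolled_back
    raise e
  end
end
```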

production.log in rails 3 is stuck

I have a weird issue on a test server...
Basically my app is running fine, yet if I check production.log, it is for some reason stuck at yesterday, when I had an error in the app. Since then I have fixed it and deployed again, but the log still won't be updated. It's been like that since yesterday night.
So if I try
tail -f log/production.log
the last log I see is from yesterday. What's going on? This is so weird O___o
here's from my log:
Started GET "/one" for xx.xx.xx.xx at 2012-06-04 09:14:30 -0400
Processing by ParagraphsController#one as HTML
(0.5ms) SELECT id FROM paragraphs WHERE length = 1
Paragraph Load (0.3ms) SELECT "paragraphs".* FROM "paragraphs" WHERE "paragraphs"."id" = $1 LIMIT 1 [["id", 1]]
Rendered paragraphs/one.html.erb within layouts/application (0.1ms)
Completed 200 OK in 3ms (Views: 1.0ms | ActiveRecord: 0.8ms)
tail: cannot open `1' for reading: No such file or directory
any help is greatly appreciated!
Perhaps your log file has rotated? tail -f follows a file, not a filename. If you want to follow the filename, use tail -F instead (note the capital F). That way, when the file is rotated, you'll get the new log file rather than staring at the old one.
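The difference can be sketched in Ruby: rotation gives the name a new inode, so a follower has to compare the inode behind the name with the inode of its open handle and reopen when they diverge (rotated? is a name invented for illustration; this is roughly the check tail -F performs):

```ruby
# Has the file behind `path` been swapped out from under the open handle
# `io`? True when rotation replaced the name with a new inode, or when the
# name is momentarily missing mid-rotation.
def rotated?(path, io)
  File.stat(path).ino != io.stat.ino
rescue Errno::ENOENT
  true
end
```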

Server is timing out because of Sunspot-Solr reindex'ing problem

Not too sure how to debug this. Any tips would be greatly appreciated.
Basically, I just did a large commit, and now my server can't boot up because of a Sunspot-solr issue.
I notice it when I try to manually reindex.
This is the return :
Processing MainController#index (for 69.114.195.64 at 2011-08-02 06:47:21) [GET]
Parameters: {"action"=>"index", "controller"=>"main"}
HomepageBackground Load (0.2ms) SELECT * FROM `homepage_backgrounds`
HomepageBackground Columns (23.4ms) SHOW FIELDS FROM `homepage_backgrounds`
HomepageBackground Load (0.8ms) SELECT * FROM `homepage_backgrounds` ORDER BY RAND() LIMIT 1
SQL (30.2ms) SHOW TABLES
Organization Columns (1.8ms) SHOW FIELDS FROM `organizations`
Solr Select (Error) {:q=>"*:*", :start=>0, :fq=>["type:Organization", "published_b:true", "updated_at_d:[2010\\-08\\-02T13\\:47\\:21Z TO *]"], :rows=>1000000}
Timeout::Error (execution expired):
/usr/lib/ruby/1.8/timeout.rb:64:in `rbuf_fill'
vendor/gems/right_http_connection-1.2.4/lib/net_fix.rb:51:in `rbuf_fill'
/usr/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
UPDATE
OK, so I reverted and rebased to the last working commit, and I still got the same error. Then I ran ps aux | grep solr and found five instances of solr running. Strange, I thought, and killed every single one of them. Blam! The server was back up and running strong. So now I'm trying my new commits again, but with my eye on these feral sunspot instances.
This problem was caused by feral sunspot-solr instances running amok. Nothing kill -9 couldn't handle. Problem solved.

How do I get the same JSON response I get in development, in production?

I'm using jQuery-ui's autocomplete on a search form. In development the request hits my index page which returns a JSON response:
The response looks like so:
[{"id":0,"listing_id":0,"category_id":0,"title":"Natural Woven Linen
Ring Sling","description":"This natural woven linen sling is perfect
for keeping you and baby comfortable in any climate. With light
comfort, it will conform to you and your child\u0026#39;s body and
become softer over time.\nSlings are a great way to keep your baby
close and your hands free.\nWearing your baby increases bonding by
encouraging skin to skin contact and closeness with parents and
caregivers. Baby slings mimic the womb environment, making baby feel
safe and secure. Baby\u0026#39;s needs are easily met when held close,
which means less crying. Baby slings are great for discreet
breastfeeding no matter where you are. \nThis sling is suitable for
babies between 7-35 Pounds.\nBe sure to exercise caution when wearing
your baby.\nKeep baby\u0026#39;s face visible at all
times.\nPractice wearing your sling before putting baby
inside.\nAvoid any unsafe activities while wearing your baby, such
as:\nSmoking, drinking hot drinks, running, exercising, cooking, or
drinking
alcohol.","price":"65.00","currency_code":"CAD","quantity":1,"tags":["sling
rings","ring sling","toddler","newborn","natural linen","woven
wrap","baby carrier","woven sling","babywearing","baby wrap","baby
sling"],"category_path":["Bags and
Purses"],"taxonomy_path":["Accessories","Baby Accessories","Baby
Carriers \u0026 Wraps"],"materials":["aluminum rings","european
linen","cotton
thread"],"featured_rank":null,"url":"https://www.etsy.com/listing/272579256/natural-woven-linen-ring-sling?utm_source=etsyinventorymerger\u0026utm_medium=api\u0026utm_campaign=api","views":19,"num_favorers":0,"shipping_template_id":6281647,"shipping_profile_id":null,"images":["https://img1.etsystatic.com/135/0/6276910/il_170x135.987731269_rwab.jpg","https://img1.etsystatic.com/139/0/6276910/il_170x135.987731277_1q29.jpg","https://img1.etsystatic.com/140/0/6276910/il_170x135.987731279_q5lv.jpg"],"created_at":"2016-03-28T20:01:41.722Z","updated_at":"2016-03-28T20:04:52.721Z"},{"id":18,"listing_id":269532744,"category_id":269532744,"title":"Woven
Cotton Whimsical Waves Ring Sling","description":"This sling is
lightweight, yet sturdy and made from 100% cotton. The shoulder is
sewn to comfortably keep your arms free and baby\u0026#39;s weight
evenly distributed. Great for any climate and perfect for any outfit.
\n\nSlings are a great way to keep your baby close and your hands
free.\nWearing your baby increases bonding by encouraging skin to skin
contact and closeness with parents and caregivers. Baby slings mimic
the womb environment, making baby feel safe and secure.
Baby\u0026#39;s needs are easily met when held close, which means less
crying. Baby slings are great for discreet breastfeeding no matter
where you are. \nThis sling is suitable for babies between 7-35
Pounds.\nBe sure to exercise caution when wearing your baby.\nKeep
baby\u0026#39;s face visible at all times.\nPractice wearing your
sling before putting baby inside.\nAvoid any unsafe activities while
wearing your baby, such as:\nSmoking, drinking hot drinks, running,
exercising, cooking, or drinking
alcohol.","price":"65.00","currency_code":"CAD","quantity":1,"tags":["new
mom gift","baby shower gift","newborn sling","baby wrap","ring sling
tail","woven cotton sling","woven ring sling","baby carrier","ring
sling","canadian made"],"category_path":["Bags and
Purses"],"taxonomy_path":["Accessories","Baby Accessories","Baby
Carriers \u0026 Wraps"],"materials":["cotton","aluminum rings","cotton
thread"],"featured_rank":1,"url":"https://www.etsy.com/listing/269532744/woven-cotton-whimsical-waves-ring-sling?utm_source=etsyinventorymerger\u0026utm_medium=api\u0026utm_campaign=api","views":42,"num_favorers":3,"shipping_template_id":6281647,"shipping_profile_id":null,"images":["https://img1.etsystatic.com/115/0/6276910/il_170x135.927557949_lp3o.jpg","https://img1.etsystatic.com/113/0/6276910/il_170x135.927557945_8km2.jpg","https://img1.etsystatic.com/117/0/6276910/il_170x135.927557953_nyef.jpg","https://img0.etsystatic.com/112/0/6276910/il_170x135.927814742_9wo0.jpg","https://img1.etsystatic.com/127/0/6276910/il_170x135.927557973_223q.jpg"],"created_at":"2016-03-28T20:01:45.104Z","updated_at":"2016-03-28T20:04:56.129Z"}]
This is my controller:
def index
  respond_to do |format|
    format.html
    format.json do
      @etsy_products = EtsyProduct.search(params[:term])
      render json: @etsy_products, status: :ok
    end
  end
end
My script receives the return response and formats it accordingly:
$ ->
  $('#etsy_products_search').autocomplete(
    minLength: 0
    source: '/'
    focus: (event, ui) ->
      $('#etsy_products_search').val ui.item.title
      false
    select: (event, ui) ->
      $('#etsy_products_search').val ui.item.title
      $('#etsy_products_search-description').html ui.item.description
      false
  ).autocomplete('instance')._renderItem = (ul, item) ->
    $('<li>')
      .attr({'title': item.description, 'data-toggle': 'tooltip', 'data-thumbnail': item.images[0], 'data-etsy-url': item.url})
      .append(item.title).appendTo ul
My development log shows this:
Started GET "/?term=woven" for ::1 at 2016-03-28 17:16:49 -0400
Processing by HomeController#index as JSON
Parameters: {"term"=>"woven"}
EtsyProduct Load (0.8ms) SELECT "etsy_products".* FROM "etsy_products" WHERE (title ILIKE '%woven%')
Completed 200 OK in 18ms (Views: 13.6ms | ActiveRecord: 2.5ms)
I deployed my app, then tested it in production. The JSON response is empty, and my production log looks like this:
I, [2016-03-28T17:19:13.552941 #25285] INFO -- : Started GET "/?term=woven" for (ip) at 2016-03-28 17:19:13 -0400
I, [2016-03-28T17:19:13.558963 #25285] INFO -- : Processing by HomeController#index as JSON
I, [2016-03-28T17:19:13.559220 #25285] INFO -- : Parameters: {"term"=>"woven"}
D, [2016-03-28T17:19:13.565312 #25285] DEBUG -- : EtsyProduct Load (1.0ms) SELECT "etsy_products".* FROM "etsy_products" WHERE (title ILIKE '%woven%')
I, [2016-03-28T17:19:13.566088 #25285] INFO -- : Completed 200 OK in 7ms (Views: 2.0ms | ActiveRecord: 1.0ms)
If I go into the console on the production server and run my search, it returns all the entries as it should. The trouble seems to be when the response is passed back to the request: it comes back empty. I'm sure it's something simple I've overlooked, but I can't seem to find the right answer on Google, or (more likely) I'm asking the wrong question.
Turns out I was completely on the wrong track. The issue wasn't with the response, but with the database. When I checked to ensure the database had records, I used the standard rails c on the production server. I'm using a rake task to collect records from an API and write them to the database. Long story short, the rake task was actually writing to a development database on the server, not to production, and when I ran rails c it connected to that same development database rather than the production one. Urgh! Still a lot to learn. Hopefully this will help another newb in the future in case they get walled like I did.
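For anyone hitting the same wall: Rails picks its environment from RAILS_ENV (falling back to RACK_ENV, then "development"), so both rails c and a rake task on a server silently use the development database unless the variable is set, e.g. RAILS_ENV=production bundle exec rake my:task. A sketch of that defaulting logic (effective_rails_env is a name invented for illustration):

```ruby
# Mirrors how Rails.env resolves the environment name: RAILS_ENV wins,
# then RACK_ENV, then the "development" default that caused the mix-up.
def effective_rails_env(env = ENV)
  env["RAILS_ENV"] || env["RACK_ENV"] || "development"
end
```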
