Cached fragment does not stay in (Memcachier) cache - ruby-on-rails

Using Rails 3.1.1 on Heroku, Dalli & Memcachier.
Production.rb
config.cache_store = :dalli_store
config.action_controller.perform_caching = true
Gemfile
gem 'memcachier'
gem 'dalli'
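(The memcachier gem simply maps the MEMCACHIER_* environment variables so Dalli can find the servers; if you wired it up explicitly instead, the production.rb line would look roughly like this. Treat it as a sketch, not the definitive config:)
# config/environments/production.rb - explicit Memcachier/Dalli wiring (sketch)
config.cache_store = :dalli_store,
                     (ENV["MEMCACHIER_SERVERS"] || "").split(","),
                     { :username => ENV["MEMCACHIER_USERNAME"],
                       :password => ENV["MEMCACHIER_PASSWORD"] }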
Controller#Show
unless fragment_exist?("gift-show--" + @gift.slug)
  # Perform a complicated database query that takes 4-5 seconds
end
View
<% cache('gift-show--' + @gift.slug, :expires_in => 3456000) do # 40 days %>
  <%# Render the HTML %>
<% end %>
Log output when a cached page is loaded
2012-10-17T03:15:43+00:00 app[web.2]: Started GET "/present/baka-kaka-set" for 23.20.90.66 at 2012-10-17 03:15:43 +0000
2012-10-17T03:15:43+00:00 app[web.2]: Could not find fragment for gift-show--baka-kaka-set # my log comment
2012-10-17T03:15:44+00:00 heroku[router]: GET www.mydomain.com/present/baka-kaka-set dyno=web.2 queue=0 wait=0ms service=195ms status=200 bytes=17167
2012-10-17T03:15:43+00:00 app[web.2]: cache: [GET /present/baka-kaka-set] miss
2012-10-17T03:15:43+00:00 app[web.2]: Processing by GiftsController#show as */*
2012-10-17T03:15:43+00:00 app[web.2]: Parameters: {"id"=>"baka-kaka-set"}
2012-10-17T03:15:43+00:00 app[web.2]: Exist fragment? views/gift-show--baka-kaka-set (1.5ms)
2012-10-17T03:15:43+00:00 app[web.2]: Read fragment views/gift-show--baka-kaka-set (1.5ms)
2012-10-17T03:15:43+00:00 app[web.2]: Write fragment views/gift-show--baka-kaka-set (4.0ms)
Each page is pretty much static once it has been created, hence the long expiry time (40 days).
If I load a page like this, it does seem to be written to the cache: I can verify that by reloading the page and seeing that it bypasses the controller (as I expect) and delivers the page quite fast.
My problem
The problem is that if I return to the page a few minutes later, it has already been deleted from the cache! fragment_exist?('gift-show--baka-kaka-set') returns false (and so does Rails.cache.exist?('views/gift-show--baka-kaka-set')).
I can see in the Memcachier analytics that the number of keys is decreasing. Even when I run a script that loads each page (to create the fragment caches), the number of keys in Memcachier does not increase at the same rate.
I am at about 34% of memory usage in Memcachier, so I am nowhere near the limit.
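One way I can rule out the fragment helpers and test the store directly is to write and read a throwaway key from a Heroku console. This is just a diagnostic sketch (the key name is made up):
# In heroku run console: write a test value, then read it back a few minutes later.
Rails.cache.write("cache-health-check", Time.now.to_s, :expires_in => 1.hour)
Rails.cache.read("cache-health-check")   # should still return the timestamp if nothing is evicting keys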
My questions
Am I doing something completely wrong? What should I do instead?
Could it be that I am writing to two different caches or something?
The last line in the log confuses me a bit. It seems that even after the fragment is read, a new one is still being written. Is that normal?
UPDATE 1: I realized that I had previously commented out these lines:
# Set expire header of 30 days for static files
config.static_cache_control = "public, max-age=2592000"
Could that have caused the problem? Are cached actions considered 'static assets'?

It turned out to be a bug in Memcachier. Memcachier support came to this conclusion, and the same setup works fine with the Memcache add-on. Memcachier will try to fix the bug.

Related

heroku unicorn timeout doesn't work when downloading files

I'm running a Rails 4 app on Heroku with Unicorn.
This app creates a fairly large XLS file.
To give it enough time, I increased the timeout via config/unicorn.rb:
timeout 240
This doesn't work: when I request the page that generates the XLS file, a timeout still occurs after 30001ms (see the log below).
Why?
This is my log:
2014-04-29T13:43:27.323190+00:00 app[web.1]: I, [2014-04-29T13:43:27.323084 #8] INFO -- : Started GET "/live_sales.xls?action=index&controller=live_sales" for 81.244.45.218 at 2014-04-29 13:43:27 +0000
2014-04-29T13:43:27.340252+00:00 app[web.1]: D, [2014-04-29T13:43:27.340190 #8] DEBUG -- : logged in? true
2014-04-29T13:43:27.339294+00:00 app[web.1]: I, [2014-04-29T13:43:27.339181 #8] INFO -- : Processing by LiveSalesController#index as XLS
2014-04-29T13:43:27.345465+00:00 app[web.1]: D, [2014-04-29T13:43:27.345383 #8] DEBUG -- : ItemRef Load (3.1ms) SELECT gamme FROM "item_refs" WHERE (gamme is not null and gamme <> '') GROUP BY gamme
2014-04-29T13:43:28.577687+00:00 app[web.1]: D, [2014-04-29T13:43:28.577526 #8] DEBUG -- : LiveSale Load (1210.0ms) SELECT invoice_date, item_refs.item_brand, item_refs.item_label, item_refs.gamme, item_refs.sub_gamme, item_refs.id, item_refs.item_reference, item_refs.date_published, item_refs.date_unpublished, item_refs.price_purchase, item_refs.current_stock, item_refs.item_published,
2014-04-29T13:43:28.577694+00:00 app[web.1]: sum(price_purchase * current_stock) as stock_value,sum(quantity) as quantity,
2014-04-29T13:43:28.577697+00:00 app[web.1]: sum(price_purchase * quantity) as total_purchase FROM "live_sales" left join item_refs on item_refs.id = live_sales.item_ref_id and item_refs.item_published is true and item_refs.disabled is false WHERE ((invoice_date between '2014-03-29 13:43:27 +0000' and '2014-04-29 13:43:27 +0000')) GROUP BY invoice_date, item_refs.item_brand, item_refs.item_label, item_refs.gamme, item_refs.sub_gamme, item_refs.id, item_refs.item_reference, item_refs.date_published, item_refs.date_unpublished, item_refs.price_purchase, item_refs.current_stock, item_refs.item_published ORDER BY invoice_date asc,
2014-04-29T13:43:57.331088+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=GET path=/live_sales.xls?action=index&controller=live_sales host=azzanalytics.herokuapp.com request_id=6e67b74e-3db8-4853-b5c6-c1f884eaa31e fwd="81.244.45.218" dyno=web.1 connect=1ms service=30001ms status=503 bytes=0
Thanks!
The suggestion is: use a background process!
There is no way around it. It is a best practice in web apps to return to the client as fast as possible, because it frees up resources. When you have just one dyno running at Heroku and you have multiple requests, they will be blocked for the length of your timeout and no user will be able to access your page. You can easily end up with denial-of-service situations when you have such long-running requests. Heroku's router times out at 30 seconds regardless of what you override on your Unicorn (or Thin) server.
In case you do not want to run background processes because of the cost, have a look at freemium: https://github.com/phoet/freemium or delayed_job.
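A rough sketch of what that could look like with delayed_job; the class, builder, and mailer names here are made up, it is only meant to show the shape of the solution:
# app/jobs/live_sales_export.rb -- hypothetical background job (sketch)
class LiveSalesExport < Struct.new(:user_id, :from, :to)
  def perform
    sales = LiveSale.where(invoice_date: from..to)    # the slow query runs off the web dyno
    xls   = LiveSalesXlsBuilder.new(sales).to_xls     # hypothetical XLS builder
    ExportMailer.ready(user_id, xls).deliver          # e.g. mail the user a link or attachment
  end
end

# In the controller: enqueue the job and respond immediately instead of blocking for minutes.
Delayed::Job.enqueue LiveSalesExport.new(current_user.id, 1.month.ago, Time.now)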

production.log in rails 3 is stuck

I have a weird issue on a test server...
Basically my app is running fine, yet if I check production.log, it is for some reason stuck at yesterday, when I had an error in the app. Since then, I have fixed it and deployed again, but the log still won't be updated. It's been like that since yesterday night.
So if I try
tail -f log/production.log
the last log entry I see is from yesterday... what's going on? This is so weird O___o
Here's what's in my log:
Started GET "/one" for xx.xx.xx.xx at 2012-06-04 09:14:30 -0400
Processing by ParagraphsController#one as HTML
(0.5ms) SELECT id FROM paragraphs WHERE length = 1
Paragraph Load (0.3ms) SELECT "paragraphs".* FROM "paragraphs" WHERE "paragraphs"."id" = $1 LIMIT 1 [["id", 1]]
Rendered paragraphs/one.html.erb within layouts/application (0.1ms)
Completed 200 OK in 3ms (Views: 1.0ms | ActiveRecord: 0.8ms)
tail: cannot open `1' for reading: No such file or directory
any help is greatly appreciated!
Perhaps your log file has rotated? tail -f will follow a file, not a filename. If you want to follow the filename, you should use tail -F instead (note the capital F). This way, when the file is rotated, you'll follow the new log file rather than keep staring at the old one.
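For example:
tail -F log/production.log   # keeps following the name across log rotations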

Server is timing out because of a Sunspot-Solr reindexing problem

Not too sure how to debug this. Any tips would be greatly appreciated.
Basically, I just did a large commit, and now my server can't boot up because of a Sunspot-solr issue.
I noticed it when I tried to reindex manually.
This is what it returns:
Processing MainController#index (for 69.114.195.64 at 2011-08-02 06:47:21) [GET]
Parameters: {"action"=>"index", "controller"=>"main"}
HomepageBackground Load (0.2ms) SELECT * FROM `homepage_backgrounds`
HomepageBackground Columns (23.4ms) SHOW FIELDS FROM `homepage_backgrounds`
HomepageBackground Load (0.8ms) SELECT * FROM `homepage_backgrounds` ORDER BY RAND() LIMIT 1
SQL (30.2ms) SHOW TABLES
Organization Columns (1.8ms) SHOW FIELDS FROM `organizations`
Solr Select (Error) {:q=>"*:*", :start=>0, :fq=>["type:Organization", "published_b:true", "updated_at_d:[2010\\-08\\-02T13\\:47\\:21Z TO *]"], :rows=>1000000}
Timeout::Error (execution expired):
/usr/lib/ruby/1.8/timeout.rb:64:in `rbuf_fill'
vendor/gems/right_http_connection-1.2.4/lib/net_fix.rb:51:in `rbuf_fill'
/usr/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
UPDATE
Ok, so I reverted and rebased to the last working commit, and I still got the same error. So then I ran ps aux | grep solr and found five instances of Solr running. Strange, I thought, and killed every single one of them. Blam, the server was back up and running strong. So now I'm trying my new commits again, but with my eye on these feral Sunspot instances.
This problem was caused by feral sunspot-solr instances running amok. Nothing kill -9 couldn't handle. Problem solved.
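For anyone hitting the same thing, the cleanup amounts to roughly the following; the sunspot rake tasks assume the sunspot_solr gem is installed:
ps aux | grep solr        # list any running Solr processes
kill -9 <pid>             # kill each stray instance by its pid
rake sunspot:solr:start   # start a single fresh instance via sunspot_solr
rake sunspot:reindex      # rebuild the index once Solr is back up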

Rails app logging duplicate requests

I have a Rails app that is generating duplicate requests for every request in development. The app is running Rails 2.3.5 with my primary development machine running Ubuntu 10.4. However, the same code runs fine without showing duplicate requests on my OS X 10.6 box. It also runs in Production mode on either machine without problems.
Processing DashboardController#index (for 127.0.0.1 at 2010-07-16 10:23:08) [GET]
Parameters: {"action"=>"index", "controller"=>"dashboard"}
Rendering template within layouts/application
Rendering dashboard/index
Term Load (1.9ms) SELECT * FROM "date_ranges" WHERE ('2010-07-16' BETWEEN begin_date and end_date ) AND ( ("date_ranges"."type" = 'Term' ) )
StaticData Load (1.1ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
Rendered dashboard/_news (0.1ms)
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
StaticData Load (0.9ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'TAG_LINE') LIMIT 1
Completed in 67ms (View: 58, DB: 5) | 200 OK [http://localhost/dashboard]
SQL (0.4ms) SET client_min_messages TO 'panic'
SQL (0.4ms) SET client_min_messages TO 'notice'
Processing DashboardController#index (for 127.0.0.1 at 2010-07-16 10:23:08) [GET]
Parameters: {"action"=>"index", "controller"=>"dashboard"}
Rendering template within layouts/application
Rendering dashboard/index
Term Load (1.9ms) SELECT * FROM "date_ranges" WHERE ('2010-07-16' BETWEEN begin_date and end_date ) AND ( ("date_ranges"."type" = 'Term' ) )
StaticData Load (1.1ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
Rendered dashboard/_news (0.1ms)
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
StaticData Load (0.9ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'TAG_LINE') LIMIT 1
Completed in 67ms (View: 58, DB: 5) | 200 OK [http://localhost/dashboard]
SQL (0.4ms) SET client_min_messages TO 'panic'
SQL (0.4ms) SET client_min_messages TO 'notice'
Notice that the requests are exactly the same, even down to the timestamps.
I have tried using Ruby 1.8.7 & 1.9.1 as well as swapping between Mongrel & Webrick and it always processes each request twice (or at least it generates two log entries). I tried removing most of the routes to see if I had something weird going on, but the problem persists. I tried different browsers (Chrome, Safari, eLinks) from different machines to see if that would help, but the problem persists. I removed all of my gems and only replaced the necessary ones but to no avail.
Does anyone have any idea why Rails would cause duplicate requests like this? I am about at my wits end and am grasping at straws. The only bright spark is that this behavior does not happen under the Production environment, only Development.
When people come to this question from Google, it's important that they work out which kind of duplication they have: duplicate logs that look like this:
A
A
B
B
C
C
versus duplicate logs that look like this:
A
B
C
A
B
C
The former is likely from duplicate LOGGING. The latter is likely from duplicate REQUESTS. In the case where it is the latter, as shown by the question asker (OP), you should strongly consider @www's answer of hunting down an <img src="#"> or a similar self-referential URL tag. I spent hours trying to figure out why my application appeared to be making duplicate requests, and after reading @www's answer (or @aelor's on Double console output?), I found
%link{href: "", rel: "shortcut icon"}/
in my code! It was causing every page of my production app to render twice. So bad for performance and so annoying!
I got the same problem just now because of a tag:
<img src="#">
This will cause Rails to make duplicate requests! Check your code to see if there is something like this inside it.
This was happening to me in Rails 4.2.3 after installing the heroku rails_12factor gem, which depends on rails_stdout_logging.
The "answer" to the problem was to move to a new directory and fetch the original code from Github. After getting everything configured and setup in the new directory the application works as it should with no duplicate requests. I still don't know why the code in the original directory borked out; I even diff'ed the directories and the only outliers were the log files.
I'm answering my own question here for the sanity of others who might experience the same problem.
I resolved this problem by commenting out the following line in config/environments/development.rb:
config.middleware.use Rails::Rack::LogTailer
I do not remember exactly why I had added this setting.
I solved this same problem by cleaning all precompiled assets with:
rake assets:clean
I had tried deleting the app folder and then checking it back out from GitHub, but that didn't work.
Hope this helps.
Thanks.
This started happening to me in development after playing around with some custom middleware I wrote.
Running rake assets:clean:all solved it.
This small workaround solved my issue. Follow these steps:
Under Rails external libraries, search for the railties module.
Go to this path: /lib/commands/server.rb
In this file, comment out this line:
Rails.logger.extend(ActiveSupport::Logger.broadcast(console))
This switches off the broadcasting; just restart your Rails server and you will not see any repeated logs anymore. Happy coding.

How do I get the same JSON response I get in development, in production?

I'm using jQuery UI's autocomplete on a search form. In development, the request hits my index action, which returns a JSON response.
The response looks like this:
[{"id":0,"listing_id":0,"category_id":0,"title":"Natural Woven Linen
Ring Sling","description":"This natural woven linen sling is perfect
for keeping you and baby comfortable in any climate. With light
comfort, it will conform to you and your child\u0026#39;s body and
become softer over time.\nSlings are a great way to keep your baby
close and your hands free.\nWearing your baby increases bonding by
encouraging skin to skin contact and closeness with parents and
caregivers. Baby slings mimic the womb environment, making baby feel
safe and secure. Baby\u0026#39;s needs are easily met when held close,
which means less crying. Baby slings are great for discreet
breastfeeding no matter where you are. \nThis sling is suitable for
babies between 7-35 Pounds.\nBe sure to exercise caution when wearing
your baby.\nKeep baby\u0026#39;s face visible at all
times.\nPractice wearing your sling before putting baby
inside.\nAvoid any unsafe activities while wearing your baby, such
as:\nSmoking, drinking hot drinks, running, exercising, cooking, or
drinking
alcohol.","price":"65.00","currency_code":"CAD","quantity":1,"tags":["sling
rings","ring sling","toddler","newborn","natural linen","woven
wrap","baby carrier","woven sling","babywearing","baby wrap","baby
sling"],"category_path":["Bags and
Purses"],"taxonomy_path":["Accessories","Baby Accessories","Baby
Carriers \u0026 Wraps"],"materials":["aluminum rings","european
linen","cotton
thread"],"featured_rank":null,"url":"https://www.etsy.com/listing/272579256/natural-woven-linen-ring-sling?utm_source=etsyinventorymerger\u0026utm_medium=api\u0026utm_campaign=api","views":19,"num_favorers":0,"shipping_template_id":6281647,"shipping_profile_id":null,"images":["https://img1.etsystatic.com/135/0/6276910/il_170x135.987731269_rwab.jpg","https://img1.etsystatic.com/139/0/6276910/il_170x135.987731277_1q29.jpg","https://img1.etsystatic.com/140/0/6276910/il_170x135.987731279_q5lv.jpg"],"created_at":"2016-03-28T20:01:41.722Z","updated_at":"2016-03-28T20:04:52.721Z"},{"id":18,"listing_id":269532744,"category_id":269532744,"title":"Woven
Cotton Whimsical Waves Ring Sling","description":"This sling is
lightweight, yet sturdy and made from 100% cotton. The shoulder is
sewn to comfortably keep your arms free and baby\u0026#39;s weight
evenly distributed. Great for any climate and perfect for any outfit.
\n\nSlings are a great way to keep your baby close and your hands
free.\nWearing your baby increases bonding by encouraging skin to skin
contact and closeness with parents and caregivers. Baby slings mimic
the womb environment, making baby feel safe and secure.
Baby\u0026#39;s needs are easily met when held close, which means less
crying. Baby slings are great for discreet breastfeeding no matter
where you are. \nThis sling is suitable for babies between 7-35
Pounds.\nBe sure to exercise caution when wearing your baby.\nKeep
baby\u0026#39;s face visible at all times.\nPractice wearing your
sling before putting baby inside.\nAvoid any unsafe activities while
wearing your baby, such as:\nSmoking, drinking hot drinks, running,
exercising, cooking, or drinking
alcohol.","price":"65.00","currency_code":"CAD","quantity":1,"tags":["new
mom gift","baby shower gift","newborn sling","baby wrap","ring sling
tail","woven cotton sling","woven ring sling","baby carrier","ring
sling","canadian made"],"category_path":["Bags and
Purses"],"taxonomy_path":["Accessories","Baby Accessories","Baby
Carriers \u0026 Wraps"],"materials":["cotton","aluminum rings","cotton
thread"],"featured_rank":1,"url":"https://www.etsy.com/listing/269532744/woven-cotton-whimsical-waves-ring-sling?utm_source=etsyinventorymerger\u0026utm_medium=api\u0026utm_campaign=api","views":42,"num_favorers":3,"shipping_template_id":6281647,"shipping_profile_id":null,"images":["https://img1.etsystatic.com/115/0/6276910/il_170x135.927557949_lp3o.jpg","https://img1.etsystatic.com/113/0/6276910/il_170x135.927557945_8km2.jpg","https://img1.etsystatic.com/117/0/6276910/il_170x135.927557953_nyef.jpg","https://img0.etsystatic.com/112/0/6276910/il_170x135.927814742_9wo0.jpg","https://img1.etsystatic.com/127/0/6276910/il_170x135.927557973_223q.jpg"],"created_at":"2016-03-28T20:01:45.104Z","updated_at":"2016-03-28T20:04:56.129Z"}]
This is my controller:
def index
  respond_to do |format|
    format.html
    format.json do
      @etsy_products = EtsyProduct.search(params[:term])
      render json: @etsy_products, status: :ok, message: 'Success'
    end
  end
end
My script receives the return response and formats it accordingly:
$ ->
  $('#etsy_products_search').autocomplete(
    minLength: 0
    source: '/'
    focus: (event, ui) ->
      $('#etsy_products_search').val ui.item.title
      false
    select: (event, ui) ->
      $('#etsy_products_search').val ui.item.title
      $('#etsy_products_search-description').html ui.item.description
      false
  ).autocomplete('instance')._renderItem = (ul, item) ->
    $('<li>')
      .attr({'title': item.description, 'data-toggle': 'tooltip', 'data-thumbnail': item.images[0], 'data-etsy-url': item.url})
      .append(item.title).appendTo ul
My development log shows this:
Started GET "/?term=woven" for ::1 at 2016-03-28 17:16:49 -0400
Processing by HomeController#index as JSON
Parameters: {"term"=>"woven"}
EtsyProduct Load (0.8ms) SELECT "etsy_products".* FROM "etsy_products" WHERE (title ILIKE '%woven%')
Completed 200 OK in 18ms (Views: 13.6ms | ActiveRecord: 2.5ms)
I deployed my app, then tested it in production. The JSON response is empty, and my production log looks like this:
I, [2016-03-28T17:19:13.552941 #25285] INFO -- : Started GET "/?term=woven" for (ip) at 2016-03-28 17:19:13 -0400
I, [2016-03-28T17:19:13.558963 #25285] INFO -- : Processing by HomeController#index as JSON
I, [2016-03-28T17:19:13.559220 #25285] INFO -- : Parameters: {"term"=>"woven"}
D, [2016-03-28T17:19:13.565312 #25285] DEBUG -- : EtsyProduct Load (1.0ms) SELECT "etsy_products".* FROM "etsy_products" WHERE (title ILIKE '%woven%')
I, [2016-03-28T17:19:13.566088 #25285] INFO -- : Completed 200 OK in 7ms (Views: 2.0ms | ActiveRecord: 1.0ms)
If I go into the console on the production server and run my search, it returns all the entries as it should. The trouble seems to be when the response is passed back to the request: it comes back empty. I'm sure it's something stupid that I've overlooked, but I can't seem to find the right answer on Google, or (which is more likely) I'm asking the wrong question.
Turns out I was completely on the wrong track. The issue wasn't with the response, but with the database. You see, when I checked to ensure the database had records, I was doing the standard "rails c" on the production server. I'm using a rake task to collect records from an API and write them to the database. Long story short, the rake task was actually writing to a development database that was also on my server, and not to production. When I ran "rails c" it was giving me access to that development database, not to the production database. Urgh! Still a lot to learn. Hopefully this will help another newb in the future who hits the same wall I did.
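If it helps anyone else, the fix really comes down to which environment the rake task runs under. Something along these lines (the task and method names are made up for illustration):
# lib/tasks/etsy.rake -- sketch of an API-import task that respects RAILS_ENV
namespace :etsy do
  desc "Pull products from the Etsy API into the current environment's database"
  task sync_products: :environment do   # :environment loads the Rails app and its database config
    EtsyProduct.sync_from_api!          # hypothetical method that does the import
  end
end

# On the production server, run it against the production database:
#   RAILS_ENV=production bundle exec rake etsy:sync_products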
