I am using Rails to develop a social website for posting, tagging, etc. I am also using the public_activity gem, which is used to render the page. While loading the page, sometimes the posts are shown twice even though each post exists only once. When I check the server log, I see the queries are repeated twice, i.e. data is taken from both the database and the cache, resulting in those posts being loaded twice.
Server log
SELECT `posts`.* FROM `posts` WHERE `posts`.`id` = 836 LIMIT 1
CACHE (0.0ms) SELECT `posts`.* FROM `posts` WHERE `posts`.`id` = 836 LIMIT 1 [["id", 836]]
Please help me solve this issue. Thanks in advance.
Related
I am not able to grasp how the ActiveRecord preload method is of use.
When I do, for example, User.preload(:posts), it does run two queries, but what is returned is just the same as User.all. The second query does not seem to affect the result.
User Load (3.2ms) SELECT "users".* FROM "users"
Post Load (1.2ms) SELECT "posts".* FROM "posts" WHERE "posts"."user_id" IN (1, 2, 3)
Can someone explain?
Thanks!
The output is the same, but when you later call user.posts, Rails will not load your posts from the database again:
users = User.preload(:posts).limit(5) # users collection, 2 queries to the database
# User Load ...
# Post Load ...
users.flat_map(&:posts) # users posts array, no loads
users.flat_map(&:posts) # users posts array, no loads
You can do this as many times as you want; Rails simply 'remembers' your posts in RAM. The idea is that you go to the database only once.
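For contrast, here is a sketch of what happens without preload: each access to the association fires its own query, the classic N+1 pattern (model and association names mirror the example above):

```ruby
# Without preload: 1 query for users, then one posts query per user (N+1)
users = User.limit(5).to_a
users.each { |u| u.posts.to_a }
# User Load ...
# Post Load ... WHERE "posts"."user_id" = 1
# Post Load ... WHERE "posts"."user_id" = 2
# ... and so on, once per user
```

With preload, the second of those queries is issued once for all five users up front, which is why the two result sets look identical but the query counts differ.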
How do I enable caching across requests/actions?
My stack:
Rails 4.2.7
Postgres 9.5
I notice the following in my Rails logs
Country Load (0.4ms) SELECT "countries".* FROM "countries" WHERE "countries"."iso" = $1 LIMIT 1 [["iso", "US"]]
State Load (1.4ms) SELECT "states".* FROM "states" WHERE "states"."iso" = $1 AND "states"."country_id" = $2 LIMIT 1 [["iso", "OH"], ["country_id", 233]]
Country Load (0.3ms) SELECT "countries".* FROM "countries" WHERE "countries"."iso" = $1 LIMIT 1 [["iso", "US"]]
State Load (1.1ms) SELECT "states".* FROM "states" WHERE "states"."iso" = $1 AND "states"."country_id" = $2 LIMIT 1 [["iso", "OH"], ["country_id", 233]]
Country Load (3.6ms) SELECT "countries".* FROM "countries" WHERE "countries"."iso" = $1 LIMIT 1 [["iso", "US"]]
State Load (1.2ms) SELECT "states".* FROM "states" WHERE "states"."iso" = $1 AND "states"."country_id" = $2 LIMIT 1 [["iso", "OH"], ["country_id", 233]]
Note that the same queries are being run multiple times against my database in rapid succession. Is there some way I can indicate to Rails that certain tables/models are very unlikely to change, and so certain lookups can be cached on the app server?
Is there some way I can indicate to Rails that certain tables/models
are very unlikely to change, and so certain lookups can be cached on
the app server?
Absolutely, model caching seems to be a perfect fit here.
Here's an article that will give you a good overview of setting up different types of caching.
Also, check out the official guides on caching.
Basically, you want to look into model caching.
You can write an extension and include it in the models that are to be cached:
module ModelCachingExtention
  extend ActiveSupport::Concern

  included do
    class << self
      # The first time you call Model.all_cached it will cache the collection;
      # each subsequent call will not fire the DB query.
      def all_cached
        Rails.cache.fetch(['cached_', name.underscore.to_s, 's']) { all.load }
      end
    end

    after_commit :clear_cache

    private

    # Keep the data in a consistent state by removing the cache every time
    # the table is touched (e.g. a record is created/edited/destroyed).
    def clear_cache
      Rails.cache.delete(['cached_', self.class.name.underscore.to_s, 's'])
    end
  end
end
Then use it in a model:
class Country < ActiveRecord::Base
  include ModelCachingExtention
end
Now, when using Country.all_cached, you will get the cached collection back with zero DB queries (once it has been cached).
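Once the collection is cached, per-request lookups can be done in plain Ruby against the cached array instead of hitting the database; the find-by-iso lookup below is just an illustrative pattern, not part of the extension itself:

```ruby
# No DB query once the collection is cached; Enumerable#find scans in memory.
# For a small, rarely changing table like countries this is cheap.
us = Country.all_cached.find { |c| c.iso == 'US' }
```

The trade-off is that filtering now happens in the app server rather than in Postgres, which only makes sense for small lookup tables.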
Within a single request, queries are already cached by Rails. You will see this in your log file with the prefix CACHE. Some operations, like inserting a new record within your request, lead to a clear_query_cache, and all cache entries are gone.
If you want your query cache to have a longer lifetime, you have to do it yourself. You can use Rails' caching features for this:
Rails.cache.fetch("countries iso: #{iso}", expires_in: 1.hour) do
  Country.where(iso: iso).to_a # to_a forces the query so loaded records, not a lazy relation, are cached
end
You can also use memcached. With memcached you can share cached data between multiple servers.
You have to set it up within your specific environment config:
config.cache_store = :mem_cache_store, "cache-1.example.com", "cache-2.example.com"
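The :mem_cache_store backend relies on a memcached client gem; in Rails 4 that is dalli, so you would also add it to your Gemfile (assuming a standard Bundler setup):

```ruby
# Gemfile
gem 'dalli' # memcached client used by :mem_cache_store
```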
I am using Rails 3.2.x and Thin 1.5.0 and when initially loading my app, after not loading it for say 24 hours, it takes VERY long. At first I thought it was just my macbook - because it was in sleep mode and the first time it was just taking forever for whatever reason.
But, I realized that it does the same on Heroku and it also does the same for other people. Like when they haven't visited the Heroku site for a while, the first time they load it (not EVERY single time, but some times) it takes FOREVER.
According to the log, it seems that the compilation of my stylesheets take forever. What I am confused about though, is when I push to Heroku it should compile the assets during the push...right? So, in theory, that shouldn't be what is slowing it down in production. Or am I missing something?
Although, in recent times, Heroku has been rejecting pushes so I have had to enable this:
# Don't initialize app on pre-compile
config.assets.initialize_on_precompile = false
So I am not sure if that is what is contributing to it.
Started GET "/" for 127.0.0.1 at 2013-02-19 02:44:14 -0500
Processing by HomeController#index as HTML
Category Load (56.6ms) SELECT "categories".* FROM "categories" LIMIT 6
EXPLAIN (14.6ms) EXPLAIN QUERY PLAN SELECT "categories".* FROM "categories" LIMIT 6
EXPLAIN for: SELECT "categories".* FROM "categories" LIMIT 6
0|0|0|SCAN TABLE categories (~1000000 rows)
Banner Load (44.4ms) SELECT "banners".* FROM "banners" INNER JOIN "banner_types" ON "banner_types"."id" = "banners"."banner_type_id" WHERE (banner_types.name = 'Featured')
Banner Load (0.3ms) SELECT "banners".* FROM "banners" INNER JOIN "banner_types" ON "banner_types"."id" = "banners"."banner_type_id" WHERE (banner_types.name = 'Side')
Product Load (3.4ms) SELECT "products".* FROM "products"
Vendor Load (15.9ms) SELECT "vendors".* FROM "vendors"
User Load (50.0ms) SELECT "users".* FROM "users"
EXPLAIN (0.1ms) EXPLAIN QUERY PLAN SELECT "users".* FROM "users"
EXPLAIN for: SELECT "users".* FROM "users"
0|0|0|SCAN TABLE users (~1000000 rows)
Vendor Load (0.3ms) SELECT "vendors".* FROM "vendors" WHERE "vendors"."id" = 12 LIMIT 1
Vendor Load (0.2ms) SELECT "vendors".* FROM "vendors" WHERE "vendors"."id" = 11 LIMIT 1
Vendor Load (0.2ms) SELECT "vendors".* FROM "vendors" WHERE "vendors"."id" = 10 LIMIT 1
CACHE (0.0ms) SELECT "vendors".* FROM "vendors" WHERE "vendors"."id" = 12 LIMIT 1
CACHE (0.0ms) SELECT "vendors".* FROM "vendors" WHERE "vendors"."id" = 12 LIMIT 1
CACHE (0.0ms) SELECT "vendors".* FROM "vendors" WHERE "vendors"."id" = 10 LIMIT 1
Rendered home/_popular_products.html.erb (303.0ms)
Rendered home/_popular_stores.html.erb (2.4ms)
Rendered home/index.html.erb within layouts/application (570.6ms)
Compiled main.css (20360ms) (pid 86898)
Compiled application.css (2366ms) (pid 86898)
Rendered layouts/_login_nav.html.erb (1.0ms)
Rendered layouts/_navigation.html.erb (1.7ms)
Rendered layouts/_header.html.erb (47.3ms)
Rendered layouts/_messages.html.erb (0.2ms)
Rendered layouts/_footer.html.erb (0.5ms)
Completed 200 OK in 38402ms (Views: 30707.1ms | ActiveRecord: 1830.8ms)
Thoughts?
Edit 1:
Please note that this question hasn't been adequately answered. The responses both talk about the issue on Heroku - which is part of the question. However, they fail to address this issue happening in development.
Heroku will spin down your app to save resources if it's not accessed for a certain amount of time. The slowness is caused by having to start up the entire app again.
See Dyno Idling on https://devcenter.heroku.com/articles/dynos for more information.
To get around this, you can use services such as the New Relic addon, which will ping your app every so often to stop the spin down.
You also should be compiling your assets on deploy, not on demand. Heroku should do this by default - what did you change to stop it happening?
If you only have 1 dyno on Heroku (free plan) then it will go idle after a while to save resources. Then when someone accesses your app after a while it will start up again.
So the first person to access your app will have to wait for a while.
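If you'd rather not use an addon, a scheduled HTTP request from any always-on machine achieves the same effect; the URL below is a placeholder for your app's address:

```shell
# crontab entry: hit the app every 20 minutes so the dyno
# never idles long enough to be spun down
*/20 * * * * curl -s -o /dev/null https://your-app.herokuapp.com/
```

Note that keeping a free dyno artificially awake may be against the spirit of the free plan, so check Heroku's current policy.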
In development mode, I would like to be able to see in the console where an SQL query was fired.
What is currently showing up in my console (dumb example query)
User Load (1.7ms) SELECT `users`.* FROM `users` WHERE `users`.`id` = 65 LIMIT 1
What I would like to see
application_controller.rb:68
User Load (1.7ms) SELECT `users`.* FROM `users` WHERE `users`.`id` = 65 LIMIT 1
For future Googlers: the gem active_record_query_trace can also do that, and more.
Check out https://github.com/RISCfuture/sql_origin for a gem that provides that.
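With active_record_query_trace, a minimal development-only setup looks roughly like this (option names are taken from the gem's README; double-check them against the version you install):

```ruby
# config/initializers/active_record_query_trace.rb
if Rails.env.development?
  ActiveRecordQueryTrace.enabled = true
  # Number of backtrace lines to print beneath each query in the log
  ActiveRecordQueryTrace.lines = 5
end
```

Each logged query then gets a short backtrace showing the file and line that triggered it, which is exactly the `application_controller.rb:68`-style output asked for above.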
I am brand new to RoR land, coming from 20 years of non-dynamic languages, and working on an app I did not originate, and still trying to get my head around all the things that happen through convention (i.e. 'magic' until you know what caused it) and trying to debug an issue.
My question isn't specific to the problem I'm tracking down, but rather this: is this debug output really telling me that 10 separate SQL calls are happening?
Processing OwnersController#stest (for 127.0.0.1 at 2011-06-20 11:08:26) [GET]
Parameters: {"action"=>"stest", "controller"=>"owners"}
User Columns (4.8ms) SHOW FIELDS FROM `users`
User Load (0.4ms) SELECT * FROM `users` WHERE (`users`.`id` = 45241) LIMIT 1
Owner Columns (4.8ms) SHOW FIELDS FROM `users`
Provider Load (121.6ms) SELECT * FROM `providers` WHERE (`providers`.owner_id = 45241) LIMIT 1
Provider Columns (4.1ms) SHOW FIELDS FROM `providers`
FeedItem Load (43.2ms) SELECT * FROM `feed_items` WHERE (((4 & item_id) > 0)) AND ((feed_items.created_at >= '2011-06-13 15:08:27') AND (feed_items.event = 5)) GROUP BY initiator_type, initiator_id ORDER BY id ASC
Rendering template within layouts/front_end
Rendering owners/stest
FeedItem Columns (2.0ms) SHOW FIELDS FROM `feed_items`
User Load (0.4ms) SELECT * FROM `users` WHERE (`users`.`id` = 45268)
Parent Columns (4.7ms) SHOW FIELDS FROM `users`
Rendered feed_items/_user_saved_provider_search (23.4ms)
User Load (0.4ms) SELECT * FROM `users` WHERE (`users`.`id` = 45269)
Rendered feed_items/_user_saved_provider_search (4.3ms)
User Load (0.4ms) SELECT * FROM `users` WHERE (`users`.`id` = 45236)
Rendered feed_items/_user_saved_provider_search (3.7ms)
InHome Columns (3.7ms) SHOW FIELDS FROM `providers`
ZipCode Load (32.5ms) SELECT * FROM `zip_codes` WHERE (`zip_codes`.`city` = 'Plano') LIMIT 1
City Columns (3.1ms) SHOW FIELDS FROM `cities`
City Load (0.4ms) SELECT * FROM `cities` WHERE (`cities`.`name` = 'Plano') LIMIT 1
CACHE (0.0ms) SELECT * FROM `zip_codes` WHERE (`zip_codes`.`city` = 'Plano') LIMIT 1
CACHE (0.0ms) SELECT * FROM `cities` WHERE (`cities`.`name` = 'Plano') LIMIT 1
Rendered layouts/_extra_links (1.7ms)
Completed in 552ms (View: 81, DB: 230) | 200 OK [http://0.0.0.0/owners/stest]
By my count there are 8 Queries, 7 Describes and 2 Cached queries.
If you're dealing with an application that is actually bottlenecked by database queries, there are a few ways to reduce the number of queries generated. Setting up scopes and using includes can reduce the number of queries when fetching models across relations.
Rails also won't normally issue the same query twice within a request; instead it caches the result, which is why those last two lines start with CACHE.
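As a sketch of the includes approach mentioned above (the FeedItem/user association here is hypothetical, modeled on the log):

```ruby
# One extra query up front instead of one "User Load" per rendered partial (N+1)
items = FeedItem.includes(:user).order(:id).limit(10)
items.each { |item| item.user } # association already loaded, no further queries
```

The SHOW FIELDS lines, by contrast, are schema introspection that Rails performs once per model per process and then caches, so they disappear after the first request in production.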