I am using the rabl gem (https://github.com/nesquena/rabl) to build my API responses.
In my controller I set:
@cache_key = "rabl/spree/api/home/v2/index-en"
and in my index.rabl file I have:
cache @cache_key, expires_in: 15.minutes
I am using memcache for caching via the Dalli client, and the application is deployed on Heroku. Everything works as expected. But when I try
Rails.cache.fetch('rabl/spree/api/home/v2/index-en')
I don't see any value. Why is that? Does rabl append anything to the cache key?
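If rabl does append something, the stored key would not match the bare string: rabl's cache_results helper appears to build the key as an array, adding the root name and format before Rails expands it. A hedged probe (the :home and :json segments below are guesses, not confirmed values):
# Rails expands array keys via ActiveSupport::Cache.expand_cache_key,
# so try fetching the expanded form instead of the bare string
expanded = ActiveSupport::Cache.expand_cache_key(
  ['rabl/spree/api/home/v2/index-en', :home, :json]
)
Rails.cache.fetch(expanded)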
Is there any way to get Sinatra to transparently handle cached data that was written by Rails, e.g. to implicitly unwrap the ActiveSupport class that Rails uses when it stores data?
A Heroku-hosted Rails 4 app and a Sinatra app use a shared MemCachier store to share certain ephemeral data.
If a key/value is created on Sinatra, everything works normally:
# on Sinatra:
settings.cache.set("foo", "from sinatra", 100)
will set a key that either the Sinatra app or the Rails app can read, and either app will automatically read nil once it expires in 100 seconds. And both Rails and Sinatra report the class of the data value to be String.
However, if the data is set by the Rails app:
# on Rails:
Rails.cache.write("foo", "from rails", expires_in: 100)
a read on the Rails side returns a String (as expected), but a get on the Sinatra side returns an ActiveSupport::Cache::Entry:
# on Sinatra
d = settings.cache.get("foo")
=> #<ActiveSupport::Cache::Entry:0x7f7ea70 @value="from rails", @created_at=1598330468.8312092, @expires_in=100.0>
and if the Sinatra app fetches the same key after the expiry, the same data is returned (not nil).
It is certainly possible to brute-force it on the Sinatra side with a new method that gets the data and manually handles expiration and unwrapping using ActiveSupport::Cache::Entry methods, as outlined below.
But it seems like there ought to be some way to have the ActiveSupport::Cache::Entry code handle those details automatically by telling Sinatra to use ActiveSupport::Cache for both setting and getting data?
# on Sinatra: manually unwrap entries that Rails wrote
def new_get(key)
  x = settings.cache.get(key)
  return x unless x.is_a?(ActiveSupport::Cache::Entry)
  if x.expired?
    settings.cache.delete(key) # Entry has no delete; evict via the client
    nil
  else
    x.value
  end
end
Currently the memcache store is configured as per Heroku's online documentation:
require 'dalli'
set :cache, Dalli::Client.new(
(ENV["MEMCACHIER_SERVERS"] || "").split(","),
{:username => ENV["MEMCACHIER_USERNAME"],
:password => ENV["MEMCACHIER_PASSWORD"],
:failover => true, # default is true
:socket_timeout => 1.5, # default is 0.5
:socket_failure_delay => 0.2, # default is 0.01
:down_retry_delay => 60 # default is 60
})
EDIT - SOLVED: Per the accepted answer, the key to Sinatra/Rails data interoperability was to explicitly configure the cache store to use ActiveSupport, which still automagically uses the Dalli gem to manage the connection to the MemCachier service.
set :cache, ActiveSupport::Cache::MemCacheStore.new(
  ... # init parameters
)
which also means using the settings.cache.read/write methods (vs. the settings.cache.get/set methods used when configuring with Dalli::Client.new).
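For reference, the full working Sinatra setup looks roughly like this (a sketch reusing the MEMCACHIER_* variables from the Dalli::Client configuration above):
require 'active_support/cache'
# ActiveSupport still drives Dalli underneath, but now both apps agree
# on how entries are wrapped and expired
set :cache, ActiveSupport::Cache::MemCacheStore.new(
  (ENV["MEMCACHIER_SERVERS"] || "").split(","),
  :username => ENV["MEMCACHIER_USERNAME"],
  :password => ENV["MEMCACHIER_PASSWORD"]
)
# and then:
settings.cache.write("foo", "from sinatra", :expires_in => 100)
settings.cache.read("foo") # => "from sinatra"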
ADDITIONAL EDIT:
When using the cache inside a model, you cannot access settings.cache directly; use Sinatra::Application.settings.cache.read() instead.
I am not a Sinatra user, but I used something like the below in Rails.
cache_store = ActiveSupport::Cache::MemCacheStore.new('localhost')
# For writing the keys:
cache_store.write("foo", "from rails", expires_in: 100)
# For reading the keys
cache_store.read("foo")
EDIT
Since you are using the dalli gem, you can also use DalliStore instead of MemCacheStore, like below:
cache_store = ActiveSupport::Cache::DalliStore.new('localhost')
# For writing the keys:
cache_store.write("foo", "from rails", expires_in: 100)
# For reading the keys
cache_store.read("foo")
I set up Rack::Cache with Redis:
config.action_dispatch.rack_cache = true
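(The stores are pointed at Redis along these lines - a sketch assuming the redis-rack-cache gem and a REDIS_URL environment variable; exact values elided:)
# config/environments/production.rb
# redis-rack-cache accepts redis:// URLs suffixed with the store name
config.action_dispatch.rack_cache = {
  :metastore   => "#{ENV['REDIS_URL']}/metastore",
  :entitystore => "#{ENV['REDIS_URL']}/entitystore"
}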
And it works, but sometimes (way too often) caching doesn't work as expected. Even though cache headers are properly set (Cache-Control: max-age=20, public, s-maxage=600), I see that the response headers contain X-Rack-Cache: miss, which means the URL was not in the cache and the server response was also not stored in the cache store.
The URL is like this (it has one GET param for the language):
http://localhost:3000/js/JAYy-euKaergqRsTlrn67w/events.js?lng=es
and if I add extra params or change the lng param to e.g. 'de', it then stores the response as expected. It seems to act a bit randomly.
I noticed this only in the development environment - in production it seems to always work as expected. What could be the reason?
On Redmine 1.2 / Rails 2.3.11 I'm rendering a repository Markdown file as HTML (as redmine_markdown_extra_viewer does), and now I'm trying to cache the result, which should be updated on each commit.
So I have a git hook that fetches the repo changes, and I'd like it to also clear the corresponding cache entries.
Cache generation (in a RepositoriesController::entry override):
cache_key = ['repositories_md', @project.id.to_s, @path.to_s].join('/')
puts cache_key
@content = cache_store.fetch cache_key do
  Kramdown::Document.new(@repository.cat(@path, @rev)).to_html
end
render :action => "entry_markdown"
The hook that should clear the cache, but has no effect:
# This is ok
ruby script/runner "Repository.fetch_changesets"
# This not
ruby script/runner "Rails.cache.delete_matched(/repositories_md\/.*/)"
So it doesn't work, and I don't even know if I've taken the right direction to implement this. Any input much appreciated!
Which cache backend are you using?
If it's memcached or anything other than the FileStore or the MemoryStore, the delete_matched method is not supported.
You're probably better off letting them expire and just replacing their cached contents as they get updated.
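If you do need hook-driven invalidation on a store without delete_matched, one workaround is to bake a version number into the key and have the git hook bump it. A sketch (the repositories_md_version key name is made up):
# in the controller override: version every repositories_md key
def repositories_md_version
  Rails.cache.fetch('repositories_md_version') { Time.now.to_i }
end

cache_key = ['repositories_md', repositories_md_version.to_s,
             @project.id.to_s, @path.to_s].join('/')

# in the git hook, bump the version so every old entry is orphaned
# (orphans just age out of the store on their own):
#   ruby script/runner "Rails.cache.write('repositories_md_version', Time.now.to_i)"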
The problem is using a regular expression as the fragment name; try using a string as the fragment name instead, even if it gets verbose. I had a similar problem with Dalli (with memcached), and that was the reason.
We have an online store running on Rails 3 Spree platform. Recently customers started reporting weird errors during checkout and after analyzing production logs I found the following error:
Errno::ENAMETOOLONG (File name too long - /var/www/store/tmp/cache/UPS-R43362140-US-NJ-FlorhamPark07932-1025786194_1%7C1025786087_1%7C1025786089_15%7C1025786146_4%7C1025786147_3%7C1025786098_3%7C1025786099_4%7C1025786100_2%7C1025786114_1%7C1025786120_1%7C1025786121_1%7C1025786181_1%7C1025786182_1%7C1025786208_120110412-2105-1e14pq5.lock)
I'm not sure why this file name is so long, or whether this error is specific to Rails or Spree. I'm also not very familiar with the Rails caching system. I would appreciate any help on how to resolve this problem.
I'm guessing you are using spree_active_shipping, as that looks like a cache id for a UPS shipping quote. This will happen when someone creates an order that has a lot of line items in it. With enough line items this will of course create a very large filename for the cache, thus giving you the error.
One option would be to use memcache or redis for your Rails.cache instead of using the filesystem cache. Another would be to modify the algorithm that generates the cache_key within app/models/active_shipping.rb in the spree_active_shipping gem.
The latter option would probably be best, and you could simply have the generated cache key run through a hash like MD5 or SHA1. This way you'll get predictable cache key lengths.
Really, though, this should be fixed within spree_active_shipping: it shouldn't be generating unpredictably long cache keys. Even if a key-value store is used, those oversized keys are wasted memory.
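A minimal sketch of the digest idea (illustrative only, not the gem's actual code):
require 'digest/md5'

# whatever long, line-item-dependent key the quote code builds...
raw_key = "UPS-R43362140-US-NJ-FlorhamPark07932-..." # truncated here
# ...collapses to a fixed 32-character name, safe on any filesystem
cache_key = "UPS-#{Digest::MD5.hexdigest(raw_key)}"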
It is more related to your file system. Either set up a file system that supports longer file names, or change the software to generate better (MD5? timestamp? unique id?) file names.
Maybe this helps:
config.assets.digest and config.assets.debug can't both be true
It's a bug : https://github.com/rails/jquery-rails/issues/33
I am using Rails 3.2.x and had the same issue. I ended up generating an MD5 digest in the view helper method used to generate the cache key.
FILENAME_MAX_SIZE = 200

def cache_key(prefix, params)
  params = Array.wrap(params) if params.instance_of?(String)
  key = "#{prefix}/" << params.entries.sort { |a, b| a[0].to_s <=> b[0].to_s }.map { |k, v| "#{k}:#{v}" }.join(',')
  if URI.encode_www_form_component(key).size > FILENAME_MAX_SIZE
    key = Digest::MD5.hexdigest(key)
  end
  key
end
Here I check the length of the URI-encoded key with URI.encode_www_form_component(key).size because, as you can see, in my case the cache key is generated with : and , separators, and Rails encodes the key before caching the results.
I took the approach from the pull request.
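For example (hypothetical params):
cache_key('products', 'page' => 2, 'lang' => 'en')
# => "products/lang:en,page:2" -- short keys pass through unchanged
# a params hash whose URI-encoded form exceeds FILENAME_MAX_SIZE
# comes back as a 32-character MD5 hex digest instead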
Are you using the paperclip gem? If yes, this issue has been fixed: https://github.com/thoughtbot/paperclip/issues/1246.
Please update your paperclip gem to the latest version.
I have uploaded a file to S3 using Paperclip, and the upload process works fine.
Now I want to download it. In my model I have set :s3_host_alias. Since the file is private, trying to fetch it with Paperclip's url method gives me an access denied error,
and if I use the S3Object.url_for method, the URL returned is s3.amazonaws.com/mybucket/path_of_file.
I don't want that s3.amazonaws.com shown in the URL, so I used :s3_host_alias in my model
and created a CNAME in my DNS server. Now if I use @object.url directly it gives the correct URL, but it throws an access denied error, I guess because the access key and signature are not passed.
Is there a way to fetch a private file from S3 with Paperclip using the canonical URL?
I don't use Paperclip, but yes, you can sign an S3 request using a virtual hostname.
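The signature itself doesn't care about the hostname. A rough sketch of S3's signature v2 query-string signing for a virtual-hosted URL (the scheme the old AWS::S3 gem speaks; S3_KEY/S3_SECRET are placeholder env vars, not anything Paperclip defines):
require 'openssl'
require 'base64'
require 'cgi'

def signed_virtual_host_url(bucket, path, host_alias, expires_in = 300)
  expires = Time.now.to_i + expires_in
  # the canonical resource keeps the bucket even when it is folded
  # into the hostname via the CNAME
  string_to_sign = "GET\n\n\n#{expires}\n/#{bucket}#{path}"
  signature = Base64.encode64(
    OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), ENV['S3_SECRET'], string_to_sign)
  ).strip
  # http because a CNAME usually breaks the *.s3.amazonaws.com SSL cert
  "http://#{host_alias}#{path}?AWSAccessKeyId=#{ENV['S3_KEY']}" \
    "&Expires=#{expires}&Signature=#{CGI.escape(signature)}"
end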
I had this problem using Paperclip and the AWS::S3 gem. Paperclip set everything up fine for non-authenticated requests, but falling back to AWS::S3 to generate an authenticated URL didn't use the S3 host alias.
You can pass AWS::S3 a server option on connect, but I didn't need or want a connection just to get the URL. I also couldn't see a way to set it via configuration (so it would apply outside of a connection). Even glancing at the source, it looks like it's non-configurable.
So, I created a monkey patch. My Ruby-fu (and maybe my OO-fu) isn't super high, so there may be a better way to do this, but it works for what I need. Basically, I pass url_for an :s3_host_alias param in the options hash, and the monkey patch uses it if it's passed. If so, it also has to remove the bucket from the path that's generated.
So....
You can create this one-line file, RAILS_ROOT/config/initializers/load_patches.rb, to load all patches in RAILS_ROOT/lib/patches:
Dir[File.join(Rails.root, 'lib', 'patches', '**', '*.rb')].sort.each { |patch| require(patch) }
Then create the file RAILS_ROOT/lib/patches/aws.rb with this code:
http://pastie.org/1622881
And you can call for an authenticated URL with something along these lines (Configuration is a custom class for storing, natch, configuration values):
AWS::S3::S3Object.url_for(media.path(style || media.default_style), media.bucket_name, :expires_in => expires_in, :use_ssl => false, :s3_host_alias => Configuration.s3_host_alias)