Rails Redis setting maxmemory and maxmemory-policy

I'm trying to set maxmemory and maxmemory-policy in my cache_store configuration of my Rails app.
I did the following in my production.rb file:
redis_url = "redis://localhost:6379/0"
config.cache_store = :redis_store, redis_url, {
  :expires_in => 4.weeks,
  :namespace => 'rails-cache',
  :maxmemory => '25gb',
  'maxmemory-policy' => 'volatile-ttl'
}
But the maxmemory setting doesn't seem to be working. When I do Rails.cache.methods I don't get any methods about memory or max.
I don't see any examples on the web for Rails; the closest thing was "handling redis maxmemory situations with rails when using rails caching", but it doesn't give any examples.
I also cloned the redis-rb gem (https://github.com/redis/redis-rb) and grepped for maxmemory, but nothing comes up. So it seems like it has not been implemented.

If you set the cache store to use redis-rb, and redis-rb hasn't implemented maxmemory, I don't see why it would work.
In particular, maxmemory is configured in the Redis server's config, so I don't think you can set it through a connecting client (i.e. redis-rb).

I believe you'd have to set it in redis.conf (or your AWS configuration): http://redis.io/topics/lru-cache
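For reference, the page linked above describes these as server-side directives. A minimal sketch of the relevant redis.conf lines, reusing the values from the question:
# redis.conf (server-side settings, not client options)
maxmemory 25gb
maxmemory-policy volatile-ttl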

Related

Why does Travis fail to connect use Redis cache_store when deploying to Heroku?

I use Redis caching in my Rails app:
config.cache_store = :redis_store, redis_url
When I push my Rails app straight to Heroku, it is deployed successfully. When using Travis, the Heroku deploy step fails because the asset precompilation attempts to connect to Redis.
Running: rake assets:precompile
rake aborted!
ArgumentError: invalid uri scheme ''
/tmp/build_7c5f167bf750cb2986dbb9c3510ea11e/vendor/bundle/ruby/2.1.0/gems/redis-3.2.0/lib/redis/client.rb:390:in `_parse_options'
I have tried various things: overriding RedisStore methods using rake tasks, moving the cache_store instantiation to the initialization phase, using Docker instead of sudo, changing the Heroku build strategy, and other travis.yml configurations.
I don't want to precompile locally, and I'd rather not change the caching solution. Many other apps running on the cedar-14 stack use a very similar setup, so the issue seems a bit peculiar.
Any suggestions how to resolve this Travis+Heroku deploy issue?
In my case, I solved this by changing the redis init to:
REDIS = Redis.new(:url => redis_url_string)
where previously I was parsing the URI and passing in the arguments as:
uri = URI.parse(redis_url)
REDIS = Redis.new(:host => uri.host, :port => uri.port, :password => uri.password, :scheme => uri.scheme)
I wonder if the cache store has a similar init implementation for Redis (I haven't checked the source there).
We solved the problem by overriding the Redis::Store initialization. When using Travis to trigger the Heroku deploy, Redis Store tries to connect to Redis. This is probably due to the current version (Nov 2013) of the Redis Store gem not being compatible with the current asset pipeline implementation. The reason why this works when pushing straight to Heroku is unclear. It could be related to the order that assets are compiled, when using the Heroku build strategies as specified in the travis.yml file. Perhaps these issues will be resolved in future Redis Store versions.
This is the rake task used to avoid loading Redis when using Redis Store as cache store (lib/assets/tasks/assets.rake):
pt = Rake::Task['assets:environment']
Rake.application.send(:eval, "@tasks.delete('assets:environment')")

namespace :assets do
  task :environment do
    # Stub out Redis::Store so asset precompilation never connects to Redis.
    class Redis
      class Store
        def initialize(options = {})
          puts "Do nothing"
        end
      end

      def initialize(options = {})
        puts "Do nothing"
      end
    end
    pt.execute
  end
end
This is not a very elegant solution, but it does the trick for now. Consider changing the caching solution instead.

Heroku S3 Env variables

I'm trying to use carrierwave to upload images to S3. It works locally but when I go to deploy to heroku I get the following error:
ArgumentError: Missing required arguments: aws_access_key_id, aws_secret_access_key
The keys are definitely set because I can see them when I run heroku config.
I've searched every answer I could find on Stack Overflow and every answer on the first 3 pages of Google. None of them have worked.
I know the uploading works, so it's not the code that's the problem. What settings or variables do I have to set to make this work?
Please help; I can't move forward with my app until this is done (so I can deploy to Heroku again without it being stopped by this error).
Some info:
Environment Variables
You've got a problem with how your environment variables are being read in Heroku. ENV vars are basically variables stored in the OS / environment, which means you've got to set them for each environment you deploy your application to.
heroku config should show the ENV vars you've set. If you don't see ENV['AWS_ACCESS_KEY'] etc., it means you've not set them correctly, which as explained is as simple as calling the command heroku config:add YOUR_ENV_VAR=VAR
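For example (the key names below are placeholders; use whichever names your app actually reads):
heroku config:add AWS_ACCESS_KEY_ID=your-key-id AWS_SECRET_ACCESS_KEY=your-secret
heroku config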
Figaro
I wanted to recommend using Figaro for this
This is a gem which stores local ENV vars in config/application.yml. This allows you to keep ENV variables locally, but more importantly lets you sync them with Heroku using this command:
rake figaro:heroku
This will set your env vars on Heroku, allowing you to use them with CarrierWave as recommended in the other answers.
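For illustration, a minimal sketch of the git-ignored config/application.yml that Figaro reads (the key names here are assumptions, not something Figaro prescribes):
# config/application.yml
AWS_ACCESS_KEY_ID: "your-key-id"
AWS_SECRET_ACCESS_KEY: "your-secret"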
It sounds like you have set the ENV variables on Heroku, but you need to hook those up to CarrierWave.
In config/initializers/fog.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => Rails.configuration.s3_access_key_id,
    :aws_secret_access_key => Rails.configuration.s3_secret_access_key,
  }
end
In your environments/<environment>.rb file
Rails.application.configure do
  config.s3_access_key_id = ENV['S3_ACCESS_KEY_ID']
  config.s3_secret_access_key = ENV['S3_SECRET_ACCESS_KEY']
end
This points your Rails config at the ENV variables on Heroku, which makes them available as Rails.configuration.<key>

How to setup dalli cache in test environment?

I'm going to use Dalli cache as key-value store.
Usually in production and development environments we have the line
config.cache_store = :dalli_store
so we can then use the Rails.cache construct to read from and write to the cache.
But in the test environment we usually don't have this config line.
What is the right way to set up a cache in the test environment in order to test my storing logic?
P.S. I'm using Linux(Ubuntu)
Dalli is a client for the caching service (memcached).
Set it globally whatever the environment, i.e. in your config/application.rb:
config.cache_store = :dalli_store
Caching being deactivated in the test environment is a common approach; check config/environments/test.rb:
config.action_controller.perform_caching = false
So you can enable it for the whole test environment, but that could lead to some weird conflicts.
Best is probably to enable it on the go for specific specs only:
before do # enable caching
  @caching_state = ActionController::Base.perform_caching
  ActionController::Base.perform_caching = true
end

after do # disable caching
  ActionController::Base.perform_caching = @caching_state
end
I have assumed you are on Ubuntu, did a Google of "ubuntu install memcached rails", and found several pages with details. Below are the key points.
To install memcached:
sudo apt-get install memcached
To restart memcached:
/etc/init.d/memcached restart

Rails caching with memcached, get not working

I have come into an existing Rails project which claims to use memcached. As a test I tried putting an object in the cache with
Rails.cache.write("gateway", @gateway)
Then retrieving it with
Rails.cache.read("gateway")
however this returns nil, why is this?
This is in a development environment; memcached is installed and running, and caching should be enabled by the entries config.cache_classes = true and config.action_controller.perform_caching = true.
Rails projects use memcached in various different ways, but if you are working on a Rails 3 project then I would suggest they may be using the 'dalli' gem, which provides a memcached session store. Using the cache could then be done something like this: session[:gateway] = @gateway and the opposite @gateway = session[:gateway]. The other way it is done is memcache.set('gateway', @gateway) and memcache.get('gateway').
It would be helpful to see the configuration code. Check config/initializers/session_store.rb for something like Rails.application.config.session_store :dalli_store
Also, as said in the comments, if you are in development caching may be turned off. Check your config/development.rb file for the following:
config.action_controller.perform_caching = false
The other thing is you need to have memcached installed on your OS. For Linux this is sudo apt-get install memcached, and it can be checked with ps aux | grep memcache (this should show two processes: the grep and memcached).
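For reference, a minimal sketch of a development config with memcached caching switched on (assuming the dalli gem is bundled and memcached is on its default port):
# config/environments/development.rb
config.cache_store = :dalli_store, 'localhost:11211'
config.action_controller.perform_caching = true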
Update
You should also check out the Rails caching guide.

How to detect if a rails app is running under Unicorn?

I need to set up a connection to an external service in my Rails app. I do this in an initializer. The problem is that the service library uses threaded delivery (which I need, because I can't have it bogging down requests), but the Unicorn life cycle causes the thread to be killed and the workers never see it. One solution is to invoke a new connection on every request, but that is unnecessarily wasteful.
The optimal solution is to set up the connection in an after_fork block in the Unicorn config. The problem there is that the block doesn't get invoked outside of Unicorn, which means we can't test it in development/testing environments.
So the question is, what is the best way to determine whether a Rails app is running under Unicorn (either master or worker process)?
There is an environment variable accessible in Rails (I know it exists in 3.0 and 3.1): check the value of env['SERVER_SOFTWARE']. You could just run a regex or string compare against that value to determine which server you are running under.
I have a template in my admin that goes through the env variable and spits out its content.
Unicorn 4.0.1
env['SERVER_SOFTWARE'] => "Unicorn 4.0.1"
rails server (webrick)
env['SERVER_SOFTWARE'] => "WEBrick/1.3.1 (Ruby/1.9.3/2011-10-30)"
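A minimal sketch of that check (the method name and the start_with? test are my own, not from the answer):
# somewhere with access to the Rack env, e.g. request.env in a controller
def running_under_unicorn?(env)
  env['SERVER_SOFTWARE'].to_s.start_with?('Unicorn')
end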
You can check for defined?(Unicorn), and in your Gemfile set: gem 'unicorn', require: false
In fact you don't need the Unicorn library loaded in your Rails application.
The server is started by the unicorn command from the shell.
Checking for Unicorn constant seems a good solution, BUT it depends very much on whether require: false is provided in the Gemfile. If it isn't (which is quite probable), the check might give a false positive.
I've solved it in a very straightforward manner:
# `config/unicorn.rb` (or alike):
ENV["UNICORN"] = "1" # ENV values must be strings, not integers
...

# `config/environments/development.rb` (or alike):
...
# Log to stdout if the Web server is Unicorn.
if ENV["UNICORN"].to_i > 0
  config.logger = Logger.new(STDOUT)
end
Cheers!
You could check to see if the Unicorn module has been defined with Object.constants.include?(:Unicorn) (on Ruby 1.9+ constants are returned as symbols; on 1.8 they were strings, hence 'Unicorn').
This is very specific to Unicorn, of course. A more general approach would be to have a method which sets up your connection and remembers it's already done so. If it gets called multiple times, it just returns doing nothing on subsequent calls. Then you call the method in after_fork and in a before_filter in your application controller. If it's been run in the after_fork it does nothing in the before_filter, if it hasn't been run yet it does its thing on the first request and nothing on subsequent requests.
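A minimal sketch of that idempotent setup, with a hypothetical ExternalService client (every name here is a placeholder, not a real API):
# lib/service_connection.rb
module ServiceConnection
  # Memoized: only the first call actually connects; later calls are no-ops.
  def self.ensure_connected!
    @connection ||= ExternalService.connect
  end
end
Call ServiceConnection.ensure_connected! both in Unicorn's after_fork and in a before_filter; whichever runs first does the work, and the other call returns immediately.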
Inside config/unicorn.rb, define the ENV variable as:
ENV['RAILS_STDOUT_LOG'] = '1'

worker_processes 3
timeout 90
and then this variable ENV['RAILS_STDOUT_LOG'] will be accessible anywhere in your Rails app's worker processes.
My issue: I wanted to output all the logs (SQL queries) on the Unicorn workers and not on any other workers on Heroku, so what I did was add the env variable in the Unicorn configuration file.
If you use unicorn_rails, the code below will help:
defined?(::Unicorn::Launcher)
