Rails adding extra line in my redis cache - ruby-on-rails

I'm using redis-rails in my project to store a cache of users, and I don't know why an extra line is added at the beginning of the cache.
This is my config:
config.cache_store = :redis_store, {
  host: ENV['REDIS_SERVER'] || 'localhost',
  port: 6379,
  db: 0,
  namespace: ENV['CUSTOMER']
}
This is my code:
namespace :update_employees_cache do
  desc "Update employees cache"
  task update: :environment do
    employees = []
    Employee.where(active: true).each do |item|
      employees.push({ id: item.id, name: item.name })
    end
    Rails.cache.write "employees", employees.to_json
  end
end
This is the result. At line 1 of the cached value there is:
o: ActiveSupport::Cache::Entry:@valueI"…
What is this?

After opening an issue in the project repo I discovered that this is the default behavior of Rails: it wraps the cached value in an ActiveSupport::Cache::Entry with that metadata.
In my case I need to avoid it, so raw has to be set to true in the config:
config.cache_store = :redis_store, {
  host: ENV['REDIS_SERVER'] || 'localhost',
  port: 6379,
  db: 0,
  namespace: ENV['CUSTOMER'],
  raw: true
}
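With raw mode enabled, the store keeps the plain JSON string instead of a marshalled Entry, so it has to be parsed on the way back out. A minimal sketch of reading the value written by the rake task above (the JSON.parse call and the nil guard are illustrative additions, not from the original answer):
# Read back the raw JSON string stored under "employees" by the rake task
# and turn it into an array of hashes. The nil guard covers the case where
# the cache has not been populated yet.
json = Rails.cache.read("employees")
employees = json ? JSON.parse(json) : []
employees.first # e.g. { "id" => 1, "name" => "Jane" } (illustrative)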

Related

Can't connect to clustered Azure Redis Cache with redis-rb gem and public access

We've provisioned an Azure Redis Cache server using the Premium tier. The server is clustered, with 2 shards, and it is configured to allow public access over the public internet through a firewall, allow-listing a set of known IPs.
Deploys of our Rails application to any of these known IPs fail with the error:
Redis::CannotConnectError: Redis client could not connect to any cluster nodes
Here is our Rails config:
# application.rb
if Rails.env.test?
  config.cache_store = :redis_cache_store, {
    url: config.nines[:redis_url],
    expires_in: 90.minutes,
    namespace: ENV['TEST_ENV_NUMBER'],
  }
else
  config.cache_store = :redis_store, {
    namespace: ENV['TEST_ENV_NUMBER'],
    cluster: [ Rails.application.config.nines[:redis_url] ],
    replica: true, # allow reads from replicas
    compress: true,
    compress_threshold: 1024,
    expires_in: 90.minutes
  }
end
The config.nines[:redis_url] is set like this: rediss://:<pw>@<cache-name>.redis.cache.windows.net:6380/0
Then, we initialize the Redis connection in code like this:
if Rails.env.test?
  ::Redis.new :url => redis_url, :db => ENV['REDIS_DB']
else
  ::Redis.new(cluster: [ "#{redis_url}" ], db: ENV['REDIS_DB'])
end
We're using the redis-rb gem and redis-rails gem.
If anyone can point out what we're doing wrong, please do share!
Thank you User Peter Pan - Stack Overflow. Posting your suggestion as answer to help other community members.
You can try the code below:
# Import the redis library for Ruby
require "redis"
# Create a redis client instance for connecting to Azure Redis Cache.
# To enable SSL, set the `ssl` option to true,
# see https://github.com/redis/redis-rb#ssltls-support
redis = Redis.new(
  :host => '<azure redis cache name>.redis.cache.windows.net',
  :port => 6380,
  :db => <the db index you selected, e.g. 10>,
  :password => "<access key>",
  :ssl => true)
# Then, set key `foo` with value `bar`; it returns `OK`
status = redis.set('foo', 'bar')
puts status # => OK
# Get the value of key `foo`
foo = redis.get('foo')
puts foo # => bar
Reference: How to setup Azure Redis cache with Rails - Stack Overflow
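If the goal is to use the Azure cache as the Rails cache store rather than through a bare client, the same single-endpoint style can be tried in application.rb. This is only a sketch, assuming the built-in :redis_cache_store and the same config.nines[:redis_url] value (its rediss:// scheme already enables TLS in redis-rb); it drops the cluster: option in favor of one URL, mirroring the answer above:
# application.rb (a sketch, not a verified fix for the clustered setup)
# Point the built-in redis_cache_store at the single Azure endpoint over TLS
# instead of passing cluster:.
config.cache_store = :redis_cache_store, {
  url: Rails.application.config.nines[:redis_url],
  compress: true,
  compress_threshold: 1024,
  expires_in: 90.minutes
}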

Rails: Active Record Timeout

This is the piece of code in place. When I add this to the cron job with a timeout, the entire array gets saved twice. When I remove the timeout, nothing gets saved.
In this scenario we want to save the array of results (coming in from an API), over 100k records, to the db. I have used the bulk_insert and TinyTDS gems here.
ActiveRecord::Base.establish_connection(
  adapter: 'sqlserver', host: "xxx", username: "xxx", password: "xxx",
  database: "xxx", azure: true, port: 1433, timeout: 5000
)
class Report < ActiveRecord::Base
  self.primary_key = 'id'
end
my_array = [] # count of 100000 records
Report.bulk_insert(:account_owner_id) do |worker|
  my_array.drop(2).each do |arr|
    worker.add account_owner_id: arr[0]
  end
end
You can try removing the timeout and adding ignore: true to your bulk insert, as shown here; there may be an insert that is failing.
Report.bulk_insert(:account_owner_id, ignore: true) do |worker|
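If the failure is the TinyTDS timeout on one huge statement rather than a bad row, another option is to insert in smaller batches so that no single statement has to finish inside the 5000 ms limit. A sketch reusing my_array and Report from the question (the batch size of 1,000 is an arbitrary choice):
# Insert in batches of 1,000 rows: each bulk_insert call issues its own,
# much smaller statement, so a failing or slow batch is easier to isolate.
my_array.drop(2).each_slice(1000) do |batch|
  Report.bulk_insert(:account_owner_id, ignore: true) do |worker|
    batch.each { |arr| worker.add account_owner_id: arr[0] }
  end
end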

How to connect to different collections in solr from one Rails application?

I am trying to create a multi-tenant architecture and I am hoping to use separate Solr collections for different tenants.
I am using sunspot-solr to interact with the Solr server.
The sunspot.yml file looks like this:
production:
  solr:
    hostname: 127.0.0.1
    port: 8983
    log_level: WARNING
    path: /solr/collection_1
    read_timeout: 2
    open_timeout: 0.5
development:
  solr:
    hostname: 127.0.0.1
    port: 8983
    log_level: INFO
    path: /solr/collection
test:
  solr:
    hostname: localhost
    port: 8983
    log_level: WARNING
    path: /solr/collection
My Model looks like this:
class ContentMaster < ActiveRecord::Base
  self.table_name = "content_master"
  searchable do
    text :title, :stored => true
    string :id, :multiple => true, :stored => true
    integer :status, :multiple => true
    string :image_url, :stored => true
  end
end
And this is how I am fetching the content from Solr:
@search = ContentMaster.search do
  fulltext search_word
  with(:status, 1)
end
Now, everything is working fine for a single collection.
Please guide me on the following:
1. How do I configure my sunspot.yml file to incorporate multiple collection (core) paths?
2. What changes do I need to make (if any) in my models to index data for different tenants?
3. What changes do I need to make (if any) in my data extraction mechanism for different tenants?
Any help is appreciated. Thanks!
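One possible direction, sketched here rather than taken from the thread: Sunspot can build a session against an arbitrary Solr URL, so each tenant can be pointed at its own collection at runtime instead of relying on a single path in sunspot.yml. The tenant_collection_for helper and current_tenant below are hypothetical, and how you scope the session swap (per request, per thread) is left open:
# Build a Sunspot session pointed at a tenant-specific collection.
# tenant_collection_for is a hypothetical helper returning a name such as
# "collection_1" for the given tenant.
def sunspot_session_for(tenant)
  config = Sunspot::Configuration.build
  config.solr.url = "http://127.0.0.1:8983/solr/#{tenant_collection_for(tenant)}"
  Sunspot::Session.new(config)
end

# Swap the global session before indexing or searching for this tenant;
# ContentMaster.search calls made afterwards hit that tenant's collection.
Sunspot.session = sunspot_session_for(current_tenant)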

Sidekiq Redis database keys increasing over time

I am currently using Sidekiq with my Rails app in production along with an ElastiCache Redis database. I've noticed recently that when monitoring the CurrItems metric using the AWS tools, the number of items gradually increases over time in an almost step-like way.
However, when I look at the jobs in queue in the Sidekiq dashboard, I don't see anything backing up at all. I see 0 jobs in queue, 0 busy, 0 scheduled.
The step-like increase seems to happen at a very particular time each day (right at the end of the day), which made me think it might be related to a cron job/Clockwork process I have running. However, I only have 4 jobs that run once a day and none of them run during or even near that time. Just for good measure, though, here is my clock.rb file (I have shortened all the job descriptions and class and method names for simplicity's sake):
module Clockwork
  every(30.seconds, 'Task 1') { Class.method }
  every(30.seconds, 'Task 2') { Class.method }
  every(10.minutes, 'Task 3') { Class.method }
  every(1.day, 'Task 4', :at => '06:00', :tz => 'EST') { Class.method }
  every(10.minutes, 'Task 5') { Class.method }
  every(1.day, 'Task 6', :at => '20:00', :tz => 'UTC') { Class.method }
  every(1.day, 'Task 7', :at => '20:00', :tz => 'UTC') { Class.method }
  every(1.day, 'Task 8', :at => '20:00', :tz => 'UTC') { Class.method }
  every(1.hour, 'Task 9') { Class.method }
  every(30.minutes, 'Task 10') { Class.method }
  every(30.minutes, 'Task 11') { Class.method }
  every(1.hour, 'Task 12') { Class.method }
end
I'm not quite sure where this is coming from. Maybe Sidekiq isn't removing the keys from the database once a job is complete?
Another potentially helpful piece of information is that I'm running 4 workers/servers. Here is my Redis configuration:
if (Rails.env == "production" || Rails.env == "staging")
  redis_domain = ENV['REDIS_DOMAIN']
  redis_port = ENV['REDIS_PORT']
  redis_url = "redis://#{redis_domain}:#{redis_port}"

  Sidekiq.configure_server do |config|
    ActiveRecord::Base.establish_connection(
      adapter: "postgresql",
      encoding: "unicode",
      database: ENV["RDS_DB_NAME"],
      pool: 25,
      username: ENV["RDS_USERNAME"],
      password: ENV["RDS_PASSWORD"],
      host: ENV["RDS_HOST"],
      port: 5432
    )
    config.redis = {
      namespace: "sidekiq",
      url: redis_url
    }
  end

  Sidekiq.configure_client do |config|
    config.redis = {
      namespace: "sidekiq",
      url: redis_url
    }
  end
end
Anyone know why this could be happening?
Historical job metrics are stored per-day, for the past 5 years. You are seeing those 4-6 keys/day. This gives you the nice metrics on the Web UI's Dashboard.
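To confirm that the growth really is those daily counters rather than stuck job payloads, you can list them from the Rails console. A sketch, assuming the "sidekiq" namespace from the configuration above and Sidekiq's stat:processed:<date> / stat:failed:<date> key naming:
# Inside Sidekiq.redis the connection is already scoped to the "sidekiq"
# namespace, so the daily counters appear as stat:processed:YYYY-MM-DD and
# stat:failed:YYYY-MM-DD. A couple of new keys per day lines up with a slow,
# step-like rise in CurrItems.
Sidekiq.redis do |conn|
  puts conn.keys("stat:*").sort.last(10)
end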

Sidekiq looks for redis on localhost instead of remote

I have this in my Sidekiq initializer:
Sidekiq.configure_server do |config|
  config.redis = { url: "redis://#{ENV['REDIS_PORT_6379_TCP_ADDR']}:#{ENV['REDIS_PORT_6379_TCP_PORT']}/0", namespace: 'Tyresearch' }
end
When I boot Sidekiq it starts fine on the correct host and port:
INFO: Booting Sidekiq 3.2.4 with redis options {:url=>"redis://172.17.0.6:6379/0", :namespace=>"Tyresearch"}
However, when I try to launch a worker or visit the Sidekiq admin panel I get this error:
Redis::CannotConnectError at /sidekiq
Error connecting to Redis on 127.0.0.1:6379 (ECONNREFUSED)
So, for some reason it now tries to connect to 127.0.0.1 instead of 172.17.0.6.
Here is my env (the app is made of linked Docker containers managed by Vagrant):
{"ADMIN_EMAIL"=>"user#example.com",
"ADMIN_NAME"=>"First User",
"ADMIN_PASSWORD"=>"changeme",
"BUNDLE_BIN_PATH"=>
"/opt/rubies/ruby-2.1.2/lib/ruby/gems/2.1.0/gems/bundler-1.7.4/bin/bundle",
"BUNDLE_GEMFILE"=>"/var/www/Gemfile",
"COLUMNS"=>"135",
"GEM_HOME"=>"/var/bundle/ruby/2.1.0",
"GEM_PATH"=>"",
"GMAIL_PASSWORD"=>"Your_Password",
"GMAIL_USERNAME"=>"Your_Username",
"HOME"=>"/home/web",
"HOSTNAME"=>"223b7ef7396f",
"LESSCLOSE"=>"/usr/bin/lesspipe %s %s",
"LESSOPEN"=>"| /usr/bin/lesspipe %s",
"LINES"=>"43",
"LS_COLORS"=>
"rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=
01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:
*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=0
1;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.
flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:
*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:",
"NODE_PATH"=>"/usr/lib/nodejs:/usr/lib/node_modules:/usr/share/javascript",
"PATH"=>
"/var/bundle/ruby/2.1.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/rubies/ruby-2.1.2/bin",
"POSTGRES_ENV_PASSWORD"=>"password",
"POSTGRES_ENV_USERNAME"=>"postgres",
"POSTGRES_ENV_VERSION"=>"9.3",
"POSTGRES_NAME"=>"/tyresearch/postgres",
"POSTGRES_PORT"=>"tcp://172.17.0.5:5432",
"POSTGRES_PORT_5432_TCP"=>"tcp://172.17.0.5:5432",
"POSTGRES_PORT_5432_TCP_ADDR"=>"172.17.0.5",
"POSTGRES_PORT_5432_TCP_PORT"=>"5432",
"POSTGRES_PORT_5432_TCP_PROTO"=>"tcp",
"PWD"=>"/var/www",
"REDIS_NAME"=>"/tyresearch/redis",
"REDIS_PORT"=>"tcp://172.17.0.6:6379",
"REDIS_PORT_6379_TCP"=>"tcp://172.17.0.6:6379",
"REDIS_PORT_6379_TCP_ADDR"=>"172.17.0.6",
"REDIS_PORT_6379_TCP_PORT"=>"6379",
"REDIS_PORT_6379_TCP_PROTO"=>"tcp",
"ROLES"=>"[\"admin\", \"user\", \"VIP\"]",
"RUBYLIB"=>
"/opt/rubies/ruby-2.1.2/lib/ruby/gems/2.1.0/gems/bundler-1.7.4/lib",
"RUBYOPT"=>"-rbundler/setup",
"SELENIUM_ENV_CHROME_DRIVER_VERSION"=>"2.12",
"SELENIUM_ENV_DEBCONF_NONINTERACTIVE_SEEN"=>"true",
"SELENIUM_ENV_DEBIAN_FRONTEND"=>"noninteractive",
"SELENIUM_ENV_DISPLAY"=>":20.0",
"SELENIUM_ENV_LANG"=>"en_US.UTF-8",
"SELENIUM_ENV_LANGUAGE"=>"en_US.UTF-8",
"SELENIUM_ENV_SCREEN_DEPTH"=>"24",
"SELENIUM_ENV_SCREEN_HEIGHT"=>"1020",
"SELENIUM_ENV_SCREEN_WIDTH"=>"1360",
"SELENIUM_ENV_SELENIUM_PORT"=>"4444",
"SELENIUM_ENV_TZ"=>"\"US/Pacific\"",
"SELENIUM_NAME"=>"/tyresearch/selenium",
"SELENIUM_PORT"=>"tcp://172.17.0.7:4444",
"SELENIUM_PORT_4444_TCP"=>"tcp://172.17.0.7:4444",
"SELENIUM_PORT_4444_TCP_ADDR"=>"172.17.0.7",
"SELENIUM_PORT_4444_TCP_PORT"=>"4444",
"SELENIUM_PORT_4444_TCP_PROTO"=>"tcp",
"SELENIUM_PORT_5900_TCP"=>"tcp://172.17.0.7:5900",
"SELENIUM_PORT_5900_TCP_ADDR"=>"172.17.0.7",
"SELENIUM_PORT_5900_TCP_PORT"=>"5900",
"SELENIUM_PORT_5900_TCP_PROTO"=>"tcp",
"SHLVL"=>"1",
"TERM"=>"xterm",
"_"=>"/opt/rubies/ruby-2.1.2/bin/bundle",
"_FIGARO_ADMIN_EMAIL"=>"user#example.com",
"_FIGARO_ADMIN_NAME"=>"First User",
"_FIGARO_ADMIN_PASSWORD"=>"changeme",
"_FIGARO_GMAIL_PASSWORD"=>"Your_Password",
"_FIGARO_GMAIL_PASSWORD"=>"Your_Password",
"_FIGARO_GMAIL_USERNAME"=>"Your_Username",
"_FIGARO_ROLES"=>"[\"admin\", \"user\", \"VIP\"]",
"_ORIGINAL_GEM_PATH"=>""}
OK, I had not read the documentation carefully and didn't configure the Sidekiq client side:
Sidekiq.configure_server do |config|
  config.redis = { url: "redis://#{ENV['REDIS_PORT_6379_TCP_ADDR']}:#{ENV['REDIS_PORT_6379_TCP_PORT']}/0", namespace: 'Tyresearch' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: "redis://#{ENV['REDIS_PORT_6379_TCP_ADDR']}:#{ENV['REDIS_PORT_6379_TCP_PORT']}/0", namespace: 'Tyresearch' }
end
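A quick sanity check, not part of the original answer: once both configure_server and configure_client point at the container address, a ping through Sidekiq's connection pool from the Rails console should succeed instead of raising Redis::CannotConnectError against 127.0.0.1:
# Uses the client-side configuration above; prints "PONG" when the
# connection to 172.17.0.6:6379 is wired up correctly.
Sidekiq.redis { |conn| puts conn.ping }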
