User sessions invalid after changing password, but only with multiple threads - ruby-on-rails

I'm running into a strange problem with a feature in my Rails 4 + Devise 3.2 application that allows users to change their password via an AJAX POST to the following action, derived from the Devise wiki page "Allow users to edit their password". After the user changes their password, one or more requests later they are forcibly logged out, and they continue to be forcibly logged out after signing back in.
# POST /update_my_password
def update_my_password
  @user = User.find(current_user.id)
  authorize! :update, @user # CanCan check here as well
  if @user.valid_password?(params[:old_password])
    @user.password = params[:new_password]
    @user.password_confirmation = params[:new_password_conf]
    if @user.save
      sign_in @user, :bypass => true
      head :no_content
      return
    end
  else
    render :json => { "error_code" => "Incorrect password" }, :status => 401
    return
  end
  render :json => { :errors => @user.errors }, :status => 422
end
This action works fine in development, but it fails in production, where I'm running multi-threaded, multi-worker Puma instances. What appears to happen is that the user remains logged in until one of their requests hits a different thread, at which point they are logged out with a 401 Unauthorized response. The problem does not occur if I run Puma with a single thread and a single worker. The only way I have found to let the user stay logged in with multiple threads is to restart the server (which is not a solution). This is strange, because I thought my session storage configuration would handle it correctly. My config/initializers/session_store.rb file contains the following:
MyApp::Application.config.session_store(ActionDispatch::Session::CacheStore,
  :expire_after => 3.days)
My production.rb config contains:
config.cache_store = :dalli_store, ENV["MEMCACHE_SERVERS"],
  {
    :pool_size => (ENV['MEMCACHE_POOL_SIZE'] || 1),
    :compress => true,
    :socket_timeout => 0.75,
    :socket_max_failures => 3,
    :socket_failure_delay => 0.1,
    :down_retry_delay => 2.seconds,
    :keepalive => true,
    :failover => true
  }
I am booting up puma via bundle exec puma -p $PORT -C ./config/puma.rb. My puma.rb contains:
threads ENV['PUMA_MIN_THREADS'] || 8, ENV['PUMA_MAX_THREADS'] || 16
workers ENV['PUMA_WORKERS'] || 2
preload_app!

on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    config = Rails.application.config.database_configuration[Rails.env]
    config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
    config['pool'] = ENV['DB_POOL'] || 16
    ActiveRecord::Base.establish_connection(config)
  end
end
So... what could be going wrong here? How can I update the session across all threads/workers when the password has changed, without restarting the server?

Since you're using Dalli as your session store you may be running up against this issue.
Multithreading Dalli
From the page:
"If you use Puma or another threaded app server, as of Dalli 2.7, you can use a pool of Dalli clients with Rails to ensure the Rails.cache singleton does not become a source of thread contention."
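Concretely, that means setting :pool_size in the cache store config (backed by the connection_pool gem). A minimal sketch, assuming Dalli >= 2.7:

```ruby
# config/environments/production.rb -- a sketch, assuming Dalli >= 2.7 and the
# connection_pool gem are available. With :pool_size set, each cache operation
# checks a client out of a pool instead of all threads sharing one socket.
config.cache_store = :dalli_store, ENV["MEMCACHE_SERVERS"],
  { :pool_size => (ENV['MEMCACHE_POOL_SIZE'] || 5).to_i }
```

Note that the question's config defaults :pool_size to 1, which makes the pool a single shared client; sizing it to roughly the number of Puma threads is a common choice.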

I suspect you're seeing that behavior due to the following issues:
Devise defines the current_user helper method using a memoized instance variable whose value comes from Warden, in lib/devise/controllers/helpers.rb#58 (substitute user for mapping):
def current_#{mapping}
  @current_#{mapping} ||= warden.authenticate(:scope => :#{mapping})
end
Not having run into this myself, this is speculation, but hopefully it's helpful in some way. In a multi-threaded app, each request is routed to a thread that may still hold the previous value of current_user due to caching, either in thread-local storage or in Rack, which may track data per thread.
One thread changes the underlying data (the password change), invalidating the previous data. The cached data held by the other threads is not updated, so later accesses using the stale data cause the forced logout. One solution might be to flag that the password changed, allowing the other threads to detect that change and handle it gracefully, without a forced logout.
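As a sketch of that last idea (the session key and filter name here are invented for illustration, not Devise API): store a fingerprint of the password in the session, and have a before_action re-fetch the user and refresh the session when the fingerprint no longer matches, instead of letting a stale thread-cached record fail authentication.

```ruby
# Hypothetical sketch only: session[:pw_fingerprint] is a made-up key, and the
# exact hook points depend on how Devise/Warden are configured in the app.
class ApplicationController < ActionController::Base
  before_action :refresh_user_if_password_changed

  private

  def refresh_user_if_password_changed
    return unless current_user
    fresh = User.uncached { User.find(current_user.id) } # bypass the query cache
    if session[:pw_fingerprint] != fresh.encrypted_password
      sign_in fresh, :bypass => true # re-sign-in with the fresh record
      session[:pw_fingerprint] = fresh.encrypted_password
    end
  end
end
```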

I would suggest that after a user changes their password, log them out and clear their sessions, like so:
def update_password
  @user = User.find(current_user.id)
  if @user.update(user_params)
    sign_out @user # Let them sign in again
    reset_session  # This might not be needed?
    redirect_to root_path
  else
    render "edit"
  end
end
I believe your main issue is the way sign_in updates the session, combined with the multiple threads, as you mentioned.

This is a gross, gross solution, but it appeared that the other threads would do ActiveRecord query caching of my User model, and the stale data returned would trigger an authentication failure.
By adapting a technique described in Bypassing ActiveRecord cache, I added the following to my User.rb file:
# This default scope avoids query caching of the user,
# which can be a big problem when multithreaded user
# password changing happens.
FIXNUM_MAX = (2**(0.size * 8 - 2) - 1)
default_scope {
  r = Random.new.rand(FIXNUM_MAX)
  where("? = ?", r, r)
}
I realize this has performance implications throughout my application, but it seems to be the only way I could get around the issue. I tried overriding many of the Devise and Warden methods that use this query, but without luck. Perhaps I'll look into filing a bug against devise/warden soon.
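A narrower alternative to the random default_scope, if the stale reads can be localized, is ActiveRecord's built-in uncached, which disables the query cache only for a block. A sketch:

```ruby
# Sketch: bypass the per-thread query cache only where a fresh User row
# matters, rather than defeating caching for every User query app-wide.
ActiveRecord::Base.uncached do
  @user = User.find(current_user.id)
end
```

Whether this reaches the queries Devise/Warden issue internally is exactly the difficulty described above, so it may not be sufficient on its own.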

Related

ActiveStorage: Old urls still request deprecated :combine_options in variation_key

Recently I upgraded from Rails 6.0.3.4 to 6.1.3. ActiveStorage deprecated combine_options, which I cleared from my app. All fresh requests work as expected.
Internet Bots (Facebook, Google, ...) cache urls to images hosted on a website (like mine). According to my Rollbar records they request these a couple of times a day.
The cached URLs that should load ActiveStorage attachments include an old variation_key. When the blob is loaded using the decoded variation_key, I see that combine_options is still present. This throws a 500 Internal Server Error with ArgumentError (Active Storage's ImageProcessing transformer doesn't support :combine_options, as it always generates a single ImageMagick command.).
Is there any way I can stop these errors from showing up?
Rails version: 6.1.3.
Ruby version: 2.7.2p137
I resolved this issue using middleware. It intercepts all incoming requests, checks whether they are ActiveStorage URLs, finds the ones with the deprecated combine_options, and returns 404 Not Found. The code also raises an error if the current environment is development, so that I don't accidentally reintroduce the deprecated code.
For those of you who might have the same problem, here's the code.
application.rb
require_relative '../lib/stopper'
config.middleware.use ::Stopper
lib/stopper.rb
class Stopper
  def initialize(app)
    @app = app
  end

  def call(env)
    req = Rack::Request.new(env)
    path = req.path
    if problematic_active_storage_url?(path)
      if ENV["RACK_ENV"] == 'development'
        raise "Problematic route, includes deprecated combine_options"
      end
      [404, {}, ['not found']]
    else
      @app.call(env)
    end
  end

  def problematic_active_storage_url?(path)
    if active_storage_path?(path) && !valid_variation_key?(variation_key_from_path(path))
      return true
    end
    false
  end

  def active_storage_path?(path)
    path.start_with?("/rails/active_storage/representations/")
  end

  def variation_key_from_path(path)
    if path.start_with?("/rails/active_storage/representations/redirect/")
      path.split('/')[6]
    elsif path.start_with?("/rails/active_storage/representations/")
      path.split('/')[5]
    end
  end

  def valid_variation_key?(var_key)
    if decoded_variation = ActiveStorage::Variation.decode(var_key)
      if transformations = decoded_variation.transformations
        if transformations[:combine_options].present?
          return false
        end
      end
    end
    true
  end
end
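The path parsing in variation_key_from_path can be sanity-checked in plain Ruby, assuming URLs of the standard shape /rails/active_storage/representations[/redirect]/:blob_key/:variation_key/:filename:

```ruby
# Plain-Ruby extract of the parsing logic above, for quick verification.
# Splitting "/a/b" on "/" yields a leading empty string, hence the indices.
def variation_key_from_path(path)
  if path.start_with?("/rails/active_storage/representations/redirect/")
    path.split('/')[6] # ["", "rails", "active_storage", "representations", "redirect", blob, variation, ...]
  elsif path.start_with?("/rails/active_storage/representations/")
    path.split('/')[5]
  end
end

puts variation_key_from_path("/rails/active_storage/representations/redirect/BLOB/VARIATION/img.jpg")
# VARIATION
```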
I thought the stopper was a great solution, but eventually I wanted to get rid of it. Unfortunately, most of the old requests were still coming through months later, and no one was honoring the 404s. So I decided to monkey patch based on the previous Rails version's behavior. This is what I did.
config/initializers/active_storage.rb
Rails.application.config.after_initialize do
  require 'active_storage'

  ActiveStorage::Transformers::ImageProcessingTransformer.class_eval do
    private

    def operations
      transformations.each_with_object([]) do |(name, argument), list|
        if name.to_s == "combine_options"
          list.concat argument.keep_if { |key, value| value.present? and key.to_s != "host" }.to_a
        elsif argument.present?
          list << [name, argument]
        end
      end
    end
  end
end
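The filtering logic in the patched operations method can be exercised outside Rails. Here it is extracted as a plain-Ruby sketch, with nil checks in place of ActiveSupport's present? and a hypothetical name, flatten_operations:

```ruby
# Plain-Ruby sketch of the patch's filter: combine_options entries are
# flattened into the operation list, dropping nil values and the :host key;
# other entries pass through as [name, argument] pairs when non-nil.
def flatten_operations(transformations)
  transformations.each_with_object([]) do |(name, argument), list|
    if name.to_s == "combine_options"
      list.concat argument.reject { |key, value| value.nil? || key.to_s == "host" }.to_a
    elsif argument
      list << [name, argument]
    end
  end
end

p flatten_operations({ "combine_options" => { resize: "100x100", host: "h" }, "quality" => 80 })
# [[:resize, "100x100"], ["quality", 80]]
```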

Rails session still alive after db:drop

I'm curious why the session is left alive after rake db:drop.
I have
def current_order
  if !session[:order_id].nil?
    @current_order = Order.find(session[:order_id])
  else
    Order.new
  end
end
helper_method :current_order
and after rake db:drop it gives me the exception
Couldn't find Order with 'id' = my session number
>> session[:order_id]
=> 11
That means the session is still there. But why?
Rails by default uses ActionDispatch::Session::CookieStore to store the session. Thus cleaning out the database does not remove any session data as it is held by the client.
Instead you would remove the cookies in the browser (in development) or change the secret key base to invalidate existing cookies.
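Separately, the helper itself can be made defensive so that a stale cookie does not raise. A sketch using find_by, which returns nil instead of raising when the row is gone:

```ruby
# Sketch: fall back to a new Order when the session points at a row that no
# longer exists (e.g. after rake db:drop), instead of raising RecordNotFound.
def current_order
  if session[:order_id]
    @current_order = Order.find_by(id: session[:order_id]) || Order.new
  else
    Order.new
  end
end
helper_method :current_order
```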
See:
Ruby on Rails Security Guide

Heroku timeout when sending emails

I am on Heroku with a custom domain, and I have the Redis add-on. I need help understanding how to create a background worker for email notifications. Users can send each other inbox messages, and I would like to send an email notification to the user for each new message received. I have the notifications working in development, but I am not good at creating background jobs, which are required on Heroku; otherwise the server times out.
Messages Controller:
def create
  @recipient = User.find(params[:user])
  current_user.send_message(@recipient, params[:body], params[:subject])
  flash[:notice] = "Message has been sent!"
  if request.xhr?
    render :json => { :notice => flash[:notice] }
  else
    redirect_to :conversations
  end
end
User model:
def mailboxer_email(object)
  if self.no_email
    email
  else
    nil
  end
end
Mailboxer.rb:
Mailboxer.setup do |config|
  # Configures whether your application uses email sending for Notifications and Messages
  config.uses_emails = false

  # Configures the default from address for emails sent for Messages and Notifications of Mailboxer
  config.default_from = "no-reply@domain.com"

  # Configures the methods needed by mailboxer
  config.email_method = :mailboxer_email
  config.name_method = :name

  # Configures whether you use a search engine and which one you are using
  # Supported engines: [:solr, :sphinx]
  config.search_enabled = false
  config.search_engine = :sphinx
end
Sidekiq is definitely the way to go with Heroku. I don't think mailboxer supports background configuration out of the box. Thankfully, it's still really easy with sidekiq's queueing process.
Add gem 'sidekiq' to your gemfile and run bundle.
Create a worker file app/workers/message_worker.rb.
class MessageWorker
  include Sidekiq::Worker

  def perform(sender_id, recipient_id, body, subject)
    sender = User.find(sender_id)
    recipient = User.find(recipient_id)
    sender.send_message(recipient, body, subject)
  end
end
Update your Controller to Queue Up the Worker
Remove: current_user.send_message(@recipient, params[:body], params[:subject])
Add: MessageWorker.perform_async(current_user.id, @recipient.id, params[:body], params[:subject])
Note: You should never pass workers ActiveRecord objects. That's why I set up this method to pass the user ids and look them up in the worker's perform method, instead of passing the entire objects.
Finally, restart your server and run bundle exec sidekiq. Now your app should be sending the email in the background.
When you deploy, you will need a separate dyno for the worker which should look like this: worker: bundle exec sidekiq. You will also need Heroku's redis add-on.
Sounds like a H21 Request Timeout:
An HTTP request took longer than 30 seconds to complete.
To create a background worker for this in RoR, you should grab Resque, a Redis-backed background queueing library for RoR. There are several demo applications around that show how to set it up.
To learn more about using Resque on Heroku, you can also read the Heroku article on it, or one of the many tutorials (some are old, though).
There is also a resque_mailer gem that will speed things up for you.
gem install resque_mailer # or add it to your Gemfile & use bundler
It is fairly straightforward. Here is a snippet from a working demo by the author:
class Notifier < ActionMailer::Base
  include Resque::Mailer
  default :from => "from@example.com"

  def test(data = {})
    data.symbolize_keys!
    Rails.logger.info "sending test mail"
    Rails.logger.info "params: #{data.keys.join(',')}"
    Rails.logger.info ""
    @subject = data[:subject] || "Testing mail"
    mail(:to => "nap@localhost.local",
         :subject => @subject)
  end
end
doing Notifier.test.deliver will deliver the mail.
You can also consider using mail delivery services like SES.
Sidekiq is an option that you could consider. To get it working you can add something like RedisToGo, then configure an initializer for Redis. Then on Heroku you can add something like worker: bundle exec sidekiq ... to your Procfile.
https://github.com/mperham/sidekiq/wiki/Getting-Started
It also has a dashboard for monitoring.
https://github.com/mperham/sidekiq/wiki/Monitoring

Uninitialized constant error after uploading on Heroku

There is the following problem: I'm developing a Rails application on my local machine, and all is good; the app works. But after deploying to Heroku I get the following error (seen via heroku logs):
NameError (uninitialized constant Api::V1::ApiV1Controller::UndefinedTokenTypeError)
My code:
def require_token
  begin
    # Some code which generates UndefinedTokenTypeError
  rescue UndefinedTokenTypeError => e
    render json: e.to_json
  end
end
UndefinedTokenTypeError is in lib/errors.rb file:
class EmptyCookieParamsError < StandardError
  def to_json
    { result_code: 1 }
  end
end

class UndefinedTokenTypeError < StandardError
  def to_json
    { result_code: 2 }
  end
end
I've got the same version for Rails/Ruby on my local machine (2.0). How can I fix it? Thanks.
From what I can see, you may be experiencing either a CORS-related issue or you're not authenticating properly
Cross Origin Resource Sharing
CORS is a web standard that governs which origins may send requests to your site. Facebook's and Twitter's third-party widgets only work because they allow any site to send them data.
For Rails to work with CORS, it's recommended to install the Rack-CORS gem. This will allow you to put this code in your config/application.rb file:
# CORS
config.middleware.use Rack::Cors do
  allow do
    origins '*'
    resource '/data*', :headers => :any, :methods => :post
  end
end
Because these issues appear on Heroku, CORS could be the problem. Even if it isn't, it's definitely useful to understand how CORS works.
Authentication
Unless your API is public, you'll likely be authenticating the requests
The way we do this is with the authenticate_or_request_with_http_token function, which can be seen here:
# Check token
def restrict_access
  authenticate_or_request_with_http_token do |token, options|
    user = User.exists?(public_key: token)
    @token = token if user
  end
end
We learned how to do this from a Railscast that discusses how to protect an API. The reason I asked about your code is that the above works for us on Heroku, and you could gain something from it!
Running on Heroku will be using the production environment. Check to see what is different between environments/development.rb and environments/production.rb
You can try running your app in production mode on your local machine, rails server -e production
I am guessing your config.autoload_paths isn't set correctly. Should be in config/application.rb
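If lib/ is indeed the problem, here is a sketch of the usual fix, assuming lib/errors.rb defines the error classes at top level as in the question:

```ruby
# config/application.rb -- a sketch: add lib/ to the autoload paths so that
# constants defined in lib/errors.rb resolve on Heroku (production) too.
config.autoload_paths += %W(#{config.root}/lib)
```

Alternatively, an explicit require_relative '../lib/errors' in config/application.rb avoids relying on autoloading entirely.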

Fragment_exist for Memcache does not find cached info

I am using Rails, Dalli gem, friendly_id and Memcachier on Heroku.
My problem is similar to an issue I had previously, but that solution stopped working after I started using Memcache instead of the default Rails cache. It should be noted that I am not very familiar with Rails caching, and it is quite likely that I am doing many things wrong (or failing to consider simple things).
production.rb
config.action_controller.perform_caching = true
config.cache_store = :dalli_store, 'mc2.ec2.memcachier.com:11211',
  { :namespace => 'my_app_name', :expires_in => 40.days, :compress => true,
    :username => 'asdf', :password => 'asdf' }
Gift#show - controller
unless fragment_exist?("gift-show--" + @gift.slug)
  # Perform slow database fetches etc.
end
Gift#show - view
<% cache('gift-show--' + @gift.slug) do %>
  <%# Create HTML with lots of images and such %>
<% end %>
This worked fine before I started using Memcachier on Heroku. My guess is that fragment_exist? does not check Memcachier but rather the "default Rails cache" (if there is a difference). I have tried using Rails.cache.exist?("gift-show--" + @gift.slug) instead of fragment_exist?, but it doesn't work.
I have loaded the particular gift#show view a few times to make sure it is cached. In the logs I can also see Read fragment views/gift-show--cash-stash (1.3ms) after the controller runs, which I believe proves that the fragment actually exists. It is just that the request still goes through the slow (4 second) gift#show controller when it is not necessary.
If I enter the console on Heroku and type "Rails.cache.read('gift-show--cash-stash')" I get a nil response.
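One thing to note when checking from the console: fragment caching namespaces its keys, as the log line itself shows (views/gift-show--cash-stash), so a bare Rails.cache.read of the unprefixed key misses. A sketch:

```ruby
# The fragment helpers prepend "views/" to the key (and, with cache digests
# enabled, append a template digest), so these two reads are not equivalent:
Rails.cache.read("gift-show--cash-stash")       # raw key, not what fragment caching wrote
Rails.cache.read("views/gift-show--cash-stash") # the key the log line reports
```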
Another peculiar thing is that if I do the following in the console:
irb(main):014:0> Rails.cache.write("foo", "bar")
=> true
irb(main):015:0> Rails.cache.read("foo")
=> nil
That is odd, isn't it?
So, what should I use, instead of fragment_exist? in order to make this work?
I am not 100% sure this is the solution, but I added the memcachier gem (which I didn't have) and altered my production.rb to:
config.cache_store = :dalli_store
This actually also solved another, completely different issue, to my great surprise!
