Sidekiq queue failing in test - requires separate Redis process - ruby-on-rails

Running an RSpec test with a Sidekiq::Queue instance is failing unless Redis is running separately.
Sidekiq::Queue.new('my-queue').select(&:item)
This raises an error in the test:
Redis::CannotConnectError:
Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED)
I've added the usual to the spec helper:
require 'sidekiq/testing'
Sidekiq::Testing.inline!
And added mock_redis to the Gemfile:
# Gemfile
gem 'mock_redis', '0.16.1'
Using sidekiq (3.4.2)
How can I update my configuration to allow this to work?

mock_redis only provides a fake Redis; it does not intercept or replace the real Redis classes and connections. If you intend to use the fake Redis in tests, you have to tell Sidekiq to use it. In your config/initializers/sidekiq.rb (or wherever your Sidekiq Redis config lives):
redis = if Rails.env.test?
  require 'mock_redis'
  MockRedis.new
else
  { url: 'redis://redis.example.com:7372/12' }
end

Sidekiq.configure_server do |config|
  config.redis = redis
end

Sidekiq.configure_client do |config|
  config.redis = redis
end
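Note that, depending on the Sidekiq version, config.redis may expect an options hash or a ConnectionPool rather than a bare client object. If the bare MockRedis assignment is rejected, wrapping the fake in a pool is one option (a sketch, using the connection_pool gem that Sidekiq already depends on):
require 'connection_pool'
require 'mock_redis'

# A pool of fake connections that can be assigned to config.redis in both blocks above.
redis = ConnectionPool.new(size: 5, timeout: 5) { MockRedis.new }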

I solved this by mocking Redis for tagged RSpec tests in the spec_helper.rb file.
config.before(:each, redis: true) do
  mock = MockRedis.new
  allow(Redis).to receive(:new).and_return(mock)
end
Then in the scenario:
scenario "my scenario with redis", redis: true do
...
end
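This relies on Sidekiq building its connections through Redis.new, which the stub intercepts. A sketch of a tagged spec exercising the call from the original question (queue name assumed):
scenario "inspecting the queue without a real Redis", redis: true do
  # Runs against the MockRedis instance returned by the stubbed Redis.new.
  expect(Sidekiq::Queue.new('my-queue').size).to eq(0)
end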

Related

Ruby on Rails: Operation now in progress - connect(2) would block

I'm working on a site someone else made. There is a production version.
I'm trying to send emails to users whose properties haven't been updated in 30 days.
Everything works until I try to use deliver_later. The reason I'm using deliver_later is that deliver_now runs into the problem of sending too many emails per second. I'm currently using Mailtrap for testing, but I assume I'll run into the same sort of issue in production.
So I opted to wait 1 second for each email:
@testproperties = Property.has_not_updated.includes(:user)
@testproperties.each do |property|
  UserMailer.with(property: property, user: property.user).check_listing.deliver_later(wait: 1.seconds)
end
This results in:
IO::EINPROGRESSWaitWritable: Operation now in progress - connect(2) would block
And nothing sends.
I'm not sure how to solve this issue.
Edit:
I can see on the production site that I can visit the route /sidekiq. The routes file has this block:
authenticate :user, lambda { |u| u.admin? } do
mount Sidekiq::Web => '/sidekiq'
end
I can view the web interface and see all the jobs; it's all working there. But I need to get this working on the development version running on localhost:3000.
Trying to access this locally still results in:
Operation now in progress - connect(2) would block
# Socket#connect
def connect_nonblock(addr, exception: true)
  __connect_nonblock(addr, exception)
end
Sidekiq.rb:
require 'sidekiq'

unless Rails.env.test?
  host = 'localhost'
  port = '6379'
  namespace = 'sitename'

  Sidekiq.configure_server do |config|
    config.redis = { url: "redis://#{host}:#{port}", namespace: namespace }

    schedule_file = "config/schedule.yml"
    if File.exists?(schedule_file)
      Sidekiq::Cron::Job.load_from_hash YAML.load_file(schedule_file)
    end

    config.server_middleware do |chain|
      chain.add Sidekiq::Status::ServerMiddleware, expiration: 30.minutes
    end
    config.client_middleware do |chain|
      chain.add Sidekiq::Status::ClientMiddleware, expiration: 30.minutes
    end
  end

  Sidekiq.configure_client do |config|
    config.redis = { url: "redis://#{host}:#{port}", namespace: namespace }
    config.client_middleware do |chain|
      chain.add Sidekiq::Status::ClientMiddleware, expiration: 30.minutes
    end
  end
end
And my cable.yml:
development:
  adapter: async
  url: redis://localhost:6379/1
  channel_prefix: sitename_dev

test:
  adapter: async

production:
  adapter: redis
  url: redis://localhost:6379/1
  channel_prefix: sitename_production
The production server is running Ubuntu and they already installed redis-server.
I had not installed that locally. (I'm using Ubuntu through Windows WSL)
sudo apt install redis-server
I can now access the web interface.
Make sure that Redis is started:
sudo service redis-server restart
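You can confirm that Redis is actually reachable before starting Sidekiq with a quick check (it should print PONG):
redis-cli ping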

Ruby on Rails - Sidekiq not sending email, job stays stuck in scheduled tasks

I have a contact form that sends an email with the form fields to an admin address. I'm using Sidekiq and Redis. When I submit the form, the job stays stuck in Sidekiq's scheduled tasks and the email is never sent.
Has anyone experienced this? I've already tried many things to fix it, without success. Did I configure something wrong?
# app/mailers/contact_mailer.rb
class ContactMailer < ActionMailer::Base
  default from: "Facens Liga <no-reply@facens.br>"

  def create(contact)
    @contact = contact
    mail(to: "felipe.marcon@atua.ag", subject: "Contato Através do Site")
  end
end
# config/initializers/sidekiq.rb
require 'sidekiq'
require 'sidekiq-status'

Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://127.0.0.1:6379/6', namespace: 'facenliga' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://127.0.0.1:6379/6', namespace: 'facensliga' }
end
# config/sidekiq.yml
:pidfile: tmp/pids/sidekiq.pid
:logfile: ./log/sidekiq.log
:queues:
  - default
  - mailers

production:
  :concurrency: 25

staging:
  :concurrency: 15

development:
  :concurrency: 25
I hope that someone can help me. Thank you.
The problem is with your namespace. Note that your server block uses the namespace 'facenliga' while your client block uses 'facensliga', so the jobs your app enqueues (and shows in the web UI) live under a prefix the Sidekiq server process never polls. More fundamentally, don't use namespaces at all, as I wrote in my blog last year.
The redis-namespace gem allows you to share a Redis database among several applications by prefixing every key with a namespace, but it's a terrible hack that no one should use. Redis already has a native solution if you want to share a Redis instance: databases. The default database is 0. Here's how to point Sidekiq to use database 1 instead:
https://www.mikeperham.com/2017/04/10/migrating-from-redis-namespace/
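A minimal sketch of that change for this app's initializer, keeping the local Redis URL from the question and dropping the namespace option in favor of database 1 (or keep the /6 database already in use; the point is to drop the namespace):
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://127.0.0.1:6379/1' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://127.0.0.1:6379/1' }
end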

Dockerized Selenium with rails tests

I'm trying to run RSpec tests with Selenium Chrome in Docker but have hit dozens of errors. I finally got Capybara connected to the remote browser, but now I get these errors:
Got 0 failures and 2 other errors:
1.1) Failure/Error: visit new_user_session_path
Selenium::WebDriver::Error::WebDriverError:
unexpected response, code=404, content-type="text/html"
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Action Controller: Exception caught</title>
....................
Failure/Error: raise Error::WebDriverError, msg
Selenium::WebDriver::Error::WebDriverError:
unexpected response, code=404, content-type="text/html"
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Action Controller: Exception caught</title>
<style>
body {
background-color: #FAFAFA;
...............
So here is my rails_helper.rb. It's really messy because I tried dozens of times with different configs:
require 'simplecov'
SimpleCov.start
ENV['RAILS_ENV'] ||= 'test'
require File.expand_path('../../config/environment', __FILE__)
# Prevent database truncation if the environment is production
abort("The Rails environment is running in production mode!") if Rails.env.production?
require 'spec_helper'
require 'rspec/rails'
require 'turnip/capybara'
require "selenium/webdriver"
require 'capybara-screenshot/rspec'
Dir[Rails.root.join('spec/support/**/*.rb')].each { |f| require f }
Shoulda::Matchers.configure do |config|
  config.integrate do |with|
    with.test_framework :rspec
    with.library :rails
  end
end
# Checks for pending migration and applies them before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!
Capybara::Screenshot.register_driver(:headless_chrome) do |driver, path|
  driver.browser.manage.window.resize_to(1600, 1200)
  driver.browser.save_screenshot("tmp/capybara/chrom_#{Time.now}.png")
end
url = 'http://test.prs.com:3001/'
Capybara.javascript_driver = :remote_browser
Capybara.register_driver :headless_chrome do |app|
  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    chromeOqptions: { args: %w(headless disable-gpu no-sandbox) }
  )
end
capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
  chromeOqptions: { args: %w(headless disable-gpu no-sandbox) }
)
Capybara.default_driver = :remote_browser
Capybara.register_driver :remote_browser do |app|
  Capybara::Selenium::Driver.new(app, :browser => :remote, url: url,
    desired_capabilities: capabilities)
end
# Capybara::Selenium::Driver.new app,
# browser: :chrome,
# desired_capabilities: capabilities
# end
Capybara.app_host = "http://#{ENV['APP_HOST']}:#{3001}"
Capybara.run_server = false
Capybara.configure do |config|
  config.always_include_port = true
end
Chromedriver.set_version '2.32'
# Capybara.javascript_driver = :headless_chrome
# Capybara.server_host= '0.0.0.0'
# Capybara.default_host = "http://test.prs.com"
# Capybara.app_host = "#{Capybara.default_host}:#{Capybara.server_port}"
RSpec.configure do |config|
  config.include FactoryGirl::Syntax::Methods
  config.include RequestSpecHelper
  # Remove this line if you're not using ActiveRecord or ActiveRecord fixtures
  config.fixture_path = "#{::Rails.root}/spec/fixtures"
  config.before(:each) do
    DatabaseCleaner.strategy = :truncation
    DatabaseCleaner.clean
  end
  config.before(:all, type: :request) do
    host! 'test.prs.com'
  end
  config.use_transactional_fixtures = true
  config.infer_spec_type_from_file_location!
  config.filter_rails_from_backtrace!
end
And here is my docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3001 -b '0.0.0.0'
    volumes:
      - .:/prs
    ports: ['3000:3000', '3001:3001']
    # - "3001:3001"
    depends_on:
      - db
      - selenium
    extra_hosts:
      - "test.prs.com:127.0.0.1"
    environment:
      - APP_HOST=test.prs.com
    links:
      - selenium
  selenium:
    image: selenium/standalone-chrome-debug:3.6.0-bromine
    # Debug version enables VNC ability
    ports: ['4444:4444', '5900:5900']
    # Bind selenium port & VNC port
    volumes:
      - /dev/shm:/dev/shm
    shm_size: 1024m
    privileged: true
    container_name: selenium
I'm new to all this so any help will be appreciated.
From the comments you have clarified that you are trying to run the tests in the web docker instance while using the Selenium-driven browser in the selenium docker instance. Additionally, since your tests work locally, I assume you are using Rails 5.1+, so transactional testing for feature tests will work. Based on those parameters, there are a few things needed to make everything work properly.
Capybara needs to start its own copy of the app to run the tests against. This is needed for transactional testing to work and for request completion detection. You enable that with
Capybara.run_server = true # You currently have this set to false for some reason
Capybara needs to run its copy of the app on an interface which can be reached from the selenium docker instance. The easiest way to do that is to bind to 0.0.0.0:
Capybara.server_host = '0.0.0.0'
Capybara.server_port = 3001 # I'm assuming this is what you want; change to another port and make sure it's reachable from the selenium instance if desired
The driver Capybara is using needs to be configured to use the selenium instance:
Capybara.register_driver :remote_browser do |app|
  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    chromeOptions: { args: %w(headless disable-gpu no-sandbox) }
  )
  Capybara::Selenium::Driver.new(app,
    :browser => :remote,
    url: "http://selenium:4444",
    desired_capabilities: capabilities
  )
end
Capybara.javascript_driver = :remote_browser
# Capybara.default_driver = :remote_browser # only needed if you want all tests to use selenium rather than just JS tagged tests.
Configure Capybara to use the correct host when visiting relative URLs
Capybara.app_host = "http://web:3001" # a URL that represents the `web` docker instance from the perspective of the `selenium` docker instance
Notes: If you were expecting your tests to run on port 3001 as I guessed, then you'll want to stop the docker instance from launching rails s on that port, since you want the tests run against the app instance that Capybara itself launches (i.e. remove or comment out command: bundle exec rails s -p 3001 -b '0.0.0.0' in docker-compose.yml). If the instance you currently have running on 3001 is for something else, then you'll need a different port for the tests.
Additionally, if you're not running Rails 5.1+ then you'll need to set config.use_transactional_fixtures = false and fully configure database_cleaner (https://github.com/DatabaseCleaner/database_cleaner#rspec-with-capybara-example); a rough sketch follows below. If you are using Rails 5.1+ then you can probably remove all the database_cleaner stuff.
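For the pre-5.1 case, this is roughly what that database_cleaner setup looks like, following the pattern from the linked README (a sketch only; adjust strategies to your suite):
RSpec.configure do |config|
  config.use_transactional_fixtures = false

  config.before(:suite) do
    # Start every run from a clean database.
    DatabaseCleaner.clean_with(:truncation)
  end

  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
  end

  # Feature specs drive the app from a separate server thread/process, which
  # cannot see an open transaction, so fall back to truncation for those.
  config.before(:each, type: :feature) do
    DatabaseCleaner.strategy = :truncation unless Capybara.current_driver == :rack_test
  end

  config.before(:each) { DatabaseCleaner.start }
  config.append_after(:each) { DatabaseCleaner.clean }
end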
My problem was completely unrelated to Docker, Selenium, Capybara, Chromedriver or any of that. They were all red herrings, because in my container only the feature specs were failing.
It turns out they were all failing because feature specs are the only part of the app that looks at IP addresses.
I am using the ip_anonymizer gem and had failed to set IP_HASH_SECRET for the container. Hopefully anyone else using this gem with Capybara and CI finds this useful.

Default Sidekiq Redis configuration in Rails app

I'm trying to understand the Redis and Sidekiq configuration in a Passenger + Rails app and have run into a gap in my understanding. I start the Redis server independently of my Rails app, while Sidekiq is a gem in my Rails app. I start it like this (no sidekiq.yml file needed for me):
bundle exec sidekiq
Following is my sidekiq.rb initializer:
require 'sidekiq'
require 'sidekiq-status'

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add Sidekiq::Status::ClientMiddleware
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add Sidekiq::Status::ServerMiddleware, expiration: 30.minutes
  end
  config.client_middleware do |chain|
    chain.add Sidekiq::Status::ClientMiddleware
  end
end
I went through some of the library classes, but to no avail.
I want to understand where Sidekiq configures its Redis server details. It defaults to localhost:6379, but I am not quite sure how.
Also, if I wish to use Memcached in the future, how can I change that?
From the Sidekiq docs:
By default, Sidekiq tries to connect to Redis at localhost:6379
https://github.com/mperham/sidekiq/wiki/Using-Redis
You can change the Redis location (host, port, database) in the initializer:
Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://redis.example.com:7372/12' }
end
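The Passenger/Rails process that enqueues jobs uses the client connection, so if you change the URL you usually want the same setting in configure_client as well (a sketch mirroring the server block above):
Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://redis.example.com:7372/12' }
end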
From the looks of it, Sidekiq works only with Redis, so switching to Memcached as its backing store is not an option:
Sidekiq uses Redis to store all of its job and operational data.
Here is what I did from the other answer:
# frozen_string_literal: true
Sidekiq.configure_server do |config|
  config.redis = {
    url: ENV.fetch("SIDEKIQ_REDIS_URL", "redis://localhost:6379/1")
  }
end

How to set up active_job and resque-scheduler?

I'm trying to create background jobs for email notifications and a scraper.
I use resque-scheduler (4.0.0), resque (1.25.2) and rails 4.2.1.
My config.ru file:
# This file is used by Rack-based servers to start the application.
require ::File.expand_path('../config/environment', __FILE__)
run Rails.application
require 'resque/server'
run Rack::URLMap.new "/" => AppName::Application, "/resque" => Resque::Server.new
My /lib/tasks/resque.rake:
require 'resque/tasks'
require 'resque/scheduler/tasks'

namespace :resque do
  task :setup do
    require 'resque'
    require 'resque-scheduler'

    Resque.schedule = YAML.load_file("#{Rails.root}/config/resque_schedule.yml")
    Dir["#{Rails.root}/app/jobs/*.rb"].each { |file| require file }
  end
end
My /config/resque_schedule.yml:
CheckFsUpdatesJob:
  queue: fs_updates
  every:
    - '1h'
    - :first_in: '10s'
  class: CheckFsUpdatesJob
  args:
  description: scrape page
My /config/initializers/active_job.rb:
ActiveJob::Base.queue_adapter = :resque
My /config/initializers/resque.rb:
# config/initializers/resque.rb
require 'resque-scheduler'
require 'resque/scheduler/server'

uri = URI.parse("redis://localhost:6379/")
Resque.redis = Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)
Resque.after_fork = Proc.new { ActiveRecord::Base.establish_connection }

Dir["#{Rails.root}/app/jobs/*.rb"].each { |file| require file }
Resque.schedule = YAML.load_file(Rails.root.join('config', 'resque_schedule.yml'))

Resque::Server.use(Rack::Auth::Basic) do |user, password|
  user == 'admin' && password == 'admin'
end
My first job, for email notifications:
class EmailNotificationJob < ActiveJob::Base
  queue_as :email_notifications

  def perform(record_id, email)
    NotificationMailer.new_record_appears(record_id, email).deliver_now
  end
end
My second job for scheduled runs:
class CheckFsUpdatesJob < ActiveJob::Base
queue_as :fs_updates
def perform()
FsStrategy.new.check_for_updates
end
end
So I have two jobs:
1. email notifications - should send an email when a new record appears in the DB
2. scrape a page - should run every hour
How I run it:
redis-server
rake environment resque:work QUEUE=fs_updates
rake environment resque:work QUEUE=email_notifications
rake environment resque:scheduler
rails s
After running these commands I see two workers and two queues in the Resque dashboard, as expected.
But!
After clicking the 'Queue now' button on the Resque Schedule tab, I see that the task is created and written to the "fs_updates" queue. But it never runs, and after a few seconds it disappears.
When I enqueue the email job from the Rails console, it does not work at all.
Please, help me to fix my configurations.
Thanks kindly!
As I understand it, the Rails and Active Job developers are not responsible for Resque plugins. Maybe this will be fixed in newer gem versions, but for now Active Job does not play well with resque-scheduler.
Currently I use the active_scheduler gem to work around the problem; a sketch of the wiring is below.
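A rough sketch of that workaround inside the resque:setup task, assuming the ActiveScheduler::ResqueWrapper API from the gem's README (it wraps the schedule so resque-scheduler enqueues jobs in a form Active Job understands):
require 'active_scheduler'

schedule = YAML.load_file("#{Rails.root}/config/resque_schedule.yml")
# Wrap each scheduled entry so the queued payload goes through Active Job.
Resque.schedule = ActiveScheduler::ResqueWrapper.wrap(schedule)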
I had the same issue trying to configure Sucker Punch on Rails 4.2.1. In the end I moved my custom initializer logic into application.rb; not great, but it got me up and running.
I'm not sure if there is an issue with the initializers in this release. Try moving your code from active_job.rb and resque.rb into application.rb or the appropriate environment file (see the sketch below). Obviously this isn't a long-term solution, but it will at least help you identify whether it's an initializer issue or a Resque config problem.
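A minimal sketch of that experiment, assuming the AppName::Application class from config.ru (the point is simply to move the adapter setting and initializer code into the application class to rule out initializer-ordering problems):
# config/application.rb
module AppName
  class Application < Rails::Application
    # Adapter setting normally living in config/initializers/active_job.rb.
    config.active_job.queue_adapter = :resque
    # ...temporarily paste the contents of config/initializers/resque.rb here...
  end
end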
