I'm working on a site someone else made. There is a production version.
I'm trying to send emails to users whose properties haven't been updated in 30 days.
Everything works until I try to use deliver_later. The reason I'm using deliver_later is that deliver_now runs into the problem of sending too many emails per second. I'm currently using Mailtrap for testing, but I assume I would hit the same kind of limit in production.
So I opted to wait 1 second for each email:
@testproperties = Property.has_not_updated.includes(:user)
@testproperties.each do |property|
  UserMailer.with(property: property, user: property.user).check_listing.deliver_later(wait: 1.second)
end
This results in IO::EINPROGRESSWaitWritable Operation now in progress - connect(2) would block
And nothing sends.
I'm not sure how to solve this issue.
Edit:
I can see on the production site that I can visit the route /sidekiq. The routes file has this block:
authenticate :user, lambda { |u| u.admin? } do
  mount Sidekiq::Web => '/sidekiq'
end
I can view the web interface and see all the jobs; it's all working there. But I need to access the development version running on localhost:3000.
Trying to access this locally still results in:
Operation now in progress - connect(2) would block
# Socket#connect
def connect_nonblock(addr, exception: true)
  __connect_nonblock(addr, exception)
end
Sidekiq.rb:
require 'sidekiq'

unless Rails.env.test?
  host = 'localhost'
  port = '6379'
  namespace = 'sitename'

  Sidekiq.configure_server do |config|
    config.redis = { url: "redis://#{host}:#{port}", namespace: namespace }

    schedule_file = "config/schedule.yml"
    if File.exists?(schedule_file)
      Sidekiq::Cron::Job.load_from_hash YAML.load_file(schedule_file)
    end

    config.server_middleware do |chain|
      chain.add Sidekiq::Status::ServerMiddleware, expiration: 30.minutes
    end
    config.client_middleware do |chain|
      chain.add Sidekiq::Status::ClientMiddleware, expiration: 30.minutes
    end
  end

  Sidekiq.configure_client do |config|
    config.redis = { url: "redis://#{host}:#{port}", namespace: namespace }
    config.client_middleware do |chain|
      chain.add Sidekiq::Status::ClientMiddleware, expiration: 30.minutes
    end
  end
end
for cable.yml:
development:
  adapter: async
  url: redis://localhost:6379/1
  channel_prefix: sitename_dev

test:
  adapter: async

production:
  adapter: redis
  url: redis://localhost:6379/1
  channel_prefix: sitename_production
The production server is running Ubuntu and they already installed redis-server.
I had not installed that locally. (I'm using Ubuntu through Windows WSL)
sudo apt install redis-server
I can now access the web interface.
Make sure that Redis is started:
sudo service redis-server restart
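With Redis installed and running, the deliver_later jobs are enqueued and Sidekiq processes them. As a rough sketch (reusing the has_not_updated scope and check_listing mailer from above), you can confirm the connection from a Rails console and stagger the deliveries so roughly one email goes out per second, instead of delaying the whole batch by the same single second:

# Quick sanity check that Sidekiq can reach Redis (raises if it cannot):
Sidekiq.redis { |conn| conn.ping } # => "PONG"

# Spread the sends out: each job waits index seconds, so they dequeue
# roughly one per second rather than all at once after 1 second.
Property.has_not_updated.includes(:user).each_with_index do |property, index|
  UserMailer.with(property: property, user: property.user)
            .check_listing
            .deliver_later(wait: index.seconds)
end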
Consider a site where Rails is used only for API. No server-side rendering.
With server-side rendering it's more or less clear: Capybara starts Puma, and the tests can then request pages from Puma.
But with no server-side rendering, there is no Puma to ask for pages. How do I run system tests in that case?
Explain yourself when downvoting, please.
Have a look at http://ruby-hyperloop.org. You can drive your client test suite from rspec, and easily integrate with rails
While server-side rendering may be the more common approach these days, I decided to take an alternative route.
Add the following gems to the Gemfile:
gem 'httparty', '~> 0.16.2'
gem 'childprocess', '~> 0.7.0'
Move the following lines from config/environments/production.rb to config/application.rb to make RAILS_LOG_TO_STDOUT available in the test environment.
if ENV['RAILS_LOG_TO_STDOUT'].present?
config.logger = Logger.new(STDOUT)
end
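For context, that block ends up inside the application class in config/application.rb; a sketch, where the YourApp module name is a placeholder for whatever your application is called:

# config/application.rb (sketch; "YourApp" is a placeholder module name)
module YourApp
  class Application < Rails::Application
    # Moved from config/environments/production.rb so that the test
    # environment can also log to STDOUT when RAILS_LOG_TO_STDOUT is set.
    if ENV['RAILS_LOG_TO_STDOUT'].present?
      config.logger = Logger.new(STDOUT)
    end
  end
end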
Regarding webpack, make sure publicPath is set to http://localhost:7777/, and UglifyJsPlugin is not used in the test environment.
And add these two files:
test/application_system_test_case.rb:
# frozen_string_literal: true
require 'uri'
require 'test_helper'
require 'front-end-server'
FRONT_END = ENV.fetch('FRONT_END', 'separate_process')
FRONT_END_PORT = 7777
Capybara.server_port = 7778
Capybara.run_server = ENV.fetch('BACK_END', 'separate_process') == 'separate_thread'
require 'action_dispatch/system_test_case' # force registering and setting server
Capybara.register_server :rails_puma do |app, port, host|
  Rack::Handler::Puma.run(app, Port: port, Threads: "0:1",
                          Verbose: ENV.key?('BACK_END_LOG'))
end
Capybara.server = :rails_puma
DatabaseCleaner.strategy = :truncation
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400]

  self.use_transactional_tests = false

  def setup
    DatabaseCleaner.start
  end

  def teardown
    DatabaseCleaner.clean
  end

  def uri(path)
    URI::HTTP.build(host: 'localhost', port: FRONT_END_PORT, path: path)
  end
end
unless ENV.key?('NO_WEBPACK')
  system(
    {'NODE_ENV' => 'test'},
    './node_modules/.bin/webpack', '--config', 'config/webpack/test.js', '--hide-modules') \
    or abort
end
if FRONT_END == 'separate_process'
  front_srv = ChildProcess.build(
    'bundle', 'exec', 'test/front-end-server.rb',
    '-f', FRONT_END_PORT.to_s,
    '-b', Capybara.server_port.to_s
  )
  if ENV.key?('FRONT_END_LOG')
    front_srv.io.inherit!
  end
  front_srv.start
  Minitest.after_run {
    front_srv.stop
  }
else
  Thread.new do
    FrontEndServer.new({
      Port: FRONT_END_PORT,
      back_end_port: Capybara.server_port,
      Logger: Rails.logger,
    }).start
  end
end
unless Capybara.run_server
  back_srv = ChildProcess.build(
    'bin/rails', 'server',
    '-P', 'tmp/pids/server-test.pid', # to not conflict with dev instance
    '-p', Capybara.server_port.to_s
  )
  back_srv.start
  # wait for server to start
  begin
    socket = TCPSocket.new 'localhost', Capybara.server_port
  rescue Errno::ECONNREFUSED
    retry
  end
  socket.close
  Minitest.after_run {
    back_srv.stop
  }
end
test/front-end-server.rb:
#!/usr/bin/env ruby
require 'webrick'
require 'httparty'
require 'uri'
class FrontEndServer < WEBrick::HTTPServer
  class FallbackFileHandler < WEBrick::HTTPServlet::FileHandler
    def service(req, res)
      super
    rescue WEBrick::HTTPStatus::NotFound
      req.instance_variable_set('@path_info', '/index.html')
      super
    end
  end

  class ProxyHandler < WEBrick::HTTPServlet::AbstractServlet
    def do_GET(req, res)
      req.header.each do |k, v|
        @logger.debug("-> #{k}: #{v}")
      end
      @logger.debug("-> body: #{req.body}")
      uri2 = req.request_uri.dup
      uri2.port = @config[:back_end_port]
      res2 = HTTParty.send(req.request_method.downcase, uri2, {
        headers: Hash[req.header.map { |k, v| [k, v.join(', ')] }],
        body: req.body,
      })
      res.content_type = res2.headers['content-type']
      res.body = res2.body
      res2.headers.each do |k, v|
        @logger.debug("<- #{k}: #{v}")
      end
      if res.body
        body = res.body.length < 100 ? res.body : res.body[0, 97] + '...'
        @logger.debug("<- body: #{body}")
      end
    end

    alias do_POST do_GET
    alias do_PATCH do_GET
    alias do_PUT do_GET
    alias do_DELETE do_GET
    alias do_MOVE do_GET
    alias do_COPY do_GET
    alias do_HEAD do_GET
    alias do_OPTIONS do_GET
    alias do_MKCOL do_GET
  end
  def initialize(config = {}, default = WEBrick::Config::HTTP)
    config = {AccessLog: config[:Logger] ? [
      [config[:Logger], WEBrick::AccessLog::COMMON_LOG_FORMAT],
    ] : [
      [$stderr, WEBrick::AccessLog::COMMON_LOG_FORMAT],
    ]}.update(config)
    super
    if ENV.key?('FRONT_END_LOG_LEVEL')
      logger.level = WEBrick::BasicLog.const_get(ENV['FRONT_END_LOG_LEVEL'])
    end
    mount('/', FallbackFileHandler, 'public')
    mount('/api', ProxyHandler)
    mount('/uploads', ProxyHandler)
  end
end
if __FILE__ == $0
  require 'optparse'

  options = {}
  OptionParser.new do |opt|
    opt.on('-f', '--front-end-port PORT', OptionParser::DecimalInteger) { |o|
      options[:front_end_port] = o
    }
    opt.on('-b', '--back-end-port PORT', OptionParser::DecimalInteger) { |o|
      options[:back_end_port] = o
    }
  end.parse!

  server = FrontEndServer.new({
    Port: options[:front_end_port],
    back_end_port: options[:back_end_port],
  })
  trap('INT') { server.shutdown }
  trap('TERM') { server.shutdown }
  server.start
end
Tested with rails-5.1.1, webpack-2.4.1.
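An individual system test then looks like a plain Capybara test, except that it visits the front-end server through the uri helper defined above instead of Capybara's default app host. A hypothetical example (the selector and text are placeholders, not taken from a real app):

# test/system/application_test.rb (hypothetical example)
require 'application_system_test_case'

class ApplicationTest < ApplicationSystemTestCase
  test 'front page renders the client-side app' do
    visit uri('/')
    # Placeholder assertion -- check whatever your client-side app renders.
    assert_selector 'h1', text: 'Welcome'
  end
end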
To run the tests you can use the following commands:
$ xvfb-run TESTOPTS=-h bin/rails test:system
$ xvfb-run bin/rails test -h test/system/application_test.rb:6
$ xvfb-run TEST=test/system/application_test.rb TESTOPTS=-h bin/rake test
You can simplify running tests by adding package scripts:
"scripts": {
"test": "xvfb-run bin/rails test:system",
"test1": "xvfb-run bin/rails test"
}
Then:
$ yarn test
$ yarn test1 test/system/application_test.rb:6
Or so I'd like to say. But unfortunately yarn has an issue where it prepends extra paths to the PATH variable, in particular /usr/bin, which leads to the system Ruby being executed, with all sorts of outcomes (Ruby not finding gems).
To work around it you may use the following script:
#!/usr/bin/env bash
set -eu
# https://github.com/yarnpkg/yarn/issues/5935
s_path=$(printf "%s" "$PATH" | tr : \\n)
_IFS=$IFS
IFS=$'\n'
a_path=($s_path)
IFS=$_IFS
usr_bin=$(dirname -- "$(which node)")
n_usr_bin=$(egrep "^$usr_bin$" <(printf "%s" "$s_path") | wc -l)
r=()
for (( i = 0; i < ${#a_path[@]}; i++ )); do
  if [ "${a_path[$i]}" = "$usr_bin" ] && (( n_usr_bin > 1 )); then
    (( n_usr_bin-- ))
  else
    r+=("${a_path[$i]}")
  fi
done
PATH=$(
  for p in ${r[@]+"${r[@]}"}; do
    printf "%s\n" "$p"
  done | paste -sd:
)
"$@"
Then the package scripts are to be read as follows:
"scripts": {
"test": "./fix-path.sh xvfb-run bin/rails test:system",
"test1": "./fix-path.sh xvfb-run bin/rails test"
}
By default, Rails starts Puma in a separate thread to handle API requests while running tests. With this setup it runs in a separate process by default. Thanks to that you can drop a byebug line anywhere in your test and the site in the browser will remain functional (XHR requests won't get stuck). You can still make it run in a separate thread if you prefer by setting BACK_END=separate_thread.
Additionally, another process (or thread, depending on the value of the FRONT_END variable) is started to handle requests for static files and to proxy API requests to the back end. WEBrick is used for that.
To see rails's output, run with RAILS_LOG_TO_STDOUT=1, or see log/test.log. To prevent rails from colorizing the log, add config.colorize_logging = false (which will strip colors in the console as well) to config/environments/test.rb, or use less -R log/test.log. puma's output can be seen by running with BACK_END_LOG=1.
To see webrick's output, run with FRONT_END_LOG=1 (separate process), RAILS_LOG_TO_STDOUT=1 (separate thread), or see log/test.log (separate thread). To make webrick produce more info, set FRONT_END_LOG_LEVEL to DEBUG.
Also, every time you run the tests, webpack compiles the bundle. You can skip that by setting NO_WEBPACK=1 (which the test case file above checks for).
Finally, to see Selenium requests:
Selenium::WebDriver.logger.level = :debug # full logging
Selenium::WebDriver.logger.level = :warn # back to normal
Selenium::WebDriver.logger.output = 'selenium.log' # log to file
I'm testing out the backup gem
http://backup.github.io/backup/v4/utilities/
I understand that I have to create a db_backup.rb with the configuration, for example:
Model.new(:my_backup, 'My Backup') do
  database MySQL do |db|
    # To dump all databases, set `db.name = :all` (or leave blank)
    db.name     = "my_database_name"
    db.username = "my_username"
    db.password = "my_password"
    db.host     = "localhost"
    db.port     = 3306
  end
end
However I'm not able to find out how to get those details from the Rails database.yml. I've tried something like this:
env = defined?(RAILS_ENV) ? RAILS_ENV : 'development'
@settings = YAML.load(File.read(File.join("config", "database.yml")))
But I guess there should be a better way.
I would do something like this:
env = defined?(RAILS_ENV) ? RAILS_ENV : 'development'
config = YAML.load_file(File.join('config', 'database.yml'))[env]

Model.new(:my_backup, 'My Backup') do
  database MySQL do |db|
    config.each_pair do |key, value|
      db.public_send("#{key}=", value)
    end
    # ...
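One caveat (not part of the original answer): the keys in database.yml don't map one-to-one onto the Backup DSL. database.yml uses database: while the Backup MySQL DSL uses db.name, and keys such as adapter, encoding or pool have no setter on the DSL object at all, so blindly calling public_send for every pair may raise NoMethodError. A hedged sketch that renames and filters the keys first:

env    = defined?(RAILS_ENV) ? RAILS_ENV : 'development'
config = YAML.load_file(File.join('config', 'database.yml'))[env]

# Rename database.yml keys to the Backup DSL's setters and drop keys the
# DSL has no setter for (adapter, encoding, pool, ...).
key_map = { 'database' => 'name' }
allowed = %w[name username password host port socket]

Model.new(:my_backup, 'My Backup') do
  database MySQL do |db|
    config.each_pair do |key, value|
      key = key_map.fetch(key, key)
      db.public_send("#{key}=", value) if allowed.include?(key)
    end
  end
end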
Use ActiveRecord's own configuration handling:
require 'active_record'
require 'yaml'
Model.new(:my_backup, 'My Backup') do
  database MySQL do |db|
    config = {
      # these are the default values
      host: 'localhost',
      port: 3306,
    }.merge(load_configuration(ENV['RAILS_ENV'] || 'development'))

    config.each_pair do |key, value|
      db.public_send("#{key}=", value)
    end
  end

  # this loads the configuration from file and memoizes it
  def load_configuration(env)
    @file_config ||= YAML.load(File.read(File.join("config", "database.yml")))
    @configurations ||= ActiveRecord::ConnectionHandling::MergeAndResolveDefaultUrlConfig.new(@file_config).resolve
    @configurations[env]
  end
end
The key advantage here is that it merges in the values from ENV['DATABASE_URL'], which matters because you should avoid putting database credentials in config/database.yml.
A good habit is to specify only the connection adapter and the bare essentials in database.yml, and to use ENV['DATABASE_URL'] for usernames, passwords and everything else.
Env vars are easy to change between deploys without changing any
code; unlike config files, there is little chance of them being
checked into the code repo accidentally; and unlike custom config
files, or other config mechanisms such as Java System Properties, they
are a language- and OS-agnostic standard.
- https://12factor.net/config
See:
Configuring Rails Applications
I have absolutely no idea how to run my Resque scheduler. When I enqueue a single task and run it manually it works fine, but when I try to run the scheduler with rake resque:scheduler --trace, I get ArgumentError: unsupported signal SIGUSR1. Below are the files needed for the Resque scheduler:
config/initializers/resque.rb
require 'resque/failure/multiple'
require 'resque/failure/redis'
Resque::Failure::Multiple.classes = [Resque::Failure::Redis]
Resque::Failure.backend = Resque::Failure::Multiple
Dir[File.join(Rails.root, 'app', 'jobs', '*.rb')].each { |file| require file }
config = YAML.load(File.open("#{Rails.root}/config/resque.yml"))[Rails.env]
Resque.redis = Redis.new(host: config['host'], port: config['port'], db: config['db'])
config/resque.yml
defaults: &defaults
  host: localhost
  port: 6379
  db: 6

development:
  <<: *defaults

test:
  <<: *defaults

staging:
  <<: *defaults

production:
  <<: *defaults
lib/tasks/resque.rake
require 'resque/tasks'
require 'resque/scheduler/tasks'
require 'yaml'

task 'resque:setup' => :environment

namespace :resque do
  task :setup_schedule => :setup do
    require 'resque-scheduler'

    # If you want to be able to dynamically change the schedule,
    # uncomment this line. A dynamic schedule can be updated via the
    # Resque::Scheduler.set_schedule (and remove_schedule) methods.
    # When dynamic is set to true, the scheduler process looks for
    # schedule changes and applies them on the fly.
    # Note: This feature is only available in >=2.0.0.
    # Resque::Scheduler.dynamic = true

    # The schedule doesn't need to be stored in a YAML, it just needs to
    # be a hash. YAML is usually the easiest.
    Resque.schedule = YAML.load_file("#{Rails.root}/config/resque_schedule.yml")
  end

  task :scheduler => :setup_schedule
end
config/resque_schedule.yml
run_my_job:
  cron: '30 6 * * 1'
  class: 'MyJob'
  queue: myjob
  args:
  description: "Runs MyJob"
Here's the error message for the rake resque:scheduler command (screenshot; it shows the ArgumentError: unsupported signal SIGUSR1 mentioned above).
I just found out that Windows doesn't support the SIGUSR1 signal (there's a list of the signals Windows does support). The solution is to run this on another OS such as Ubuntu, where it runs with no problems.
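As a quick way to confirm this (a sketch, not part of the original answer), Ruby exposes the signals the current platform supports via Signal.list; the scheduler traps USR1 among others, which is where the ArgumentError comes from on platforms that lack it:

# Prints the signals Ruby knows about on this platform.
# On Windows the list lacks USR1/USR2, which is why trapping them
# raises ArgumentError: unsupported signal SIGUSR1.
puts Signal.list.keys.sort.join(', ')
Signal.list.key?('USR1') # => true on Linux/macOS, false on Windows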
I have a Rails 3.2.17 app deployed in a staging environment with Unicorn 4.6.3, monitored by Bluepill 0.0.66, using Mongoid 3.1.5.
When I deploy to the staging environment everything works fine, including ActiveRecord, except Mongoid queries, which fail with the following error:
Error during failsafe response:
Problem:
No configuration could be found for a session named 'default'.
Summary:
When attempting to create the new session, Mongoid could not find a session configuration for the name: 'default'. This is necessary in order to know the host, port, and options needed to connect.
Resolution:
Double check your mongoid.yml to make sure under the sessions key that a configuration exists for 'default'. If you have set the configuration programatically, ensure that 'default' exists in the configuration hash.
mongoid.yml file on the deployed server:
staging:
  sessions:
    default:
      database: mydb
      username: user
      password: password
      hosts:
        - localhost:27017
      options:
  options:
myapp.pill file on the deployed server:
Bluepill.application('myapp', log_file: '/var/log/bluepill/myapp.log') do |app|
  app.process('myapp-app') do |process|
    process.pid_file = '/home/user/myapp/current/tmp/pids/unicorn.pid'
    process.working_dir = '/home/user/myapp/current'
    process.start_command = '/home/user/.gem/ruby/1.9.1/bin/bundle exec unicorn -c config/unicorn.rb -D -E staging'
    process.stop_command = 'kill -QUIT {{PID}}'
    process.restart_command = 'kill -USR2 {{PID}}'
    process.uid = 'user'
    process.gid = 'user'
    process.start_grace_time 30.seconds
    process.stop_grace_time 30.seconds
    process.restart_grace_time 60.seconds

    process.monitor_children do |child_process|
      child_process.stop_command 'kill -QUIT {{PID}}'
      child_process.checks(:mem_usage,
        :every => 30.seconds,
        :below => 1024.megabytes,
        :times => [3, 4]
      )
      child_process.checks(:cpu_usage,
        :every => 30.seconds,
        :below => 90,
        :times => [3, 4]
      )
    end
  end
end
I suspect that Mongoid is not receiving the RAILS_ENV, but I'm not sure; I've checked the indentation in mongoid.yml. There may be another reason that I haven't found.
In config/application.rb add:
require 'mongoid'
Mongoid.load!(File.expand_path('mongoid.yml', './config'))
It can also be done via an initializer.
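Since the question suspects the environment isn't reaching Mongoid, one variant (a sketch, not part of the original answer) is to pass the environment explicitly when loading the file, for example from an initializer:

# config/initializers/mongoid.rb (sketch)
# Mongoid.load! takes an optional second argument naming the environment
# whose section of mongoid.yml should be used; passing Rails.env makes
# the 'staging' section explicit regardless of what Mongoid would infer.
require 'mongoid'
Mongoid.load!(Rails.root.join('config', 'mongoid.yml'), Rails.env)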