Rails 4: Delayed job keeps a cached value for the email "from" header - ruby-on-rails

I am developing a mailing web application in Ruby on Rails and I have run into an issue with the delayed_job gem:
In my application's settings, I give the customer the ability to update the email address that mailings are sent from. But I discovered that delayed_job seems to keep something like a cache and does not use the updated email address for the "from" header.
When I run the delayed_job restart task through Capistrano manually it works, so I tried adding a callback in my model to handle restarting delayed_job, but without any success.
Capistrano command:
cap <my_env> delayed_job:restart # this works, but it's a manual command, so it's not usable in my case
What I tried is to dynamically restart delayed_job from the model:
class Setting < ActiveRecord::Base
  after_save :restart_delayed_job

  def restart_delayed_job
    if email_changed?
      system "RAILS_ENV=#{Rails.env} do bundle exec bin/delayed_job -n 1 restart"
    end
  end
end
My mailer class:
class MyMailer < ApplicationMailer
  default from: Setting.first.email # After updating the email value in Setting, the old value is still used.
  # more code skipped
end
Does anyone know how I can restart delayed_job from Rails?
Is there a way to do it purely in Ruby, without writing a shell script?
To help me understand better: are there several instances of delayed_job (one per website on the server) or one for all websites?
Thanks for your help!
My project:
- Rails 4.2.5
- Ruby 2.2.2
- ActiveAdmin 1.0.0 pre2
- Delayed job 4.1.1
- Capistrano 3.4.0

The solution was to move the from header into the mail call.
The reason is that default from: is evaluated only once, when the application starts, so it cannot be changed this way.
mail(from: email, subject: subject, ...) do
  # skipped code
end
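For context, a minimal sketch of what the mailer ends up looking like (the method and argument names here are illustrative, not from the original app): reading the setting inside the mailer method means every delivery, including those performed later by a delayed_job worker, picks up the current value.
class MyMailer < ApplicationMailer
  # Hypothetical mailer action; the point is that Setting.first.email is read
  # at send time rather than captured once in a `default from:` declaration.
  def notification(recipient, subject)
    mail(from: Setting.first.email, to: recipient, subject: subject)
  end
end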
Here is the answer that helped me understand that: https://github.com/collectiveidea/delayed_job/issues/882

Related

How to use Heya email campaigns with GoodJob instead of Sidekiq

I'm trying to send email campaigns in a rails app with the Heya gem and GoodJob. The example in the Heya readme as well as the Heya example app uses Sidekiq as the Active Job backend.
I'm confused about how to actually send the Heya campaigns with GoodJob.
The docs for Heya show this example of starting Sidekiq: bundle exec sidekiq -q default -q heya
I assume that there is a Job queue somewhere in the gem called "Heya", but I can't find this in the source code. Do I need to create one?
Do I need to create a job that runs the Heya scheduler? While the example app uses Sidekiq, I also don't see any custom jobs in that app.
I have the following setup for GoodJob and it appears to be running fine with good_job start which should run all of the jobs and queues, but I've also tried good_job start --queues=heya,default.
Here is the relevant code:
Procfile.dev
web: bin/rails server -p 3000
css: bin/rails tailwindcss:watch
worker: bundle exec good_job start
config/initializers/heya.rb
Heya.configure do |config|
  config.user_type = "User"
  config.campaigns.priority = [
    "WelcomeCampaign",
  ]
end
app/jobs/application_job.rb
class ApplicationJob < ActiveJob::Base
  # Automatically retry jobs that encountered a deadlock
  # retry_on ActiveRecord::Deadlocked

  # Most jobs are safe to ignore if the underlying records are no longer available
  # discard_on ActiveJob::DeserializationError
end
app/campaigns/application_campaign.rb
class ApplicationCampaign < Heya::Campaigns::Base
  segment :email_subscriber?

  default from: "#{I18n.t('settings.site_name')} <#{I18n.t('settings.newsletter_email')}>"
end
app/campaigns/welcome_campaign.rb
class WelcomeCampaign < ApplicationCampaign
  default wait: 5.minutes,
          layout: "newsletter"

  step :intro, wait: 0.minutes,
       subject: "Welcome to #{I18n.t('settings.site_name')}"
end
I also have a layout and views for the campaign similar to the Heya example app, and I'm using Mailcatcher to see if any email is being sent.
What am I missing to send these emails with Heya and GoodJob?
Note that I'm subscribing the users on signups like this:
class User < ApplicationRecord
  after_create_commit :add_user_to_newsletters

  private

  def add_user_to_newsletters
    WelcomeCampaign.add(self)
    EvergreenCampaign.add(self)
    self.update(email_subscriber: true)
  end
end
And the default segment in campaigns/application_campaign.rb is segment :email_subscriber?
If I run User.last.email_subscriber? in the console to check this it returns true.
I feel like I'm missing something about how Heya connects to Active Job that is not obvious in the Heya docs.
Also, not sure if this is related, but I added this to config/puma.rb
# https://github.com/bensheldon/good_job#execute-jobs-async--in-process
before_fork do
  GoodJob.shutdown
end

on_worker_boot do
  GoodJob.restart
end

on_worker_shutdown do
  GoodJob.shutdown
end

MAIN_PID = Process.pid
at_exit do
  GoodJob.shutdown if Process.pid == MAIN_PID
end

preload_app!
Are you running the heya scheduler periodically? $ rails heya:scheduler
It looks like you could create your own background job to be run using GoodJob Cron, by executing Heya::Campaigns::Scheduler.new.run to run the scheduler and enqueue the emails.
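A minimal sketch of that approach, assuming GoodJob's cron feature is enabled; the job class name and the cron cadence here are made up for illustration:
# config/initializers/good_job.rb (sketch)
Rails.application.configure do
  config.good_job.enable_cron = true
  config.good_job.cron = {
    heya_scheduler: {
      cron: "*/10 * * * *",            # every 10 minutes; pick whatever cadence you want
      class: "HeyaSchedulerJob",       # hypothetical job class, defined below
      description: "Enqueue due Heya campaign emails",
    },
  }
end

# app/jobs/heya_scheduler_job.rb (sketch)
class HeyaSchedulerJob < ApplicationJob
  queue_as :default

  def perform
    # Runs Heya's scheduler, which enqueues the campaign emails via Active Job
    Heya::Campaigns::Scheduler.new.run
  end
end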
Reading the "Running the Scheduler" part of the README explains what's happening:
To start queuing emails, run the scheduler task periodically:
rails heya:scheduler
Heya uses ActiveJob to send emails in the background. Make sure your
ActiveJob backend is configured to process the heya queue. By default, GoodJob runs from all queues "*".
You can change Heya's default queue using the queue option:
# app/campaigns/application_campaign.rb
class ApplicationCampaign < Heya::Campaigns::Base
  default queue: "custom"
end
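One other thing worth double-checking, since it isn't shown in the question: that Active Job is actually pointed at GoodJob. A minimal sketch:
# config/application.rb (or the relevant environment file) — sketch
config.active_job.queue_adapter = :good_job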

How do I see delayed_job jobs in production?

I have a server where I deploy using Capistrano and I use delayed_job to do some mailing, but on my server the jobs do not execute for some reason. The delayed_job process is running (running bin/delayed_job status answers correctly, saying there's a process with some pid), but I don't know whether the process just isn't executing my jobs or whether my jobs aren't even being enqueued. Locally it all works fine, but in production on the server it does not.
I'd like to know if there's a way I can at least check what jobs are there, since I can't do it by accessing the console.
Another gem that works with delayed job is delayed-web which you can find here https://github.com/tatey/delayed-web
you add it to your gemfile
gem 'delayed-web'
then run
rails generate delayed:web:install
this will generate an initializer file delayed_web.rb under config/initializers with the following:
Rails.application.config.to_prepare do
  Delayed::Web::Job.backend = 'active_record'
end
and in config/application.rb this will be added for you as well by the generator
# config/application.rb
config.assets.enabled = true
config.assets.precompile << 'delayed/web/application.css'
In routes.rb it may add a route as well but if you are using devise then maybe you want to restrict access to admin user(s) only as follows:
authenticated :user, -> user { user.admin? } do
  mount Delayed::Web::Engine, at: '/jobs'
end
OK, so I checked my jobs through the database itself: I entered psql as the postgres user and ran some queries against the delayed_jobs table. You can also try running RAILS_ENV=production bin/delayed_job run (for Rails 4; for Rails 3 use "script/" instead of "bin/"), which will show you what the workers are doing while they execute the jobs.
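For reference, the same kind of check can also be done from a Rails console or runner session; a sketch, assuming the Active Record backend for delayed_job:
Delayed::Job.count                               # total enqueued jobs
Delayed::Job.where.not(last_error: nil)
            .pluck(:id, :attempts, :last_error)  # jobs that have failed at least once
Delayed::Job.order(:run_at).limit(5)             # next jobs due to run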
You can also, as Swards commented above, use a gem to have a web interface for delayed_jobs: https://github.com/ejschmitt/delayed_job_web
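If you go with delayed_job_web, mounting it is a one-liner in routes.rb; a sketch based on that gem's README (guard it behind authentication the same way as above):
# config/routes.rb — sketch for delayed_job_web
match "/delayed_job" => DelayedJobWeb, :anchor => false, :via => [:get, :post]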
If you still want to see what my problem with the email sending was, I've opened another question because it got too far away from what this one was about: What port to use sending email with SMTP (mailgun) in rails app on production server (DigitalOcean)?

Start Thinking Sphinx on Rails server startup

I have a chain of nginx + Passenger for my Rails app.
Right now, after each server restart, I need to go to the project folder and run in the terminal
rake ts:start
but how can I automate this?
So that after each server restart Thinking Sphinx is started automatically, without my typing the command in the terminal?
I use Rails 3.2.8 and Ubuntu 12.04.
I can't figure out what else to try; please help me.
How can I do this? Any advice would be appreciated.
What I did to solve the same problem:
In config/application.rb, add:
module Rails
  def self.rake?
    !!@rake
  end

  def self.rake=(value)
    @rake = !!value
  end
end
In Rakefile, add this line:
Rails.rake = true
Finally, in config/initializers/start_thinking_sphinx.rb put:
unless Rails.rake?
begin
# Prope ts connection
ThinkingSphinx.search "test", :populate => true
rescue Mysql2::Error => err
puts ">>> ThinkingSphinx is unavailable. Trying to start .."
MyApp::Application.load_tasks
Rake::Task['ts:start'].invoke
end
end
(Replace MyApp above with your app's name)
Seems to work so far, but if I encounter any issues I'll post back here.
Obviously, the above doesn't take care of monitoring that the server stays up. You might want to do that separately. Or an alternative could be to manage the service with Upstart.
If you are using the excellent whenever gem to manage your crontab, you can just put
every :reboot do
  rake "ts:start"
end
in your schedule.rb and it seems to work great. I just tested on an EC2 instance running Ubuntu 14.04.
There are two options I can think of.
You could look at how Ubuntu manages start-up scripts and add one for this (perhaps in /etc/init?).
You could set up monit or another monitoring tool and have it keep Sphinx running. Monit should boot automatically when your server restarts, and so it should ensure Sphinx (and anything else it's tracking) is running.
The catch with Monit and other such tools is that when you deliberately stop Sphinx (say, to update configuration structure and corresponding index changes), it might start it up again before it's appropriate. So I think you should start with the first of these two options - I just don't know a great deal about the finer points of that approach.
I followed @pat's suggestion and wrote a script to start Thinking Sphinx whenever the server boots up. You can see it as a gist -
https://gist.github.com/declan/4b7cc4fb4926df16f54c
We're using Capistrano for deployment to Ubuntu 14.04, and you may need to modify the path and user name to match your server setup. Otherwise, all you need to do is
- Put this script into /etc/init.d/thinking_sphinx
- Confirm that the script works: calling /etc/init.d/thinking_sphinx start on the command line should start ThinkingSphinx for your app, and /etc/init.d/thinking_sphinx stop should stop it
- Tell Ubuntu to run this script automatically on startup: update-rc.d thinking_sphinx defaults
There's a good post on debian-administration.org called making scripts run at boot time that has more details.

How to log in a Ruby worker script of a Rails app that does not have the environment?

I'm using rufus-scheduler for handling cron jobs for a Rails 3.2.x app. The root worker.rb is being fired off by foreman (actually upstart on this particular server) and therefore when it starts off it does not have the Rails context in which to operate. Obviously when I attempt to call logger or Rails.logger it will fail each time.
I'm using log4r as a replacement for the default Rails Logger, which is quite nice, but I am wondering what the proper solution for this problem would be:
1) Do I need to give the script the Rails context at startup? (It is simply running rake tasks right now, so it ends up getting the Rails environment when the worker script hands off to the rake task, and I fear giving the script the Rails env before running the timed task would be overkill since it takes so long to fire up the env.) Or
2) Should I just set up log4r as one would do in a non-Rails Ruby script that simply reads in the same log4r.yml config that the Rails app is using and therefore participate in the same log structure?
3) Or some other option I'm not thinking of?
Additionally, I would appreciate either an example or the steps that I should consider with the recommended implementation.
For reference, I followed "How to configure Log4r with Rails 3.0.x?" and I think this could be helpful when integrated with the above: "Properly using Log4r in Ruby Application?"
I think this might be what you're looking for.
Use this within the worker itself, and create a custom named log file
require 'log4r'
logger = Log4r::Logger.new('test')
logger.outputters << Log4r::Outputter.stdout
logger.outputters << Log4r::FileOutputter.new('logtest', :filename => 'logtest.log')
logger.info('started script')
## Your actual worker methods go here
logger.info('finishing')
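If you prefer option 2 (sharing the Rails app's log4r.yml so the worker participates in the same log structure), here is a sketch using log4r's YamlConfigurator; the file path and logger name are assumptions about your setup:
require 'log4r'
require 'log4r/yamlconfigurator'

# Load the same YAML config the Rails app uses (adjust the path to your app)
Log4r::YamlConfigurator.load_yaml_file(File.expand_path('config/log4r.yml', __dir__))

# Fetch a logger defined in that YAML by name ('worker' is hypothetical)
logger = Log4r::Logger['worker']
logger.info('worker booted outside the Rails environment')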

Delayed_job and Prawn scripts

How do I run Prawn scripts with delayed_job?
(I'm currently using Bj, but it is not supported in Rails 3.)
This code does not work.
/lib/report_job.rb
class ReportJob < Struct.new(:prawn_script_name, :account_id)
  def perform
    bundle exec rails runner "#{Rails.root}/jobs/#{prawn_script_name}.rb #{@current_user.account_id}"
  end
end
/reports_controller.rb
def generate_report(prawn_script_name)
  Delayed::Job.enqueue(ReportJob.new("#{prawn_script_name}.rb", "#{@current_user.account_id}"))
end
delayed_job table is populated as expected.
--- !ruby/struct:ReportJob
prawn_script_name: statements.rb
account_id: '18'
Error in the last_error field:
undefined method `runner' for #<ReportJob:0xc28f080>
Any suggestions?
I think there are several misunderstandings here:
You meant to call runner from outside your app, e.g. in a shell script or on the command line. In other words, bundle, exec, rails, and runner are all commands and arguments of commands, not Ruby methods or variables. runner is the first expression that gets evaluated inside your perform method, hence your error.
rails runner just brings up your app's environment and evaluates the string or path argument given.
Also note account_id within the perform method, another mistake in your code I guess (the struct member is account_id, not @current_user).
What you wanted to do could be a simple system call.
It seems your prawn script needs the environment, so simply calling
system "ruby #{Rails.root}/jobs/#{prawn_script_name}.rb #{account_id}"
won't work.
Now you could surely execute the script with runner from your project directory.
system "bundle exec rails runner #{Rails.root}/jobs/#{prawn_script_name}.rb #{account_id}"
but doing this via a system call from within your environment is quite redundant. Delayed jobs already have access to your Rails environment, so just load the script:
class ReportJob < Struct.new(:prawn_script_name, :account_id)
  def perform
    load "#{Rails.root}/jobs/#{prawn_script_name}.rb"
  end
end
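For completeness, enqueueing then looks much like your controller code; a sketch (note that the script name should be passed without the ".rb" extension, since perform appends it, and that with load the script no longer receives the account id as a command-line argument, so it needs to pick it up some other way):
Delayed::Job.enqueue(ReportJob.new("statements", @current_user.account_id))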
Hope this helps.
