How to use Byebug with Sidekiq and Foreman

I have a rails application in which I use foreman to start my rails and sidekiq servers. Since foreman doesn't interact well with regular byebug (you can't see the prompt as you type), I have set up remote debugging for both my rails and sidekiq servers. This works perfectly for the rails server, but when I connect to the byebug server for the sidekiq server, I get the following:
$ bundle exec byebug -R localhost:58501
Connecting to byebug server localhost:58501...
Connected.
(byebug:ctrl)
And I'm unable to catch any byebug breakpoints.
According to the documentation, the (byebug:ctrl) prompt means that the program has terminated normally (https://github.com/deivid-rodriguez/byebug/blob/master/GUIDE.md), but sidekiq is running jobs just fine.
Is there something incorrect in my configuration, or is sidekiq just not compatible with byebug's remote debugging?
Procfile:
sidekiq: bundle exec sidekiq
rails: rails server
config/initializers/byebug.rb:
if Rails.env.development?
  require 'byebug'

  def find_available_port
    server = TCPServer.new(nil, 0)
    server.addr[1]
  ensure
    server.close if server
  end

  port = find_available_port
  puts "Starting remote debugger..."
  Byebug.start_server 'localhost', port
  puts "Remote debugger on port #{port}"
end
Note that when I don't use remote debugging, byebug functions fine with sidekiq (although in foreman I can't see the prompt as I type).
Also note that I've tried using Byebug.wait_connection = true before Byebug.start_server, but I have the same issue.
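For what it's worth, pinning the port instead of picking a random free one keeps the client command stable across restarts. A minimal sketch of that variant (the BYEBUG_PORT name and the 8989 default are illustrative, not part of the original setup):
if Rails.env.development?
  require 'byebug'

  # Hypothetical env var; any fixed port works
  port = ENV.fetch('BYEBUG_PORT', '8989').to_i
  puts "Starting remote debugger..."
  Byebug.start_server 'localhost', port
  puts "Remote debugger on port #{port}"
end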

I've tried to replicate this locally, and with sidekiq 3.3.1 and byebug 9.0.5, it seems to work fine with a minor adjustment to the require:
require 'byebug/core'

def find_available_port
  server = TCPServer.new(nil, 0)
  server.addr[1]
ensure
  server.close if server
end

port = find_available_port
puts "Starting remote debugger..."
Byebug.start_server 'localhost', port
puts "Remote debugger on port #{port}"
Job:
class TestJob
  include Sidekiq::Worker

  def perform
    byebug
  end
end
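To exercise the breakpoint once the debugger is listening, one way (a sketch; the port is whatever the initializer printed at boot) is to attach the client in one terminal and enqueue the job from another:
$ bundle exec byebug -R localhost:<port>
$ bundle exec rails runner 'TestJob.perform_async'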

Related

Configuration issue on Heroku with RabbitMQ (CloudAMQP addon) - Ruby application

First of all, this is working fine on my local machine.
I have one publisher app (Rails 3), which publishes messages to CloudAMQP (addon) on Heroku, and we are getting messages in CloudAMQP.
I also have one consumer application (Rails 5), which runs on Heroku as a separate app. I put this consumer in a rake task, but the issue is that it takes messages from the queue only when we restart the server - that is, the rake task runs on restart - yet RabbitMQ always shows it as running.
So how can we make the consumer listen continuously once we push to Heroku?
Also, once we get a message from the queue I have to call a Sidekiq worker, and that is not happening (using the 'Redis To Go' addon).
CONSUMER APP
Gems which I have used in this consumer application:
gem 'redis', '~> 3.0'
gem 'sidekiq'
gem 'bunny'
This is the consumer part that gets messages from the queue, as a rake task:
task :do_consumer => :environment do
  connection = Bunny.new(ENV['CLOUDAMQP_URL'])
  connection.start # start a communication session with the amqp server
  channel = connection.channel()
  queue = channel.queue('order-queue', durable: true)
  puts ' [*] Waiting for messages. To exit press CTRL+C'
  queue.subscribe(manual_ack: true, block: true) do |delivery_info, properties, payload|
    puts " [x] Received #{payload}"
    puts " [x] Done"
    channel.ack(delivery_info.delivery_tag, false)
    # Call sidekiq worker to do the task
    CallSidekiqWorker.perform_async(payload)
  end
end
Procfile
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -e production
worker: bundle exec rake do_consumer
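One thing worth checking in that Procfile: process names must be unique, so the second worker: entry most likely overrides the first and the Sidekiq process never starts - which would also explain the worker not being called. A sketch with distinct names (consumer is an arbitrary label):
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -e production
consumer: bundle exec rake do_consumer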

NotImplementedError (Use a queueing backend...) using delayed_job

In my Rails app (4.2.4), I have been trying to get asynchronous mail sending to work.
I installed delayed_job as my queue adapter, and set it as the adapter in several places: config/application.rb, config/environments/{development,production}.rb, and config/initializers/active_job.rb.
Installation:
I added this to my Gemfile:
gem 'delayed_job_active_record'
Then, I ran the following commands:
$ bundle install
$ rails generate delayed_job:active_record
$ rake db:migrate
$ bin/delayed_job start
In config/application.rb, config/environments/production.rb, config/environments/development.rb:
config.active_job.queue_adapter = :delayed_job
In config/initializers/active_job.rb (added when the above did not work):
ActiveJob::Base.queue_adapter = :delayed_job
I've also run an ActiveRecord migration for delayed_job, and started bin/delayed_job before running my server.
That being said, any time I try:
UserMailer.welcome_email(@user).deliver_later(wait: 1.minutes)
I get the following error:
NotImplementedError (Use a queueing backend to enqueue jobs in the
future. Read more at http://guides.rubyonrails.org/active_job_basics.html):
app/controllers/user_controller.rb:25:in `create'
config.ru:25:in `call'
I was under the impression that delayed_job is a queueing backend... am I missing something?
EDIT:
I can't get sucker_punch to work either. After installing sucker_punch via Bundler and using:
config.active_job.queue_adapter = :sucker_punch
in config/application.rb, I get the same error and stack trace.
If you are having this issue in your development environment even though you are using an adapter capable of asynchronous jobs like Sidekiq, make sure that Rails.application.config.active_job.queue_adapter is set to :async instead of :inline.
# config/environments/development.rb
Rails.application.config.active_job.queue_adapter = :async
Provided you are following all the steps listed here, I suspect you didn't start delayed_job by running:
bin/delayed_job start
Please also check that you ran:
rails generate delayed_job:active_record
rake db:migrate
Try this:
In the controller:
@user.delay.welcome_email
In your model:
def welcome_email
  UserMailer.welcome_email(self).deliver_later(wait: 1.minutes)
end
Figured out what it was: I typically start my server and everything associated with it using a single shell script. In this script, I was running bin/delayed_job start in the background, and starting the server before bin/delayed_job start finished. The solution was to make sure delayed_job start finished before starting the server by running it in the foreground in my startup script.
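For anyone using a similar startup script, a minimal sketch of that ordering (the script name and server command are illustrative):
#!/usr/bin/env bash
# Start the delayed_job daemon and let it finish booting (no trailing &)
bin/delayed_job start
# Only then boot the app server
bundle exec rails server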
Thanks everyone for all the help!

How to change the default binding ip of Rails 4.2 development server?

After upgrading our team's rails application to 4.2, as the release notes mention, the default IP the rails server binds to changed from 0.0.0.0 to localhost.
We develop with Vagrant, and want the development server to be accessible directly from browser on the host machine.
Instead of typing rails s -b 0.0.0.0 every time from now on, I wonder if there's a more elegant solution, so that we can still use something as simple as rails s to start the server. Perhaps:
a config file rails s reads where I can modify the default binding ip (without using -c)
port forward with vagrant (tried but failed, see problem encountered below)
a monkey patch to rack, that changes the default binding ip
The real goal behind this is that I want the upgrade to be smooth for our team, avoiding the glitch of people constantly having to restart their rails server due to the missing -b 0.0.0.0 part.
I tried vagrant port forwarding, but I still get Connection Refused when I visit localhost:3000 on the host machine. The two configuration lines I tried were:
config.vm.network "forwarded_port", guest: 3000, host: 3000
config.vm.network "forwarded_port", guest: 3000, guest_ip: '127.0.0.1', host: 3000
Didn't find any relevant instructions in the official docs. Any help will be appreciated.
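For what it's worth, the forwarded-port attempts above likely fail for the same underlying reason: Vagrant forwards host port 3000 into the guest, but a server bound only to the guest's localhost refuses connections arriving on the forwarded interface. Once the server binds to 0.0.0.0 (as in the answers below), the plain rule is enough:
config.vm.network "forwarded_port", guest: 3000, host: 3000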
I'm having the same issue here, and today I found a better solution. Just append this code to your config/boot.rb and it should work with vagrant.
require 'rails/commands/server'

module Rails
  class Server
    def default_options
      super.merge(Host: '0.0.0.0', Port: 3000)
    end
  end
end
P.S. It's based on this answer.
You can use foreman to run a Procfile with your custom commands:
# Procfile in Rails application root
web: bundle exec rails s -b 0.0.0.0
Now start your Rails application with:
foreman start
The good thing about foreman is that you can add other applications to the Procfile (like sidekiq, mailcatcher).
The bad thing about foreman is that you have to train your team to run foreman start instead of rails s.
Met the same problem. Found the blog post "Make Rails 4.2 server listens to all interfaces".
Add the following to config/boot.rb:
require 'rails/commands/server'

module Rails
  class Server
    alias :default_options_bk :default_options
    def default_options
      default_options_bk.merge!(Host: '0.0.0.0')
    end
  end
end
For Rails 5.1.7 with Puma 3.12.1 the selected answer does not work, but I accomplished it by adding the following to my config/puma.rb file:
set_default_host '0.0.0.0' # Note: Must come BEFORE defining the port
port ENV.fetch('PORT') { 3000 }
I determined this by inspecting the dsl file. It uses instance_eval on that file, so there are probably other ways to do it, but this seemed the most reasonable to me.
If you put the default options in config/boot.rb, then all command arguments for rake and rails fail (for example: rake -T or rails g model user)! So, append this to bin/rails after the line require_relative '../config/boot', and the code is executed only for the rails server command:
if ARGV.first == 's' || ARGV.first == 'server'
  require 'rails/commands/server'

  module Rails
    class Server
      def default_options
        super.merge(Host: '0.0.0.0', Port: 3000)
      end
    end
  end
end
The bin/rails file looks like this:
#!/usr/bin/env ruby
APP_PATH = File.expand_path('../../config/application', __FILE__)
require_relative '../config/boot'

# Set default host and port for the rails server
if ARGV.first == 's' || ARGV.first == 'server'
  require 'rails/commands/server'

  module Rails
    class Server
      def default_options
        super.merge(Host: '0.0.0.0', Port: 3000)
      end
    end
  end
end

require 'rails/commands'
If you use docker or another tool to manage the environment variables, you can set the HOST environment variable to the IP you need to bind.
Example:
HOST=0.0.0.0
Add it to docker.env file if you use Docker or .env if you use foreman.
Here's a simpler solution that I'm using. I already like/need dotenv and puma-heroku, so if using those doesn't work for you then this might not be for you.
/config/puma.rb
plugin :heroku
Gemfile
gem 'dotenv-rails', groups: [:development, :test]
.env
PORT=8080
Now I can start both dev and production with rails s.
For Rails 5 with Puma the selected answer does not work. You may get an error like: cannot load such file -- rails/commands/server
For a proper solution, add the following to config/puma.rb:
bind 'tcp://0.0.0.0:3000'
Switch to Puma and specify port in config/puma.rb, e.g.:
port ENV.fetch("PORT") { 3000 }
Apparently it will bind to 0.0.0.0 for the specified port: https://github.com/puma/puma/issues/896

Capistrano v3 task fails to start unicorn server with error "eval: bundle not found"

I'm using Capistrano v3 to deploy a rails 4 app to a VPS using unicorn with nginx.
Following the most recent official capistrano documentation, I managed to set up everything related to the deployment itself:
I use the gems 'capistrano', 'capistrano-bundler', 'capistrano-rails' and 'capistrano-rvm' and when I do cap production deploy everything seems to work without any error message (the repository is pulled from github and copied on the server, assets are precompiled and so on).
At this point, if I connect to the server via ssh and type /etc/init.d/unicorn start, the server starts as expected, serving my rails app.
However, I created a task to automate this with capistrano v3 that looks like:
namespace :unicorn do
  desc 'Start Unicorn'
  task :start do
    on roles(:app) do
      within current_path do
        execute "/etc/init.d/unicorn start"
      end
    end
  end

  desc 'Stop Unicorn'
  task :stop do
    on roles(:app) do
      within current_path do
        execute "/etc/init.d/unicorn stop"
      end
    end
  end
end
But whenever I try cap production unicorn:start I get the following error:
/etc/init.d/unicorn: 1: eval: bundle: not found
cap aborted!
/etc/init.d/unicorn start stdout: Nothing written
/etc/init.d/unicorn start stderr: Nothing written
What's even stranger is that when I start unicorn manually and then do cap production unicorn:stop it works seamlessly.
I suspected some differences in the available environment variables when logging in via ssh, so I configured 'rvm_bin_path', 'path' and 'gem_path' to be the same as on the server, but I still get the same error.
I'm running out of ideas, anyone knows what could cause this?
Cheers.
When rvm is used on the deploy server, rvm1-capistrano3 saves you. You can use this template as a how-to.
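A minimal sketch of wiring that gem in, following its README (the exact setup can vary by version):
# Gemfile
gem 'rvm1-capistrano3', require: false
# Capfile
require 'rvm1/capistrano3'
With that required, the commands capistrano executes (including the unicorn tasks above) go through an rvm-aware wrapper, which should supply the PATH and gem environment that the init script's bundle call is missing in a non-login shell.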

Redis connection refused through ruby daemon worker on Heroku

A Rails 3.2.6 application running as a 'web' Heroku process is connecting to Redis using the ENV["REDISTOGO_URL"] environment variable:
irb(main):002:0> Redis.current
=> #<Redis client v2.2.2 connected to redis://xxx.redistogo.com:1234/0 (Redis v2.4.11)>
----- /initializers/redis.rb
if Rails.env.development?
  Redis.current = Redis.new
elsif Rails.env.test?
  Redis.current = Redis.new
elsif Rails.env.production?
  uri = URI.parse(ENV["REDISTOGO_URL"])
  Redis.current = Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)
end
A ruby daemon 'streaming' process runs as a secondary worker:
----- Procfile
web: bundle exec rails server thin -p $PORT -e $RACK_ENV
worker: bundle exec rake jobs:work
streaming: RAILS_ENV=production ruby bin/streaming.rb start
However, the streaming process is crashing when it calls methods in the main Rails application that connect to Redis - even though the streaming process is supposed to be loading the same redis.rb initializer as the 'web' Rails application process.
----- /bin/streaming_ctl.rb
# encoding: UTF-8
require "rubygems"
require "bundler/setup"
require "daemons"
Daemons.run(File.expand_path("../streaming.rb", __FILE__))
----- /bin/streaming.rb
# encoding: UTF-8
# TODO: set rails env in init script
ENV["RAILS_ENV"] ||= "production"
# load rails environment
require File.expand_path('../../config/environment', __FILE__)
logger = ActiveSupport::BufferedLogger.new(File.expand_path("./../../log/streaming.log", __FILE__))
Streaming.start
logger.info("\nStarting streaming in #{Rails.env.to_s} mode.")
Why is the streaming process/worker using the default Redis host and port?
[streaming.1]: /app/vendor/bundle/ruby/1.9.1/gems/redis-2.2.2/lib/redis/client.rb:236:in `rescue in establish_connection': Connection refused - Unable to connect to Redis on 127.0.0.1:6379 (Errno::ECONNREFUSED)
Switching from Redis.current in /initializers/redis.rb to a $redis variable convention (as illustrated here: http://jimneath.org/2011/03/24/using-redis-with-ruby-on-rails.html) fixed the connection problem.
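A minimal sketch of that convention, mirroring the initializer above:
----- /initializers/redis.rb ($redis variant)
if Rails.env.production?
  uri = URI.parse(ENV["REDISTOGO_URL"])
  $redis = Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)
else
  # development and test both use the local default
  $redis = Redis.new
end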
The app was using redis gem version 2.2.2 at the time. It appears as though Redis.current in this gem version won't be consistent across 'web' and 'worker' processes because Heroku runs these processes on separate threads. The documentation on the gem repo suggests that updating the gem to version >= 3 will enable Redis.current to work in multi-threaded environments:
https://github.com/redis/redis-rb/blob/master/CHANGELOG.md#300
