sidekiq: deleted queues on sidekiq/queues page -- how to get them back?

I'm creating my sidekiq queues via the Procfile:
worker: bundle exec sidekiq -q default -q events -q summaries -c 5 -v
And in development I mistakenly deleted the queues events and summaries from the sidekiq/queues page. I think they're still there and functioning, but I can't SEE them. I thought they would be re-added the moment I ran bundle exec sidekiq again, but they're not there....
Something I'm missing?

You have to push a job to them. Queues don't actually exist in Redis unless they contain jobs.
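For example (a minimal sketch; the worker name and arguments are hypothetical), pushing one job onto the events queue makes it reappear on the Queues page:

class EventWorker
  include Sidekiq::Worker
  sidekiq_options queue: :events  # push to the "events" queue instead of "default"

  def perform(event_id)
    # process the event...
  end
end

EventWorker.perform_async(42)  # the queue is created in Redis on first push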

Related

Ruby on Rails batch processing

I am working on a Rails app that runs regularly scheduled sidekiq jobs, and I understand how queues and background jobs work. I'm working with a 3rd party that requests that I batch jobs to them so that each worker handles one job at a time, with 50 workers running in parallel.
I've been researching this for hours, but I'm unclear on how to do this and how to tell if it's actually working. Currently, my procfile looks like this:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec sidekiq -C ./config/sidekiq.yml
Is it as simple as increasing the concurrency to -c 50 in the worker line? Or do I need to use ConnectionPool inside the worker class? The Rails docs say that using find_each is "useful if you want multiple workers dealing with the same processing queue." If I run find_each inside the rake task and enqueue a worker once for each item, will the jobs run in parallel? I read one article that says concurrency and parallelism are often confused, so I am, in turn, a little confused about which direction to take.
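A sketch of the fan-out pattern the question describes (model, worker, and task names are hypothetical): raise the concurrency to 50 in the worker line or sidekiq.yml, and have the rake task only enqueue jobs, one per record, so the 50 worker threads can process them in parallel:

# lib/tasks/batch.rake
namespace :batch do
  task process_items: :environment do
    # find_each batches the reads; each enqueued job is picked up by one worker thread
    Item.find_each do |item|
      ItemWorker.perform_async(item.id)  # enqueue only, don't process inline
    end
  end
end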

Redis::CommandError: ERR max number of clients reached

I am receiving the above error and trying to get more insight. My app has 3 types of background jobs and about 100 users, so nothing too heavy.
My goal is to be able to process multiple background jobs at the same time (so if 10 users trigger the same kind of job, they don't need to wait for each other's jobs to finish before starting).
I'm confused as to how many dynos I need, how many workers I need, and how many Redis connections I need. What's the difference between all these things?
My current setup has:
1 x professional web dyno
1 x professional scheduler dyno
3 x professional worker dyno
and my procfile:
web: bundle exec rails server -p $PORT
scheduler: bundle exec rake resque:scheduler
worker: env TERM_CHILD=1 QUEUE='*' COUNT='3' bundle exec rake resque:workers
And I am getting the error:
Redis::CommandError: ERR max number of clients reached
I am just surprised because it seems like what I'm trying to achieve is pretty simple.
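As a rough budget (an estimate, not a measurement): each Resque worker process holds one Redis connection, so the setup above needs roughly 3 dynos x 3 workers = 9 connections, plus one for the scheduler and a few for the web dyno. If that is still under your Redis plan's client limit, the error usually means connections are being opened per request or per job instead of being reused; sharing a single client in an initializer avoids that:

# config/initializers/resque.rb -- a sketch, assuming the Redis add-on sets REDIS_URL
require 'resque'

Resque.redis = Redis.new(url: ENV.fetch('REDIS_URL'))  # one shared client per process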

Prevent sidekiq from executing queued up jobs when starting from command line?

When I start sidekiq in my development environment (Rails 3.2), I use the following command:
bundle exec sidekiq
When I do this, sidekiq will execute all jobs that were queued up while it was not running. For example, if I have created a bunch of new user accounts during testing, it will try to send welcome emails to all of the fake accounts (my emails are sent from a sidekiq job).
Is there a way to start sidekiq and tell it to delete all pending jobs? That way I can turn it back on without worrying that it will try and run a bunch of jobs that don't need to run (since this is my dev environment).
I have looked in the documentation but can't find an answer; hopefully it's something simple I overlooked...
redis-cli flushall && bundle exec sidekiq
Be aware that FLUSHALL wipes every key in Redis, not just Sidekiq's queues, so only use it if nothing else in that Redis instance matters.
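A narrower alternative (a sketch using Sidekiq's Ruby API) clears only Sidekiq's own data instead of the whole Redis database:

require 'sidekiq/api'

Sidekiq::Queue.all.each(&:clear)  # empty every queue
Sidekiq::RetrySet.new.clear       # drop pending retries
Sidekiq::ScheduledSet.new.clear   # drop scheduled jobs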
I found a solution: Using the sidekiq monitoring UI that comes with sidekiq (https://github.com/mperham/sidekiq/wiki/Monitoring), I'm able to view all queues (even when sidekiq is not running). Deleting the queue will remove all of the jobs in it, which solves the problem.

Proper deployment of a Rails app with Mina and Foreman

For production purposes I need three processes running. This is my procfile and I use Foreman to start them:
web: bundle exec rails s Puma -p $PORT
queuing: bundle exec clockwork clock.rb
workers: bundle exec rake resque:workers
For deployment I'm using Mina. What's the appropriate way to start Foreman at the end of the deploy task? Currently I'm starting it like this:
desc "Deploys the current version to the server."
task :deploy => :environment do
deploy do
invoke :'git:clone'
invoke :'deploy:link_shared_paths'
invoke :'bundle:install'
invoke :'rails:db_migrate'
invoke :'rails:assets_precompile'
to :launch do
queue "touch #{deploy_to}/tmp/restart.txt"
queue "bundle exec foreman start"
end
end
end
... but I don't think that's the proper way since the "mina deploy" command never successfully exits and the local console just starts outputting whatever these processes are doing.
Question number two: How do I initialize logging for each of these three processes separately in separate files?
And how do I prevent all three of these processes from being killed when one of them crashes? How do I make a process restart when it crashes?
Thanks!
OK, so that's 3 questions.
1) I think you want to detach the foreman process from the terminal. That way the deployment task will finish, and the foreman process will keep running even after you have disconnected from the server. nohup is great for that; e.g. this will launch your app and pipe all logs to a server.log file:
nohup foreman start > server.log 2>&1 &
2) AFAIK, foreman doesn't let you do that. You should probably use another process management service (e.g. systemd, upstart). Thankfully, foreman lets you easily export your config to different process management formats (http://ddollar.github.io/foreman/#EXPORTING).
3) Again, you probably want to separate your processes and manage them separately via upstart, systemd, etc.
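To expand on (2) and (3), exporting to upstart might look like this (app name, user, and paths are placeholders):

foreman export upstart /etc/init -a myapp -u deploy -l /var/log/myapp
sudo start myapp

Upstart (or systemd) then logs each process separately and can respawn a process when it crashes, which addresses questions two and three.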

How to keep a ruby script running persistently within a Rails app on Heroku?

I have a Mailman server script that checks for incoming email and loads it into the rails app database. The script (should) run continuously and checks for new email every 60 seconds. I was able to run the script on Heroku using heroku run:detached script/mailman_server, but when I checked back a few days later it wasn't running. How can I ensure it is always running?
You should use the Cedar stack and add a Procfile, e.g. something like:
web: bundle exec unicorn -p $PORT -c ./unicorn.rb
mailman: bundle exec script/mailman_server
Then run:
heroku ps:scale mailman=1
on the command line to add one worker. However, should the worker encounter some kind of error and exit, you would need additional configuration to restart it.
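One way to reduce crashes in the first place (a sketch with hypothetical processing code, assuming the mailman gem): rescue per-message errors inside the polling loop so a single bad email doesn't bring the process down:

require 'mailman'

Mailman.config.poll_interval = 60  # check for new mail every 60 seconds

Mailman::Application.run do
  default do
    begin
      # load the message into the Rails app's database...
    rescue StandardError => e
      Mailman.logger.error "Failed to process message: #{e.message}"
    end
  end
end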
SendGrid has a service which can accept incoming emails for your app:
http://docs.sendgrid.com/documentation/api/parse-api-2/
I haven't looked at the pricing.
