I am using Rails 5 and Sidekiq (4.1.2) on Heroku
and calling delay on a class method defined in my User model,
like:
delay.mass_invite_through_csv(mass_invitation.id, current_user, data)
Here mass_invitation is an object of the MassInvitation class, current_user is the current user, and data is a params hash.
The problem is that this method keeps getting executed over and over, seemingly infinitely.
In my Procfile:
web: bundle exec rails server -p $PORT
worker: bundle exec sidekiq -C config/sidekiq.yml
in my config/sidekiq.yml
concurrency: 3
Everything works correctly locally. Please help.
Sidekiq retries failed jobs automatically, so first you have to understand why the job is failing. Two ways to investigate: tail the Sidekiq output, or mount the Sidekiq dashboard into your app and look at what's going wrong there.
Guide to install the dashboard
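For reference, a minimal sketch of mounting the dashboard in a Rails app (the /sidekiq path is an assumption, and in production you would also want to put the route behind authentication):

# config/routes.rb
require 'sidekiq/web'

Rails.application.routes.draw do
  # Exposes the Sidekiq Web UI so you can inspect queues, retries,
  # and dead jobs together with their error messages and backtraces.
  mount Sidekiq::Web => '/sidekiq'
end

The Retries tab in that UI will show the exception that is causing the job to be re-run.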
I'm confused how heroku and sidekiq work. My Procfile looks like:
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -e $RAILS_ENV
Now inside my Rails code I enqueue my Sidekiq jobs like:
SomeWorker.perform_async(some.id)
Now, will this automatically make the job run on the worker dyno?
If yes, how does it know to run this out of process?
It is confusing because when I am in my main git folder I can run heroku commands and I know these are for my web dyno, but how do I then see the logs for my worker dyno, or will these show up in the same dyno logs?
When you set up your Procfile, you're telling Heroku to set up 2 types of dynos: web and worker. These run the same Rails app code but start up with different commands (bundle exec puma vs. bundle exec sidekiq). You can then scale each process type to however many dynos you need.
The glue that holds the two together is Redis. When you run SomeWorker.perform_async(some.id) from your web process, you're adding a record to Redis describing a job to run. Your worker process watches Redis for new records and then processes them.
The Heroku logs show logs from all running dynos. So you should see logs from both your web and worker processes mixed in together.
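To make the glue concrete, here is a minimal sketch of what SomeWorker might look like (the class body is an assumption; only the name and the perform_async call come from the question):

# app/workers/some_worker.rb
class SomeWorker
  include Sidekiq::Worker

  # The web dyno only serializes the arguments to JSON and pushes them
  # into Redis; the worker dyno pops the job and calls this method.
  def perform(some_id)
    # ...look up the record by some_id and do the slow work here...
  end
end

If you want to see only one dyno type's output, recent versions of the Heroku CLI let you filter the log stream, e.g. heroku logs --tail --dyno worker.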
I was watching the RailsCast on Sidekiq and had some questions:
1) Sidekiq handles tasks through threads instead of processes. What does this mean? Why does it save on memory?
2) Does the worker class need to have a "perform" method?
3) On the Sidekiq docs, it says:
Start sidekiq from the root of your Rails application so the jobs
will be processed:
bundle exec sidekiq
So if I'm running this on localhost, I can run bundle exec sidekiq. If I pushed up to Heroku, what do I do now? How do I run Sidekiq on Heroku?
4) I am not sure if my Sidekiq is working. I have this code:
def set_defaults
  self.clicks = 0 if clicks.blank?
  self.title = TitleWorker.perform_async(orig_url)
end
But TitleWorker.perform_async(orig_url) in testing just seems to return a string of numbers. What is going on? How do I fix this?
While I'm not super clear about the first question, I might be able to answer the other questions you have:
Yes. The worker class needs a perform method; that is the method Sidekiq calls on the worker process when it picks up the job.
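As a minimal sketch of such a worker (TitleWorker and orig_url are taken from the question; the body is an assumption):

# app/workers/title_worker.rb
class TitleWorker
  include Sidekiq::Worker

  # Runs asynchronously on the Sidekiq process with the arguments
  # that were passed to perform_async.
  def perform(orig_url)
    # ...fetch the page at orig_url, parse out its <title>,
    # and update the owning record here...
  end
end

This also explains the symptom in question 4: perform_async only enqueues the job and immediately returns the job ID (a random-looking string), not the worker's return value, so assigning it to self.title will never give you the page title.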
Heroku is just amazing. You don't have to do any special configuration on Heroku for running Sidekiq. All you have to do is make sure a Redis add-on such as RedisToGo is installed, and add a worker entry like this to the Procfile:
worker: bundle exec sidekiq -c 15 -v
(The rake jobs:start task belongs to a different job library such as Delayed Job, not Sidekiq, so it isn't needed here.)
Whenever I run heroku run bundle exec sidekiq, I see all my background jobs being processed; however, I want them to run without me having to keep a terminal open. When I exit out of that terminal tab, Sidekiq stops working. How would I mitigate that?
Also, I've read something about procfiles and increasing workers. I don't know what procfiles are and I don't know how to increase workers either.
Basically, I'm a newbie trying to get sidekiq set up to run on Heroku for my Rails app. I want it to be running at all times.
Create a file named ./Procfile with the following in it:
web: bundle exec rails server -p $PORT
worker: bundle exec sidekiq
sidekiq on Heroku
more on Procfiles
foreman gem
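Two follow-up points, assuming the process in your Procfile is named worker as above: after committing and pushing the Procfile, start (and keep) a worker dyno running with

heroku ps:scale worker=1

and Sidekiq will keep processing jobs with no terminal attached. Locally, the foreman gem linked above lets you start both the web and worker processes with a single foreman start.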
It's my first attempt to get Redis working on Heroku.
I've added one worker dyno (just today, so didn't pay yet), added RedisToGo Nano add-on, tested background jobs on my local machine, and pushed the app to heroku.
heroku ps
gives
=== web: `bundle exec rails server -p $PORT`
web.1: up 2013/03/03 18:26:09 (~ 37m ago)
=== worker: `bundle exec rake jobs:work`
worker.1: crashed 2013/03/03 19:02:15 (~ 1m ago)
Sidekiq Web Interface says that one job is enqueued, but zero processed or failed.
I'm guessing it's because my worker dyno is crashed.
Are there any noob mistakes that I don't know about?
(e.g. I need to run some command to start listening to background jobs etc)
heroku logs --tail doesn't show any errors, so I don't understand why my worker dyno crashes.
I did some research and fixed it like this:
Under the app's root directory I created a file called "Procfile" with this content:
web: bundle exec rails server -p $PORT
worker: bundle exec sidekiq -c 5 -v
Got this idea from here.
After that it worked ok.
Also make sure you set up REDIS_PROVIDER, which tells Sidekiq which environment variable holds the Redis URL:
heroku config:set REDIS_PROVIDER=REDISTOGO_URL
Sidekiq's GitHub page also has instructions (click here).
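If Sidekiq still can't find Redis, another option (a sketch, assuming the RedisToGo add-on exposes REDISTOGO_URL) is to point it at the URL explicitly in an initializer:

# config/initializers/sidekiq.rb
# Both the server (the worker dyno) and the client (the web dyno
# enqueueing jobs) need to talk to the same Redis instance.
Sidekiq.configure_server do |config|
  config.redis = { url: ENV['REDISTOGO_URL'] }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV['REDISTOGO_URL'] }
end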
I have an app on (ruby/rails) heroku. It's running 1 web and 1 worker (for example)
I want to be able to tell what "type" of dyno the app is running under.
I suspect it's a simple thing to find out, but I can't see anything that explains how.
I don't know if there's a more elegant way to do this, but you can set an environment variable at the start of each command in your Procfile:
web: PROC_TYPE=web bundle exec ...
worker: PROC_TYPE=worker bundle exec ...
Then in your Rails code, you can check ENV['PROC_TYPE'].
EDIT: a more detailed Procfile example, typical for a Rails app (the assignment has to come before the command so the shell treats it as an environment variable rather than an argument):
web: PROC_TYPE=web bundle exec rails server -p $PORT
worker: PROC_TYPE=worker bundle exec rake jobs:work
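As a usage sketch (the initializer path and the branch bodies are assumptions), you can then branch on that variable anywhere in the app:

# config/initializers/proc_type.rb
# Example of reading the process type set in the Procfile.
if ENV['PROC_TYPE'] == 'worker'
  Rails.logger.info('Booting as a worker dyno')
else
  Rails.logger.info("Booting as a #{ENV['PROC_TYPE'] || 'unknown'} dyno")
end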