Not able to make Resque work

I'm trying to make Resque work with my project, but unfortunately it seems that for some reason Resque is not able to write to Redis.
Redis seems to be configured correctly: I'm able to connect with redis-cli and issue commands, and it runs on port 6379 as configured inside my Rails 3.0.5 app.
When I enqueue something with Resque, the job appears to be queued, but nothing actually happens on Redis (0 clients connected in my Redis logs).
When I restart the console, the queue is empty, with no workers running.
Everything fails silently: there is nothing in my Rails logs, nothing on the console, and nothing if I start a worker; it just (obviously) doesn't find any job to perform.
https://gist.github.com/867620
Any suggestions on how to fix or debug this?
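For anyone debugging something similar, a quick sanity check from a Rails console (these are standard Resque APIs; the queue name is illustrative) is to confirm which Redis instance Resque is talking to and whether jobs actually land in it:

Resque.redis            # the underlying Redis connection Resque is using
Resque.info             # hash of stats such as :pending, :queues, :workers
Resque.size("my_queue") # pending job count for a given queue

If Resque.info shows zero pending jobs right after an enqueue, the write to Redis never happened.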

The problem was that I was including resque_spec in the bundle.
Obviously, resque_spec was stubbing Resque.enqueue, making my mistake very stupid and very difficult to spot.
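For anyone else bitten by this, the usual safeguard is to confine resque_spec to the test group in the Gemfile; a minimal sketch:

group :test do
  gem 'resque_spec' # only loaded under RAILS_ENV=test, so Resque.enqueue is never stubbed elsewhere
end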

Related

Docker compose ruby on rails server byebug

I have a problem with Docker Compose when running a Rails server locally. I basically have a Rails server and a database running in 2 containers, nothing special, and it all works. The problem happens when I add a breakpoint via byebug or binding.pry in the Rails codebase. I have added tty: true and stdin_open: true to the compose file, and when I attach to the Rails container it hits the breakpoint... after that is when the problem occurs. When I continue with the code execution it just hangs (most likely the server hits the breakpoint again, but it's not registered in the terminal that is attached to the container) and I can't get it to respond. Postman, or whatever is using that API, just gets stuck in endless loading, waiting for the server to return a response.
Another thing to mention: I'm using Puma, and this only starts happening when I set 2+ Puma threads or workers; if I set Puma threads to 1 it does not happen. I have no idea what exactly is going on and I want to figure it out. Can anyone help...
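For reference, a minimal compose fragment of the setup described above (the service name web is illustrative):

services:
  web:
    build: .
    tty: true         # allocate a TTY so the debugger prompt renders
    stdin_open: true  # keep STDIN open so byebug/pry can read input

Note that the terminal typically has to be attached to the server's main process (docker attach <container>), since byebug reads from the process's own STDIN. And as the question itself suggests, with 2+ Puma workers or threads a second request can hit the breakpoint while the first is paused, which is consistent with single-threaded Puma not showing the hang.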

PG::ConnectionBad Sidekiq could not translate host name

We are getting a lot of Sentry issues with PG::ConnectionBad on Postgres in AWS RDS with Ruby on Rails:
PG::ConnectionBad Sidekiq/BookingExtensionCheckWorker
could not translate host name “ls-XXXXXXXXXXXXfee44.XXXXXXXXX.eu-west-1.rds.amazonaws.com” to address: Name or service not known
Two weeks ago we migrated to a new database and changed the endpoint in the Rails API files.
The new database endpoint:
ls-XXXXXXXXXXXXXXXf3d4a.XXXXXXXX.eu-west-1.rds.amazonaws.com
It works fine, with no issues between the new database and the Ruby on Rails API. However, I get a lot of Sentry issues saying that Sidekiq is having a problem connecting to the database, because it is using the old database address, which is no longer in use. I have checked the Sidekiq database configuration; the code shows it is connected to the new database. Yet it keeps going back to the old database when I run the Rails app.
Is there some way to find out why Sidekiq keeps connecting to the old database?
Sidekiq is a background service, so whoever deployed it started it with some sort of script. Right now you think everything is fine, but most jobs are actually not running, which you will notice in a few days.
How can you check the jobs? If you have the Sidekiq web UI set up, the following URL will probably lead to all of them:
your_url/sidekiq
It will show you all jobs, and I think you also have the option there to restart services. Just click to restart Sidekiq, and everything should be fine.
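If the web UI is not mounted yet, the standard way to add it is in config/routes.rb; a minimal sketch (the /sidekiq path matches the URL above):

require 'sidekiq/web'

Rails.application.routes.draw do
  # Mounts Sidekiq's bundled web UI so queued, busy, retried and dead jobs
  # can be inspected in the browser.
  mount Sidekiq::Web => '/sidekiq'
end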
How to start
Since your Sidekiq is running with the old configuration, it needs a restart to pick up the new one. The following steps could be dangerous, so first check how you started your process. These are some common ways people configure and run it:
systemctl restart sidekiq
If this does not work, check how it was started; your deployment person may have set up some sort of script inside the /etc/init.d folder.
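One generic way to see exactly how the running process was launched (standard Unix tooling):

ps aux | grep [s]idekiq

The [s] keeps grep from matching its own command line; the output shows the full command Sidekiq was started with, which tells you which of the styles below is in use.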
Sometimes developers use the following simple line to run Sidekiq:
bundle exec sidekiq -d -P tmp/sidekiq.pid -L log/sidekiq.log
or
bundle exec sidekiqctl stop tmp/sidekiq.pid

Err max clients reached Redis/Sidekiq/Rails

I have been stuck on this issue for the past 3 days and am unsure where to look now.
I have a simple Sidekiq implementation in my Rails app.
I am working with: Rails 4.2.0, Sidekiq 4.1.2, Redis 3.0.6
The production app is running live on Heroku, and I have 1 worker dyno and 1 web dyno.
The issue is this, and I am unsure how to approach it or what I did to cause it.
When I run redis-cli on Heroku I can see the clients that I have running. At most I have 2 or 3 clients running at any given time, and I can easily kill them with
CLIENT KILL TYPE normal
So that's all fine and dandy. Things get a little tricky when I fire up my server locally while working in development. All of a sudden my redis-cli shows that I have 19 clients running, which results in me logging
Err max clients reached
My assumption is that locally I am somehow directing Sidekiq to work off the production Redis URL. I have to admit what I know about Redis and Sidekiq is limited, but I do have a basic understanding of how it should work.
Any help or guidance would be appreciated.
Try using sidekiq -c 3 to limit your concurrency.
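The same limit can also live in Sidekiq's config file, which it loads from config/sidekiq.yml by default; a minimal sketch mirroring the flag above:

:concurrency: 3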
This ended up being a configuration error. Just in case anyone stumbles upon this question, hopefully this will help them not overlook something like I did.
This issue was happening only when I was firing up my local server, so I knew it had something to do with my machine. I noticed that in my production redis-cli I was seeing clients that had my local IP in the ADDR column.
This led me to believe that my local machine was pushing clients to my production Redis server. Looking at my logs when I fired up my Procfile, I saw the production Redis URL there, which only confirmed it.
Finally, after searching through my code, I discovered that I had actually added that URL to my .env file, so when I fired up my server it was using the production Redis URL. I changed it in my .env file to the appropriate URL for local development, redis://127.0.0.1:6379, and everything is now working as normal.
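A pattern that helps avoid this mix-up is to read the URL from the environment with an explicit local fallback. A minimal sketch using Sidekiq's standard configuration hooks (the initializer path and the REDIS_URL variable name are common conventions, not from the original post):

# config/initializers/sidekiq.rb
redis_url = ENV.fetch('REDIS_URL', 'redis://127.0.0.1:6379/0')

Sidekiq.configure_server do |config|
  config.redis = { url: redis_url } # used by the Sidekiq worker process
end

Sidekiq.configure_client do |config|
  config.redis = { url: redis_url } # used by the Rails process when enqueueing
end

With that in place, production sets REDIS_URL and development falls back to localhost instead of silently reusing a production value from .env.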

How does Redis work with Rails and Sidekiq

Problem: need to send e-mails from Rails asynchronously.
Environment: Windows 7, Ruby 2.0, Rails 4.1, Sidekiq, Redis
After setting everything up, starting Sidekiq and starting Redis, I can see the mail request queued to Redis through the monitor:
1414256204.699674 "exec"
1414256204.710675 "multi"
1414256204.710675 "sadd" "queues" "default"
1414256204.710675 "lpush" "queue:default" "{\"retry\":true,\"queue\":\"default\",\"class\":\"Sidekiq::Extensions::DelayedMailer\",\"args\":[\"---\\n- !ruby/class 'UserMailer'\\n- :async_reminder\\n- - 673\\n\"],\"jid\":\"d4024c0c219201e5d1649c54\",\"enqueued_at\":1414256204.709674}"
But the mailer method never seems to get executed. The mail doesn't get sent and none of the log messages show up.
How does Redis know to execute the job on the queue, and does something else need to be set up in the environment for it to know where the application resides?
Is delayed_job a better solution?
I started redis in one window, bundle exec sidekiq in another window, and rails server in a third window.
How does an item on the redis queue get picked up and processed? Is sidekiq both putting things on the redis queue and checking to see if something was added that needs to be processed?
Redis is used just for storage. It stores jobs to be done. It does not execute anything. DelayedJob uses your database for job storage instead of Redis.
Rails process pushes new jobs to Redis.
Sidekiq process pops jobs from Redis and executes them.
In your MONITOR output, you should see LPUSH commands when Rails sends mail. You should also see BRPOP commands from Sidekiq.
You need to make sure that both Rails and Sidekiq processes use the same Redis server, database number, and namespace (if any). It's a frequent problem that they don't.
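To make the division of labor concrete, here is a minimal sketch (the worker class is illustrative; UserMailer.async_reminder and the id 673 come from the MONITOR output above):

class ReminderWorker
  include Sidekiq::Worker

  # Runs inside the Sidekiq process after it pops the job from Redis.
  def perform(user_id)
    UserMailer.async_reminder(user_id).deliver
  end
end

# In the Rails process, enqueueing only serializes the job and LPUSHes it to Redis:
ReminderWorker.perform_async(673)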

DelayedJob fails silently when it can't connect to the database

So, I have a big Rails application that uses delayed_job to send emails and SMS to users.
Once in a while, the delayed_job process will simply stop working without any message in the logs. I have finally pinpointed the problem: the delayed_job process crashes when it can't connect to the database.
Is there any configuration I can set so that it retries the connection instead of just crashing? I've tried setting reconnect: true in the database.yml file with no success.
Another option I'm looking at is using a monitoring tool like god or bluepill.
Try using the -m flag when you start delayed_job; that should start a separate monitor process, which in my experience has been very good about restarting the worker.
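For reference, with the daemons-based script that delayed_job generates, it would look something like this (the script path varies between Rails versions; bin/delayed_job is assumed here):

RAILS_ENV=production bin/delayed_job -m start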
