Postgres connection not closing after Sidekiq Ruby script - ruby-on-rails

It's a small Ruby script running under Sidekiq. It opens a connection with
db_connect = Sequel.connect(@db_credential, search_path: @namespace)
It never explicitly closes the connection; I think this is not supposed to be necessary?
After the script has been run many times, all runs have completed, and the Sidekiq web panel shows no tasks running or queued, Postgres still shows 60 Sidekiq connections:
postgres=# select count(*) from pg_stat_activity where application_name like '%sidekiq%';
count
-------
60
(1 row)
The database is on localhost, so nothing else is creating these connections.
PostgreSQL 9.3.6, Sidekiq 3.3.3, Rails 4.0.0, Ruby 2.1.1p76, Sequel 4.19.0, Ubuntu 14.04.2 LTS.

You can:
Either use Sequel's pooling by connecting only once and keeping the db_connect value between your Sidekiq job executions,
Or connect every time, but then you have to disconnect manually by calling the disconnect method (http://sequel.jeremyevans.net/rdoc/classes/Sequel/Database.html#method-i-disconnect).
I believe the issue with your current approach is that you're constructing a new connection pool on every Sidekiq job execution by calling Sequel.connect, and these connections keep hanging around. It may take a long time before they're actually garbage collected, if ever.
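Both options can be sketched as follows. DB_CREDENTIAL, NAMESPACE, the worker classes, and the :items table are hypothetical placeholders, not taken from the question:

```ruby
require 'sequel'
require 'sidekiq'

# Option 1: connect once at boot. Sequel.connect returns a Database object
# with a built-in connection pool, which every job can then share.
DB = Sequel.connect(DB_CREDENTIAL, search_path: NAMESPACE)

class PooledWorker
  include Sidekiq::Worker

  def perform(id)
    DB[:items].where(id: id).update(processed: true)
  end
end

# Option 2: connect per job, but always disconnect afterwards, even if
# the job raises, so the pool's connections are released.
class PerJobWorker
  include Sidekiq::Worker

  def perform(id)
    db = Sequel.connect(DB_CREDENTIAL, search_path: NAMESPACE)
    begin
      db[:items].where(id: id).update(processed: true)
    ensure
      db.disconnect # closes every connection held by this pool
    end
  end
end
```

Option 1 is the usual choice; a single shared pool keeps the connection count bounded regardless of how many jobs run.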

Related

Run Sidekiq workers without database connection

Each Sidekiq worker (thread) requires 1 connection to the database. PostgreSQL can have at most a few hundred connections. This is a bottleneck for scalability.
Since I need about a thousand workers and PostgreSQL isn't required (I can pass all the data that I need through Redis and remove the SQL), I am wondering if it's possible to start the Rails environment without connections to PostgreSQL.
How can I start Sidekiq workers without PostgreSQL?
Note that I still need PostgreSQL for the normal web app/backend, so I cannot remove ActiveRecord altogether from the Rails app.
This assumption is false:
Each Sidekiq worker (thread) requires 1 connection to the database.
If a thread doesn't use the database, it won't take a connection.
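To illustrate the answer's point: a worker that takes everything it needs from its job arguments and talks only to Redis never touches ActiveRecord, so it never checks out a connection from the database pool. The class and method names here are hypothetical:

```ruby
require 'sidekiq'

# This worker gets all of its inputs from its arguments, which Sidekiq
# serializes through Redis. It never references an ActiveRecord model,
# so the thread running it never takes a PostgreSQL connection.
class RedisOnlyWorker
  include Sidekiq::Worker

  def perform(user_id, payload)
    result = expensive_computation(payload)
    # Store the result back in Redis via Sidekiq's own connection pool.
    Sidekiq.redis { |conn| conn.set("result:#{user_id}", result) }
  end

  private

  def expensive_computation(payload)
    payload.reverse # stand-in for the real work
  end
end
```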

After a deploy to EC2 sidekiq now reports SocketError: getaddrinfo: Name or service not known

Application is Rails 4.1.4, Ruby 2.1.2.
Using sidekiq 3.2.6, redis 3.1.0, celluloid 0.15.2. The Sidekiq implementation is as default as can be, with the exception of connecting to a remote Redis queue (ElastiCache).
When certain events are processed, we use sidekiq to queue up calls to an external API. The API is reachable through curl from the server our application is hosted on. All other functionality seems to still be performing as expected. This functionality has worked for weeks on the current server implementation/architecture.
After a successful deploy (with Capistrano, through Jenkins) to an EC2 instance, which is behind an elastic load balancer and an auto-scaling group, Sidekiq will no longer connect(?) to ElastiCache.
SocketError: getaddrinfo: Name or service not known
/gems/redis-3.1.0/lib/redis/connection/ruby.rb:152 in getaddrinfo
/gems/redis-3.1.0/lib/redis/connection/ruby.rb:152 in connect
/gems/redis-3.1.0/lib/redis/connection/ruby.rb:211 in connect
/gems/redis-3.1.0/lib/redis/client.rb:304 in establish_connection
/gems/redis-3.1.0/lib/redis/client.rb:85 in block in connect
/gems/redis-3.1.0/lib/redis/client.rb:266 in with_reconnect
/gems/redis-3.1.0/lib/redis/client.rb:84 in connect
/gems/redis-3.1.0/lib/redis/client.rb:326 in ensure_connected
/gems/redis-3.1.0/lib/redis/client.rb:197 in block in process
/gems/redis-3.1.0/lib/redis/client.rb:279 in logging
/gems/redis-3.1.0/lib/redis/client.rb:196 in process
/gems/redis-3.1.0/lib/redis/client.rb:102 in call
/gems/redis-3.1.0/lib/redis.rb:1315 in block in smembers
/gems/redis-3.1.0/lib/redis.rb:37 in block in synchronize
/usr/local/rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/monitor.rb:211 in mon_synchronize
/gems/redis-3.1.0/lib/redis.rb:37 in synchronize
/gems/redis-3.1.0/lib/redis.rb:1314 in smembers
/gems/sidekiq-3.2.6/lib/sidekiq/api.rb:557 in block in cleanup
/gems/connection_pool-2.0.0/lib/connection_pool.rb:58 in with
/gems/sidekiq-3.2.6/lib/sidekiq.rb:72 in redis
/gems/sidekiq-3.2.6/lib/sidekiq/api.rb:556 in cleanup
/gems/sidekiq-3.2.6/lib/sidekiq/api.rb:549 in initialize
/gems/sidekiq-3.2.6/lib/sidekiq/scheduled.rb:79 in new
/gems/sidekiq-3.2.6/lib/sidekiq/scheduled.rb:79 in poll_interval
/gems/sidekiq-3.2.6/lib/sidekiq/scheduled.rb:58 in block in poll
/gems/sidekiq-3.2.6/lib/sidekiq/util.rb:15 in watchdog
/gems/sidekiq-3.2.6/lib/sidekiq/scheduled.rb:23 in poll
/gems/celluloid-0.15.2/lib/celluloid/calls.rb:25 in public_send
/gems/celluloid-0.15.2/lib/celluloid/calls.rb:25 in dispatch
/gems/celluloid-0.15.2/lib/celluloid/calls.rb:122 in dispatch
/gems/celluloid-0.15.2/lib/celluloid/actor.rb:322 in block in handle_message
/gems/celluloid-0.15.2/lib/celluloid/actor.rb:416 in block in task
/gems/celluloid-0.15.2/lib/celluloid/tasks.rb:55 in block in initialize
/gems/celluloid-0.15.2/lib/celluloid/tasks/task_fiber.rb:13 in block in create
We have restarted Sidekiq, restarted ElastiCache, restarted the server, and inspected the Redis queue with redis-cli, and seen nothing noteworthy.
As implied, we can connect to ElastiCache using redis-cli; however, using sidekiq/api from the console, we get the same SocketError.
Any ideas on how to remedy this? The application is nigh unusable at this point.
Thanks!
Yay for embarrassing errors! There was a typo in the ENV var URL. Ten hours later, between me and the devops team, it turned out to be a copy-and-paste issue.
Thanks
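Since the root cause was a malformed URL in an environment variable, a cheap safeguard is to validate it at boot instead of discovering it via a SocketError deep inside a worker. This is a hypothetical sketch; the REDIS_URL variable name and the helper are assumptions, not from the question:

```ruby
require 'uri'

# Fail fast at boot if the Redis URL is malformed, e.g. due to a
# copy-and-paste typo in an environment variable.
def validate_redis_url!(url)
  uri = URI.parse(url)
  unless uri.scheme == 'redis'
    raise ArgumentError, "expected a redis:// URL, got #{url.inspect}"
  end
  if uri.host.nil? || uri.host.empty?
    raise ArgumentError, "missing host in #{url.inspect}"
  end
  uri
end

# e.g. in an initializer:
# validate_redis_url!(ENV.fetch('REDIS_URL'))
```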

How does Redis work with Rails and Sidekiq

Problem: need to send e-mails from Rails asynchronously.
Environment: Windows 7, Ruby 2.0, Rails 4.1, Sidekiq, Redis
After setting everything up, starting Sidekiq and starting Redis, I can see the mail request queued to Redis through the monitor:
1414256204.699674 "exec"
1414256204.710675 "multi"
1414256204.710675 "sadd" "queues" "default"
1414256204.710675 "lpush" "queue:default" "{\"retry\":true,\"queue\":\"default\",\"class\":\"Sidekiq::Extensions::DelayedMailer\",\"args\":[\"---\\n- !ruby/class 'UserMailer'\\n- :async_reminder\\n- - 673\\n\"],\"jid\":\"d4024c0c219201e5d1649c54\",\"enqueued_at\":1414256204.709674}"
But the mailer method never seems to get executed. The mail doesn't get sent and none of the log messages show up.
How does Redis know to execute the job on the queue, and does something else need to be set up in the environment for it to know where the application resides?
Is delayed_job a better solution?
I started redis in one window, bundle exec sidekiq in another window, and rails server in a third window.
How does an item on the redis queue get picked up and processed? Is sidekiq both putting things on the redis queue and checking to see if something was added that needs to be processed?
Redis is used just for storage. It stores jobs to be done. It does not execute anything. DelayedJob uses your database for job storage instead of Redis.
Rails process pushes new jobs to Redis.
Sidekiq process pops jobs from Redis and executes them.
In your MONITOR output, you should see LPUSH commands when Rails sends mail. You should also see BRPOP commands from Sidekiq.
You need to make sure that both Rails and Sidekiq processes use the same Redis server, database number, and namespace (if any). It's a frequent problem that they don't.
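A sketch of that last point, assuming a config/initializers/sidekiq.rb file and a REDIS_URL environment variable (both assumptions): point the Rails process (client) and the Sidekiq process (server) at the same Redis URL. If you use the redis-namespace gem, the namespace option must match on both sides too.

```ruby
# config/initializers/sidekiq.rb (assumed location)
redis_config = { url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0') }

# The Rails process pushes jobs as a client...
Sidekiq.configure_client { |config| config.redis = redis_config }

# ...and the Sidekiq process pops jobs as a server.
Sidekiq.configure_server { |config| config.redis = redis_config }
```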

thin: waiting for n connection to close while trying to stop server

I have put my application (Ruby on Rails) on (Ubuntu) Amazon EC2 server which is running on thin and nginx.
Whenever I stop my thin production server I get the message "waiting for n connection(s) to finish, can take up to 30 sec, CTRL+C to stop".
What does it mean and why does it appear? I had to wait a long time for the connections to stop. I have Thin version 1.5.1.
And since this is a live environment, we don't want the website to go down for long.
Please help
Just set the wait option to the number of seconds you need in thin.yml!
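For example, in thin.yml (only the relevant key shown; 5 is an arbitrary value, not from the question):

```yaml
# Maximum time, in seconds, to wait for in-flight connections
# to finish before the server stops.
wait: 5
```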

DelayedJob fails silently when couldn't connect to database

So, I have a big Rails application that is using delayed_job to send emails and SMS to the users.
Once in a while, the delayed_job process will simply stop working without any message in the logs. I have finally pinpointed the problem: the delayed_job process crashes when it couldn't connect to the database.
Is there any configuration I can make so it will retry the connection instead of just crashing? I've tried setting reconnect: true in the database.yml file with no success.
Another option that I'm looking at is maybe using a monitoring tool like god or bluepill.
Try using the -m flag when you start delayed job - that should start a separate monitor process that in my experience has been very good about restarting the process.
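Assuming the standard bin/delayed_job script generated by the delayed_job gem, that looks like:

```shell
# -m spawns a monitor process that restarts the worker if it dies
RAILS_ENV=production bin/delayed_job -m start
```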
