Heroku scheduler calls job but it doesn't get executed - ruby-on-rails

I am hosting my app with Heroku, and have a task in the scheduler.rake file (to be called by Heroku Scheduler):
task :send_recipe_summary => :environment do
  puts "Calling send recipe summary slack message job..."
  SendRecipeSummarySlackMessageJob.perform_later
  puts "Done!"
end
When this task is run (by the scheduler or from the console), the output says the job gets enqueued:
➜ Plant-as-Usual-2 git:(master) heroku run bundle exec rake send_recipe_summary
Running bundle exec rake send_recipe_summary on ⬢ plant-as-usual-2... up, run.5548 (Hobby)
Calling send recipe summary slack message job...
I, [2020-09-24T16:25:50.193685 #4] INFO -- : [ActiveJob] Enqueued SendRecipeSummarySlackMessageJob (Job ID: a4110651-a9fa-4030-96ef-db4caf64c7e5) to Async(default)
Done!
However, in the Heroku logs the job is never performed, and the Slack message never gets sent:
2020-09-24T16:25:35.913315+00:00 app[api]: Starting process with command `bundle exec rake send_recipe_summary` by user jonappleseed#email.com
2020-09-24T16:25:44.008993+00:00 heroku[run.5548]: Awaiting client
2020-09-24T16:25:44.039520+00:00 heroku[run.5548]: State changed from starting to up
2020-09-24T16:25:44.291580+00:00 heroku[run.5548]: Starting process with command `bundle exec rake send_recipe_summary`
2020-09-24T16:25:49.334067+00:00 app[worker.1]: D, [2020-09-24T16:25:49.333949 #4] DEBUG -- : Delayed::Backend::ActiveRecord::Job Load (1.5ms) UPDATE "delayed_jobs" SET locked_at = '2020-09-24 16:25:49.331558', locked_by = 'host:9b1fc7b1-4107-427f-8f49-f058b41b7906 pid:4' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2020-09-24 16:25:49.330595' AND (locked_at IS NULL OR locked_at < '2020-09-24 12:25:49.330627') OR locked_by = 'host:9b1fc7b1-4107-427f-8f49-f058b41b7906 pid:4') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
2020-09-24T16:25:53.372387+00:00 heroku[run.5548]: Process exited with status 0
2020-09-24T16:25:53.407641+00:00 heroku[run.5548]: State changed from up to complete
2020-09-24T16:26:49.394474+00:00 app[worker.1]: D, [2020-09-24T16:26:49.394376 #4] DEBUG -- : Delayed::Backend::ActiveRecord::Job Load (1.5ms) UPDATE "delayed_jobs" SET locked_at = '2020-09-24 16:26:49.392377', locked_by = 'host:9b1fc7b1-4107-427f-8f49-f058b41b7906 pid:4' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2020-09-24 16:26:49.391803' AND (locked_at IS NULL OR locked_at < '2020-09-24 12:26:49.391827') OR locked_by = 'host:9b1fc7b1-4107-427f-8f49-f058b41b7906 pid:4') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
However, if I run the job itself (rather than the task), everything works as expected, and the message does get sent:
irb(main):002:0> SendRecipeSummarySlackMessageJob.perform_later
I, [2020-09-24T16:22:45.268729 #4] INFO -- : [ActiveJob] Enqueued SendRecipeSummarySlackMessageJob (Job ID: 8285bb7f-9646-42bb-9901-8d219ca1175c) to Async(default)
=> #<SendRecipeSummarySlackMessageJob:0x000055b1a768dfb8 @arguments=[], @job_id="8285bb7f-9646-42bb-9901-8d219ca1175c", @queue_name="default", @priority=nil, @executions=0, @exception_executions={}, @provider_job_id="c41cedd7-e5a5-4a7d-9e64-280d2ec81e1e">
irb(main):003:0> I, [2020-09-24T16:22:45.269348 #4] INFO -- : [ActiveJob] [SendRecipeSummarySlackMessageJob] [8285bb7f-9646-42bb-9901-8d219ca1175c] Performing SendRecipeSummarySlackMessageJob (Job ID: 8285bb7f-9646-42bb-9901-8d219ca1175c) from Async(default) enqueued at 2020-09-24T16:22:45Z
D, [2020-09-24T16:22:46.412105 #4] DEBUG -- : [ActiveJob] [SendRecipeSummarySlackMessageJob] [8285bb7f-9646-42bb-9901-8d219ca1175c] (1.5ms) SELECT COUNT(*) FROM "recipes" WHERE "recipes"."state" = $1 [["state", "awaiting_approval"]]
D, [2020-09-24T16:22:46.413683 #4] DEBUG -- : [ActiveJob] [SendRecipeSummarySlackMessageJob] [8285bb7f-9646-42bb-9901-8d219ca1175c] (1.0ms) SELECT COUNT(*) FROM "recipes" WHERE "recipes"."state" = $1 [["state", "incomplete"]]
I, [2020-09-24T16:22:46.415123 #4] INFO -- : [ActiveJob] [SendRecipeSummarySlackMessageJob] [8285bb7f-9646-42bb-9901-8d219ca1175c] Enqueued SendSlackMessageJob (Job ID: c640b706-9920-4639-a1a0-4c7dbec15c39) to Async(default) with arguments: "There are no recipes awaiting approval, and 1 incomplete recipe https://www.plantasusual.com/admin", {:nature=>"inform"}
I, [2020-09-24T16:22:46.415260 #4] INFO -- : [ActiveJob] [SendRecipeSummarySlackMessageJob] [8285bb7f-9646-42bb-9901-8d219ca1175c] Performed SendRecipeSummarySlackMessageJob (Job ID: 8285bb7f-9646-42bb-9901-8d219ca1175c) from Async(default) in 1145.43ms
I, [2020-09-24T16:22:46.415882 #4] INFO -- : [ActiveJob] [SendSlackMessageJob] [c640b706-9920-4639-a1a0-4c7dbec15c39] Performing SendSlackMessageJob (Job ID: c640b706-9920-4639-a1a0-4c7dbec15c39) from Async(default) enqueued at 2020-09-24T16:22:46Z with arguments: "There are no recipes awaiting approval, and 1 incomplete recipe https://www.plantasusual.com/admin", {:nature=>"inform"}
I, [2020-09-24T16:22:46.591446 #4] INFO -- : [ActiveJob] [SendSlackMessageJob] [c640b706-9920-4639-a1a0-4c7dbec15c39] Performed SendSlackMessageJob (Job ID: c640b706-9920-4639-a1a0-4c7dbec15c39) from Async(default) in 175.12ms
Can anyone help me understand why the job runs as expected when run from the console, but it doesn't work when run from the scheduler?
Thank you to anyone who can help!

It seems you have not enabled a worker dyno on Heroku to run bundle exec rake jobs:work.
It is a paid resource. Go to: your_app > Resources tab and enable it from there. This link could be helpful.
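For reference, here is a minimal sketch of how the job could reach that worker, assuming delayed_job is the intended backend (the worker log in the question is polling the delayed_jobs table). Note that the console output above shows the job being enqueued to Async(default), the in-process adapter, which never writes to that table:
# config/application.rb -- sketch only; assumes the delayed_job_active_record
# gem is installed and a worker dyno runs `bundle exec rake jobs:work`.
# Jobs enqueued to the in-process Async adapter die with the one-off
# scheduler dyno; the :delayed_job adapter stores them in the delayed_jobs
# table instead, where the worker can pick them up.
config.active_job.queue_adapter = :delayed_job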

Related

500 error after pushing to heroku - DEBUG

I'm having an OpenURI issue in production. The code below works in development, but in production an unauthorized error comes up. The only way I've solved this is to put the API key directly in the URL, which obviously I don't want to do. Any ideas why my current code doesn't work?
api_key = ENV["NEWS_API"]
url = "https://newsapi.org/v2/top-headlines?sources=techcrunch&apiKey=#{api_key}"
article_serialized = open(url).read
@articles = JSON.parse(article_serialized)
2018-10-25T01:55:47.184856+00:00 app[web.1]: F, [2018-10-25T01:55:47.184804 #4] FATAL -- : [40a7ee38-ef6e-4622-ad86-edb34d7eeccf] OpenURI::HTTPError (401 Unauthorized):
Running rails db:migrate on ⬢ twittter-clone... up, run.6483 (Free)
D, [2018-10-25T01:12:44.387465 #4] DEBUG -- : (0.9ms) SELECT pg_try_advisory_lock(2661719123600558280)
D, [2018-10-25T01:12:44.433095 #4] DEBUG -- : (2.1ms) SELECT "schema_migrations"."version" FROM "schema_migrations" ORDER BY "schema_migrations"."version" ASC
D, [2018-10-25T01:12:44.456835 #4] DEBUG -- : ActiveRecord::InternalMetadata Load (3.8ms) SELECT "ar_internal_metadata".* FROM "ar_internal_metadata" WHERE "ar_internal_metadata"."key" = $1 LIMIT $2 [["key", "environment"], ["LIMIT", 1]]
D, [2018-10-25T01:12:44.480937 #4] DEBUG -- : (5.3ms) BEGIN
D, [2018-10-25T01:12:44.485057 #4] DEBUG -- : (1.7ms) COMMIT
D, [2018-10-25T01:12:44.492822 #4] DEBUG -- : (7.1ms) SELECT pg_advisory_unlock(2661719123600558280)
OpenURI returns a 401 error, and my guess is that ENV['NEWS_API'] is not set. You can run heroku config in the terminal or look it up in the web interface: Settings -> Reveal Config Vars.
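If the variable is indeed missing, setting it with heroku config:set NEWS_API=your_key should fix it. Below is a small sketch of the same controller code with a fail-fast guard; the variable name comes from the question, and URI.open assumes open-uri has been required:
require 'open-uri'
require 'json'

# ENV.fetch raises KeyError when NEWS_API is not set, surfacing the real
# problem instead of a confusing 401 from the upstream API.
api_key = ENV.fetch("NEWS_API")
url = "https://newsapi.org/v2/top-headlines?sources=techcrunch&apiKey=#{api_key}"
@articles = JSON.parse(URI.open(url).read)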

Rails Active Storage Preview "Internal 500 error" on Heroku

It was working perfectly yesterday when I had one file attached; then I changed to multiple files and the previews are no longer shown.
It seems like I did everything correctly. I even scaffolded another document.
<% @document.files.each do |file| %>
  <p><%= link_to "View File", file, target: '_blank' %> |
     <%= link_to "Download", file, download: '' %></p>
  <% if file.previewable? %>
    <li>
      <%= image_tag file.preview(resize_to_limit: [200, 200]) %>
    </li>
  <% end %>
<% end %>
It is shown like this now: the preview images come back with a 500 error.
Maybe it has something to do with the Heroku buildpacks, or with AWS? It worked last night.
Heroku logs:
2018-08-20T15:47:27.158676+00:00 app[web.1]: D, [2018-08-20T15:47:27.158541 #4] DEBUG -- : ActiveStorage::Blob Load (5.4ms) SELECT "active_storage_blobs".* FROM "active_storage_blobs" WHERE "active_storage_blobs"."id" = $1 LIMIT $2 [["id", 69], ["LIMIT", 1]]
2018-08-20T15:47:27.121163+00:00 heroku[router]: at=info method=GET path="/rails/active_storage/representations/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaHBSdz09IiwiZXhwIjpudWxsLCJwdXIiOiJibG9iX2lkIn19--dfa8c38d16bcbc65de1a2107a086925141bf585e/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdCam9VY21WemFYcGxYM1J2WDJ4cGJXbDBXd2RwQWNocEFjZz0iLCJleHAiOm51bGwsInB1ciI6InZhcmlhdGlvbiJ9fQ==--7c2a4bf0af133050ffc289b97401f795dffdfdbc/EIN%20FunderHunt%20(1).pdf" host=desolate-temple-16025.herokuapp.com request_id=7281973b-884b-455b-83a7-5a85910e790f fwd="71.190.148.218" dyno=web.1 connect=1ms service=2689ms status=500 bytes=1891 protocol=https
2018-08-20T15:47:27.160499+00:00 app[web.1]: I, [2018-08-20T15:47:27.160404 #4] INFO -- : [ActiveJob] [ActiveStorage::AnalyzeJob] [d7e86d99-8801-4bdb-b63a-e1ba4946afad] Performing ActiveStorage::AnalyzeJob (Job ID: d7e86d99-8801-4bdb-b63a-e1ba4946afad) from Async(default) with arguments: #>
2018-08-20T15:47:27.274034+00:00 app[web.1]: I, [2018-08-20T15:47:27.273887 #4] INFO -- : [ActiveJob] [ActiveStorage::AnalyzeJob] [d7e86d99-8801-4bdb-b63a-e1ba4946afad] S3 Storage (111.8ms) Downloaded file from key: wX8vV9RFa3J1bj5Lkynohj14
2018-08-20T15:47:27.415290+00:00 app[web.1]: D, [2018-08-20T15:47:27.414877 #4] DEBUG -- : [ActiveJob] [ActiveStorage::AnalyzeJob] [d7e86d99-8801-4bdb-b63a-e1ba4946afad] (11.8ms) BEGIN
2018-08-20T15:47:27.423905+00:00 app[web.1]: D, [2018-08-20T15:47:27.423324 #4] DEBUG -- : [ActiveJob] [ActiveStorage::AnalyzeJob] [d7e86d99-8801-4bdb-b63a-e1ba4946afad] ActiveStorage::Blob Update (3.2ms) UPDATE "active_storage_blobs" SET "metadata" = $1 WHERE "active_storage_blobs"."id" = $2 [["metadata", "{\"identified\":true,\"width\":612,\"height\":792,\"analyzed\":true}"], ["id", 69]]
2018-08-20T15:47:27.433985+00:00 app[web.1]: D, [2018-08-20T15:47:27.433860 #4] DEBUG -- : [ActiveJob] [ActiveStorage::AnalyzeJob] [d7e86d99-8801-4bdb-b63a-e1ba4946afad] (3.6ms) COMMIT
2018-08-20T15:47:27.436119+00:00 app[web.1]: I, [2018-08-20T15:47:27.435928 #4] INFO -- : [ActiveJob] [ActiveStorage::AnalyzeJob] [d7e86d99-8801-4bdb-b63a-e1ba4946afad] Performed ActiveStorage::AnalyzeJob (Job ID: d7e86d99-8801-4bdb-b63a-e1ba4946afad) from Async(default) in 275.23ms
Heroku uses an ephemeral filesystem, meaning that any temporary or uploaded files stored in your application's filesystem will not be available once the dyno stops or restarts.
Every time you deploy your application, Heroku restarts your dynos, meaning any uploaded files are destroyed. You should try uploading your files to a service like AWS S3, which ActiveStorage supports.
Here is Heroku's documentation on this: Documentation Link
There is also an article on help.heroku.com: Why are my file uploads missing/deleted?
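For completeness, once an S3 service is defined the switch is a single setting. This is a sketch only; it assumes an :amazon entry exists in config/storage.yml and the aws-sdk-s3 gem is in the Gemfile (the AnalyzeJob log above suggests S3 is already wired up here, so this is mainly for readers landing on this answer):
# config/environments/production.rb -- sketch only
config.active_storage.service = :amazon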

Error while reserving job: PG::ConnectionBad: PQconsumeInput() server closed the connection unexpectedly

The delayed_job service stops less than an hour after it starts, with the following log:
I, [2018-02-26T06:00:26.580458 #11439] INFO -- : 2018-02-26T06:00:26+0400: [Worker(delayed_job host:myhost pid:11439)] Starting job worker
I, [2018-02-26T06:00:26.664929 #11439] INFO -- : 2018-02-26T06:00:26+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41019) RUNNING
I, [2018-02-26T06:00:27.342994 #11439] INFO -- : 2018-02-26T06:00:27+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41019) COMPLETED after 0.6779
I, [2018-02-26T06:00:27.346526 #11439] INFO -- : 2018-02-26T06:00:27+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41020) RUNNING
I, [2018-02-26T06:00:27.470858 #11439] INFO -- : 2018-02-26T06:00:27+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41020) COMPLETED after 0.1242
I, [2018-02-26T06:00:27.474937 #11439] INFO -- : 2018-02-26T06:00:27+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41024) RUNNING
I, [2018-02-26T06:00:27.603043 #11439] INFO -- : 2018-02-26T06:00:27+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41024) COMPLETED after 0.1280
I, [2018-02-26T06:00:27.606702 #11439] INFO -- : 2018-02-26T06:00:27+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41025) RUNNING
I, [2018-02-26T06:00:27.725715 #11439] INFO -- : 2018-02-26T06:00:27+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41025) COMPLETED after 0.1189
I, [2018-02-26T06:00:27.728021 #11439] INFO -- : 2018-02-26T06:00:27+0400: [Worker(delayed_job host:myhost pid:11439)] 4 jobs processed at 3.4871 j/s, 0 failed
I, [2018-02-26T06:14:48.287220 #11439] INFO -- : 2018-02-26T06:14:48+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41027) RUNNING
I, [2018-02-26T06:14:48.414079 #11439] INFO -- : 2018-02-26T06:14:48+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41027) COMPLETED after 0.1267
I, [2018-02-26T06:14:48.416335 #11439] INFO -- : 2018-02-26T06:14:48+0400: [Worker(delayed_job host:myhost pid:11439)] 1 jobs processed at 7.3771 j/s, 0 failed
I, [2018-02-26T06:16:33.492435 #11439] INFO -- : 2018-02-26T06:16:33+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41028) RUNNING
I, [2018-02-26T06:16:33.613684 #11439] INFO -- : 2018-02-26T06:16:33+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41028) COMPLETED after 0.1211
I, [2018-02-26T06:16:33.615953 #11439] INFO -- : 2018-02-26T06:16:33+0400: [Worker(delayed_job host:myhost pid:11439)] 1 jobs processed at 7.8121 j/s, 0 failed
I, [2018-02-26T06:22:33.853678 #11439] INFO -- : 2018-02-26T06:22:33+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41030) RUNNING
I, [2018-02-26T06:22:33.967338 #11439] INFO -- : 2018-02-26T06:22:33+0400: [Worker(delayed_job host:myhost pid:11439)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=41030) COMPLETED after 0.1136
I, [2018-02-26T06:22:33.970307 #11439] INFO -- : 2018-02-26T06:22:33+0400: [Worker(delayed_job host:myhost pid:11439)] 1 jobs processed at 8.2735 j/s, 0 failed
I, [2018-02-26T06:38:24.595215 #11439] INFO -- : 2018-02-26T06:38:24+0400: [Worker(delayed_job host:myhost pid:11439)] Error while reserving job: PG::ConnectionBad: PQconsumeInput() server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
: UPDATE "delayed_jobs" SET locked_at = '2018-02-26 02:38:24.593926', locked_by = 'delayed_job host:myhost pid:11439' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2018-02-26 02:38:24.593351' AND (locked_at IS NULL OR locked_at < '2018-02-25 22:38:24.593398') OR locked_by = 'delayed_job host:myhost pid:11439') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
I, [2018-02-26T06:38:29.597026 #11439] INFO -- : 2018-02-26T06:38:29+0400: [Worker(delayed_job host:myhost pid:11439)] Error while reserving job: PG::ConnectionBad: PQsocket() can't get socket descriptor: UPDATE "delayed_jobs" SET locked_at = '2018-02-26 02:38:29.596061', locked_by = 'delayed_job host:myhost pid:11439' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2018-02-26 02:38:29.595477' AND (locked_at IS NULL OR locked_at < '2018-02-25 22:38:29.595524') OR locked_by = 'delayed_job host:myhost pid:11439') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
I, [2018-02-26T06:38:34.598775 #11439] INFO -- : 2018-02-26T06:38:34+0400: [Worker(delayed_job host:myhost pid:11439)] Error while reserving job: PG::ConnectionBad: PQsocket() can't get socket descriptor: UPDATE "delayed_jobs" SET locked_at = '2018-02-26 02:38:34.597856', locked_by = 'delayed_job host:myhost pid:11439' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2018-02-26 02:38:34.597278' AND (locked_at IS NULL OR locked_at < '2018-02-25 22:38:34.597325') OR locked_by = 'delayed_job host:myhost pid:11439') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
I, [2018-02-26T06:38:39.600772 #11439] INFO -- : 2018-02-26T06:38:39+0400: [Worker(delayed_job host:myhost pid:11439)] Error while reserving job: PG::ConnectionBad: PQsocket() can't get socket descriptor: UPDATE "delayed_jobs" SET locked_at = '2018-02-26 02:38:39.599713', locked_by = 'delayed_job host:myhost pid:11439' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2018-02-26 02:38:39.599063' AND (locked_at IS NULL OR locked_at < '2018-02-25 22:38:39.599110') OR locked_by = 'delayed_job host:myhost pid:11439') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
I, [2018-02-26T06:38:44.602546 #11439] INFO -- : 2018-02-26T06:38:44+0400: [Worker(delayed_job host:myhost pid:11439)] Error while reserving job: PG::ConnectionBad: PQsocket() can't get socket descriptor: UPDATE "delayed_jobs" SET locked_at = '2018-02-26 02:38:44.601568', locked_by = 'delayed_job host:myhost pid:11439' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2018-02-26 02:38:44.601024' AND (locked_at IS NULL OR locked_at < '2018-02-25 22:38:44.601072') OR locked_by = 'delayed_job host:myhost pid:11439') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
I, [2018-02-26T06:38:49.604286 #11439] INFO -- : 2018-02-26T06:38:49+0400: [Worker(delayed_job host:myhost pid:11439)] Error while reserving job: PG::ConnectionBad: PQsocket() can't get socket descriptor: UPDATE "delayed_jobs" SET locked_at = '2018-02-26 02:38:49.603369', locked_by = 'delayed_job host:myhost pid:11439' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2018-02-26 02:38:49.602808' AND (locked_at IS NULL OR locked_at < '2018-02-25 22:38:49.602863') OR locked_by = 'delayed_job host:myhost pid:11439') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
I, [2018-02-26T06:38:54.606189 #11439] INFO -- : 2018-02-26T06:38:54+0400: [Worker(delayed_job host:myhost pid:11439)] Error while reserving job: PG::ConnectionBad: PQsocket() can't get socket descriptor: UPDATE "delayed_jobs" SET locked_at = '2018-02-26 02:38:54.605111', locked_by = 'delayed_job host:myhost pid:11439' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2018-02-26 02:38:54.604563' AND (locked_at IS NULL OR locked_at < '2018-02-25 22:38:54.604613') OR locked_by = 'delayed_job host:myhost pid:11439') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
I, [2018-02-26T06:38:59.608610 #11439] INFO -- : 2018-02-26T06:38:59+0400: [Worker(delayed_job host:myhost pid:11439)] Error while reserving job: PG::ConnectionBad: PQsocket() can't get socket descriptor: UPDATE "delayed_jobs" SET locked_at = '2018-02-26 02:38:59.607243', locked_by = 'delayed_job host:myhost pid:11439' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2018-02-26 02:38:59.606483' AND (locked_at IS NULL OR locked_at < '2018-02-25 22:38:59.606539') OR locked_by = 'delayed_job host:myhost pid:11439') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
I, [2018-02-26T06:39:04.610465 #11439] INFO -- : 2018-02-26T06:39:04+0400: [Worker(delayed_job host:myhost pid:11439)] Error while reserving job: PG::ConnectionBad: PQsocket() can't get socket descriptor: UPDATE "delayed_jobs" SET locked_at = '2018-02-26 02:39:04.609457', locked_by = 'delayed_job host:myhost pid:11439' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2018-02-26 02:39:04.608876' AND (locked_at IS NULL OR locked_at < '2018-02-25 22:39:04.608926') OR locked_by = 'delayed_job host:myhost pid:11439') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
I, [2018-02-26T06:39:09.612201 #11439] INFO -- : 2018-02-26T06:39:09+0400: [Worker(delayed_job host:myhost pid:11439)] Error while reserving job: PG::ConnectionBad: PQsocket() can't get socket descriptor: UPDATE "delayed_jobs" SET locked_at = '2018-02-26 02:39:09.611263', locked_by = 'delayed_job host:myhost pid:11439' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2018-02-26 02:39:09.610721' AND (locked_at IS NULL OR locked_at < '2018-02-25 22:39:09.610770') OR locked_by = 'delayed_job host:myhost pid:11439') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
database.yml
production:
  adapter: postgresql
  encoding: unicode
  database: myapp
  port: 5432
  pool: 5
  username: username
  password: password
  reconnect: true
Please, can anyone explain the reason for this error and how to avoid it:
Error while reserving job: PG::ConnectionBad: PQconsumeInput() server closed the connection unexpectedly
Update:
I believe this issue is not related to delayed_job itself, since I'm getting the same error while running some normal DB queries. So the DB is restarting for some reason, and that is why the delayed_job service is stopping.
As commented by @LaurenzAlbe, below are some issues found in /var/log/postgresql/postgresql-9.3-main.log:
LOG: connection received: host=10.10.10.15 port=57322
LOG: replication connection authorized: user=MyDBUser
FATAL: must be superuser or replication role to start walsender
LOG: could not receive data from client: Connection reset by peer
LOG: disconnection: session time: 0:06:18.911 user=MyDBUser database=MyDB host=127.0.0.1 port=34040
./systemd: 36: kill: Operation not permitted
WARNING: skipping "delayed_jobs" --- only table or database owner can analyze it
After some research, I believe delayed_job has a memory leak issue, which is solved by setting config.cache_classes = true, because otherwise delayed_job keeps reloading classes every now and then.
I had the same issue where the delayed_job process used more than 90% of memory and crashed with the same error even though I had cached classes; it turned out I was running delayed_job without RAILS_ENV, which caused it to load in development and ignore the other environment's settings.
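For what it's worth, that setting lives in the environment file and is already the default in a generated production.rb, which is exactly why the worker has to be started with RAILS_ENV=production so that this is the file that gets loaded. A sketch; the start command assumes the daemon script generated by the delayed_job gem:
# config/environments/production.rb -- already the default in production
config.cache_classes = true

# Start the worker under the right environment, e.g.
#   RAILS_ENV=production bin/delayed_job start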
This problem can also be caused by:
Rails's database idle_timeout setting, which is 300 seconds by default: a connection that sits unused in the pool for longer than this is disconnected by the pool's reaper.
PgBouncer's client_idle_timeout setting, which controls how long a connection can be idle before PgBouncer closes it. If it is set too low, it can cause the same errors as Rails's idle_timeout.
PostgreSQL's idle_session_timeout setting (available from PostgreSQL 14) and idle_in_transaction_session_timeout (for sessions that are idle inside a transaction).
TCP or other network or firewall timeouts (e.g. there is a known case of such a timeout in Azure Cloud).
And to solve this problem, there are a few possible solutions:
Speed up slow processing
Increase the idle_timeout (or whichever timeout turns out to be the culprit) to a higher value so that idle connections are not closed too soon.
Add a database setting reaping_frequency: 10, so the connection pool's reaper checks for and clears dead connections more frequently instead of letting them linger.
Add manual reconnection, like the code snippet below, which releases the current connection and checks a fresh one out of the pool before saving a record to the database.
# Return the current thread's connection to the pool, then do the DB work
# on a connection checked out just for this block (it is checked back in
# automatically when the block finishes).
ActiveRecord::Base.connection_pool.release_connection
ActiveRecord::Base.connection_pool.with_connection do
  # save record in database
end
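For context, here is a sketch of how that pattern might sit inside a job's perform method; the class and model names are purely illustrative and not from the question:
class NightlyReportJob < ApplicationJob
  def perform
    # Check a connection out of the pool just for this block; it is verified
    # on checkout and returned to the pool when the block finishes, so the
    # work does not reuse a connection that went stale while the worker idled.
    ActiveRecord::Base.connection_pool.with_connection do
      Report.create!(generated_at: Time.current)
    end
  end
end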
Links:
(1)(2)(3)(4) Info about reaping_frequency
(5)(6) Info about Rails's idle_timeout
(7)(8)(9) Solutions with manual reconnection

Heroku Not Receiving Seed Data from Rails App

I deployed a Rails App to Heroku and the database migrated fine, except the db/seeds.rb file won't push to Heroku. I've tried running heroku run rake db:seed, as well as all the other solutions suggested online, but nothing is working. Heroku doesn't seem to have access to the seeds.rb file at all.
After running heroku run rake db:seed, nothing of note seems to happen:
Running rake db:seed on ⬢ top5application... up, run.1885 (Free)
D, [2017-11-02T05:42:19.665233 #4] DEBUG -- : ActiveRecord::SchemaMigration Load (1.3ms) SELECT "schema_migrations".* FROM "schema_migrations"
The Heroku logs also don't seem to contain any useful information:
1d22032e7d7e fwd="69.201.160.131" dyno=web.1 connect=0ms service=17ms status=200 bytes=4542 protocol=https
2017-11-02T05:13:34.409329+00:00 app[web.1]: I, [2017-11-02T05:13:34.409232 #4] INFO -- : [44ea0b33-64d7-4516-90d1-1d22032e7d7e] Started GET "/lists" for 69.201.160.131 at 2017-11-02 05:13:34 +0000
2017-11-02T05:13:34.410736+00:00 app[web.1]: I, [2017-11-02T05:13:34.410667 #4] INFO -- : [44ea0b33-64d7-4516-90d1-1d22032e7d7e] Processing by ListsController#index as HTML
2017-11-02T05:13:34.413696+00:00 app[web.1]: D, [2017-11-02T05:13:34.413601 #4] DEBUG -- : [44ea0b33-64d7-4516-90d1-1d22032e7d7e] List Load (0.9ms) SELECT "lists".* FROM "lists" ORDER BY average_rating DESC
2017-11-02T05:13:34.415282+00:00 app[web.1]: I, [2017-11-02T05:13:34.415205 #4] INFO -- : [44ea0b33-64d7-4516-90d1-1d22032e7d7e] Rendering lists/index.html.erb within layouts/application
2017-11-02T05:13:34.417088+00:00 app[web.1]: D, [2017-11-02T05:13:34.416900 #4] DEBUG -- : [44ea0b33-64d7-4516-90d1-1d22032e7d7e] List Load (0.8ms) SELECT "lists".* FROM "lists" ORDER BY "lists"."created_at" DESC
2017-11-02T05:13:34.418866+00:00 app[web.1]: D, [2017-11-02T05:13:34.418785 #4] DEBUG -- : [44ea0b33-64d7-4516-90d1-1d22032e7d7e] Topic Load (0.7ms) SELECT "topics".* FROM "topics"
2017-11-02T05:13:34.419839+00:00 app[web.1]: I, [2017-11-02T05:13:34.419757 #4] INFO -- : [44ea0b33-64d7-4516-90d1-1d22032e7d7e] Rendered lists/index.html.erb within layouts/application (4.3ms)
2017-11-02T05:13:34.422316+00:00 app[web.1]: D, [2017-11-02T05:13:34.422246 #4] DEBUG -- : [44ea0b33-64d7-4516-90d1-1d22032e7d7e] User Load (0.9ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 ORDER BY "users"."id" ASC LIMIT $2 [["id", 1], ["LIMIT", 1]]
2017-11-02T05:13:34.423714+00:00 app[web.1]: I, [2017-11-02T05:13:34.423652 #4] INFO -- : [44ea0b33-64d7-4516-90d1-1d22032e7d7e] Rendered layouts/_logged_in.erb (0.8ms)
2017-11-02T05:13:34.423973+00:00 app[web.1]: I, [2017-11-02T05:13:34.423915 #4] INFO -- : [44ea0b33-64d7-4516-90d1-1d22032e7d7e] Completed 200 OK in 13ms (Views: 7.0ms | ActiveRecord: 3.3ms)
I'd really appreciate any insight because this is very perplexing.

Rails and Heroku - Rake db:seed only populates some databases and not others?

I have a Rails app that I've been working on locally and have deployed to Heroku. I have a lot of data that needs to be seeded, which has worked well locally apart from a small issue that I didn't pay much mind to. Basically I have a number of tables covering:
Feature,
Addon,
Budget,
ProjectType
Industry
etc.
When I create/reset the local database and run rake db:seed, it seeds every table perfectly.
However, as it's a WIP I keep adding new tables and re-running rake db:seed, and I noticed that where it should have been doubling up all of the seeded data (because I never cleared it), it actually only duplicated the data in the Addon and Feature tables; the other tables were unchanged. I thought nothing of it until I started trying to run heroku run rake db:seed to populate my production database and noticed in the logs that it was also only populating the Addon and Feature tables and skipping over the rest.
An extract from my seed file (I've shortened the repetitive parts with ...):
#Populate the features table
Feature.destroy_all
Feature.create(id: 1, name: 'Contact form')
Feature.create(id: 2, name: 'Blog')
Feature.create(id: 3, name: 'Mailing list signup')
...
...
#Populate the addons table
Addon.destroy_all
Addon.create(id: 1, name: 'Domain registration')
Addon.create(id: 2, name: 'Hosting')
Addon.create(id: 3, name: 'Create content')
...
...
#Populating the industries table
Industry.destroy_all
Industry.create(id: 1, name: 'Accounting')
Industry.create(id: 2, name: 'Airlines')
Industry.create(id: 3, name: 'Alternative Medicine')
...
...
# Populating the budgets table
Budget.destroy_all
Budget.create(id: 1, name: 'Micro', minimum: 250, maximum: 1000)
Budget.create(id: 2, name: 'Simple', minimum: 1000, maximum: 2500)
...
...
I noticed that when I try to populate the Heroku database with heroku run rake db:seed, the data all seems to "roll back". Here is an extract from the console:
First I run heroku run rake db:migrate
D, [2016-09-01T12:24:31.463009 #3] DEBUG -- : (1.8ms) SELECT pg_try_advisory_lock(4467995963834188590);
D, [2016-09-01T12:24:31.476939 #3] DEBUG -- : ActiveRecord::SchemaMigration Load (1.9ms) SELECT "schema_migrations".* FROM "schema_migrations"
D, [2016-09-01T12:24:31.507280 #3] DEBUG -- : ActiveRecord::InternalMetadata Load (1.8ms) SELECT "ar_internal_metadata".* FROM "ar_internal_metadata" WHERE "ar_internal_metadata"."key" = $1 LIMIT $2 [["key", :environment], ["LIMIT", 1]]
D, [2016-09-01T12:24:31.518500 #3] DEBUG -- : (1.7ms) BEGIN
D, [2016-09-01T12:24:31.522494 #3] DEBUG -- : (1.7ms) COMMIT
D, [2016-09-01T12:24:31.524504 #3] DEBUG -- : (1.8ms) SELECT pg_advisory_unlock(4467995963834188590)
Then I run heroku run rake db:seed
For the tables where it doesn't work it seems to do this:
D, [2016-09-01T12:27:28.229540 #3] DEBUG -- : ActiveRecord::SchemaMigration Load (2.2ms) SELECT "schema_migrations".* FROM "schema_migrations"
D, [2016-09-01T12:27:28.261528 #3] DEBUG -- : Budget Load (2.3ms) SELECT "budgets".* FROM "budgets"
D, [2016-09-01T12:27:28.293954 #3] DEBUG -- : (1.7ms) BEGIN
D, [2016-09-01T12:27:28.320090 #3] DEBUG -- : (1.9ms) ROLLBACK
D, [2016-09-01T12:27:28.322421 #3] DEBUG -- : (1.7ms) BEGIN
D, [2016-09-01T12:27:28.325418 #3] DEBUG -- : (1.8ms) ROLLBACK
D, [2016-09-01T12:27:28.327617 #3] DEBUG -- : (1.7ms) BEGIN
D, [2016-09-01T12:27:28.331031 #3] DEBUG -- : (1.8ms) ROLLBACK
D, [2016-09-01T12:27:28.333423 #3] DEBUG -- : (1.8ms) BEGIN
D, [2016-09-01T12:27:28.338379 #3] DEBUG -- : (2.5ms) ROLLBACK
D, [2016-09-01T12:27:28.340601 #3] DEBUG -- : (1.7ms) BEGIN
D, [2016-09-01T12:27:28.344061 #3] DEBUG -- : (1.7ms) ROLLBACK
D, [2016-09-01T12:27:28.346208 #3] DEBUG -- : (1.7ms) BEGIN
D, [2016-09-01T12:27:28.349342 #3] DEBUG -- : (1.8ms) ROLLBACK
D, [2016-09-01T12:27:28.353205 #3] DEBUG -- : (3.4ms) BEGIN
D, [2016-09-01T12:27:28.358254 #3] DEBUG -- : (2.5ms) ROLLBACK
D, [2016-09-01T12:27:28.360392 #3] DEBUG -- : (1.7ms) BEGIN
D, [2016-09-01T12:27:28.363406 #3] DEBUG -- : (1.8ms) ROLLBACK
D, [2016-09-01T12:27:28.365488 #3] DEBUG -- : (1.6ms) BEGIN
D, [2016-09-01T12:27:28.367862 #3] DEBUG -- : (1.7ms) ROLLBACK
D, [2016-09-01T12:27:28.369869 #3] DEBUG -- : (1.7ms) BEGIN
D, [2016-09-01T12:27:28.372657 #3] DEBUG -- : (1.8ms) ROLLBACK
D, [2016-09-01T12:27:28.378093 #3] DEBUG -- : Industry Load (2.1ms) SELECT "industries".* FROM "industries"
D, [2016-09-01T12:27:28.393455 #3] DEBUG -- : (1.8ms) BEGIN
D, [2016-09-01T12:27:28.409125 #3] DEBUG -- : (1.7ms) ROLLBACK
D, [2016-09-01T12:27:28.411280 #3] DEBUG -- : (1.6ms) BEGIN
D, [2016-09-01T12:27:28.414223 #3] DEBUG -- : (1.6ms) ROLLBACK
D, [2016-09-01T12:27:28.416244 #3] DEBUG -- : (1.7ms) BEGIN
D, [2016-09-01T12:27:28.419335 #3] DEBUG -- : (1.7ms) ROLLBACK
D, [2016-09-01T12:27:28.421511 #3] DEBUG -- : (1.7ms) BEGIN
D, [2016-09-01T12:27:28.425412 #3] DEBUG -- : (1.7ms) ROLLBACK
D, [2016-09-01T12:27:28.427570 #3] DEBUG -- : (1.7ms) BEGIN
D, [2016-09-01T12:27:28.431136 #3] DEBUG -- : (1.7ms) ROLLBACK
D, [2016-09-01T12:27:28.433536 #3] DEBUG -- : (2.0ms) BEGIN
D, [2016-09-01T12:27:28.438208 #3] DEBUG -- : (1.7ms) ROLLBACK
D, [2016-09-01T12:27:28.440437 #3] DEBUG -- : (1.7ms) BEGIN
And then when it gets to the features and addons, it seems to work fine:
D, [2016-09-01T12:27:29.506570 #3] DEBUG -- : SQL (1.9ms) INSERT INTO "features" ("id", "name", "created_at", "updated_at") VALUES ($1, $2, $3, $4) RETURNING "id" [["id", 1], ["name", "Contact form"], ["created_at", 2016-09-01 12:27:29 UTC], ["updated_at", 2016-09-01 12:27:29 UTC]]
D, [2016-09-01T12:27:29.509515 #3] DEBUG -- : (2.5ms) COMMIT
D, [2016-09-01T12:27:29.512944 #3] DEBUG -- : (2.2ms) BEGIN
D, [2016-09-01T12:27:29.516551 #3] DEBUG -- : SQL (1.9ms) INSERT INTO "features" ("id", "name", "created_at", "updated_at") VALUES ($1, $2, $3, $4) RETURNING "id" [["id", 2], ["name", "Blog"], ["created_at", 2016-09-01 12:27:29 UTC], ["updated_at", 2016-09-01 12:27:29 UTC]]
D, [2016-09-01T12:27:29.519489 #3] DEBUG -- : (2.5ms) COMMIT
D, [2016-09-01T12:27:29.522216 #3] DEBUG -- : (2.5ms) BEGIN
D, [2016-09-01T12:27:29.526125 #3] DEBUG -- : SQL (1.9ms) INSERT INTO "features" ("id", "name", "created_at", "updated_at") VALUES ($1, $2, $3, $4) RETURNING "id" [["id", 3], ["name", "Mailing list signup"], ["created_at", 2016-09-01 12:27:29 UTC], ["updated_at", 2016-09-01 12:27:29 UTC]]
I did notice that, of the tables, Addons and Features both have a has_and_belongs_to_many association with other tables, while the tables that aren't seeding have a belongs_to association with some other tables. Not sure if that is just a coincidence? Thanks for any help!
OK, from what I can tell this appears to be an issue with seeding tables whose models have belongs_to associations. I worked around this by:
Going into each model and commenting out (#'ing) the belongs_to relationship. I then re-pushed with:
git add .
git commit -m "hashed belongs_to associations"
git push
git push heroku master
heroku run rake db:seed
When I checked the database, it seems to be perfect now. I will now uncomment the relationships, commit again, and re-push to Heroku.
Why don't you try using the find_or_create_by! method, which will first check whether the record exists and only create it if it does not:
Feature.find_or_create_by!(name: 'Contact form')
The bang version will also take care of validations, raising an error if one fails. You also do not have to specify the id here; as you are providing the id yourself, that might be what is creating the issue for you.
Let me know if you still face any issues.
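In seed-file terms, that suggestion looks something like this (a sketch using the records shown in the question; the block form only runs when the record is first created):
# db/seeds.rb -- idempotent style: re-running the seeds neither duplicates
# rows nor requires destroy_all
['Contact form', 'Blog', 'Mailing list signup'].each do |name|
  Feature.find_or_create_by!(name: name)
end

Budget.find_or_create_by!(name: 'Micro') do |budget|
  budget.minimum = 250
  budget.maximum = 1000
end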
Since Rails 5, belongs_to associations are required by default (see https://blog.bigbinary.com/2016/02/15/rails-5-makes-belong-to-association-required-by-default.html). Maybe some of the belongs_to associations of your seeded models do not exist yet, which is why Heroku won't seed those models: the presence validation fails.
One easy solution is to flag the associations as optional:
class Budget < ApplicationRecord
  belongs_to :feature, optional: true
  ....
end
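If the associations need to stay required, the other fix implied by this answer is to seed the parent records first and assign them explicitly (a sketch; the Budget belongs_to :feature association is taken from the example above and may not match the real schema):
# Create the parent first, then reference it, so the default belongs_to
# presence validation passes.
feature = Feature.find_or_create_by!(name: 'Blog')
Budget.find_or_create_by!(name: 'Simple') do |budget|
  budget.minimum = 1000
  budget.maximum = 2500
  budget.feature = feature
end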
