Heroku PostgreSQL restore errors

I'm trying to restore a previous DB backup to a Heroku app. The restore reports both errors and success, and I'm not sure how to proceed while keeping the data intact.
$ heroku pg:backups:restore b162 DATABASE_URL
Starting restore of b162 to postgresql-snug-62365... done
Use Ctrl-C at any time to stop monitoring progress; the backup will continue restoring.
Use heroku pg:backups to check progress.
Stop a running restore with heroku pg:backups:cancel.
Restoring... !
▸ An error occurred and the backup did not finish.
▸
▸ pg_restore: creating FK CONSTRAINT "public.active_storage_attachments fk_rails_c3b3935055"
▸ pg_restore: warning: errors ignored on restore: 128
▸ pg_restore finished with errors
▸ waiting for download to complete
▸ download finished successfully
▸
▸ Run heroku pg:backups:info r167 for more details.
heroku run rails db:migrate:status tells me this data is one migration behind, so I've tried running migrations, but that fails:
rails aborted!
StandardError: An error has occurred, this and all later migrations canceled:
PG::UndefinedTable: ERROR: relation "agencies" does not exist
So if running migrations can't, well, run the migration...what to do?
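For completeness, a rough sketch of the commands I ran (output paraphrased; I'm on the app's default git remote, so no --app flag):
$ heroku run rails db:migrate:status   # reports one migration "down"
$ heroku run rails db:migrate          # aborts with the PG::UndefinedTable error above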
Edit: I got a little farther by resetting the DB, running rails db:schema:load, and then trying pg:backups:restore again (the exact sequence is sketched after the log below). This churned for a few minutes (instead of failing quickly as before) but still failed with 128 similar errors:
2022-10-05 19:42:02 +0000 pg_restore: creating COMMENT "EXTENSION pg_stat_statements"
2022-10-05 19:42:02 +0000 pg_restore: from TOC entry 3850; 0 0 COMMENT EXTENSION pg_stat_statements
2022-10-05 19:42:02 +0000 pg_restore: error: could not execute query: ERROR: extension "pg_stat_statements" does not exist
2022-10-05 19:42:02 +0000 Command was: COMMENT ON EXTENSION pg_stat_statements IS 'track planning and execution statistics of all SQL statements executed';
...
2022-10-05 19:42:05 +0000 pg_restore: creating INDEX "public.index_users_on_remember_token"
2022-10-05 19:42:05 +0000 pg_restore: from TOC entry 3655; 1259 29170 INDEX index_users_on_remember_token postgres
2022-10-05 19:42:05 +0000 pg_restore: error: could not execute query: ERROR: relation "public.users" does not exist
2022-10-05 19:42:05 +0000 Command was: CREATE INDEX index_users_on_remember_token ON public.users USING btree (remember_token);
2022-10-05 19:42:05 +0000
2022-10-05 19:42:05 +0000
2022-10-05 19:42:05 +0000 pg_restore: creating INDEX "public.index_vaults_on_owner_id_and_owner_type"
2022-10-05 19:42:05 +0000 pg_restore: from TOC entry 3659; 1259 29172 INDEX index_vaults_on_owner_id_and_owner_type postgres
2022-10-05 19:42:05 +0000 pg_restore: error: could not execute query: ERROR: relation "public.vaults" does not exist
2022-10-05 19:42:05 +0000 Command was: CREATE INDEX index_vaults_on_owner_id_and_owner_type ON public.vaults USING btree (owner_id, owner_type);
2022-10-05 19:42:05 +0000
2022-10-05 19:42:05 +0000
2022-10-05 19:42:05 +0000 pg_restore: creating INDEX "public.read_marks_reader_readable_index"
2022-10-05 19:42:05 +0000 pg_restore: creating INDEX "public.taggings_idx"
2022-10-05 19:42:05 +0000 pg_restore: creating INDEX "public.taggings_idy"
2022-10-05 19:42:05 +0000 pg_restore: creating INDEX "public.taggings_taggable_context_idx"
2022-10-05 19:42:05 +0000 pg_restore: creating INDEX "public.unique_schema_migrations"
2022-10-05 19:42:05 +0000 pg_restore: creating FK CONSTRAINT "public.active_storage_variant_records fk_rails_993965df15"
2022-10-05 19:42:05 +0000 pg_restore: from TOC entry 3663; 2606 29178 FK CONSTRAINT active_storage_variant_records fk_rails_993965df05 postgres
2022-10-05 19:42:05 +0000 pg_restore: error: could not execute query: ERROR: relation "public.active_storage_variant_records" does not exist
2022-10-05 19:42:05 +0000 Command was: ALTER TABLE ONLY public.active_storage_variant_records
2022-10-05 19:42:05 +0000 ADD CONSTRAINT fk_rails_993965df15 FOREIGN KEY (blob_id) REFERENCES public.active_storage_blobs(id);
2022-10-05 19:42:05 +0000
2022-10-05 19:42:05 +0000
2022-10-05 19:42:05 +0000 pg_restore: creating FK CONSTRAINT "public.taggings fk_rails_9fcd2e234b"
2022-10-05 19:42:05 +0000 pg_restore: creating FK CONSTRAINT "public.active_storage_attachments fk_rails_c3b3935257"
2022-10-05 19:42:05 +0000 pg_restore: warning: errors ignored on restore: 128
2022-10-05 19:42:05 +0000 pg_restore finished with errors
2022-10-05 19:42:05 +0000 waiting for download to complete
2022-10-05 19:42:05 +0000 download finished successfully
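For reference, a rough sketch of that reset-then-restore sequence (the app name is a placeholder; heroku pg:reset requires a --confirm flag matching your app name):
$ heroku pg:reset DATABASE_URL --app my-app --confirm my-app   # wipe the database
$ heroku run rails db:schema:load --app my-app                 # rebuild the schema from schema.rb
$ heroku pg:backups:restore b162 DATABASE_URL --app my-app     # retry the restore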
Thoughts?

Related

PostgreSQL crash after migrating to containerd

I have a k8s cluster (1.22.3) with a Harbor installation (2.5.0, installed via Helm chart 1.9.0).
Harbor is configured to use its internal database, and everything worked fine.
Some time ago I removed Docker from all nodes and reconfigured k8s to use containerd directly (based on https://kruyt.org/migrate-docker-containerd-kubernetes/).
All services work normally after that, but the PostgreSQL pod for Harbor crashes periodically.
In the pod's log I see the following:
2022-04-26 09:26:35.794 UTC [1] LOG: database system is ready to accept connections
2022-04-26 09:31:42.391 UTC [1] LOG: server process (PID 361) exited with exit code 141
2022-04-26 09:31:42.391 UTC [1] LOG: terminating any other active server processes
2022-04-26 09:31:42.391 UTC [374] WARNING: terminating connection because of crash of another server process
2022-04-26 09:31:42.391 UTC [374] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-04-26 09:31:42.391 UTC [374] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-04-26 09:31:42.391 UTC [364] WARNING: terminating connection because of crash of another server process
2022-04-26 09:31:42.391 UTC [364] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-04-26 09:31:42.391 UTC [364] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-04-26 09:31:42.391 UTC [245] WARNING: terminating connection because of crash of another server process
2022-04-26 09:31:42.391 UTC [245] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-04-26 09:31:42.391 UTC [245] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-04-26 09:31:42.391 UTC [157] WARNING: terminating connection because of crash of another server process
2022-04-26 09:31:42.391 UTC [157] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-04-26 09:31:42.391 UTC [157] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-04-26 09:31:42.391 UTC [22] WARNING: terminating connection because of crash of another server process
2022-04-26 09:31:42.391 UTC [22] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-04-26 09:31:42.391 UTC [22] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-04-26 09:31:42.391 UTC [123] WARNING: terminating connection because of crash of another server process
2022-04-26 09:31:42.391 UTC [123] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-04-26 09:31:42.391 UTC [123] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-04-26 09:31:42.391 UTC [244] WARNING: terminating connection because of crash of another server process
2022-04-26 09:31:42.391 UTC [244] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-04-26 09:31:42.391 UTC [244] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-04-26 09:31:42.392 UTC [243] WARNING: terminating connection because of crash of another server process
2022-04-26 09:31:42.392 UTC [243] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-04-26 09:31:42.392 UTC [243] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-04-26 09:31:42.392 UTC [246] WARNING: terminating connection because of crash of another server process
2022-04-26 09:31:42.392 UTC [246] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-04-26 09:31:42.392 UTC [246] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-04-26 09:31:42.432 UTC [69] WARNING: terminating connection because of crash of another server process
2022-04-26 09:31:42.432 UTC [69] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-04-26 09:31:42.432 UTC [69] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2022-04-26 09:31:43.031 UTC [375] FATAL: the database system is in recovery mode
2022-04-26 09:31:43.532 UTC [376] LOG: PID 243 in cancel request did not match any process
2022-04-26 09:31:46.992 UTC [1] LOG: all server processes terminated; reinitializing
2022-04-26 09:31:47.545 UTC [377] LOG: database system was interrupted; last known up at 2022-04-26 09:26:35 UTC
2022-04-26 09:31:47.545 UTC [378] LOG: PID 245 in cancel request did not match any process
2022-04-26 09:31:50.472 UTC [388] FATAL: the database system is in recovery mode
2022-04-26 09:31:50.505 UTC [398] FATAL: the database system is in recovery mode
2022-04-26 09:31:52.283 UTC [399] FATAL: the database system is in recovery mode
2022-04-26 09:31:56.528 UTC [400] LOG: PID 246 in cancel request did not match any process
2022-04-26 09:31:58.357 UTC [377] LOG: database system was not properly shut down; automatic recovery in progress
2022-04-26 09:31:59.367 UTC [377] LOG: redo starts at 0/63EFC050
2022-04-26 09:31:59.385 UTC [377] LOG: invalid record length at 0/63F6D038: wanted 24, got 0
2022-04-26 09:31:59.385 UTC [377] LOG: redo done at 0/63F6D000
2022-04-26 09:32:00.480 UTC [410] FATAL: the database system is in recovery mode
2022-04-26 09:32:00.511 UTC [420] FATAL: the database system is in recovery mode
2022-04-26 09:32:00.523 UTC [1] LOG: received smart shutdown request
2022-04-26 09:32:04.946 UTC [1] LOG: abnormal database system shutdown
2022-04-26 09:32:05.139 UTC [1] LOG: database system is shut down
In the pod's events I see messages about liveness/readiness probe failures.
There is no resource problem (no memory limit, no storage limit, CPU almost idle),
so I think there is some misconfiguration in containerd, because with Docker everything worked fine.
env info:
k8s: 1.22.3
os: ubuntu 20.04
containerd: 1.5.5
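A minimal sketch for pulling the probe configuration and failure events (the pod and namespace names are placeholders; adjust to your Harbor release):
$ kubectl -n harbor get pods                         # find the database pod
$ kubectl -n harbor describe pod harbor-database-0   # probe settings plus recent events
$ kubectl -n harbor get events --field-selector involvedObject.name=harbor-database-0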

What does this error mean when submitting to Apple's App Store?

I'm new to Objective-C.
I'm struggling with an error while submitting to the App Store. This is my first submission, so I don't know what to do about it. There were many errors, but I've summarized them below. I masked private information with 'x'.
======================log======================
2017-07-11 10:34:09 +0000 Rejecting: (Offending Keys: (
"beta-reports-active" ))
2017-07-11 10:34:09 +0000 Filtering based on profile type: app-store
2017-07-11 10:34:09 +0000 Accepting:
2017-07-11 10:34:09 +0000 Accepting:
2017-07-11 10:34:09 +0000 selecting top entry in sorted array: ("",""
)
2017-07-11 10:34:19 +0000 [MT] Presenting: Error Domain=IDEFoundationErrorDomain Code=1 "Symbols tool failed" UserInfo={NSLocalizedDescription=Symbols tool failed}
2017-07-11 10:34:22 +0000 [MT] Proceeding to distribution step IDEDistributionResultViewController, context: ', distributionTask(resolved)='1', distributionMethod(resolved)='', teamID(resolved)='xxxxxxxx'>
Chain (10, self inclusive):
', distributionMethod='', teamID='(null)'>
2017-07-11 10:34:22 +0000 [MT] Upload failed for archive com.sbi.mid.iOS with issues:("~~~~~~")

CouchDB/Couchrest Errno::ECONNREFUSED Connection Refused - connect(2) error

At work, we have about 1500 test cases, and we manually clean the database using the DB.recreate! method before each test. When running all tests with bundle exec rake spec, the full suite rarely passes. A number of tests towards the end of the suite fail with "Errno::ECONNREFUSED Connection Refused - connect(2)" errors.
Any help would be much appreciated!
I am using CouchDB 1.3.1, Ubuntu 12.04 LTS, Ruby 1.9.3, and Rails 3.2.12.
Thanks,
EDIT
I looked at the log file more carefully and matched the time tests started failing against the error messages generated in the CouchDB log.
[Fri, 16 Aug 2013 19:39:46 GMT] [error] [<0.23790.0>] ** Generic server <0.23790.0> terminating
** Last message in was {'EXIT',<0.23789.0>,killed}
** When Server state == {file,{file_descriptor,prim_file,{#Port<0.14445>,20}},
79}
** Reason for termination ==
** killed
[Fri, 16 Aug 2013 19:39:46 GMT] [error] [<0.23790.0>] {error_report,<0.31.0>,
{<0.23790.0>,crash_report,
[[{initial_call,{couch_file,init,['Argument__1']}},
{pid,<0.23790.0>},
{registered_name,[]},
{error_info,
{exit,killed,
[{gen_server,terminate,6},
{proc_lib,init_p_do_apply,3}]}},
{ancestors,[<0.23789.0>]},
{messages,[]},
{links,[]},
{dictionary,[]},
{trap_exit,true},
{status,running},
{heap_size,377},
{stack_size,24},
{reductions,916}],
[]]}}
[Fri, 16 Aug 2013 19:39:46 GMT] [error] [<0.23808.0>] {error_report,<0.31.0>,
{<0.23808.0>,crash_report,
[[{initial_call,
{couch_ref_counter,init,['Argument__1']}},
{pid,<0.23808.0>},
{registered_name,[]},
{error_info,
{exit,
{noproc,
[{erlang,link,[<0.23790.0>]},
{couch_ref_counter,'-init/1-lc$^0/1-0-',1},
{couch_ref_counter,init,1},
{gen_server,init_it,6},
{proc_lib,init_p_do_apply,3}]},
[{gen_server,init_it,6},
{proc_lib,init_p_do_apply,3}]}},
{ancestors,[<0.23793.0>,<0.23792.0>,<0.23789.0>]},
{messages,[]},
{links,[]},
{dictionary,[]},
{trap_exit,false},
{status,running},
{heap_size,377},
{stack_size,24},
{reductions,114}],
[]]}}
[Fri, 16 Aug 2013 19:39:46 GMT] [error] [<0.103.0>] ** Generic server <0.103.0> terminating
** Last message in was {'EXIT',<0.88.0>,killed}
** When Server state == {db,<0.103.0>,<0.104.0>,nil,<<"1376681645837889">>,
<0.106.0>,<0.102.0>,<0.107.0>,
{db_header,6,1,0,
{1856,{1,0,1777},95},
{1951,1,83},
nil,0,nil,nil,1000},
1,
{btree,<0.102.0>,
{1856,{1,0,1777},95},
#Fun<couch_db_updater.10.55895019>,
#Fun<couch_db_updater.11.100913286>,
#Fun<couch_btree.5.25288484>,
#Fun<couch_db_updater.12.39068440>,snappy},
{btree,<0.102.0>,
{1951,1,83},
#Fun<couch_db_updater.13.114276184>,
#Fun<couch_db_updater.14.2340873>,
#Fun<couch_btree.5.25288484>,
#Fun<couch_db_updater.15.23651859>,snappy},
{btree,<0.102.0>,nil,
#Fun<couch_btree.3.20686015>,
#Fun<couch_btree.4.73514747>,
#Fun<couch_btree.5.25288484>,nil,snappy},
1,<<"_users">>,"/var/lib/couchdb/_users.couch",
[#Fun<couch_doc.8.106888048>],
[],nil,
{user_ctx,null,[],undefined},
nil,1000,
[before_header,after_header,on_file_open],
[create,
{before_doc_update,
#Fun<couch_users_db.before_doc_update.2>},
{after_doc_read,
#Fun<couch_users_db.after_doc_read.2>},
sys_db,
{user_ctx,
{user_ctx,null,[<<"_admin">>],undefined}},
nologifmissing,sys_db],
snappy,#Fun<couch_users_db.before_doc_update.2>,
#Fun<couch_users_db.after_doc_read.2>}
** Reason for termination ==
** killed
Ah... the power of the community. I got the following answer from someone on the CouchDB mailing list.
In short, the solution is to change the delayed_commits value to false. It's set to true by default, and rapidly recreating multiple databases at the beginning of each test case was creating a race condition (deleting a non-existent DB, etc.).
This definitely solved my problem.
One caveat is that it has doubled our test duration. That's another problem to tackle, but for now, I am happy with all passing tests.
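For reference, a minimal sketch of flipping that option in CouchDB 1.x: it's the delayed_commits key in the [couchdb] section of local.ini, and it can also be set at runtime through the _config API (host, port, and admin credentials here are assumptions):
$ curl -X PUT http://admin:password@127.0.0.1:5984/_config/couchdb/delayed_commits -d '"false"'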

Rails with PostgreSQL reconnecting multiple times per request

I recently encountered a problem where Rails is dropping connections to my PostgreSQL server, causing multiple reconnects per request and significantly slowing down requests. I am currently running everything locally on Mac OS X with the following environment:
Mac OS X 10.8 (Mountain Lion)
Pow
Nginx (SSL unwrapping; the app shows the same reconnect behavior even with this disabled)
PostgreSQL 9.2.2 via Homebrew
Rails 3.2.11
Here is my database.yml: (database and username redacted)
development:
  adapter: postgresql
  encoding: unicode
  database: <database>
  pool: 5
  username: <user>
  password:
This is the output from development.log during a typical request: (specific models and renders redacted)
Connecting to database specified by database.yml
Started GET "/" for 127.0.0.1 at 2013-02-05 12:25:38 -0800
Processing by HomeController#index as HTML
... <app performs model loads and render calls>
Completed 200 OK in 314ms (Views: 196.0ms | ActiveRecord: 60.9ms)
Connecting to database specified by database.yml
Connecting to database specified by database.yml
I also enabled statement logging in Postgres to get better insight into what exactly was happening on the database side during a given request. Here is the output from postgresql-<date>.log: (queries specific to my app redacted)
LOG: connection received: host=[local]
LOG: connection authorized: user=<user> database=<database>
LOG: statement: set client_encoding to 'UTF8'
LOG: statement: set client_encoding to 'unicode'
LOG: statement: SHOW client_min_messages
LOG: statement: SET client_min_messages TO 'panic'
LOG: statement: SET standard_conforming_strings = on
LOG: statement: SET client_min_messages TO 'notice'
LOG: statement: SET time zone 'UTC'
LOG: statement: SHOW TIME ZONE
LOG: statement: SELECT 1
LOG: statement: SELECT 1
... <app makes queries for request>
LOG: disconnection: session time: 0:00:01.346 user=<user> database=<database> host=[local]
LOG: connection received: host=[local]
LOG: connection authorized: user=<user> database=<database>
LOG: statement: set client_encoding to 'UTF8'
LOG: statement: set client_encoding to 'unicode'
LOG: statement: SHOW client_min_messages
LOG: statement: SET client_min_messages TO 'panic'
LOG: statement: SET standard_conforming_strings = on
LOG: statement: SET client_min_messages TO 'notice'
LOG: statement: SET time zone 'UTC'
LOG: statement: SHOW TIME ZONE
LOG: statement: SELECT 1
LOG: connection received: host=[local]
LOG: connection authorized: user=<user> database=<database>
LOG: statement: set client_encoding to 'UTF8'
LOG: statement: set client_encoding to 'unicode'
LOG: statement: SHOW client_min_messages
LOG: statement: SET client_min_messages TO 'panic'
LOG: statement: SET standard_conforming_strings = on
LOG: statement: SET client_min_messages TO 'notice'
LOG: statement: SET time zone 'UTC'
LOG: statement: SHOW TIME ZONE
LOG: statement: SELECT 1
LOG: statement: SELECT 1
LOG: statement: SELECT 1
LOG: disconnection: session time: 0:00:00.750 user=<user> database=<database> host=[local]
LOG: disconnection: session time: 0:00:00.733 user=<user> database=<database> host=[local]
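For reference, the statement logging above was enabled roughly like this (the data directory path assumes a default Homebrew install):
# in /usr/local/var/postgres/postgresql.conf:
#   log_statement = 'all'
$ pg_ctl -D /usr/local/var/postgres reload   # pick up the config change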
I'm happy to update with relevant configs or log output from other commands. Also, why all the SELECT 1 calls? What is their purpose?
Thanks!
The answer was to upgrade PostgreSQL to a newer version. I upgraded from 9.2.2 (as used in the original question) to 9.2.4, and the problem went away entirely, with expected performance characteristics returning to my development application.
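A minimal sketch of that upgrade with Homebrew (the restart command assumes the default Homebrew data directory):
$ brew update
$ brew upgrade postgresql
$ pg_ctl -D /usr/local/var/postgres restart
As for the SELECT 1 calls: that appears to be ActiveRecord's connection liveness check, issued when it verifies a pooled connection before handing it out.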

Rails errors causing Apache / Passenger internal server errors

OK, I thought I was close to getting Passenger and Apache working. After navigating to the URL to see if my Rails app was working, I noticed that some gems were not installed. The Passenger error page let me know which gems were missing, so I got them installed.
Now when I go to the URL I get a 500 Apache internal server error page with no helpful info, so I checked the log file on the server, and here is what I see.
Rails Error: Unable to access log file. Please ensure that /home/mydirectory/dev/vb/log/production.log exists and is chmod 0666. $
Rack: /home/mydirectory/dev/vb: symbol lookup error: /usr/local/rvm/gems/ruby-1.9.2-p0#prodset/gems/sqlite3-ruby-1.2.4/lib/sqlite$
[Tue Dec 07 20:12:17 2010] [error] [client 64.58.208.22] Premature end of script headers:
[ pid=20653 thr=140618873321280 file=ext/apache2/Hooks.cpp:816 time=2010-12-07 20:12:17.617 ]: The backend application (proce$
Rack: /home/mydirectory/dev/vb: symbol lookup error: /usr/local/rvm/gems/ruby-1.9.2-p0#prodset/gems/sqlite3-ruby-1.2.4/lib/sqlite$
[Tue Dec 07 20:12:43 2010] [error] [client 64.58.208.22] Premature end of script headers:
Rack: /home/mydirectory/dev/vb: symbol lookup error: /usr/local/rvm/gems/ruby-1.9.2-p0#prodset/gems/sqlite3-ruby-1.2.4/lib/sqlite$
[Tue Dec 07 20:13:25 2010] [error] [client 64.58.208.22] Premature end of script headers:
[ pid=21932 thr=140618873321280 file=ext/apache2/Hooks.cpp:816 time=2010-12-07 20:13:25.168 ]: The backend application (proce$
Rack: /home/mydirectory/dev/vb: symbol lookup error: /usr/local/rvm/gems/ruby-1.9.2-p0#prodset/gems/sqlite3-ruby-1.2.4/lib/sqlite$
[Tue Dec 07 20:13:31 2010] [error] [client 64.58.208.22] Premature end of script headers:
[ pid=20623 thr=140618873321280 file=ext/apache2/Hooks.cpp:816 time=2010-12-07 20:13:31.266 ]: The backend application (proce$
Rails Error: Unable to access log file. Please ensure that /home/mydirectory/dev/vb/log/production.log exists and is chmod 0666. $
Rack: /home/mydirectory/dev/vb: symbol lookup error: /usr/local/rvm/gems/ruby-1.9.2-p0#prodset/gems/sqlite3-ruby-1.2.4/lib/sqlite$
[Tue Dec 07 20:24:56 2010] [error] [client 64.58.208.22] Premature end of script headers:
[ pid=20622 thr=140618873321280 file=ext/apache2/Hooks.cpp:816 time=2010-12-07 20:24:56.442 ]: The backend application (proce$
Rack: /home/mydirectory/dev/vb: symbol lookup error: /usr/local/rvm/gems/ruby-1.9.2-p0#prodset/gems/sqlite3-ruby-1.2.4/lib/sqlite$
Does anyone have suggestions on what I should look at next? I have tried running Bundler and also using RVM to install sqlite3, and I still have the same issue.
Thanks again for any help.
Did you check the suggestion on the first line of the error log?
Rails Error: Unable to access log file. Please ensure that /home/mydirectory/dev/vb/log/production.log exists and is chmod 0666.
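A minimal sketch of following that suggestion (the path is taken from the log line itself):
$ touch /home/mydirectory/dev/vb/log/production.log
$ chmod 0666 /home/mydirectory/dev/vb/log/production.log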
