How to execute PostgreSQL slash commands from ActiveRecord?

I am using Rails 4. I am trying to list all PostgreSQL databases with '\l' using the 'execute' method from ActiveRecord. The connection is established correctly.
p = ActiveRecord::Tasks::PostgreSQLDatabaseTasks.new(configuration)
p.send 'establish_master_connection'
p.connection.execute('\l')
Here is the error:
ActiveRecord::StatementInvalid (PG::SyntaxError: ERROR: syntax error at or near "l"
When I run a plain SQL statement (no slash) it works fine:
connection.execute("DROP DATABASE IF EXISTS NOTHING")
=> #<PG::Result:0xc5e6094 status=PGRES_COMMAND_OK ntuples=0 nfields=0 cmd_tuples=0>
Any idea?

You can't. Commands beginning with \ are meta-commands implemented by the psql client itself; the database server doesn't know what they mean.
In the particular case of the meta-commands that list things, this normally boils down to querying tables and views in pg_catalog, e.g.
SELECT * FROM pg_catalog.pg_database
for databases, or
SELECT * FROM pg_catalog.pg_tables
for tables. These catalogs are documented in the PostgreSQL system catalogs chapter of the manual.
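For example, a minimal sketch of reproducing \l from the question's own setup, reusing the p.connection established above (the WHERE clause just hides template databases):
result = p.connection.execute(
  "SELECT datname FROM pg_catalog.pg_database WHERE NOT datistemplate ORDER BY datname"
)
result.each { |row| puts row['datname'] }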

Related

Aurora Postgresql's insert statements aren't recognizing existing key IDs

After migrating from RDS PostgreSQL to RDS Aurora PostgreSQL, I am having an issue where new inserts try to start from key ID 2 instead of continuing from the last existing record.
In my rails app, here's what I'm seeing when I try to create a new record:
ActiveRecord::RecordNotUnique (PG::UniqueViolation: ERROR: duplicate key value violates unique constraint "global_options_pkey"
DETAIL: Key (id)=(3) already exists.
):
app/controllers/schedules_controller.rb:45:in `create'
Not sure why this would be the case. To set up Aurora PSQL in Rails, I followed this tutorial: https://guides.rubyonrails.org/active_record_multiple_databases.html. It seems like everything is working fine (auto switching between reader/writer instances, etc.).
With the migration, I specifically used the AWS Schema Conversion Tool (SCT), followed by a migration with the Database Migration Service.
Is this error caused by something that was done incorrectly in the migration process, or do I need to run some post-migration process to fix this?
This is definitely a case where you migrated the old data, but not via plain INSERTs that consume the sequences, so your sequences are still at their initial values.
Use the sequence manipulation functions: check the sequence's current value (for example with currval(), or by selecting last_value from the sequence), then set it to MAX(id) with setval().
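For a single table, a minimal sketch of the manual fix from a Rails console; the sequence name global_options_id_seq is an assumption based on the Rails default for the global_options table:
ActiveRecord::Base.connection.execute(
  "SELECT setval('global_options_id_seq', (SELECT COALESCE(MAX(id), 1) FROM global_options))"
)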
To reset every table's sequence at once, you can try:
ActiveRecord::Base.connection.tables.each do |table_name|
  ActiveRecord::Base.connection.reset_pk_sequence!(table_name)
end
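On Heroku you could run that loop from a heroku run rails console session (or a one-off rake task) once the data migration has finished.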

How do I override ActiveModel type processing in Rails (JRuby)?

I'm trying to create a rails model around an oracle table that has a column "SESSION_XML" using Oracle's XMLTYPE. Whenever I attempt to use the model to get data from the db, the connection adapter responds with:
SELECT "ZC_SESSION_DATA".* FROM "ZC_SESSION_DATA" WHERE ROWNUM <= 1
ActiveRecord::StatementInvalid: Java::JavaLang::NoClassDefFoundError: oracle/xdb/XMLType:SELECT "ZC_SESSION_DATA".* FROM "ZC_SESSION_DATA" WHERE ROWNUM <= 1
from oracle.jdbc.oracore.OracleTypeADT.applyTDSpatches(OracleTypeADT.java:1081)
It seems clear to me that Java (I'm using Rails on top of JRuby) is complaining that it doesn't have a type with which to parse the XMLTYPE column, so my question is this: how can I force Rails to interpret the XMLTYPE as a flat string?
I'm fine with needing to parse the XML myself, but how do I get the adapter to stop trying to parse it?
This is an Oracle problem, not really an ActiveRecord problem:
https://community.oracle.com/thread/485754?start=0&tstart=0
TL;DR there's a jar that isn't being loaded that defines XMLType.
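A hedged sketch of what loading the missing jars from JRuby might look like; the jar names and paths are assumptions and depend on your Oracle client installation (they can also simply be added to the application classpath):
# e.g. config/initializers/oracle_xml_jars.rb (hypothetical file)
require 'java'
# xdb6.jar (or xdb.jar) defines oracle.xdb.XMLType; xmlparserv2.jar is the XML parser it depends on
require '/path/to/oracle/lib/xdb6.jar'
require '/path/to/oracle/lib/xmlparserv2.jar'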

Postgresql dump not going to correct database upon importing data

My goal is to import the .sql dump from my server into the project_devel database. I've run into a frustrating problem where the -d [databasename] option seems to be ignored when I load data from a .sql file. Instead (as you can see about 12 lines down into the output), the .sql file may be telling the importer to import into another database. Any clue on why this is happening and how to force the data to go into the database I specify with -d?
I finally put a band-aid on it by changing my development database name in database.yml to project_prod, as my goal here is just to load the production data locally and debug something.
Command I use to import:
YeastFlakes:newproject new$ psql -h /tmp -d project_devel -f prod_dump_2013-02-07_09-00.sql
Output:
You are now connected to database "postgres" as user "new".
SET
SET
SET
psql:prod_dump_2013-02-07_09-00.sql:15: ERROR: role "project" already exists
ALTER ROLE
psql:prod_dump_2013-02-07_09-00.sql:17: ERROR: role "postgres" already exists
ALTER ROLE
psql:prod_dump_2013-02-07_09-00.sql:19: ERROR: role "replication" already exists
ALTER ROLE
psql:prod_dump_2013-02-07_09-00.sql:31: ERROR: database "project_prod" already exists
REVOKE
REVOKE
GRANT
GRANT
You are now connected to database "project_prod" as user "new".
SET
SET
SET
SET
SET
SET
CREATE EXTENSION
COMMENT
SET
SET
SET
psql:prod_dump_2013-02-07_09-00.sql:85: ERROR: relation "active_admin_comments" already exists
ALTER TABLE
psql:prod_dump_2013-02-07_09-00.sql:99: ERROR: relation "active_admin_comments_id_seq" already exists
output continues for a while longer....
The PostgreSQL documentation seems to take a very simple approach to SQL dump/restore:
Dump:
$ pg_dump dbname > outfile
Restore:
$ psql dbname < infile
Any particular reason not to use that method?
It fairly clearly says at the top of your output:
You are now connected to database "postgres" as user "new".
...
You are now connected to database "project_prod" as user "new".
If you look at the first twenty or so lines of your backup, I don't suppose you see anything looking like a "connect" command? Possibly to database "postgres" as user "new", followed by another to database "project_prod", again as user "new"?
Oh - and if you're not going to actually look inside the file, there's no point in dumping as raw SQL. Might as well use the "custom" format which is compressed and gives you the option of restoring individual/selected elements with pg_restore. See the extensive and detailed manuals for details.
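As a hedged sketch of that workflow in a small Ruby script (database and file names follow the question; the flags are standard pg_dump/pg_restore options):
# Dump the production database in custom format (compressed, selectively restorable)...
system('pg_dump -Fc -f prod_dump.custom project_prod')
# ...then restore it into the local development database with pg_restore.
system('pg_restore --no-owner --dbname=project_devel prod_dump.custom')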

Rails and Heroku PGError: column does not exist

This page I have been developing for my app has been working fine locally (using SQLite3), but when I push it to Heroku, which uses PostgreSQL, I get this error:
NeighborhoodsController# (ActionView::Template::Error) "PGError: ERROR: column \"isupforconsideration\" does not exist\nLINE 1: ... \"photos\" WHERE (neighborhood = 52 AND isUpForCon...\n
From this line of code:
@photos = Photo.where(["neighborhood = ? AND isUpForConsideration = ?", @neighborhood.id, 1])
isUpForConsideration is definitely a column on Photo. All my migrations are up to date, and when I pull the db back down locally isUpForConsideration is still there, and the app still works locally.
I've also tried:
@photos = @neighborhood.photos(:conditions => {:isUpForConsideration => 1})
and
@photos = @neighborhood.photos.where(["isUpForConsideration = 1"])
Which gives me this error:
NeighborhoodsController# (ActionView::Template::Error) "PGError: ERROR: column \"isupforconsideration\" does not exist\nLINE 1: ...tos\" WHERE (\"photos\".neighborhood = 52) AND (isUpForCon...\n
Any idea what I could be doing wrong?
Your problem is that table and column names are case sensitive in PostgreSQL. This is normally hidden by automatically converting these to all-lowercase when making queries (hence why you see the error message reporting "isupforconsideration"), but if you managed to dodge that conversion on creation (by double quoting the name, like Rails does for you when you create a table), you'll see this weirdness. You need to enclose "isUpForConsideration" in double quotes when using it in a WHERE clause to fix this.
e.g.
@photos = @neighborhood.photos.where(["\"isUpForConsideration\" = 1"])
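If you'd rather not hand-escape the quotes, a small sketch using the connection's quote_column_name helper (a standard ActiveRecord adapter method) gives the same result:
col = Photo.connection.quote_column_name('isUpForConsideration')  # => the identifier wrapped in double quotes
@photos = @neighborhood.photos.where("#{col} = ?", 1)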
Another way to get this error is by modifying a migration and pushing the changes up to Heroku without rebuilding the table.
Caveat: You'll lose data and perhaps references, so what I'm about to explain is a bad idea unless you're certain that no references will be lost. Rails provides ways to modify tables with migrations -- generally, create new migrations to modify tables rather than editing migrations after they're created.
Having said that, you can run heroku run rake db:rollback until that table you changed pops off, and then run heroku run rake db:migrate to put it back with your changes.
In addition, you can use the taps gem to back up and restore data. Pull down the database tables, munge them the way you need and then push the tables back up with taps. I do this quite often during the prototyping phase. I'd never do that with a live app though.

Does schema_search_path in database.yml for a Postgres Rails app ignore case?

I have to connect to a legacy DB which has a schema called "Financeiro".
I setup my database.yml to:
...
schema_search_path: Financeiro
...
When ActiveRecord tries to find something I get the following error:
ActiveRecord::StatementInvalid: RuntimeError: ERROR C3F000 Mschema "financeiro" does not exist F.\src\backend\catalog\namespace.c L2898 Rassign_search_path: SET search_path TO Financeiro from
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.0/lib/active_record/connection_adapters/abstract_adapter.rb:147:in `log'
Note that in the error message I get financeiro (lowercase).
If I rename the schema to lowercase financeiro it works, but that is not possible in the production environment.
try "Financeiro" with the quotes. If that makes it through into the SQL, it should work.
In general, the SQL standard requires identifiers to be case insensitive, unless quoted.
You may have to do some special quoting to get the double-quotes through from the yml file to the actual SQL. I don't know Rails/Ruby at all.
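A hedged sketch of the two quoting options in Rails terms; the database.yml value is an assumption about how the double quotes need to survive into SET search_path, while schema_search_path= is a real method on the PostgreSQL adapter:
# In database.yml, wrap the mixed-case identifier in double quotes inside the YAML value:
#   schema_search_path: '"Financeiro"'
# Or set it after connecting:
ActiveRecord::Base.connection.schema_search_path = '"Financeiro"'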
