Stored procedures with unqualified table names not working with Babelfish

I have created a Babelfish-enabled Postgres database in RDS.
I connected with SSMS and created a Database named 'demo'.
Within 'demo' I created a Schema named 'biz'.
I created my tables and stored procedures in the 'biz' schema.
The stored procedures used unqualified table names.
Finally, I wrote a .NET program to do some testing.
I use the System.Data.SqlClient Connection and Command classes and I can connect to the database.
When I execute a stored procedure I get the 'relation "X" does not exist.' error.
If I alter my stored procedure and qualify the table names with the 'biz' schema the error goes away.
How do I avoid having to qualify the table names with the schema?
For example:
After creating a Babelfish-enabled Postgres cluster I executed these statements in SSMS:
create database demo
use demo
create schema biz
create table [biz].[cities] (
    [city]  varchar(128),
    [state] varchar(128)
)
create procedure [biz].[p_getcities] as
begin
    select * from cities
end
insert into [biz].[cities](city, state) values ('Portland', 'OR')
insert into [biz].[cities](city, state) values ('Richmond', 'VA')
exec [biz].p_getcities
And I get this error message after running p_getcities:
Msg 33557097, Level 16, State 1, Line 21
relation "cities" does not exist
When I switch to pgAdmin and try to run the stored procedure like this:
CALL biz.p_getcities()
I get a similar error:
ERROR: relation "cities" does not exist
LINE 1: select * from cities
^
QUERY: select * from cities
CONTEXT: PL/tsql function biz.p_getcities() line 2 at SQL statement
SQL state: 42P01
However, when I set the search_path like this:
set search_path to biz
And then execute the stored procedure, I get the expected results:
Portland OR
Richmond VA
Is there an equivalent to search_path in Babelfish?

This explanation has been provided by Rob Verschoor of rcv-aws
What is happening here is that name resolution inside the procedure biz.p_getcities does not correctly resolve the table name: it resolves it to the 'dbo' schema when it should resolve it to the 'biz' schema. As you noted, this is related to the search_path setting, which is not set correctly in this case.
This is a known bug and we hope to fix it soon.
Until then, the workaround is to qualify the table name with the schema name, i.e. select * from biz.cities
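Applied to the example above, a minimal sketch of that workaround is simply to re-create the procedure with the table name schema-qualified:
drop procedure [biz].[p_getcities]
create procedure [biz].[p_getcities] as
begin
    select * from [biz].[cities]
end
After that, exec [biz].p_getcities returns the expected rows without having to set search_path.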

Related

TinyTds::Error: Cannot insert the value NULL into column 'ID'

My Ruby on Rails system is moving from Oracle to Microsoft SQL Server 2012. The back-end database has already been converted by a third party from Oracle to Microsoft SQL Server. I have no control over the schema structure. This cannot be changed.
Using activerecord-sqlserver-adapter, tiny_tds and Freetds I can connect to the new database. Most DB requests work well.
However, many of the tables have ID primary keys that are auto incremented by SQL Server, in other words, PK IDENTITY(1,1) columns. The same code that worked with Oracle connections fails under SQL Server with the following error:
TinyTds::Error: Cannot insert the value NULL into column 'ID'
I know why it is doing this, as the ID column is indeed a primary key and an IDENTITY(1,1) column with a NOT NULL constraint. That is fine.
Inserting into the table works fine when using raw SQL execute statements where, of course, I exclude the ID column from the INSERT statement.
However, I have spent days googling and I cannot find a way of telling Ruby on Rails not to try to save the ID column.
So I have an instance of a class, @book, that contains
Library::Book(id: integer, title: string, isbn: string ...)
When I do a @book.save! it generates the error above.
@book = Library::Book.new( .. )
# ...
@book.save!
TinyTds::Error: Cannot insert the value NULL into column 'ID'
Rather than resort to bare-metal SQL, how do I do things in a more Rails-like way and tell it I want to save the record but not try to save the ID field, as it is auto-incremented? So effectively I am trying to save
Library::Book(title: string, isbn: string ...) if a new insert entry
or
Library::Book(id: integer, title: string, isbn: string ...) if trying to update an entry.
Due to imposed restrictions I am using:
Ruby 2.3.3p222
Rails 4.0.13
activerecord (4.0.13)
activerecord-sqlserver-adapter (4.0.4)
tiny_tds (1.0.2)
Freetds 1.00.27
You can use ActiveRecord::Relation#find_or_initialize_by. This assumes that you have enough attributes known at the time to uniquely identify the record.
That will solve the last part of your question. But it sounds like the id column is not set to auto-increment. You need to set that so when Rails sends a null id the DB will set it properly.
Update: To make the column auto-increment you will need to run a migration. You can then do:
Alter table books modify column id int(11) auto_increment;
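Note that the statement above is MySQL syntax; SQL Server has no auto_increment and uses the IDENTITY property instead, which cannot be added to an existing column with a plain ALTER TABLE. As a rough sketch (table and column names here are only illustrative), the column has to be declared as IDENTITY when the table or column is (re)created:
CREATE TABLE books (
    id    INT IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- SQL Server's auto-increment
    title NVARCHAR(255),
    isbn  NVARCHAR(32)
);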
Update: If we cannot modify the database we can do:
class Book
  private

  # Hack: pre-assign the next free id before the row is written,
  # retrying with a fresh id if another process grabbed it first.
  def create_or_update(*args, &block)
    self.id ||= (Book.maximum(:id) || 0) + 1
    super
  rescue ActiveRecord::RecordNotUnique
    self.id = nil
    retry
  end
end
It's pretty janky, and I would highly recommend updating the column if possible.

Possible with multiple database connections?

New to the tSQLt world (great tool set) and encountered a minor issue with a stored procedure I am setting up a test for.
Suppose I for some reason have a stored procedure which connects to multiple databases or even multiple SQL Servers (linked servers).
Is it possible to do unit tests with tSQLt in such a scenario?
I commented already, but I would like to add some more. As I said, you can do anything that fits into a single transaction.
For your case I would suggest creating synonyms for every cross-database/instance object and then using those synonyms everywhere.
I've created the following procedure to mock table/view synonyms. It has some limitations but at least it can handle simple use cases.
CREATE PROCEDURE [tSQLt].[FakeSynonymTable] @SynonymTable VARCHAR(MAX)
AS
BEGIN
    -- Rename the existing synonym out of the way
    DECLARE @NewName VARCHAR(MAX) = @SynonymTable + REPLACE(CAST(NEWID() AS VARCHAR(100)), '-', '');
    DECLARE @RenameCmd VARCHAR(MAX) = 'EXEC sp_rename ''' + @SynonymTable + ''', ''' + @NewName + ''';';
    EXEC tSQLt.SuppressOutput @RenameCmd;

    -- Create an empty table with the same structure as the object behind the synonym
    DECLARE @sql VARCHAR(MAX) = 'SELECT * INTO ' + @SynonymTable + ' FROM ' + @NewName + ' WHERE 1=2;';
    EXEC (@sql);

    -- The replacement table can now be faked like any other table
    EXEC tSQLt.FakeTable @TableName = @SynonymTable;
END;
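As a rough usage sketch (the synonym name dbo.RemoteCustomers and its columns are hypothetical), a test could fake a synonym and then populate it like any other faked table:
-- Replace the synonym with a local, empty fake table
EXEC tSQLt.FakeSynonymTable @SynonymTable = 'dbo.RemoteCustomers';

-- The code under test now reads this local fake instead of the remote object
INSERT INTO dbo.RemoteCustomers (CustomerId, CustomerName) VALUES (1, 'Test customer');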
Without you providing sample code I am not certain of your exact use case but this information may help.
The alternative approach for cross-database testing (assuming both databases are on the same instance) is to install tSQLt in both databases. Then you can mock the objects in the remote database in the same way that you would if they were local.
E.g. If you had a stored procedure in LocalDb that referenced a table in RemoteDb, you could do something like this:
Imagine you have a procedure that selects a row from a table called localTable in the local database and inserts that row into a table called remoteTable in the remote database (on the same instance):
create procedure [myTests].[test mySproc inserts remoteTable from local table]
as
begin
    -- Mock the local table in the local database
    exec tSQLt.FakeTable 'dbo.localTable';

    -- Mock the remote table (note the three-part object reference to RemoteDb)
    exec RemoteDb.tSQLt.FakeTable 'dbo.remoteTable';

    --! Data setup omitted
    --! exec dbo.mySproc @param = 'some value';

    --! Get the data from the remote table into a temp table so we can test it
    select * into #expected from RemoteDb.dbo.remoteTable;

    --! Assume we have already populated #actual with our expected results
    exec tSQLt.AssertEqualsTable '#expected', '#actual';
end
The above code demonstrates the basics; I blogged about this in more detail some years ago.
Unfortunately, this approach will not work across linked servers.

Rails / ActiveRecord: How to quote protected column name using #select in custom query?

Running Rails 4.0.13 with TinyTDS connected to Microsoft SQL Server 2012, I'm trying to run the following query:
sql = Model.where(:foo => bar).select(:open, :high, :low, :close).to_sql
Model.connection.execute(sql)
The problem is that the generated SQL is
"SELECT open, high, low, close FROM [models]"
This gives me an error, as the column names open and close are protected:
TinyTds::Error: Incorrect syntax near the keyword 'open'
If I use #pluck, I can see the correct SQL is generated (with column names escaped):
"SELECT [models].[open], [models].[high], [models].[low], [models].[close] FROM [models]"
However, this produces an array, which is not what I want.
My question is: how can I get #select to correctly quote the column names?
Thank you
I don't think you can make the select method protect your column names when using symbols (maybe because different DBMSs use different quoting identifiers), but you can pass your selection as a string:
sql = Model.where(:foo => bar).select("[open], [high], [low], [close]").to_sql
Model.connection.execute(sql)
I attempted to submit a bug report to Rails; however, in doing so I saw that the problem did not appear to exist in a SQLite test case, which leads me to believe the issue is with the SQL Server adapter.
Since I am on Rails 4 and not the latest version of the adapter, I left it and wrote the following (horrible) method, as wrapping the column names was not enough; I also needed to prefix the table name to prevent ambiguous column names. Yuck.
def self.quote(*columns, klass)
  # Bracket-quote each column and prefix it with the table name,
  # e.g. quote(:open, :close, Model) => "[models].[open], [models].[close]"
  columns.map { |col| "[#{klass.table_name}].[#{col}]" }.join(', ')
end

Execute stored procedure created by a different user over dblink

I have created a stored procedure PROCA in Database A with user USERA and given execute permission to USERB, and I can execute this stored proc in Database A when logged in as USERB.
Now I logged in to Database X and created a dblink Alink; this dblink connects to Database A as user USERB.
When I execute the stored proc using the syntax below, it gets executed without any error, but whatever DML operations the stored proc has done are not committed.
Code to invoke the stored proc from Database X:
declare
begin
    USERA.PROCA@Alink();
    COMMIT;
end;
Please suggest what could be the issue.
It seems there are no good solutions for such situations.
But here is a suggestion for you; try using this:
Exec dbms_utility.exec_ddl_statement@db_link('some ddl sql statement');
For example:
Exec dbms_utility.exec_ddl_statement@db_link('truncate table test_tab');

Postgres Query to find whether database is read-only mode

I am new to Postgres. In MySQL we can check whether the database is in read-only mode with the query below.
SELECT @@global.read_only
Likewise, can anyone please help me with the query to do the same in Postgres? I tried a few things like the below:
SELECT schemaname||'.'||tablename FROM pg_tables
WHERE
has_table_privilege ( 'postgres', schemaname||'.'||tablename, 'select' )
AND schemaname NOT IN ( 'pg_catalog','information_schema');
But it lists output like the below, which is not what I am expecting:
?column?
----------------------------------------
public.schema_migrations
public.credential_methods
public.notifications
public.site_defaults
public.apis
public.client_applications
public.api_groups
public.operations
public.client_application_labels
public.client_application_label_values
public.roles
public.users
public.sdm_user_roles
public.permissions_roles
public.keys
public.o_two_access_tokens
public.settings
public.sdm_users
public.permissions
public.audits
public.oauth_requesttokens
public.oauth_access_tokens
public.oauth_verifiers
public.logged_exceptions
public.api_call_details
public.api_access_roles
public.api_access_users
public.login_attempts
public.system_scopes
public.keys_system_scopes
public.o_two_auth_codes
public.o_two_refresh_tokens
public.service_profiles
public.error_traces
I also tried "\du", but that works only in the psql terminal, not from a Ruby file.
query=ActiveRecord::Base.connection.execute("\du;")
ActiveRecord::StatementInvalid: PGError: ERROR: syntax error at or near "du"
LINE 1: du;
Thanks,
Rafiu
You probably want one of the has_*_privilege() family of functions for the relevant tables and privileges. See here. Other than that, I'm not sure Postgres has a concept of a global read-only mode.
Well, there's also show transaction_read_only inside a read-only transaction, but that doesn't seem to be what you're asking for. And I don't think a transaction being read-only affects the privileges of the user.
I'm not sure what you expect from your query, but if you want something boolean, as in whether you have access anywhere, you can use count(*) != 0 (and probably check a privilege other than select, if read-only is what you are trying to detect), as sketched below.
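For example, a rough sketch of that idea (the role name postgres and the insert privilege are only illustrative):
-- true if the role can write to at least one non-system table
SELECT count(*) <> 0 AS can_write_somewhere
FROM pg_tables
WHERE has_table_privilege('postgres', schemaname || '.' || tablename, 'insert')
  AND schemaname NOT IN ('pg_catalog', 'information_schema');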
If you have a multi-node cluster with a hot standby configuration, the output of SELECT pg_is_in_recovery() can tell you whether the node you are connected to is in read-only (recovery) mode.
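A minimal sketch of those two checks:
-- 'on' when the current session/transaction is read-only
SHOW transaction_read_only;

-- true when connected to a hot-standby node that only accepts reads
SELECT pg_is_in_recovery();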

Resources