stored procedures in web2py

I'm considering converting an app from php/MySQL to web2py (and MySQL or Postgres). The only SQL in the php codebase for this app is calls to stored procedures: no SELECTs, no INSERTs, etc. All SQL source in the php codebase is on the order of "CALL proc_Fubar(args...);"
How do I tell web2py, "Here's my INSERT stored procedure; here's my SELECT..."? I know I can use executesql, but what about the rowset returned by a SELECT? I'd like that data returned as if it were the result of a web2py query on a table.
Yes, I know: I'm trying to get all the neat stuff web2py does without keeping up my end of the bargain (defining my SQL the way web2py wants to see it).

You might try the following. First, define a model that matches the fields returned by your stored procedure (set migrate=False so web2py doesn't try to create that table in the db).
db.define_table('myfaketable', ..., migrate=False)
Then do:
raw_rows = db.executesql('[SQL code to execute stored procedure]')
rows = db._adapter.parse(raw_rows,
                         fields=[field for field in db.myfaketable],
                         colnames=db.myfaketable.fields)
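If it helps to see what that parse step does conceptually, here is a plain-Python sketch (no web2py required; the column names and data are invented): it maps the raw DB-API tuples returned by executesql onto named rows, which is roughly what db._adapter.parse achieves with the fake table definition.

```python
from collections import namedtuple

def parse_raw_rows(raw_rows, colnames):
    """Map raw DB-API tuples onto named rows, roughly what
    db._adapter.parse does given a fake table definition."""
    Row = namedtuple("Row", colnames)
    return [Row(*r) for r in raw_rows]

# Pretend this came from db.executesql('CALL proc_GetUsers();')
raw = [(1, "alice"), (2, "bob")]
rows = parse_raw_rows(raw, ["id", "name"])
print(rows[0].name)  # alice
```

The real parse call additionally applies each Field's type conversions, which is why defining the model with migrate=False is worth the trouble.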


Possible with multiple database connections

New to the tSQLt world (great tool set) and encountered a minor issue with a stored procedure I am setting up a test for.
Suppose I for some reason have a stored procedure that connects to multiple databases or even multiple SQL Servers (linked servers).
Is it possible to do unit tests with tSQLt in such a scenario?
I commented already, but I would like to add some more. As I said, you can do anything that fits into a single transaction.
For your case, though, I would suggest creating synonyms for every cross-database/instance object and then using those synonyms everywhere.
I've created the following procedure to mock table/view synonyms. It has some limitations, but at least it can handle simple use cases.
CREATE PROCEDURE [tSQLt].[FakeSynonymTable] @SynonymTable VARCHAR(MAX)
AS
BEGIN
    DECLARE @NewName VARCHAR(MAX) = @SynonymTable + REPLACE(CAST(NEWID() AS VARCHAR(100)), '-', '');
    DECLARE @RenameCmd VARCHAR(MAX) = 'EXEC sp_rename ''' + @SynonymTable + ''', ''' + @NewName + ''';';
    EXEC tSQLt.SuppressOutput @RenameCmd;
    DECLARE @sql VARCHAR(MAX) = 'SELECT * INTO ' + @SynonymTable + ' FROM ' + @NewName + ' WHERE 1=2;';
    EXEC (@sql);
    EXEC tSQLt.FakeTable @TableName = @SynonymTable;
END;
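The move-aside-and-replace trick that procedure relies on can be illustrated outside SQL Server. Here is a minimal sketch using Python's sqlite3 purely as a stand-in (table name and data invented): rename the real table out of the way, then create an empty structural copy under the original name, just as the SELECT ... INTO ... WHERE 1=2 step does.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (item TEXT, amount INTEGER)")
conn.execute("INSERT INTO prices VALUES ('widget', 10)")

# Move the real table aside under a generated name...
conn.execute("ALTER TABLE prices RENAME TO prices_backup")
# ...then create an empty shell with the same shape under the original name.
conn.execute("CREATE TABLE prices AS SELECT * FROM prices_backup WHERE 1=2")

assert conn.execute("SELECT COUNT(*) FROM prices").fetchone() == (0,)
assert conn.execute("SELECT COUNT(*) FROM prices_backup").fetchone() == (1,)
```

Code that references the original name now hits the empty fake, while the real data survives under the backup name for later restoration.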
Without sample code from you I am not certain of your exact use case, but this information may help.
The alternative approach for cross-database testing (assuming both databases are on the same instance) is to install tSQLt in both databases. Then you can mock the objects in the remote database in the same way that you would if they were local.
E.g. If you had a stored procedure in LocalDb that referenced a table in RemoteDb, you could do something like this:
Imagine you have a procedure that selects a row from a table called localTable in the local database and inserts that row into a table called remoteTable in the remote database (on the same instance):
create procedure [myTests].[test mySproc inserts remoteTable from local table]
as
begin
-- Mock the local table in the local database
exec tSQLt.FakeTable 'dbo.localTable' ;
-- Mock the remote table (note the three-part object reference to RemoteDb)
exec RemoteDb.tSQLt.FakeTable 'dbo.remoteTable' ;
--! Data setup omitted
--! exec dbo.mySproc @param = 'some value' ;
--! Get the data from the remote table into a temp table so we can test it
select * into #actual from RemoteDb.dbo.remoteTable;
--! Assume we have already populated #expected with our expected results
exec tSQLt.AssertEqualsTable '#expected', '#actual' ;
end
The above code demonstrates the basics; I blogged about this in more detail some years ago.
Unfortunately, this approach will not work across linked servers.

Rails Brakeman SQL injection warning while accessing an oracle view/function

I have rails code that is consuming an oracle view/function.
This is my code:
def run_query
  connection.exec_query(
    "SELECT * FROM TABLE(FN_REQ(#{demo_type_param},#{demo_tid_param}))")
end
When I run the Brakeman analyzer, it warns of a possible "SQL injection attack".
I need to understand whether this is a valid warning and, if so, how to remediate it.
Since this is a function and not an actual table, I am not sure what the right way is.
If it were a normal model, I would have just followed this pattern:
Model.where("mycolumn1= ? AND mycolumn2= ?", demo_type_param, demo_tid_param).first
Yes, the warning is valid. Almost any time you build an SQL query by concatenating variables into the string, you are vulnerable to SQL injection: data inserted into the query can look like valid SQL and can result in additional statements being executed.
The only solutions are to manually enforce appropriate escaping or to use prepared statements, with the latter being the preferred approach.
With ActiveRecord / Rails, you can use exec_query with binds directly:
sql = 'SELECT * FROM TABLE(FN_REQ(?,?))'
connection.exec_query(sql, 'my query', [demo_type_param, demo_tid_param])
Here, Rails will prepare the statement on the database and add the parameters to it on execution, ensuring that everything is correctly escaped and safe from SQL injection.
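To make the bind-parameter behaviour concrete, here is a minimal runnable sketch using Python's sqlite3 purely for illustration (the table, columns, and values are invented; Oracle itself would use :1-style binds). The point is the same as in the Rails code: the SQL text and the values travel separately.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fn_req_demo (demo_type TEXT, demo_tid INTEGER)")
conn.execute("INSERT INTO fn_req_demo VALUES ('a', 1), ('b', 2)")

# The driver sends the SQL and the values separately, so a value like
# "a' OR '1'='1" is treated as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM fn_req_demo WHERE demo_type = ? AND demo_tid = ?",
    ("a", 1),
).fetchall()
print(rows)  # [('a', 1)]
```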

Why can't bcp execute procedures having temp table(#tempTable)?

Recently I was tasked with creating a SQL Server Job to automate the creation of a CSV file. There was existing code, which was using an assortment of #temp tables.
When I set up the job to execute using BCP calling the existing code (converted into a procedure), I kept getting errors:
SQLState = S0002, NativeError = 208
Error = [Microsoft][SQL Native Client][SQL Server]Invalid object name #xyz
As described in other posts, many people recommend resolving the problem by converting all the #tempTables to @tableVariables.
However, I would like to understand WHY BCP doesn't seem to be able to use #tempTables.
When I execute the same procedure from within SSMS, it works. Why?
I did a quick and simple test using global temp tables within a procedure, and that succeeded via a job using BCP, so I am assuming it is related to the scope of the #tempTables.
Thanks in advance for your responses/clarifications.
DTML
You are correct in guessing that it's a scope issue for the #temp tables.
BCP is spawned as a separate process with its own connection, so session-scoped #temp tables created elsewhere are no longer in scope for that new process. When you run the procedure from SSMS, everything happens within one session, so the #temp tables remain accessible.
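The session-scoping behaviour is easy to demonstrate with any database that has session-local temp objects. Here is a sketch using Python's sqlite3 as a stand-in for SQL Server (names invented; the two connections play the roles of your SSMS session and the spawned BCP process):

```python
import os
import sqlite3
import tempfile

# One shared database file, two independent sessions.
db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
session_a = sqlite3.connect(db_path)
session_b = sqlite3.connect(db_path)

# Temp objects are scoped to the session that created them.
session_a.execute("CREATE TEMP TABLE scratch (x INTEGER)")
session_a.execute("INSERT INTO scratch VALUES (42)")

# The creating session sees the temp table...
assert session_a.execute("SELECT x FROM scratch").fetchall() == [(42,)]

# ...but the other session gets the equivalent of "Invalid object name".
try:
    session_b.execute("SELECT x FROM scratch")
    visible = True
except sqlite3.OperationalError:
    visible = False
print(visible)  # False
```

Global temp tables (##) succeed in your test precisely because they escape this per-session scope.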

Prepared statements in ruby/rails with postgres

I want to execute a rather nasty recursive update query in Rails. This means I want to write some raw Postgres SQL, with parameters, and execute it inside a Rails controller.
How do I do that? I can't find a PreparedStatement class in ActiveRecord, there don't seem to be any methods named 'native', I have tried ActiveRecord::Base.connection.exec_delete, and I have looked through the source; I just cannot work it out.
I've looked everywhere; the documentation goes in circles.
How would I tell Postgres to execute
delete from foo where foo.bar=?
binding 'baz' to the question mark, and do it without using the Active Record objects, finders, you-beaut subset thingies, and all the rest?
I just want to execute a prepared statement with some bindings. How hard can it be?
(PS, and no: I don't want to jam the parameters into the string myself and execute it as unparameterised sql. It's wrong and it means I have to worry about sanitising the data.)
See the discussion of prepared statements in Rails ('Using Prepared Statements') at http://blog.daniel-azuma.com/archives/216, which shows you which methods to call and how to format your arguments.
UPDATE:
Paraphrased from the post:
For the delete method, the arguments are the SQL template first, followed by a query name (which can be nil), and then an array of values to bind into the statement. Like this:
row_count = connection.delete("DELETE FROM foo WHERE foo.bar=$1", nil, [[nil, 'baz']])
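The same shape of call, in a minimal runnable form, using Python's sqlite3 purely as a stand-in (sqlite uses ? placeholders where Postgres uses $1; the foo/bar/baz names are from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (bar TEXT)")
conn.executemany("INSERT INTO foo VALUES (?)", [("baz",), ("qux",)])

# Parameterised DELETE: 'baz' is bound, never interpolated into the SQL text,
# so there is nothing to sanitise by hand.
cur = conn.execute("DELETE FROM foo WHERE foo.bar = ?", ("baz",))
print(cur.rowcount)  # 1

remaining = conn.execute("SELECT bar FROM foo").fetchall()
print(remaining)  # [('qux',)]
```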

Ensure my SQL is not injected receiving array of values

I need to receive an array of values like:
['restaurant']
['restaurant', 'pharmacy']
I would like to know which approach to take so that when I use this:
SELECT * FROM places WHERE type IN (array_joined_with_commas_and_quotes)
I don't get injection attacks.
I am writing the statement without any library, and I am working in Rails.
I don't have Active Record; I am running a PostGIS query against an external server.
How about using ActiveRecord's query-building functions?
If you use Rails, the gem should still be there.
Place.where(:type => ['foo', 'bar']).to_sql
You have two basic approaches (using ? for parameterization in these examples).
If you pass a Postgres array, your ['restaurant', 'pharmacy'] becomes '{"restaurant","pharmacy"}' and then you can:
SELECT * FROM places WHERE type = ANY (?);
You could also use:
SELECT * FROM places WHERE type IN (?, ...);
dynamically generating one placeholder per value as you go. As scones mentioned, ActiveRecord may be able to automate this for you. The point, though, is that these methods send the data separately from the query, which means SQL cannot be injected into it.
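The dynamic-placeholder approach can be made concrete with a short sketch; this example uses Python's sqlite3 purely for illustration (the places table and its rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (name TEXT, type TEXT)")
conn.executemany(
    "INSERT INTO places VALUES (?, ?)",
    [("Luigi's", "restaurant"), ("CVS", "pharmacy"), ("Shell", "gas")],
)

types = ["restaurant", "pharmacy"]
# One placeholder per value; the values travel as bind parameters,
# so nothing inside them can become part of the SQL text.
placeholders = ",".join("?" * len(types))
sql = f"SELECT name FROM places WHERE type IN ({placeholders})"
rows = conn.execute(sql, types).fetchall()
print(sorted(rows))  # [('CVS',), ("Luigi's",)]
```

Only the placeholder count is built dynamically, never the values themselves, which is what keeps the query safe regardless of how many types arrive.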
