In databases, stored procedures are often used to handle business logic. (There is a debate about where the logic should belong, but that's not the topic here.)
As more and more stored procedures are written, the system becomes very complex. The main reason, I think, is the dependencies (SP1 depends on SP2, SP2 depends on SP3, and so on).
In the OO world, there is the Dependency Injection pattern, and many IoC containers (such as Spring) to solve the dependency problem.
In the stored-procedure world, can this pattern be applied? How? Are there any tools?
Dependency injection mostly makes sense in an environment that supports polymorphism: you have an interface which can have one of many implementations, and you offload the discovery of the proper implementation to a framework.
In SQL, there is no distance between a stored procedure's interface and its implementation -- you define the interface together with the procedure. It is hard to think of a case where you would actually define multiple procedures as interchangeable within a common signature.
Practically speaking, for dependency injection you would also need some key to tell the system what it is that you want injected. In OOP languages, the interface itself is very often that key. But you probably do not want an SP signature as a key. And if you look up a procedure by name, how is that different from just executing it directly?
There is a reason why OOP exists and why it is a popular way to manage complexity. Dependency injection builds upon its core concepts and does not seem to be well applicable to a procedural environment.
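To make the contrast concrete, here is a minimal dependency-injection sketch in Python. The `TaxCalculator` interface and its implementations are invented names; the point is that the interface type serves as the lookup key, which is exactly the piece a stored procedure has no analogue for:

```python
from abc import ABC, abstractmethod

class TaxCalculator(ABC):
    """The interface: callers depend on this, never on a concrete class."""
    @abstractmethod
    def rate(self) -> float: ...

class UsTax(TaxCalculator):
    def rate(self) -> float:
        return 0.07

class EuTax(TaxCalculator):
    def rate(self) -> float:
        return 0.21

# A minimal "container": the interface type is the key under which
# an implementation is registered and later resolved.
container = {TaxCalculator: EuTax}

def resolve(interface):
    return container[interface]()

calc = resolve(TaxCalculator)  # the caller never names EuTax
print(calc.rate())
```

Swapping `EuTax` for `UsTax` in the container changes behaviour everywhere without touching any caller; an SP name lookup gives you no such indirection.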
Although it's not recommended, there is a way to use dependency injection in a SQL environment (with stored procedures, functions, views, and even tables). It involves a lot of dynamic SQL and triggers, and it gets very messy after a while. But, just for the sake of answering the question, here goes:
First, you have to decide what kind of dependency you are injecting and how it is accessed. Let's say, for this example, it is a company ID, and you want to execute different objects based on the company while remaining agnostic in higher-level procedures and following DRY (don't repeat yourself).
So you need a way to store this company ID such that it is always retrievable, regardless of where you are and what is passed in. You don't want to have to pass the company ID into every SQL object, so you use CONTEXT_INFO. This is a value that persists for the duration of a session and is unique to that session.
DECLARE @Context VARBINARY(128);
SET @Context = CAST(15 AS VARBINARY(128));
SET CONTEXT_INFO @Context;
Now you can access the context like this:
DECLARE @Context VARBINARY(128), @CompanyID INT;
SELECT @Context = CONTEXT_INFO();
SET @CompanyID = CAST(@Context AS INT);
Are you with me so far?
This is where it starts to get really messy.
Create a table for yourself with the names of the company-specific objects.
CREATE TABLE CompanyObjectRegistry (ObjectSchema VARCHAR(100), ObjectName VARCHAR(100), CompanyID INT);
Store your objects here:
INSERT CompanyObjectRegistry
VALUES ('CompanyFifteen', 'DoSomethingCool', 15),
('CompanyTwelve', 'DoSomethingCool', 12);
Then you create the top-level procedure:
CREATE PROCEDURE DoSomethingCool (@Data VARCHAR(10), @Param2 INT)
AS
BEGIN
    DECLARE @CompanyID INT = CAST(CONTEXT_INFO() AS INT),
            @SQL NVARCHAR(MAX) = '
        EXEC [SCHEMA].DoSomethingCool ''' + @Data + ''', '
            + CAST(@Param2 AS VARCHAR(10)) + ';';

    SELECT @SQL = REPLACE(@SQL, '[SCHEMA]', ObjectSchema)
    FROM CompanyObjectRegistry
    WHERE CompanyID = @CompanyID;

    EXEC (@SQL);
END
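The lookup-and-dispatch that this procedure performs with dynamic SQL can be sketched in Python to show the underlying idea: a registry maps the injected key (the company ID) to a concrete schema, and the caller never names it. All identifiers mirror the made-up ones from the example:

```python
# Hypothetical mirror of CompanyObjectRegistry: company ID -> schema name.
company_object_registry = {
    15: "CompanyFifteen",
    12: "CompanyTwelve",
}

def build_dispatch_sql(company_id: int, data: str, param2: int) -> str:
    """Build the EXEC statement the top-level procedure would run,
    resolving the schema from the registry (the 'injection' step)."""
    schema = company_object_registry[company_id]
    # Mirrors: EXEC [SCHEMA].DoSomethingCool '<data>', <param2>;
    return f"EXEC {schema}.DoSomethingCool '{data}', {param2};"

print(build_dispatch_sql(15, "abc", 7))
# EXEC CompanyFifteen.DoSomethingCool 'abc', 7;
```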
This has a small overhead because the dynamic SQL must be parsed at execution time. You can remove some of this overhead by using a "generate" procedure which builds this "base" procedure automatically from the contents of CompanyObjectRegistry and uses IF statements instead.
If you need to, you can have a trigger add to CompanyObjectRegistry and generate the base object when one of the underlying objects is edited/added. You would use a DDL trigger for this.
Here's how you would do this for tables:
DECLARE @TableName VARCHAR(100) = 'DoSomethingCool',
        @SQL NVARCHAR(MAX);
SET @SQL = 'CREATE VIEW ' + @TableName + ' AS '
    + (SELECT '
    SELECT * FROM ' + ObjectSchema + '.' + ObjectName + '
    WHERE ' + CAST(CompanyID AS VARCHAR(10)) + ' = CAST(CONTEXT_INFO() AS INT)
    UNION ALL ' -- You would have to strip the trailing UNION ALL from the result
    FROM CompanyObjectRegistry
    WHERE ObjectName = @TableName
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)');
EXEC (@SQL);
Views can be injected in the same way, just select * from each view.
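The same generation step can be sketched outside SQL. This hypothetical Python version builds the view body from the registry rows and sidesteps the trailing-separator trimming by joining the branches with UNION ALL:

```python
# Hypothetical registry rows: (schema, object name, company id).
registry = [
    ("CompanyFifteen", "DoSomethingCool", 15),
    ("CompanyTwelve", "DoSomethingCool", 12),
]

def build_view_sql(table_name: str) -> str:
    """Generate the CREATE VIEW statement that unions every
    company-specific table registered under table_name."""
    branches = [
        f"SELECT * FROM {schema}.{name} WHERE {cid} = CAST(CONTEXT_INFO() AS INT)"
        for schema, name, cid in registry
        if name == table_name
    ]
    # Joining with UNION ALL replaces the "remove the trailing separator" step.
    return f"CREATE VIEW {table_name} AS\n" + "\nUNION ALL\n".join(branches)

print(build_view_sql("DoSomethingCool"))
```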
Functions are very hard to handle because neither of the above approaches works for them: you have to take the function's header and generate its body.
It will take a lot of time. So if you have a choice... don't.
Consider simple commands:
INSERT INTO table_name (fldX) VALUES (valueX)
UPDATE table_name SET fldX = valueX WHERE table_id = ? AND version_id = ?
More often than not, we insert or update only some of a table's fields, but every example of CRUD stored procedures for INSERT/UPDATE includes all updatable table fields, and to use these SPs we need to supply all those parameters.
Problems with such SPs arise when:
- the user wants to set only a subset (one or more) of fields initially, so he can't use an SP that inserts all fields
- the user doesn't always know about all table fields, so he can't use an SP that updates all fields; I don't want to write the same SP for every possible subset of fields
- the user shouldn't have access to an SP that updates all fields
- the user (or user action) doesn't always have permission to change the values of all table fields
- the user wants to update just one field but has to supply all table fields in CALL {table}_update( ... )
In these examples the user DOES have access to the "PRIMARY KEY" and "VERSION" (timestamp/numeric) columns of the record.
Possible solutions:
Solution 0: Keep using INSERT/UPDATE statements
Advantages of this approach:
- it works
Disadvantages of this approach:
- security concerns when allowing users direct access to tables
- no way to preserve the user's data when DELETEing a record
Solution 1: Send part of the DML statement as a parameter, and the SP will use it to create a dynamic SQL statement:
CALL table_update(p_id AS INT, p_changes CHAR(1000))
-- Parameters: p_changes = "fld1 = 1, fld2 = '01.01.2019', fld3 = 'abc'"
st1 = 'UPDATE table SET ' || p_changes || ' WHERE id = p_id';
EXECUTE st1;
Disadvantages of this approach:
- possible SQL injection
- no caching or optimization available for dynamic SQL
- can't validate the input string, etc.
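To keep the spirit of Solution 1 while avoiding the injection risk, a common alternative (not from the question) is to pass structured changes and whitelist the column names before assembling the statement. A minimal Python sketch; the table name `tbl` and the column whitelist are hypothetical:

```python
ALLOWED_COLUMNS = {"fld1", "fld2", "fld3"}  # hypothetical whitelist

def build_update(p_changes: dict) -> tuple:
    """Build a parameterized UPDATE from a column->value dict,
    rejecting any column not on the whitelist."""
    bad = set(p_changes) - ALLOWED_COLUMNS
    if bad:
        raise ValueError(f"unknown columns: {sorted(bad)}")
    cols = sorted(p_changes)
    # Column names come only from the whitelist; values stay as bind parameters.
    set_clause = ", ".join(f"{c} = ?" for c in cols)
    return (f"UPDATE tbl SET {set_clause} WHERE id = ?",
            [p_changes[c] for c in cols])

sql, args = build_update({"fld1": 1, "fld3": "abc"})
print(sql)   # UPDATE tbl SET fld1 = ?, fld3 = ? WHERE id = ?
```

Only the column names are interpolated, and those are checked against a fixed set, so arbitrary SQL in the input raises an error instead of executing.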
Solution 2: Send unused values as NULLs, and add a string parameter listing the columns whose parameter values should actually be applied (even if NULL):
PROCEDURE table_update(upd_fields VARCHAR(1000), fld1 CHAR(30), fld2 CHAR(30))
-- If we want to update only fld1 we should execute
CALL table_update('fld1', value1, NULL)
Disadvantages of this approach:
- if the SP changes the ordering of its parameters in the future, this system will break
- statements will again be prepared dynamically, so no caching
- complexity of creating the 'ins_fields' or 'upd_fields' parameter
Solution 3: Send updates as an XML string with all the changes.
PROCEDURE table_update(record_updates XML)
-- or even
PROCEDURE table_edit(table_changes XML) -- all INSERT/UPDATE/DELETE statements together
Advantages of this approach:
- can be used for INSERTs/UPDATEs/DELETEs in 1 SP call
- only 1 parameter to list all updated fields.
Disadvantages of this approach:
- transfers more data (because XML)
- lower performance because of need to parse XML on server
- increases Stored Procedure complexity.
- increases Client code complexity (to create XML string)
So what solutions am I missing? Which are considered mainstream/best/universal?
How is this problem solved in modern ORMs? Do they use the 1st, 2nd, or 3rd approach?
Personally I would like to use the 3rd solution (with the XML parameter), but I need examples:
1) examples of schemas for such XML parameters;
2) examples of stored procedures that parse the XML parameter.
Currently used environment: executing direct INSERT/UPDATE/DELETE statements using SQL passthrough (ODBC) from a Visual FoxPro application. DBMS: DB2 for z/OS v10.
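As a hedged illustration of what such an XML parameter might look like (the element and attribute names here are invented, not a standard schema), here is a sample payload and the parsing step a server-side routine would perform, sketched in Python:

```python
import xml.etree.ElementTree as ET

# One hypothetical shape for the table_update parameter: one <row> per
# record, identified by key and version, with one child element per
# column actually being changed.
payload = """
<updates table="dockets">
  <row id="17" version="3">
    <fld1>1</fld1>
    <fld3>abc</fld3>
  </row>
</updates>
"""

def parse_updates(xml_text: str):
    """Yield (id, version, {column: value}) for each <row> in the payload."""
    root = ET.fromstring(xml_text)
    for row in root.findall("row"):
        changes = {col.tag: col.text for col in row}
        yield (row.get("id"), row.get("version"), changes)

for rec_id, version, changes in parse_updates(payload):
    print(rec_id, version, changes)
# 17 3 {'fld1': '1', 'fld3': 'abc'}
```

Because absent columns are simply absent elements, the same payload format covers any subset of fields, which is the property the question is after.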
Simple answer: DB2 allows parameter overloading:
PROCEDURE update_widgets(p_id INTEGER, p_color VARCHAR(40) )
PROCEDURE update_widgets(p_id INTEGER, p_quantity INTEGER )
PROCEDURE update_widgets(p_id INTEGER, p_price DECIMAL(9,2) )
PROCEDURE update_widgets(p_id INTEGER, p_quantity INTEGER, p_price DECIMAL(9,2) )
PROCEDURE update_widgets(p_id INTEGER, p_color VARCHAR(40) , p_quantity INTEGER, p_price DECIMAL(9,2) )
. . .
As long as your argument lists are not ambiguous, you can have as many variants as you like.
Another solution is to make every updatable column a nullable parameter, and use NULL as a no-update check:
UPDATE widgets SET price = :p_price WHERE id=:p_id AND :p_price IS NOT NULL;
UPDATE widgets SET color = :p_color WHERE id=:p_id AND :p_color IS NOT NULL;
. . .
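The null-as-no-update pattern above can be demonstrated end to end with SQLite's parameter binding; the `widgets` table and values are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE widgets (id INTEGER PRIMARY KEY, color TEXT, price REAL)")
con.execute("INSERT INTO widgets VALUES (1, 'red', 9.99)")

def update_widget(widget_id, color=None, price=None):
    # Each statement is a no-op when its parameter is NULL, so a single
    # routine body serves any subset of fields.
    con.execute("UPDATE widgets SET price = :p WHERE id = :id AND :p IS NOT NULL",
                {"p": price, "id": widget_id})
    con.execute("UPDATE widgets SET color = :c WHERE id = :id AND :c IS NOT NULL",
                {"c": color, "id": widget_id})

update_widget(1, price=12.50)          # color untouched
print(con.execute("SELECT color, price FROM widgets WHERE id = 1").fetchone())
# ('red', 12.5)
```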
I have 18 different pairs of table column names like:
name_1, surname_1, ... name_18, surname_18
I would like to generate 18 inserts with Informix SPL using something like:
define _counter Int;
define _name_1 varchar(20);
define _surname_1 varchar(20);
...
define _name_18 varchar(20);
define _surname_18 varchar(20);
select name_1, surname_1, ..., name_18, surname_18
into _name_1, _surname_1, ..., _name_18, _surname_18
from names where name_id = 1;
for _counter = 1 to 18 loop
insert into person(name, surname) values (_name_+_counter, _surname_+_counter);
end loop
If I try this I get a syntax error. I am stuck with this terrible table design. Could you please advise whether there is a similar/correct way of accomplishing this?
Given the clearer outline of the question, I think you have to forgo the loop. The best you can do is either 18 consecutive INSERT statements, or 18 calls to a stored procedure that executes one statement on each call.
Informix SPL does not have an array type, and you can only really use the loop with an array. (I have seen loops with a CASE statement inside, one case for each iteration of the loop; they're seldom a good solution to a problem, and it isn't a sensible solution to this situation.)
I will repeat an observation from my previous comments: the design of a table with 18 pairs of columns is very sub-optimal. However, it appears that you are trying to transfer data from this sub-optimal schema to a more sensible one with one row per name.
You could also consider using an 18-way UNION ALL (plain UNION would remove duplicate name/surname pairs):
INSERT INTO Person(Name, Surname)
SELECT Name_1, Surname_1 FROM Names -- WHERE name_id = 1
UNION ALL
SELECT Name_2, Surname_2 FROM Names -- …
UNION ALL …
SELECT Name_18, Surname_18 FROM Names -- …
If the requirement is truly to have just the row where name_id = 1, you will need to add that criterion to each of the 18 SELECT clauses within the UNION SELECT statement. There are other ways to add that filter condition, with different sets of trade-offs at the source code level (and perhaps different trade-offs in the optimizer). Informix does not (yet) support CTEs (common table expressions, aka WITH clauses), which is a pity in this context.
Note that the code shown transfers all the data from Names into Person in a single SQL statement. This might well be the closest to optimal process overall.
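For completeness, generating the 18-branch statement mechanically rather than typing it out is straightforward in any host language. A Python sketch, using the same hypothetical table and column names as the answer:

```python
def build_transfer_sql(pairs=18, name_id=1):
    """Build the INSERT ... SELECT ... UNION ALL statement that copies
    pairs of (Name_i, Surname_i) columns into Person rows."""
    where = f" WHERE name_id = {name_id}" if name_id is not None else ""
    selects = [f"SELECT Name_{i}, Surname_{i} FROM Names{where}"
               for i in range(1, pairs + 1)]
    # UNION ALL keeps duplicate name/surname pairs; plain UNION would drop them.
    return ("INSERT INTO Person(Name, Surname)\n"
            + "\nUNION ALL\n".join(selects))

sql = build_transfer_sql()
print(sql.splitlines()[1])  # SELECT Name_1, Surname_1 FROM Names WHERE name_id = 1
```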
Maybe something like this is what you want (using Informix 12.10.FC6; I'm not sure whether it will work on previous versions):
CREATE PROCEDURE copy_paste_names (p_name_id INTEGER);
DEFINE
l_query_string VARCHAR(255);
DEFINE
iter INT;
FOR iter IN (1 TO 18 STEP 1)
LET l_query_string = 'INSERT INTO person (name, surname) SELECT name_' || iter || ', surname_' || iter || ' FROM names WHERE name_id = ' || p_name_id || ';';
EXECUTE IMMEDIATE l_query_string;
END FOR;
END PROCEDURE;
I assume that the names table will always have 18 pairs of columns named name_? and surname_?.
This procedure will just blindly try to copy each pair of name_?, surname_? columns from the names table into a new row in the person table. There aren't any checks to see whether there are actually values to be copied, or whether the source columns even exist.
I'm writing a Delphi (version 7) application, and in some places I want to execute parameterized queries (for BDE and Paradox) which will be loaded at runtime into a TQuery by the user. These queries will be stored in text files (one text file per query). The application will then construct, for each parameter of the query, one input control (TEdit) so it can accept values from the user. There will also be a button to execute the query. My question is: how can I recognize the datatype of a query parameter? Is there a way to get this type without, of course, including it in some way in the text file containing the query?
Create a second query from the first, but modify its WHERE clause so that it returns no rows.
SELECT * FROM MYTABLE WHERE PKFIELD IS NULL
Name your parameters so that you can establish their datatypes from the field types of this second query.
I realise this only works for relatively simple cases, but it should get you some of the way.
The advantage of using a parameter is that you don't need to know its data type. Use the string value from the TEdit:
select * from mytable where myfield = :param1
ParamByName('param1').AsString := Edit1.Text;
I've done this with a MySQL database. You must define some parameters. Example:
SELECT * FROM MyTable WHERE MyField=[ANNEE];
In this case I have another table, called balise, that looks like this:
"ID" "BALISE" "CAPTION" "DEFAULT_VALUE" "CNDT" "COMPOSANT"
"1" "ANNEE" "Année" "2014" "Properties.MaxValue=2014||Properties.MinValue=2007" 1
At runtime, this means:
- create in my panel a TLabel whose caption is Année;
- create on the same line another component of type 1 (which in my case means TcxSpinEdit); this component has the default value 2014 and two properties, MaxValue=2014 and MinValue=2007 (I use RTTI to modify these parameter values; in Delphi 7, use TypeInfo);
- another button calls a function named Actualise; this function takes the original query, walks an array of TBalise that I created, reads each value (in my case TcxSpinEdit(MyObject).Value), and substitutes it into a copy of my query (AnsiReplaceStr(Requete, '[ANNEE]', MyValue)), producing the final query to execute.
I have a module in a complete project that works with this method, and it works fine.
I want to avoid injecting parameters directly into the query statement. Therefore we used the following instructions from the Neo4j .NET client class:
var queryClassRelationshipsNodes = client.Cypher
.Start("a", (NodeReference)sourceReference.Id)
.Match("a-[Rel: {relationshipType} ]->foundClass")
.Where("Rel.RelationStartNode =" + "\'" + relationshipStart + "\'")
.AndWhere("Rel.RelationDomainNode =" + "\'" + relationshipDomain + "\'")
.AndWhere("Rel.RelationClassNode =" + "\'" + relationshipClass + "\'")
.WithParam("relationshipType", relationshipType)
.Return<Node<Dictionary<string, string>>>("foundClass")
.Results;
However, this code does not work once executed by the server. For some reason the parameter relationshipType is not bound to the variable which we put between {}.
Can someone please help us debug the problem with this code? We would prefer to use WithParam rather than injecting variables inside the statement.
Thanks a lot!
Can someone please help us debug the problem with this code?
There's a section on https://bitbucket.org/Readify/neo4jclient/wiki/cypher titled "Debugging" which describes how to do this.
As for your core problem, though: your approach is hitting a Cypher restriction. Parameters are for the parts of a query that aren't compiled into the query plan; the MATCH clause, however, is.
From the Neo4j documentation:
Parameters can be used for literals and expressions in the WHERE clause, for the index key and index value in the START clause, for index queries, and finally for node/relationship ids. Parameters cannot be used for property names, since property notation is part of the query structure that is compiled into a query plan.
You could do something like:
.Match("a-[Rel]->foundClass")
.Where("type(Rel) = {relationshipType}")
.WithParam("relationshipType", relationshipType)
(Disclaimer: I've just typed that here. I haven't tested it at all.)
That will likely be slower though, because you need to retrieve all relationships, then test their types. You should test this. There's a reason why the match clause is compiled into the query plan.
Does anybody know (or care to make a supposition as to) why TSqlDataSet has a CommandText property (string) whereas TSqlQuery has a SQL property (TStrings)?
Consider the sql statement
select id, name from
table
order by name
If I use a TSqlQuery, I can change the table name in the query dynamically by accessing SQL[1]; but if I am using a TSqlDataSet (as I must when I need a bidirectional dataset; the dataset is connected to a provider and thence to a TClientDataSet), I have to set the CommandText string literally. Whilst the above example is trivial, this can be a problem when the SQL statement is much more involved.
Update:
Judging by the comments and answers so far, it seems that I was misunderstood. I don't care very much about improving the runtime performance of the components (what does one millisecond matter when the query takes one second?), but I do care about the programmer (i.e. me) and the ability to maintain the program. In real life, I have the following query, which is stored in a TSqlQuery:
select dockets.id, dockets.opendate, customers.name, statuses.statname,
dockets.totalcost, dockets.whopays, dockets.expected, dockets.urgent,
(dockets.totalcost - dockets.billed) as openbill,
(dockets.totalcost - dockets.paid) as opencost,
location.name as locname, dockets.attention,
statuses.colour, statuses.disporder, statuses.future, dockets.urgcomment
from location, statuses, dockets left join customers
on dockets.customer = customers.id
where dockets.location = location.id
and dockets.status = statuses.id
I haven't counted the number of characters in the string, but I'm sure that there are more than 255, thus precluding storing the query in a simple string. In certain circumstances, I want to filter the amount of data being displayed by adding the line 'and statuses.id = 3' or 'and customers.id = 249'. If the query were stored as TStrings, then I could add to the basic query the dummy line 'and 1 = 1', and then update this line as needed. But the query is one long string and I can't easily access the end of it.
What I am currently doing (in lieu of a better solution) is creating another TSqlDataSet, and setting its commandtext to the default TSqlDataSet's commandtext whilst appending the extra condition.
1) TSQLQuery exists mainly for compatibility with the BDE TQuery, which has a SQL: TStrings property. TSQLDataSet is what is supposed to be used in new applications.
2) Although SQL: TStrings is useful for some tasks, it is also error prone. Programmers often forget to clear the SQL property before filling it again. Also, if your query is a big one, filling SQL may lead to performance degradation, because on each SQL.Add(...) call the dbExpress code re-parses the query when ParamCheck is True. That can be avoided by using BeginUpdate / EndUpdate, or by setting ParamCheck to False. But note that setting ParamCheck to False stops automatic parameter creation.
SQLQuery1.SQL.BeginUpdate;
try
SQLQuery1.SQL.Clear;
SQLQuery1.SQL.Add('SELECT * FROM');
SQLQuery1.SQL.Add('Orders');
finally
SQLQuery1.SQL.EndUpdate;
end;
CommandText does not have such issues.
3) You can use Format function for building a dynamic SQL string:
var
sTableName: String;
...
sTableName := 'Orders';
SQLDataSet1.CommandText := Format('select * from %s', [sTableName]);
4) Other data access libraries, like AnyDAC, have macro variables, simplifying dynamic query text building. For example:
ADQuery1.SQL.Text := 'SELECT * FROM &TabName';
ADQuery1.Macros[0].AsRaw := 'Orders';
ADQuery1.Open;
I would say that TSqlQuery uses TStrings (TWideStrings in Delphi 2010) because it is much more flexible.
Suppose your query was:
Select
Item1,
Item2,
Item3,
Item4
FROM MyTable
It's a lot easier to read
You can copy and paste into an external query tool and it stays formatted
It's easy to comment out sections
Select
Item1,
/*
Item2,
Item3,
*/
Item4
FROM MyTable
You can easily add items
Select
Item1,
Item2,
Item2a,
Item2b,
Item3,
Item3a,
Item3b,
Item4
FROM MyTable
Try doing that with a contiguous set of characters that goes on forever in one long line with no line breaks, inside an edit window that is always too small for viewing and doesn't allow wrapped text, etc.
Just my $0.02.