Isolation level for a stored procedure in SQL Server?

I want to set the isolation level in my procedure, and I would like to confirm which of the following is the correct way to do it:
Attempt #1 - setting isolation level before calling the stored procedure:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
EXEC [sp_GetProductDetails] 'ABCD','2017-02-20T11:51:37.3178768'
Attempt #2 - setting isolation level inside the stored procedure:
CREATE PROCEDURE MySP AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    BEGIN TRAN
        SELECT * FROM MyTable
        SELECT * FROM MyTable2
        SELECT * FROM MyTable3
    COMMIT TRAN
END
Please suggest.

Both versions are "correct" - they just do different things:
Your attempt #1 sets the isolation level for the current connection (session) - that means the chosen isolation level will be used for every subsequent statement on that connection, until you change the isolation level again
Your attempt #2 sets the isolation level only INSIDE the stored procedure - so once the stored procedure has completed, the isolation level that was in effect on the connection before the call is restored again
So it really depends on what you want to do:
Do you want to change the isolation level in general for your current connection to this database, so that every subsequent statement runs under it? Then choose #1.
Do you want to change the isolation level for just a single stored procedure, regardless of what the connection used before? Then use #2.
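A quick way to see the second behaviour is to check the session's isolation level from inside and outside a procedure (a minimal sketch - the procedure name is made up; transaction_isolation_level reports 1 for READ UNCOMMITTED and 2 for READ COMMITTED):
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
GO
CREATE PROCEDURE dbo.usp_IsolationDemo AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    -- reports 1 (READ UNCOMMITTED) while the procedure is running
    SELECT transaction_isolation_level FROM sys.dm_exec_sessions WHERE session_id = @@SPID;
END;
GO
EXEC dbo.usp_IsolationDemo;
-- reports 2 (READ COMMITTED) again - the procedure's setting was reverted when it returned
SELECT transaction_isolation_level FROM sys.dm_exec_sessions WHERE session_id = @@SPID;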

READ COMMITTED is the default isolation level in SQL Server.
Setting the isolation level inside the procedure, as in attempt #2, is the better practice.

Related

SAP HANA - Parallel Execution of Stored Procedures

I wanted to ask if there is a way to call the same stored procedure with different parameter values in parallel, either from within another stored procedure or from an xsjs service, without the use of jobs.
From my experience so far, the calls run synchronously and wait for the first call of the procedure to return before calling it a second time.
Thank you in advance for your time and help.
Kind Regards...
SQLScript provides the option to run statements in a PARALLEL EXECUTION block, like so:
DO
BEGIN PARALLEL EXECUTION
    INSERT INTO mytab VALUES (1, 2, 3);
    INSERT INTO myothertab VALUES (4, 5, 6);
END;
See the documentation for details here.
HOWEVER: as the documentation states, this does not include the CALL statement for executing procedures.
Link to the documentation:
https://help.sap.com/viewer/de2486ee947e43e684d39702027f8a94/2.0.02/en-US/8db200a4f585490c81c4930689ec1a5c.html
Restrictions and Limitations
The following restrictions apply:
Modification of tables with a foreign key or triggers is not allowed.
Updating the same table in different statements is not allowed.
Only concurrent reads on one table are allowed. Implicit SELECT and SELECT INTO scalar variable statements are supported.
Calling procedures containing dynamic SQL (for example, EXEC, EXECUTE IMMEDIATE) is not supported in parallel blocks.
Mixing read-only procedure calls and read-write procedure calls in a parallel block is not allowed.

How to acquire write lock on a node?

I want to do something like this (Ruby, Cypher queries):
// ACQUIRE WRITE LOCK ON NODE 1
// SOME CODE THAT DOES A READ AND THEN A WRITE, e.g.:
results = neo.query("MATCH (n {id: 1}) RETURN n")
...
case results.size
when 0
neo.create_node(properties)
when 1
neo.update_node(results.first, properties)
...
// RELEASE WRITE LOCK ON NODE 1
According to docs, http://docs.neo4j.org/chunked/stable/transactions-isolation.html:
By default a read operation will read the last committed value unless a local modification within the current transaction exists. The default isolation level is very similar to READ_COMMITTED: reads do not block or take any locks, so non-repeatable reads can occur. It is possible to achieve a stronger isolation level (such as REPEATABLE_READ and SERIALIZABLE) by manually acquiring read and write locks.
http://docs.neo4j.org/chunked/stable/transactions.html:
One can manually acquire write locks on nodes and relationships to achieve higher level of isolation (SERIALIZABLE).
But there is nowhere mentioned anything about how to acquire the lock or how to change the isolation level.
There is no support at the moment for overriding the default READ_COMMITTED isolation level through the REST API. Manually overriding isolation level can be achieved only if using Neo4j embedded in your Java application.
We'll add a note to the documentation page you referenced to make that a bit more clear.
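For completeness, in an embedded setup the write lock is taken on the transaction object. A minimal sketch against the 2.x embedded Java API (the helper method and the lookup by node id are illustrative):
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
// Hypothetical helper: read-then-write on a node while holding its write lock.
void updateWithLock(GraphDatabaseService graphDb, long nodeId) {
    try (Transaction tx = graphDb.beginTx()) {
        Node node = graphDb.getNodeById(nodeId);
        tx.acquireWriteLock(node);   // other writers on this node block from here on
        // read the node's current state and update its properties here
        tx.success();
    }                                // the lock is released when the transaction ends
}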

Questions about Memory models

While reading a book about compilers, I saw that there are two major memory models: the register-to-register model and the memory-to-memory model.
The book says that register-to-register models ignore machine limitations on the number of registers, and compiler back-ends must insert loads and stores. Is that because register-to-register models can use virtual registers, and this model keeps in registers all values that could live in registers, so the back-end must insert loads and stores (to memory) before it finishes?
Also, in the memory-to-memory part, the book says that the compiler back-end can remove redundant loads and stores. Does that mean the model has to remove redundant uses of memory as an optimization?
I'm going to answer your question in the context of compilers because that's what you said you were reading about. In a computer architecture context these answers will not apply, so read with caution.
Is it because register-to-register models can use virtual registers...and this model keeps all values that can be stored in registers, so before finishing it must insert loads and stores (related to memory)?
That's likely one reason. If the underlying machine does not support register/register operations, then the "virtual register" operations will need to be translated into loads and stores instead. Similarly, if your compiler assumes an infinite register machine during the IR phase, it might be necessary to spill some registers to memory during the register allocation phase (in which you map your infinite set of virtual registers to a finite set of real registers, using memory accesses when you run out).
Does it mean that the model has to remove redundant uses of memory for optimization?
Yes, this is something the compiler may do as an optimization step. If we do something like this:
register1 <- LOAD 1234
// Operation using register 1 that leaves the result in register 1
STORE register1, 1234
register1 <- LOAD 1234
// Another operation that uses register 1
STORE register1, 1235
Assuming nothing else reads address 1234 between the store and the reload, this can be optimised to simply leave the value in the register instead, like this:
register1 <- LOAD 1234
// Operation using register 1 that leaves the result in register 1
// Another operation that uses register 1
STORE register1, 1235
This is clearly more efficient because it avoids additional DRAM accesses, which are slow compared to register accesses.

Delphi - How to genericize query result set

I am working with multiple databases within the same application. I am using drivers from two different companies. Both companies have TTable and TQuery descendants that work well.
I need a way to have generic access to the data, regardless of which driver/tQuery component I am using to return a set of data. This data would NOT tie to components, just to my logic.
For example...(pseudocode) Let's create a function which can run against either tQuery component
function ListAllTables(NameOfDatabase: String): ReturnSet??
begin
  If NameOfDataBase = 'X' then use tQuery(Vendor 1)
  else use tQuery(Vendor 2)
  RunQuery;
  Return Answer...
end;
When NORMALLY running a query, I do
Query.Open;
while not Query.EOF do
begin
  // read my rows...
  Query.Next;
end;
If I am CALLING ListAllTables, what is my return type so that I can iterate through the rows? Each tQuery Vendor is different, so I can't use that (can I, and if so, would I want to?) I could build a Memory Table, and pass that back, but that seems like extra work for ListAllRows to build a memory table, and then to pass it back to the calling routine so that it can "un-build", i.e. iterate through the rows...
What are your ideas and suggestions?
Thanks
GS
Almost all Delphi datasets descend from TDataset, and most useful behavior is defined on TDataset.
Therefore, if you assign each table or query to a variable of type TDataset, you should be able to perform your logic on that dataset in a vendor neutral fashion.
I would also isolate the production of the datasets into a set of factory functions that only create the vendor-specific dataset and return it as a TDataset. Each factory function goes in its own unit. Then, only those small units need have any knowledge of the vendor-specific components.
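A minimal sketch of such a factory (TVendor1Query is a placeholder for whichever vendor-specific query component you actually use):
function CreateVendor1Query(const ASQL: string): TDataSet;
var
  Q: TVendor1Query;               // vendor-specific TQuery descendant
begin
  Q := TVendor1Query.Create(nil); // only this unit knows the concrete class
  Q.SQL.Text := ASQL;
  Result := Q;                    // callers see nothing but TDataSet
end;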
You can use IProviderSupport to generalize the query execution. I assume that the query components you are using support IProviderSupport; this interface allows you to set the command text and parameters, execute commands, etc.
The common denominator for the query components is TDataSet, so you will need to pass a TDataSet reference.
For example:
var
  oDS: TDataSet;
...
if NameOfDataBase = 'X' then
  oDS := T1Query.Create(nil)
else
  oDS := T2Query.Create(nil);
(oDS as IProviderSupport).PSSetCommandText('select * from mytab');
oDS.Open;
while not oDS.Eof do begin
  // process the current row here
  oDS.Next;
end;
Perhaps you could consider a universal data access component such as UniDAC or AnyDAC. This allows you to use only one component set to access different databases in a consistent way.
You might also be interested in DataAbstract from RemObjects. A powerful data abstraction, multi-tier, remoting solution with a lot of features. Not inexpensive, but excellent value for the money.
Relevant links:
http://www.da-soft.com/anydac/
http://www.devart.com/unidac/
http://www.remobjects.com/da/
A simple approach would be to have ListAllTables return (or populate) a TStringList.
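A rough sketch of that approach (GetTableListQuery is a hypothetical factory along the lines suggested above, returning the vendor-specific query as a TDataSet; the first result column is assumed to hold the table name):
procedure ListAllTables(const NameOfDatabase: string; Tables: TStrings);
var
  DS: TDataSet;
begin
  DS := GetTableListQuery(NameOfDatabase);
  try
    DS.Open;
    while not DS.Eof do
    begin
      Tables.Add(DS.Fields[0].AsString); // first column assumed to be the table name
      DS.Next;
    end;
  finally
    DS.Free;
  end;
end;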

Approaches for caching calculated values

In a Delphi application we are working on we have a big structure of related objects. Some of the properties of these objects have values which are calculated at runtime and I am looking for a way to cache the results for the more intensive calculations. An approach which I use is saving the value in a private member the first time it is calculated. Here's a short example:
unit Unit1;
interface
type
  TMyObject = class
  private
    FObject1, FObject2: TMyOtherObject;
    FMyCalculatedValue: Integer;
    function GetMyCalculatedValue: Integer;
  public
    property MyCalculatedValue: Integer read GetMyCalculatedValue;
  end;
implementation
function TMyObject.GetMyCalculatedValue: Integer;
begin
  if FMyCalculatedValue = 0 then
  begin
    FMyCalculatedValue :=
      FObject1.OtherCalculatedValue + // This is also calculated
      FObject2.OtherValue;
  end;
  Result := FMyCalculatedValue;
end;
end.
It is not uncommon that the objects used for the calculation change and the cached value should be reset and recalculated. So far we addressed this issue by using the observer pattern: objects implement an OnChange event so that others can subscribe, get notified when they change and reset cached values. This approach works but has some downsides:
It takes a lot of memory to manage subscriptions.
It doesn't scale well when a cached value depends on lots of objects (a list for example).
The dependency is not very specific (even if a cached value depends on only one property, it will also be reset when other properties change).
Managing subscriptions impacts the overall performance and is hard to maintain (objects are deleted, moved, ...).
It is not clear how to deal with calculations depending on other calculated values.
And finally the question: can you suggest other approaches for implementing cached calculated values?
If you want to avoid the observer pattern, you might try a hashing approach.
The idea is that you 'hash' the arguments and check whether the result matches the 'hash' for which the cached state was saved. If it does not, you recompute (and save the new hash as the key).
I know I make it sound like I just thought of it, but it is in fact used by well-known software.
For example, SCons (a Makefile alternative) uses it to check whether a target needs to be rebuilt, in preference to a timestamp approach.
We have used SCons for over a year now, and we have never seen a target that was not rebuilt when it should have been, so their hashing works well!
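A rough sketch of the idea applied to the example above (a concatenated key stands in for a real hash; FCachedKey is an additional private string field):
function TMyObject.GetMyCalculatedValue: Integer;
var
  CurrentKey: string;
begin
  // Fingerprint of everything the calculation depends on.
  CurrentKey := IntToStr(FObject1.OtherCalculatedValue) + '|' +
                IntToStr(FObject2.OtherValue);
  if CurrentKey <> FCachedKey then
  begin
    FMyCalculatedValue := FObject1.OtherCalculatedValue + FObject2.OtherValue;
    FCachedKey := CurrentKey;
  end;
  Result := FMyCalculatedValue;
end;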
You could store local copies of the external object values which are required. The access routine then compares the local copy with the external value, and only does the recalculation on a change.
Accessing the external objects properties would likewise force a possible re-evaluation of those properties, so the system should keep itself up-to-date automatically, but only re-calculate when it needs to. I don't know if you need to take steps to avoid circular dependencies.
This increases the amount of space you need for each object, but removes the observer pattern. It also defers all calculations until they are needed, instead of performing the calculation every time a source parameter changes. I hope this is relevant for your system.
unit Unit1;
interface
type
  TMyObject = class
  private
    FObject1, FObject2: TMyOtherObject;
    FObject1Val, FObject2Val: Integer;
    FMyCalculatedValue: Integer;
    function GetMyCalculatedValue: Integer;
  public
    property MyCalculatedValue: Integer read GetMyCalculatedValue;
  end;
implementation
function TMyObject.GetMyCalculatedValue: Integer;
begin
  // Recalculate only when one of the source values has changed since the last read.
  if (FObject1.OtherCalculatedValue <> FObject1Val)
    or (FObject2.OtherValue <> FObject2Val) then
  begin
    FMyCalculatedValue :=
      FObject1.OtherCalculatedValue + // This is also calculated
      FObject2.OtherValue;
    FObject1Val := FObject1.OtherCalculatedValue;
    FObject2Val := FObject2.OtherValue;
  end;
  Result := FMyCalculatedValue;
end;
end.
In my work I use Bold for Delphi, which can manage arbitrarily complex structures of cached values that depend on each other. Usually each variable only holds a small part of the problem. In this framework these are called derived attributes - derived because the value is not saved in the database; it just depends on other derived attributes or persistent attributes in the database.
The code behind such an attribute is written in Delphi as a procedure, or in OCL (Object Constraint Language) in the model. If you write it as Delphi code you have to subscribe to the variables it depends on. So if attribute C depends on A and B, then whenever A or B changes, the code that recalculates C is called automatically the next time C is read. The first time C is read, A and B are also read (maybe from the database). As long as A and B do not change, you can read C and get very fast performance. For complex calculations this can save quite a lot of CPU time.
The downside, and the bad news, is that Bold is no longer officially supported, and you cannot buy it either. I suppose you can get it if you ask enough people, but I don't know where you can download it. Around 2005-2006 it was downloadable for free from Borland, but not anymore.
It is not ready for D2009, as someone would have to port it to Unicode.
Another option is ECO with .NET from Capable Objects. ECO is a plugin for Visual Studio. It is a supported framework that has the same idea and author as Bold for Delphi. Many things are also improved; for example, databinding is used for the GUI components. Both Bold and ECO use a model as a central point, with classes, attributes and links. Those can be persisted in a database or an XML file. With the free version of ECO the model can have at most 12 classes, but as I remember there are no other limits.
Bold and ECO contain a lot more than derived attributes, which makes you more productive and lets you think about the problem instead of technical details of the database or, in your case, how to cache values. You are welcome to ask more questions about those frameworks!
Edit:
There is actually a download link for registered Embarcadero users for Bold for Delphi for D7 - quite old... I know there were updates for D2005 and D2006.
