I wanted to ask if there is a way to call the same stored procedure with different parameter values in parallel, either from within another stored procedure or from an xsjs service, without the use of jobs.
From my experience so far, the calls run synchronously and wait for the first call of a procedure to return before calling it a second time.
Thank you in advance for your time and help.
Kind Regards...
SQLScript provides the option to run statements in a PARALLEL EXECUTION block, like so:
DO
BEGIN PARALLEL EXECUTION
  INSERT INTO mytab VALUES (1, 2, 3);
  INSERT INTO myothertab VALUES (4, 5, 6);
END;
See the documentation for details here.
HOWEVER: as the documentation states, this does not include the CALL statement for executing procedures.
Link to the documentation:
https://help.sap.com/viewer/de2486ee947e43e684d39702027f8a94/2.0.02/en-US/8db200a4f585490c81c4930689ec1a5c.html
Restrictions and Limitations
The following restrictions apply:
Modification of tables with a foreign key or triggers is not allowed.
Updating the same table in different statements is not allowed.
Only concurrent reads on one table are allowed. Implicit SELECT and SELECT INTO scalar variable statements are supported.
Calling procedures containing dynamic SQL (for example, EXEC, EXECUTE IMMEDIATE) is not supported in parallel blocks.
Mixing read-only procedure calls and read-write procedure calls in a parallel block is not allowed.
I have set up PipelineDB and it works great! I would like to know if it's possible to stream data out of a continuous view after a value in the view has been updated, that is, to have some external process act on changes to the view.
I wish to stream metrics generated from the views into a dashboard, and I do not want to poll the DB to achieve this.
As of 0.9.5, continuous triggers have been removed in favour of using output streams and continuous transforms. (First suggested by DidacticTactic). The output of a continuous view is essentially a stream, which means you can create continuous views or transforms based on it.
Simple Example:
First create a stream and continuous view.
CREATE STREAM s (
  x int
);
CREATE CONTINUOUS VIEW hourly_cv AS
SELECT
  hour(arrival_timestamp) AS ts,
  SUM(x) AS sum
FROM s GROUP BY ts;
Every continuous view now has an output stream. You can create a transform based on the output of the view using output_of. In the transform you have access to the tuples old and new, which represent the old and new values respectively. (0.9.7 has a third called delta.) So you can create a transform that uses the output of 'hourly_cv' like so:
CREATE CONTINUOUS TRANSFORM hourly_ct AS
SELECT
  (new).sum
FROM output_of('hourly_cv')
THEN EXECUTE PROCEDURE update();
In this example I'm calling update(), which we still need to define: it needs to be a function that returns a trigger.
CREATE OR REPLACE FUNCTION update()
RETURNS trigger AS
$$
BEGIN
  -- Do anything you want here.
  RETURN NEW;
END;
$$
LANGUAGE plpgsql;
I found the 0.9.5 release notes blog post helpful to understand output streams and why continuous triggers are no more.
Check out the sections in our technical docs on output streams and continuous transforms for help on how to do this, and feel free to ping us in our Gitter channel if you need help beyond what you find in the docs.
I feel like a bit of an idiot trying to figure out what the answer to this could look like using the tools Didactic provided. Maybe I am blind, but I have still not found a way. I found the 9.3 version of the DB, which included continuous triggers, but this has since been removed and I don't wish to switch to an older version of the DB.
This is a bit sad but I suppose it was moved out of the open source version of the project to accommodate the Real Time analytics dashboard project that the same company provides.
Either way, I solved this issue by using a stored procedure. It's probably slightly inefficient compared to what a built-in function would provide, but I am hitting the DB a few thousand times a minute and my VM's CPU and RAM just yawn at me.
CREATE OR REPLACE FUNCTION all_insert(text, text)
RETURNS void AS
$BODY$
DECLARE
  result text;
BEGIN
  INSERT INTO all_in (streamid, generalinput) VALUES ($1, $2);
  SELECT array_to_json(array_agg(json_build_object('streamId', streamid, 'total', count)))::text
    INTO result FROM totals;
  PERFORM pg_notify('totals', result);
END
$BODY$
LANGUAGE plpgsql;
So my insert and notify are done by calling this single stored procedure. Then my application simply has to listen for PostgreSQL NOTIFY events and handle them appropriately. In the example above, the application would receive a JSON object with the particular stream id and the total associated with it.
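For reference, the listening side needs nothing more than LISTEN on the same channel. A minimal sketch in plain SQL (the stream id and value are invented; most PostgreSQL drivers expose LISTEN/NOTIFY through their own API):
-- In a separate session (e.g. psql):
LISTEN totals;
-- Each time the function runs...
SELECT all_insert('stream-1', 'some value');
-- ...the listening session receives an asynchronous notification
-- on channel 'totals' whose payload is the JSON string built above.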
In some code I'm maintaining, I see two different methods used in TClientDataSet.OnCalcFields event handlers:
with DataSet do
begin
  // 1. Call FieldByName twice
  if AMinDate > FieldByName(SPlanAllocatieFromDate).AsDateTime then
    AMinDate := FieldByName(sPlanAllocatieFromDate).AsDateTime;
  // 2. Put the retrieved FieldByName value in a temp var
  lEmpID := FieldByName(SPlanAllocatieEmpID).AsInteger;
  if lEmpID <> 0 then
    lTSAllocatedEmpIDs.Add(IntToStr(lEmpID));
end;
Will the compiler (Delphi XE2, Win32 app) optimize method 1 to use a temp var, as method 2 does? The two FieldByName calls are quite close; you could even say nested.
If not, I should rewrite 1. because OnCalcFields executes often.
BTW. I know about Fields[] versus FieldByName(), or using a temp TField var when running an EOF loop, those are not the issue here.
No version of the Delphi compiler does anything like this.
Such optimizations would require the compiler to be able to prove that the two calls to FieldByName would always give the same result, and there is currently no provision for flagging a method as being deterministic.
Note that it is quite possible in theory (if unlikely in reality) for the two calls NOT to give the same result, in this case e.g. if a different thread deletes a field out of the collection between the first and second call. Generally, the compiler does not know or care at the call site what a particular method call actually does.
Does the compiler optimize (close) identical FieldByName calls?
No it does not.
The compiler does not look inside function calls to see what is within. It therefore has no way to prove that the value returned by successive calls to a function would be the same. Likewise it has no way to prove that the function has no side-effects. These are the two prerequisites for the optimisation under consideration.
You will need to perform the optimisation yourself, by explicitly adding and using a local variable to store the value returned by a single call to FieldByName.
Beyond the consideration of performance, I would argue that the use of a local variable to hold the field is semantically much better. This makes it clear to the reader that all actions are performed on the same field. That reason alone would be enough to persuade me to make the change you describe. Don't repeat yourself.
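A minimal sketch of that rewrite, reusing the names from the question (the local holds the TField reference returned by the single lookup):
var
  lFromDate: TField;
begin
  lFromDate := DataSet.FieldByName(SPlanAllocatieFromDate);
  if AMinDate > lFromDate.AsDateTime then
    AMinDate := lFromDate.AsDateTime;
end;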
And while we are in code review mode, you might care to reconsider the use of with.
I need to write a long procedure which generates a report for a company.
Since the report involves multiple sets of data to be fetched, I have written many small procedures to fetch the different records.
Is it the right approach to write many sub-programs and call them from the main program?
Please help, or is there any other way to do this?
Unless you really go wild (**) and build a 'tree' of stored procedures, each calling the next, I don't see any problems with this. There might in fact be benefits to this, as:
it's easier to maintain smaller pieces of code
(re)compilation of smaller stored procedures is going to be faster
**: There is a 'limit' in MSSQL in that the stack is limited to 32 levels. That is, if procedure1 calls procedure1_1 and that procedure calls procedure1_1_1 and that one calls another etc... you'll get an error when you get over 32 calls 'deep'. Calling multiple stored procedures sequentially isn't a problem though.
The only thing to keep in mind is the scope of the variables/temporary tables you're using. If you want to pass values around you'll need to use parameters (an OUTPUT parameter can be useful to keep track of a @rowcount variable, for instance), as sketched below.
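A minimal sketch of that OUTPUT pattern (all object names here are invented for illustration):
-- Sub-procedure: loads one part of the report and reports back its row count.
CREATE PROCEDURE dbo.LoadSalesSection
  @fromDate date,
  @rowCount int OUTPUT
AS
BEGIN
  INSERT INTO dbo.ReportStaging (customer, amount)
  SELECT customer, amount
  FROM dbo.Sales
  WHERE sale_date >= @fromDate;
  SET @rowCount = @@ROWCOUNT; -- hand the affected row count back to the caller
END;
GO
-- The main report procedure calls it and receives the count:
DECLARE @rows int;
EXEC dbo.LoadSalesSection @fromDate = '2017-01-01', @rowCount = @rows OUTPUT;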
Is there a limit to either the number of params or to the overall size of the params in a TStoredProc ExecProc call?
We are currently running a system that still uses the BDE to connect to Oracle, and a recent change to the number of parameters of a package procedure has started producing access violations. The param count is now up to 291, and the AV is raised in the ExecProc call of TStoredProc.
If we remove a single param from the list (any param, does not have to be a specific param), the ExecProc call works fine.
I have debugged through the code, and the access violation is thrown within the TStoredProc.BindParams procedure in DBTables.pas. I have several watches set up, one of which is SizeOf(FRecordBuffer); as I step through this procedure, its value is 65535, which is MaxWord (Windows.pas). I don't see any specified limits within the DBTables code.
The call stack is TStoredProc.ExecProc -> TStoredProc.CreateCursor -> TStoredProc.GetCursor -> TStoredProc.BindParams, and the access violation is thrown in the for loop that iterates through FParams.
Thanks in advance, we need to find something we can pinpoint so we can steer clear.
I'm not at all versed in Oracle SQL, but since you're maintaining the thing, I would see if I could change the call with all those parameters into a single insert into a new dedicated table (with that many columns plus an autonumber primary key), and change the stored procedure to take this key as input and fetch the values from this new record to do its job. That may well be quicker than finding out the exact maximum number of parameters and trying to find a fix there. (Though it's a bit of a strange number, as in not a power of 2, it may well be 291...)
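A rough sketch of that idea in Oracle terms (all names invented; a sequence plays the role of the autonumber):
-- One column per former parameter, plus a generated key.
CREATE SEQUENCE report_args_seq;
CREATE TABLE report_args (
  id     NUMBER PRIMARY KEY,
  param1 VARCHAR2(100),
  param2 VARCHAR2(100)
  -- ... one column for each remaining parameter
);
-- The client inserts a single row instead of binding 291 params...
INSERT INTO report_args (id, param1, param2)
VALUES (report_args_seq.NEXTVAL, 'a', 'b');
-- ...and the package procedure takes only the id, reading the rest
-- back from report_args with SELECT ... INTO.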
I am working with multiple databases within the same application, using drivers from two different companies. Both companies have TTable and TQuery descendants that work well.
I need a way to have generic access to the data, regardless of which driver/tQuery component I am using to return a set of data. This data would NOT tie to components, just to my logic.
For example...(pseudocode) Let's create a function which can run against either tQuery component
function ListAllTables(NameOfDatabase: String): ReturnSet??
begin
  if NameOfDataBase = 'X' then
    use TQuery (Vendor 1)
  else
    use TQuery (Vendor 2);
  RunQuery;
  Return Answer...
end;
When NORMALLY running a query, I do
Query.Open;
while not Query.EOF do
begin
  // Read my rows...
  Query.Next;
end;
If I am calling ListAllTables, what is my return type so that I can iterate through the rows? Each vendor's TQuery is different, so I can't use that (can I, and if so, would I want to?). I could build a memory table and pass that back, but that seems like extra work: ListAllTables would build a memory table, pass it back, and the calling routine would then "un-build" it, i.e. iterate through the rows...
What are your ideas and suggestions?
Thanks
GS
Almost all Delphi datasets descend from TDataset, and most useful behavior is defined on TDataset.
Therefore, if you assign each table or query to a variable of type TDataset, you should be able to perform your logic on that dataset in a vendor neutral fashion.
I would also isolate the production of the datasets into a set of factory functions that only create the vendor-specific dataset and return it as a TDataset. Each factory function goes in its own unit. Then only those small units need have any knowledge of the vendor-specific components.
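A minimal sketch of one such factory unit (TVendor1Query and the unit declaring it are stand-ins for whatever the real vendor component is called):
unit Vendor1Factory;

interface

uses
  DB;

// Creates the vendor-specific query but only exposes it as a TDataSet.
function CreateVendor1Query(const ASQL: string): TDataSet;

implementation

uses
  Vendor1Components; // hypothetical unit declaring TVendor1Query

function CreateVendor1Query(const ASQL: string): TDataSet;
var
  Q: TVendor1Query;
begin
  Q := TVendor1Query.Create(nil);
  Q.SQL.Text := ASQL;
  Result := Q; // the caller only ever sees a TDataSet
end;

end.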
You can use IProviderSupport to generalize the query execution. I expect that the query components you use support IProviderSupport. This interface allows you to set the command text and parameters, execute commands, etc.
The common denominator of the query components is TDataSet, so you will need to pass a TDataSet reference.
For example:
var
  oDS: TDataSet;
...
if NameOfDataBase = 'X' then
  oDS := T1Query.Create(nil)
else
  oDS := T2Query.Create(nil);
(oDS as IProviderSupport).PSSetCommandText('select * from mytab');
oDS.Open;
while not oDS.Eof do
begin
  //
  oDS.Next;
end;
Perhaps you could consider a universal data access component such as UniDAC or AnyDAC. This allows you to use only one component set to access different databases in a consistent way.
You might also be interested in DataAbstract from RemObjects. A powerful data abstraction, multi-tier, remoting solution with a lot of features. Not inexpensive, but excellent value for the money.
Relevant links:
http://www.da-soft.com/anydac/
http://www.devart.com/unidac/
http://www.remobjects.com/da/
A simple approach would be to have ListAllTables return (or populate) a stringlist.
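A minimal sketch of that approach, combined with the factory idea above (CreateQueryFor is a hypothetical helper that returns the right vendor's query as a TDataSet):
procedure ListAllTables(const NameOfDatabase: string; ATables: TStrings);
var
  Query: TDataSet;
begin
  Query := CreateQueryFor(NameOfDatabase); // hypothetical factory
  try
    ATables.Clear;
    Query.Open;
    while not Query.Eof do
    begin
      ATables.Add(Query.Fields[0].AsString); // e.g. one table name per row
      Query.Next;
    end;
  finally
    Query.Free;
  end;
end;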