Is there a limit to the number of parameters in a TStoredProc? - delphi

Is there a limit to either the number of params or to the overall size of the params in a TStoredProc ExecProc call?
We are currently running a system that still uses the BDE to connect to Oracle, and a recent change to the number of parameters of a package procedure has started producing access violations. The param count is now up to 291 and the AV is raised in the ExecProc call of TStoredProc.
If we remove a single param from the list (any param, it does not have to be a specific one), the ExecProc call works fine.
I have debugged through the code and the access violation is thrown within the TStoredProc.BindParams procedure in DBTables.pas. I have several watches set up, one of which is SizeOf(FRecordBuffer), and as I step through this procedure its value is 65535, which is MaxWord (Windows.pas). I don't see any specified limits within the DBTables code.
The call stack is TStoredProc.ExecProc -> TStoredProc.CreateCursor -> TStoredProc.GetCursor -> TStoredProc.BindParams, and the access violation is thrown in the for loop that iterates through FParams.
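In case it is relevant, one quick way to check whether the combined parameter data approaches that 65535-byte buffer is to total the declared parameter sizes. A rough diagnostic sketch (CheckParamDataSize is just an illustrative helper, not code from our system):

// Rough diagnostic sketch only; assumes uses DBTables, Dialogs.
procedure CheckParamDataSize(SP: TStoredProc);
var
  I, TotalSize: Integer;
begin
  TotalSize := 0;
  for I := 0 to SP.ParamCount - 1 do
    Inc(TotalSize, SP.Params[I].GetDataSize);  // declared data size of each param
  // Compare the total against the 65535-byte record buffer seen in BindParams
  ShowMessageFmt('%d params, total data size: %d bytes', [SP.ParamCount, TotalSize]);
end;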
Thanks in advance; we need to find something we can pinpoint so we can steer clear of it.

I'm not at all versed in Oracle SQL, but since you're maintaining the thing, I would see if I could change the call with all those parameters into a single insert into a new dedicated table (with that many columns plus an autonumber primary key), and change the stored procedure to take this key as input and fetch the values from that new record to do its job. This may get you results faster than finding out what the maximum number of parameters is and trying to work around it. (Though it's a bit of a strange number, as in not a power of 2, the limit may well be 291...)
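If it helps, here is a very rough Delphi/BDE sketch of that idea; REPORT_PARAMS, RPT_PARAMS_SEQ, P_KEY, ReportQuery and the COLn columns are all made-up names, and the real table would of course need one column per value:

// Sketch only: all object, table and column names below are hypothetical.
var
  NewKey: Integer;
begin
  // 1. Fetch a key from a (hypothetical) Oracle sequence
  ReportQuery.SQL.Text := 'SELECT RPT_PARAMS_SEQ.NEXTVAL AS NEW_KEY FROM DUAL';
  ReportQuery.Open;
  try
    NewKey := ReportQuery.FieldByName('NEW_KEY').AsInteger;
  finally
    ReportQuery.Close;
  end;

  // 2. Store the values as columns of the dedicated table (one column per old param)
  ReportQuery.SQL.Text :=
    'INSERT INTO REPORT_PARAMS (ID, COL1, COL2) VALUES (:ID, :COL1, :COL2)';
  ReportQuery.ParamByName('ID').AsInteger := NewKey;
  ReportQuery.ParamByName('COL1').AsString := 'first value';  // ...and so on for each value
  ReportQuery.ParamByName('COL2').AsString := 'second value';
  ReportQuery.ExecSQL;

  // 3. Call the package procedure with just the key
  StoredProc1.Params.Clear;
  StoredProc1.Params.CreateParam(ftInteger, 'P_KEY', ptInput).AsInteger := NewKey;
  StoredProc1.ExecProc;
end;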

Related

Unable to verify UPPAAL properties

I am verifying a very small model, but I receive a memory-exhausted message. I have changed the model several times but keep having the same problem.
I thought the problem might be due to using a user-defined function or using the select option to get a random number. Then I changed the model so that it neither calls the function nor uses the select option, but still....
I am wondering whether it is an issue in UPPAAL or in my model. There is no error other than memory exhaustion. Once the values of "r1" and "r2" are changed, the CTL property no longer works.
CTL works for all values of r1 and r2 before the increment.
The model increments several variables (r1, r2 and cntr): if there is no upper bound on them (and it seems there is not, but I cannot see all the functions), then the state space is going to be huge (all their values, multiplied by the number of locations, times the clock zones) and will thus exhaust all the memory.
Either make those variables bounded (do not allow increments past some value), or declare them as meta (don't do this if you do not understand the consequences).

How to stream output from a continuous view in pipelinedb?

I have set up pipelinedb and it works great! I would like to know if it's possible to stream data out of a continuous view after a value in the view has been updated, that is, to have some external process act on changes to a view.
I wish to stream metrics generated from the views into a dashboard, and I do not want to poll the DB to achieve this.
As of 0.9.5, continuous triggers have been removed in favour of using output streams and continuous transforms (first suggested by DidacticTactic). The output of a continuous view is essentially a stream, which means you can create continuous views or transforms based on it.
Simple Example:
First create a stream and continuous view.
CREATE STREAM s (
  x int
);
CREATE CONTINUOUS VIEW hourly_cv AS
  SELECT
    hour(arrival_timestamp) AS ts,
    SUM(x) AS sum
  FROM s GROUP BY ts;
Every continuous view now has an output stream. You can create a transform based on the output of the view using output_of. In the transform you have access to the tuples old and new, which represent the old and new values respectively. (0.9.7 adds a third called delta.) So you can create a transform that uses the output of 'hourly_cv' like so:
CREATE CONTINUOUS TRANSFORM hourly_ct AS
  SELECT
    (new).sum
  FROM output_of('hourly_cv')
  THEN EXECUTE PROCEDURE update();
In this example I'm calling update which we still need to define. It needs to be a function that returns a trigger.
CREATE OR REPLACE FUNCTION update()
  RETURNS trigger AS
$$
BEGIN
  -- Do anything you want here.
  RETURN NEW;
END;
$$
LANGUAGE plpgsql;
I found the 0.9.5 release notes blog post helpful to understand output streams and why continuous triggers are no more.
Check out the sections in our technical docs on output streams and continuous transforms for help on how to do this, and feel free to ping us in our Gitter channel if you need help beyond what you find in the docs.
I feel like a bit of an idiot trying to figure out what the answer to this could be using the tools Didactic provided. Maybe I am blind, but I have still not found a way. I did find the 0.9.3 version of the DB, which included continuous triggers, but this feature has since been removed and I don't wish to switch to an older version of the DB.
This is a bit sad, but I suppose it was moved out of the open-source version of the project to accommodate the real-time analytics dashboard product that the same company provides.
Either way, I solved this issue by using a stored procedure. It's probably slightly inefficient compared to what a built-in feature would provide, but I am hitting the DB a few thousand times a minute and my VM's CPU and RAM just yawn at me.
CREATE OR REPLACE FUNCTION all_insert(text, text)
  RETURNS void AS
$BODY$
DECLARE
  result text;
BEGIN
  INSERT INTO all_in (streamid, generalinput) VALUES ($1, $2);
  SELECT array_to_json(array_agg(json_build_object('streamId', streamid, 'total', count)))::text
    INTO result FROM totals;
  PERFORM pg_notify('totals', result);
END
$BODY$
LANGUAGE plpgsql;
So my insert and notify are done by calling this single stored procedure. Then my application simply has to listen for PostgreSQL NOTIFY events and handle them appropriately. In the example above, the application would receive a JSON object with the particular stream id and the total associated with it.

Does the compiler optimize (close) identical FieldByName calls?

In some code I'm maintaining, I see two different methods used in TClientDataSet.OnCalcFields event handlers:
with DataSet do
begin
  // 1. Call FieldByName twice
  if AMinDate > FieldByName(SPlanAllocatieFromDate).AsDateTime then
    AMinDate := FieldByName(SPlanAllocatieFromDate).AsDateTime;
  // 2. Put the retrieved FieldByName value in a temp var
  lEmpID := FieldByName(SPlanAllocatieEmpID).AsInteger;
  if lEmpID <> 0 then
    lTSAllocatedEmpIDs.Add(IntToStr(lEmpID));
end;
Will the compiler (Delphi XE2, Win32 app) optimize method 1 so that it effectively uses a temp var, as in method 2? The two FieldByName calls are quite close; you could even say nested.
If not, I should rewrite 1., because OnCalcFields executes often.
BTW, I know about Fields[] versus FieldByName(), and about using a temp TField var when running an EOF loop; those are not the issue here.
No version of the Delphi compiler does anything like this.
Such an optimization would require the compiler to be able to prove that the two calls to FieldByName would always give the same result, and there is currently no provision for flagging a method as deterministic.
Note that it is quite possible in theory (if unlikely in reality) for the two calls NOT to give the same result, e.g. if a different thread deletes a field from the collection between the first and second call. Generally, the compiler does not know or care at the call site what a particular method call actually does.
Does the compiler optimize (close) identical FieldByName calls?
No, it does not.
The compiler does not look inside function calls to see what is within. It therefore has no way to prove that the value returned by successive calls to a function would be the same. Likewise it has no way to prove that the function has no side-effects. These are the two prerequisites for the optimisation under consideration.
You will need to perform the optimisation yourself, by explicitly adding and using a local variable to store the value returned by a single call to FieldByName.
Beyond the consideration of performance, I would argue that the use of a local variable to hold the field is semantically much better. This makes it clear to the reader that all actions are performed on the same field. That reason alone would be enough to persuade me to make the change you describe. Don't repeat yourself.
And while we are in code review mode, you might care to reconsider the use of with.
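For what it's worth, a rewrite along those lines (no with, and each field looked up once) might look like the sketch below; the handler name is hypothetical, and AMinDate and lTSAllocatedEmpIDs are assumed to be accessible just as in the original snippet:

procedure TMyForm.ClientDataSetCalcFields(DataSet: TDataSet);
var
  FromDateField, EmpIDField: TField;
  lEmpID: Integer;
begin
  // Look each field up once and reuse the TField reference
  FromDateField := DataSet.FieldByName(SPlanAllocatieFromDate);
  EmpIDField := DataSet.FieldByName(SPlanAllocatieEmpID);

  if AMinDate > FromDateField.AsDateTime then
    AMinDate := FromDateField.AsDateTime;

  lEmpID := EmpIDField.AsInteger;
  if lEmpID <> 0 then
    lTSAllocatedEmpIDs.Add(IntToStr(lEmpID));
end;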

Writing many stored procedures

I need to write a long procedure which generates a report for a company.
Since the report involves multiple pieces of data to be fetched, I have written many small procedures to fetch the different records.
Is it the right approach to write many subprograms and call them from the main program?
Please help, or is there any other way to do this?
Unless you really go wild (**) and build a 'tree' of stored procedures, each calling the next, I don't see any problems with this. There might in fact be benefits, as:
it's easier to maintain smaller pieces of code
(re)compilation of smaller stored procedures is going to be faster
**: There is a 'limit' in MSSQL in that the stack is limited to 32 levels. That is, if procedure1 calls procedure1_1, and that procedure calls procedure1_1_1, and that one calls another, etc., you'll get an error once you go more than 32 calls 'deep'. Calling multiple stored procedures sequentially isn't a problem, though.
The only thing to keep in mind is the scope of the variables/temporary tables you're using. If you want to pass values around, you'll need to use parameters (using OUTPUT parameters can be useful to keep track of a rowcount variable, for instance).

Optimizing looping through dataset

I have some code like this:
var
  dxMemOrdered: TdxMemData;
...
while not qrySandbox2.EOF do
begin
  dxMemOrdered.Append;
  dxMemOrderedTotal.AsCurrency := qrySandbox2.FieldByName('TOTAL').AsCurrency;
  dxMemOrdered.Post;
  qrySandbox2.Next;
end;
This code executes in a thread. When there are huge record counts, say 400,000, it takes around 25 minutes to loop through them. Is there any way I can reduce this time by optimizing the loop? Any help would be appreciated.
Update
Based on the suggestions, I made the following changes:
var
  dxMemOrdered: TdxMemData;
...
qrySandbox2.DisableControls;
try
  while not qrySandbox2.Recordset.EOF do
  begin
    dxMemOrdered.Append;
    dxMemOrderedTotal.AsCurrency := qrySandbox2.Recordset.Fields['TOTAL'].Value;
    dxMemOrdered.Post;
    qrySandbox2.Next;
  end;
finally
  qrySandbox2.EnableControls;
end;
and my processing time has improved from 15 minutes to 2 minutes. Thank you, guys.
Without seeing more code, the only suggestion I can make is to make sure that any visual control that is using the memory table is disabled. Suppose you have a cxGrid called Grid that is linked to your dxMemOrdered memory table:
var
  dxMemOrdered: TdxMemData;
...
Grid.BeginUpdate;
try
  while not qrySandbox2.EOF do
  begin
    dxMemOrdered.Append;
    dxMemOrderedTotal.AsCurrency := qrySandbox2.FieldByName('TOTAL').AsCurrency;
    dxMemOrdered.Post;
    qrySandbox2.Next;
  end;
finally
  Grid.EndUpdate;
end;
Some ideas, in order of performance gain versus work to be done by you:
1) Check if the SQL dialect that you are using lets you write queries that directly SELECT from one table and INSERT into another. This depends on the database you're using.
2) Make sure that, if your datasets are coupled to visual controls, you call DisableControls/EnableControls around this loop.
3) Does this code have to run in the main program thread? Maybe you can send it off to a separate thread while the user/program continues doing something else.
4) When you have to deal with really large data, bulk insertion is the way to go. Many databases have options to bulk-insert data from text files. Writing to a text file first and then bulk inserting is way faster than individual inserts. Again, this depends on your database type.
[Edit: I just saw you added the info that it's a TdxMemData, so some of these no longer apply. And you're already threading; missed that ;-). I'll leave these suggestions in for other readers with similar problems.]
It's much better to let SQL do the work instead of iterating through a loop in Delphi. Try a query such as:
insert into dxMemOrdered (total)
select total from qrySandbox2
Is 'total' the only field in dxMemOrdered? I hope it's not the primary key, otherwise you are likely to have collisions, meaning that rows will not be added.
There's actually a lot you could do to speed up your thread.
The first thing would be to look at the problem from a broader perspective:
Am I fetching data from a cached / fast disk, possibly already moved into memory?
Am I doing the right thing when aggregating totals by hand? SQL engines are especially optimized to do those things; all you'd need to do is define an additional logical field in which to store the SQL-aggregated result.
Another little optimization that may bring an improvement over large amounts of looping is not to use constructs like:
Recordset.Fields['TOTAL'].Value
Recordset.FieldByName('TOTAL').Value
but to add the fields with the Fields editor and then access the right field directly. You'll save a whole walk through the fields collection, which is otherwise performed for every field access, on every record.
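A sketch of that idea applied to the loop from the question; TotalField is just an illustrative local (a persistent field created in the Fields editor would work the same way), and the DisableControls call repeats the earlier suggestion:

var
  TotalField: TField;
begin
  TotalField := qrySandbox2.FieldByName('TOTAL');  // one lookup, before the loop
  qrySandbox2.DisableControls;
  try
    while not qrySandbox2.Eof do
    begin
      dxMemOrdered.Append;
      dxMemOrderedTotal.AsCurrency := TotalField.AsCurrency;
      dxMemOrdered.Post;
      qrySandbox2.Next;
    end;
  finally
    qrySandbox2.EnableControls;
  end;
end;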
