MyBatis set fetch size on ResultSet as out parameter of procedure - stored-procedures

I have a stored procedure that I need to call using MyBatis, and I have managed to call it. The procedure has multiple OUT parameters, one of which is an Oracle cursor. I need to iterate over this cursor, but without any fine-tuning of the JDBC driver via the fetchSize attribute it goes row by row, which is very slow.
I am able to set the fetchSize attribute on the procedure call:
<select id="getEvents" statementType="CALLABLE" parameterMap="eventInputMap" fetchSize="1000">
  {call myProc(?, ?, ?, ?, ?)}
</select>
But this doesn't help at all. I think it doesn't work because of the multiple OUT parameters, so the program doesn't know which OUT parameter the fetch size should be applied to. Is there any way to set the fetch size on the ResultSet (Oracle cursor)? When I use CallableStatement from the java.sql package directly, I am able to set the fetch size on the ResultSet.
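For comparison, this is roughly the plain JDBC approach I mean (a sketch only; the parameter positions match the mapping below and the row mapping is omitted):
// Plain JDBC: register the cursor OUT parameter and set the fetch size
// directly on the ResultSet obtained from it.
CallableStatement cs = connection.prepareCall("{call myProc(?, ?, ?, ?, ?)}");
cs.setInt(1, networkId);
cs.setString(2, identityId);
cs.registerOutParameter(3, oracle.jdbc.OracleTypes.CURSOR);
cs.registerOutParameter(4, java.sql.Types.INTEGER);
cs.registerOutParameter(5, java.sql.Types.VARCHAR);
cs.execute();
ResultSet rs = (ResultSet) cs.getObject(3);
rs.setFetchSize(1000); // the setting I would like MyBatis to apply to the cursor
while (rs.next()) {
    // map each row to an Event here
}
rs.close();
cs.close();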
Here are the mapping file and the procedure call from the program:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper
  PUBLIC "-//ibatis.apache.org//DTD Mapper 3.0//EN"
  "http://ibatis.apache.org/dtd/ibatis-3-mapper.dtd">
<mapper namespace="mypackage.EventDao">
  <resultMap id="eventResult" type="Event">
    <result property="id" column="event_id" />
    <result property="name" column="event_name" />
  </resultMap>
  <parameterMap id="eventInputMap" type="map">
    <parameter property="pnNetworkId" jdbcType="NUMERIC" javaType="java.lang.Integer" mode="IN"/>
    <parameter property="pvUserIdentityId" jdbcType="VARCHAR" javaType="java.lang.String" mode="IN"/>
    <parameter property="result" resultMap="eventResult" jdbcType="CURSOR" javaType="java.sql.ResultSet" mode="OUT"/>
    <parameter property="success" jdbcType="INTEGER" javaType="java.lang.Integer" mode="OUT"/>
    <parameter property="message" jdbcType="VARCHAR" javaType="java.lang.String" mode="OUT"/>
  </parameterMap>
  <select id="getEvents" statementType="CALLABLE" parameterMap="eventInputMap" fetchSize="1000">
    {call myProc(?, ?, ?, ?, ?)}
  </select>
</mapper>
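For reference, the mapper interface is essentially just this (a minimal sketch; the method takes the parameter map, and MyBatis writes the OUT entries "result", "success" and "message" back into it after the call):
public interface EventDao {
    void getEvents(Map<String, Object> parameters);
}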
And the call from the program:
SqlSession session = sqlSessionFactory.openSession();
Map<String, Object> eventInputMap = new HashMap<String, Object>();
try {
    EventDao ed = session.getMapper(EventDao.class);
    eventInputMap.put("pnNetworkId", networkId);
    eventInputMap.put("pvUserIdentityId", identityId);
    eventInputMap.put("success", 0);
    eventInputMap.put("message", null);
    ed.getEvents(eventInputMap);
    session.selectList("EventDao.getEvents", eventInputMap);
} catch (Exception e) {
    e.printStackTrace();
} finally {
    session.close();
}
Thanks in advance!

The provided code works. I have checked three ways to write it: with a parameterMap as shown here, without a parameterMap (mapping directly in the statement), and through annotations; all of them work.
I used to think the fetchSize setting was not propagated from the main statement to the OUT parameter ResultSet, until I actually tested it recently.
To see whether the fetch size is used and how much effect it has, the result must contain a large enough number of rows. And of course, the worse the latency between the application and the database, the more noticeable the effect.
For my test, the cursor returned by the procedure produced 5400 rows of 120 columns (but the row count is what matters most).
To give an order of magnitude, I measured fetching times, i.e. from the stored procedure returning to the statement returning, with the result list filled with data fetched from the cursor. I log the instantiation of the first mapped object, which occurs near the beginning of the overall fetch, probably right after the first batch:
public static boolean firstInstance = true;

public Item() {
    if (firstInstance) {
        LOGGER.debug("Item first instance");
        firstInstance = false;
    }
}
And I log again at the very end, after session.selectList returns.
This is for test purposes only. Do not leave that in your code; find a cleaner way to do it.
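For illustration, the second log point can be as simple as this sketch (assuming the mapping from the question, where MyBatis writes the OUT cursor rows back into the parameter map under the "result" key, and an SLF4J-style logger):
session.selectList("EventDao.getEvents", eventInputMap);
// With the CALLABLE statement, the mapped cursor rows end up in the parameter map.
@SuppressWarnings("unchecked")
List<Event> events = (List<Event>) eventInputMap.get("result");
LOGGER.debug("selectList returned, {} events fetched", events.size());
// Fetch time ~= this log's timestamp minus the "Item first instance" timestamp.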
Here are some timings depending on the configured fetch size:
- fetchSize=1 => 13000 ms
- fetchSize=10 => 5300 ms
- fetchSize=100 => 3800 ms
- fetchSize=300 => 3700 ms
- fetchSize=500 => 3650 ms
- fetchSize=1000 => 3600 ms
The Oracle JDBC driver's default fetchSize is 10.
Testing with fetchSize=1 proves the supplied setting is actually used.
With 100, about 30% is saved here. Beyond that, the gain is negligible (for this use case and environment).
Anyway, it would be interesting to know exactly when procedure execution finishes and when the result fetch starts.
Unfortunately, MyBatis logs very little. I thought a custom result handler could help, but looking at the source code of org.apache.ibatis.executor.resultset.DefaultResultSetHandler, I notice that, unlike handleResultSet (used for simple select statements), which allows using a custom result handler, handleRefCursorOutputParameter (used here for the procedure's OUT cursor) does not. So there is no point in trying to pass a custom result handler: it will be ignored.
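For an ordinary select you can normally stream rows through a handler like this sketch (the statement id is hypothetical); this is exactly what is not available for the OUT cursor here:
// Standard MyBatis API for plain select statements: each mapped row is pushed
// to the handler, so you could timestamp the first and last rows yourself.
session.select("EventDao.someSelect", params, new ResultHandler<Event>() {
    @Override
    public void handleResult(ResultContext<? extends Event> context) {
        Event event = context.getResultObject();
        // log timestamps, inspect context.getResultCount(), etc.
    }
});
// For the CALLABLE statement above, handleRefCursorOutputParameter maps the
// cursor rows itself and ignores any ResultHandler passed to the session.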
I would be interested in a solution if anyone has one, but it seems an enhancement request will be required.

Related

Receiving responses of different formats for the same query in Vespa

I have this schema:
schema embeddings {
    document embeddings {
        field id type int {}
        field text_embedding type tensor<double>(d0[960]) {
            indexing: attribute | index
            attribute {
                distance-metric: euclidean
            }
        }
    }
    rank-profile closeness {
        num-threads-per-search: 1
        inputs {
            query(query_embedding) tensor<double>(d0[960])
        }
        first-phase {
            expression: closeness(field, text_embedding)
        }
    }
}
And these services:
...
<container id="query" version="1.0">
    <search/>
    <nodes>
        <node hostalias="query" />
    </nodes>
</container>
<content id='mind' version='1.0'>
    <redundancy>1</redundancy>
    <documents>
        <document type='embeddings' mode="index"/>
    </documents>
    <nodes>
        <node hostalias="content1" distribution-key="0"/>
    </nodes>
</content>
...
Then I have a number of queries of the same format:
{
    'yql': 'select * from embeddings where ({approximate:false, targetHits:100} nearestNeighbor(text_embedding, query_embedding));',
    'timeout': 5,
    'hits': 100,
    'input': {
        'query(query_embedding)': [...],
    },
    'ranking': {
        'profile': 'closeness',
    },
}
which are then run via app.query_batch(test_queries)
The problem is that some responses look like this (and contain the id field with the integer I inserted):
{'id': 'id:embeddings:embeddings::786559', 'relevance': 0.5703559830732123, 'source': 'mind', 'fields': {'sddocname': 'embeddings', 'documentid': 'id:embeddings:embeddings::786559'}}
and others look like this (neither containing the int id I inserted, nor keeping the format of the previous example):
{'id': 'index:mind/0/b0dde169c545ce11e8fd1a17', 'relevance': 0.49024561522459087, 'source': 'mind'}
How can I make all responses look like the first one? Why are they different at all?
Some of them are filled with content and some are not, I suppose because it timed out. Check the coverage info, and run with traceLevel=3 to see more details.
Some more background info on what's going on:
Searches are executed in two phases: first, minimal information on the top hits is returned from each content node up to the issuing container. These partial lists are then merged to produce the final, hits-long list of matches. For those we execute phase two, which is to fill in the content of the final hits. This involves another request to each of the content nodes to get the relevant content.
If there is little time left, lots of data, expensive summary features to compute, a slow disk subsystem or network, or a node in some kind of trouble, this fill phase may time out, leaving only some hits filled, which is what you are seeing.
Why are the ids not the true document ids in these cases? The string id is stored in the document blob on disk, but not in memory as an attribute, so it also needs to be fetched in the fill phase. If it is not, an internally generated unique id is used instead.

Parameter order for webSdk procedures

I need to create some reports pertaining to orders and price quotes using the Web SDK.
I tried following the documentation, but even the example given in the developer portal is missing a lot of crucial information.
Specifically, I need to work with these two procedures:
ESH_WWWSHOWORDER3
ESH_WWWSHOWCPROF2
I tried writing some logic around the sample found at https://prioritysoftware.github.io/api/procedure/#Introduction
const procedure = priority.procStart(PROC_NAME, "R", () => {}, customerName, function (procSuccess) {
    logger.info('Proc start OK, received documentOptions');
    logger.info(JSON.stringify(procSuccess));
    logger.info('Specifying format...');
    resolve(new Promise((resolve, reject) => {
        procSuccess.proc.documentOptions(1, 1, 2, procSuccess => {
            logger.info('Received inputFields');
            logger.info(JSON.stringify(procSuccess));
            procSuccess.proc.inputFields(1, {ORDNUM: ordernum}, procSuccess => {
                logger.info('Received url');
                logger.info(JSON.stringify(procSuccess));
            });
        }, procError => {
            reject(procError)
        })
    }));
}, function (procError) {
    logger.error('Proc start error');
    logger.error(procError);
    reject(procError);
});
}).catch(err => {
    logger.error(err);
})
Where PROC_NAME is WWWSHOWORDER
I am trying to understand what the process is asking of me, but it's not really clear. I tried supplying some values in the order specified by the documentation but I get errors that are not very descriptive to someone who doesn't know the ins and outs of Priority.
The logs look something like this:
Starting Procedure WWWSHOWORDER
2019-11-26T11:55:31.718Z info: Proc start OK, received documentOptions
2019-11-26T11:55:31.719Z info: {"proc":{"name":"WWWSHOWORDER"},"type":"message","message":"No such Tabula entity"}
2019-11-26T11:55:31.721Z info: Specifying format...
2019-11-26T11:55:32.124Z info: Received inputFields
2019-11-26T11:55:32.125Z info: {"proc":{"name":"WWWSHOWORDER"},"type":"message","message":"Failure to p...
Unfortunately my logs are truncated for some strange reason...
EDIT:
I switched to working with ESH_WWWSHOWORDER3. I am getting an inputFields parameter, and I assume what I need to do is take the inputFields.input.EditFields object and populate the value field with the name of the order, I think; unfortunately, this results in a 500 error from the server...
{"proc":{"title":"(אישור הזמנה עם מפרט ללקוח (פריו","name":"ESH_WWWSHOWORDER3"},"type":"inputFields","input":{"EditFields":[{"field":1,"helpstring":"","ispassword":0,"mandatory":0,"operator":0,"readonly":0,"title":"Order","type":"text","code":"Str","value":"*","value1":"","maxlength":56,"zoom":"Zoom","format":""}],"Operators":[{"name":"= ","op":0,"title":"equals"},{"name":"< ","op":1,"title":"less than"},{"name":"<=","op":2,"title":"less than or equal to"},{"name":"> ","op":3,"title":"greater than"},{"name":">=","op":4,"title":"greater than or equal to"},{"name":"<>","op":5,"title":"not equal to"},{"name":"- ","op":6,"title":"between"},{"name":"! ","op":7,"title":"ולא"},{"name":"| ","op":8,"title":"או"}],"text":"","title":"Parameter Input"}}
2019-12-02T09:07:10.129Z info: Specifying input...
2019-12-02T09:07:10.550Z error: ###Can't connect to server. HTTP Response: 500, Internal Server Error
details: <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"><s:Header><o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"><u:Timestamp u:Id="_0"><u:Created>2019-12-02T09:07:07.700Z</u:Created><u:Expires>2019-12-02T09:12:07.700Z</u:Expires></u:Timestamp></o:Security></s:Header><s:Body><s:Fault><faultcode xmlns:a="http://schemas.microsoft.com/net/2005/12/windowscommunicationfoundation/dispatcher">a:InternalServiceFault</faultcode><faultstring xml:lang="he-IL">Object reference not set to an instance
of an object.</faultstring><detail><ExceptionDetail xmlns="http://schemas.datacontract.org/2004/07/System.ServiceModel" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"><HelpLink i:nil="true"/><InnerException i:nil="true"/><Message>Object reference not set to an instance of an object.</Message><StackTrace> at WCFService.ProcEditFieldsOKMob(String session, Boolean save, Byte[] xml)
at SyncInvokeProcEditFieldsOKMob(Object , Object[] , Object[] )
at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage11(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)</StackTrace><Type>System.NullReferenceException</Type></ExceptionDetail></detail></s:Fault></s:Body></s:Envelope>
Again, the documentation does not seem to be very detailed on this topic; the EditFields object itself is not very well described...
I am also not an experienced Priority user, so I am not sure where I need to navigate to check out these procedures, or what else to look at.
It seems that you are passing "R" as the second parameter, which means that you are calling for a report. Try passing "P" instead.
Also, what is customerName? According to the Web SDK, I believe this should be dname: "Internal name of the company in which the procedure is run."
You can tell that WWWSHOWORDER is a procedure and not a report by going to System Management -> Generators -> Procedures -> Procedure Generator and querying for it. If you find it in the Procedure Generator, it's a procedure (in this case, a procedure which generates a report), and you should call it with the "P" parameter.
If you find the entity in System Management -> Generators -> Reports -> Report Generator, then it's a plain report and you should call it with the "R" parameter.

How to "pass" OLDROW and NEWROW to a stored procedure in a DB2 trigger?

I have a bunch of triggers on tables in DB2 for the iSeries. They write directly to other tables, but I've been asked to change them so that the triggers instead call something else to do the writing.
This something else part is a little vague I know. I would imagine I can't call a trigger from another trigger (correct me if I am wrong), so I imagine it has to be a stored procedure. Is there such a thing as something like a "program" in DB2 that I could call instead?
Assuming it is a stored procedure, how do I pass in the OLDROW and NEWROW to be handled in the SP?
For example, the trigger could look something like this (stripped down of course):
CREATE TRIGGER MYSCHEMA.WHATEVERTABLE_AFTER_INSERT_TRIGGER
AFTER INSERT ON MYSCHEMA.MYTABLE
REFERENCING NEW AS NEWROW
FOR EACH ROW
MODE DB2ROW
SET OPTION ALWBLK = *ALLREAD,
    ALWCPYDTA = *OPTIMIZE,
    COMMIT = *CS,
    DECRESULT = (31, 31, 00),
    DFTRDBCOL = *NONE,
    DYNDFTCOL = *NO,
    DYNUSRPRF = *USER,
    SRTSEQ = *HEX
WHEN (1 = 1)
BEGIN ATOMIC
    INSERT INTO NEWSCHEMA.NEWTABLE
    (
        AAA,
        BBB,
        CCC
    )
    VALUES
    (
        NEWROW.AAA,
        NEWROW.BBB,
        NEWROW.CCC
    );
END;
The triggers just need to replicate a table most of the time, so I need access to OLDROW sometimes, NEWROW other times, and sometimes both. However, I believe these only make sense in the context of the trigger, so how do I pass this information to the SP?
On DB2 for IBM i...
Every stored procedure IS a program and vice versa. One of the many benefits of having an OS that is a DB.
I'm pretty sure you can't call a stored procedure from an SQL trigger and pass the entire NEWROW implicitly. You could try
CALL MYSP (NEWROW.*);
You might be able to use dynamic SQL to query the metadata and build a dynamic call.
But even if one of those works, you probably shouldn't do it, because now you've got a dependency between the called SP and the structure of the table (like you'd have with old-fashioned RPG). One of the reasons to use SQL is to avoid that dependency.
Now, if you were using RPG or COBOL to define your trigger, then it's pretty easy to pass around the pointer to the trigger buffer the DB provides to the program. But RPG and COBOL are lower-level languages compared to SQL.
What's the reason for asking for the "something else" to do the writing?

PXDatabase should accept PXDbType.Udt in Acumatica ERP

How can I call a stored procedure in Acumatica via PXDatabase that has a user-defined type as an input parameter?
For example, I have the following type:
CREATE TYPE [dbo].[string_list_tblType] AS TABLE(
    [RefNbr] [nvarchar](10) NOT NULL,
    PRIMARY KEY CLUSTERED
    (
        [RefNbr] ASC
    ) WITH (IGNORE_DUP_KEY = OFF)
)
GO
I have the following stored procedure:
CREATE PROCEDURE [dbo].[GetListOfAPInvoices]
    @APInvoices as string_list_tblType readonly
AS
BEGIN
    select * from APInvoice a where a.RefNbr in (select RefNbr from @APInvoices)
END
and the following fragment of C# code:
var par = new SqlParameter("APInvoices", dt);
par.SqlDbType = SqlDbType.Structured;
par.TypeName = "dbo.string_list_tblType";
par.UdtTypeName = "dbo.string_list_tblType";
par.ParameterName = "APInvoices";
PXSPParameter p1 = new PXSPInParameter("@APInvoices", PXDbType.Udt, par);
var pars = new List<PXSPParameter> { p1};
var results = PXDatabase.Execute(sqlCommand, pars.ToArray());
but when I execute my C# code, I receive the error message:
UdtTypeName property must be set for UDT parameters
When I debugged the class PXSqlDatabaseProvider with a reflector, in the method
public override object[] Execute(string procedureName, params PXSPParameter[] pars)
I noticed that at the point of
using (new PXLongOperation.PXUntouchedScope(Thread.CurrentThread))
{
    command.ExecuteNonQuery();
}
command.Parameters.Items contains my parameters, but the item related to the UDT type is null. I need to know how to pass a user-defined table type. Has anybody tried this approach?
Unfortunately UDT parameters are not supported in Acumatica's PXDatabase.Execute(..) method and there is no way to pass one to a stored procedure using the built-in functionality of the platform.
Besides, when writing data-retrieval procedures like the one in your example, you should keep in mind that the BQL-based data-retrieval facilities do a lot of work to match company masks, filter out records marked as DeletedDatabaseRecord, and apply some other internal logic. If you choose to fetch data with a plain select wrapped in a stored procedure, you bypass all of this functionality. This is hardly something you want to achieve.
If you absolutely want to use a stored procedure to get some records from the database but don't want the above side effect, one option is to create an auxiliary table in the DB and select records into it using the procedure. Then, in the application, you add a DAC mapped to this new table and use it to get data from the table by means of PXSelect or a similar mechanism.
Coming back to your particular example of fetching some APInvoices by the list of their numbers, you could try using dynamic BQL composition to achieve something like this with the Acumatica data access facilities.

Datasnap & FMX Mobile App: How to send a dataset containing a blob field

I have a multi-tier project in which I collect data from a Microsoft SQL 2005 database through a TFDStoredProc inside a server function, and the function returns a dataset to the client. When the server assigns the dataset to the result of the function and tries to send it to the client, I get this error: Project etctec.exe raised exception class TDBXError with message 'TDBXTypes.BLOB value type cannot be accessed as TDBXTypes.Bytes value type'.
In another project I used a stored procedure from a different database with TFDStoredProc in exactly the same way and it works fine. Any idea what would raise this error?
This is what I do on the server:
function TServerMethods1.getCategories(): TDataSet;
begin
  FDStoredProc1.ParamByName('#val1').AsInteger := 1;
  FDStoredProc1.ParamByName('#val2').AsInteger := 0;
  FDStoredProc1.ParamByName('#val3').AsInteger := 1;
  FDStoredProc1.ParamByName('#val4').AsInteger := 1;
  FDStoredProc1.Open();
  Result := FDStoredProc1;
end;
and the client calls it like this...
dataset:=ClientModule1.ServerMethods1Client.getCategories();
The problem comes from some fields that are of type NVARCHAR(max). Does anyone know a workaround for this error without changing the field type?
I tried changing the dataset's field type to a string or something similar, with no success. The only thing I can do temporarily is get these fields separately, put them in a string list or something like that, and pass that to the client.
I think you should use one of the similar methods below:
http://docwiki.embarcadero.com/Libraries/XE4/en/Data.DB.TDataSet.CreateBlobStream
http://docwiki.embarcadero.com/Libraries/XE4/en/Data.DB.TDataSet.GetBlobFieldData
You should define your field as a blob field and then put the data in/out with the functions described in the links.
Here is also an example of how to copy data:
http://docwiki.embarcadero.com/Libraries/XE4/en/Data.DB.TDataSet.CreateBlobStream
