CLR Stored Procedure, limit imposed on column name length - oledb

Does anyone know how to work around the 128-character limit imposed on the ‘name’ constructor parameter (the column name) of the class ‘Microsoft.SqlServer.Server.SqlMetaData’? Or know of an alternative method of returning data to the SQLPipeline that doesn’t have a similar restriction?
Background:
A number of years ago we created a .Net (C#) CLR Stored Procedure to replace one that was implemented in vb6 and used the ‘TrueOLEDBProviderLib’ (TOLAP). The driving force behind the change was the switch to 64-bit SQL Server, which meant the vb6 code could no longer run in process (vb6 doesn’t do 64-bit).
Issue:
The core function of our CLR Stored Procedure is to take a list of ‘data point identifiers’, retrieve and process the corresponding data from a number of sources (DCOM components), and then output a table of data to the SQLPipeline. For the table that is returned, we set the column names to the ‘data point identifiers’.
Note that the ‘data point identifiers’ are built from a hierarchy, so they are quite long, with a maximum length of around 256 characters.
The problem we have recently discovered is that, when outputting the results to the SQLPipeline, if ‘data point identifiers’ longer than 128 characters are used, the CLR throws an exception on the length of the ‘name’ (column name). (See ‘.Net Framework Code behaviour’ below.)
But using the same ‘data point identifiers’ with the old vb6 implementation works without error, and the returned table contains column names longer than 128 characters.
Supplementary Question:
I know it is a different technology, but why was no 128-character limit imposed within the SQL Server implementation of ‘TrueOLEDBProviderLib’ (TOLAP)? The question I need to answer is: "if TOLAP can return tables of data that contain column names longer than 128 characters, why can’t the .Net (C#) CLR Stored Procedure?"
Workaround:
The obvious fix would be to truncate the ‘data point identifiers’ down to 128 characters. However, as this is a change in functionality from the original vb6 implementation, I need to explore all the alternatives first.
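If truncation does turn out to be acceptable, here is a minimal sketch of one way to do it while keeping the truncated names unique (the helper name and the hashing scheme are illustrative, not part of our existing code):

// Hypothetical helper: shorten a 'data point identifier' to SQL Server's
// 128-character identifier limit, suffixing a hash of the full identifier
// so that two identifiers sharing the same first 119 characters still
// produce distinct column names.
static string ToColumnName(string identifier)
{
    const int MaxNameLength = 128;
    if (identifier.Length <= MaxNameLength)
        return identifier;

    string suffix = "_" + identifier.GetHashCode().ToString("X8"); // 9 chars
    return identifier.Substring(0, MaxNameLength - suffix.Length) + suffix;
}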
.Net Framework Code behaviour:
Within the internal constructor of ‘SqlMetaData’, the method ‘AssertNameIsValid’ is called, which checks that the length of the ‘name’ parameter does not exceed ‘SmiMetaData.MaxNameLength’ (128 characters); if it does, an exception is thrown.
https://referencesource.microsoft.com/#System.Data/fx/src/data/System/Data/Sql/SqlMetaData.cs
I understand that this limit is set based on the 128-character limit SQL Server has for ‘Column_Length’.
https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-server-info-transact-sql?view=sql-server-ver15
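For reference, the failure is easy to reproduce in isolation; a minimal sketch (the 200-character name is just a stand-in for one of our long ‘data point identifiers’):

using System.Data;
using Microsoft.SqlServer.Server;

public static class Repro
{
    public static void Main()
    {
        // 200 characters stands in for a long 'data point identifier'.
        string name = new string('X', 200);

        // Throws in the constructor: AssertNameIsValid rejects any name
        // longer than SmiMetaData.MaxNameLength (128 characters), before
        // anything is sent to the pipe.
        var column = new SqlMetaData(name, SqlDbType.Float);
    }
}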
Additional Info Update:
The old implementation was a vb6 DLL on the file system, called by a Stored Procedure.
The last version the vb6 implementation ran on was SQL Server 2008 R2 SP2 (32-bit).
The .Net CLR implementation was first run on SQL Server 2012 SP2 (64-bit); the current version is 2014 SP3 (64-bit).
The ‘data point identifier’ column names will all have come from the SP parameters, as nothing like this is hard coded in the vb6 version. All ‘data point identifiers’ are user defined on the deployed system.
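Because the identifiers are user defined and can’t be shortened at the source, the main alternative we are considering is to change the shape of the result set so the identifiers travel as data values (which have no 128-character limit) rather than as column names. A sketch, assuming a simple identifier/value pairing (the method and parameter names are illustrative):

using System.Collections.Generic;
using System.Data;
using Microsoft.SqlServer.Server;

public static class PipeHelpers
{
    // Sketch: send (identifier, value) pairs as rows via the pipe, so the
    // long identifiers are row data rather than column metadata.
    public static void SendPoints(IEnumerable<KeyValuePair<string, double>> points)
    {
        var record = new SqlDataRecord(
            new SqlMetaData("DataPointIdentifier", SqlDbType.NVarChar, 256),
            new SqlMetaData("Value", SqlDbType.Float));

        SqlContext.Pipe.SendResultsStart(record);
        foreach (var point in points)
        {
            record.SetString(0, point.Key);
            record.SetDouble(1, point.Value);
            SqlContext.Pipe.SendResultsRow(record);
        }
        SqlContext.Pipe.SendResultsEnd();
    }
}

The obvious downside is that consumers of the result set would have to pivot the rows back into columns, which is why truncation is still on the table.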

Related

DB2 CLOB size in stored procedure

In a DB2 stored procedure, a JSON value greater than 50 MB is assigned to result, but the result that is inserted or returned is only approximately 1 MB. Do I have to define the size of the CLOB?
CREATE OR REPLACE PROCEDURE CALCULATE_CLOB
(
  OUT result CLOB
)
LANGUAGE SQL
SPECIFIC SQL2433453455
BEGIN
  DECLARE result CLOB;
  SET result = ... json
  INSERT INTO DATA
    VALUES(result);
  RETURN result;
END
The default length of a CLOB in Db2 is 1M (1 megabyte) for z/os, luw, and i-series. So CLOB is the same as CLOB(1M).
When you want a larger CLOB, you must specify the size when you declare a variable or when you specify a parameter to a routine. For example, CLOB(50M) or CLOB(1G). The maximum length is 2G.
This is per the Db2 documentation; see the CREATE TABLE statement for your Db2-server platform and version.
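For illustration, a minimal SQL PL sketch of the question’s procedure with the size stated explicitly (50M simply matches the size mentioned in the question; the body is elided since the declaration is the point):

-- Size the parameter (and any local variable) explicitly;
-- an unqualified CLOB defaults to CLOB(1M).
CREATE OR REPLACE PROCEDURE CALCULATE_CLOB
(
  OUT result CLOB(50M)
)
LANGUAGE SQL
BEGIN
  SET result = '';  -- body elided
END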
Always specify your Db2-server platform (z/os, i-series, linux/unix/windows/cloud) and Db2-server version when asking for help, because the answer can depend on these facts.
As you appear to be learning how to code stored procedures, you may benefit from studying the examples IBM provides for SQL PL, which are available in various places: online in the Db2 Knowledge Centre for your version of the Db2 server, on GitHub, and in the samples subdirectory tree of your Db2-LUW server installation (if installed). Spending time on such examples will help you answer simple questions without waiting for answers on Stack Overflow.

SQL Server varchar values retrieved incorrectly when your database and client have different codepages

I have an SQL Server database with plenty of iso_1 (iso8859-1) columns that are retrieved incorrectly on a Windows desktop with the utf-8 codepage (65001), while they are retrieved fine on Windows desktops with the Windows-1252 (iso8859-1) codepage.
Error: [FireDAC][DatS]-32. Variable length column [nom] overflow. Value length - [51], column maximum length - [50]
This is because characters like Ñ are retrieved incorrectly, encoded as a pair of characters.
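To see the mismatch in isolation, here is a minimal standalone .NET sketch (nothing FireDAC-specific is assumed): a value that exactly fills a varchar(50) in iso8859-1 grows to 51 bytes when re-encoded as UTF-8, matching the overflow in the error message.

using System;
using System.Text;

public static class CodepageDemo
{
    public static void Main()
    {
        // 'Ñ' is one byte in iso8859-1 but two bytes in UTF-8.
        string value = "Ñ" + new string('a', 49);   // exactly 50 characters

        Console.WriteLine(Encoding.GetEncoding(28591).GetByteCount(value)); // 50
        Console.WriteLine(Encoding.UTF8.GetByteCount(value));               // 51
    }
}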
SQL Server Management Studio retrieves those columns correctly, so I guess the problem is in configuring the FireDAC connection of my application, but I can't see a charset property anywhere to indicate the codepage of the original data.
How do you indicate the transcoding needed on a FireDAC connection when the codepage differs between the database and the desktop running your application?

OLEDB Transactions

I am trying to use transactions to speed up the insertion of a large number of database entries.
I am using SQL Server Compact 4.0 and the ATL OLEDB API based on C++.
Here is the basic sequence:
sessionobj.StartTransaction();  // begin one transaction for the whole batch
tableobject.Insert(/* row */);
tableobject.Insert(/* row */);
tableobject.Insert(/* row */);
...
sessionobj.Commit();            // commit once after all inserts
NOTE: the tableobject object is a CTable that is initialized by the sessionobj CSession object.
I should be seeing a sizeable performance increase but I am not. Does anyone know what I am doing wrong?

What happens when memory "wraps" on an IA-32 supporting machine?

I'm creating a 64-bit model of IA-32 and am representing memory as a 0-based array of 2**64 bytes (the language I'm modeling this in uses ** as the exponentiation operator). This means that valid indices into the array are from 0 to 2**64-1. Now, to model the possible modes of accessing that memory, one can treat one element as an 8-bit number, two elements as a (little-endian) 16-bit number, etc.
My question is: what should my model do if asked for a 16-bit (or 32-bit, etc.) number at location 2**64-1? Right now, the model says the returned value is Memory(2**64-1) + (2**8 * Memory(0)), i.e. the access wraps around to address 0. I'm not updating any flags (which feels wrong). Is wrapping like this the correct behavior? Should I be setting any flags when the wrapping happens?
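(For concreteness, the read my model performs can be sketched like this, scaled down to an ordinary array; the method and names are illustrative.)

// Little-endian read of byteCount bytes starting at 'address', with each
// address reduced modulo the memory size, so a read at the top of memory
// wraps around to index 0, matching Memory(2**64-1) + 2**8 * Memory(0).
static ulong ReadLittleEndian(byte[] memory, ulong address, int byteCount)
{
    ulong size = (ulong)memory.LongLength;   // 2**64 in the real model
    ulong value = 0;
    for (int i = 0; i < byteCount; i++)
    {
        ulong wrapped = (address + (ulong)i) % size;
        value |= (ulong)memory[wrapped] << (8 * i);
    }
    return value;
}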
I have a copy of Intel-64-ia-32-ISA.pdf which I'm using as a reference, but it's 1,479 pages, and I'm having a hard time finding the answer to this particular question.
The answer is in Volume 3A, section 5.3: "Limit checking."
For ia-32:
When the effective limit is FFFFFFFFH (4 GBytes), these accesses [which extend beyond the end of the segment] may or may not cause the indicated exceptions. Behavior is implementation-specific and may vary from one execution to another.
For 64-bit mode:
In 64-bit mode, the processor does not perform runtime limit checking on code or data segments. However, the processor does check descriptor-table limits.
I tested it (did anyone expect that?) for 64-bit numbers with this code:
mov dword [0], 0xDEADBEEF    ; store at the bottom of the address space
mov dword [-4], 0x01020304   ; store at the very top (last four bytes)
mov rdi, [-4]                ; 64-bit read spanning the wrap point
call writelonghex
In a custom OS, with pages mapped as appropriate, running in VirtualBox. writelonghex just writes rdi to the screen as a 16-digit hexadecimal number. The result was DEADBEEF01020304: the low four bytes came from the top of the address space (0x01020304) and the high four bytes wrapped around to address 0 (0xDEADBEEF).
So yes, it does just wrap. Nothing funny happens.
No flags should be affected (though the manual doesn't say that no flags should be set for address wrapping, it does say that mov reg, [mem] doesn't affect them ever, and that includes this case), and no interrupt/trap/whatever happens (unless of course one or both pages touched are not present).

Is there a limit to the number of parameters in a TStoredProc?

Is there a limit to either the number of params or the overall size of the params in a TStoredProc ExecProc call?
We are currently running a system that still uses the BDE to connect to Oracle, and a recent change to the number of parameters of a package procedure has started producing access violations. The param count is now up to 291, and the AV is raised in the ExecProc call of TStoredProc.
If we remove a single param from the list (any param; it does not have to be a specific one), the ExecProc call works fine.
I have debugged through the code, and the access violation is being thrown within the TStoredProc.BindParams procedure in DBTables.pas. I have several watches set up, one of which is SizeOf(FRecordBuffer); as I step through this procedure, its value is 65535, which is MaxWord (Windows.pas). I don't see any specified limits within the DBTables code.
The call stack is TStoredProc.ExecProc -> TStoredProc.CreateCursor -> TStoredProc.GetCursor -> TStoredProc.BindParams, and the access violation is thrown in the for-loop that iterates through FParams.
Thanks in advance; we need to pinpoint the limit so we can steer clear of it.
I'm not at all versed in Oracle SQL, but since you're maintaining the thing, I would see if I could change the call with all those parameters into a single insert into a new dedicated table (with that many columns plus an autonumber primary key), and change the stored procedure to take this key as input and read the values from that new record to do its job. This may well be quicker than finding out the maximum number of parameters and trying to fix things there. (Though it's a bit of a strange number, as in not a power of 2, it may well be 291...)
