When TADOQuery connects to an Oracle database using the Oracle OLE DB provider, searches on Unicode strings in NVARCHAR2 fields fail
In the Oracle (11g) database, there is a table like
create table unicodetest
(Code Number, Name NVarchar2(100))
I have inserted data into it (through SQL Developer):
insert into unicodetest
values (1, N'ユニコード')
Now, from a Delphi (XE2) application, I use TADOQuery to connect to the Oracle database using the ORAOLEDB provider.
If I search for the Unicode string using the following query:
select * from unicodetest
where
NAME like N'%ユニコード%'
It returns 0 records.
I also tried without the N prefix, with the same result.
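One variant worth trying from the Delphi side is to bind the search value as a wide-string parameter instead of embedding it as a literal, so ADO sends it to the provider explicitly as UTF-16. A minimal sketch (the component name ADOQuery1 and the parameter name are assumptions, not from the original question):

// Sketch: bind the pattern as a wide-string parameter so it is
// passed as adVarWChar (UTF-16) rather than as an inline literal.
ADOQuery1.SQL.Text :=
  'select * from unicodetest where NAME like :pName';
ADOQuery1.Parameters.ParamByName('pName').DataType := ftWideString;
ADOQuery1.Parameters.ParamByName('pName').Value := '%ユニコード%';
ADOQuery1.Open;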
Related
I am using unidac components in a Delphi 7 project to connect to a SQLite database.
Connecting and querying work fine, except for calculated fields.
My query is this:
select c.CardID,
c.FirstName,
c.SurName,
c.Street,
c.City,
c.PostCode,
c.Points,
(select count(1) from FullCard f where f.CardID = c.CardID and f.Paid = 1) as PaidCards,
(select count(1) from FullCard f where f.CardID = c.CardID and f.Paid = 0) as OpenCards,
(select count(1) from FullCard f where f.CardID = c.CardID) as FullCards
from Card c
This query returns a correct result set when I run it in SQLiteStudio, but when I run it in Delphi the calculated fields are all empty.
The unidac version is 5.0.1 for Delphi 7.
I have a UniConnection component and a UniQuery component. The connection properties seem correct since I can connect and query from the database.
The UniQuery component has the SQL property filled with the above query, and all fields are made persistent.
When I do UniQuery1.Open, a DBGrid fills up with all records, but the fields PaidCards, OpenCards and FullCards are all empty.
The same query returns these fields properly filled when executed in SQLiteStudio, so I guess there is nothing wrong with the query itself.
I am hoping someone else has encountered the same problem and can point me to a solution.
The workaround for this bug is to not use persistent fields.
When I don't create persistent fields, all fields are filled properly and everything works perfectly.
The only downside is that I have to use query1.FieldByName('FirstName').AsString instead of query1FirstName.AsString in my code, as in the sketch below.
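A minimal sketch of that run-time lookup (component and variable names assumed):

// Sketch: with no persistent fields, the calculated columns exist
// at run time and can be read by name after the query opens.
UniQuery1.Open;
PaidCards := UniQuery1.FieldByName('PaidCards').AsInteger;
OpenCards := UniQuery1.FieldByName('OpenCards').AsInteger;
FullCards := UniQuery1.FieldByName('FullCards').AsInteger;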
At my co-op, my manager is asking me to take the SAS output tables I've gathered and then execute a stored procedure that will upload and update any data that has changed in an online KPI (Key Performance Indicator) Excel sheet.
Apparently even my boss, who has been programming for quite some time, isn't sure how to do this.
In layman's terms, this is what I need to do:
Create a table of gathered KPIs (done)
Send the table to the stored procedure (I don't want to use PROC SQL in SAS 9.3 because I would be hard-coding too many fields)
Have the stored procedure read into the online datasheet (done)
Replace KPIs if they have changed (done)
Here is the PROC SQL that I have worked out (names have been changed to preserve anonymity):
%let id = 'HorseRaddish';
%let pwd = 'ABC321';
proc sql;
connect to odbc (dsn='JerrySeinfeld' uid=&id pwd=&pwd);
execute (spKPIInsertUpdateKPIData '411', '7.2', '8808', 'M', 'NANANA', 'WorkStation', 'Testing1212', '1', '8/3/2013 10:42AM') by odbc;
disconnect from odbc;
quit;
run;
The above code works fine, but like I said, it's a pain to hard-code KPI values for hundreds of fields.
If it were me, and I had flexibility to do so, I would rewrite the SP to pull the parameters from a table and upload the table, then call the SP. That's got to be faster.
If it's not, you can script that SP line fairly easily. You will still run it in PROC SQL, but you don't have to write it out by hand.
Something like:
proc sql;
select cats("execute(spKPIInsertUdateKPIData '",var1,"''",var2,"','",var3,<... more ...>,"') by odbc") into :execlist separated by ';';
quit;
That creates a macro variable &execlist that contains the calls to the SP. Then you just do:
proc sql;
connect to odbc ... ;
&execlist.
disconnect from odbc;
quit;
That does have some length limits; you might have to do it a bit differently (either cut it up or use %include) if you are over ~20k characters.
But again, this is probably not a very good way to do this; better is to load the data to a table and have the SP update from that table. Something like:
libname sqldb oledb init_string=whatever;
proc sql;
drop table sqldb._tempSP_KPI;
create table sqldb._tempSP_KPI as select * from <dataset containing values>;
connect to oledb (init_string=whatever);
<exec SP that uses the _tempSP_KPI table>
quit;
Using SQL Server Integration Services (SSIS) I am migrating a table from Oracle (11g) to SQL Server 2008 R2. The table's fields are int, string (data type Unicode string [DT_WSTR]), and a BLOB converted to the image data type.
SQL Server's collation is "Latin1_General_100_CI_AS".
The workflow is pretty straightforward:
1) An ADO NET source gathers the data from Oracle.
2) A script component maps the input columns to the output columns, with some data conversion.
3) A SQL Server destination stores the records in the target database.
During the data migration (just 20,000 records in total), some string fields end up stored as garbled Asian characters, while others with the same value are migrated properly.
As an example:
ID CODE USRNAME DOCNAME
---------------------------------------------------------
120 B-0000001 OAS2491 Help.pdf
121 D-0000465 Charlie Doc1.pdf
122 D-0000465 Charlie Doc2.pdf
123 殹榁鴀ځᡑ䇜쿫 Ɫ灿풑뾧껳쮏⽏� Doc3.pdf
124 D-0000465 Alpha Doc2.pdf
At first I suspected special characters in the source table, but I checked the affected records and they are exactly the same as the other rows that were migrated properly.
The row with ID 123 has the same values as row 122, which is displayed fine.
On Oracle: CODE is VARCHAR2(15 BYTE), USRNAME is VARCHAR2(36 BYTE).
On SQL Server: CODE is nvarchar(15), USRNAME is nvarchar(36).
Why are some rows migrated with wrong characters while others are not, even when the content is the same?
I was able to solve the issue by replacing the "SQL Server Destination" component with an "OLE DB Destination" component.
With the latter, all rows are imported properly, without any wrong characters.
I have a .NET app that retrieves a SYS_REFCURSOR output from an Oracle 9i stored proc. I would like to take that cursor and pass it into another stored proc to get a different one in return.
Loose pseudocode:
CREATE OR REPLACE PROCEDURE get_Addresses(
  userList IN SYS_REFCURSOR,
  addressList OUT SYS_REFCURSOR)
IS
BEGIN
  OPEN addressList FOR
    SELECT * FROM Addresses A
    WHERE A.UserID IN (SELECT UserID FROM userList);
END;
This way I can pass a list (dataset) of user info to the stored proc and get back a list of addresses that match the user list passed in.
I'm not much of an Oracle developer, but I was hoping there is a way to do this rather than looping through the dataset in .NET and opening/closing an Oracle connection for each row.
I'm using Zeos 7 and Delphi 2009, and I want to check whether a value is already in the database under a specific field before I post the data to the database.
Example: Field Keyword
Values of Cheese, Mouse, Trap
tblkeywordKEYWORD.Value = Cheese
What is wrong with the following? And is there a better way?
zQueryKeyword.SQL.Add('IF NOT EXISTS(Select KEYWORD from KEYWORDLIST ='''+
tblkeywordKEYWORD.Value+''')INSERT into KEYWORDLIST(KEYWORD) VALUES ('''+
tblkeywordKEYWORD.Value+'''))');
zQueryKeyword.ExecSql;
I tried using the unique constraint in IBExpert, but it gives the following error:
Invalid insert or update value(s): object columns are
constrained - no 2 table rows can have duplicate column values.
attempt to store duplicate value (visible to active transactions) in unique index "UNQ1_KEYWORDLIST".
Consider using the UPDATE OR INSERT or MERGE statements:
update or insert into KEYWORDLIST (KEYWORD) values(:KEYWORD) matching(KEYWORD)
For details check the following documents in your Firebird installation folder:
doc\sql.extensions\README.update_or_insert.txt
doc\sql.extensions\README.merge.txt
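From Delphi, that could look like the following sketch (component names taken from the question; binding the value as a parameter also avoids concatenating it into the SQL string):

// Sketch: parameterized UPDATE OR INSERT. A new row is inserted only
// when no row with the same KEYWORD value exists; an existing row is
// updated in place, which here is effectively a no-op.
zQueryKeyword.SQL.Text :=
  'update or insert into KEYWORDLIST (KEYWORD) ' +
  'values (:KEYWORD) matching (KEYWORD)';
zQueryKeyword.ParamByName('KEYWORD').AsString := tblkeywordKEYWORD.Value;
zQueryKeyword.ExecSQL;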