We have an old application that reads in SQL text files and sends them to Sybase ASE 12.5.1. Our legacy app was written in Delphi 5 and uses the BDE TQuery component for this process, accessing Sybase through the BDE SQL Links for Sybase.
Pseudo code:
SQLText=readSQLFile;
aTQuery.SQL.add(SQLText);
aTQuery.ExecSQL;
Recently we moved our DB access layer to the Delphi XE ADO implementation - TADOQuery, using the Sybase-supplied ADO provider, still following the same model:
SQLText=readSQLFile;
aTADOQuery.SQL.add(SQLText);
aTADOQuery.ExecSQL;
After migrating to ADO, we discovered that certain data was missing. We traced the failure to this SQL construct:
Select myColumn from myTable
Where tranID = null
Knowing that this construct is semantically questionable at best, I did a 'double take' when I saw this code - but Sybase 12.5 accepts it. Under ADO, however, this segment fails.
We decided to change:
Where tranID = null
to
Where tranID is null
Then the missing data was loaded - problem solved, for this segment and several others as well.
Does anyone have an explanation for this behavior? Where/why did ADO apparently intercept and reject this segment, whereas the BDE passed it through?
TIA
"NULL" has a very special meaning, and SQL needs a special handling. You can't compare a value to "NULL", that's why there is the special operator IS (NOT) NULL to check for it. An exhaustive explanation would take some space, here a simple explanation.
From a "mathematical" point of view, NULL can be thought as "infinity". You can't compare two infinite values easily, for example think about the set of integer numbers and even numbers. Both are infinite, but it seems logical the former is larger than the latter. All you can say IS both sets are infinite.
That's also helps to explain for example why 1 + NULL returns NULL and so on (usually only aggregate functions like SUM() ecc. may ignore NULL values - ignore, not turn NULLs into zeroes).
This metaphor may not hold true in sorting, because some databases choose to consider NULL less than any value (some kind of -infinity and thereby returning them first in ascending order), other the other way round. Some have an option to set where to return NULLs.
Check your database documentation about NULL arithmetics and NULL comparisons. field = NULL should have never been used, don't know if Sybase accepts it, but most SQL implementations don't, and I guess it is not SQL standards compliant. It is far better to get used to the IS (NOT) NULL syntax.
Update: Sybase has a "set ansinull" option. According to the documentation (always RTFM!), in 12.5.1 it was expanded to disallow the '= NULL' syntax when set to ON, to make it compliant with the SQL standard. When set to OFF, '= NULL' works like 'IS NULL'.
Maybe the SQL Links driver and the ADO provider set it to different values.
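You can test this yourself from each connection, since ansinull is a session-level setting. A minimal sketch based on the documented behaviour of the option (table and column names taken from the question above):

set ansinull off
select myColumn from myTable where tranID = null   /* behaves like IS NULL: rows with a NULL tranID come back */

set ansinull on
select myColumn from myTable where tranID = null   /* ANSI semantics: the comparison is never true (12.5.1 may reject the syntax outright) */
select myColumn from myTable where tranID is null  /* works regardless of the ansinull setting */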
Related
I have a dynamic stored procedure, which I'm using to run multiple select queries. I have defined it as shown below.
CREATE PROCEDURE DYNAMIC
(IN IN_COLUMN1 VARCHAR(150),
IN IN_COLUMN2 VARCHAR(500),
IN IN_COLUMN3 VARCHAR(500),
IN IN_COLUMN4 CHAR(08),
IN IN_COLUMN5 DATE,
IN IN_COLUMN6 CHAR(05),
OUT OUT_COLUMN1 CHAR(01),
OUT OUT_COLUMN2 DEC(4,0),
OUT OUT_COLUMN3 DEC(4,0),
OUT OUT_COLUMN4 CHAR(04),
OUT OUT_COLUMN5 DATE
The problem here: when I run Query1, input is passed from the COBOL DB2 program in IN_COLUMN1, IN_COLUMN2 and IN_COLUMN3, and the output is fetched into OUT_COLUMN1. I initialize all the input parameters in the program, but because the other output parameters (OUT_COLUMN2, OUT_COLUMN3, OUT_COLUMN4 and OUT_COLUMN5) are left null, I'm getting SQLCODE -305.
To fix this I tried to give the output parameters a default, like below, but got an error while deploying.
OUT OUT_COLUMN2 DEC(4,0) DEFAULT NULL,
Is there any way to handle this? I'm using COBOL to call the stored procedure, which runs in DB2.
Assuming IBM Enterprise COBOL,
To handle null values in COBOL host variables in SQL you need to assign a "null indicator" for each nullable variable.
See: Using host variables in SQL Statements (with examples in COBOL).
The null indicator will typically be negative if the result variable is null.
By default, all parameters to a DB2 for z/OS Native SQL Stored Procedure are nullable. (Db2 for z/OS 12.0.0 - Creating Native SQL Procedures)
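A minimal sketch of the idea, assuming the CALL is done with embedded SQL (host-variable and indicator names here are made up, and only one pair of declarations is shown - the other parameters follow the same pattern): declare a signed halfword indicator next to each nullable output host variable, pass it alongside the host variable in the CALL, and test it afterwards.

01  OUT-COLUMN2        PIC S9(4) COMP-3.
01  OUT-COLUMN2-IND    PIC S9(4) COMP.

    EXEC SQL
        CALL DYNAMIC (:IN-COLUMN1, :IN-COLUMN2, :IN-COLUMN3,
                      :IN-COLUMN4, :IN-COLUMN5, :IN-COLUMN6,
                      :OUT-COLUMN1 :OUT-COLUMN1-IND,
                      :OUT-COLUMN2 :OUT-COLUMN2-IND,
                      :OUT-COLUMN3 :OUT-COLUMN3-IND,
                      :OUT-COLUMN4 :OUT-COLUMN4-IND,
                      :OUT-COLUMN5 :OUT-COLUMN5-IND)
    END-EXEC

    IF OUT-COLUMN2-IND < 0
        MOVE ZERO TO OUT-COLUMN2
    END-IF

With the indicators in place the -305 should go away, because DB2 then has somewhere to record that a given output parameter is null instead of failing the assignment.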
For other ways of solving what I perceive to be the task at hand:
Whilst I think DB2 user-defined functions, at least, may support function overloading, stored procedures do not. That could have been an alternative.
Otherwise might I suggest returning a dynamic result set from your stored procedure? Then it would be up to the caller of the stored procedure to handle different configurations of the result sets, which is totally doable in COBOL.
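For example (a purely illustrative sketch, with made-up names and a trivial body), a native SQL procedure can declare a cursor WITH RETURN for whichever query applies and simply open it, instead of forcing every query through one fixed set of OUT parameters:

CREATE PROCEDURE DYNAMIC_RS
    (IN IN_COLUMN1 VARCHAR(150),
     IN IN_COLUMN2 VARCHAR(500),
     IN IN_COLUMN3 VARCHAR(500))
    DYNAMIC RESULT SETS 1
    LANGUAGE SQL
BEGIN
    DECLARE C1 CURSOR WITH RETURN TO CALLER FOR
        SELECT COLUMN1, COLUMN2, COLUMN3
        FROM   MY_TABLE
        WHERE  COLUMN1 = IN_COLUMN1;
    OPEN C1;
END

The COBOL caller then picks the rows up with ASSOCIATE LOCATOR, ALLOCATE CURSOR and FETCH, and can handle a different row layout per query.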
Regarding UDF Overloading:
"A schema can contain several functions that have the same name but each of which have a different number of parameters or parameters with different data types. Also, a function with the same name, number of parameters, and types of parameters can exist in multiple schemas."
https://www.ibm.com/support/knowledgecenter/SSEPEK_11.0.0/sqlref/src/tpc/db2z_functionresolution.html
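Purely as an illustrative sketch of that (schema, function name and bodies are made up), two SQL scalar functions can share a name as long as their signatures differ, and DB2 picks one at function resolution time based on the argument types:

CREATE FUNCTION MYSCHEMA.DESCRIBE_VALUE (P_VAL INTEGER)
    RETURNS VARCHAR(30)
    LANGUAGE SQL
    RETURN 'INTEGER: ' CONCAT CHAR(P_VAL);

CREATE FUNCTION MYSCHEMA.DESCRIBE_VALUE (P_VAL VARCHAR(30))
    RETURNS VARCHAR(40)
    LANGUAGE SQL
    RETURN 'VARCHAR: ' CONCAT P_VAL;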
Hope it was of some help.
I have a really long insert query with more than 40 fields (from an 'inherited' FoxPro database), processed using OleDb, that produces the exception 'Data type mismatch.' Is there any way to know which field of the query is producing this exception?
For now I'm using the brute-force method of reducing the number of fields in the insert until I locate the offending one, but I imagine there must be a more direct way to find it...
There isn't really any shortcut beyond taking a guess at which 20 might be the problem, chopping out the other 20 and testing, and repeating that reductive process until you hit it.
Or, alternatively, looking at the table structure(s) in the DBF and making sure the field types match the OleDb types you're using. The details of how .NET types are mapped to Visual FoxPro table field types are described here.
If you have access to the Visual FoxPro IDE you could probably do that a lot quicker by knocking up a little program or even just doing it in the Command Window.
You are not telling us which language you use, so we can't really give a sample to handle it.
Basically what you would do is:
Get the structure,
Parse the insert statement and get values,
Compare data types.
It should only take a short piece of code to make this check.
I'm just starting to learn Neo4j and I've stumbled across an issue.
It looks like Neo4j uses strong typing without on-the-fly type conversion, i.e. RETURN '17' = 17 results in false and RETURN '10' > 5 results in a syntax error.
It looks very strange to me for a NoSQL, schema-less database to implement such strict behavior. Even strongly typed, schema-based databases such as MySQL and PostgreSQL allow type conversions in statements. Is this part of the ideology behind Neo4j? If so, why?
Github issue.
In Neo4j 2.1, type conversion functions were added, like toInt and toFloat, that take care of the conversion.
In 2.0.x you can use str(17) = str('17') to go in the other direction.
Neo4j itself is less strict about structural information but more strict about values, i.e. the value you put into a property is returned exactly as it was stored, and you have to convert it to a different type yourself. Some of that stems from its Java history and has already been loosened for Cypher.
I'm already applying that and it works: if the integer value from the database is 0, the Java boolean variable becomes false, and vice versa. But I'm wondering if it's possible to have it the other way around, mapping 0 to true and 1 to false.
Which got me thinking: I've previously mapped Java enums to integers, which is pretty well explained in the documentation, and I'm wondering if DataNucleus is flexible enough to map any database type to any Java type when either persisting or loading the value. For example, mapping a database value below a specified threshold to Java boolean false and one exceeding it to true. Or mapping strings to integers (string length, maybe).
Perhaps you could just use standard JDO metadata, define the "jdbc-type" and see what happens. I wouldn't have thought it's feasible to support every possible combination (and indeed many of them would be just plain daft).
I have a problem understanding the co-existence of "null" and Option in F#. In a book I read that null is not a proper value in F#, because that way F# eliminates excessive null checking. But F# still allows null-initialized references. In other words, you can have null values, but you don't have the weapons to defend yourself against them. Why not completely replace nulls with Options? Is it because of compatibility with .NET libraries or languages that null is still there? If so, can you give an example that shows why it can't be replaced by Option?
F# avoids the use of null when possible, but it lives in the .NET ecosystem, so it cannot avoid it completely. In a perfect world there would be no null values, but you just sometimes need them.
For example, you may need to call a .NET method with null as an argument, and you may need to check whether the result of a .NET method call was null.
The way F# deals with this is:
Null is used when working with types that come from .NET (when you have a value or argument of a type declared in .NET, you can use null as a value of that type and you can test whether it equals null).
Option is needed when working with F# types, because values of types declared in F# cannot be null (and the compiler prohibits using null as a value of these types).
In F# programming, you'll probably use option when working with .NET types as well (if you have control over how their values are created). You'll just never create a null value, and then use option to have a guarantee that you'll always handle missing values correctly.
This is really the only option. If you wanted to view all types as options implicitly when accessing the .NET API, pretty much every method would look like:
option<Control> GetNextChild(option<Form> form, option<Control> current);
...and programming with an API like this would be quite painful.