Crystal Reports formula field: IF ISNULL(decimal) THEN 0.00 ... does not work correctly - crystal-reports-xi

I have two values that I am pulling from my database:
{Command.AmountPaid} (of type Decimal(12,2))
{Command.AmountRefunded} (of type Decimal(12,2))
I am trying to create a formula field that will return {Command.AmountPaid} minus {Command.AmountRefunded}. Here is some pseudocode:
numbervar Paid := IF ISNULL({Command.AmountPaid}) THEN 0.00 ELSE {Command.AmountPaid};
numbervar Refund := IF ISNULL({Command.AmountRefunded}) THEN 0.00 ELSE {Command.AmountRefunded};
Paid - Refund;
When null values are pulled, the ISNULL function is not recognizing them as null and is not returning 0.00. What am I doing wrong here?

I know this doesn't solve the exact question you posed, but why not use ISNULL([AmountPaid], 0) AS AmountPaid within the command itself so that you have confidence that the values for those fields will always contain numbers?
(I know ISNULL is what you'd use if you were using SQL Server, I'm sure other DBs have similar functionality.)
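For example, with SQL Server the command's SELECT could look something like this (just a sketch - the Payments table name is only illustrative, use whatever your command already selects from):
SELECT ISNULL(AmountPaid, 0) AS AmountPaid,
       ISNULL(AmountRefunded, 0) AS AmountRefunded
FROM Payments
With the nulls handled in the command, the formula can then simply be {Command.AmountPaid} - {Command.AmountRefunded}.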

Related

Kdb+/q: How to bulk insert into a KDB+ table with an index?

I am trying to bulk insert multiple records simultaneously into a KDB+ database:
> trades:([]time:`datetime$();side:`symbol$();qty:`float$();price:`float$();exch:`symbol$();sym:`symbol$())
> t: .z.z / intentionally the same time
> `trades insert (t t;`buy `sell;10 10;10 10;`exch `exch;`sym `sym)
However, it raises an error at the sym column:
'sym
[0] `depths insert (t t;`buy `sell;10 10;10 10; `exch `exch;`sym `sym)
^
I have no idea what I could be doing wrong here, but it seems to be value-invariant, i.e. it always raises an error on the last column irrespective of the value provided.
Could someone please advise me how I should go about inserting bulk records into kdb+ with a time index as depicted above.
Thanks
In your original insert statement, you had spaces between `sym `sym, `exch `exch and `buy `sell. The space between the symbols makes each of those an apply or index rather than the list you want.
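As a quick illustration of the difference in a q session (a minimal sketch of the list forms involved):
q)type `buy`sell     / 11h - a two-item symbol list (no spaces)
q)count `buy`sell    / 2
q)type (`buy;`sell)  / 11h - the same list written in general form
Written with a space, `buy `sell is instead parsed as `buy applied to `sell, which is why the insert fails.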
Additionally, because you have specified your qty and price as float, you would have to specify the numbers as float when you are inserting into the trades table.
The following line should accomplish what you are intending to do:
`trades insert (2#t;`buy`sell;10 10f;10 10f;`exch`exch;`sym`sym)
Lastly, I would recommend changing the schema for the qty column to int/long, as quantity generally does not require decimal points.
Hope this helps!
Daniel is on the money. To expand on his answer, q will only collate space-separated values into a single list when they are numeric, and even then the type specifier must appear only on the last item. Further details on list creation can be found here.
q)a:10f 10f
'10f
q)a:10 10f
Secondly, it's common for those learning kdb to encounter type errors when appending to tables. The problem in this case is that kdb does not promote a list of homogeneous atoms to a wider type (which is expected behaviour). The following is a useful little lambda for letting you know where you are going wrong when performing insert or upsert operations:
q)trades:([]time:`datetime$();side:`symbol$();qty:`float$();price:`float$();exch:`symbol$();sym:`symbol$())
q)rows:(t,t;`buy`sell;10 10;10 10;`exch`exch;`sym`sym)
q)insertTest:{[tab;rows] m:0!meta tab; wh: where not m[`t] ~' rt:.Q.ty each rows; #[flip;;enlist] `item`currType`expectedType!(m[`c] wh;rt wh; m[`t] wh)}
q)insertTest[trades;rows]
item currType expectedType
---------------------------
qty j f
price j f

Denodo: How to aggregate varchar data types?

I'm creating an aggregate from an anstime column in a view in Denodo, and I'm using a cast to convert it to float. It works only for the numbers with a period (for example 123.123) but not for the numbers without one (for example 123). Here's my code, which only works for the numbers with a period:
SELECT row_date,
case
when sum(cast(anstime as float)) is null or sum(cast(anstime as float)) = 0
then 0
else sum(cast(anstime as float))
end as xans
FROM table where anstime like '%.%'
group by row_date
Can someone please help me handle the values without a period?
My guess is you've got values in anstime which are not numeric, which is why leaving out the where anstime like '%.%' predicate causes a failure, as has been mentioned in other comments.
You could try adding in an intermediate view before this one which strips out any non numeric values (leaving the decimal point character of course) and this might then allow you to not have to use the where anstime like '%.%' filter.
Perhaps the REGEXP function would help there.
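For example, an intermediate view could strip everything except digits and the decimal point before the cast - a sketch only, assuming a REGEXP(input, pattern, replacement) style function is available in your Denodo version (anstime_num is just an illustrative alias):
SELECT row_date,
       cast(case when regexp(anstime, '[^0-9.]', '') = ''
                 then '0'
                 else regexp(anstime, '[^0-9.]', '') end as float) as anstime_num
FROM table
The aggregate view could then sum anstime_num without needing the where anstime like '%.%' filter.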
Your where anstime like '%.%' clause is going to restrict possible responses to places where anstime has a period in it. Remove that if you want to allow all values.
I appreciate those who responded to my concern. In the end we had to reach out to our developers to fix the data type of the column from varchar to float rather than doing a workaround.

Rails/Postgres treat string column as integer

I got a table named companies and a column named employees, which is a string column.
My where condition to find companies which have between 10 and 100 employees:
where("companies.employees >= ? AND companies.employees <= ?", 10, 100)
The problem is: The column needs to remain a string column so I can't just convert it to integer but I also want to compare the employee numbers. Is there any way to do this?
This may work - it is a Ruby question and I don't know Ruby :-) In Postgres I would write the query, as Craig says, like this:
select * from companies where employees::integer >= 10 and employees::integer <= 100;
(Of course there is substitution, etc., but this gets the concept across.) One of the problems you run into when you don't use the correct type in Postgres is that indexes don't work right. Since you are casting employees to an integer type, every record in the table has to be fetched, cast to an integer, and then compared with the greater/less-than filter. If this were an integer type to start with, and there were an index on the table, the Postgres engine could do much better performance-wise by selecting only the relevant records. Anyway...
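If the column truly has to stay a string, one option worth testing is an expression index on the cast, so Postgres can still use an index for the comparison - a sketch only; the index name is arbitrary, and it assumes every employees value actually parses as an integer, otherwise creating the index will fail:
create index companies_employees_int_idx on companies ((employees::integer));
The cast-based comparisons shown below can then use that index instead of scanning the whole table.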
Your ruby may work modified like this:
where("companies.employees::integer >= ? AND companies.employees::integer <= ?", 10, 100)
But, that makes me curious about the substitution. If the type is gleaned from the type of the argument, then it might work because the 10 and 100 are clearly integers. If the substitution gets weird, you might be able to do this:
where("companies.employees::integer >= cast(? as integer) AND companies.employees::integer <= cast(? as integer)", 10, 100)
You can use that syntax for the entire query as well:
where("cast(companies.employees as integer) >= cast(? as integer) AND cast(companies.employees as integer) <= cast(? as integer)", 10, 100)
One of these variants might work. Good Luck.
-g

Paradox SetRange does not provide correct result when querying 3 fields

I have a problem with setting a range on a secondary index in a Paradox 7 table using Delphi 2010.
The relevant fields are:
FeatureType (int); YMax (int); XMax (int); YMin (int); Xmin (int).
The secondary index contains all these fields in this order.
I tested using a SetRange statement like so (not necessary to add all field values, rest is assumed NULL and all values are included):
table1.IndexName := 'YMaxIndex';
table1.SetRange([101, 280110400],[101, 285103294]); //386236 records
I then tried to get a zero-record result by adding to the constraints:
table1.IndexName := 'YMaxIndex';
table1.SetRange([101, 280110400, 1],[101, 285103294, 1]); //386236 records
But it still returns 386236 records, which is clearly incorrect when checking the values in the XMax field in the table.
Can someone please explain to me what I am not understanding about Paradox index and SetRange? I have used similar code frequently but not necessarily with 3 fields specifying the range.
Update
See Uwe's response below. The final code solution follows (new ranges for XMax):
Table1.SetRange([101,280110400], [101,285103294]);
Table1.Filter := 'XMax > 100000 and XMax < 110000';
Table1.Filtered := true;
An index range is always evaluated as a whole over all fields, not for each field individually. The result set will contain every record that lies between the two boundary tuples; the comparison is made for each index field in the given order.
In your case it will check if the record's FeatureType lies in between 101..101. If the field contains 101 it is taken into consideration. As the field value lies at the border of the range, the next fields are checked.
If the YMax field lies in between 280110400..285103294 and the value doesn't match the borders (280110400 or 285103294), it is taken into the result set without any further checking. In that case the remaining index fields are not checked.
The result you are trying to get is only possible with a filter condition - or with an appropriate SQL Select clause.
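For instance, a local SQL equivalent of the SetRange + Filter combination from the update above would be something along these lines (a sketch; YourParadoxTable is a placeholder for whatever table table1 points at):
SELECT *
FROM YourParadoxTable
WHERE FeatureType = 101
  AND YMax BETWEEN 280110400 AND 285103294
  AND XMax > 100000 AND XMax < 110000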
For a range set with
table1.SetRange([101, 280110400, 1],[101, 285103294, 1]);
the following values are in range:
101 280110400 1
101 280110400 2
101 280110400 3
....
101 280110401 -maxint
....
101 280110401 maxint
....
101 285103294 0
101 285103294 1
A little clarification to the previous answers:
SetRange checks separately the range start and end conditions, for example we have
SetRange([1,2], [2,2])
and record (1, 3);
Range start: we have 1 = 1 for the first field (boundary), so we check the second field (2 < 3) - the range start condition is satisfied.
Range end: we have 1 < 2 for the first field (not boundary), so the second field is not checked - the range end condition is satisfied.
The record is in range.

DELPHI - help needed with a ClientDataSet

I have a ClientDataSet with the following data
IDX EVENT BRANCH_ID BRANCH
1 E1 7 B7
2 E2 5 B5
3 E3 7 B7
4 E4 1 B1
5 E5 2 B2
6 E6 7 B7
7 E7 1 B1
I need to transform this data into
IDX EVENT BRANCH_ID BRANCH
1 E1 7 B7
2 E2 5 B5
4 E4 1 B1
5 E5 2 B2
The only fields of importance are the BRANCH_ID and BRANCH
and BRANCH_ID must be unique
As there is a lot of data, I do not want to have copies of it.
QUESTION:
Can you suggest a way to transform the data using a cloned version of the original data?
Cloning won't allow you to actually change data in a clone and not have the same change reflected in the original, so if that's what you want you might rethink the cloning idea.
Cloning does give you a separate cursor into the clone and allows you to filter and index (i.e. order) it independently of the master clientdataset. From the data you've provided it looks like you want to filter some branch data and order by branch_id. You can accomplish that by setting up a new filter and index on the clone. Here's a good article that includes examples of how to do that:
http://edn.embarcadero.com/article/29416
Taking a second look at your question, it seems like all you'd need to do is set up a unique index on branch_id on the cloned dataset. The linked article above has info on how to set up an index; check the docs on the ClientDataSet.AddIndex function for more details and for info on setting the index to show only unique values - if I recall correctly, it may just mean you set branch_id as the primary key.
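A minimal sketch of that clone-plus-index setup (cdsClone and cds1 are placeholder names; whether an index alone hides the duplicate BRANCH_ID rows is worth verifying, as noted above):
// give the clone its own cursor into cds1's data, with independent view settings
cdsClone.CloneCursor(cds1, True); // True resets the clone's filter/index instead of copying them
cdsClone.AddIndex('ixBranch', 'BRANCH_ID', []);
cdsClone.IndexName := 'ixBranch';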
I can't think of a slick way to do this, but you could index on BRANCH_ID, add an fkInternalCalc boolean field to your dataset, then initialize that field to True on the first row of each branch (using group state or manually) and then filter the clone on the value of the field. You'd have to update the field on data changes though.
I have a feeling that a better solution would be to have a master dataset with a row for each branch.
You don't provide many details about your use case, so I'll try to give you some hints:
"A lot of data" suggests that you might have it from a SQL backend. Using a 'SELECT DISTINCT ...' or a 'SELECT ... GROUP BY BRANCH_ID' (or similar syntax, depending on which SQL backend you use) will give the desired result with ease and speed. Please confirm and I'll give you more details.
As the others said, a simple 'clone' wouldn't work. The simplest (and perhaps quickest) solution, assuming that the branches are usually few in number relative to the data, is to keep an index outside of your dataset. If you really want to filter your original data, then add a status field (e.g. boolean) to your data and put a flag (e.g. True) on the first occurrence.
PseudoCode:
(Let's assume that:
your ClientDataSet is cds1
your cds1 has a status field cds1Status (boolean) - this is optional, needed only if you want to sort/filter/search the cds1
you have an lIndex which is a TStringList, plus local variables cVal: string and nDummy: Integer)
lIndex.Clear;
lIndex.Sorted:=True;
with cds1 do
try
DisableControls;
First;
while not Eof do //scan the dataset
begin
cVal:=cds1Branch_ID.AsString;
Edit; //we anyway update the Status field
if lIndex.Find(cVal, nDummy) then //nDummy - we don't use it.
begin //already in index
cds1Status.AsBoolean:=False; //we say here "No, isn't the 1st occurence"
end
else
begin //Not found! - Well, let's add it...
lIndex.Append(cVal); //update the index
cds1Status.AsBoolean:=True; //mark the first occurence
end;
Post; //save the changes in the status field
Next;
end; //scan
finally
EnableControls; //housekeeping
end;
//WARNING! - NOT tested. I wrote it from my head but I think that you got the idea...
...Depending on what you are trying to accomplish (it would be best if you shared that with us) and what selectivity you have on BRANCH_ID, perhaps the status-field approach isn't needed at all. If you have a very low selectivity on that field (selectivity = no. of unique values / no. of records), it may be much faster to create a new dataset and copy only the unique values into it, rather than putting each record of the original cds through Edit + Post. (Changing dataset states is a costly operation, especially if your cds is linked to remote data storage, i.e. a server.)
hth,
PS: My solution is intended to be mostly simple. You can also test with lIndex.Sorted:=False and use lIndex.IndexOf instead of Find; in some (rare) cases that is better - it depends on your data. If you want to complicate things and speed is really a concern, you can implement a full-blown B-tree index to do your searches (libraries are available). You could also use the index engine of the CDS, index BRANCH_ID and do many 'Locate' calls on a clone, but because your selectivity is clearly < 1, scanning the cds's entire index should theoretically be slower than a scan on a unique index, especially if your custom-made index is tailored to your data type, structure, distribution, etc.
just my2c
