vb6 - Greater/Less Than statements giving incorrect output - textbox

I have a VB6 form with text boxes for minimum and maximum values. The text boxes have a MaxLength of 4, and I have code in the KeyPress event to limit entry to numeric characters. The code checks that max > min, but it behaves very strangely; it seems to be comparing the values in scientific notation or something. For example, it evaluates 30 > 200 as True and 100 > 20 as False. However, if I change the entries to 030 > 200 and 100 > 020, it gives the correct answer. Does anyone know why it would act this way?
My code is below, I am using control arrays for the minimum and maximum text boxes.
For cnt = 0 To 6
    If ParameterMin(cnt) > ParameterMax(cnt) Then
        MsgBox "Default, Min, or Max values out of range. Line not updated."
        Exit Sub
    End If
Next cnt

That is how text comparison behaves for numbers represented as variable-length text: the strings are compared lexicographically, character by character (in general, not just in VB6).
Either pad with zeros to a fixed length and continue comparing as text (as you noted)
OR
(preferably) convert to integers and then compare.
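The effect is easy to reproduce in any language; a quick sketch in Python (illustrative only, VB6's Val() achieves the same conversion as int() here):

```python
# Lexicographic (character-by-character) comparison of digit strings
assert ("30" > "200") is True     # '3' > '2', so the rest is never examined
assert ("100" > "20") is False    # '1' < '2'

# Zero-padding to a fixed width restores numeric ordering
assert ("030" > "200") is False
assert ("100" > "020") is True

# Converting to numbers is the robust fix
assert (int("30") > int("200")) is False
assert (int("100") > int("20")) is True
```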

If I understood correctly, you can alter the code to
If Val(ParameterMin(cnt)) > Val(ParameterMax(cnt)) Then
One piece of advice (IMHO): if possible, avoid validating data during KeyPress/KeyUp/KeyDown events.
Can you change the GUI to contain a "Submit" button and validate the form there?
Hope I helped...


How can I have a validation formula for a Decimal field that is only tested if the field is non-blank

In Form Builder, I have a control of Data Type: Decimal. Required is set to No. I want to have a validation error if a value entered is not positive. But I don't want a validation error if the field is blank. I have tried a lot of formulas such as these:
xxf:is-blank(xs:string($control-2)) or $control-2 > 0
string-length(xs:string($control-2)) = 0 or $control-2 > 0
xxf:is-blank($control-2) or $control-2 > 0
string-length($control-2) = 0 or $control-2 > 0
If the field is non-blank I can convert it to a string with xs:string() without any issue, but if it's blank the conversion fails. Is there a formula that tests for a blank Decimal value?
If in Form Builder you make the field non-required and Decimal, in the XForms, you'll have a type="xf:decimal". Then, as for the constraint, just writing xxf:is-blank() or . > 0 seems to work for me. I see that this is pretty much equivalent to your 3rd expression, so I might be missing something. If that is the case, feel free to let me know in the comments.
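The blank-or-positive rule itself is simple; a minimal sketch of the same logic in Python (the function name is illustrative and not part of Orbeon, it just mirrors what the XPath constraint expresses):

```python
def blank_or_positive(value: str) -> bool:
    # Mirrors the constraint: xxf:is-blank() or . > 0
    if value.strip() == "":
        return True          # blank field: no validation error
    try:
        return float(value) > 0
    except ValueError:
        return False         # not a decimal at all

assert blank_or_positive("") is True
assert blank_or_positive("3.5") is True
assert blank_or_positive("-1") is False
```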
And for reference, this is what I had in the Validations and Alerts tab:

set of WideChar: Sets may have at most 256 elements

I have this line:
const
  MY_SET: set of WideChar = [WideChar('A')..WideChar('Z')];
The above does not compile, with error:
[Error] Sets may have at most 256 elements
But this line does compile ok:
var WS: WideString;
if WS[1] in [WideChar('A')..WideChar('Z')] then...
And this also compiles ok:
const
  MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')];
...
if WS[1] in MY_SET then...
Why is that?
EDIT: My question is: why does if WS[1] in [WideChar('A')..WideChar('Z')] compile, and why does MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')]; compile? Don't the set rules apply to them as well?
A valid set has to obey two rules:
Each element in a set must have an ordinal value less than 256.
The set must not have more than 256 elements.
MY_SET: set of WideChar = [WideChar('A')..WideChar('Z')];
Here you declare a set type (set of WideChar) whose base type has far more than 256 possible elements -> compiler error.
if WS[1] in [WideChar('A')..WideChar('Z')]
Here, the compiler sees WideChar('A') as an ordinal value. This value and all other values in the set are below 256, which satisfies rule 1.
The number of unique elements (Ord('Z') - Ord('A') + 1 = 26) is also within limits, so the second rule passes.
MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')];
Here you declare a set that also fulfills the requirements as above. Note that the compiler sees this as a set of ordinal values, not as a set of WideChar.
A set can have no more than 256 elements.
Even with so few elements the set already uses 32 bytes.
From the documentation:
A set is a bit array where each bit indicates whether an element is in the set or not. The maximum number of elements in a set is 256, so a set never occupies more than 32 bytes. The number of bytes occupied by a particular set is equal to
(Max div 8) - (Min div 8) + 1
For this reason only sets of byte, (ansi)char, boolean and enumerations with fewer than 257 elements are possible.
Because WideChar uses 2 bytes, it has 65536 possible values.
A set of WideChar would take up 8 KB, too large to be practical.
type
  Capitals = 'A'..'Z';

const
  MY_SET: set of Capitals = [WideChar('A')..WideChar('Z')];
This will compile and work the same.
It does seem a bit silly to use WideChar if your code ignores Unicode.
As written, only the English capital letters are recognized; different locales are not taken into account.
In this case it would be better to use code like
if (AWideChar >= 'A') and (AWideChar <= 'Z') ....
That will work no matter how many chars fall in between.
Obviously you can encapsulate this in a function to save on typing.
If you insist on having large sets, see this answer: https://stackoverflow.com/a/2281327/650492
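The documented size formula can be checked numerically; a short Python sketch (illustrative only, Delphi applies this at compile time):

```python
def set_size_bytes(min_ord: int, max_ord: int) -> int:
    # Documented formula: (Max div 8) - (Min div 8) + 1,
    # where Min/Max are the ordinal bounds of the set's base type
    return (max_ord // 8) - (min_ord // 8) + 1

assert set_size_bytes(0, 255) == 32              # set of Byte / AnsiChar: 32 bytes
assert set_size_bytes(ord('A'), ord('Z')) == 4   # set of 'A'..'Z': 4 bytes
assert set_size_bytes(0, 65535) == 8192          # hypothetical set of WideChar: 8 KB
```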

checkValidity() of number input type in Dartlang

I have the below code to check the input's validity, so that the field becomes '0' if the input is not a number:
innerInput.onKeyUp.listen((e) {
  if (innerInput.checkValidity() == false) innerInput.value = '0';
});
This works fine with integer numbers, but once I enter "." followed by any number, the input field becomes '0'; that is, checkValidity() treats x.y as something other than a valid number!
Any thoughts?
input.checkValidity() checks all constraints. For input[type=number] this means it checks min, max and step. So if you want to enter floating-point numbers, you have to make sure min, max and step allow them. Since you can leave min and max open, you at least have to specify step. If you want to enter floating-point numbers with a precision of 5 digits (e.g. 0.00001), your step attribute has to be 0.00001 or smaller.
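The step check can be sketched as follows, a simplified model of the HTML step-mismatch rule (the step base of 0 and default step of 1 are assumptions from my reading of the spec, not from the question):

```python
from decimal import Decimal

def step_mismatch(value: str, step: str, step_base: str = "0") -> bool:
    # A number input is invalid unless value == step_base + n * step
    # for some integer n; the default step for type=number is "1",
    # which is why "3.5" fails until a finer step is specified.
    ratio = (Decimal(value) - Decimal(step_base)) / Decimal(step)
    return ratio != ratio.to_integral_value()

assert step_mismatch("3", "1") is False    # integers pass the default step
assert step_mismatch("3.5", "1") is True   # why "." input fails by default
assert step_mismatch("3.5", "0.1") is False  # step="0.1" accepts it
```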

INSPECT verb in COBOL program

Here is an example I used for INSPECT verb.
INSPECT FUNCTION REVERSE (WS-ADDR(7:4)) TALLYING WS-COUNT FOR LEADING SPACES
DISPLAY 'COUNT :' WS-COUNT
COMPUTE WS-LENGTH = 4 - WS-COUNT
DISPLAY 'LENGTH :' WS-LENGTH
I am not getting the right output with the below input,
Input-1 - 43WE
WS-COUNT = 0
LENGTH = 4
Input-2 - 85
WS-COUNT = 2
LENGTH = 2
Input-3 - 74OI
WS-COUNT = 2
LENGTH = 2
For the input-3 the WS-COUNT should come as 0 but I am getting 2 as a value.
Please find the console window screenshot below:
IN-VALUES :%ORIG243WE
COUNT :000
LENGTH :004
ADDRESSLINES: 43WE<br>
WS-SUB :004
IN-VALUES :%ORIG385
COUNT :002
LENGTH :002
ADDRESSLINES: 85<br>
WS-SUB :005
IN-VALUES :%ORIG474OI
COUNT :002
LENGTH :002
Could anyone help me resolve this?
You must initialize identifier-2 before execution of the INSPECT
statement begins.
So says the IBM Enterprise COBOL Language Reference, and the words will be similar in any COBOL manual.
identifier-2 is the target field of your TALLYING.
If you do not set it to an initial value prior to the INSPECT, the current value will be added to (or not, in your case).
This is useful sometimes, but if you do not want to make use of it, you must set the identifier-2 field to zero before the INSPECT.
In your case that would be, for example (you could also use INITIALIZE, SET an 88 which has a zero in the first position of its VALUE clause, etc):
MOVE ZERO TO WS-COUNT
If you show your data definitions, representative sample input and expected output, you may even get some hints about doing what you want in a tidier way, if you tell us what it is you want.
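The accumulation effect is easy to reproduce outside COBOL; a minimal Python sketch mirroring the three inputs above (the helper function is illustrative, standing in for the INSPECT statement):

```python
def leading_spaces_of_reverse(field: str) -> int:
    # Models: INSPECT FUNCTION REVERSE(field) TALLYING ... FOR LEADING SPACES
    rev = field[::-1]
    return len(rev) - len(rev.lstrip(" "))

inputs = ["43WE", "85  ", "74OI"]   # each a PIC X(4) field; input-2 is space-padded

ws_count = 0                        # never reset between records: the reported bug
buggy = []
for f in inputs:
    ws_count += leading_spaces_of_reverse(f)   # INSPECT adds onto WS-COUNT
    buggy.append(ws_count)
# buggy == [0, 2, 2]  -> input-3 wrongly reports 2, as in the console output

# With MOVE ZERO TO WS-COUNT before each INSPECT:
fixed = [leading_spaces_of_reverse(f) for f in inputs]
# fixed == [0, 2, 0]
```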

adding a big offset to an os.time{} value

I'm writing a Wireshark dissector in Lua and trying to decode a time-based protocol field.
I have two components:
1)
local ref_time = os.time{year=2000, month=1, day=1, hour=0, sec=0}
and 2)
local offset_time = tvbuffer(0,5):bytes()
A 5-byte ByteArray() (larger than the uint32 range) containing the number of milliseconds, in network byte order, since ref_time. Now I'm looking for a human-readable date. I didn't know this would be so hard, but first, it seems I cannot simply add an offset to an os.time value, and second, the offset exceeds the Int32 range... and most functions I tested seem to truncate the input value.
Any ideas on how I can get the date from ref_time and offset_time?
Thank you very much!
Since ref_time is in seconds and offset_time is in milliseconds, just try:
os.date("%c",ref_time+offset_time/1000)
I assume that offset_time is a number. If not, just reconstruct it from the bytes using arithmetic. Keep in mind that Lua uses doubles for numbers, so a 5-byte integer fits just fine.
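For reference, the reconstruction the answer mentions looks like this in Python (the 5 example bytes are made up; in Lua you would combine the bytes with the same big-endian arithmetic):

```python
import datetime

ref = datetime.datetime(2000, 1, 1, tzinfo=datetime.timezone.utc)

raw = bytes([0xE8, 0xD4, 0xA5, 0x10, 0x00])  # example 5-byte field, network byte order
offset_ms = int.from_bytes(raw, "big")       # reconstruct the big-endian integer

when = ref + datetime.timedelta(milliseconds=offset_ms)
print(when.isoformat())
```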
