The COBOL RANDOM function documentation doesn't give sufficient information on the range of accepted values for argument-1.
Perhaps someone can shed light on my following questions:
What range of seed values is accepted?
How are values that exceed the allowed range treated?
Are they truncated?
Are only the lower bits used?
Or the upper bits?
Are the leftmost digits used?
Or the rightmost?
How many of them?
Is a MOD function applied to the seed value?
In short:
Is there a specification in the COBOL standard defining which digits of a value like 01 myRandomSeed PIC 9(50). are being used?
For the COBOL standard, have a look at the current draft standard (the files available there vary depending on the current state of the committee work), which has the RANDOM function under "15 intrinsic functions".
The format is:
FUNCTION RANDOM [ ( [ argument-1 ] ) ]
With the rule that the optional argument-1 shall be of class numeric and be either zero or a positive integer.
For the returned value:
The implementor shall specify the subset of the domain of argument-1 values that will yield distinct sequences of pseudo-random numbers. This subset shall include the values from 0 through at least 32767.
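In other words, only seeds 0 through 32767 are guaranteed to yield distinct sequences; what an implementation does with a larger value (truncation, MOD, using only some of the digits) is not pinned down by the standard. One defensive option, sketched here in Python purely for illustration (not COBOL), is to fold a big seed into the guaranteed range yourself before passing it to FUNCTION RANDOM:
# Illustration only (Python, not COBOL). Folding a large seed into the
# subset the standard guarantees (0..32767) avoids relying on whatever an
# implementation does with out-of-range values.
def fold_seed(big_seed):
    return big_seed % 32768   # always lands in 0..32767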
A simple question that turned out to be quite complex:
How do I turn a float into a string in Gforth? The desired behavior would look something like this:
1.2345e fToString \ takes 1.2345e from the float stack and pushes (addr n) onto the data stack
After a lot of digging, one of my colleagues found it:
f>str-rdp ( rf +nr +nd +np -- c-addr nr )
https://www.complang.tuwien.ac.at/forth/gforth/Docs-html-history/0.6.2/Formatted-numeric-output.html
Convert rf into a string at c-addr nr. The conversion rules and the
meanings of nr +nd np are the same as for f.rdp.
And from f.rdp:
f.rdp ( rf +nr +nd +np -- )
https://www.complang.tuwien.ac.at/forth/gforth/Docs-html/Simple-numeric-output.html
Print float rf formatted. The total width of the output is nr. For
fixed-point notation, the number of digits after the decimal point is
+nd and the minimum number of significant digits is np. Set-precision has no effect on f.rdp. Fixed-point notation is used if the number of
significant digits would be at least np and if the number of digits
before the decimal point would fit. If fixed-point notation is not
used, exponential notation is used, and if that does not fit,
asterisks are printed. We recommend using nr>=7 to avoid the risk of
numbers not fitting at all. We recommend nr>=np+5 to avoid cases where
f.rdp switches to exponential notation because fixed-point notation
would have too few significant digits, yet exponential notation offers
fewer significant digits. We recommend nr>=nd+2, if you want to have
fixed-point notation for some numbers. We recommend np>nr, if you want
to have exponential notation for all numbers.
In human-readable terms, these words require a number on the float stack and three numbers on the data stack.
The first parameter tells it how long the string should be, the second how many digits you would like after the decimal point, and the third the minimum number of significant digits (which roughly translates to precision). A lot of implicit math is performed to determine the final string format that is produced, so some tinkering is almost required to make it behave the way you want.
Testing it out (we don't want to rebuild f., but to produce a format that will be accepted as a floating-point number by Forth when EVALUATEd again, so the 1.2345E0 notation is intentional):
PI 18 17 17 f>str-rdp type \ 3.14159265358979E0 ok
PI 18 17 17 f.rdp \ 3.14159265358979E0 ok
PI f. \ 3.14159265358979 ok
I couldn't find the exact word for this, so I looked into Gforth sources.
Apparently, you could go with the represent word, which prints the most significant digits into a supplied buffer, but that's not exactly the final output. represent returns validity and sign flags, as well as the position of the decimal point. That word is then used in all variants of the floating-point printing words (f., fp., fe.).
Probably the easiest way would be to substitute emit with your own word (emit is a deferred word) that saves the data where you need it, use one of the available floating-point printing words, and then restore emit to its original value.
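Expressed as a rough Python analogy (not Gforth code), the trick is: swap the output sink, run the normal printing word, grab what it wrote, then put the original back:
import io
from contextlib import redirect_stdout

def capture_output(print_word, *args):
    buf = io.StringIO()
    with redirect_stdout(buf):    # stands in for re-binding the deferred emit
        print_word(*args)         # run the normal printing word unchanged
    return buf.getvalue()         # the original sink comes back automatically on exit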
I'd like to hear the preferred solution too...
According to the ISBN-10 algorithm, the check digit can be X, which means it may not be a number.
But the output of the Google API https://www.googleapis.com/books/v1/volumes?q=isbn:xxx uses an integer type for ISBN-10 and ISBN-13. Why?
P.S.:
The following is part of the Google API output:
industryIdentifiers = (
    {
        identifier = 7542637975;
        type = "ISBN_10";
    },
    {
        identifier = 9787542637970;
        type = "ISBN_13";
    }
);
ISBN-10 in theory should have been replaced by ISBN-13 by 2007. Obviously, this is not possible for already-sold publications, and some publishers still maintain ISBN-10 rather than changing to ISBN-13 (in the same way as some manufacturers maintain UPC-A instead of EAN-13 or GS1).
To convert an ISBN-10 to an ISBN-13, simply take the first 9 digits of the ISBN, prefix them with 978 and then calculate the check digit using the standard EAN algorithm. Use the 13-digit result as your key to locate the item (it fits in a 64-bit unsigned integer).
You can always extract the ISBN-10 by removing the first 3 digits and the check-digit and using the ISBN-10 algorithm to re-calculate the check character.
This way, you only need to record the 13-digit version. If you have an ISBN-10 (read by scanner) or you need to produce an ISBN-10 (for whatever purpose) it's simply a matter of applying the appropriate conversion algorithm.
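For illustration, here is a rough sketch of that conversion in Python, assuming plain digit strings (no hyphens) and the 978 prefix described above; the helper names are just illustrative, and the example values are the ones from the API output:
def isbn10_to_isbn13(isbn10):
    body = "978" + isbn10[:9]                       # drop the old check digit, add the prefix
    total = sum(int(d) * (1 if i % 2 == 0 else 3)   # EAN-13 weights 1,3,1,3,...
                for i, d in enumerate(body))
    return body + str((10 - total % 10) % 10)

def isbn13_to_isbn10(isbn13):
    body = isbn13[3:12]                             # strip the 978 prefix and the check digit
    check = sum((i + 1) * int(d) for i, d in enumerate(body)) % 11
    return body + ("X" if check == 10 else str(check))

print(isbn10_to_isbn13("7542637975"))     # 9787542637970
print(isbn13_to_isbn10("9787542637970"))  # 7542637975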
Depending on your application, you may wish to consider what to do with ISMN (for music) or ISSN (periodical) numbers. Periodicals are more problematic. The barcode extension usually yields the month or week of publication, but the -13 version remains the same. That's fine for a seller of such items, as whatever-01 would be January, and these would be well out-of-date (and hence no longer in stock) by the following January when the same number would be used. Not so good for an archival function like a library, though...
Thank you all for the helpful answers. I finally found out that the original response from Google is in string format. It was printed like an integer only because of my JSON printer.
I have a compute statement that uses fields like so:
WS-COMPUTE PIC 9(14).
WS-NUM-1 PIC 9(09).
WS-NUM-2 PIC 9(09).
WS-NUM-3 PIC S9(11) COMP-3.
WS-DENOM PIC 9(09).
And then there is logic to make a computation
COMPUTE WS-COMPUTE =
((WS-NUM-1 - WS-NUM-2 + WS-NUM-3)
/ WS-DENOM) * 100
The * 100 is in there because a number < 1 is expected from the division, but 0 was always stored in WS-COMPUTE.
We got a workaround by declaring another field that did have implied decimals and then moving that value to WS-COMPUTE, but I was lost: why would the original always populate WS-COMPUTE with 0?
The number of decimal places for the results of intermediate calculations is directly related to the number of decimal places in your final result field (consult the manual for the case where you have multiple result fields) when there are no decimal places in the individual operands. COBOL is not going to use a predetermined number of decimal places for intermediate results. If neither the operands in question nor the final result contain decimal places, the intermediate results will not contain decimal places.
The relationship is: number of decimal places in intermediate results = number of decimal places in final result field. The only thing which can modify this is the specification of ROUNDED. If ROUNDED is specified, one extra decimal place is kept for the intermediate result fields, and that will be used to perform the rounding of the final result.
You have no decimal places on your final result, and no ROUNDED, so the intermediate results will have no decimal places. If you get a value of less than one from the division, it is gone before anything can happen to it. It is stored as zero, because there is no decimal part available to store it in.
You need to understand COMPUTE before you use it. Nowhere near enough people do. There is absolutely no need to specify excessive lengths of fields or decimal places where none are needed. These are common ways to "deal with" a problem, but they are unnecessary, as the actual problem is a poorly-formed COMPUTE.
If your COMPUTE contains multiplication, do that first. If it contains division, do that last. This may require re-arranging a formula, but this will give you the correct result. Subject to truncation, which comes in two parts, as Bruce Martin has indicated. There is the one you are getting, decimal truncation through not specifying enough (any) decimal places when you expect a decimal-only value for an intermediate result, and high-order truncation if your source fields are not big enough. Always remember that the result field controls the size (decimal and integer) of the intermediate results. If you do those things, your COMPUTEs will always work.
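To see why the order matters, here the arithmetic is mimicked in Python, with integer division standing in for intermediate results that carry no decimal places (the 3 and 4 are hypothetical stand-ins for the subtraction result and the denominator):
numerator, denominator = 3, 4   # stand-ins for (WS-NUM-1 - WS-NUM-2 + WS-NUM-3) and WS-DENOM

divide_first   = (numerator // denominator) * 100   # 0 * 100 = 0: the fraction is gone before the * 100
multiply_first = (numerator * 100) // denominator   # 300 // 4 = 75: the expected percentage survives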
And consider whether you want the final result rounded. If so, use ROUNDED. If you want intermediate results to be rounded, you need to do that yourself with separate COMPUTEs or DIVIDEs or MULTIPLYs.
If you don't take these things into account, your COMPUTEs will work by accident, or sometimes, or not at all, or when you specify excessive size or decimal places. Always remember that the result field controls the size (decimal) of the intermediate results where operands contain no decimal places.
If you don't need any decimal places in the final result, use Bruce Martin's first COMPUTE:
COMPUTE WS-COMPUTE = ((WS-NUM-1 - WS-NUM-2 + WS-NUM-3) * 100) / WS-DENOM
If you do need decimal places, use Bruce Martin's first COMPUTE (yes, the same one) with the decimals defined on the final result (WS-COMPUTE).
If you need the result to be rounded (0-4 down, 5-9 up) use ROUNDED. If you need some other rounding, specify the final result with an extra decimal place beyond what you need, and do your own rounding to your specification.
If you look at the column to the right of your question, under Related, you'll find existing questions here which would/should have answered this one for you.
You do not need to add spurious digits or spurious decimal places to everything in sight. Ensure your final result is big enough, has enough decimal places, and pay attention to the order of things. Read your manual which should document intermediate results. If your manual does not cover this, the IBM Enterprise COBOL manuals are an excellent general reference, as well as specific ones. The Programming Guide devotes an entire Appendix to intermediate results.
It sounds like you are using the TRUNC(STD) option, where the compiler takes the PICTURE clause to decide what precision to use for intermediate results. You can either add implied decimals to all your intermediate fields or try something like TRUNC(BIN) or TRUNC(OPT), though in this case I don't think they will help.
The TRUNC option truncates final intermediate results. OS/VS COBOL has the TRUNC and NOTRUNC options (NOTRUNC is the default). VS COBOL II, IBM COBOL, and Enterprise COBOL have the TRUNC(STD|OPT|BIN) option.
TRUNC(STD)
Truncates numeric fields according to PICTURE specification of the binary receiving field
TRUNC(OPT)
Truncates numeric fields in the most optimal way
TRUNC(BIN)
Truncates binary fields based on the storage they occupy
TRUNC(STD) is the default.
For a complete description, see the Enterprise COBOL Programming Guide.
The default for Cobol is normally to truncate! This includes intermediate results.
So the decimal places will be truncated in your calculation.
You could try:
COMPUTE WS-COMPUTE = ((WS-NUM-1 - WS-NUM-2 + WS-NUM-3) * 100) / WS-DENOM
This could result in losing high-order digits.
Alternatively you could:
Use 2 computes (as shown below)
Add decimals to the input declarations
Use floating-point fields (comp-1, comp-2); as they are rarely used in Cobol, I do not advise it.
03 WS-Temp Pic 9(11)V9999 comp-3.
Compute WS-Temp = WS-NUM-1 - WS-NUM-2 + WS-NUM-3.
Compute WS-Temp = (WS-Temp / WS-DENOM) * 100.
Compute WS-COMPUTE = WS-Temp.
Change the field definition:
WS-COMPUTE PIC 9(14).
WS-NUM-1 PIC 9(09)V999.
WS-NUM-2 PIC 9(09)V999.
WS-NUM-3 PIC S9(11)V999 COMP-3.
WS-DENOM PIC 9(09).
I have a function that returns a float value like this:
1.31584870815277
I need a function that returns True when comparing the value against another number, considering only the two digits after the decimal point.
Example:
if 1.31584870815277 = 1.31 then ShowMessage('same');
Sorry for my English.
Can someone help me? Thanks
Your problem specification is a little vague. For instance, you state that you want to compare the values after the decimal point, which would imply that you wish 1.31 to be considered equal to 2.31.
On top of this, you will need to specify how many decimal places to consider. A number like 1.31 is not representable exactly in binary floating point. Depending on the type you use, the closest representable value could be less than or greater than 1.31.
My guess is that what you wish to do is to use round to nearest, to a specific number of decimal places. You can use the SameValue function from the Math unit for this purpose. In your case you would write:
SameValue(x, y, 0.01)
to test for equality up to a tolerance of 0.01.
This may not be precisely what you are looking for, but then it's clear from your question that you don't yet know exactly what you are looking for. If your needs are specifically related to decimal representation of the values then consider using a decimal type rather than a binary type. In Delphi that would be Currency.
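To make the distinction between a tolerance test and rounding concrete, here is the same comparison sketched in Python rather than Delphi:
import math

x, y = 1.31584870815277, 1.31
print(abs(x - y) <= 0.01)                # True:  tolerance test, like SameValue(x, y, 0.01)
print(math.isclose(x, y, abs_tol=0.01))  # True:  same idea
print(round(x, 2) == round(y, 2))        # False: rounding to 2 places gives 1.32 vs 1.31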
If speed isn't the highest priority, you can use string conversion:
if Copy(1.31584870815277.ToString, 1, 4) = '1.31' then ShowMessage('same');
In the Erlang shell, I can do the following:
A = 300.
300
<<A:32>>.
<<0, 0, 1, 44>>
But when I try the following:
B = term_to_binary({300}).
<<131,104,1,98,0,0,1,44>>
<<B:32>>.
** exception error: bad argument
<<B:64>>.
** exception error: bad argument
In the first case, I'm taking an integer and using the bitstring syntax to put it into a 32-bit field. That works as expected. In the second case, I'm using the term_to_binary BIF to turn the tuple into a binary, from which I attempt to unpack certain bits using the bitstring syntax. Why does the first example work, but the second example fail? It seems like they're both doing very similar things.
The difference between a binary and a bitstring is that the bit length of a binary is evenly divisible by 8, i.e. it contains no 'partial' bytes; a bitstring has no such restriction.
This difference is not your problem here.
The problem you're facing is that your syntax is wrong. If you would like to extract the first 32 bits from the binary, you need to write a complete matching statement - something like this:
<<B1:32, _/binary>> = B.
Note that the /binary is important, as it will match the remnant of the binary regardless of its length. If omitted, the matched length defaults to 8 (i.e. one byte).
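If it helps, here is the same "take the first 32 bits, keep the rest" match sketched with Python's struct module (the byte values are copied from your term_to_binary result):
import struct

b = bytes([131, 104, 1, 98, 0, 0, 1, 44])              # same bytes as term_to_binary({300})
(first32,), rest = struct.unpack(">I", b[:4]), b[4:]   # like <<B1:32, _/binary>> = B
print(first32)      # the unsigned integer built from the bytes 131,104,1,98
print(list(rest))   # [0, 0, 1, 44] - the remnant that _/binary would match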
You can read more about binaries and working with them in the Erlang Reference Manual's section on bit syntax.
EDIT
To your comment, <<A:32>> isn't just for integers, it's for values. Per the link I gave, the bit syntax allows you to specify many aspects of binary matching, including data types of bound variables - while the default type is integer, you can also say float or binary (among others). The :32 part indicates that 32 bits are required for a match - that may or may not be meaningful depending on your data type, but that doesn't mean it's only valid for integers. You could, for example, say <<Bits:10/bitstring>> to describe a 10-bit bitstring. Hope that helps!
The <<A:32>> syntax constructs a binary. To deconstruct a binary, you need to use it as a pattern, instead of using it as an expression.
A = 300.
% Converts a number to a binary.
B = <<A:32>>.
% Converts a binary to a number.
<<A:32>> = B.