Understanding CALL X"91" - COBOL

Can someone please help me understand CALL X"91" function 11 and function 12 with a simple example? I have tried to search and couldn't understand it. Right now I am using this code in COBOL under a UNIX environment. Does this call work in a Windows environment as well?

http://opencobol.add1tocobol.com/#what-are-the-xf4-xf5-and-x91-routines
The CALL's X"F4", X"F5", X"91" are from MF.
You can find them in the online MF doc under
Library Routines.
F4/F5 are for packing/unpacking bits from/to bytes.
91 is a multi-use call. Implemented are the subfunctions
get/set cobol switches (11, 12) and get number of call params (16).
Use
CALL X"F4" USING
BYTE-VAR
ARRAY-VAR
RETURNING STATUS-VAR
to pack the last bit of each byte in the 8 byte ARRAY-VAR into corresponding bits of the 1 byte BYTE-VAR.
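A tiny worked sketch of that packing (the picture clauses are assumptions in the usual MF COMP-X style, and which occurrence maps to which bit position is dialect-specific, so check the Library Routines doc):

       01  STATUS-VAR  PIC X COMP-X.
       01  BYTE-VAR    PIC X COMP-X.
       01  ARRAY-VAR.
           05  BIT-BYTE PIC X COMP-X OCCURS 8 TIMES.
      *    Clear the array, then set the low bit of occurrences 1 and 3.
           MOVE LOW-VALUES TO ARRAY-VAR
           MOVE 1 TO BIT-BYTE (1)
           MOVE 1 TO BIT-BYTE (3)
      *    Pack those eight low bits into the single byte.
           CALL X"F4" USING BYTE-VAR ARRAY-VAR
               RETURNING STATUS-VAR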
The X"F5" routine does the reverse: it takes the eight bits of the byte and moves them to the corresponding occurrences within the array.
X"91" is a multi-function routine.
CALL X"91" USING
RESULT-VAR
FUNCTION-NUM
PARAMETER-VAR
RETURNING STATUS-VAR
As mentioned by Roger, OpenCOBOL supports FUNCTION-NUM of 11, 12 and 16.
11 and 12 get and set the on/off status of the eight run-time OpenCOBOL switches definable in the SPECIAL-NAMES paragraph; 16 returns the number of call parameters given to the current module.
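A minimal sketch of 11 and 12 in that shape (the one-byte-per-switch table follows the OpenCOBOL description above; the exact parameter layout and picture clauses are assumptions, so check the MF Library Routines doc):

       01  RESULT-VAR    PIC X COMP-X.
       01  FUNCTION-NUM  PIC X COMP-X.
       01  SWITCH-TABLE.
           05  SWITCH-BYTE PIC X COMP-X OCCURS 8 TIMES.
      *    Function 12: read the current on/off state of switches 0-7.
           MOVE 12 TO FUNCTION-NUM
           CALL X"91" USING RESULT-VAR FUNCTION-NUM SWITCH-TABLE
      *    Flip the first switch on and write the table back (function 11).
           MOVE 1 TO SWITCH-BYTE (1)
           MOVE 11 TO FUNCTION-NUM
           CALL X"91" USING RESULT-VAR FUNCTION-NUM SWITCH-TABLE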

x"91" is a general library routine; for a complete list of these, see the MF documentation.
That documentation also specifies what function 11 and function 12 do: they set/read the COBOL run-time switches 0-7 and the internal debugging mode switch.
Apart from these library routines, you can also read the switches one by one from COBOL, and set "some" of them via the SET statement.
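For the SET route, a sketch of the SPECIAL-NAMES plumbing (switch and status names here are illustrative; which switches can be set this way depends on the dialect):

       ENVIRONMENT DIVISION.
       CONFIGURATION SECTION.
       SPECIAL-NAMES.
           SWITCH-1 IS SW1
               ON STATUS IS SW1-ON
               OFF STATUS IS SW1-OFF.
      *    ...
       PROCEDURE DIVISION.
           IF SW1-ON
               DISPLAY "run-time switch 1 is on"
           END-IF
           SET SW1 TO OFF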

Related

NodeMCU Lua integer max value is 2^31

Lua 5.1.4 on SDK 3.0.1-dev(fce080e)
Trying to use node.dsleepMax(), it returns a much smaller number (147324921) than expected. Then I tried to manually set the sleep time in node.dsleep to the 32-bit max value (4294967295) and it only remained sleeping for around 30 minutes or so.
Tried the following:
> sleeptime = 4294967295
> print(sleeptime)
2147483647
which is 2^31 - 1.
I also did a loop adding to a variable, and it becomes negative when it reaches 2^31.
Questions:
Why is the variable wrapping at 2^31?
Isn't node.dsleep supposed to accept a 64-bit value with SDK 2.1 and above?
Regards,
Cesar
You already got some feedback regarding int vs. float. As for dsleep, the documentation doesn't explicitly state that it accepts 64-bit values, but that's indeed what's happening as per https://github.com/nodemcu/nodemcu-firmware/pull/2358 (since April 2018).
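For reference, the wrap itself is easy to demonstrate (this assumes a firmware built with 32-bit integer Lua numbers rather than floats, which is what the behaviour in the question indicates):

-- Largest value a 32-bit signed integer build can hold:
print(2147483647)        -- 2^31 - 1
-- Arithmetic past the limit wraps around to negative:
print(2147483647 + 1)    -- -2147483648
-- An out-of-range literal comes back clamped here, matching the
-- print(sleeptime) output in the question:
print(4294967295)        -- 2147483647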

Swift InputStream status does not match any possible enum values

I have an InputStream in my iOS app (Swift 4) and check its status a few seconds after opening it. The returned value is 7:
print(String(self.inputStream!.streamStatus.rawValue)) // prints 7
But there is no corresponding entry with the value 7 in the enum described in the docs: https://developer.apple.com/documentation/foundation/stream.status
How can the value be 7? How can I get a textual representation of the status?
My guess was that 7 might be a combination of other values, obtained by interpreting 7 in binary and looking at which bits are 1. In this case the first 3 bits are set (decimal 1, 2, 4), which would correspond to opening=1, open=2 and writing=4. But this seems strange to me.
EDIT: As Martin said in the comments, 7 is .error.
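Stream.Status is a plain enum rather than an OptionSet, so 7 is a single case (error), not a bit combination. A small sketch for getting a textual representation (the case list matches the documented raw values; describe is just an illustrative helper name):

import Foundation

func describe(_ status: Stream.Status) -> String {
    switch status {
    case .notOpen: return "notOpen (0)"
    case .opening: return "opening (1)"
    case .open:    return "open (2)"
    case .reading: return "reading (3)"
    case .writing: return "writing (4)"
    case .atEnd:   return "atEnd (5)"
    case .closed:  return "closed (6)"
    case .error:   return "error (7)"
    default:       return "unknown (\(status.rawValue))"
    }
}

// e.g. print(describe(self.inputStream!.streamStatus)) // "error (7)"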

Other ways to call/eval dynamic strings in Lua?

I am working with a third party device which has some implementation of Lua, and communicates in BACnet. The documentation is pretty janky, not providing any sort of help for any more advanced programming ideas. It's simply, "This is how you set variables...". So, I am trying to just figure it out, and hoping you all can help.
I need to set a long list of variables to certain values. I have a userdata 'ME', with a bunch of variables named MVXX (e.g. - MV21, MV98, MV56, etc).
(This is all kind of background for BACnet.) Variables in BACnet all have 17 'priorities', i.e., every BACnet variable is actually a sort of list of 17 values, with priority 16 being the default. So, typically, if I were to say ME.MV12 = 23, that would set MV12's priority-16 to the desired value of 23.
However, I need to set priority 17. I can do this in the provided Lua implementation, by saying ME.MV12_PV[17] = 23. I can set any of the priorities I want by indexing that PV. (Corollaries - what is PV? What is the underscore? How do I get to these objects? Or are they just interpreted from Lua to some function in C on the backend?)
All this being said, I need to make that variable name dynamic, so that i can set whichever value I need to set, based on some other code. I have made several attempts.
This tells me the object (MV12_PV[17]) does not exist:
x = 12
ME["MV" .. x .. "_PV[17]"] = 23
But this works fine, setting priority 16 to 23:
x = 12
ME["MV" .. x] = 23
I was trying to attempt some sort of what I think is called an evaluation, or eval. But this just prints out function followed by some random 8-digit number:
x = 12
test = assert(loadstring("MV" .. x .. "_PV[17] = 23"))
print(test)
Any help? Apologies if I am unclear - tbh, I am so far behind the 8-ball I am pretty much grabbing at straws.
Underscores can be part of Lua identifiers (variable and function names). They are just part of the variable name (like letters are) and aren't a special Lua operator like [ and ] are.
In the expression ME.MV12_PV[17] we have ME being an object with a bunch of fields, ME.MV12_PV being an array stored in the "MV12_PV" field of that object and ME.MV12_PV[17] is the 17th slot in that array.
If you want to access fields dynamically, the thing to know is that accessing a field with dot notation in Lua is equivalent to using bracket notation and passing in the field name as a string:
-- The following are all equivalent:
x.foo
x["foo"]
local fieldname = "foo"
x[fieldname]
So in your case you might want to try doing something like this:
local n = 12
ME["MV"..n.."_PV"][17] = 23
BACnet "Commmandable" Objects (e.g. Binary Output, Analog Output, and o[tionally Binary Value, Analog Value and a handful of others) actually have 16 priorities (1-16). The "17th" you are referring to may be the "Relinquish Default", a value that is used if all 16 priorities are set to NULL or "Relinquished".
Perhaps your system will allow you to write to a BACnet Property called "Relinquish Default".

how to get the available memory on the device

I'm trying to obtain how much free memory I have on the device. To do this I call the CUDA function cuMemGetInfo from Fortran code, but it returns negative values for the free amount of memory, so there's clearly something wrong.
Does anyone know how I can do that?
Thanks
EDIT:
Sorry, in fact my question was not very clear. I'm using OpenACC in Fortran and I call the C++ CUDA function cudaMemGetInfo. Finally I managed to fix the code; the problem was indeed the kind of the variables I was using. Switching to size_t fixed everything. This is the interface in Fortran that I'm using:
interface
   subroutine get_dev_mem(total, free) bind(C, name="get_dev_mem")
      use iso_c_binding
      integer(kind=c_size_t) :: total, free
   end subroutine get_dev_mem
end interface
and this is the CUDA code:
#include <cuda.h>
#include <cuda_runtime.h>

extern "C" {
    void get_dev_mem(size_t& total, size_t& free)
    {
        cuMemGetInfo(&free, &total);
    }
}
There's one last question: I pushed an array onto the GPU and I checked its size using cuMemGetInfo, then I computed its size by counting the number of bytes, but I don't get the same answer. Why? In the first case it is 3052 MB large, in the latter 3051 MB. Could this difference of 1 MB be the size of the array descriptor? Here is the code that I used:
integer, parameter :: long = selected_int_kind(12)
integer(kind=c_size_t) :: total, free1, free2
real(8), dimension(:), allocatable :: a
integer(kind=long) :: N, eight, four
! (initialisation of N, eight and four is elided in this excerpt)
allocate(a(four*N))
! some OpenACC stuff in order to init the gpu
call get_dev_mem(total, free1)
!$acc data copy(a)
call get_dev_mem(total, free2)
print *, "size a in the gpu = ", (free1-free2)/1024/1024, " mb"
print *, "size a in theory  = ", (eight*four*N)/1024/1024, " mb"
!$acc end data
deallocate(a)
Right, so, as the commenters have suggested, we're not sure exactly what you're running, but filling in the missing details by guessing, here's a shot:
Most CUDA API calls return a status code (or error code if you will); this is true both in C/C++ and in Fortran, as we can see in the Portland Group's CUDA Fortran Manual:
Most of the runtime API routines are integer functions that return an error code; they return a value of zero if the call was successful, and a nonzero value if there was an error. To interpret the error codes, refer to “Error Handling,” on page 48.
This is the case for cudaMemGetInfo() specifically:
integer function cudaMemGetInfo( free, total )
integer(kind=cuda_count_kind) :: free, total
The two integers for free and total are cuda_count_kind, which if I am not mistaken are effectively unsigned... anyway, I would guess that what you're getting is an error code. Have a look at the Error Handling section on page 48 of the manual.
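A minimal CUDA Fortran sketch of actually checking that status code (this assumes the PGI/NVIDIA cudafor module rather than the OpenACC-plus-C wrapper from the question):

program meminfo
   use cudafor
   implicit none
   integer(kind=cuda_count_kind) :: free, total
   integer :: istat
   ! cudaMemGetInfo returns zero (cudaSuccess) on success
   istat = cudaMemGetInfo(free, total)
   if (istat /= cudaSuccess) then
      print *, "cudaMemGetInfo failed with error code ", istat
   else
      print *, "free/total (MB): ", free/1024/1024, total/1024/1024
   end if
end program meminfo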

Is there a safe way to clean up stack-based code when jumping out of a block?

I've been working on Issue 14 on the PascalScript scripting engine, in which using a Goto command to jump out of a Case block produces a compiler error, even though this is perfectly valid (if ugly) Object Pascal code.
Turns out the ProcessCase routine in the compiler calls HasInvalidJumps, which scans for any Gotos that lead outside of the Case block, and gives a compiler error if it finds one. If I comment that check out, it compiles just fine, but ends up crashing at runtime. A disassembly of the bytecode shows why. I've annotated it with the original script code:
[TYPES]
<SNIPPED>
[VARS]
Var [0]: 27 Class TFORM
Var [1]: 28 Class TAPPLICATION
Var [2]: 11 S32 //i: integer
[PROCS]
Proc [0] Export: !MAIN -1
{begin}
[0] ASSIGN GlobalVar[2], [1]
{ i := 1;}
[15] PUSHTYPE 11(S32) // 1
[20] ASSIGN Base[1], GlobalVar[2]
{ case i of}
[31] PUSHTYPE 25(U8) // 2
{ 0:}
[36] COMPARE into Base[2]: [0] = Base[1]
[57] COND_NOT_GOTO currpos + 5 Base[2] [72]
{ end;}
[67] GOTO currpos + 41 [113]
{ 1:}
[72] COMPARE into Base[2]: [1] = Base[1]
[93] COND_NOT_GOTO currpos + 10 Base[2] [113]
{ goto L1;}
[103] GOTO currpos + 8 [116]
{ end;}
[108] GOTO currpos + 0 [113]
{ end; //<-- case}
[113] POP // 1
[114] POP // 0
{ Exit;}
[115] RET
{L1:
Writeln('Label L1');}
[116] PUSHTYPE 17(WideString) // 1
[121] ASSIGN Base[1], ['????????']
[144] CALL 1
{end.}
[149] POP // 0
[150] RET
Proc [1]: External Decl: \00\00 WRITELN
The "goto L1;" statement at 103 skips the cleanup pops at 113 and 114, which leaves the stack in an invalid state.
Delphi doesn't have any trouble with this, because it doesn't use a calculation stack. PascalScript, though, is not as fortunate. I need some way to make this work, as this pattern is very common in some legacy scripts from a much simpler system with little in the way of control structures that I've translated to PascalScript and need to be able to support.
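For clarity, this is the script the annotations above describe, stitched back together (reconstructed from the comments in the dump, so the declarations are inferred):

var
  i: Integer;
label
  L1;
begin
  i := 1;
  case i of
    0: begin
       end;
    1: begin
         goto L1;
       end;
  end;
  Exit;
L1:
  Writeln('Label L1');
end.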
Anyone have any ideas how to patch the codegen so it'll clean up the stack properly?
IIRC the goto rules in classic Pascals were:
jumps are only allowed out of a block (IOW from a higher to a lower nesting level on the "same" branch of the tree)
from local procedures to their parents.
The latter was AFAIK never supported by Borland-derived Pascals, but the first still holds.
So you need to generate exiting code like Martin says, but possibly for multiple block levels, so you can't have one canned code generation for each goto; you must generate code to exit the precise number of blocks needed.
A typical test pattern is to exit from multiple nested ifs (possibly within a loop) using a goto, since that was a classic micro-optimization that was faster at least up to D7.
Keep in mind that the if evaluation(s) and the begin..end blocks of their branches might have generated temps that need cleanup.
---------- added later
I think the codegenerator needs a way to walk the scopes between the goto and its endpoint, generating the relevant exit code for blocks along the way. That way a fix works for the general case and not just this example.
Since you can only jump out of scopes, and not into them, that might not be that hard.
IOW, generate something that is equivalent to this (for a hypothetical double case block):
Lgoto1gluecode:
  // exit code, first block
  pop x
  pop y
  // exit code, second block
  pop A
  pop B
  goto real_goto_destination
Additional analysis can be done. E.g. if there is only one scope, and it already has a cleanup exit label, you can jump there directly. If you know for certain that the above pops are only discarded values (and not saves of registers) you can do them all at once with add $16,%esp (four 4-byte values), etc.
The straightforward solution would be:
When generating a GOTO for a goto statement, prefix the GOTO with the same cleanup code that comes before RET.
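In terms of the dump above, that would turn the jump at 103 into something like this (illustrative only; the offsets and opcode spellings are not real codegen output):

{ goto L1; }
[103] POP        // the same two cleanup pops that precede RET at 113/114
[104] POP
[105] GOTO L1    // now jumps with the stack already unwound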
It looks to me like the calculation of how far to jump forward is the problem. I would have to spend some time looking at the implementation of the parser to help further, but my guess would be that additional handling must be performed when using a goto, there are values on the stack, AND the goto target lies after the point where those values would be removed from the stack. To determine this you would need to save the current location being parsed (the goto), then forward-parse to the target location watching for stack changes, and if there are any, either adjust the goto location backwards or inject the cleanup code as Martin suggested.
