I suspect that the syntax diagram for a plsql_block as given in the Oracle® Database PL/SQL Language Reference for 11g Release 2 is wrong.
(For reference, here's the current link to that document)
The following piece of PL/SQL compiles fine:
declare
    cursor cursor_definition is
        select * from dual;
    variable_declaration number;
begin
    null;
end;
The following statements are assumptions that I make based on the piece of PL/SQL above and based on Oracle's syntax diagram.
1. The declare section (above) consists of a cursor definition followed by a variable declaration (which in turn is an item declaration).
2. An item declaration can only be an element of an item list 1.
3. A cursor definition can only be an element of an item list 2.
4. An item list 2 can never be followed by an item list 1.
Now, the variable declaration following the cursor definition contradicts point 4. Therefore I conclude that the syntax diagram is wrong.
Maybe I am overlooking something, in which case I'd be very grateful to have it pointed out to me.
Please understand that a wrong syntax diagram per se is no big deal to me. But I am in the process of writing a PL/SQL parser, and the parser stumbles on exactly the situation shown in the example PL/SQL code. So, in order to improve the parser, I'd like to have a more authoritative syntax diagram.
I concur. The syntax diagrams explicitly state that a plsql_block is effectively item_list_2 preceded by an optional item_list_1.
Further, cursor definitions (with the is bit) can only occur in item_list_2, and variable declarations (with or without a :=) are part of the item_declaration set and can therefore only be in item_list_1.
Those facts make your code sample incorrect so, if it manages to compile, then either:
the syntax diagrams are wrong; or
the compiler doesn't follow them to the letter; or
you're looking at code that's covered by different syntax diagrams.
On that last bullet point, interestingly enough, the syntax diagrams for 11.1 are slightly different.
The declare section can be item_list_1 or item_list_2 or item_list_1 followed by item_list_2.
Where it gets interesting is that item_list_1 can have any number of item_declaration entries and this includes both variable_declaration and cursor_declaration.
In 11.1, a cursor_declaration covers both what 11.2 calls a declaration and a definition (in other words, there is no separate cursor_definition type, since cursor_declaration allows both).
So what you have is perfectly valid in 11.1, so the first thing I'd check is whether you're actually running 11.2 where that successful compilation is taking place.
It's still possible of course that you're running 11.2 and the syntax diagrams are wrong, in which case you should complain bitterly to Oracle but I don't know what sort of a response you'll get from a company whose flagship database product can't tell the difference between an empty varchar and a NULL (a).
(a) I'll never pass up an opportunity to mention this and advance the cause of my beloved DB2 :-)
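As for the parser itself: if the goal is to accept what the compiler accepts rather than what the 11.2 diagram literally says, the safer rule for a hand-written parser is the 11.1-style one, where the declare section is simply a sequence of declarations in which item declarations and cursor declarations/definitions may be freely interleaved. Here is a minimal recursive-descent sketch of that rule in TypeScript; the token-stream interface and helper names are invented for illustration, they are not taken from any particular parser:

// Hypothetical AST/token types, for illustration only.
type Declaration =
    | { kind: "item"; text: string }     // variable, constant, exception, type, ...
    | { kind: "cursor"; text: string };  // cursor declaration or definition

interface TokenStream {
    peekWord(): string | null;   // look at the next word without consuming it
    takeDeclaration(): string;   // consume one declaration up to its terminating ';'
}

// declare_section ::= { item_declaration | cursor_declaration }
// i.e. no ordering constraint between "item list 1" and "item list 2" entries.
function parseDeclareSection(tokens: TokenStream): Declaration[] {
    const declarations: Declaration[] = [];
    let word = tokens.peekWord();
    while (word !== null && word.toLowerCase() !== "begin") {
        if (word.toLowerCase() === "cursor") {
            // cursor declaration or definition, accepted in any position
            declarations.push({ kind: "cursor", text: tokens.takeDeclaration() });
        } else {
            // everything else is treated as an item declaration
            declarations.push({ kind: "item", text: tokens.takeDeclaration() });
        }
        word = tokens.peekWord();
    }
    return declarations;
}

Treating the ordering as free (or, if you care, emitting a warning rather than an error when a cursor definition precedes an item declaration) keeps the parser in line with what the compiler demonstrably accepts.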
Related
I am working on a project to automate COBOL analysis and generate a class diagram. I am developing it as a .NET console application. I need help tracking down the procedure name where the perform statement is used in the example below.
Z-POST-COPYRIGHT.
    move 0 to RETURN-CODE
    perform Z-WRITE-FILE
How do I track the procedure name Z-POST-COPYRIGHT under which the procedure Z-WRITE-FILE is called? The only idea I could think of in terms of COBOL is through indentation, as the procedure names are always indented. Ideally, in the database, the code should track the procedure name after the word 'perform' and the procedure under which it is called (in this case Z-POST-COPYRIGHT).
I assume you want to do this "on your own" without external tools (a faster approach can be found at the end).
You first have to "know" your source:
which compiler it was compiled with (get a manual for this compiler)
which options were used
Then you have to preparse the source:
include copybooks (doing the given REPLACING rules if any)
if the source is in fixed-form reference format: concatenate the contents of the last line and the current line if you find a - in column 7
check for REPLACE and change the result accordingly
remove all comments: in fixed-form reference format these are mostly lines with * or / in column 7 (extensions like "variable" format or "terminal" format exist), plus possibly inline comments *> or compiler-specific extensions like |; in free-form reference format only the inline forms are possible. Depending on the further re-engineering you want to do, it could be a good idea to extract the comments and store them, at least with a line number reference. (A small sketch of the fixed-form steps follows right after this list.)
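To make the fixed-form part of that pre-parsing concrete, here is a small sketch in TypeScript covering only the comment-stripping and line-continuation steps, under the assumption of standard fixed-form reference format (sequence area in columns 1-6, indicator in column 7, code in columns 8-72):

// Drop comment lines and join continuation lines in fixed-form reference format.
// Column 7 is index 6 in a zero-based string.
function preparseFixedForm(lines: string[]): string[] {
    const out: string[] = [];
    for (const raw of lines) {
        const indicator = raw.length > 6 ? raw[6] : " ";
        if (indicator === "*" || indicator === "/") {
            continue;                      // comment line: skip entirely
        }
        const code = raw.substring(7, 72); // keep area A + area B, drop columns 73+
        if (indicator === "-" && out.length > 0) {
            // Continuation line: append to the previous line.
            // (The real rules for continued literals are more involved than this.)
            out[out.length - 1] += code.replace(/^\s+/, "");
        } else {
            out.push(code);
        }
    }
    return out;
}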
Then you finally can track the procedure name with the following rule:
go backwards to the last separator period (there are more rules, but the rule "at least one line break, another period, a space, a comma or a semicolon" [I've never seen the last two in real code, but they are possible] should be enough)
check if there is only one word between this separator period and the next
if this word is not a reserved COBOL word (this depends on your compiler), it is very likely a procedure name
Start from here and check the output, then fine-tune the rule against actual false positives or missing entries.
If you want to do more than only extract the procedure names for PERFORM and GO TO (you should at least check the sources for PERFORM ... THRU), then this can turn into a lot of work... A rough sketch of the basic rule follows.
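This TypeScript sketch scans forward instead of backwards, but it applies the same idea: a single non-reserved word between two separator periods is taken as a procedure (paragraph) name, and every PERFORM target found afterwards is attributed to it. The reserved-word list is only a tiny placeholder; fill it from your compiler's manual.

// Tiny sample only - the real reserved-word list depends on your compiler.
const RESERVED = new Set([
    "PERFORM", "MOVE", "GO", "TO", "IF", "END-IF", "END-PERFORM",
    "UNTIL", "VARYING", "TIMES", "THRU", "THROUGH", "DISPLAY",
]);

// Given pre-parsed source lines, return [containing procedure, performed procedure]
// pairs for every simple "PERFORM name" found.
function findPerformTargets(lines: string[]): Array<[string, string]> {
    const results: Array<[string, string]> = [];
    let currentProcedure = "";
    for (const line of lines) {
        // A paragraph header: one word on its own, ending in a separator period.
        const header = line.match(/^\s*([A-Z0-9][A-Z0-9-]*)\s*\.\s*$/i);
        if (header && !RESERVED.has(header[1].toUpperCase())) {
            currentProcedure = header[1].toUpperCase();
            continue;
        }
        // PERFORM <name> - THRU targets and inline PERFORMs are ignored in this sketch.
        const perform = line.match(/\bPERFORM\s+([A-Z0-9][A-Z0-9-]*)/i);
        if (perform && !RESERVED.has(perform[1].toUpperCase())) {
            results.push([currentProcedure, perform[1].toUpperCase()]);
        }
    }
    return results;
}

// For the example in the question this yields [["Z-POST-COPYRIGHT", "Z-WRITE-FILE"]].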
Faster approach with external tools:
run a COBOL compiler on the complete sources and tell it to do the preparsing only - this way you have the big second point solved already
if you have the option: tell the compiler or an external tool to create a symbol table / cross reference - this will tell you in which line a procedure is and its name (you can simply find the correct procedure by comparing the line)
Just a note: You may want to check GnuCOBOL (formerly OpenCOBOL) for the preparsing and/or generation of symbol tables/cross-reference and/or printcbl for a completely external tool doing preparsing and/or cobxref for a complete cross reference generation.
I'm rebuilding a Lua to ES3 transpiler (a tool for converting Lua to cross-browser JavaScript). Before I put more ideas into this transpiler, I want to ask if it's possible to convert Lua labels to ECMAScript 3. For example:
goto label;
:: label ::
print "skipped";
My first idea was to separate each body of statements into parts, e.g., when there's a label, its following statements must be stored as an entire next part:
some body
label (& statements)
other label (& statements)
and so on. Every statement that has a body (or the program chunk) gets a list of parts like this. Each part belonging to a label should have the label's name stored somewhere (e.g., in its own part object, inside a property).
Each part would be a function, or would store a function on itself, to be executed sequentially in relation to the others.
A goto statement would look up its specific label to run its statements, and invoke an ES return statement to stop the current statements' execution.
The limitation of separating the body statements in this way is accessing the variables and functions defined in different parts... So, is there an idea or answer for this? Is it impossible to have stable labels when converting them to ECMAScript?
I can't quite follow your idea, but it seems someone already solved the problem: JavaScript allows labelled continues, which, combined with dummy while loops, permit emulating goto within a function. (And unless I forgot something, that should be all you need for Lua.)
Compare pages 72-74 of the ECMAScript spec ed. #3 of 2000-03-24 to see that it should work in ES3, or just look at e.g. this answer to a question about goto in JS. As usual on the 'net, the URLs referenced there are dead but you can get summerofgoto.com [archived] at the awesome Internet Archive. (Outgoing GitHub link is also dead, but the scripts are also archived: parseScripts.js, goto.min.js or goto.js.)
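In case those links rot as well, the trick itself is small enough to show inline. Here is a hand-written sketch of what transpiled output could look like in ES3-style JavaScript (valid TypeScript too); only the label name comes from your snippet, the rest is made up:

// Forward jump: "goto label" over some statements becomes a labelled break
// out of a dummy do-while(false).
function forwardExample() {
    label: do {
        break label;   // goto label;
        // (statements between the goto and ::label:: would sit here and be skipped)
    } while (false);
    // ::label::
    console.log("skipped");   // print "skipped"
}

// Backward jump: a label above the goto becomes a labelled while(true),
// and "goto label" becomes a labelled continue.
function backwardExample() {
    var i = 0;
    label: while (true) {     // ::label::
        i = i + 1;
        if (i < 3) {
            continue label;   // goto label;
        }
        break;                // leave the dummy loop when no goto is taken
    }
    console.log(i);           // 3
}

Since a Lua goto can only target a visible label in the same or an enclosing block (it can never jump into a block or out of the function), a transpiler can wrap the relevant block in a dummy loop like this for each label it encounters.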
I hope that's enough to get things running, good luck!
The goto statement is taboo at my work.
So the following question is born...
Is there a situation possible where a goto is the only valid solution?
Originally GOTO was added to Pascal for error handling, including interprocedural forms that Borland(/Embarcadero) never implemented (example: GOTOing from an inner procedure to the parent), just like Borland never implemented other inner-function functionality, like passing inner functions to procedure-typed parameters.(*)
In that way GOTO can be considered the precursor to exceptions.
There are still some practical uses: the last time I checked, jumping out of a nested IF statement with goto was still faster in Delphi than letting the code exit from a nested if naturally.
Optimizations like these are sometimes used in e.g. compression code, and other complex tree processing code with deeply nested loops or conditional statements.
Such routines often still use goto for error handling, because it is faster (exceptions are not only slow, but their border conditions inhibit some optimizations).
One could see this as part of the plain Pascal level of Object Pascal, just like C++ still allows plain C nearly completely.
(of course, since the optimized compression code in Delphi is only delivered in .o form, it is hard to find examples in the Delphi codebase. The JPEG code has some, but that is a C translation)
(*) Original Pascal, and IIRC even Turbo Pascal, don't allow prematurely exiting a procedure with EXIT. The same goes for CONTINUE and BREAK.
Is there a situation possible where a GOTO is the only valid solution?
I suppose it depends on what you mean by valid. I suppose you are asking if there exists a program that can only be written with the use of the goto statement. In which case the answer is that there is no such program. Delphi is Turing complete with or without the goto statement.
However, if we are prepared to widen the discussion to include other languages, there are situations where goto is a good solution, even the best solution. The scenario that most commonly comes to mind is implementing tidy-up and error handling in languages without structured exception handling. If you peruse the Linux source code you will find that goto is widely used. I expect that the same is true of the Windows source code.
Goto is very old. It predates sub-routines like functions and procedures! It is also very dangerous and can make your code less readable (to others, or to yourself a few months later).
In theory it's not possible to have a situation where goto is required. I won't repeat the theory about Turing machines here, but using selection and iteration you can restructure the code so that it produces the same output for all possible input values.
In practice though, it's sometimes 'handy' and 'more readable' to 'jump away' from the flow of code under certain conditions, and that's where exceptions come in. raise breaks away from the current execution and jumps to the closest finally or except section. This is safer because exceptions cascade and provide a better way to handle the context in case of one of these border conditions. (And there's also break, abort and exit.)
GOTO is never necessary. Any computable algorithm can be expressed with assignment and the combination of IF...THEN, BEGIN...END, and your choice of WHILE...DO...END or REPEAT...UNTIL. You don't even need subroutines. :)
This is known as the structured program theorem.
For a proof, see the 1966 paper, Flow Diagrams, Turing Machines and Languages with Only Two Formation Rules (PDF) by Corrado Böhm and Giuseppe Jacopini.
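To see what that theorem's construction looks like in practice, here is a tiny made-up flow of labelled blocks rewritten without goto as a single loop driven by a state variable (TypeScript rather than Delphi, but the shape is the same in any language):

// Simulates:  start: total := total + i; i := i + 1; goto check
//             check: if i > limit then goto done else goto start
// ...using only assignment, a loop and a conditional.
function sumUpTo(limit: number): number {
    let total = 0;
    let i = 0;
    let state: "start" | "check" | "done" = "start";
    while (state !== "done") {
        switch (state) {
            case "start":                             // block 1
                total += i;
                i += 1;
                state = "check";                      // was: goto check
                break;
            case "check":                             // block 2
                state = i > limit ? "done" : "start"; // was: goto done / goto start
                break;
        }
    }
    return total;
}

// sumUpTo(3) === 6   (0 + 1 + 2 + 3)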
Something like 15 years ago I used the goto statement in Delphi to convert one of Bob Jenkins's hash functions from C to Pascal. The C function has a switch() statement without breaks after each case, and you can't do that with Pascal's case statement. So I converted it into a bunch of Pascal labels and gotos. I guess you would still have to do it the same way with the newest Delphi versions.
Edit: I guess using gotos would still be a reasonable way to do this. Gets the job done, easy to understand, limited to a short block of code, not dangerous.
Clarification (sorry, the question was not specific): They both try to convert the item on the stack to a lua_Number. lua_tonumber will also convert a string that represents a number. How does luaL_checknumber deal with something that's not a number?
There's also luaL_checklong and luaL_checkinteger. Are they the same as (int)luaL_checknumber and (long)luaL_checknumber respectively?
The reference manual does answer this question. I'm citing the Lua 5.2 Reference Manual, but similar text is found in the 5.1 manual as well. The manual is, however, quite terse. It is rare for any single fact to be restated in more than one sentence. Furthermore, you often need to correlate facts stated in widely separated sections to understand the deeper implications of an API function.
This is not a defect, it is by design. This is the reference manual to the language, and as such its primary goal is to completely (and correctly) describe the language.
For more information about "how" and "why" the general advice is to also read Programming in Lua. The online copy is getting rather long in the tooth as it describes Lua 5.0. The current paper edition describes Lua 5.1, and a new edition describing Lua 5.2 is in process. That said, even the first edition is a good resource, as long as you also pay attention to what has changed in the language since version 5.0.
The reference manual has a fair amount to say about the luaL_check* family of functions.
Each API entry's documentation block is accompanied by a token that describes its use of the stack and the conditions (if any) under which it will throw an error. Those tokens are described in section 4.8:
Each function has an indicator like this: [-o, +p, x]
The first field, o, is how many elements the function pops from the stack. The second field, p, is how many elements the function pushes onto the stack. (Any function always pushes its results after popping its arguments.) A field in the form x|y means the function can push (or pop) x or y elements, depending on the situation; an interrogation mark '?' means that we cannot know how many elements the function pops/pushes by looking only at its arguments (e.g., they may depend on what is on the stack). The third field, x, tells whether the function may throw errors: '-' means the function never throws any error; 'e' means the function may throw errors; 'v' means the function may throw an error on purpose.
At the head of Chapter 5, which documents the auxiliary library as a whole (all functions in the official API whose names begin with luaL_ rather than just lua_), we find this:
Several functions in the auxiliary library are used to check C function arguments. Because the error message is formatted for arguments (e.g., "bad argument #1"), you should not use these functions for other stack values.
Functions called luaL_check* always throw an error if the check is not satisfied.
The function luaL_checknumber is documented with the token [-0,+0,v] which means that it does not disturb the stack (it pops nothing and pushes nothing) and that it might deliberately throw an error.
The other functions that have more specific numeric types differ primarily in function signature. All are described similarly to luaL_checkint() "Checks whether the function argument arg is a number and returns this number cast to an int", varying the type named in the cast as appropriate.
The function lua_tonumber() is described with the token [-0,+0,-] meaning it has no effect on the stack and does not throw any errors. It is documented to return the numeric value from the specified stack index, or 0 if the stack index does not contain something sufficiently numeric. It is documented to use the more general function lua_tonumberx() which also provides a flag indicating whether it successfully converted a number or not.
It too has siblings named with more specific numeric types that do all the same conversions but cast their results.
Finally, one can also refer to the source code, with the understanding that the manual is describing the language as it is intended to be, while the source is a particular implementation of that language and might have bugs, or might reveal implementation details that are subject to change in future versions.
The source to luaL_checknumber() is in lauxlib.c. It can be seen to be implemented in terms of lua_tonumberx() and the internal function tagerror() which calls typerror() which is implemented with luaL_argerror() to actually throw the formatted error message.
They both try to convert the item on the stack to a lua_Number. lua_tonumber will also convert a string that represents a number. luaL_checknumber throws a (Lua) error when it fails a conversion - it long-jumps and, from the point of view of the C function, never returns. lua_tonumber merely returns 0 (which can also be a valid result). So you could write the following code, which should be faster than checking with lua_isnumber first.
double r = lua_tonumber(_L, idx);
if (r == 0 && !lua_isnumber(_L, idx))
{
    /* 0 can mean either a real zero or "not convertible"; only in that
       case do we pay for the explicit type check. */
    // Error handling code
}
return r;
I'm developing an application that parses COBOL programs. Some of these programs respect the traditional coding style (program text from column 8 to 72), and some are newer and don't follow this style.
In my application I need to determine the coding style in order to know if I should parse content after column 72.
I've been able to determine whether a program starts at column 1 or 8, but programs that start at column 1 can also follow the rule of comments after column 72.
So I'm trying to find rules that will allow me to determine whether the text after column 72 is comments or valid code.
I've found some, but it's hard to tell if they will work every time:
a dot after column 72 determines the end of a sentence, but I fear a dot can appear in comments too
find the closing character of a statement after column 72: " ' ) }
look at the characters in columns 71, 72 and 73; if there is no space, find the whole word and check whether it's a keyword or a variable. Problem: it can be a variable from a COPY or a replacement, etc.
I'd like to know what you think of these rules, and whether you have any ideas to help me determine the coding style of a COBOL program.
I don't need an API or anything like that, just solid rules that I can rely on.
I think you need to know the COBOL compiler for each program. Its documentation should tell you what conventions/configurations/switches it uses to decide if the source code ends at column 72 or not.
So.... which compiler(s)?
And if you think the column 72 issue is a pain, wait till you get around to actually parsing the COBOL itself. If you are not well prepared to handle the lexical issues of the language, you are probably very badly prepared to handle the syntactic ones.
There is no absolutely reliable way to determine if a COBOL program is in fixed or free format based only on the source code. Heck, it is sometimes difficult to identify the programming language based only on source code. Check out this classic polyglot - it is valid under 8 different language compilers. That said, you could try a few heuristics that might yield the correct answer more often than not.
Compiler directives imbedded in source code
Watch for certain compiler directives that determine code format. Unfortunately, every compiler vendor uses their own flavour of directive. For example, Micro Focus COBOL uses the SOURCEFORMAT directive. This directive will appear near the top of the program, so a short pre-scan could be used to find it. On the other hand, OpenCOBOL uses >>SOURCE FORMAT IS FREE and >>SOURCE FORMAT IS FIXED to toggle between free and fixed format; different parts of the same program could even be formatted differently!
The bottom line here is that you will have to support the conventions of multiple COBOL compilers.
Compiler switches
Source code format can also be specified using a compiler switch. In this case, there are no concrete clues to go on. However, you can be reasonably sure that the entire source program will be either fixed or free. All you can do here is guess. Unless the programmer is out to "mess with your head" (and some will), a program in free format will have the keywords IDENTIFICATION DIVISION or ID DIVISION starting before column 8. Every COBOL program will begin with these keywords, so you can use them as the anchor point for determining code format in the absence of imbedded compiler directives.
Warning: this is far from foolproof, but it might be a good start.
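Here is a sketch of those two checks combined, in TypeScript. The directive spellings below are assumptions covering only the Micro Focus-style SOURCEFORMAT directive and the OpenCOBOL/GnuCOBOL >>SOURCE FORMAT form mentioned above; other compilers will need their own patterns.

// Guess "free" or "fixed", or return null when nothing recognizable is found.
function guessSourceFormat(lines: string[]): "free" | "fixed" | null {
    for (const line of lines) {
        // 1. An embedded directive wins outright.
        if (/>>\s*SOURCE\s+FORMAT\s+(IS\s+)?FREE/i.test(line)) return "free";
        if (/>>\s*SOURCE\s+FORMAT\s+(IS\s+)?FIXED/i.test(line)) return "fixed";
        if (/\$\s*SET\s+SOURCEFORMAT\s*"?FREE"?/i.test(line)) return "free";   // Micro Focus style (check exact spelling)
        if (/\$\s*SET\s+SOURCEFORMAT\s*"?FIXED"?/i.test(line)) return "fixed"; // Micro Focus style (check exact spelling)
        // 2. Otherwise use IDENTIFICATION/ID DIVISION as the anchor point:
        //    free-format code typically starts it before column 8 (zero-based index 7).
        const at = line.search(/\b(IDENTIFICATION|ID)\s+DIVISION\b/i);
        if (at >= 0) {
            return at < 7 ? "free" : "fixed";
        }
    }
    return null;
}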
There won't be an algorithm to do this with 100% certainty, because if comments can be anything, they can also be compilable COBOL code. So you could theoretically write a program that means one thing if the comments are ignored, and something else entirely if the comments are treated as part of the COBOL.
But that's extremely unlikely. What's most likely to happen is that if you try to compile the code under the wrong convention, it will simply fail. So the only accurate way to do this is to try compiling/parsing the program one way, and if you come to a line that can't make sense, switch to the other style. You could also support passing an argument to the compiler when the style is already known.
You can try using heuristics like what you've described, but that will never be totally accurate. The most they can give you is a probability that the code is one or the other style, which will increase as they examine more and more lines of code. They could be useful for helping you guess the style before you start compiling, or for figuring out when the problem is really just a typo in the code.
EDIT:
Regarding ideas for heuristics, it's hard to say. If there were a standard comment sigil like // or # in other languages, this would be a lot easier (actually, there is, but it sounds like your code doesn't follow this convention). The only thing I can think of would be to check whether every line (or maybe 99% of lines, and not counting empty lines or lines commented with *) has a period somewhere before position 72.
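For what it's worth, that last check is only a few lines. This TypeScript sketch is an assumption-laden illustration (the 99% threshold and the column arithmetic are arbitrary; adjust to taste):

// True when nearly every non-blank, non-comment line already has a period
// before column 72 - a hint that columns 73+ are sequence numbers or comments.
function periodsEndBeforeColumn72(lines: string[], threshold = 0.99): boolean {
    // '*' in column 7 (zero-based index 6) marks a fixed-form comment line.
    const candidates = lines.filter(l => l.trim() !== "" && l.charAt(6) !== "*");
    if (candidates.length === 0) return false;
    // "Has a period somewhere before position 72" = within the first 71 columns.
    const early = candidates.filter(l => l.slice(0, 71).indexOf(".") >= 0).length;
    return early / candidates.length >= threshold;
}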
One thing you DON'T want to do is apply any heuristics to the part after position 72. That is, you don't want to be checking the comments to see if they're valid COBOL. You want to check what you know is COBOL first, and see if that works by itself. There are several reasons for this:
Comments written in English are likely to have periods and quotes in them, so your first and second bullet points are out.
Natural languages are WAY harder to parse than something like COBOL.
The comments could easily have COBOL in them (maybe someone commented out the previous version of the line).
An important rule for comments is that they should never affect what the program does. If changing the comments can change how the program is compiled, you violate that.
All that in mind, my opinion is that you shouldn't use heuristics at all. You should always try to compile the program under both conventions unless one is explicitly specified. There's a chance that code will compile successfully under both conventions, and then you'll have two different programs and no way to tell which one is correct.
If that happens, you need to compare the two results (perhaps with a hash or something) to see if they're the same program. If they're the same, great, but if not, you'll need to force the user to explicitly choose a convention.
Most COBOL compilers will allow you to generate and analyze the output of the text manipulation phase.
The text preprocessor output can be seen with (using OpenCOBOL for the example):
cobc -E program.cob
The text manipulation processor deals with any COPY ... REPLACING compiler directives, as well as converting SOURCE FORMAT IS FIXED (with line continuations, string literal concatenations, comment line removal, among other things) to the actual free format that the compiler lexical analyzer needs. A lot of the OpenCOBOL toolkits (Cross referencer and Animator, to name two) use source code AFTER the preprocessor pass. I don't think you'll lose any street cred if your parser program relies on post processed source code files.