While investigating the poor runtime performance of some PureScript code I wrote, I noticed that the generated JavaScript contains dictionary passing for overloaded definitions, e.g. for transformer monads. However, all my exported functions are monomorphic, so the compiler should have no trouble specializing these overloaded occurrences.
If this were Haskell compiled with GHC, I'd check the optimization flags, make sure nothing is marked NOINLINE, and confirm that unfoldings are properly exposed for specialisable definitions. What are the equivalent techniques for PureScript?
PureScript v0.15.3 has an optimization for “common subexpression elimination for expressions created by the compiler in the process of creating and using typeclass dictionaries.” Maybe that will speed up your code.
I'm playing with F# and the compiler warns me if I don't use some result (same problem described here). Since F# even has the function ignore for that, it seems to be somewhat important, but I don't really understand why. Why does F# care about this when C# doesn't?
One fundamental difference between C# and F# is that in F# everything is an expression (as opposed to a mix of expressions and statements). This includes things that in C-style languages are statements, like control flow constructs.
When programming in a functional way, you want to have small pieces of referentially transparent code that you can compose together. The fact that everything is an expression plays right into that.
On the other hand, when you do something that gives you a value, and you just leave it there, you are going against that mindset. You are either doing it for some side-effect or you simply have a piece of left-over code somewhere. In either case it's fair game to warn you that you're doing something atypical.
F# discourages, but doesn't disallow, side effects, and lets you have (potentially side-effecting) expressions executed in a sequence, as long as the intermediate ones are of type unit. And this is what ignore does: it takes an argument and returns unit.
In F#, almost everything is an expression with a value.
If you neglect the value of an expression in F# by failing to either bind it or return it, then it feels like you're making a mistake. Ignoring the value of an expression is an indication that you're depending on the side effect of an operation, and in F# you should be eschewing side effects.
I have precise and validated descriptions of the behaviors of many X86 instructions in terms amenable to encoding in QF_ABV and solving directly with the standard solver (using no special solving strategies). I wrote an SMT-LIB script whose interface matches my ultimate goal perfectly:
X86State, a record sort describing x86 machine state (registers and flags as bitvectors, and memory as an array).
X86Instr, a record sort describing x86 instructions (enumerated mnemonics, operands as an ML-like discriminated union describing registers, memory expressions, etc.)
A function x86-translate taking an X86State and an X86Instr, and returning a new X86State. It decodes the X86Instr and produces a new X86State in terms of the symbolic effects of the given X86Instr on the input X86State.
It's great for prototyping: the user can write x86 easily and directly. After simplifying a formula built using the library, all functions and extraneous data types are eliminated, leaving a QF_ABV expression. I hoped that users could simply (set-logic QF_ABV) and #include my script (alas, neither the SMT-LIB standard nor Z3 supports #include).
Unfortunately, by defining functions and types, the script requires theories such as uninterpreted functions, thus requiring a logic other than QF_ABV (or even QF_AUFBV, due to the types). My experience with SMT solvers dictates that the lowest acceptable logic should be specified for the best solving time. Also, it is unclear whether I can reuse my SMT-LIB script in a programmatic context (e.g. OCaml, Python, C) as I desire. Finally, the script is a bit verbose given the lack of higher-order functions, and my lack of access to par leads to code duplication.
Thus, despite having accomplished my technical goals, I think that SMT-LIB might be the wrong approach. Is there a more natural avenue for interacting with Z3 to implement my x86 instruction description / QF_ABV translation scheme? Is the SMT-LIB script re-usable at all in these avenues? For example, you can build "custom OCaml top-levels", i.e. interpreters with scripts "burned into them". Something like that could be nice. Or do I have to re-implement the functionality in another language, in a program that interacts with Z3 via a theory extension (C DLL)? What's the best option here?
Well, I don't think that people write .smt2 files by hand. These are usually generated automatically by some program.
I find the Z3 Python interface quite nice, so I guess you could give it a try. But you can always write a simple .smt2 dumper from any language.
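To give a flavor of what that might look like, here is a minimal sketch (not the asker's actual script) of modelling a tiny sliver of x86 state with the Z3 Python API instead of SMT-LIB record sorts. The helper name add_r64 and the choice to model only two registers are illustrative assumptions:

# Minimal sketch: a symbolic register-to-register ADD via the Z3 Python API.
# Names like add_r64 are invented for illustration; flags and memory are omitted.
from z3 import BitVec, BitVecVal, Solver, simplify, sat

def add_r64(dst, src):
    # Symbolic effect of a 64-bit ADD; z3 overloads + as bit-vector addition.
    return dst + src

rax, rbx = BitVec('rax', 64), BitVec('rbx', 64)   # symbolic initial state
rax_after = add_r64(rax, rbx)                     # "translate" one instruction
print(simplify(rax_after))

# Example query: can the sum wrap to zero even though both inputs are non-zero?
s = Solver()
s.add(rax != 0, rbx != 0, rax_after == BitVecVal(0, 64))
if s.check() == sat:
    print(s.model())

Because the instruction decoding happens on the Python side, the formula handed to the solver contains only bit-vector operations, which preserves the "everything simplifies down to QF_ABV" property the question describes.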
BTW, do you plan on releasing the specification you wrote for x86? I would be really interested!
Reading the source of the Array2D module, I stumbled upon this interesting construct in the implementation of many core functions, for example:
[<CompiledName("Get")>]
let get (array: 'T[,]) (n:int) (m:int) = (# "ldelem.multi 2 !0" type ('T) array n m : 'T #)
I can only assume that this is the syntax to inline CIL, used here obviously to gain performance benefits. However, when I tried to use this syntax in my own program, I got a warning:
warning FS0042: This construct is deprecated: it is only for use in the F# library
What exactly is this? Is there any detailed documentation?
I think that this has 2 purposes:
These functions compile down to exactly one CIL instruction, which has to be encoded somewhere, so encoding it at the source seems best.
It allows for some extra trickery with defining polymorphic Add functions in a high performance way which is hard with the F# type system.
You can actually use this, but you have to specify the --compiling-fslib (undocumented) and --standalone flags when compiling your code.
I found some details in the Usenet archives: http://osdir.com/ml/lang.fsharp.general/2008-01/msg00009.html
Embedded IL in F# code. Is this feature officially supported?
Not really. The 99.9% purpose of this feature is for operations defined in FSharp.Core.dll (called fslib.dll in 1.9.2.9 and before).
Historically it has been useful to allow end-users to embed IL in order to access .NET IL functionality not accessible by F# library or language constructs using their own embedded IL. The need for this is becoming much more rare, indeed almost non-existent, now that the F# library has matured a bit more. We expect this to continue to be the case. It's even possible that we will make this a library-only feature in the "product" version of F#, though we have not yet made a final decision in this regard.
This was a message from Don Syme, dated January of 2008.
Is such an error (e.g. the use of an undeclared identifier) produced during type checking, or while the input is being parsed?
Under which category should the error be classified?
The way I see it, it is a semantic error, because your language parses just fine even though you are using an identifier which you haven't previously bound -- i.e. syntactic analysis only checks the program for well-formedness. Semantic analysis actually checks that your program has a valid meaning -- e.g. bindings, scoping or typing. As @pst said, you can do scope checking during parsing, but this is an implementation detail. AFAIK old compilers used to do this to save some time and space, but I think today such an approach is questionable unless you have some hard performance/memory constraints.
The program conforms to the language grammar, so it is syntactically correct. A language grammar doesn't contain any statements like 'the identifier must be declared', and indeed doesn't have any way of doing so. An attempt to build a two-level grammar along these lines failed spectacularly in the Algol-68 project, and it has not been attempted since to my knowledge.
The meaning, if any, of such a program is a semantic issue. Frank deRemer called issues like this 'static semantics'.
In my opinion, this is not strictly a syntax error, nor a semantic one. If I were to implement this for a statically typed, compiled language (like C or C++), then I would not put the check into the parser (because the parser is practically incapable of checking for this mistake), but rather into the code generator (the part of the compiler that walks the abstract syntax tree and turns it into assembly code). So in my opinion, it lies between syntax and semantic errors: it's a syntax-related error that can only be checked by performing semantic analysis on the code.
If we consider a primitive scripting language, however, where the AST is directly executed (without compilation to bytecode and without JIT), then it's the evaluator/executor function itself that walks the AST and finds the undeclared variable; in this case, it will be a runtime error. The difference lies in the "AST_walk()" routine being run at different points in the program's lifecycle (compile time versus runtime), depending on whether the language is compiled or interpreted.
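Python (which compiles to bytecode rather than walking the AST directly, but behaves the same way for this purpose) makes the runtime-error case concrete; the snippet below is only an illustration, not tied to any particular language discussed above:

# An undeclared name is not rejected at parse/compile time in Python;
# it is only reported when the offending line actually runs.
source = """
def f():
    return undeclared_variable + 1   # never declared anywhere

print("parsed and compiled fine")
"""

code = compile(source, "<example>", "exec")  # no complaint here
namespace = {}
exec(code, namespace)                        # defines f and prints the message
try:
    namespace["f"]()                         # execution reaches the bad name
except NameError as e:
    print("runtime error:", e)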
In the case of languages -- and there are many -- which require identifiers to be declared, a program with undeclared identifiers is ill-formed and thus a missing declaration is clearly a syntax error.
The usual way to deal with this is to incorporate information about symbols into a symbol table, so that the parser can use this information.
Here are a few examples of how identifier type affects parsing:
C / C++
A classic case:
(a)-b;
Depending on a, that's either a cast or a subtraction:
#include <stdio.h>
#if TYPEDEF
typedef double a;
#else
double a = 3.0;
#endif
int main() {
    int b = 3;
    printf("%g\n", (a)-b);
    return 0;
}
Consequently, if a hadn't been declared at all, the compiler would have to reject the program as syntactically ill-formed (and that is precisely the word the standard uses).
XML
This one is simple:
<block>Hello, world</blob>
That's ill-formed XML, but it cannot be detected with a CFG. (Nonetheless, all XML parsers will correctly reject it as ill-formed.) In the case of HTML/SGML, where end-tags may be omitted under some well-defined circumstances, parsing is trickier but nonetheless deterministic; again, the precise declaration of a tag will determine the parse of a valid input, and it's easy to come up with inputs which parse differently depending on declaration.
English
OK, not a programming language. I have lots of other programming language examples, but I thought this one might trigger some other intuitions.
Consider the two grammatically correct sentences:
The sheep is in the meadow.
The sheep are in the meadow.
Now, what about:
The cow is in the meadow.
(*) The cow are in the meadow.
The second sentence is intelligible, albeit ambiguous (is the noun or the verb wrong?), but it is certainly not grammatically correct. But in order to know that (and other similar examples), we have to know that sheep has an unmarked plural. Indeed, many animals have unmarked plurals, so I recognize all the following as grammatical:
The caribou are in the meadow.
The antelope are in the meadow.
The buffalo are in the meadow.
But definitely not:
(*) The mouse are in the meadow.
(*) The bird are in the meadow.
etc.
It seems that there is a common misconception that because the syntactic analyzer uses a context-free grammar parser, syntactic analysis is restricted to parsing a context-free grammar. This is simply not true.
In the case of C (and family), the syntax analyzer uses a symbol table to help it parse. In the case of XML, it uses the tag stack, and in the case of generalized SGML (including HTML) it also uses tag declarations. Consequently, the syntax analyzer considered as a whole is more powerful than the CFG, which is just a part of the analysis.
The fact that a given program passes the syntax analysis does not mean that it is semantically correct. For example, the syntax analyzer needs to know whether a is a type or not in order to correctly parse (a)-b, but it does not need to know whether the cast is in fact possible, in the case that a is a type, or whether a and b can meaningfully be subtracted, in the case that a is a variable. These verifications can happen during type analysis after the parse tree is built, but they are still compile-time errors.
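A minimal toy sketch (nothing like a real C parser, and every name in it is invented) of how a parser can consult a declaration table to disambiguate the (a)-b token sequence:

# Toy illustration of "the parser consults a symbol table": the same tokens
# ( a ) - b parse as a cast or as a subtraction depending on how `a` was
# declared, and are rejected outright if `a` was never declared at all.
def parse_expression(tokens, symbols):
    if tokens[0] == "(" and tokens[2] == ")":
        name = tokens[1]
        if name not in symbols:
            raise SyntaxError(f"{name!r} is not declared")   # the "ill-formed" case
        if symbols[name] == "type":
            return ("cast", name, ("neg", tokens[4]))        # cast of -b to type a
        return ("sub", ("var", name), ("var", tokens[4]))    # parenthesised a minus b
    raise SyntaxError("unexpected input")

tokens = ["(", "a", ")", "-", "b"]
print(parse_expression(tokens, {"a": "type", "b": "var"}))   # parses as a cast
print(parse_expression(tokens, {"a": "var", "b": "var"}))    # parses as a subtraction
try:
    parse_expression(tokens, {"b": "var"})                   # a was never declared
except SyntaxError as e:
    print("rejected:", e)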
By concept/function/implementation, what are the differences between compilers and parsers?
A compiler is often made up of several components, one of which is a parser.
A common set of components in a compiler is (a toy sketch of these stages follows the list):
Lexer - break the program up into words.
Parser - check that the syntax of the sentences is correct.
Semantic Analysis - check that the sentences make sense.
Optimizer - edit the sentences for brevity.
Code generator - output something with equivalent semantic meaning using another vocabulary.
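A hedged, end-to-end toy sketch of those stages for expressions like "1 + 2 + 3" (every function here is invented purely for illustration; real compilers are of course far richer):

# Toy pipeline: lexer -> parser -> semantic check -> optimizer -> code generator.
def lex(source):
    # Lexer: break the text into "words" (whitespace-separated tokens).
    return source.split()

def parse(tokens):
    # Parser: check the sentence shape "digit (+ digit)*" and build an AST.
    if not tokens or len(tokens) % 2 == 0:
        raise SyntaxError("expected: digit (+ digit)*")
    tree = ("num", int(tokens[0]))
    for op, num in zip(tokens[1::2], tokens[2::2]):
        if op != "+":
            raise SyntaxError(f"unexpected operator {op!r}")
        tree = ("add", tree, ("num", int(num)))
    return tree

def check(tree):
    # Semantic analysis: nothing to verify in this tiny language.
    return tree

def optimize(tree):
    # Optimizer: constant-fold additions of two known numbers.
    if tree[0] == "add":
        left, right = optimize(tree[1]), optimize(tree[2])
        if left[0] == "num" and right[0] == "num":
            return ("num", left[1] + right[1])
        return ("add", left, right)
    return tree

def codegen(tree):
    # Code generator: emit instructions for an imaginary stack machine.
    if tree[0] == "num":
        return [("PUSH", tree[1])]
    return codegen(tree[1]) + codegen(tree[2]) + [("ADD", None)]

print(codegen(optimize(check(parse(lex("1 + 2 + 3"))))))
# [('PUSH', 6)]  -- the optimizer folded the whole expression to a constant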
To add a little bit:
As mentioned elsewhere, Small C is a recursive descent compiler that generated code as it parsed: basically syntactic analysis, semantic analysis, and code generation in one pass. As I recall, it also lexed within the parser.
A long time ago, I wrote a C compiler (actually several: the Introl-C family for microcontrollers) that used recursive descent and did syntax and semantic checking during the parse and produced a tree representation of the program from which code was generated.
Today, I'm working on a compiler that does source -> tokens -> AST -> IR -> code, pretty much as I described above.
A parser just reads a text into an internal, more abstract representation, often a tree or graph of some sort.
A compiler translates such an internal representation into another format. Most often this means converting source code into executable programs. But the target doesn't have to be machine code. It can be another programming language as well; the compiler would still be a compiler. Obviously a compiler needs a parser to actually read its input.
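Python's standard library happens to make this split directly visible, so here is a small illustrative example (the source string is arbitrary): ast.parse produces the internal tree representation, and compile translates that tree into another format, namely bytecode.

# Parser vs. compiler, using Python's own tooling (3.9+ for ast.dump's indent).
import ast
import dis

tree = ast.parse("x = 1 + 2")              # parser: source text -> AST
print(ast.dump(tree, indent=2))            # the internal tree representation

code = compile(tree, "<example>", "exec")  # compiler: AST -> bytecode
dis.dis(code)                              # the "other format" being targeted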
A compiler always has a parser inside. The parser just processes the language and returns a tree representation of it; the compiler then generates something from that tree, either actual machine code or another language.
A parser is one element of a compiler.
Are you looking for the differences between an interpreter and a compiler?
A parser takes in raw data and parses it into a tree structure. This syntax tree is then passed on to a generator, which turns it into whatever it is supposed to generate.
So, a parser is a part of a compiler.
In general, a parser is a part of a compiler, but a compiler is designed to convert the received source, generally into machine-readable code or sometimes into another language.
A compiler is a special type of computer program that translates a human-readable text file into a form that the computer can more easily understand. At its most basic level, a computer can only understand two things: a 1 and a 0. At this level, a human would operate very slowly and find the information contained in the long string of 1s and 0s incomprehensible. A compiler is a computer program that bridges this gap.
A parser is a piece of software that evaluates the syntax of a script when it is executed on a web server. For scripting languages used on the web, the parser works like a compiler might work in other types of application development environments. Parsers are commonly used in script development because they can evaluate code when the script is executed and do not require that the code be compiled first.