Data and functions in Rascal can be scattered across different source files, and when imported they are merged accordingly. In other words, Rascal supports open data and open functions. So does Rascal solve the expression problem? Was it designed to do so?
I think writing that Rascal "solves" the expression problem is a bit strong, but you could say that you can easily write openly extensible implementations of expression grammars in it. It was designed exactly for this; see http://www.rascal-mpl.org/from-functions-to-term-rewriting-and-back/
On the one hand, one can write programs which do not suffer from the expression problem in Rascal, precisely for the reason you said: both data and functions are openly extensible, and they work together via dynamic dispatch through pattern matching.
On the other hand, it is pretty easy to write non-extensible implementations in Rascal as well, in particular when using the current visit or switch statements, which are not openly extensible. Also, if you write a set of mutually recursive functions, it may be pretty hard to extend them in an unforeseen manner. We are also working on language features to cover extending those kinds of designs. That is for the future.
Using non-deterministic functions is unavoidable in applications that talk to the real world. Making a clear separation between deterministic and non-deterministic code is important.
Haskell has the IO monad, which marks the impure context; by looking at it we know that everything outside of it is pure. Which is nice, if you ask me: when it comes to unit testing, one can tell which parts of the code are ultimately testable and which are not.
I could not find anything that allows separating the two in F#. Does that mean there is just no way to do it?
The distinction between deterministic and non-deterministic functions is not captured by the F# type system, but a typical F# system that needs to deal with non-determinism would use some structure (or "design pattern") that clearly separates the two.
If your core model is some computation that does not interact with the world (you only need to collect inputs and run the computation), then you can write most of your code as functional transformations on immutable data structures and then invoke these from some "main" I/O loop.
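A minimal sketch of that shape (the domain data and function names here are made up for illustration):

// Pure core: a transformation over immutable data, trivially unit-testable.
let summarize (orders : (string * decimal) list) =
    orders
    |> List.groupBy fst
    |> List.map (fun (customer, items) -> customer, items |> List.sumBy snd)

// Impure shell: the "main" I/O loop collects input and prints the results.
[<EntryPoint>]
let main _ =
    let orders = [ "alice", 10.0m; "bob", 5.0m; "alice", 2.5m ]  // pretend this came from a file or a database
    summarize orders |> List.iter (fun (customer, total) -> printfn "%s: %M" customer total)
    0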
If you're writing a highly interactive or reactive application, then you can use F# agents (here is an introductory article) and structure your application so that the non-determinism is safely contained in individual agents (see more about agent-based architectures).
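A tiny agent sketch, just to show the shape (the message type here is invented):

type CounterMsg =
    | Add of int
    | Get of AsyncReplyChannel<int>

// All state changes and timing non-determinism live inside the agent's loop.
let counter =
    MailboxProcessor.Start(fun inbox ->
        let rec loop total = async {
            let! msg = inbox.Receive()
            match msg with
            | Add n -> return! loop (total + n)
            | Get reply ->
                reply.Reply total
                return! loop total }
        loop 0)

counter.Post(Add 5)
counter.Post(Add 3)
printfn "%d" (counter.PostAndReply Get)  // prints 8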
F# is based on OCaml, and much like OCaml it isn't pure FP. I don't believe there is a way to accomplish your goal in either language.
One way to manage it could be to make up a nominal type that represents the notion of the real world and make sure each non-deterministic function takes its singleton as a parameter. This way all dependent functions have to pass it along down the line. This makes a strong distinction between the two at the cost of some discipline and a bit of extra typing. The good thing about this approach is that it can be verified by the compiler, given the necessary conditions are met.
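A minimal sketch of that idea (the World type and the primed function names are hypothetical):

// Hypothetical marker type for "the real world"; only one instance can ever exist.
type World private () =
    static member val Instance = World()

// Non-deterministic functions must take the World token as a parameter.
let readLine' (_ : World) = System.Console.ReadLine()
let printLine' (_ : World) (s : string) = System.Console.WriteLine s

// Pure functions never mention World, so the compiler keeps the two apart.
let greet name = sprintf "Hello, %s!" name

let run () =
    let world = World.Instance
    readLine' world |> greet |> printLine' world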
(I'm using the word "workflow" - not in the sense of async workflows - but rather in the "git workflow" sense, that is, how you use it as part of your development)
Having played around with F# for a while, I've started developing my first F# app. I'm from a C#/VB background. Having watched various demos/talks - rightly or wrongly - I've started off using fsi as the main development "engine" and work up stuff within that area. If I hit a problem which I need to debug, I tend to break the problematic function out into smaller bits and check that those work, to try and track down the problem.
However, in order to keep the amount of code manageable in fsi, once I am happy with what I have done, I then move it into a .fs file and #load the .fs back into fsi. As the app gets bigger, this can begin to feel a bit clunky, since when I need to refactor I end up having to bring content back in from the .fs file, change it, run stuff to get something working again, and then push the code back out into the .fs file. Further, this style isn't really a test-first approach, so I am not getting the benefit of building up a set of tests. (I also miss the ability to set breakpoints and step through the code, which, it seems to me, in certain situations such as recursion can be quicker for diagnosing errors than breaking out parts of a function - though maybe this is available in VS11 and I'm not set up right.) So I think I'm perhaps not doing things optimally, or else not thinking about things in the right way.
I was wondering if others could offer how they develop apps. Do you primarily use fsi, or do you start with TDD? Should the TDD approach be the primary dev vehicle, with fsi used more selectively to aid in, say, the implementation of more complex algorithms, data exploration, etc.?
I have looked at this question and obviously it helpfully points to various TDD frameworks for F#, but I'd still be interested to find out the workflow of seasoned F# developers.
Many thx
S
I think you're on the right track.
Development process is a matter of taste. I'll share my approach anyway.
Start with a few .fs files. Each file represents a module, which consists of a group of functions closely related to each other. It doesn't have to be precise from the beginning; you often move stuff between modules.
Create a few .fsx files for quick testing once the skeleton of the modules is ready.
Create a test project and set up NuGet packages. I often use NUnit and FsUnit together.
Whenever fsx scripting gives correct results, move them to test cases (a small sketch of this follows the list). Do this repeatedly.
Include a Program.fs in the main project and compile to an executable in order to debug if needed.
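For illustration, here is roughly what the fsx scripting and the promoted test case look like; the Pricing module and the discount function are made up:

// Script.fsx -- quick exploration in F# Interactive against the module source
#load "Pricing.fs"
open Pricing
discount 100.0m 0.1m  // evaluate in fsi and eyeball the result

// PricingTests.fs -- the same check, promoted to an NUnit/FsUnit test case
module PricingTests

open NUnit.Framework
open FsUnit
open Pricing

[<Test>]
let ``discount takes ten percent off`` () =
    discount 100.0m 0.1m |> should equal 90.0m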
In general, the F# REPL is the main development engine. It gives me instant feedback and allows incremental changes, which are very helpful in prototyping. In F#, TDD is less critical since the bug rate is much lower than in other languages. I don't test everything; I just focus on the main functionality and ensure high test coverage. Using the TestDriven.Net add-in or Visual Studio 2012 Premium and Ultimate can give you useful statistics on test coverage.
Using the F# REPL and TDD, I almost never have to use the debugger. Whenever there is wrong behaviour, I stop and think. Since your code doesn't have side effects, you can reason about it easily. Many times, reasoning and a few print commands give me the right answer.
You can use TDD in the F# REPL with Unquote and FsCheck. The former offers testing via quotations, which is quite impressive. The latter uses a random-testing approach, which is attractive for handling corner cases in your code. I find it really useful when your programs have to satisfy certain properties. However, it may take time to learn to use these frameworks properly.
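Two tiny examples of what those look like (the functions under test here are just standard list functions):

// Unquote: the assertion is an F# quotation; on failure it prints the reduced expression
open Swensen.Unquote
test <@ List.rev [1; 2; 3] = [3; 2; 1] @>

// FsCheck: state a property and let the library try it on many random inputs
open FsCheck
let revTwiceIsIdentity (xs : int list) = List.rev (List.rev xs) = xs
Check.Quick revTwiceIsIdentity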
pad gave a great answer that is very practical and useful for a person new to F#. I will give a different approach so that others don't think there is only one way F#'ers do it.
Note: If you are very new to programming, then stick with pad's answer, it is much better for a new programmer.
In the Object Oriented world one thinks with objects and in such languages I would start with writing objects down on paper and working with various diagrams such as use-case, state transition, sequence diagram, etc., until I felt I had enough to start creating objects in C# cs files, fleshing out the objects with methods, properties, events, etc.
In the functional world I typically start with abstract concepts and convert them into discriminated unions (DUs) in an F# .fs file, skipping the use of a REPL, i.e. F# Interactive, and then start adding a few functions. After I have a few functions I set up a test project using NUnit and FsUnit via NuGet. Since the DUs are abstract, the test cases are typically harder to write, so I create printers for the DUs and then insert them into the test cases, where I capture the result output from the printer in the NUnit tool for cut-and-paste back into the test case, making changes as necessary. See these for examples of why I don't write them from scratch by hand.
Once I have the abstract DUs done, I can then move on to the code that converts the human/concrete form into the abstract DUs and converts the abstract DUs back into human/concrete form. In some cases these would be parsers and pretty printers.
The main point I am trying to make is that I don't focus on the tools I use but on the abstract concept of the problem and bring the tools in when needed.
I will note that I also program in PROLOG and there I do start with the REPL and move the code to a store once the logic works. So I am not opposed to using a REPL, it's just a different way of approaching the problem.
EDIT
Per a request by Ken for an example.
See: Discriminated Unions (F#) and look for the section
Using Discriminated Unions Instead of Object Hierarchies
So instead of a base type Shape with inherited types Circle, EquilateralTriangle, Square and Rectangle, one would create a discriminated union as noted:
type Shape =
| Circle of float
| EquilateralTriangle of double
| Square of double
| Rectangle of double * double
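A pattern-matching function over that union then plays the role that virtual methods play in a class hierarchy; a minimal sketch (using the usual area formulas):

let area shape =
    match shape with
    | Circle radius -> System.Math.PI * radius * radius
    | EquilateralTriangle side -> sqrt 3.0 / 4.0 * side * side
    | Square side -> side * side
    | Rectangle (length, width) -> length * width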
As your question would make for a much better independent question and get answers with much better detail than I can give, I would suggest you ask it.
If you search for info on this, also search with the following substitutions for discriminated union (DU):
Algebraic data type
Generalized algebraic data type (GADT)
Tagged union
Variant
Variant record
Disjoint union
Sum type
Out of curiosity, I wonder what people can do with parsers, how they are applied, and what people usually create with them?
I know they're widely used in the programming-language industry; however, I think that is just a tiny portion of it, right?
Besides special-purpose languages, my most ambitious use of a parser generator yet (with good old yacc back in C, and again later with pyparsing in Python) was to extract, validate and possibly alter certain meta-info from SQL queries. Parsing SQL properly is a real challenge (especially if you hope to support more than one dialect!-); a parser generator (and the lexer it sits on top of) at least removes THAT part of the job!-)
They are used to parse text....
To give a more concrete example, where I work we use lex/yacc to parse strings coming over sockets.
Also, the name should give you an idea of what javacc is used for (Java compiler compiler!)
Generally to parse domain-specific languages or scripting languages, or similar support for code snippets.
Previously I have seen it used to parse the command-line output of another software tool. This way the outer tool (VPN software) could re-use the base router IPSec code without modification, as much of what was being parsed was IP route tables and other structured, repeated text.
Using a parser allowed simple changes when the formatting changed, instead of trying to find and tweak a hand-written parser. And the output did change a few times over the life of the product.
I used parsers to help process +/- 800 Clipper source files into similar PRGs that could be compiled with Alaska Xbase 32.
You can use it to extend your favorite language by getting its language definition from its repository and then adding what you've always wanted to have. You can pass the regular syntax on to your application and handle the extension in your own program.
I'm creating a compiler with Lex and YACC (actually Flex and Bison). The language allows unlimited forward references to any symbol (like C#). The problem is that it's impossible to parse the language without knowing what an identifier is.
The only solution I know of is to lex the entire source, and then do a "breadth-first" parse, so higher level things like class declarations and function declarations get parsed before the functions that use them. However, this would take a large amount of memory for big files, and it would be hard to handle with YACC (I would have to create separate grammars for each type of declaration/body). I would also have to hand-write the lexer (which is not that much of a problem).
I don't care a whole lot about efficiency (although it still is important), because I'm going to rewrite the compiler in itself once I finish it, but I want that version to be fast (so if there are any fast general techniques that can't be done in Lex/YACC but can be done by hand, please suggest them also). So right now, ease of development is the most important factor.
Are there any good solutions to this problem? How is this usually done in compilers for languages like C# or Java?
It's entirely possible to parse it. Although there is an ambiguity between identifiers and keywords, lex will happily cope with that by giving the keywords priority.
I don't see what other problems there are. You don't need to determine whether identifiers are valid during the parsing stage. You are constructing either a parse tree or an abstract syntax tree (the difference is subtle, but irrelevant for the purposes of this discussion) as you parse. After that you build your nested symbol-table structures by performing a pass over the AST you generated during the parse. Then you do another pass over the AST to check that the identifiers used are valid. Follow this with one or more additional passes over the AST to generate the output code, or some other intermediate data structure, and you're done!
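To make the multi-pass idea concrete, here is a toy sketch in F# (the AST shape and all the names are invented for illustration):

// A minimal AST for a language with forward references.
type Expr =
    | IntLit of int
    | Ident of string
    | Call of string * Expr list

type Decl = Func of name : string * body : Expr

// Pass 1: build a symbol table from all declarations, so forward references are harmless.
let collectSymbols (decls : Decl list) =
    decls |> List.map (fun (Func (name, _)) -> name) |> Set.ofList

// Pass 2: check that every identifier used was declared somewhere.
let rec check (symbols : Set<string>) expr =
    match expr with
    | IntLit _ -> Ok ()
    | Ident name ->
        if Set.contains name symbols then Ok ()
        else Error (sprintf "unknown identifier %s" name)
    | Call (name, args) ->
        if not (Set.contains name symbols) then Error (sprintf "unknown function %s" name)
        else args |> List.fold (fun acc a -> if acc = Ok () then check symbols a else acc) (Ok ())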
EDIT: If you want to see how it's done, check the source code for the Mono C# compiler. This is actually written in C# rather than C or C++, but it does use a .NET port of Jay, which is very similar to yacc.
One option is to deal with forward references by just scanning and caching tokens till you hit something you know how to deal with (sort of like "panic-mode" error recovery). Once you have run through the full file, go back and try to re-parse the bits that didn't parse before.
As for having to hand-write the lexer: don't. Use lex to generate a normal lexer and just read from it via a hand-written shim that lets you go back and feed the parser from a cache as well as from what lex produces.
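A rough sketch of that shim idea, written here in F# just to show the shape (the token type and the lexer callback are placeholders for whatever lex hands you):

// A replayable token source: it pulls from the real lexer, remembers everything it has seen,
// and can rewind to an earlier mark to re-feed the parser.
type TokenCache<'Tok>(nextToken : unit -> 'Tok option) =
    let cache = System.Collections.Generic.List<'Tok>()
    let mutable pos = 0
    member _.Mark() = pos
    member _.Rewind(mark : int) = pos <- mark
    member _.Next() =
        if pos < cache.Count then
            let tok = cache.[pos]
            pos <- pos + 1
            Some tok
        else
            match nextToken () with
            | Some tok ->
                cache.Add tok
                pos <- pos + 1
                Some tok
            | None -> None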
As for making several grammars: with a little fun with a preprocessor on the yacc file, you should be able to make them all out of the same original source.
I want to build a parser for a C-like language. The interesting aspect is that I want to build it in such a way that someone who has access to the source can easily modify it to extend the language (with a new expression type, for instance), with the extensions being runtime-configurable (they can be turned on and off).
My current intent is to build a recursive descent parser as an object. Each production will be a method of the object. The method of extension will be to derive classes from this base, replacing methods (and production definitions) as needed. I'm still trying to figure out how to mix and match extensions. One idea is to play games with the v-tbl: objects would be constructed with a v-tbl that is a copy of the base's but with methods replaced from derived classes.
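For illustration only, here is the general shape of that idea sketched in F# (the grammar and all the names are invented); the same structure carries over to any language with virtual methods:

// Base parser: each production is a virtual method, so a derived class can replace it.
type TokenStream(tokens : string list) =
    let mutable rest = tokens
    member _.Peek = List.tryHead rest
    member _.Pop() =
        match rest with
        | t :: ts -> rest <- ts; Some t
        | [] -> None

type BaseParser() =
    // expr := NUMBER in the base language
    abstract Expr : TokenStream -> int
    default this.Expr(ts) =
        match ts.Pop() with
        | Some t -> int t
        | None -> failwith "expected a number"

// An extension: parenthesised expressions layered on top of the base production.
type ParenExtension() =
    inherit BaseParser()
    override this.Expr(ts) =
        match ts.Peek with
        | Some "(" ->
            ts.Pop() |> ignore
            let value = this.Expr(ts)
            ts.Pop() |> ignore  // assume the ")" is present; a real parser would check
            value
        | _ -> base.Expr(ts)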
Aside from the bit-twiddling nature of the solution, the only issues I have with it are:
a reasonable way to do the v-tbl mixup
what to do when 2 extensions alter the same production (as most replacements will end up calling the original, having one replacement call the other would work, but the mechanics of setting this up are the issue)
how to allow the extension of extensions (this might end up looking like a standard MI system, but I've never understood how those work)
Another solution (a slightly more mundane version of the same approach) would be to use static member variables to store function pointers and call them for the same effect.
Edit: I have already built a system that lets me build productions from BNF definitions. I can alter it to support whatever I decide on.
These are some of the challenges the Perl 6 design effort has faced. You may find it worthwhile looking into some of the solutions they came up with. Or you may find that to be gross overkill.
I made a configurable parser and uploaded it some time ago at
http://code.google.com/p/compparser/
The project there is not up-to-date, but it is working fine.
If I recall my university courses correctly, recursive descent parsers have some limitations that might bite you, especially since you're allowing extensions - somebody else's language extension could cause issues.
A proper compiler toolkit - such as the open source ANTLR - might make things easier, and might also provide some different approaches for you.
Another option is to express the parsing rules in XML or something similar, instead of in code; this is less efficient, but far more dynamically configurable. Each language or variant can just use its own (XML) file, and even include/reference other files as 'base' files...
Frankly, I am not even sure I understood everything you wrote... :-)
But when I see parser and flexibility, I think about LPeg - Parsing Expression Grammars For Lua. It might not fit your needs but it is well worth a look... ;-)