Is Well-Founded recursion safe? - agda

In the question about non-termination, "With clauses obscuring termination", an answer suggests resorting to <-wellFounded.
I had looked at the definition of <-wellFounded before, and it strikes me that there is a --safe flag in its OPTIONS pragma. Is the definition meant to work without this option? That is, is --safe some optimisation, or is it working around a fundamental problem? In other words, are we just delegating the termination problem to a function marked as "safe"?

It is completely safe. --safe holds a module to a stricter standard than the default: it does not mean that something is unsafe without it, it asserts that the module is safe. Using well-founded recursion from any module, safe or not, will not introduce non-termination. Termination is rather strongly baked into the inductive definition of accessibility.
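For intuition, here is a very rough transliteration of the accessibility idea into F# (the language used elsewhere on this page). This is only a sketch with names of my choosing: F# can express the shape of the definition but, unlike Agda, cannot enforce that the field is only applied to smaller elements, which is exactly where the guarantee lives.

```fsharp
// Sketch of Agda's Acc: an Acc value for x packages the accessibility of
// everything below x. In Agda the constructor also demands a proof y < x;
// F# cannot express that, so this only shows the recursion structure.
type Acc<'a> = Acc of ('a -> Acc<'a>)

// Well-founded recursion: 'step' can only recurse through the callback it
// is handed, and each callback is backed by the Acc witness for the smaller
// element, so each recursive call peels off one constructor.
let rec wfRec (step: ('a -> 'r) -> 'a -> 'r) (Acc below) (x: 'a) : 'r =
    step (fun y -> wfRec step (below y) y) x
```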

Related

Why are compilers non-reentrant at the beginning?

I'm reading some yacc- and lex-related material and some other compiler implementations, and it seems like they all use global state, which makes them really unsafe to use in multithreaded situations and hard to embed in other programs. I know GNU Bison and Flex can generate re-entrant code, but why is that not the default?
Because when the interface for Lex and Yacc was defined, many many many years ago, the use of globals was much more common. Reentrancy changes the interface, and the reentrant interfaces have never been formally standardised (which is probably just as well, given the state of play). At the time, multithreading was not very common, largely because a typical computer just barely had the resources to do one compilation (and sometimes not even that; it was also pretty common for compilation passes to be sequentially loaded executables).
So the default continues to be the non-reentrant, standardised interface. And it probably will remain that way, whether or not we like it.

Can F# be refactored into a pointfree style?

In researching a topic related to programming, I came across a pointfree refactoring tool for Haskell in lambdabot, and was wondering whether F# code can be refactored into a pointfree style.
I am not advocating the use of pointfree style, but see it as a means to better comprehend a function.
Note: pad answered an earlier version of this question, but I reworded this question as the answer is of value to others learning and using F# and I did not want this to be deleted because of some close votes.
Note: Just because I changed the question, don't take the answer to mean that one cannot code in a pointfree style in F#. It can be done in many cases, but there are restrictions you have to follow.
Short answer
No.
Long answer
There are a few things in F# that make such a tool impractical. (1) Due to .NET interop, F# code often has side effects, and automatic code transformation becomes really difficult when side effects come into play. That's not the case with Haskell: equational reasoning is much easier there, and you can replace left-hand sides with right-hand sides without altering their evaluation. (2) Pointfree programming in F# is limited by the value restriction. I'm not sure you can do aggressive code transformation without hitting this issue.
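As a small sketch of both points, assuming nothing beyond FSharp.Core (names are mine):

```fsharp
// Pointful vs. pointfree: both compile, because the pointfree version
// here is not generic.
let addOneThenDouble x = (x + 1) * 2        // pointful
let addOneThenDouble' = (+) 1 >> (*) 2      // pointfree

// The value restriction limits how far this can be pushed:
// let mapId = List.map id        // error FS0030: Value restriction
let mapId xs = List.map id xs     // eta-expansion restores the point
```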
I think it's more practical to assume that F# code is pure and that the value restriction doesn't bite in particular cases, so that the tool can give users some hints. Users can then apply the suggestions at their discretion, after checking that they are actually correct. That's closer to the HLint approach than to the one you referred to. FSharpLint has added some linting rules in this direction.

Why is -compile(export_all) bad practice?

All the Erlang books seem to say export_all is bad practice, but they don't give a reason. In the end, most modules spend the majority of their time with -compile(export_all) because constantly updating the export list to remove helper functions is a hassle. Is it bad practice because I'm supposed to care about which functions I expose to other developers? Or is it bad practice because there's some kind of performance cost in the number of functions a module has, perhaps because of things like hot code loading? If there is a performance hit to stuffing a module with a lot of functions, how bad is it?
For several reasons:
Clarity: it's easier to see which functions are intended to be used outside the module.
When you tab complete in the Erlang shell you get a list of only the exported functions and no others. When you refactor the module, you know which functions you can safely rename without external users depending on them.
Code smell: you get warnings for unused functions.
Therefore you'll avoid dead code.
Optimization: the compiler might be able to make more aggressive optimizations knowing that not all functions have to be exported.
While I don't know for sure whether there are any practical performance implications of using -compile(export_all), I doubt they are significant enough to care about.
However, there is a benefit to declaring the list of exports explicitly: everyone can figure out the interface of the module from the first page of the .erl file. Also, as with many other things that we tend to write down, an explicit declaration of the module interface helps to keep it clear.
With that said, when I start working on a new Erlang module I always type -module(...). -compile(export_all). After the interface becomes mature enough I add an explicit -export([...]) while keeping the export_all compile option.
Having a defined list of which functions are external, and therefore which ones are internal, is extremely useful for anyone who will work on your code in the future. I've recently been refactoring some old code, and the use of export_all in most of the modules has been a continual source of annoyance.

Why is F#'s type inference so fickle?

The F# compiler appears to perform type inference in a (fairly) strict top-to-bottom, left-to-right fashion. This means you must do things like put all definitions before their use, that the order of file compilation is significant, and that you tend to need to rearrange things (via |> or what have you) to avoid explicit type annotations.
How hard would it be to make this more flexible, and is that planned for a future version of F#? Obviously it can be done, since Haskell (for example) has equally powerful inference without such limitations. Is there anything inherently different about the design or ideology of F# that causes this?
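To illustrate the kind of rearrangement the question alludes to, a minimal sketch (names are mine):

```fsharp
let names = [ "alpha"; "beta" ]

// Inference runs left to right, so the lambda is typed before 'names'
// is reached and the type of x is still unknown:
// let lengths = List.map (fun x -> x.Length) names   // error FS0072

// Piping puts 'names' in front, so x is already known to be a string:
let lengths = names |> List.map (fun x -> x.Length)
```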
Regarding "Haskell's equally powerful inference", I don't think Haskell has to deal with
OO-style dynamic subtyping (type classes can do some similar stuff, but type classes are easier to type/infer)
method overloading (type classes can do some similar stuff, but type classes are easier to type/infer)
That is, I think F# has to deal with some hard stuff that Haskell does not. (Almost certainly, Haskell has to deal with some hard stuff that F# does not.)
As is mentioned by other answers, most of the major .NET languages have the Visual Studio tooling as a major language design influence (see e.g. how LINQ has "from ... select" rather than the SQL-y "select... from", motivated by getting intellisense from a program prefix). Intellisense, error squiggles, and error-message comprehensibility are all tooling factors that inform the F# design.
It may well be possible to do better and infer more (without sacrificing on other experiences), but I don't think it's among our high priorities for future versions of the language. (The Haskellers may see F# type inference as somewhat weak, but they are probably outnumbered by the C#ers who see F# type inference as very strong. :) )
It might also be hard to extend the type inference in a non-breaking fashion; it is ok to change illegal programs into legal ones in a future version, but you have to be very careful to ensure previously-legal programs do not change semantics under new inference rules, and name resolution (an awful nightmare in every language) is likely to interact with type-inference-changes in surprising ways.
I think that the algorithm used by F# has the benefit that it is easy to (at least roughly) explain how it works, so once you understand it, you can have some expectations about the result.
The algorithm will always have some limitations. Currently, it is quite easy to understand them. For more complicated algorithms, this could be difficult. For example, I think you could run into situations where you think that the algorithm should be able to deduce something - but if it was general enough to cover the case, it would be non-decidable (e.g. could keep looping forever).
Another thought on this is that checking the code from the top to the bottom corresponds to how we read code (at least sometimes). So, maybe the fact that we tend to write the code in a way that enables type-inference also makes the code more readable for people...
F# uses one-pass compilation, such that you can only reference types or functions which have been defined either earlier in the file you're currently in, or in a file which is specified earlier in the compilation order.
I recently asked Don Syme about making multiple source passes to improve the type inference process. His reply was: "Yes, it's possible to do multi-pass type inference. There are also single-pass variations that generate a finite set of constraints. However these approaches tend to give bad error messages and poor intellisense results in a visual editor."
http://www.markhneedham.com/blog/2009/05/02/f-stuff-i-get-confused-about/#comment-16153
The short answer is that F# is based on the tradition of SML and OCaml, whereas Haskell comes from a slightly different world of Miranda, Gofer, and the like. The differences in historical tradition are subtle, but pervasive. This distinction is paralleled in other modern languages too, such as the ML-like Coq which has the same ordering restrictions vs the Haskell-like Agda which doesn't.
This difference is related to lazy vs. strict evaluation. The Haskell side of the universe believes in laziness, and once you already believe in laziness, adding laziness to things like type inference is a no-brainer. On the ML side of the universe, by contrast, whenever laziness or mutual recursion is needed it must be noted explicitly with keywords like rec, and, etc. I prefer the Haskell approach because it results in less boilerplate code, but there are a lot of folks who think it's better to make these things explicit.
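To make the contrast concrete, here is what the explicit marking looks like in F# (a minimal sketch; in Haskell the same pair of definitions needs no rec, no and, and no particular order):

```fsharp
// Mutual recursion must be announced with 'rec' and chained with 'and':
let rec isEven n = if n = 0 then true else isOdd (n - 1)
and isOdd n = if n = 0 then false else isEven (n - 1)
```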

Does F# provide you automatic parallelism?

By this I mean: if you design your app to be free of side effects and so on, will F# code automatically be distributed across all cores?
No, I'm afraid not. Given that F# isn't a pure functional language (in the strictest sense), it would be rather difficult to do so I believe. The primary way to make good use of parallelism in F# is to use Async Workflows (mainly via the Async module I believe). The TPL (Task Parallel Library), which is being introduced with .NET 4.0, is going to fulfil a similar role in F# (though notably it can be used in all .NET languages equally well), though I can't say I'm sure exactly how it's going to integrate with the existing async framework. Perhaps Microsoft will simply advise the use of the TPL for everything, or maybe they will leave both as an option and one will eventually become the de facto standard...
Anyway, here are a few articles on asynchronous programming/workflows in F# to get you started.
http://blogs.msdn.com/dsyme/archive/2007/10/11/introducing-f-asynchronous-workflows.aspx
http://strangelights.com/blog/archive/2007/09/29/1597.aspx
http://www.infoq.com/articles/pickering-fsharp-async
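To make the explicitness concrete, here is a minimal sketch using the Async module discussed above (Async.Sleep merely stands in for real work):

```fsharp
// Nothing here is automatic: Async.Parallel is the explicit opt-in
// that fans the computations out across threads.
let slowSquare n = async {
    do! Async.Sleep 100               // stand-in for real work
    return n * n }

let results =
    [ 1 .. 10 ]
    |> List.map slowSquare            // describe the computations...
    |> Async.Parallel                 // ...then explicitly run them in parallel
    |> Async.RunSynchronously
```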
F# does not make it automatic, it just makes it easy.
Yet another chance to link to Luca's PDC talk. Eight minutes starting at 52:20 are an awesome demo of F# async workflows. It rocks!
No, I'm pretty sure that it won't automatically parallelise for you. It would have to know that your code was side-effect free, which could be hard to prove, for one thing.
Of course, F# can make it easier to parallelise your code, particularly if you don't have any side effects... but that's a different matter.
Like the others mentioned, F# will not automatically scale across cores and will still require a framework such as the port of ParallelFX that Josh mentioned.
F# is commonly associated with potential for parallel processing because it defaults to objects being immutable, removing the need for locking for many scenarios.
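For instance, FSharp.Core's Array.Parallel module follows the same opt-in pattern: the data can be immutable, but you still have to ask for parallelism explicitly:

```fsharp
// Parallel map over an immutable array; the 'Parallel' in the name is
// the explicit request, nothing is inferred from purity.
let squares = [| 1 .. 1000 |] |> Array.Parallel.map (fun n -> n * n)
```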
On purity annotations: Code Contracts have a Pure attribute. I remember hearing that some parts of the BCL already use it. Potentially, this attribute could be used by parallelisation frameworks as well, but I'm not aware of such work at this point. Also, I'm not even sure how usable Code Contracts are from within F#, so there are a lot of unknowns here.
Still, it will be interesting to see how all this stuff comes together.
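For reference, the attribute mentioned above can at least be applied from F#; this is only a syntactic sketch, and neither the F# compiler nor any parallelisation framework I know of acts on it:

```fsharp
open System.Diagnostics.Contracts

// [<Pure>] is a promise to tooling (Code Contracts), not something the
// F# compiler verifies.
[<Pure>]
let add (a: int) (b: int) = a + b
```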
No it will not. You must still explicitly marshal calls to other threads via one of the many mechanisms supported by F#.
My understanding is that it won't, but Parallel Extensions is being modified to make it consumable from F#. That won't multi-thread your code automatically, but it should make parallelism very easy to achieve.
Well, you have your answer, but I just wanted to add that I think this is the most significant limitation of F# stemming from the fact that it is a hybrid imperative/functional language.
I would like to see some extension to F# that declares a function to be pure. That is, it has no side-effects that are not denoted by the function's type. The idea would be that a function is pure only if it references other "known-pure" functions. Of course, this would only be useful if it were then possible to require that a delegate passed as a function parameter references a pure function.