F# mutual recursion between modules

For recursion in F#, existing documentation is clear about how to do it in the special case where it's just one function calling itself, or a group of physically adjacent functions calling each other.
But in the general case where a group of functions in different modules need to call each other, how do you do it?

I don't think there is a way to achieve this in F#. It is usually possible to structure the application in a way that doesn't require this, so perhaps if you described your scenario, you may get some useful comments.
Anyway, there are various ways to workaround the issue - you can declare a record or an interface to hold the functions that you need to export from the module. Interfaces allow you to export polymorphic functions too, so they are probably a better choice:
// Before the declaration of the modules
type Module1Funcs =
  abstract Foo : int -> int
type Module2Funcs =
  abstract Bar : int -> int
The modules can then export a value that implements one of the interfaces, and functions that require the other module can take it as an argument (or you can store it in a mutable value).
module Module1 =
  // Import functions from Module2 (needs to be initialized before use!)
  let mutable module2 = Unchecked.defaultof<Module2Funcs>
  // Sample function that references Module2
  let foo a = module2.Bar(a)
  // Export functions of the module
  let impl =
    { new Module1Funcs with
        member x.Foo(a) = foo a }

module Module2 =
  // Import functions from Module1 (initialized the same way)
  let mutable module1 = Unchecked.defaultof<Module1Funcs>
  let bar a = module1.Foo(a)
  let impl =
    { new Module2Funcs with
        member x.Bar(a) = bar a }

// Somewhere in the main function
Module1.module2 <- Module2.impl
Module2.module1 <- Module1.impl
The initialization could also be done automatically using reflection, but that's a bit ugly. However, if you really need it frequently, I could imagine developing some reusable library for this.
In many cases, this feels a bit ugly and restructuring the application to avoid recursive references is a better approach (in fact, I find recursive references between classes in object-oriented programming often quite confusing). However, if you really need something like this, then exporting functions using interfaces/records is probably the only option.
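For completeness, a minimal sketch of the record variant mentioned above (the Module1Exports/Module2Exports names are made up): it is a bit lighter than the interfaces, but records can only hold monomorphic functions, which is why the interfaces were suggested as the better choice.
// Record-of-functions variant (monomorphic functions only)
type Module1Exports = { Foo : int -> int }
type Module2Exports = { Bar : int -> int }

module Module1 =
  // placeholder until the real Module2 functions are wired in at startup
  let mutable module2 = { Bar = fun _ -> 0 }
  let foo a = module2.Bar a
  let impl = { Foo = foo }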

This is not supported. One piece of evidence is that, in Visual Studio, you need to order the project files correctly for F#.
It would be really rare to recursively call two functions in two different modules.
If this case does happen, you'd be better off factoring the common part of the two functions out.

I don't think that there's any way for functions in different modules to directly refer to functions in other modules. Is there a reason that functions whose behavior is so tightly intertwined need to be in separate modules?
If you need to keep them separated, one possible workaround is to make your functions higher-order, taking a parameter that represents the recursive call, so that you can manually "tie the knot" later.
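A minimal sketch of that workaround, assuming two made-up modules that want to call each other:
module Evens =
  // the "other" function arrives as a parameter instead of a direct reference
  let isEven isOdd n = if n = 0 then true else isOdd (n - 1)

module Odds =
  let isOdd isEven n = if n = 0 then false else isEven (n - 1)

// Tie the knot at the composition root:
let rec isEven n = Evens.isEven isOdd n
and isOdd n = Odds.isOdd isEven n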

If you were talking about C#, and methods in two different assemblies needed to mutually recursively call each other, I'd pull out the type signatures they both needed to know into a third, shared assembly. I don't know, however, how well those concepts map to F#.

Definitely, a solution here would use module signatures. A signature file contains information about the public signatures of a set of F# program elements, such as types, namespaces, and modules.
For each F# code file, you can have a signature file, which is a file that has the same name as the code file but with the extension .fsi instead of .fs.
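For reference, a signature file just lists the public surface of the corresponding implementation file; a minimal, hypothetical example:
// Module1.fsi - pairs with Module1.fs and exposes only what is listed here
module Module1

val foo : int -> int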

Related

Arrow KT: Reader Monad vs #extension for Dependency Injection

I've read about the Reader Monad from this article by Jorge Castillo himself, and I've also got this article by Paco. It seems that both tackle the idea of Dependency Injection, just in different ways. (Or am I wrong?)
I'm really confused about whether I understand the whole Reader Monad and how it relates to the Simple Dependency Injection that Paco is talking about.
Can anyone help me understand these two things? Would I ever need both of them in one project depending on situations?
Your doubt is understandable, since yes, both approaches share the same outcome: passing dependencies implicitly for you all the way across your call stack, so you don't need to pass them explicitly at every level. With both approaches you will pass your dependencies once from the outer edge, and that's it.
Let's say you have the functions a(), b(), c() and d(), and let's say each one calls the next one: a() -> b() -> c() -> d(). That is our program.
If you didn't use any of the mentioned mechanisms, and you needed some dependencies in d(), you would end up forwarding your dependencies (let's call them ctx) all the way down on every single level:
a(ctx) -> b(ctx) -> c(ctx) -> d(ctx)
While after using any of the mentioned two approaches, it'd be like:
a(ctx) -> b() -> c() -> d()
But still, and this is the important thing to remember, you'd have your dependencies accessible in the scope of each one of those functions. This is possible because the described approaches enable an enclosing context that automatically forwards them at every level and that each one of the functions runs within. So, being inside that context, each function gets visibility of those dependencies.
Reader: It's a data type. I encourage you to read and try to understand this glossary where data types are explained, since the difference between both approaches requires understanding what type classes and data types are, and how they play together:
https://arrow-kt.io/docs/patterns/glossary/
As a summary, data types represent a context for the program's data. In this case, Reader stands for a computation that requires some dependencies to run, i.e. a computation like (D) -> A. Thanks to its flatMap, map, and other functions and how they are encoded, D will be passed implicitly on every level, and since you will define every one of your program's functions as a Reader, you will always be operating within the Reader context and hence have access to the required dependencies (ctx). I.e.:
a(): Reader<D, A>
b(): Reader<D, A>
c(): Reader<D, A>
d(): Reader<D, A>
So, chaining them with the available Reader combinators like flatMap or map, you'll get D passed implicitly all the way down and made accessible at each of those levels.
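To make that shape concrete, here is a minimal hand-rolled Reader (sketched in F# rather than Kotlin); the Ctx record and the a/b/c/d bodies are made up, and bind plays the role of Arrow's flatMap:
type Reader<'D, 'A> = Reader of ('D -> 'A)

let run (Reader f) deps = f deps
let map f (Reader g) = Reader (g >> f)
let bind f (Reader g) = Reader (fun d -> run (f (g d)) d)   // flatMap in Arrow terms

type Ctx = { userName: string }   // the dependencies

let d () : Reader<Ctx, string> = Reader (fun ctx -> "hello " + ctx.userName)
let c () = d () |> map (fun s -> s + "!")
let b () = c ()
let a () = b ()

// Only the outer edge supplies ctx; every level below sees it implicitly.
let ctx = { userName = "Ada" }
let result = run (a ()) ctx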
On the other hand, the approach described by Paco's post looks different, but ends up achieving the same. This approach is about leveraging Kotlin extension functions, since defining a program to work over a receiver type (let's call it Context) at all levels means every level has access to the mentioned context and its properties. I.e.:
Context.a()
Context.b()
Context.c()
Context.d()
Note that an extension function receiver is a parameter that, without extension function support, you'd need to pass manually as an additional function argument on every call, so in that sense it is a dependency, or a "context", that the function requires to run. Understanding them this way, and understanding how Kotlin interprets extension functions, the receiver will not need to be forwarded manually on every level but just passed at the entry edge:
ctx.a() -> b() -> c() -> d()
b(), c(), and d() would be called implicitly, without the need for you to explicitly call each level's function over the receiver, since each function is already running inside that context and hence has access to its properties (dependencies) automatically.
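A rough analogue of the receiver idea, sketched in F# with a made-up Context class: making the functions members of a context type gives each of them implicit access to the dependencies, much like Kotlin extension functions over a receiver.
type Context(connectionString: string) =
  member this.D() = "querying " + connectionString
  member this.C() = this.D() + "!"
  member this.B() = this.C()
  member this.A() = this.B()

// Only the entry point mentions the context explicitly:
let appCtx = Context("Server=localhost")
let output = appCtx.A()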
So once we understand both, we'd need to pick one, or any other DI approach. That's quite subjective, since in the functional world there are also other alternatives for injecting dependencies, like the tagless final approach, which relies on type classes and their compile-time resolution, or EnvIO, which is still not available in Arrow but will be soon (or an equivalent alternative). But I don't want to get you more confused here. In my opinion the Reader is a bit "noisy" in combination with other common data types like IO, and I usually aim for tagless final approaches, since those allow you to keep program constraints determined by injected type classes and to rely on the IO runtime to run the complete program.
Hopefully this helped a bit, otherwise feel free to ask again and we'll be back to answer 👍

How to swap functions (e.g. for tests) in pure functional programming

I'm trying to understand what the FP alternative is to good old dependency injection from OOP.
Say I have the following app (pseudocode):
app() is where the application starts. It allows a user to register and to list user posts (whatever). These two functions are composed of several other functions: register does it step by step, imperatively, while listPosts really composes them (at least this is how I understand function composition).
app()
registerUser(u)
validate(u)
persist(u)
callSaveToDB(u)
notify(u)
sendsEmail
listPosts(u)
postsToView(loadUserPosts(findUser(u)))
Now I'd like to test this stuff (registerUser and listPosts) and would like to have stubbed functions so that I don't call db etc - you know, usual testing stuff.
I know it's possible to pass functions to functions, e.g.
registerUser(validateFn, persistFn, notifyFn, u)
and have it partially applied so it looks like registerUser(u), with the other functions closed over, and so on. But it all needs to be done at app-boot level, as it was in OOP (wiring dependencies and bootstrapping the app). It looks like manually doing this will take ages and tons of boilerplate code. Is there something obvious I'm missing here? Is there any other way of doing that?
EDIT:
I see having IO there is not a good example. So what if I have a function composed of several other functions, one of which is really heavy (in terms of computation), and I'd like to swap it out?
Simply put: I'm looking for the FP way of doing DI.
The way to answer this is to drop the phrase "dependency injection" and think about it more fundamentally. Write down interfaces as types for each component. Implement functions that have those types. Replace them as needed. There's no magic, and language features like type classes make it easy for the compiler to ensure you can substitute methods in an interface.
This previous Haskell-specific answer shows how to use Haskell types for the API: https://stackoverflow.com/a/14329487/83805
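As a rough F# sketch of that advice (the UserStore record and its implementations are hypothetical, loosely mirroring the question's pseudocode): write the component down as a type, implement it once for production and once for tests, and pass whichever you need.
type UserStore =
  { save      : string -> unit
    loadPosts : string -> string list }

let liveStore =
  { save      = fun u -> printfn "INSERT user %s" u        // would call the real DB
    loadPosts = fun u -> [ "post from db for " + u ] }

let testStore =
  { save      = ignore                                     // no DB in tests
    loadPosts = fun _ -> [ "fixture post" ] }

// Code under test takes the component as an ordinary value:
let listPosts (store: UserStore) user = store.loadPosts user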

Using Dart as a DSL

I am trying to use Dart to tersely define entities in an application, following the idiom of code = configuration. Since I will be defining many entities, I'd like to keep the code as trim and concise and readable as possible.
In an effort to keep boilerplate as close to 0 lines as possible, I recently wrote some code like this:
// man.dart
part of entity_component_framework;
var _man = entity('man', (entityBuilder) {
  entityBuilder.add([TopHat, CrookedTeeth]);
});
// test.dart
part of entity_component_framework;
var man = EntityBuilder.entities['man']; // null, since _man wasn't ever accessed.
The entity method associates the entityBuilder passed into the function with a name ('man' in this case). var _man exists because only declarations (not bare expression statements) are allowed at the top level in Dart. This seems to be the most concise way possible to use Dart as a DSL.
One thing I wasn't counting on, though, is lazy initialization. If I never access _man -- and I had no intention to, since the entity function neatly stored all the relevant information I required in another data structure -- then the entity function is never run. This is a feature, not a bug.
So, what's the cleanest way of using Dart as a DSL given the lazy initialization restriction?
So, as you point out, it's a feature that Dart doesn't run any code until it's told to. So if you want something to happen, you need to do it in code that runs. Some possibilities:
Put your calls to entity() inside the main() function. I assume you don't want to do that, and probably that you want people to be able to add more of these in additional files without modifying the originals.
If you're willing to incur the overhead of mirrors, which is probably not that much if they're confined to this library, use them to find all the top-level variables in that library and access them. Or define them as functions or getters. But I assume that you like the property that variables are automatically one-shot. You'd want to use a MirrorsUsed annotation.
A variation on that would be to use annotations to mark the things you want to be initialized. Though this is similar in that you'd have to iterate over the annotated things, which I think would also require mirrors.

Pluggability in a functional paradigm

What is the proper functional way to handle pluggability in projects? I am working on a new open-source project in F# and cannot seem to get the object-oriented idea of plugins and interfaces out of my mind. Things I would like to be able to swap out are loggers, data storage, and authentication.
I have been searching quite a bit for an answer to this and haven't come up with much except for this:
http://flyingfrogblog.blogspot.com/2010/12/extensibility-in-functional-programming.html
The answer to this question would be different for different functional languages. F# is not purely functional - it takes the best from functional, imperative and also object-oriented worlds.
For things like logging and authentication, the most pragmatic approach would be to use interfaces (in F#, it is perfectly fine to use interfaces, but people do not generally use inheritance and prefer composition instead).
A simple interface makes sense when you have multiple different functions that you can invoke:
type IAuthentication =
  abstract Authenticate : string * string -> bool
  abstract ResetPassword : string * string -> unit
You can use object expressions, which are a really nice way to implement interfaces in F#.
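For example, a throwaway implementation for tests can be written as an object expression against the interface above (the credential check here is just a placeholder):
let fakeAuth =
  { new IAuthentication with
      member x.Authenticate(user, password) = user = "admin" && password = "secret"
      member x.ResetPassword(user, newPassword) = printfn "reset %s to %s" user newPassword }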
If you have just a single function (like logging a message), then you can parameterize your code by a function (which is like an interface with just a single method):
type Logger = string -> unit
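Code that needs logging can then take the logger as an ordinary argument, and tests can pass a no-op function instead (processOrder and the two loggers below are made-up examples):
let processOrder (log: Logger) orderId =
  log (sprintf "processing order %d" orderId)
  // ... the actual work would go here

let consoleLogger : Logger = printfn "%s"
let silentLogger  : Logger = ignore      // handy stub for tests

processOrder consoleLogger 42
processOrder silentLogger 42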
For things like authentication and logging (that probably do not change while the application is running), you can use a global mutable value. Although if you want to synchronize requests from multiple threads and there is some mutable state, it might be a good idea to write an F# agent.
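And if you do go the agent route, a minimal sketch might look like this (assuming the Logger type above); the agent serializes messages arriving from multiple threads:
let loggerAgent =
  MailboxProcessor<string>.Start(fun inbox ->
    let rec loop () = async {
      let! msg = inbox.Receive()
      printfn "%s" msg
      return! loop () }
    loop ())

let agentLogger : Logger = loggerAgent.Post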

Regarding F# Object Oriented Programming

There's this dichotomy in the way we can create classes in F# which really bothers me. I can create classes using either an implicit format or an explicit one. But some of the features that I want are only available for use with the implicit format, and some are only available for use with the explicit format.
For example:
I can't use let inline* (or even a plain let) inside an explicitly defined class.
The only way (that I know of) to define immutable public fields (not properties*) inside an implicitly defined class is the val bla : bla syntax.
But there's a redundancy here, since I'll end up with two copies of the same immutable data, one private and one public (because in implicit mode the constructor parameters persist throughout the class's existence).
(Not so relevant) The need to use attributes for method overloading and for fields' defaults is rather off-putting.
Is there anyway I can work around this?
*For performance reasons
EDIT: Turns out I'm wrong about both points (Thanks Ganesh Sittampalam & MichaelGG).
While I can't use let inline in either the implicit or the explicit class definition, I can use member inline just fine, which I assume does the same thing.
Apparently with the latest F# there's no longer any redundancy since any parameters not used in the class body are local to the constructor.
Will be gone in the next F# release.
This might not help, but you can make members inline. "member inline private" works fine.
For let inline, you can work around by moving it outside the class and explicitly passing any values you need from inside the scope of the class when calling it. Since it'll be inlined, there'll be no performance penalty for doing this.
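A small sketch of both workarounds together (the Calculator class and its members are made up):
let inline square x = x * x                  // let inline moved outside the class

type Calculator() =
  member inline private this.Twice v = v + v           // "member inline private" compiles fine
  member this.Compute (x: int) = this.Twice (square x) // uses both helpers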
