Are protocol buffers usable with F#?

Just curious - are protocol buffers usable with F#? Any caveats etc.?

I'm just trying to answer this question myself.
Marc Gravell's protobuf-net project worked out-of-the-box with F# because it uses standard .NET idioms. You can use attributes to get serialization working without having to write .proto files or do any two-stage compilation, or you can generate the necessary code from standard .proto files. Performance is good for .NET but a lot slower than alternatives like OCaml's built-in Marshal module. However, this library forces you to make every field in every message type mutable. This is really counter-productive because messages should be immutable. Also, the documentation leaves a lot to be desired but, then, this is free software.
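For illustration, a minimal sketch of the attribute-driven approach from F# (the Person type is made up and assumes protobuf-net is referenced; note the mutable properties, which is exactly the part I find counter-productive):

    open ProtoBuf

    // Made-up message type in the mutable, parameterless-constructor style
    // that protobuf-net expects.
    [<ProtoContract>]
    type Person() =
        [<ProtoMember(1)>] member val Name = "" with get, set
        [<ProtoMember(2)>] member val Age  = 0  with get, set

    // Round-trip an instance through a MemoryStream.
    let roundTrip (p : Person) =
        use ms = new System.IO.MemoryStream()
        Serializer.Serialize(ms, p)
        ms.Position <- 0L
        Serializer.Deserialize<Person>(ms)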
I haven't managed to get Jon Skeet's protobuf-csharp-port library to work at all yet.
Ideally, you'd be able to serialize all of the built-in F# types (tuples, records, unions, lists, sets, maps, ...) to this wire format out-of-the-box but none of the existing open source solutions are capable of this. I'm also concerned by the complexity of these solutions: Jon Skeet's is 88,000 lines of C# code and comments (!).
As an aside, I am disappointed to see that Google protocol buffers do not specify standard formats for DateTime or decimal numbers.
I haven't looked at Proto# yet and cannot even find a download for Froto. There is also ProtoParser but it just parses .proto files and cannot actually serialize anything.

There isn't an F# specific one listed here, but there is an OCaml one, or there is a .NET "general" one (protobuf-net).
In all honesty, I simply haven't gotten around to trying protobuf-net with F# objects, in part because I don't know enough F#, but if you can create POCOs they should work. They would need to have some kind of mutability (perhaps even just private mutability) to work with protobuf-net, though.
If you are happy to generate a C# DTO and just consume that from F#, then protobuf-net or Jon's port should work just fine.

I'd expect both my own port and Marc Gravell's to work just fine with F#, to the same extent that any other .NET library does. In other words, neither port is written in a way which is likely to produce idiomatic F# code, but they should work.
My port will generate C# code, so you'll need to build that as a separate project for your serialization model - but that should interoperate with F# without any problems. The generated types are immutable (with mutable builders) so that should help in an F# context.
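As a rough sketch, consuming the generated types from F# looks something like this (Person is a hypothetical message with Name and Id fields generated from a .proto file; the exact method names depend on your message definition):

    // Build an immutable message via its generated builder.
    let person =
        Person.CreateBuilder()
              .SetName("Ada")
              .SetId(1)
              .Build()

    // Serialize to bytes and parse back.
    let bytes   = person.ToByteArray()
    let person2 = Person.ParseFrom(bytes)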
Of course, you could always take the core parts of either project and come up with an idiomatic F# solution too - whether you port the whole project to F# or use the existing libraries with an F# code generator and helper functions, or something like that.

Related

Is PSeq the correct approach to use in F# 3.0?

I've been using the F# powerpack for some time and it's got the great PSeq module in and it works really well.
However, it doesn't smell right, as this is an extension to the F# libraries rather than a standard library. Is there a more 'correct' way to perform general parallelism in F#?
I know this is quite a general question, but I can't seem to find a recent (e.g. 2013) definitive answer anywhere else.
As mentioned in the comments the PSeq module is just a thin wrapper over the Parallel LINQ library. The main purpose is to give you a nice syntax, compatible with pipelining and the style used with Seq and other standard modules.
So, if Parallel LINQ is the right thing in your case, then PSeq module is the best way to use it from F#. That said, PLINQ only improves the performance if your collections have the right structure - you need to process fairly large collections and the operations that you are running are non-trivial (otherwise, there is a lot of overhead).
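As a minimal sketch of that pipelined style (assuming the PowerPack's PSeq module is referenced; the namespace may differ depending on which copy of PSeq you use, and the squaring workload is just a placeholder):

    open Microsoft.FSharp.Collections

    // Map and reduce a large array in parallel; PLINQ does the work underneath.
    let sumOfSquares (xs : int[]) =
        xs
        |> PSeq.map (fun x -> x * x)
        |> PSeq.sum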
Also, the easiest way to use PSeq is probably just to copy PSeq.fs and PSeq.fsi into your project (they are part of the F# PowerPack, but if you only care about PSeq, then copying the two files is all you need).

How to build FParsec for .NET Compact Framework?

I’m writing a small application based on FParsec.
Today, I’m looking for an opportunity to make a version for Compact Framework.
Apparently, it is not that simple to build FParsec sources for .NET CF. The FParsecCS library has unsafe code and some references to the types that are not available in CF. Namely, System.Collections.Generic.HashSet, System.Text.DecoderFallbackException, and more.
I’m wondering if there’s any way to make it build. Obviously, I’m trying not to alter the code, as that would make it hard to update when further versions of FParsec are released.
I don’t really care about performance. If there is a generic CharStream that can be used instead of the high-performance one you have, that would be quite sufficient.
Thank you for your help.
I don’t have any experience with .NET CF and never tried to make FParsec run on it. However, there’s a Silverlight version of FParsec, which might be a good starting point for a port to .NET CF. The Silverlight version builds on the LOW_TRUST version of FParsec, which doesn’t use any "unsafe" code. Hopefully, the stream size limitation of the LOW_TRUST version won’t be an issue for your application.
The easiest way to deal with the HashSet dependency probably is to implement your own simple HashSet type (based on a Dictionary) that implements the few methods that FParsec actually uses for its error handling. If the DecoderFallbackException is not supported, you can just comment out the respective exception handlers.
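For instance, a bare-bones HashSet stand-in could look something like this (shown in F# for brevity, even though FParsecCS itself is C#; the member list here is only a guess and would need to be matched against what FParsec actually calls):

    // Minimal Dictionary-backed set; only a guess at the members needed.
    type SimpleHashSet<'T when 'T : equality>() =
        let items = System.Collections.Generic.Dictionary<'T, int>()
        member this.Add(item : 'T) =
            if items.ContainsKey item then false
            else
                items.[item] <- 0
                true
        member this.Contains(item : 'T) = items.ContainsKey item
        member this.Count = items.Count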
If you track your changes with HG, it shouldn’t be difficult to merge in updates to FParsec. Depending on how extensive the changes for .NET CF are, I could also include them in the main source tree under another conditional compiler symbol.

Is it possible to convert F# code to C# code?

The reason why I am asking is that I'm learning F# and would like to attend TopCoder competitions. However, F# is not on the list of languages supported there. But C# is (to be honest, this is the case for almost all online coding competitions, except Google Code Jam and Facebook Hacker Cup).
The possible workarounds I can think of at this moment are
1) find a translator that can translate F# source code directly into C#
2) compile F# code into .net executable first, then disassemble it back to C# code
The minimum requirement is that the generated C# must compile into a runnable .NET executable, preferably with as few external dependencies as possible.
The first approach seems unlikely; a quick Google search turns up nothing relevant.
Approach two looks more promising, since .NET decompilers do exist.
I tried the most popular one --- Reflector from Red Gate. While it can decompile C# executables perfectly, it appears to have problems with executables compiled from F#: it happily decompiles them, but the resulting C# code has some oddities, such as a leading $ sign added to a class name, so it cannot be compiled. I was using Visual Studio 2010 Professional and the latest Reflector beta version (which is free).
Am I missing anything here? Is it possible?
Update:
It looks like this is still impossible. For now, I'll use C# instead.
As others have already pointed out in the comments - if there were some way to do that, there would be quite a few nasty cases where it probably wouldn't quite work, and it would be very fragile...
One way to deal with the problem (for you) is to just write the solution in F# and then rewrite it to C#. This may sound stupid, but there are some advantages:
In F#, you can easily prototype the solution, so you'll be able to find the right solution faster.
When translating code to C#, you'll probably find yourself using features like lambda expressions more often, so it may even improve your C# skills...
If you rely on .NET libraries, then this part of code will be easy to translate.
Of course, the best thing would be to convince the organizers that they should support F# (which probably wouldn't be too difficult if they allow C# already), but I understand that this may be a challenge.

Why should I care about RTTI in Delphi?

I've heard a lot about the new/improved RTTI capabilities of Delphi 2010, but I must admit my ignorance...I don't understand it. I know every version of Delphi has supported RTTI...and I know that RTTI (Runtime Type Information) allows me to access type information while my application is running.
But what exactly does that mean? Is Delphi 2010's RTTI support the same thing as reflection in .NET?
Could someone please explain why RTTI is useful? Pretend I'm your pointy haired boss and help me understand why RTTI is cool. How might I use it in a real-world application?
RTTI in Delphi is still not quite as full-featured as Reflection in .NET or other managed languages, because it is operating on compiled code, not an Intermediate Language (bytecode). However, it is a very similar concept, and the new RTTI system in Delphi 2010 brings it a lot closer to reflection, exposing an entire object-oriented API.
Pre-D2010, the RTTI was pretty limited. About the only thing I ever remember doing with it was converting an enumerated type to a string (or vice versa) for use in drop-down lists. I may have used it at one point for control persistence.
With the new RTTI in D2010 you can do a lot more things:
XML Serialization
Attribute-based metadata (TCustomAttribute). Typical use cases would be automatic validation of properties and automated permission checks, two things that you normally have to write a lot of code for.
Adding Active Scripting support (i.e. using the Windows script control)
Building a plug-in system; you could do this before, but there were a lot of headaches. I wasn't able to find a really good example of someone doing this from top to bottom, but all of the necessary functions are available now.
It looks like someone's even trying to implement Spring (DI framework) for Delphi 2010.
So it's definitely very useful, although I'm not sure how well you'd be able to explain it to a PHB; most of its usefulness is probably going to be realized through 3rd-party libraries and frameworks, much the same way it works in the .NET community today - it's rare to see reflection code sitting in the business logic, but a typical app will make use of several reflection-based components like an Object-Relational Mapper or IoC Container.
Have I answered the question?
D2010's extended RTTI is a lot like C#'s reflection. It gives you the ability to get at any field of an object, or inspect its methods. This has all sorts of potential uses. For example, if you can read any field of an object, you can write serialization code that can work with any object. And the ability to inspect methods and obtain their name and signature makes a class much easier to register with a scripting engine.
To me, that's the primary advantage of extended RTTI: The ability to write code that works with any class by examining its members, instead of writing different versions of the same code over and over, tailored to each individual class.
Most people probably won't use it in a real world application.
The people who will use it are the framework builders. Frameworks such as DUnit make extensive use of RTTI.
With the new RTTI capabilities we should expect to start seeing more advanced frameworks and tools appearing, similar to what is available for .NET. These frameworks will revolutionise your development more so than RTTI will on its own.
RTTI has been important in Delphi ever since version 1.0. Classic RTTI features include the "published" properties section of classes, which is what allows the Object Inspector and the component design-time features to work. For my own purposes, I would often use published class properties so that I could enumerate those properties at runtime and persist my objects to disk.
The Delphi 2010 RTTI extends this classic RTTI massively, so much so that you could be forgiven for thinking Delphi did not even have RTTI until Delphi 2010.
I would say the #1 most useful applications of "The New RTTI" are (as several other answers already state) going to be in Frameworks written by the gurus, that:
Handle persistence to files or databases. Database and configuration or document saving/loading frameworks and components would use this under the hood.
Handle pickling/marshalling/encoding/decoding to and from various over-the-wire formats, like JSON, XML, EDI, and other things.
Unit testing was mentioned by someone else (JUnit), but I think perhaps the same frameworks could be really handy for debug and error reporting tools. Given an object passed as a parameter, on the stack, why not have bug reports that can dump all the data that was passed along to a function that failed, and not just a list of functions?
As you can see, some creative people are likely to think of even more uses for this. You could say that, although it does not bring parity with .NET reflection (which another answer speaks more about), it does bring a lot of "dynamic language" features (think of Perl, Python, JavaScript) to the otherwise strongly, statically typed world of Delphi.
For me, personally, extended RTTI made it possible to retrieve the calling convention from a method pointer. However, that code is currently under a conditional directive, because I am not satisfied with it.
(Criticism and suggestions on a workaround using basic RTTI are welcome, though.)
Look at TMS Aurelius and you will see that RTTI attributes are very useful for building an ORM database framework, as well as for XML serialization of plain objects and back again.
You are supposed to care because they put it on the box. Clearly they think that some people will care.
Whether you actually have a use for it is entirely dependent on the nature of your projects. Since you didn't have it before and don't understand why having it now is a benefit, this would suggest to me that you don't have a use for it. It's then up to you whether to spend the time researching the subject further in order to discover whether you might be able to find a use for it.
Whether that is the most productive use of your time in relation to your projects, again is something only you can know.

Does F# provide you automatic parallelism?

By this I mean: when you design your app to be side-effect free, etc., will F# code be automatically distributed across all cores?
No, I'm afraid not. Given that F# isn't a pure functional language (in the strictest sense), it would be rather difficult to do, I believe. The primary way to make good use of parallelism in F# is to use Async Workflows (mainly via the Async module, I believe). The TPL (Task Parallel Library), which is being introduced with .NET 4.0, is going to fulfil a similar role in F# (though notably it can be used equally well in all .NET languages), though I can't say I'm sure exactly how it's going to integrate with the existing async framework. Perhaps Microsoft will simply advise the use of the TPL for everything, or maybe they will leave both as an option and one will eventually become the de facto standard...
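For example, here is a minimal sketch of making the parallelism explicit with async workflows (the workload is a trivial placeholder):

    // Fan the work out explicitly; nothing here is parallelised automatically.
    let work n = async { return n * n }

    let results =
        [ 1 .. 1000 ]
        |> List.map work
        |> Async.Parallel
        |> Async.RunSynchronously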
Anyway, here are a few articles on asynchronous programming/workflows in F# to get you started.
http://blogs.msdn.com/dsyme/archive/2007/10/11/introducing-f-asynchronous-workflows.aspx
http://strangelights.com/blog/archive/2007/09/29/1597.aspx
http://www.infoq.com/articles/pickering-fsharp-async
F# does not make it automatic; it just makes it easy.
Yet another chance to link to Luca's PDC talk. Eight minutes starting at 52:20 are an awesome demo of F# async workflows. It rocks!
No, I'm pretty sure that it won't automatically parallelise for you. It would have to know that your code was side-effect free, which could be hard to prove, for one thing.
Of course, F# can make it easier to parallelise your code, particularly if you don't have any side effects... but that's a different matter.
Like the others mentioned, F# will not automatically scale across cores and will still require a framework such as the port of ParallelFX that Josh mentioned.
F# is commonly associated with potential for parallel processing because it defaults to objects being immutable, removing the need for locking in many scenarios.
On purity annotations: Code Contracts have a Pure attribute. I remember hearing that some parts of the BCL already use it. Potentially, this attribute could be used by parallelization frameworks as well, but I'm not aware of such work at this point. Also, I'm not even sure how well Code Contracts are usable from within F#, so there are a lot of unknowns here.
Still, it will be interesting to see how all this stuff comes together.
No it will not. You must still explicitly marshal calls to other threads via one of the many mechanisms supported by F#.
My understanding is that it won't, but the Parallel Extensions are being modified to make them consumable from F#. That won't make your code automatically multi-threaded, but it should make parallelism very easy to achieve.
Well, you have your answer, but I just wanted to add that I think this is the most significant limitation of F# stemming from the fact that it is a hybrid imperative/functional language.
I would like to see some extension to F# that declares a function to be pure. That is, it has no side-effects that are not denoted by the function's type. The idea would be that a function is pure only if it references other "known-pure" functions. Of course, this would only be useful if it were then possible to require that a delegate passed as a function parameter references a pure function.
