Differences between Macros and Reflection in Meta-Programming - ruby-on-rails

I'm currently studying meta-programming, and I gather that Ruby on Rails uses meta-programming heavily. Here is what I understand so far.
Macros: happen at compile time and use code to generate code (e.g. Ruby's attr_reader automatically sets up getters).
Reflection: happens at run time (I read that reflection uses its own language as a meta-language, but I'm not sure what this means).
Meta-programming uses a program as data to generate code; macros and reflection are techniques of meta-programming in some sense.
I have three questions in total.
I am having a hard time understanding what reflection is. Can someone provide a good definition of it, ideally with applicable examples?
What is the difference between macros and reflection?
Can I see macros and reflection as subsets of meta-programming, since the definitions of macros and meta-programming seem almost identical to me?
If you can explain this using Ruby/Rails, that would help me a lot. Thanks!

Meta-programming is using a program to manipulate a program. That is a very broad definition indeed, and pushing it to the extreme may even include automatic debugging and binary patch development.
The most obvious difference between macros and reflection is that the former uses a meta-language different from the object language (the language being manipulated), while the latter uses the object language itself as the meta-language. Beyond that, macros are more often associated with generation, such as conditional compilation and template expansion; reflection with inspection and manipulation, such as iterating over members and bypassing privacy restrictions. But the deeper one dives, the blurrier the boundary becomes.
In terms of Ruby on Rails, the most prominent meta-programming pattern is probably ActiveRecord. It uses reflection to list the classes and class members it needs to map to database tables and columns. Similarly, any ORM project involves a fair bit of meta-programming.
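Reflection looks much the same in any language that exposes it. Here is a purely illustrative sketch of the "member iteration" side (written in F# on .NET, with a made-up Article type); Ruby offers the same kind of inspection through methods such as instance_methods and instance_variable_get.

    // A made-up model type; in Rails this would be an ActiveRecord class.
    type Article() =
        member val Title = "" with get, set
        member val Body = "" with get, set

    // Reflection: at run time, the program inspects its own types,
    // much as an ORM lists members in order to map them to table columns.
    for prop in typeof<Article>.GetProperties() do
        printfn "column candidate: %s : %s" prop.Name prop.PropertyType.Name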
Rails uses meta-programming in a lot of other ways, but I am not familiar enough with it to list them. I stumbled upon this blog post while researching, which might help: Metaprogramming Ruby and Rails Antipatterns (Part 1 of 2). With meta-programming you can define methods whose names are only known at runtime and define attributes on demand.
Because meta-programming is such an overarching concept, I generally summarize it as "customizing the compiler / runtime". In the end, if you are trying to do something that needs information from the compiler or the runtime, a.k.a. from the running code itself, that is probably meta-programming.

Related

Categories library for Agda?

Are there any "recommended" libraries that provide an easy-to-use formalisation of basic category theory in Agda? The Agda standard library seems to provide very little in this regard.
I'm looking for something with a low barrier to entry, similar to how one uses the algebraic structures such as Semigroup defined in the standard library.
For example, there are several notions of morphism in my current project, and overloading syntax for composition and identity gets awkward. The natural thing to do would be to introduce a suitable record type and use Agda's "instance arguments" mechanism to emulate a Morphism type class. But no doubt this must be a wheel that has been invented many times. (Ok, there is a structure called Morphism in the standard library which perhaps could be adapted to this purpose, so this isn't necessarily the best example, but you get the idea.)
I'm aware of this library, which looks comprehensive, but doesn't seem to be particularly active.
This is an old question, but it still gets hits on Google and other search engines, so: the de facto library is now agda-categories.
I'm using the Categories library as mentioned above, and although I'm only using its basic features, it seems fine so far.

Standards for when to use custom operators

I'm currently reading Real-World Functional Programming, and it briefly mentions infix operators as one of the main benefits of custom operators. Are there any standards for when to use or not use custom operators in F#? I'm looking for answers equivalent to this.
For reference, here is the quote to which @JohnPalmer is referring, from here:
3.8 Operator Definitions
Avoid defining custom symbolic operators in F#-facing library designs.
Custom operators are essential in some situations and are highly useful notational devices within a large body of implementation code. For new users of a library, named functions are often easier to use. In addition custom symbolic operators can be hard to document, and users find it more difficult to lookup help on operators, due to existing limitations in IDE and search engines.
As a result, it is generally best to publish your functionality as named functions and members.
Custom infix operators are a nice feature in some situations, but when you use them, you have to be very careful to keep your code readable - so the recommendation from the F# design guidelines applies most of the time. If I were writing Real-World Functional Programming again, I would be a bit less enthusiastic about them, because they really should be used carefully :-).
That said, there are some F# libraries that make good use of custom operators, and sometimes they work quite nicely. I think FParsec (a parser combinator library) is one case - though maybe it has too many of them. Another example is an XML DSL which uses #=.
In general, when you're writing an ordinary F# library, you probably do not want to expose them. However, when you're writing a domain-specific language, custom operators might be useful.
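To make the trade-off concrete, here is a minimal sketch (the operator and the function names are made up): a custom infix operator next to an equivalent named function that is easier to discover and document.

    // A custom infix operator: List.map with the arguments flipped.
    let (|>>) xs f = List.map f xs

    // The same functionality as a named function.
    let mapList f xs = List.map f xs

    let viaOperator = [1; 2; 3] |>> (fun n -> n * 2)      // [2; 4; 6]
    let viaFunction = mapList (fun n -> n * 2) [1; 2; 3]  // [2; 4; 6]

In a public API the named function is usually the better citizen; the operator earns its keep inside implementation code or a densely used DSL.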

F# Type Providers vs. Lisp macros

I've been reading about F# 3.0 type providers (e.g. here) and it seems that they are based on a kind of compile-time code generation. In that respect I was wondering how they compare against Lisp macros. It would seem that both F# 3.0 type providers and Lisp macros allow user code to execute at compile time and introduce new types available to the compiler. Can anyone shed some light on the issue and nuances involved?
There is some overlap between F# type providers and meta-programming techniques from other languages, but I agree with Daniel that they do not have much in common. F# has some other meta-programming techniques like quotations that are perhaps closer to LISP macros.
In particular:
LISP macros are typically used to transform expressions (you can take a LISP expression and either interpret it, or transform it and then execute it). Note that the transformation takes a LISP expression as input; type providers, on the other hand, can only take very limited parameters (strings, integers).
Quotations are more similar. They can be used to process F# expressions - you can treat a piece of F# code as data and interpret or transform it. The transformation takes (a subset of) an F# expression, but it typically does not execute it (see the sketch after this list).
Type providers are used purely to generate types. Since LISP is dynamically typed, this is not really a problem you would have in LISP. However, it is a sort of code generation (a form of meta-programming that you can certainly do in LISP).
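To make the quotation point concrete, here is a minimal sketch (the printed text is only approximate): code inside <@@ ... @@> is not executed but handed to the program as a data structure it can inspect.

    open Microsoft.FSharp.Quotations.Patterns

    // Quote the code instead of running it.
    let quoted = <@@ 1 + 2 @@>

    let description =
        match quoted with
        | Call (_, methodInfo, args) ->
            sprintf "a call to %s with %d argument(s)" methodInfo.Name args.Length
        | Value (value, _) ->
            sprintf "a constant: %A" value
        | _ ->
            "some other expression shape"

    printfn "%s" description   // prints something like: a call to op_Addition with 2 argument(s)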
An interesting aspect of F# type providers is that they work not only at compile-time, but at design-time, that is, in a way that interacts with the full IDE tooling. Type Providers provide 'types' from an external schematized data source, but the implementation mechanism also enables lots of IDE tooling, including IntelliSense (identifier auto-completion), documentation, data tooltips, etc. Combined with the interactive REPL, this affords easy exploration of unfamiliar data sets in a way that is not quite like the experience in any other language.
F# type providers are a very specific case of compile-time code generation, i.e. they are meant to solve one specific kind of problem: they allow you to generate new types at compile time.
LISP macros are a more generic approach to meta-programming and hence cater to many more use cases. Macros basically take an S-expression (code or data) as input and emit another S-expression.
So a type provider could be implemented fairly easily using a macro, whereas you cannot cover the whole range of "what macros can do" with type providers.
I'm not familiar with Lisp macros, but macros in general are used for meta-programming (to save typing and to add control constructs to the language). Type providers, on the other hand, generate strongly-typed APIs for external data sources.
I can't think of anything besides compile time "expansion" that they have in common.
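For a feel of what "strongly-typed APIs for external data sources" looks like in practice, here is a hedged sketch using FSharp.Data's CsvProvider; people.csv is a hypothetical sample file with Name and Age columns (Age containing whole numbers).

    open FSharp.Data

    // The provider reads the sample file at compile/design time and
    // synthesizes a type whose rows expose Name and Age as typed properties.
    type People = CsvProvider<"people.csv">

    let people = People.GetSample()
    for row in people.Rows do
        printfn "%s is %d" row.Name row.Age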

How does "Language Oriented Programming" compare to OOP/Functional in the real world

I recently began to read some F#-related literature, e.g. Real-World Functional Programming and Expert F#. At the beginning it's easy, because I have some background in Haskell and know C#. But when it comes to "Language Oriented Programming" I just don't get it. I read some explanations, and it's like reading an academic paper that gets more abstract and stranger with every sentence.
Does anybody have an easy example of that kind of stuff, and how it compares to existing paradigms? It's not just academic fantasy, is it? ;)
Thanks,
wishi
Language-oriented programming (LOP) can be used to describe any of the following.
Creating an external language (DSL)
This is perhaps the most common use of LOP, and is where you have a specific domain - such as UPS shipping packages via transit types through routes, etc. Rather than trying to encode all of these domain-specific entities inside program code, you instead create a separate programming language for just that domain, so you can encode your problem in a separate, external language.
Creating an internal language
Sometimes you want your program code to look less like 'code' and map more closely to the problem domain - that is, to have the code 'read more naturally'. A fluent interface is an example of this: Fluent Interface. Also, F# has active patterns, which support this quite well (a small sketch follows).
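As an illustration of that last point, here is a minimal sketch (the domain and the names are made up) of an active pattern that lets the calling code read close to the problem domain.

    open System

    // An active pattern that classifies a day of the week.
    let (|Weekend|Weekday|) (day : DayOfWeek) =
        match day with
        | DayOfWeek.Saturday | DayOfWeek.Sunday -> Weekend
        | _ -> Weekday

    // The call site now reads like the domain, not like enum plumbing.
    let deliverySchedule day =
        match day with
        | Weekend -> "no deliveries"
        | Weekday -> "deliveries as usual"

    printfn "%s" (deliverySchedule DayOfWeek.Sunday)   // no deliveries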
I wrote a blog post on LOP a while back that provides some code examples.
F# has a few mechanisms for doing programming in a style one might call "language-oriented".
First, the syntax niceties (function calls don't need parentheses, you can define your own infix operators, ...) mean that many user-defined libraries have the appearance of embedded DSLs.
Second, the F# "quotations" mechanism can enable you to quote code and then run it with an alternative semantics/evaluation engine.
Third, F# "computation expressions" (a.k.a. workflows, monads, ...) also provide a way to give certain blocks of code an alternative semantics.
All of these kinda fall into the EDSL category; a small computation-expression sketch follows.
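Here is a minimal sketch of that third mechanism (the maybe builder below is a standard textbook example, not a library type): a tiny computation expression that gives let! an alternative meaning, short-circuiting on None.

    // The builder defines what 'let!' and 'return' mean inside a maybe { } block.
    type MaybeBuilder() =
        member _.Bind(value, f) = Option.bind f value
        member _.Return(value) = Some value

    let maybe = MaybeBuilder()

    let tryDivide x y =
        if y = 0 then None else Some (x / y)

    let result =
        maybe {
            let! a = tryDivide 10 2
            let! b = tryDivide a 0   // None here short-circuits the whole block
            return a + b
        }

    printfn "%A" result   // None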
In Object Oriented Programming, you try to model a problem using objects. You can then connect those objects together to perform functions and, in the end, solve the original problem.
In Language Oriented Programming, rather than use an existing Object Oriented or Functional Programming Language, you design a new Domain Specific Language that is best suited to efficiently solve your problem.
The term Language Oriented Programming may be overloaded, in that it might mean different things to different people.
But in terms of how I've used it, it means that you create a DSL (http://en.wikipedia.org/wiki/Domain_Specific_Language) before you start to solve your problem.
Once your DSL is created you would then write your program in terms of the DSL.
The idea being that your DSL is more suited to expressing the problem than a General purpose language would be.
Some examples would be makefile syntax or Ruby on Rails' ActiveRecord class.
I haven't directly used language oriented programming in real-world situations (creating an actual language), but it is useful to think about and helps design better domain-driven objects.
In a sense, any real-world development in Lisp or Scheme can be considered "language-oriented", since you are developing the "language" of your application and its abstract syntax tree as you code along. Cucumber is another real-world example I've heard about.
Please note that there are some problems with this approach (and with any domain-driven approach) in real-world development. One major problem I've dealt with before is the mismatch between the logic that makes sense in the domain and the logic that makes sense in software. Domain (business) logic can be extremely convoluted and senseless, and that causes domain models to break down.
An easy example of a domain-specific language, mentioned here, is SQL. Also: UNIX shell scripts.
Of course, if you are doing a lot of basic ops and have a lot of overlap with the underlying language, it is probably overengineering.

If you already know LISP, why would you also want to learn F#?

What is the added value for learning F# when you are already familiar with LISP?
Static typing (with type inference)
Algebraic data types
Pattern matching
Extensible pattern matching with active patterns.
Currying (with a nice syntax)
Monadic programming, called 'workflows', provides a nice way to do asynchronous programming.
A lot of these are relatively recent developments in the programming language world. This is something you'll see in F# that you won't in Lisp, especially Common Lisp, because the F# standard is still under development. As a result, you'll find there is quite a bit to learn. Of course things like ADTs, pattern matching, monads and currying can be built as a library in Lisp, but it's nicer to learn how to use them in a language where they are conveniently built in (a short sketch follows).
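To give a flavour of a few items from that list, here is a minimal sketch (the types are made up) showing an algebraic data type, pattern matching over it, and currying via partial application.

    // Algebraic data type (a discriminated union).
    type Shape =
        | Circle of radius : float
        | Rectangle of width : float * height : float

    // Pattern matching over the cases.
    let area shape =
        match shape with
        | Circle r -> System.Math.PI * r * r
        | Rectangle (w, h) -> w * h

    // Currying: 'scale' takes its arguments one at a time,
    // so partially applying it yields a new function.
    let scale factor shape =
        match shape with
        | Circle r -> Circle (factor * r)
        | Rectangle (w, h) -> Rectangle (factor * w, factor * h)

    let double = scale 2.0                       // partial application
    printfn "%f" (area (double (Circle 1.0)))    // 12.566371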
The biggest advantage of learning F# for real-world use is its integration with .NET.
Comparing Lisp directly to F# isn't really fair, because at the end of the day with enough time you could write the same app in either language.
However, you should learn F# for the same reasons that a C# or Java developer should learn it - because it allows functional programming on the .NET platform. I'm not 100% familiar with Lisp, but I assume it has some of the same problems as OCaml, in that there isn't stellar library support. How do you do database access in Lisp? What about high-performance graphics?
If you want to learn more about 'Why .NET', check out this SO question.
If you knew F# and Lisp, you'd find this a rather strange question to ask.
As others have pointed out, Lisp is dynamically typed. More importantly, the unique feature of Lisp is that it's homoiconic: Lisp code is a fundamental Lisp data type (a list). The macro system takes advantage of that by letting you write code which executes at compile-time and modifies other code.
F# has nothing like this - it's a statically typed language that borrows a lot of ideas from ML and Haskell and runs on .NET.
What you are asking is akin to "Why do I need to learn to use a spoon if I know how to use a fork?"
Given that LISP is dynamically typed and F# is statically typed, I find such comparisons strange.
If I were switching from Lisp to F#, it would be solely because I had a task on my hands that hugely benefitted from some .NET-only library.
But I don't, so I'm not.
Money. F# code is already more valuable than Lisp code and this gap will widen very rapidly as F# sees widespread adoption.
In other words, you have a much better chance of earning a stable income using F# than using Lisp.
Cheers,
Jon Harrop.
F# is a very different language from most Lisp dialects, so F# gives you a very different angle on programming - an angle that you won't learn from Lisp. Most Lisp dialects are best used for incremental, interactive development of symbolic software. At the same time, most Lisp dialects are not functional programming languages but multi-paradigm languages, with different dialects placing different weight on supporting FP features (freedom from side effects, immutable data structures, algebraic data types, ...). Thus most Lisp dialects either lack static typing or don't put much emphasis on it.
So, if you know some Lisp dialect, then learning F# can make a lot of sense. Just don't think that much of your Lisp knowledge applies to F#, since F# is a very different language. Much as an imperative programmer used to C or Java needs to unlearn some ideas when learning Lisp, one also needs to unlearn Lisp habits (no types, side effects, macros, ...) when using F#. F# is also driven by Microsoft and takes advantage of the .NET framework.
F# has the benefit that .NET development (in general) is very widely adopted, easily available, and more mass market.
If you want to code in F#, you can get Visual Studio, which many developers will already have - as opposed to getting a LISP environment up and running.
Additionally, existing .NET developers are much more likely to look at F# than LISP, if that means anything to you.
(This is coming from a .NET developer who coded, and loved, LISP, while in college).
I'm not sure that you would. If you find F# interesting, that would be a reason. If your work requires it, that would be a reason. If you think it would make you more productive or bring you added value over your current knowledge, that would be a reason.
But if you don't find F# interesting, your work doesn't require it, and you don't think it would make you more productive or bring you added value, then why would you?
If, on the other hand, the question is what F# gives you that Lisp doesn't, then type inference, pattern matching and integration with the rest of the .NET framework should be considered.
I know this thread is old, but since I stumbled on it I just wanted to add my reasons. I am learning F# simply for professional opportunities, since .NET carries a lot of weight in a category of companies that dominate my field. The functional paradigm has been growing in use among the more quantitative and data-oriented companies, and I'd like to be one of the early adopters of this trend. Currently there doesn't exist a strong functional language that fully and safely integrates with the .NET library. I actually attempted to use some .NET from Lisp code, and it's really a pain, because the FFI only supports C primitives and .NET interoperability requires an 'interface' construct; even though I know how to do this in C, it's a huge pain.
It would be really, really good if Lisp went the extra mile in its next standard and required support for a C++-style class (including virtual functions with vtables) and a C#-style interface type in its FFI - maybe even throw in a Java-style interface type too. This would allow complete interoperability with the .NET library and make Lisp a strong contender as a large-scale language. With that said, coming from a Lisp background made learning F# rather easy, and I like how F# has gone the extra mile to provide types that you would commonly see in quantitative work. I believe F# was created with mathematical work in mind, and that in itself has value over Lisp.
One way to look at this (the original question) is to match up the language (and associated tools and platforms) to the immediate task. If the task requires an overwhelming percentage of .NET code, and it would require less shoe-horning in one language than another to meet the task head-on, then take the path of least resistance (F#). If you don't need .NET capabilities, and you're comfortable working with LISP and there's no arm-bending to move away from it, keep using it.
Not really much different from comparing a hammer with a wrench. Pick the tool that fits the job most effectively. Trying to pick a tool that's objectively "best" is nonsense. And in any case, in 20 years, all of the currently "hot" languages might be outdated anyway.
