Lua - Overloading of Comparison Operators

I was just wondering whether there is any future plan for Lua to allow users to return a userdata type when overloading comparison operators such as >, < and ==. Currently, the result of every comparison operator is converted to a boolean, and in my opinion this rules out many shortcut expressions, especially when designing code for scientific purposes: you may write matrix1 < matrix2 and want the result to be a matrix.
It is possible to work around this by writing matrix3 = matrix1:less(matrix2); however, many engineers and scientists don't enjoy this kind of syntax and prefer the simple way: matrix3 = matrix1 < matrix2.
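For illustration, a minimal Lua sketch of that method-based workaround; the Matrix table and its flat-list representation here are made up for the example:

-- Hypothetical matrix type: element-wise "less than" returning a new
-- matrix of 0/1 flags instead of a single boolean.
local Matrix = {}
Matrix.__index = Matrix

function Matrix.new(values)
  return setmetatable({ values = values }, Matrix)
end

function Matrix:less(other)
  local result = {}
  for i, v in ipairs(self.values) do
    result[i] = (v < other.values[i]) and 1 or 0
  end
  return Matrix.new(result)
end

local matrix1 = Matrix.new({1, 5, 3})
local matrix2 = Matrix.new({2, 4, 9})
local matrix3 = matrix1:less(matrix2)
print(table.concat(matrix3.values, " "))  --> 1 0 1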
C++ allows this kind of flexibility and, as far as I know, so does Python. I was wondering whether this is simply not how Lua was designed from the very beginning, or whether it was a preference for some other reason, and therefore whether future versions of Lua might allow this flexibility.

I'm not a Lua author, but I can guarantee that >, < and == in Lua will keep returning booleans, even in future versions.
As you have mentioned, C++ and Python allow this. Python is close to C++ in philosophy in much the same way that Lua is close to C. So for those who like C++, what you describe is a missing feature in Lua, while for C programmers it is one of the scary parts of OOP languages.
Just imagine someone else reading your program :)
matrix3 = matrix1 < matrix2
What? The < operator builds a matrix instead of doing a comparison?
matrix3 = m1 > m2
matrix3 = m1 == m1
matrix3 = m1 ~= m2
If a language supports such things, just kill it with fire (from the C philosophy's point of view).
Lua is the only scripting language I know with such well-designed abstractions: even though Lua has metatables and metamethods, a programmer can still read the code and understand what is going on.
My point is that a well-designed language must rescue programmers from themselves.


Why is it that Lua doesn't have augmented assignment?

This is a question I've been mildly irritated about for some time and just never got around to searching for the answer to.
However I thought I might at least ask the question and perhaps someone can explain.
Basically, many languages I've worked in use syntactic sugar that lets you write (using C++ syntax):
int main() {
    int a = 2;
    a += 3; // a = a + 3
}
while in Lua += is not defined, so I would have to write a = a + 3, which again is all about syntactic sugar. When using a more "meaningful" variable name such as bleed_damage_over_time or something, it starts getting tedious to write:
bleed_damage_over_time = bleed_damage_over_time + added_bleed_damage_over_time
instead of:
bleed_damage_over_time += added_bleed_damage_over_time
So I would like to know why Lua doesn't implement this syntactic sugar, rather than how to solve it; if you do have a nice solution, of course, I would be interested in hearing it.
This is just guesswork on my part, but:
1. It's hard to implement this in a single-pass compiler
Lua's bytecode compiler is implemented as a single-pass recursive descent parser that immediately generates code. It does not parse to a separate AST structure and then in a second pass convert that to bytecode.
This forces some limitations on the grammar and semantics. In particular, anything that requires arbitrary lookahead or forward references is really hard to support in this model. This means assignments are already hard to parse. Given something like:
foo.bar.baz = "value"
When you're parsing foo.bar.baz, you don't realize you're actually parsing an assignment until you hit the = after you've already parsed and generated code for that. Lua's compiler has a good bit of complexity just for handling assignments because of this.
Supporting self-assignment would make that even harder. Something like:
foo.bar.baz += "value"
Needs to get translated to:
foo.bar.baz = foo.bar.baz + "value"
But at the point that the compiler hits the =, it's already forgotten about foo.bar.baz. It's possible, but not easy.
2. It may not play nice with the grammar
Lua doesn't actually have any statement or line separators in the grammar. Whitespace is ignored and there are no mandatory semicolons. You can do:
io.write("one")
io.write("two")
Or:
io.write("one") io.write("two")
And Lua is equally happy with both. Keeping a grammar like that unambiguous is tricky. I'm not sure, but self-assignment operators may make that harder.
3. It doesn't play nice with multiple assignment
Lua supports multiple assignment, like:
a, b, c = someFnThatReturnsThreeValues()
It's not even clear to me what it would mean if you tried to do:
a, b, c += someFnThatReturnsThreeValues()
You could limit self-assignment operators to single assignment, but then you've just added a weird corner case people have to know about.
With all of this, it's not at all clear that self-assignment operators are useful enough to be worth dealing with the above issues.
I think you could just rewrite this question as
Why doesn't <languageX> have <featureY> from <languageZ>?
Typically it's a trade-off that the language designers make based on their vision of what the language is intended for, and their goals.
In Lua's case, the language is intended to be an embedded scripting language, so any changes that make the language more complex or potentially make the compiler/runtime even slightly larger or slower may go against this objective.
If you implement each and every tiny feature, you can end up with a 'kitchen sink' language: Ada, anyone?
And as you say, it's just syntactic sugar.
Another reason why Lua doesn't have self-assignment operators is that table access can be overloaded with metatables to have arbitrary side effects. For self-assignment you would need to choose whether to desugar
foo.bar.baz += 2
into
foo.bar.baz = foo.bar.baz + 2
or into
local tmp = foo.bar
tmp.baz = tmp.baz + 2
The first version runs the __index metamethod for foo twice, while the second one does so only once. Not including self-assignment in the language and forcing you to be explicit helps avoid this ambiguity.
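To make the difference observable, here is a small Lua sketch (the tables and counter are made up for the example) where foo's __index metamethod counts how many times it runs; the two desugarings give different counts:

-- An outer table whose __index metamethod has a visible side effect.
local lookups = 0
local inner = { baz = 40 }
local foo = setmetatable({}, {
  __index = function(_, key)   -- runs on every read of a missing key in foo
    lookups = lookups + 1
    if key == "bar" then return inner end
  end
})

-- Desugaring 1: foo.bar is read on both sides of the assignment.
foo.bar.baz = foo.bar.baz + 2
print(lookups)                 --> 2

-- Desugaring 2: the intermediate table is read only once.
lookups = 0
local tmp = foo.bar
tmp.baz = tmp.baz + 2
print(lookups)                 --> 1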

F# special quotes? (##)

I just ran across
http://frankniemeyer.blogspot.com/2010/04/minimalistic-native-64-bit-array.html
Which contains the line
(# "sizeof !0" type('T) : nativeint #)
I believe the technical phrase is "what the heck?" I have never in my (~8 months) of F# programming run across something even resembling that...
FSI tells me something about deprecated constructs, used only for F# libs...
And googling for (# does, uh... well, not much.
Any direction on this?
This is the notation for inline IL emission. It used to be a more prominent feature during F#'s earlier years, but has been deprecated. A gentleman named Brian from the F# team has indicated that it is currently used mainly to bootstrap the F# compiler, and that the team had intended to mark this construct as an error, not merely a warning.
See his post here for the full story.
It's inline IL (intermediate language) code. This construct is used internally by the F# team to implement bits of the F# core library you just can't do any other way. This code will emit a warning saying it shouldn't be used anywhere other than the F# core libraries, so you probably don't have to worry about it too much, as it should never appear in production code.
Fascinating. But I think F# already gives us the conversion operations (for this particular operation!) you need without resorting to IL.
[<Unverifiable>]
let inline ArrayOffset (itemSize:int64) (length:int64) (start:int64) (idx:int64) =
    if idx < 0L || idx >= length then raise(IndexOutOfRangeException())
    NativePtr.ofNativeInt(nativeint(start + (idx * itemSize)))

What are advantages and disadvantages of "point free" style in functional programming?

I know that in some languages (Haskell?) the goal is to achieve point-free style, that is, to never explicitly refer to function arguments by name. This is a very difficult concept for me to master, but it might help if I understood what the advantages (or maybe even disadvantages) of that style are. Can anyone explain?
The point-free style is considered by some author as the ultimate functional programming style. To put things simply, a function of type t1 -> t2 describes a transformation from one element of type t1 into another element of type t2. The idea is that "pointful" functions (written using variables) emphasize elements (when you write \x -> ... x ..., you're describing what's happening to the element x), while "point-free" functions (expressed without using variables) emphasize the transformation itself, as a composition of simpler transforms. Advocates of the point-free style argue that transformations should indeed be the central concept, and that the pointful notation, while easy to use, distracts us from this noble ideal.
Point-free functional programming has been available for a very long time. It was already known to the logicians who studied combinatory logic after the seminal work of Moses Schönfinkel in 1924, and it was the basis of the first study of what would become ML type inference, by Robert Feys and Haskell Curry in the 1950s.
The idea of building functions from an expressive set of basic combinators is very appealing and has been applied in various domains, such as the array-manipulation languages derived from APL, or parser combinator libraries such as Haskell's Parsec. A notable advocate of point-free programming is John Backus. In his 1978 Turing Award paper "Can Programming Be Liberated from the von Neumann Style?", he wrote:
The lambda expression (with its substitution rules) is capable of
defining all possible computable functions of all possible types
and of any number of arguments. This freedom and power has its
disadvantages as well as its obvious advantages. It is analogous
to the power of unrestricted control statements in conventional
languages: with unrestricted freedom comes chaos. If one
constantly invents new combining forms to suit the occasion, as
one can in the lambda calculus, one will not become familiar with
the style or useful properties of the few combining forms that
are adequate for all purposes. Just as structured programming
eschews many control statements to obtain programs with simpler
structure, better properties, and uniform methods for
understanding their behavior, so functional programming eschews
the lambda expression, substitution, and multiple function
types. It thereby achieves programs built with familiar
functional forms with known useful properties. These programs are
so structured that their behavior can often be understood and
proven by mechanical use of algebraic techniques similar to those
used in solving high school algebra problems.
So here they are. The main advantage of point-free programming is that it forces a structured combinator style which makes equational reasoning natural. Equational reasoning has been particularly advertised by the proponents of the "Squiggol" movement (see [1] [2]), which indeed uses a fair share of point-free combinators and computation/rewriting/reasoning rules.
[1] "An Introduction to the Bird-Meertens Formalism", Jeremy Gibbons, 1994
[2] "Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire", Erik Meijer, Maarten Fokkinga and Ross Paterson, 1991
Finally, one cause for the popularity of point-free programming among Haskellites is its relation to category theory. In category theory, morphisms (which could be seen as "transformations between objects") are the basic object of study and computation. While partial results allow reasoning in specific categories to be performed in a pointful style, the common way to build, examine and manipulate arrows is still the point-free style, and other syntaxes such as string diagrams also exhibit this "pointfreeness". There are rather tight links between the people advocating "algebra of programming" methods and users of categories in programming (for example the authors of the banana paper [2] are/were hardcore categorists).
You may be interested in the Pointfree page of the Haskell wiki.
The downside of pointfree style is rather obvious: it can be a real pain to read. The reason why we still love to use variables, despite the numerous horrors of shadowing, alpha-equivalence etc., is that it's a notation that's just so natural to read and think about. The general idea is that a complex function (in a referentially transparent language) is like a complex plumbing system: the inputs are the parameters, they get into some pipes, are applied to inner functions, duplicated (\x -> (x,x)) or forgotten (\x -> (), pipe leading nowhere), etc. And the variable notation is nicely implicit about all that machinery: you give a name to the input, and names on the outputs (or auxiliary computations), but you don't have to describe all the plumbing plan, where the small pipes will go not to be a hindrance for the bigger ones, etc. The amount of plumbing inside something as short as \(f,x,y) -> ((x,y), f x y) is amazing. You may follow each variable individually, or read each intermediate plumbing node, but you never have to see the whole machinery together. When you use a point-free style, all the plumbing is explicit, you have to write everything down, and look at it afterwards, and sometimes it's just plain ugly.
PS: this plumbing vision is closely related to the stack programming languages, which are probably the least pointful programming languages (barely) in use. I would recommend trying to do some programming in them just to get a feeling for it (as I would recommend logic programming). See Factor, Cat or the venerable Forth.
I believe the purpose is to be succinct and to express pipelined computations as a composition of functions rather than thinking of threading arguments through. Simple example (in F#) - given:
let sum = List.sum
let sqr = List.map (fun x -> x * x)
Used like:
> sum [3;4;5]
12
> sqr [3;4;5]
[9;16;25]
We could express a "sum of squares" function as:
let sumsqr x = sum (sqr x)
And use like:
> sumsqr [3;4;5]
50
Or we could define it by piping x through:
let sumsqr x = x |> sqr |> sum
Written this way, it's obvious that x is being passed in only to be "threaded" through a sequence of functions. Direct composition looks much nicer:
let sumsqr = sqr >> sum
This is more concise and it's a different way of thinking of what we're doing; composing functions rather than imagining the process of arguments flowing through. We're not describing how sumsqr works. We're describing what it is.
PS: An interesting way to get your head around composition is to try programming in a concatenative language such as Forth, Joy, Factor, etc. These can be thought of as being nothing but composition (Forth : sumsqr sqr sum ;) in which the space between words is the composition operator.
PPS: Perhaps others could comment on the performance differences. It seems to me that composition may reduce GC pressure by making it more obvious to the compiler that there is no need to produce intermediate values as in pipelining; helping make the so-called "deforestation" problem more tractable.
While I'm attracted to the point-free concept and have used it for some things, and agree with all the positives mentioned before, I found the following negatives (some are detailed above):
The shorter notation reduces redundancy, but in a heavily structured composition (ramda.js style, or point-free in Haskell, or whatever concatenative language) reading the code is more complex than linearly scanning through a bunch of const bindings and using a symbol highlighter to see which binding goes into which other downstream calculation. Besides the tree-versus-linear structure, the loss of descriptive symbol names makes the functions hard to grasp intuitively. Of course both the tree structure and the loss of named bindings also have a lot of positives: for example, functions will feel more general - not bound to some application domain via the chosen symbol names - and the tree structure is semantically present even if bindings are laid out, and can be comprehended sequentially (Lisp let/let* style).
Point-free is simplest when just piping through or composing a series of functions, since this also results in a linear structure that we humans find easy to follow. However, threading some interim calculation through multiple recipients is tedious. All kinds of wrapping into tuples, lensing and other painstaking mechanisms go into just making some calculation accessible that would otherwise just be multiple uses of some value binding. Of course the repeated part can be extracted out as a separate function, and maybe that's a good idea anyway, but there are also arguments for some non-short functions; and even if it is extracted, its arguments will have to be somehow threaded through both applications, and then there may be a need to memoize the function so the calculation isn't actually repeated. One will use a lot of converge, lens, memoize, useWith etc.
JavaScript-specific: harder to casually debug. With a linear flow of let bindings, it's easy to add a breakpoint wherever. With the point-free style, even if a breakpoint is somehow added, the value flow is hard to read; e.g. you can't just query or hover over some variable in the dev console. Also, as point-free is not native in JS, library functions of ramda.js or similar will obscure the stack quite a bit, especially with the obligatory currying.
Code brittleness, especially in systems of nontrivial size and in production. If a new piece of requirement comes in, then the above disadvantages come into play (e.g. it's harder to read the code for the next maintainer, who may be yourself a few weeks down the line, and harder to trace the dataflow for inspection). But most importantly, even a seemingly small and innocent new requirement can necessitate a whole different structuring of the code. It may be argued that that's a good thing, in that it will be a crystal-clear representation of the new thing, but rewriting large swaths of point-free code is very time consuming, and we haven't even mentioned testing. So it feels that the looser, less structured, lexical-assignment-based coding can be more quickly repurposed. Especially if the coding is exploratory, in the domain of human data with weird conventions (time etc.) that can rarely be captured 100% accurately, and there may always be an upcoming request to handle something more accurately or closer to the customer's needs, whichever method leads to faster pivoting matters a lot.
On the point-free variant, the concatenative programming languages, I have to add:
I had a little experience with Joy. Joy is a very simple and beautiful concept based on lists. When converting a problem into a Joy function, you have to split your brain into one part for the stack-plumbing work and another part for the solution in Joy syntax. The stack is always handled from the back. Since composition is built into Joy, there is no computing time spent on a composition combinator.

F# - Should I learn with or without #light?

I'm in the process of learning F# and am enjoying it so far. Almost all of the examples online use the lightweight syntax (#light); however, most also include a comment noting that it is turned on for that example.
Is it better to learn F# with #light enabled or disabled? I'm planning on eventually learning it with #light turned off, but I'm curious whether it would be better to do that from the beginning, or to work on applying it after I know the core language better.
I'd definitely prefer learning F# with the #light syntax. The non-light version is sometimes useful for understanding some tricks of the F# syntax, but the #light syntax gives you a much more pleasant experience.
For example - using #light
let add a b c =
    let ab = a + b
    printfn "%d" ab
    c - ab
Using non-light you can write the same thing like this:
let add a b c =
    let ab = a + b in  // 'in' specifies where the binding (value 'ab') is valid
    printfn "%d" ab;   // ';' is the operator for sequencing expressions
    c - ab;;           // ';;' marks the end of a function declaration
This for example shows that you cannot write something like:
let doNothing a b =
    let sum = a + b in
There is an 'in' keyword at the end, but the function doesn't have any body (because no expression follows 'in'). In cases like this the non-light syntax is sometimes useful for understanding what's going on... But as you can see, the #light code is a lot simpler.
The "#light" will probably become the default in a future release of the language, so I would learn it that way. I think it's rare for anyone to use the heavier syntax except for OCaml-compatibility (either when cross-compiling, or because the human sitting at the keyboard knows OCaml and is making a smoother transition to F#).
Because I learned F# from an OCaml book (and I use an OCaml mode for Emacs to edit F# code), I prefer to use the "heavy" syntax. I have worked with #light code, and of course most of the F# examples are written using the light syntax so having some general familiarity is useful. That said, it's quite a bit easier to switch from heavy to light than the other way around, so it's certainly not a bad idea to learn it using the heavy syntax.
I have come across the occasional annoying bug with heavy syntax being treated as a second class citizen (combine was broken for computation expressions a couple releases back), but these are pretty rare. Generally speaking, I don't think the differences are very significant and I need to look close to determine which syntax is being used when looking at code in isolation. YMMV.
If I remember correctly, the book "Expert F#" mentions that #light will be the default when F# ships and that the non-light syntax is intended mainly for compatibility.

Do concepts like Map and Reduce apply to all Functional Programming Languages?

I have just started delving into the world of functional programming.
A lot of OOP (Object Oriented Programming) concepts such as inheritance and polymorphism apply to most modern OO languages like C#, Java and VB.NET.
But what about concepts such as Map, Reduce, Tuples and Sets: do they apply to all FP (Functional Programming) languages?
I have just started with F#, but do the aforementioned concepts apply to other FP languages like Haskell, Nemerle, Lisp, etc.?
You bet. The desirable thing about functional programming is that the mathematical concepts you describe are more naturally expressed in an FP language.
It's a bit of tough going, but John Backus' Turing Award paper, in which he described functional (or "applicative") programming, is a good read. The Wikipedia article is good too.
Yes; higher-order functions, algebraic data types, folds/catamorphisms, etc are common to almost all functional languages (though they sometimes go by slightly different names in each language).
Functional tools apply to all programming, not just to languages that support them explicitly. For example, Python has built-in map and reduce functions that do exactly what you expect, apart from out-of-order evaluation; you'll need something like the multiprocessing module to get really clever.
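For instance, a quick Python sketch of map/reduce/filter (note that in Python 3, reduce lives in functools rather than being a builtin):

from functools import reduce  # a builtin in Python 2, functools.reduce in Python 3

numbers = [3, 4, 5]

squares = list(map(lambda x: x * x, numbers))           # [9, 16, 25]
total = reduce(lambda acc, x: acc + x, squares, 0)      # 50
odds = list(filter(lambda x: x % 2 == 1, numbers))      # [3, 5]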
Even if the language doesn't provide the exact primitives, most modern languages still make it possible to get the desired effect with a bit more work. This is similar to the way a class-like concept can be coded in pure C.
I would interpret what you're asking as, "Are higher-order functions (map, reduce, filter, ...) and immutable data structures (tuples, cons lists, records, maps, sets, ...) common across FP languages?" and I would say, absolutely yes.
Like you say, OOP has well known pillars (encapsulation, inheritance, polymorphism). The "pillars" of functional programming I'd say are 1) Using functions as first-class values and 2) Expressing yourself without side effects.
You'll likely find common tools to apply these ideas across various FP languages (F# is an excellent choice, BTW!) and you'll see them finding their way into more mainstream languages, maybe in a less recognizable form (e.g. LINQ's Select = map, Aggregate = reduce/fold, Where = filter; C# has lightweight lambda syntax, System.Tuple, etc.).
As an aside, the thing that seems to be generally missing from non-explicitly-FP languages is good immutable data structures and syntax support for them (not merely a library) which makes it hard to stick to pillar #2 in those languages. F# lists, records, tuples, etc. all are good examples of great language and library combined support for this.
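To make those two pillars concrete, here is a tiny F# sketch (the Person record and the values are made up for illustration), using only first-class functions and immutable data:

// Immutable record plus higher-order functions; nothing is mutated.
type Person = { Name : string; Age : int }

let people = [ { Name = "Ann"; Age = 31 }; { Name = "Bo"; Age = 19 } ]

let adults   = people |> List.filter (fun p -> p.Age >= 21)       // filter / LINQ Where
let names    = people |> List.map    (fun p -> p.Name)            // map / LINQ Select
let totalAge = people |> List.fold   (fun acc p -> acc + p.Age) 0 // fold / LINQ Aggregate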
If you really want to jump into the deep end and understand why these concepts are not just conventional but, ahem, foundational, check out the paper "Functional programming with bananas, lenses, envelopes and barbed wire".
They apply to all languages that contain data types that can be "mapped" and "reduced", e.g. maps, arrays/vectors, or lists.
In a "pure lambda calculus" language, where every data structure is defined via function application, you can of course apply functions in parallel (i.e., in a call fn(expr1, expr2), you can evaluate expr1 and expr2 in parallel), but that isn't really what map/reduce is about.
